* [PATCH v2 0/3] btrfs: scrub: big renaming to address the page and sector difference
@ 2022-03-11  7:34 Qu Wenruo
  2022-03-11  7:34 ` [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev Qu Wenruo
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Qu Wenruo @ 2022-03-11  7:34 UTC (permalink / raw)
  To: linux-btrfs

This patchset can be cherry-picked from my github repo:
https://github.com/adam900710/linux/tree/refactor_scrub

They are the first 3 patches after misc-next.

For a long time btrfs did not support sectorsize < PAGE_SIZE, thus a
lot of the old code assumes one page == one sector, not only in its
behavior but also in its naming.

This is no longer true after v5.16 since we have subpage support.

One of the worst locations is scrub: we have tons of things named like
scrub_page, scrub_block::pagev and scrub_bio::pagev.

Even though scrub for subpage is now supported, the naming has not been
touched yet.

This patchset does the renaming first, providing the basis for later
subpage scrub enhancements.

This patchset should not bring any behavior change.

Changelog:
v2:
- Rebased directly onto misc-next, before the scrub entrance refactor,
  as this patchset is safer than the entrance refactor.

  Minor conflicts due to scrub_remap() renaming.

Qu Wenruo (3):
  btrfs: scrub: rename members related to scrub_block::pagev
  btrfs: scrub: rename scrub_page to scrub_sector
  btrfs: scrub: rename scrub_bio::pagev and related members

 fs/btrfs/scrub.c | 688 +++++++++++++++++++++++------------------------
 1 file changed, 344 insertions(+), 344 deletions(-)

-- 
2.35.1



* [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev
  2022-03-11  7:34 [PATCH v2 0/3] btrfs: scrub: big renaming to address the page and sector difference Qu Wenruo
@ 2022-03-11  7:34 ` Qu Wenruo
  2022-03-11 17:49   ` David Sterba
  2022-03-11  7:34 ` [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector Qu Wenruo
  2022-03-11  7:34 ` [PATCH v2 3/3] btrfs: scrub: rename scrub_bio::pagev and related members Qu Wenruo
  2 siblings, 1 reply; 8+ messages in thread
From: Qu Wenruo @ 2022-03-11  7:34 UTC (permalink / raw)
  To: linux-btrfs

The following will be renamed in this patch:

- scrub_block::pagev -> sectorv

- scrub_block::page_count -> sector_count

- SCRUB_MAX_PAGES_PER_BLOCK -> SCRUB_MAX_SECTORS_PER_BLOCK

- page_num -> sector_num, for iterating scrub_block::sectorv

For now scrub_page itself is not renamed, as the current changeset is
already large enough.

The rename of scrub_page will come in a separate patch.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 220 +++++++++++++++++++++++------------------------
 1 file changed, 110 insertions(+), 110 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 11089568b287..fd67e1acdba6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -52,7 +52,7 @@ struct scrub_ctx;
  * The following value times PAGE_SIZE needs to be large enough to match the
  * largest node/leaf/sector size that shall be supported.
  */
-#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
+#define SCRUB_MAX_SECTORS_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
 
 struct scrub_recover {
 	refcount_t		refs;
@@ -94,8 +94,8 @@ struct scrub_bio {
 };
 
 struct scrub_block {
-	struct scrub_page	*pagev[SCRUB_MAX_PAGES_PER_BLOCK];
-	int			page_count;
+	struct scrub_page	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
+	int			sector_count;
 	atomic_t		outstanding_pages;
 	refcount_t		refs; /* free mem on transition to zero */
 	struct scrub_ctx	*sctx;
@@ -728,16 +728,16 @@ static void scrub_print_warning(const char *errstr, struct scrub_block *sblock)
 	u8 ref_level = 0;
 	int ret;
 
-	WARN_ON(sblock->page_count < 1);
-	dev = sblock->pagev[0]->dev;
+	WARN_ON(sblock->sector_count < 1);
+	dev = sblock->sectorv[0]->dev;
 	fs_info = sblock->sctx->fs_info;
 
 	path = btrfs_alloc_path();
 	if (!path)
 		return;
 
-	swarn.physical = sblock->pagev[0]->physical;
-	swarn.logical = sblock->pagev[0]->logical;
+	swarn.physical = sblock->sectorv[0]->physical;
+	swarn.logical = sblock->sectorv[0]->logical;
 	swarn.errstr = errstr;
 	swarn.dev = NULL;
 
@@ -817,16 +817,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	struct scrub_block *sblock_bad;
 	int ret;
 	int mirror_index;
-	int page_num;
+	int sector_num;
 	int success;
 	bool full_stripe_locked;
 	unsigned int nofs_flag;
 	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
-	BUG_ON(sblock_to_check->page_count < 1);
+	BUG_ON(sblock_to_check->sector_count < 1);
 	fs_info = sctx->fs_info;
-	if (sblock_to_check->pagev[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
+	if (sblock_to_check->sectorv[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
 		/*
 		 * if we find an error in a super block, we just report it.
 		 * They will get written with the next transaction commit
@@ -837,13 +837,13 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		spin_unlock(&sctx->stat_lock);
 		return 0;
 	}
-	logical = sblock_to_check->pagev[0]->logical;
-	BUG_ON(sblock_to_check->pagev[0]->mirror_num < 1);
-	failed_mirror_index = sblock_to_check->pagev[0]->mirror_num - 1;
-	is_metadata = !(sblock_to_check->pagev[0]->flags &
+	logical = sblock_to_check->sectorv[0]->logical;
+	BUG_ON(sblock_to_check->sectorv[0]->mirror_num < 1);
+	failed_mirror_index = sblock_to_check->sectorv[0]->mirror_num - 1;
+	is_metadata = !(sblock_to_check->sectorv[0]->flags &
 			BTRFS_EXTENT_FLAG_DATA);
-	have_csum = sblock_to_check->pagev[0]->have_csum;
-	dev = sblock_to_check->pagev[0]->dev;
+	have_csum = sblock_to_check->sectorv[0]->have_csum;
+	dev = sblock_to_check->sectorv[0]->dev;
 
 	if (!sctx->is_dev_replace && btrfs_repair_one_zone(fs_info, logical))
 		return 0;
@@ -1011,25 +1011,25 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			continue;
 
 		/* raid56's mirror can be more than BTRFS_MAX_MIRRORS */
-		if (!scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
+		if (!scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
 			if (mirror_index >= BTRFS_MAX_MIRRORS)
 				break;
-			if (!sblocks_for_recheck[mirror_index].page_count)
+			if (!sblocks_for_recheck[mirror_index].sector_count)
 				break;
 
 			sblock_other = sblocks_for_recheck + mirror_index;
 		} else {
-			struct scrub_recover *r = sblock_bad->pagev[0]->recover;
+			struct scrub_recover *r = sblock_bad->sectorv[0]->recover;
 			int max_allowed = r->bioc->num_stripes - r->bioc->num_tgtdevs;
 
 			if (mirror_index >= max_allowed)
 				break;
-			if (!sblocks_for_recheck[1].page_count)
+			if (!sblocks_for_recheck[1].sector_count)
 				break;
 
 			ASSERT(failed_mirror_index == 0);
 			sblock_other = sblocks_for_recheck + 1;
-			sblock_other->pagev[0]->mirror_num = 1 + mirror_index;
+			sblock_other->sectorv[0]->mirror_num = 1 + mirror_index;
 		}
 
 		/* build and submit the bios, check checksums */
@@ -1078,16 +1078,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	 * area are unreadable.
 	 */
 	success = 1;
-	for (page_num = 0; page_num < sblock_bad->page_count;
-	     page_num++) {
-		struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
+	for (sector_num = 0; sector_num < sblock_bad->sector_count;
+	     sector_num++) {
+		struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
 		struct scrub_block *sblock_other = NULL;
 
 		/* skip no-io-error page in scrub */
 		if (!spage_bad->io_error && !sctx->is_dev_replace)
 			continue;
 
-		if (scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
+		if (scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
 			/*
 			 * In case of dev replace, if raid56 rebuild process
 			 * didn't work out correct data, then copy the content
@@ -1100,10 +1100,10 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			/* try to find no-io-error page in mirrors */
 			for (mirror_index = 0;
 			     mirror_index < BTRFS_MAX_MIRRORS &&
-			     sblocks_for_recheck[mirror_index].page_count > 0;
+			     sblocks_for_recheck[mirror_index].sector_count > 0;
 			     mirror_index++) {
 				if (!sblocks_for_recheck[mirror_index].
-				    pagev[page_num]->io_error) {
+				    sectorv[sector_num]->io_error) {
 					sblock_other = sblocks_for_recheck +
 						       mirror_index;
 					break;
@@ -1125,7 +1125,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 				sblock_other = sblock_bad;
 
 			if (scrub_write_page_to_dev_replace(sblock_other,
-							    page_num) != 0) {
+							    sector_num) != 0) {
 				atomic64_inc(
 					&fs_info->dev_replace.num_write_errors);
 				success = 0;
@@ -1133,7 +1133,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		} else if (sblock_other) {
 			ret = scrub_repair_page_from_good_copy(sblock_bad,
 							       sblock_other,
-							       page_num, 0);
+							       sector_num, 0);
 			if (0 == ret)
 				spage_bad->io_error = 0;
 			else
@@ -1186,18 +1186,18 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			struct scrub_block *sblock = sblocks_for_recheck +
 						     mirror_index;
 			struct scrub_recover *recover;
-			int page_index;
+			int sector_index;
 
-			for (page_index = 0; page_index < sblock->page_count;
-			     page_index++) {
-				sblock->pagev[page_index]->sblock = NULL;
-				recover = sblock->pagev[page_index]->recover;
+			for (sector_index = 0; sector_index < sblock->sector_count;
+			     sector_index++) {
+				sblock->sectorv[sector_index]->sblock = NULL;
+				recover = sblock->sectorv[sector_index]->recover;
 				if (recover) {
 					scrub_put_recover(fs_info, recover);
-					sblock->pagev[page_index]->recover =
+					sblock->sectorv[sector_index]->recover =
 									NULL;
 				}
-				scrub_page_put(sblock->pagev[page_index]);
+				scrub_page_put(sblock->sectorv[sector_index]);
 			}
 		}
 		kfree(sblocks_for_recheck);
@@ -1255,18 +1255,18 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 {
 	struct scrub_ctx *sctx = original_sblock->sctx;
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
-	u64 length = original_sblock->page_count * fs_info->sectorsize;
-	u64 logical = original_sblock->pagev[0]->logical;
-	u64 generation = original_sblock->pagev[0]->generation;
-	u64 flags = original_sblock->pagev[0]->flags;
-	u64 have_csum = original_sblock->pagev[0]->have_csum;
+	u64 length = original_sblock->sector_count * fs_info->sectorsize;
+	u64 logical = original_sblock->sectorv[0]->logical;
+	u64 generation = original_sblock->sectorv[0]->generation;
+	u64 flags = original_sblock->sectorv[0]->flags;
+	u64 have_csum = original_sblock->sectorv[0]->have_csum;
 	struct scrub_recover *recover;
 	struct btrfs_io_context *bioc;
 	u64 sublen;
 	u64 mapped_length;
 	u64 stripe_offset;
 	int stripe_index;
-	int page_index = 0;
+	int sector_index = 0;
 	int mirror_index;
 	int nmirrors;
 	int ret;
@@ -1306,7 +1306,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		recover->bioc = bioc;
 		recover->map_length = mapped_length;
 
-		ASSERT(page_index < SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(sector_index < SCRUB_MAX_SECTORS_PER_BLOCK);
 
 		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
 
@@ -1328,7 +1328,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 				return -ENOMEM;
 			}
 			scrub_page_get(spage);
-			sblock->pagev[page_index] = spage;
+			sblock->sectorv[sector_index] = spage;
 			spage->sblock = sblock;
 			spage->flags = flags;
 			spage->generation = generation;
@@ -1336,7 +1336,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 			spage->have_csum = have_csum;
 			if (have_csum)
 				memcpy(spage->csum,
-				       original_sblock->pagev[0]->csum,
+				       original_sblock->sectorv[0]->csum,
 				       sctx->fs_info->csum_size);
 
 			scrub_stripe_index_and_offset(logical,
@@ -1352,13 +1352,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 					 stripe_offset;
 			spage->dev = bioc->stripes[stripe_index].dev;
 
-			BUG_ON(page_index >= original_sblock->page_count);
+			BUG_ON(sector_index >= original_sblock->sector_count);
 			spage->physical_for_dev_replace =
-				original_sblock->pagev[page_index]->
+				original_sblock->sectorv[sector_index]->
 				physical_for_dev_replace;
 			/* for missing devices, dev->bdev is NULL */
 			spage->mirror_num = mirror_index + 1;
-			sblock->page_count++;
+			sblock->sector_count++;
 			spage->page = alloc_page(GFP_NOFS);
 			if (!spage->page)
 				goto leave_nomem;
@@ -1369,7 +1369,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		scrub_put_recover(fs_info, recover);
 		length -= sublen;
 		logical += sublen;
-		page_index++;
+		sector_index++;
 	}
 
 	return 0;
@@ -1392,7 +1392,7 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 	bio->bi_private = &done;
 	bio->bi_end_io = scrub_bio_wait_endio;
 
-	mirror_num = spage->sblock->pagev[0]->mirror_num;
+	mirror_num = spage->sblock->sectorv[0]->mirror_num;
 	ret = raid56_parity_recover(bio, spage->recover->bioc,
 				    spage->recover->map_length,
 				    mirror_num, 0);
@@ -1406,9 +1406,9 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 					  struct scrub_block *sblock)
 {
-	struct scrub_page *first_page = sblock->pagev[0];
+	struct scrub_page *first_page = sblock->sectorv[0];
 	struct bio *bio;
-	int page_num;
+	int sector_num;
 
 	/* All pages in sblock belong to the same stripe on the same device. */
 	ASSERT(first_page->dev);
@@ -1418,8 +1418,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 	bio = btrfs_bio_alloc(BIO_MAX_VECS);
 	bio_set_dev(bio, first_page->dev->bdev);
 
-	for (page_num = 0; page_num < sblock->page_count; page_num++) {
-		struct scrub_page *spage = sblock->pagev[page_num];
+	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
+		struct scrub_page *spage = sblock->sectorv[sector_num];
 
 		WARN_ON(!spage->page);
 		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
@@ -1436,8 +1436,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 
 	return;
 out:
-	for (page_num = 0; page_num < sblock->page_count; page_num++)
-		sblock->pagev[page_num]->io_error = 1;
+	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++)
+		sblock->sectorv[sector_num]->io_error = 1;
 
 	sblock->no_io_error_seen = 0;
 }
@@ -1453,17 +1453,17 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 				struct scrub_block *sblock,
 				int retry_failed_mirror)
 {
-	int page_num;
+	int sector_num;
 
 	sblock->no_io_error_seen = 1;
 
 	/* short cut for raid56 */
-	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->pagev[0]))
+	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->sectorv[0]))
 		return scrub_recheck_block_on_raid56(fs_info, sblock);
 
-	for (page_num = 0; page_num < sblock->page_count; page_num++) {
+	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
 		struct bio *bio;
-		struct scrub_page *spage = sblock->pagev[page_num];
+		struct scrub_page *spage = sblock->sectorv[sector_num];
 
 		if (spage->dev->bdev == NULL) {
 			spage->io_error = 1;
@@ -1507,7 +1507,7 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
 	sblock->checksum_error = 0;
 	sblock->generation_error = 0;
 
-	if (sblock->pagev[0]->flags & BTRFS_EXTENT_FLAG_DATA)
+	if (sblock->sectorv[0]->flags & BTRFS_EXTENT_FLAG_DATA)
 		scrub_checksum_data(sblock);
 	else
 		scrub_checksum_tree_block(sblock);
@@ -1516,15 +1516,15 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
 static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
 					     struct scrub_block *sblock_good)
 {
-	int page_num;
+	int sector_num;
 	int ret = 0;
 
-	for (page_num = 0; page_num < sblock_bad->page_count; page_num++) {
+	for (sector_num = 0; sector_num < sblock_bad->sector_count; sector_num++) {
 		int ret_sub;
 
 		ret_sub = scrub_repair_page_from_good_copy(sblock_bad,
 							   sblock_good,
-							   page_num, 1);
+							   sector_num, 1);
 		if (ret_sub)
 			ret = ret_sub;
 	}
@@ -1534,10 +1534,10 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
 
 static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
 					    struct scrub_block *sblock_good,
-					    int page_num, int force_write)
+					    int sector_num, int force_write)
 {
-	struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
-	struct scrub_page *spage_good = sblock_good->pagev[page_num];
+	struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
+	struct scrub_page *spage_good = sblock_good->sectorv[sector_num];
 	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
 	const u32 sectorsize = fs_info->sectorsize;
 
@@ -1581,7 +1581,7 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
 static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
 {
 	struct btrfs_fs_info *fs_info = sblock->sctx->fs_info;
-	int page_num;
+	int sector_num;
 
 	/*
 	 * This block is used for the check of the parity on the source device,
@@ -1590,19 +1590,19 @@ static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
 	if (sblock->sparity)
 		return;
 
-	for (page_num = 0; page_num < sblock->page_count; page_num++) {
+	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
 		int ret;
 
-		ret = scrub_write_page_to_dev_replace(sblock, page_num);
+		ret = scrub_write_page_to_dev_replace(sblock, sector_num);
 		if (ret)
 			atomic64_inc(&fs_info->dev_replace.num_write_errors);
 	}
 }
 
 static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
-					   int page_num)
+					   int sector_num)
 {
-	struct scrub_page *spage = sblock->pagev[page_num];
+	struct scrub_page *spage = sblock->sectorv[sector_num];
 
 	BUG_ON(spage->page == NULL);
 	if (spage->io_error)
@@ -1786,8 +1786,8 @@ static int scrub_checksum(struct scrub_block *sblock)
 	sblock->generation_error = 0;
 	sblock->checksum_error = 0;
 
-	WARN_ON(sblock->page_count < 1);
-	flags = sblock->pagev[0]->flags;
+	WARN_ON(sblock->sector_count < 1);
+	flags = sblock->sectorv[0]->flags;
 	ret = 0;
 	if (flags & BTRFS_EXTENT_FLAG_DATA)
 		ret = scrub_checksum_data(sblock);
@@ -1812,8 +1812,8 @@ static int scrub_checksum_data(struct scrub_block *sblock)
 	struct scrub_page *spage;
 	char *kaddr;
 
-	BUG_ON(sblock->page_count < 1);
-	spage = sblock->pagev[0];
+	BUG_ON(sblock->sector_count < 1);
+	spage = sblock->sectorv[0];
 	if (!spage->have_csum)
 		return 0;
 
@@ -1852,12 +1852,12 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 	struct scrub_page *spage;
 	char *kaddr;
 
-	BUG_ON(sblock->page_count < 1);
+	BUG_ON(sblock->sector_count < 1);
 
-	/* Each member in pagev is just one block, not a full page */
-	ASSERT(sblock->page_count == num_sectors);
+	/* Each member in pagev is just one sector, not a full page */
+	ASSERT(sblock->sector_count == num_sectors);
 
-	spage = sblock->pagev[0];
+	spage = sblock->sectorv[0];
 	kaddr = page_address(spage->page);
 	h = (struct btrfs_header *)kaddr;
 	memcpy(on_disk_csum, h->csum, sctx->fs_info->csum_size);
@@ -1888,7 +1888,7 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 			    sectorsize - BTRFS_CSUM_SIZE);
 
 	for (i = 1; i < num_sectors; i++) {
-		kaddr = page_address(sblock->pagev[i]->page);
+		kaddr = page_address(sblock->sectorv[i]->page);
 		crypto_shash_update(shash, kaddr, sectorsize);
 	}
 
@@ -1911,8 +1911,8 @@ static int scrub_checksum_super(struct scrub_block *sblock)
 	int fail_gen = 0;
 	int fail_cor = 0;
 
-	BUG_ON(sblock->page_count < 1);
-	spage = sblock->pagev[0];
+	BUG_ON(sblock->sector_count < 1);
+	spage = sblock->sectorv[0];
 	kaddr = page_address(spage->page);
 	s = (struct btrfs_super_block *)kaddr;
 
@@ -1966,8 +1966,8 @@ static void scrub_block_put(struct scrub_block *sblock)
 		if (sblock->sparity)
 			scrub_parity_put(sblock->sparity);
 
-		for (i = 0; i < sblock->page_count; i++)
-			scrub_page_put(sblock->pagev[i]);
+		for (i = 0; i < sblock->sector_count; i++)
+			scrub_page_put(sblock->sectorv[i]);
 		kfree(sblock);
 	}
 }
@@ -2155,8 +2155,8 @@ static void scrub_missing_raid56_worker(struct btrfs_work *work)
 	u64 logical;
 	struct btrfs_device *dev;
 
-	logical = sblock->pagev[0]->logical;
-	dev = sblock->pagev[0]->dev;
+	logical = sblock->sectorv[0]->logical;
+	dev = sblock->sectorv[0]->dev;
 
 	if (sblock->no_io_error_seen)
 		scrub_recheck_block_checksum(sblock);
@@ -2193,8 +2193,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 {
 	struct scrub_ctx *sctx = sblock->sctx;
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
-	u64 length = sblock->page_count * PAGE_SIZE;
-	u64 logical = sblock->pagev[0]->logical;
+	u64 length = sblock->sector_count * fs_info->sectorsize;
+	u64 logical = sblock->sectorv[0]->logical;
 	struct btrfs_io_context *bioc = NULL;
 	struct bio *bio;
 	struct btrfs_raid_bio *rbio;
@@ -2227,8 +2227,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 	if (!rbio)
 		goto rbio_out;
 
-	for (i = 0; i < sblock->page_count; i++) {
-		struct scrub_page *spage = sblock->pagev[i];
+	for (i = 0; i < sblock->sector_count; i++) {
+		struct scrub_page *spage = sblock->sectorv[i];
 
 		raid56_add_scrub_pages(rbio, spage->page, spage->logical);
 	}
@@ -2290,9 +2290,9 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
 		scrub_page_get(spage);
-		sblock->pagev[index] = spage;
+		sblock->sectorv[index] = spage;
 		spage->sblock = sblock;
 		spage->dev = dev;
 		spage->flags = flags;
@@ -2307,7 +2307,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 		} else {
 			spage->have_csum = 0;
 		}
-		sblock->page_count++;
+		sblock->sector_count++;
 		spage->page = alloc_page(GFP_KERNEL);
 		if (!spage->page)
 			goto leave_nomem;
@@ -2317,7 +2317,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 		physical_for_dev_replace += l;
 	}
 
-	WARN_ON(sblock->page_count == 0);
+	WARN_ON(sblock->sector_count == 0);
 	if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state)) {
 		/*
 		 * This case should only be hit for RAID 5/6 device replace. See
@@ -2325,8 +2325,8 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 		 */
 		scrub_missing_raid56_pages(sblock);
 	} else {
-		for (index = 0; index < sblock->page_count; index++) {
-			struct scrub_page *spage = sblock->pagev[index];
+		for (index = 0; index < sblock->sector_count; index++) {
+			struct scrub_page *spage = sblock->sectorv[index];
 			int ret;
 
 			ret = scrub_add_page_to_rd_bio(sctx, spage);
@@ -2456,8 +2456,8 @@ static void scrub_block_complete(struct scrub_block *sblock)
 	}
 
 	if (sblock->sparity && corrupted && !sblock->data_corrected) {
-		u64 start = sblock->pagev[0]->logical;
-		u64 end = sblock->pagev[sblock->page_count - 1]->logical +
+		u64 start = sblock->sectorv[0]->logical;
+		u64 end = sblock->sectorv[sblock->sector_count - 1]->logical +
 			  sblock->sctx->fs_info->sectorsize;
 
 		ASSERT(end - start <= U32_MAX);
@@ -2624,10 +2624,10 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
 		/* For scrub block */
 		scrub_page_get(spage);
-		sblock->pagev[index] = spage;
+		sblock->sectorv[index] = spage;
 		/* For scrub parity */
 		scrub_page_get(spage);
 		list_add_tail(&spage->list, &sparity->spages);
@@ -2644,7 +2644,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		} else {
 			spage->have_csum = 0;
 		}
-		sblock->page_count++;
+		sblock->sector_count++;
 		spage->page = alloc_page(GFP_KERNEL);
 		if (!spage->page)
 			goto leave_nomem;
@@ -2656,9 +2656,9 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		physical += sectorsize;
 	}
 
-	WARN_ON(sblock->page_count == 0);
-	for (index = 0; index < sblock->page_count; index++) {
-		struct scrub_page *spage = sblock->pagev[index];
+	WARN_ON(sblock->sector_count == 0);
+	for (index = 0; index < sblock->sector_count; index++) {
+		struct scrub_page *spage = sblock->sectorv[index];
 		int ret;
 
 		ret = scrub_add_page_to_rd_bio(sctx, spage);
@@ -4058,18 +4058,18 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
 	}
 
 	if (fs_info->nodesize >
-	    PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK ||
-	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK) {
+	    SCRUB_MAX_SECTORS_PER_BLOCK * fs_info->sectorsize ||
+	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_SECTORS_PER_BLOCK) {
 		/*
 		 * would exhaust the array bounds of pagev member in
 		 * struct scrub_block
 		 */
 		btrfs_err(fs_info,
-			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_PAGES_PER_BLOCK (%d <= %d && %d <= %d) fails",
+			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_SECTORS_PER_BLOCK (%d <= %d && %d <= %d) fails",
 		       fs_info->nodesize,
-		       SCRUB_MAX_PAGES_PER_BLOCK,
+		       SCRUB_MAX_SECTORS_PER_BLOCK,
 		       fs_info->sectorsize,
-		       SCRUB_MAX_PAGES_PER_BLOCK);
+		       SCRUB_MAX_SECTORS_PER_BLOCK);
 		return -EINVAL;
 	}
 
-- 
2.35.1



* [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector
  2022-03-11  7:34 [PATCH v2 0/3] btrfs: scrub: big renaming to address the page and sector difference Qu Wenruo
  2022-03-11  7:34 ` [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev Qu Wenruo
@ 2022-03-11  7:34 ` Qu Wenruo
  2022-03-11 18:01   ` David Sterba
  2022-03-11  7:34 ` [PATCH v2 3/3] btrfs: scrub: rename scrub_bio::pagev and related members Qu Wenruo
  2 siblings, 1 reply; 8+ messages in thread
From: Qu Wenruo @ 2022-03-11  7:34 UTC (permalink / raw)
  To: linux-btrfs

Since the introduction of subpage support in scrub, scrub_page in fact
represents just one sector.

Thus the name scrub_page is no longer accurate; rename it to
scrub_sector.

This also renames related short names, such as spage -> ssector, and
other functions that take a scrub_page as argument.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 460 +++++++++++++++++++++++------------------------
 1 file changed, 230 insertions(+), 230 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index fd67e1acdba6..c9198c9af4c4 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -60,7 +60,7 @@ struct scrub_recover {
 	u64			map_length;
 };
 
-struct scrub_page {
+struct scrub_sector {
 	struct scrub_block	*sblock;
 	struct page		*page;
 	struct btrfs_device	*dev;
@@ -87,16 +87,16 @@ struct scrub_bio {
 	blk_status_t		status;
 	u64			logical;
 	u64			physical;
-	struct scrub_page	*pagev[SCRUB_PAGES_PER_BIO];
+	struct scrub_sector	*pagev[SCRUB_PAGES_PER_BIO];
 	int			page_count;
 	int			next_free;
 	struct btrfs_work	work;
 };
 
 struct scrub_block {
-	struct scrub_page	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
+	struct scrub_sector	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
 	int			sector_count;
-	atomic_t		outstanding_pages;
+	atomic_t		outstanding_sectors;
 	refcount_t		refs; /* free mem on transition to zero */
 	struct scrub_ctx	*sctx;
 	struct scrub_parity	*sparity;
@@ -129,7 +129,7 @@ struct scrub_parity {
 
 	refcount_t		refs;
 
-	struct list_head	spages;
+	struct list_head	ssectors;
 
 	/* Work of parity check and repair */
 	struct btrfs_work	work;
@@ -212,24 +212,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 static void scrub_recheck_block_checksum(struct scrub_block *sblock);
 static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
 					     struct scrub_block *sblock_good);
-static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
+static int scrub_repair_sector_from_good_copy(struct scrub_block *sblock_bad,
 					    struct scrub_block *sblock_good,
-					    int page_num, int force_write);
+					    int sector_num, int force_write);
 static void scrub_write_block_to_dev_replace(struct scrub_block *sblock);
-static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
-					   int page_num);
+static int scrub_write_sector_to_dev_replace(struct scrub_block *sblock,
+					     int sector_num);
 static int scrub_checksum_data(struct scrub_block *sblock);
 static int scrub_checksum_tree_block(struct scrub_block *sblock);
 static int scrub_checksum_super(struct scrub_block *sblock);
 static void scrub_block_put(struct scrub_block *sblock);
-static void scrub_page_get(struct scrub_page *spage);
-static void scrub_page_put(struct scrub_page *spage);
+static void scrub_sector_get(struct scrub_sector *ssector);
+static void scrub_sector_put(struct scrub_sector *ssector);
 static void scrub_parity_get(struct scrub_parity *sparity);
 static void scrub_parity_put(struct scrub_parity *sparity);
-static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
-		       u64 physical, struct btrfs_device *dev, u64 flags,
-		       u64 gen, int mirror_num, u8 *csum,
-		       u64 physical_for_dev_replace);
+static int scrub_sectors(struct scrub_ctx *sctx, u64 logical, u32 len,
+			 u64 physical, struct btrfs_device *dev, u64 flags,
+			 u64 gen, int mirror_num, u8 *csum,
+			 u64 physical_for_dev_replace);
 static void scrub_bio_end_io(struct bio *bio);
 static void scrub_bio_end_io_worker(struct btrfs_work *work);
 static void scrub_block_complete(struct scrub_block *sblock);
@@ -238,17 +238,17 @@ static void scrub_remap_extent(struct btrfs_fs_info *fs_info,
 			       u64 *extent_physical,
 			       struct btrfs_device **extent_dev,
 			       int *extent_mirror_num);
-static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
-				    struct scrub_page *spage);
+static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
+				      struct scrub_sector *ssector);
 static void scrub_wr_submit(struct scrub_ctx *sctx);
 static void scrub_wr_bio_end_io(struct bio *bio);
 static void scrub_wr_bio_end_io_worker(struct btrfs_work *work);
 static void scrub_put_ctx(struct scrub_ctx *sctx);
 
-static inline int scrub_is_page_on_raid56(struct scrub_page *spage)
+static inline int scrub_is_page_on_raid56(struct scrub_sector *ssector)
 {
-	return spage->recover &&
-	       (spage->recover->bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
+	return ssector->recover &&
+	       (ssector->recover->bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
 }
 
 static void scrub_pending_bio_inc(struct scrub_ctx *sctx)
@@ -798,8 +798,8 @@ static inline void scrub_put_recover(struct btrfs_fs_info *fs_info,
 
 /*
  * scrub_handle_errored_block gets called when either verification of the
- * pages failed or the bio failed to read, e.g. with EIO. In the latter
- * case, this function handles all pages in the bio, even though only one
+ * sectors failed or the bio failed to read, e.g. with EIO. In the latter
+ * case, this function handles all sectors in the bio, even though only one
  * may be bad.
  * The goal of this function is to repair the errored block by using the
  * contents of one of the mirrors.
@@ -854,7 +854,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	 * might be waiting the scrub task to pause (which needs to wait for all
 	 * the worker tasks to complete before pausing).
 	 * We do allocations in the workers through insert_full_stripe_lock()
-	 * and scrub_add_page_to_wr_bio(), which happens down the call chain of
+	 * and scrub_add_sector_to_wr_bio(), which happens down the call chain of
 	 * this function.
 	 */
 	nofs_flag = memalloc_nofs_save();
@@ -918,7 +918,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		goto out;
 	}
 
-	/* setup the context, map the logical blocks and alloc the pages */
+	/* setup the context, map the logical blocks and alloc the sectors */
 	ret = scrub_setup_recheck_block(sblock_to_check, sblocks_for_recheck);
 	if (ret) {
 		spin_lock(&sctx->stat_lock);
@@ -937,7 +937,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	if (!sblock_bad->header_error && !sblock_bad->checksum_error &&
 	    sblock_bad->no_io_error_seen) {
 		/*
-		 * the error disappeared after reading page by page, or
+		 * the error disappeared after reading sector by sector, or
 		 * the area was part of a huge bio and other parts of the
 		 * bio caused I/O errors, or the block layer merged several
 		 * read requests into one and the error is caused by a
@@ -998,10 +998,10 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	 * that is known to contain an error is rewritten. Afterwards
 	 * the block is known to be corrected.
 	 * If a mirror is found which is completely correct, and no
-	 * checksum is present, only those pages are rewritten that had
+	 * checksum is present, only those sectors are rewritten that had
 	 * an I/O error in the block to be repaired, since it cannot be
-	 * determined, which copy of the other pages is better (and it
-	 * could happen otherwise that a correct page would be
+	 * determined, which copy of the other sectors is better (and it
+	 * could happen otherwise that a correct sector would be
 	 * overwritten by a bad one).
 	 */
 	for (mirror_index = 0; ;mirror_index++) {
@@ -1080,11 +1080,11 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	success = 1;
 	for (sector_num = 0; sector_num < sblock_bad->sector_count;
 	     sector_num++) {
-		struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
+		struct scrub_sector *ssector_bad = sblock_bad->sectorv[sector_num];
 		struct scrub_block *sblock_other = NULL;
 
-		/* skip no-io-error page in scrub */
-		if (!spage_bad->io_error && !sctx->is_dev_replace)
+		/* skip no-io-error sectors in scrub */
+		if (!ssector_bad->io_error && !sctx->is_dev_replace)
 			continue;
 
 		if (scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
@@ -1096,8 +1096,8 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			 * sblock_for_recheck array to target device.
 			 */
 			sblock_other = NULL;
-		} else if (spage_bad->io_error) {
-			/* try to find no-io-error page in mirrors */
+		} else if (ssector_bad->io_error) {
+			/* try to find no-io-error sector in mirrors */
 			for (mirror_index = 0;
 			     mirror_index < BTRFS_MAX_MIRRORS &&
 			     sblocks_for_recheck[mirror_index].sector_count > 0;
@@ -1115,27 +1115,27 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 
 		if (sctx->is_dev_replace) {
 			/*
-			 * did not find a mirror to fetch the page
-			 * from. scrub_write_page_to_dev_replace()
-			 * handles this case (page->io_error), by
+			 * Did not find a mirror to fetch the sector
+			 * from. scrub_write_sector_to_dev_replace()
+			 * handles this case (sector->io_error), by
 			 * filling the block with zeros before
 			 * submitting the write request
 			 */
 			if (!sblock_other)
 				sblock_other = sblock_bad;
 
-			if (scrub_write_page_to_dev_replace(sblock_other,
-							    sector_num) != 0) {
+			if (scrub_write_sector_to_dev_replace(sblock_other,
+							      sector_num) != 0) {
 				atomic64_inc(
 					&fs_info->dev_replace.num_write_errors);
 				success = 0;
 			}
 		} else if (sblock_other) {
-			ret = scrub_repair_page_from_good_copy(sblock_bad,
-							       sblock_other,
-							       sector_num, 0);
+			ret = scrub_repair_sector_from_good_copy(sblock_bad,
+								 sblock_other,
+								 sector_num, 0);
 			if (0 == ret)
-				spage_bad->io_error = 0;
+				ssector_bad->io_error = 0;
 			else
 				success = 0;
 		}
@@ -1197,7 +1197,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 					sblock->sectorv[sector_index]->recover =
 									NULL;
 				}
-				scrub_page_put(sblock->sectorv[sector_index]);
+				scrub_sector_put(sblock->sectorv[sector_index]);
 			}
 		}
 		kfree(sblocks_for_recheck);
@@ -1272,7 +1272,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 	int ret;
 
 	/*
-	 * note: the two members refs and outstanding_pages
+	 * note: the two members refs and outstanding_sectors
 	 * are not used (and not set) in the blocks that are used for
 	 * the recheck procedure
 	 */
@@ -1313,13 +1313,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		for (mirror_index = 0; mirror_index < nmirrors;
 		     mirror_index++) {
 			struct scrub_block *sblock;
-			struct scrub_page *spage;
+			struct scrub_sector *ssector;
 
 			sblock = sblocks_for_recheck + mirror_index;
 			sblock->sctx = sctx;
 
-			spage = kzalloc(sizeof(*spage), GFP_NOFS);
-			if (!spage) {
+			ssector = kzalloc(sizeof(*ssector), GFP_NOFS);
+			if (!ssector) {
 leave_nomem:
 				spin_lock(&sctx->stat_lock);
 				sctx->stat.malloc_errors++;
@@ -1327,15 +1327,15 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 				scrub_put_recover(fs_info, recover);
 				return -ENOMEM;
 			}
-			scrub_page_get(spage);
-			sblock->sectorv[sector_index] = spage;
-			spage->sblock = sblock;
-			spage->flags = flags;
-			spage->generation = generation;
-			spage->logical = logical;
-			spage->have_csum = have_csum;
+			scrub_sector_get(ssector);
+			sblock->sectorv[sector_index] = ssector;
+			ssector->sblock = sblock;
+			ssector->flags = flags;
+			ssector->generation = generation;
+			ssector->logical = logical;
+			ssector->have_csum = have_csum;
 			if (have_csum)
-				memcpy(spage->csum,
+				memcpy(ssector->csum,
 				       original_sblock->sectorv[0]->csum,
 				       sctx->fs_info->csum_size);
 
@@ -1348,23 +1348,23 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 						      mirror_index,
 						      &stripe_index,
 						      &stripe_offset);
-			spage->physical = bioc->stripes[stripe_index].physical +
+			ssector->physical = bioc->stripes[stripe_index].physical +
 					 stripe_offset;
-			spage->dev = bioc->stripes[stripe_index].dev;
+			ssector->dev = bioc->stripes[stripe_index].dev;
 
 			BUG_ON(sector_index >= original_sblock->sector_count);
-			spage->physical_for_dev_replace =
+			ssector->physical_for_dev_replace =
 				original_sblock->sectorv[sector_index]->
 				physical_for_dev_replace;
 			/* for missing devices, dev->bdev is NULL */
-			spage->mirror_num = mirror_index + 1;
+			ssector->mirror_num = mirror_index + 1;
 			sblock->sector_count++;
-			spage->page = alloc_page(GFP_NOFS);
-			if (!spage->page)
+			ssector->page = alloc_page(GFP_NOFS);
+			if (!ssector->page)
 				goto leave_nomem;
 
 			scrub_get_recover(recover);
-			spage->recover = recover;
+			ssector->recover = recover;
 		}
 		scrub_put_recover(fs_info, recover);
 		length -= sublen;
@@ -1382,19 +1382,19 @@ static void scrub_bio_wait_endio(struct bio *bio)
 
 static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 					struct bio *bio,
-					struct scrub_page *spage)
+					struct scrub_sector *ssector)
 {
 	DECLARE_COMPLETION_ONSTACK(done);
 	int ret;
 	int mirror_num;
 
-	bio->bi_iter.bi_sector = spage->logical >> 9;
+	bio->bi_iter.bi_sector = ssector->logical >> 9;
 	bio->bi_private = &done;
 	bio->bi_end_io = scrub_bio_wait_endio;
 
-	mirror_num = spage->sblock->sectorv[0]->mirror_num;
-	ret = raid56_parity_recover(bio, spage->recover->bioc,
-				    spage->recover->map_length,
+	mirror_num = ssector->sblock->sectorv[0]->mirror_num;
+	ret = raid56_parity_recover(bio, ssector->recover->bioc,
+				    ssector->recover->map_length,
 				    mirror_num, 0);
 	if (ret)
 		return ret;
@@ -1406,26 +1406,26 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 					  struct scrub_block *sblock)
 {
-	struct scrub_page *first_page = sblock->sectorv[0];
+	struct scrub_sector *first_sector = sblock->sectorv[0];
 	struct bio *bio;
 	int sector_num;
 
-	/* All pages in sblock belong to the same stripe on the same device. */
-	ASSERT(first_page->dev);
-	if (!first_page->dev->bdev)
+	/* All sectors in sblock belong to the same stripe on the same device. */
+	ASSERT(first_sector->dev);
+	if (!first_sector->dev->bdev)
 		goto out;
 
 	bio = btrfs_bio_alloc(BIO_MAX_VECS);
-	bio_set_dev(bio, first_page->dev->bdev);
+	bio_set_dev(bio, first_sector->dev->bdev);
 
 	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
-		struct scrub_page *spage = sblock->sectorv[sector_num];
+		struct scrub_sector *ssector = sblock->sectorv[sector_num];
 
-		WARN_ON(!spage->page);
-		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
+		WARN_ON(!ssector->page);
+		bio_add_page(bio, ssector->page, PAGE_SIZE, 0);
 	}
 
-	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) {
+	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_sector)) {
 		bio_put(bio);
 		goto out;
 	}
@@ -1444,10 +1444,10 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 
 /*
  * this function will check the on disk data for checksum errors, header
- * errors and read I/O errors. If any I/O errors happen, the exact pages
+ * errors and read I/O errors. If any I/O errors happen, the exact sectors
  * which are errored are marked as being bad. The goal is to enable scrub
- * to take those pages that are not errored from all the mirrors so that
- * the pages that are errored in the just handled mirror can be repaired.
+ * to take those sectors that are not errored from all the mirrors so that
+ * the sectors that are errored in the just handled mirror can be repaired.
  */
 static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 				struct scrub_block *sblock,
@@ -1463,24 +1463,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 
 	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
 		struct bio *bio;
-		struct scrub_page *spage = sblock->sectorv[sector_num];
+		struct scrub_sector *ssector = sblock->sectorv[sector_num];
 
-		if (spage->dev->bdev == NULL) {
-			spage->io_error = 1;
+		if (ssector->dev->bdev == NULL) {
+			ssector->io_error = 1;
 			sblock->no_io_error_seen = 0;
 			continue;
 		}
 
-		WARN_ON(!spage->page);
+		WARN_ON(!ssector->page);
 		bio = btrfs_bio_alloc(1);
-		bio_set_dev(bio, spage->dev->bdev);
+		bio_set_dev(bio, ssector->dev->bdev);
 
-		bio_add_page(bio, spage->page, fs_info->sectorsize, 0);
-		bio->bi_iter.bi_sector = spage->physical >> 9;
+		bio_add_page(bio, ssector->page, fs_info->sectorsize, 0);
+		bio->bi_iter.bi_sector = ssector->physical >> 9;
 		bio->bi_opf = REQ_OP_READ;
 
 		if (btrfsic_submit_bio_wait(bio)) {
-			spage->io_error = 1;
+			ssector->io_error = 1;
 			sblock->no_io_error_seen = 0;
 		}
 
@@ -1492,9 +1492,9 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 }
 
 static inline int scrub_check_fsid(u8 fsid[],
-				   struct scrub_page *spage)
+				   struct scrub_sector *ssector)
 {
-	struct btrfs_fs_devices *fs_devices = spage->dev->fs_devices;
+	struct btrfs_fs_devices *fs_devices = ssector->dev->fs_devices;
 	int ret;
 
 	ret = memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE);
@@ -1522,9 +1522,9 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
 	for (sector_num = 0; sector_num < sblock_bad->sector_count; sector_num++) {
 		int ret_sub;
 
-		ret_sub = scrub_repair_page_from_good_copy(sblock_bad,
-							   sblock_good,
-							   sector_num, 1);
+		ret_sub = scrub_repair_sector_from_good_copy(sblock_bad,
+							     sblock_good,
+							     sector_num, 1);
 		if (ret_sub)
 			ret = ret_sub;
 	}
@@ -1532,41 +1532,41 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
 	return ret;
 }
 
-static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
-					    struct scrub_block *sblock_good,
-					    int sector_num, int force_write)
+static int scrub_repair_sector_from_good_copy(struct scrub_block *sblock_bad,
+					      struct scrub_block *sblock_good,
+					      int sector_num, int force_write)
 {
-	struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
-	struct scrub_page *spage_good = sblock_good->sectorv[sector_num];
+	struct scrub_sector *ssector_bad = sblock_bad->sectorv[sector_num];
+	struct scrub_sector *ssector_good = sblock_good->sectorv[sector_num];
 	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
 	const u32 sectorsize = fs_info->sectorsize;
 
-	BUG_ON(spage_bad->page == NULL);
-	BUG_ON(spage_good->page == NULL);
+	BUG_ON(ssector_bad->page == NULL);
+	BUG_ON(ssector_good->page == NULL);
 	if (force_write || sblock_bad->header_error ||
-	    sblock_bad->checksum_error || spage_bad->io_error) {
+	    sblock_bad->checksum_error || ssector_bad->io_error) {
 		struct bio *bio;
 		int ret;
 
-		if (!spage_bad->dev->bdev) {
+		if (!ssector_bad->dev->bdev) {
 			btrfs_warn_rl(fs_info,
 				"scrub_repair_page_from_good_copy(bdev == NULL) is unexpected");
 			return -EIO;
 		}
 
 		bio = btrfs_bio_alloc(1);
-		bio_set_dev(bio, spage_bad->dev->bdev);
-		bio->bi_iter.bi_sector = spage_bad->physical >> 9;
+		bio_set_dev(bio, ssector_bad->dev->bdev);
+		bio->bi_iter.bi_sector = ssector_bad->physical >> 9;
 		bio->bi_opf = REQ_OP_WRITE;
 
-		ret = bio_add_page(bio, spage_good->page, sectorsize, 0);
+		ret = bio_add_page(bio, ssector_good->page, sectorsize, 0);
 		if (ret != sectorsize) {
 			bio_put(bio);
 			return -EIO;
 		}
 
 		if (btrfsic_submit_bio_wait(bio)) {
-			btrfs_dev_stat_inc_and_print(spage_bad->dev,
+			btrfs_dev_stat_inc_and_print(ssector_bad->dev,
 				BTRFS_DEV_STAT_WRITE_ERRS);
 			atomic64_inc(&fs_info->dev_replace.num_write_errors);
 			bio_put(bio);
@@ -1593,22 +1593,22 @@ static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
 	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
 		int ret;
 
-		ret = scrub_write_page_to_dev_replace(sblock, sector_num);
+		ret = scrub_write_sector_to_dev_replace(sblock, sector_num);
 		if (ret)
 			atomic64_inc(&fs_info->dev_replace.num_write_errors);
 	}
 }
 
-static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
-					   int sector_num)
+static int scrub_write_sector_to_dev_replace(struct scrub_block *sblock,
+					     int sector_num)
 {
-	struct scrub_page *spage = sblock->sectorv[sector_num];
+	struct scrub_sector *ssector = sblock->sectorv[sector_num];
 
-	BUG_ON(spage->page == NULL);
-	if (spage->io_error)
-		clear_page(page_address(spage->page));
+	BUG_ON(ssector->page == NULL);
+	if (ssector->io_error)
+		clear_page(page_address(ssector->page));
 
-	return scrub_add_page_to_wr_bio(sblock->sctx, spage);
+	return scrub_add_sector_to_wr_bio(sblock->sctx, ssector);
 }
 
 static int fill_writer_pointer_gap(struct scrub_ctx *sctx, u64 physical)
@@ -1633,8 +1633,8 @@ static int fill_writer_pointer_gap(struct scrub_ctx *sctx, u64 physical)
 	return ret;
 }
 
-static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
-				    struct scrub_page *spage)
+static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
+				      struct scrub_sector *ssector)
 {
 	struct scrub_bio *sbio;
 	int ret;
@@ -1657,14 +1657,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		struct bio *bio;
 
 		ret = fill_writer_pointer_gap(sctx,
-					      spage->physical_for_dev_replace);
+					      ssector->physical_for_dev_replace);
 		if (ret) {
 			mutex_unlock(&sctx->wr_lock);
 			return ret;
 		}
 
-		sbio->physical = spage->physical_for_dev_replace;
-		sbio->logical = spage->logical;
+		sbio->physical = ssector->physical_for_dev_replace;
+		sbio->logical = ssector->logical;
 		sbio->dev = sctx->wr_tgtdev;
 		bio = sbio->bio;
 		if (!bio) {
@@ -1679,14 +1679,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		bio->bi_opf = REQ_OP_WRITE;
 		sbio->status = 0;
 	} else if (sbio->physical + sbio->page_count * sectorsize !=
-		   spage->physical_for_dev_replace ||
+		   ssector->physical_for_dev_replace ||
 		   sbio->logical + sbio->page_count * sectorsize !=
-		   spage->logical) {
+		   ssector->logical) {
 		scrub_wr_submit(sctx);
 		goto again;
 	}
 
-	ret = bio_add_page(sbio->bio, spage->page, sectorsize, 0);
+	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
 	if (ret != sectorsize) {
 		if (sbio->page_count < 1) {
 			bio_put(sbio->bio);
@@ -1698,8 +1698,8 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		goto again;
 	}
 
-	sbio->pagev[sbio->page_count] = spage;
-	scrub_page_get(spage);
+	sbio->pagev[sbio->page_count] = ssector;
+	scrub_sector_get(ssector);
 	sbio->page_count++;
 	if (sbio->page_count == sctx->pages_per_bio)
 		scrub_wr_submit(sctx);
@@ -1754,15 +1754,15 @@ static void scrub_wr_bio_end_io_worker(struct btrfs_work *work)
 			&sbio->sctx->fs_info->dev_replace;
 
 		for (i = 0; i < sbio->page_count; i++) {
-			struct scrub_page *spage = sbio->pagev[i];
+			struct scrub_sector *ssector = sbio->pagev[i];
 
-			spage->io_error = 1;
+			ssector->io_error = 1;
 			atomic64_inc(&dev_replace->num_write_errors);
 		}
 	}
 
 	for (i = 0; i < sbio->page_count; i++)
-		scrub_page_put(sbio->pagev[i]);
+		scrub_sector_put(sbio->pagev[i]);
 
 	bio_put(sbio->bio);
 	kfree(sbio);
@@ -1809,26 +1809,26 @@ static int scrub_checksum_data(struct scrub_block *sblock)
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
 	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
 	u8 csum[BTRFS_CSUM_SIZE];
-	struct scrub_page *spage;
+	struct scrub_sector *ssector;
 	char *kaddr;
 
 	BUG_ON(sblock->sector_count < 1);
-	spage = sblock->sectorv[0];
-	if (!spage->have_csum)
+	ssector = sblock->sectorv[0];
+	if (!ssector->have_csum)
 		return 0;
 
-	kaddr = page_address(spage->page);
+	kaddr = page_address(ssector->page);
 
 	shash->tfm = fs_info->csum_shash;
 	crypto_shash_init(shash);
 
 	/*
-	 * In scrub_pages() and scrub_pages_for_parity() we ensure each spage
+	 * In scrub_sectors() and scrub_sectors_for_parity() we ensure each ssector
 	 * only contains one sector of data.
 	 */
 	crypto_shash_digest(shash, kaddr, fs_info->sectorsize, csum);
 
-	if (memcmp(csum, spage->csum, fs_info->csum_size))
+	if (memcmp(csum, ssector->csum, fs_info->csum_size))
 		sblock->checksum_error = 1;
 	return sblock->checksum_error;
 }
@@ -1849,16 +1849,16 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 	const u32 sectorsize = sctx->fs_info->sectorsize;
 	const int num_sectors = fs_info->nodesize >> fs_info->sectorsize_bits;
 	int i;
-	struct scrub_page *spage;
+	struct scrub_sector *ssector;
 	char *kaddr;
 
 	BUG_ON(sblock->sector_count < 1);
 
-	/* Each member in pagev is just one sector , not a full page */
+	/* Each member in sectorv is just one sector */
 	ASSERT(sblock->sector_count == num_sectors);
 
-	spage = sblock->sectorv[0];
-	kaddr = page_address(spage->page);
+	ssector = sblock->sectorv[0];
+	kaddr = page_address(ssector->page);
 	h = (struct btrfs_header *)kaddr;
 	memcpy(on_disk_csum, h->csum, sctx->fs_info->csum_size);
 
@@ -1867,15 +1867,15 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 	 * a) don't have an extent buffer and
 	 * b) the page is already kmapped
 	 */
-	if (spage->logical != btrfs_stack_header_bytenr(h))
+	if (ssector->logical != btrfs_stack_header_bytenr(h))
 		sblock->header_error = 1;
 
-	if (spage->generation != btrfs_stack_header_generation(h)) {
+	if (ssector->generation != btrfs_stack_header_generation(h)) {
 		sblock->header_error = 1;
 		sblock->generation_error = 1;
 	}
 
-	if (!scrub_check_fsid(h->fsid, spage))
+	if (!scrub_check_fsid(h->fsid, ssector))
 		sblock->header_error = 1;
 
 	if (memcmp(h->chunk_tree_uuid, fs_info->chunk_tree_uuid,
@@ -1906,23 +1906,23 @@ static int scrub_checksum_super(struct scrub_block *sblock)
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
 	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
 	u8 calculated_csum[BTRFS_CSUM_SIZE];
-	struct scrub_page *spage;
+	struct scrub_sector *ssector;
 	char *kaddr;
 	int fail_gen = 0;
 	int fail_cor = 0;
 
 	BUG_ON(sblock->sector_count < 1);
-	spage = sblock->sectorv[0];
-	kaddr = page_address(spage->page);
+	ssector = sblock->sectorv[0];
+	kaddr = page_address(ssector->page);
 	s = (struct btrfs_super_block *)kaddr;
 
-	if (spage->logical != btrfs_super_bytenr(s))
+	if (ssector->logical != btrfs_super_bytenr(s))
 		++fail_cor;
 
-	if (spage->generation != btrfs_super_generation(s))
+	if (ssector->generation != btrfs_super_generation(s))
 		++fail_gen;
 
-	if (!scrub_check_fsid(s->fsid, spage))
+	if (!scrub_check_fsid(s->fsid, ssector))
 		++fail_cor;
 
 	shash->tfm = fs_info->csum_shash;
@@ -1943,10 +1943,10 @@ static int scrub_checksum_super(struct scrub_block *sblock)
 		++sctx->stat.super_errors;
 		spin_unlock(&sctx->stat_lock);
 		if (fail_cor)
-			btrfs_dev_stat_inc_and_print(spage->dev,
+			btrfs_dev_stat_inc_and_print(ssector->dev,
 				BTRFS_DEV_STAT_CORRUPTION_ERRS);
 		else
-			btrfs_dev_stat_inc_and_print(spage->dev,
+			btrfs_dev_stat_inc_and_print(ssector->dev,
 				BTRFS_DEV_STAT_GENERATION_ERRS);
 	}
 
@@ -1967,22 +1967,22 @@ static void scrub_block_put(struct scrub_block *sblock)
 			scrub_parity_put(sblock->sparity);
 
 		for (i = 0; i < sblock->sector_count; i++)
-			scrub_page_put(sblock->sectorv[i]);
+			scrub_sector_put(sblock->sectorv[i]);
 		kfree(sblock);
 	}
 }
 
-static void scrub_page_get(struct scrub_page *spage)
+static void scrub_sector_get(struct scrub_sector *ssector)
 {
-	atomic_inc(&spage->refs);
+	atomic_inc(&ssector->refs);
 }
 
-static void scrub_page_put(struct scrub_page *spage)
+static void scrub_sector_put(struct scrub_sector *ssector)
 {
-	if (atomic_dec_and_test(&spage->refs)) {
-		if (spage->page)
-			__free_page(spage->page);
-		kfree(spage);
+	if (atomic_dec_and_test(&ssector->refs)) {
+		if (ssector->page)
+			__free_page(ssector->page);
+		kfree(ssector);
 	}
 }
 
@@ -2060,10 +2060,10 @@ static void scrub_submit(struct scrub_ctx *sctx)
 	btrfsic_submit_bio(sbio->bio);
 }
 
-static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
-				    struct scrub_page *spage)
+static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
+				      struct scrub_sector *ssector)
 {
-	struct scrub_block *sblock = spage->sblock;
+	struct scrub_block *sblock = ssector->sblock;
 	struct scrub_bio *sbio;
 	const u32 sectorsize = sctx->fs_info->sectorsize;
 	int ret;
@@ -2089,9 +2089,9 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 	if (sbio->page_count == 0) {
 		struct bio *bio;
 
-		sbio->physical = spage->physical;
-		sbio->logical = spage->logical;
-		sbio->dev = spage->dev;
+		sbio->physical = ssector->physical;
+		sbio->logical = ssector->logical;
+		sbio->dev = ssector->dev;
 		bio = sbio->bio;
 		if (!bio) {
 			bio = btrfs_bio_alloc(sctx->pages_per_bio);
@@ -2105,16 +2105,16 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 		bio->bi_opf = REQ_OP_READ;
 		sbio->status = 0;
 	} else if (sbio->physical + sbio->page_count * sectorsize !=
-		   spage->physical ||
+		   ssector->physical ||
 		   sbio->logical + sbio->page_count * sectorsize !=
-		   spage->logical ||
-		   sbio->dev != spage->dev) {
+		   ssector->logical ||
+		   sbio->dev != ssector->dev) {
 		scrub_submit(sctx);
 		goto again;
 	}
 
-	sbio->pagev[sbio->page_count] = spage;
-	ret = bio_add_page(sbio->bio, spage->page, sectorsize, 0);
+	sbio->pagev[sbio->page_count] = ssector;
+	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
 	if (ret != sectorsize) {
 		if (sbio->page_count < 1) {
 			bio_put(sbio->bio);
@@ -2126,7 +2126,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 	}
 
 	scrub_block_get(sblock); /* one for the page added to the bio */
-	atomic_inc(&sblock->outstanding_pages);
+	atomic_inc(&sblock->outstanding_sectors);
 	sbio->page_count++;
 	if (sbio->page_count == sctx->pages_per_bio)
 		scrub_submit(sctx);
@@ -2228,9 +2228,9 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 		goto rbio_out;
 
 	for (i = 0; i < sblock->sector_count; i++) {
-		struct scrub_page *spage = sblock->sectorv[i];
+		struct scrub_sector *ssector = sblock->sectorv[i];
 
-		raid56_add_scrub_pages(rbio, spage->page, spage->logical);
+		raid56_add_scrub_pages(rbio, ssector->page, ssector->logical);
 	}
 
 	btrfs_init_work(&sblock->work, scrub_missing_raid56_worker, NULL, NULL);
@@ -2249,7 +2249,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 	spin_unlock(&sctx->stat_lock);
 }
 
-static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
+static int scrub_sectors(struct scrub_ctx *sctx, u64 logical, u32 len,
 		       u64 physical, struct btrfs_device *dev, u64 flags,
 		       u64 gen, int mirror_num, u8 *csum,
 		       u64 physical_for_dev_replace)
@@ -2273,7 +2273,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 	sblock->no_io_error_seen = 1;
 
 	for (index = 0; len > 0; index++) {
-		struct scrub_page *spage;
+		struct scrub_sector *ssector;
 		/*
 		 * Here we will allocate one page for one sector to scrub.
 		 * This is fine if PAGE_SIZE == sectorsize, but will cost
@@ -2281,8 +2281,8 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 		 */
 		u32 l = min(sectorsize, len);
 
-		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
-		if (!spage) {
+		ssector = kzalloc(sizeof(*ssector), GFP_KERNEL);
+		if (!ssector) {
 leave_nomem:
 			spin_lock(&sctx->stat_lock);
 			sctx->stat.malloc_errors++;
@@ -2291,25 +2291,25 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 			return -ENOMEM;
 		}
 		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
-		scrub_page_get(spage);
-		sblock->sectorv[index] = spage;
-		spage->sblock = sblock;
-		spage->dev = dev;
-		spage->flags = flags;
-		spage->generation = gen;
-		spage->logical = logical;
-		spage->physical = physical;
-		spage->physical_for_dev_replace = physical_for_dev_replace;
-		spage->mirror_num = mirror_num;
+		scrub_sector_get(ssector);
+		sblock->sectorv[index] = ssector;
+		ssector->sblock = sblock;
+		ssector->dev = dev;
+		ssector->flags = flags;
+		ssector->generation = gen;
+		ssector->logical = logical;
+		ssector->physical = physical;
+		ssector->physical_for_dev_replace = physical_for_dev_replace;
+		ssector->mirror_num = mirror_num;
 		if (csum) {
-			spage->have_csum = 1;
-			memcpy(spage->csum, csum, sctx->fs_info->csum_size);
+			ssector->have_csum = 1;
+			memcpy(ssector->csum, csum, sctx->fs_info->csum_size);
 		} else {
-			spage->have_csum = 0;
+			ssector->have_csum = 0;
 		}
 		sblock->sector_count++;
-		spage->page = alloc_page(GFP_KERNEL);
-		if (!spage->page)
+		ssector->page = alloc_page(GFP_KERNEL);
+		if (!ssector->page)
 			goto leave_nomem;
 		len -= l;
 		logical += l;
@@ -2326,10 +2326,10 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 		scrub_missing_raid56_pages(sblock);
 	} else {
 		for (index = 0; index < sblock->sector_count; index++) {
-			struct scrub_page *spage = sblock->sectorv[index];
+			struct scrub_sector *ssector = sblock->sectorv[index];
 			int ret;
 
-			ret = scrub_add_page_to_rd_bio(sctx, spage);
+			ret = scrub_add_sector_to_rd_bio(sctx, ssector);
 			if (ret) {
 				scrub_block_put(sblock);
 				return ret;
@@ -2365,19 +2365,19 @@ static void scrub_bio_end_io_worker(struct btrfs_work *work)
 	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
 	if (sbio->status) {
 		for (i = 0; i < sbio->page_count; i++) {
-			struct scrub_page *spage = sbio->pagev[i];
+			struct scrub_sector *ssector = sbio->pagev[i];
 
-			spage->io_error = 1;
-			spage->sblock->no_io_error_seen = 0;
+			ssector->io_error = 1;
+			ssector->sblock->no_io_error_seen = 0;
 		}
 	}
 
 	/* now complete the scrub_block items that have all pages completed */
 	for (i = 0; i < sbio->page_count; i++) {
-		struct scrub_page *spage = sbio->pagev[i];
-		struct scrub_block *sblock = spage->sblock;
+		struct scrub_sector *ssector = sbio->pagev[i];
+		struct scrub_block *sblock = ssector->sblock;
 
-		if (atomic_dec_and_test(&sblock->outstanding_pages))
+		if (atomic_dec_and_test(&sblock->outstanding_sectors))
 			scrub_block_complete(sblock);
 		scrub_block_put(sblock);
 	}
@@ -2571,7 +2571,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
 			if (have_csum == 0)
 				++sctx->stat.no_csum;
 		}
-		ret = scrub_pages(sctx, logical, l, physical, dev, flags, gen,
+		ret = scrub_sectors(sctx, logical, l, physical, dev, flags, gen,
 				  mirror_num, have_csum ? csum : NULL,
 				  physical_for_dev_replace);
 		if (ret)
@@ -2584,7 +2584,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
 	return 0;
 }
 
-static int scrub_pages_for_parity(struct scrub_parity *sparity,
+static int scrub_sectors_for_parity(struct scrub_parity *sparity,
 				  u64 logical, u32 len,
 				  u64 physical, struct btrfs_device *dev,
 				  u64 flags, u64 gen, int mirror_num, u8 *csum)
@@ -2613,10 +2613,10 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 	scrub_parity_get(sparity);
 
 	for (index = 0; len > 0; index++) {
-		struct scrub_page *spage;
+		struct scrub_sector *ssector;
 
-		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
-		if (!spage) {
+		ssector = kzalloc(sizeof(*ssector), GFP_KERNEL);
+		if (!ssector) {
 leave_nomem:
 			spin_lock(&sctx->stat_lock);
 			sctx->stat.malloc_errors++;
@@ -2626,27 +2626,27 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		}
 		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
 		/* For scrub block */
-		scrub_page_get(spage);
-		sblock->sectorv[index] = spage;
+		scrub_sector_get(ssector);
+		sblock->sectorv[index] = ssector;
 		/* For scrub parity */
-		scrub_page_get(spage);
-		list_add_tail(&spage->list, &sparity->spages);
-		spage->sblock = sblock;
-		spage->dev = dev;
-		spage->flags = flags;
-		spage->generation = gen;
-		spage->logical = logical;
-		spage->physical = physical;
-		spage->mirror_num = mirror_num;
+		scrub_sector_get(ssector);
+		list_add_tail(&ssector->list, &sparity->ssectors);
+		ssector->sblock = sblock;
+		ssector->dev = dev;
+		ssector->flags = flags;
+		ssector->generation = gen;
+		ssector->logical = logical;
+		ssector->physical = physical;
+		ssector->mirror_num = mirror_num;
 		if (csum) {
-			spage->have_csum = 1;
-			memcpy(spage->csum, csum, sctx->fs_info->csum_size);
+			ssector->have_csum = 1;
+			memcpy(ssector->csum, csum, sctx->fs_info->csum_size);
 		} else {
-			spage->have_csum = 0;
+			ssector->have_csum = 0;
 		}
 		sblock->sector_count++;
-		spage->page = alloc_page(GFP_KERNEL);
-		if (!spage->page)
+		ssector->page = alloc_page(GFP_KERNEL);
+		if (!ssector->page)
 			goto leave_nomem;
 
 
@@ -2658,17 +2658,17 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 
 	WARN_ON(sblock->sector_count == 0);
 	for (index = 0; index < sblock->sector_count; index++) {
-		struct scrub_page *spage = sblock->sectorv[index];
+		struct scrub_sector *ssector = sblock->sectorv[index];
 		int ret;
 
-		ret = scrub_add_page_to_rd_bio(sctx, spage);
+		ret = scrub_add_sector_to_rd_bio(sctx, ssector);
 		if (ret) {
 			scrub_block_put(sblock);
 			return ret;
 		}
 	}
 
-	/* last one frees, either here or in bio completion for last page */
+	/* last one frees, either here or in bio completion for last sector */
 	scrub_block_put(sblock);
 	return 0;
 }
@@ -2707,7 +2707,7 @@ static int scrub_extent_for_parity(struct scrub_parity *sparity,
 			if (have_csum == 0)
 				goto skip;
 		}
-		ret = scrub_pages_for_parity(sparity, logical, l, physical, dev,
+		ret = scrub_sectors_for_parity(sparity, logical, l, physical, dev,
 					     flags, gen, mirror_num,
 					     have_csum ? csum : NULL);
 		if (ret)
@@ -2767,7 +2767,7 @@ static int get_raid56_logic_offset(u64 physical, int num,
 static void scrub_free_parity(struct scrub_parity *sparity)
 {
 	struct scrub_ctx *sctx = sparity->sctx;
-	struct scrub_page *curr, *next;
+	struct scrub_sector *curr, *next;
 	int nbits;
 
 	nbits = bitmap_weight(sparity->ebitmap, sparity->nsectors);
@@ -2778,9 +2778,9 @@ static void scrub_free_parity(struct scrub_parity *sparity)
 		spin_unlock(&sctx->stat_lock);
 	}
 
-	list_for_each_entry_safe(curr, next, &sparity->spages, list) {
+	list_for_each_entry_safe(curr, next, &sparity->ssectors, list) {
 		list_del_init(&curr->list);
-		scrub_page_put(curr);
+		scrub_sector_put(curr);
 	}
 
 	kfree(sparity);
@@ -2943,7 +2943,7 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
 	sparity->logic_start = logic_start;
 	sparity->logic_end = logic_end;
 	refcount_set(&sparity->refs, 1);
-	INIT_LIST_HEAD(&sparity->spages);
+	INIT_LIST_HEAD(&sparity->ssectors);
 	sparity->dbitmap = sparity->bitmap;
 	sparity->ebitmap = (void *)sparity->bitmap + bitmap_len;
 
@@ -3940,9 +3940,9 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
 		if (!btrfs_check_super_location(scrub_dev, bytenr))
 			continue;
 
-		ret = scrub_pages(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
-				  scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
-				  NULL, bytenr);
+		ret = scrub_sectors(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
+				    scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
+				    NULL, bytenr);
 		if (ret)
 			return ret;
 	}
@@ -4061,7 +4061,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
 	    SCRUB_MAX_SECTORS_PER_BLOCK * fs_info->sectorsize ||
 	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_SECTORS_PER_BLOCK) {
 		/*
-		 * would exhaust the array bounds of pagev member in
+		 * would exhaust the array bounds of sectorv member in
 		 * struct scrub_block
 		 */
 		btrfs_err(fs_info,
@@ -4137,7 +4137,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
 	/*
 	 * In order to avoid deadlock with reclaim when there is a transaction
 	 * trying to pause scrub, make sure we use GFP_NOFS for all the
-	 * allocations done at btrfs_scrub_pages() and scrub_pages_for_parity()
+	 * allocations done at btrfs_scrub_sectors() and scrub_sectors_for_parity()
 	 * invoked by our callees. The pausing request is done when the
 	 * transaction commit starts, and it blocks the transaction until scrub
 	 * is paused (done at specific points at scrub_stripe() or right above
-- 
2.35.1



* [PATCH v2 3/3] btrfs: scrub: rename scrub_bio::pagev and related members
  2022-03-11  7:34 [PATCH v2 0/3] btrfs: scrub: big renaming to address the page and sector difference Qu Wenruo
  2022-03-11  7:34 ` [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev Qu Wenruo
  2022-03-11  7:34 ` [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector Qu Wenruo
@ 2022-03-11  7:34 ` Qu Wenruo
  2 siblings, 0 replies; 8+ messages in thread
From: Qu Wenruo @ 2022-03-11  7:34 UTC (permalink / raw)
  To: linux-btrfs

Since scrub gained subpage support, one page no longer always represents
one sector, so the names scrub_bio::pagev and scrub_bio::page_count are
no longer accurate.

Rename them to scrub_bio::sectorv and scrub_bio::sector_count
respectively.

This also covers scrub_ctx::pages_per_bio and the related macros.

With this, the renaming of page-based members in scrub is finished.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 76 ++++++++++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index c9198c9af4c4..2316269cade0 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -45,7 +45,7 @@ struct scrub_ctx;
  * operations. The first one configures an upper limit for the number
  * of (dynamically allocated) pages that are added to a bio.
  */
-#define SCRUB_PAGES_PER_BIO	32	/* 128KiB per bio for x86 */
+#define SCRUB_SECTORS_PER_BIO	32	/* 128KiB per bio for x86 */
 #define SCRUB_BIOS_PER_SCTX	64	/* 8MiB per device in flight for x86 */
 
 /*
@@ -87,8 +87,8 @@ struct scrub_bio {
 	blk_status_t		status;
 	u64			logical;
 	u64			physical;
-	struct scrub_sector	*pagev[SCRUB_PAGES_PER_BIO];
-	int			page_count;
+	struct scrub_sector	*sectorv[SCRUB_SECTORS_PER_BIO];
+	int			sector_count;
 	int			next_free;
 	struct btrfs_work	work;
 };
@@ -158,7 +158,7 @@ struct scrub_ctx {
 	struct list_head	csum_list;
 	atomic_t		cancel_req;
 	int			readonly;
-	int			pages_per_bio;
+	int			sectors_per_bio;
 
 	/* State of IO submission throttling affecting the associated device */
 	ktime_t			throttle_deadline;
@@ -535,9 +535,9 @@ static noinline_for_stack void scrub_free_ctx(struct scrub_ctx *sctx)
 	if (sctx->curr != -1) {
 		struct scrub_bio *sbio = sctx->bios[sctx->curr];
 
-		for (i = 0; i < sbio->page_count; i++) {
-			WARN_ON(!sbio->pagev[i]->page);
-			scrub_block_put(sbio->pagev[i]->sblock);
+		for (i = 0; i < sbio->sector_count; i++) {
+			WARN_ON(!sbio->sectorv[i]->page);
+			scrub_block_put(sbio->sectorv[i]->sblock);
 		}
 		bio_put(sbio->bio);
 	}
@@ -572,7 +572,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
 		goto nomem;
 	refcount_set(&sctx->refs, 1);
 	sctx->is_dev_replace = is_dev_replace;
-	sctx->pages_per_bio = SCRUB_PAGES_PER_BIO;
+	sctx->sectors_per_bio = SCRUB_SECTORS_PER_BIO;
 	sctx->curr = -1;
 	sctx->fs_info = fs_info;
 	INIT_LIST_HEAD(&sctx->csum_list);
@@ -586,7 +586,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
 
 		sbio->index = i;
 		sbio->sctx = sctx;
-		sbio->page_count = 0;
+		sbio->sector_count = 0;
 		btrfs_init_work(&sbio->work, scrub_bio_end_io_worker, NULL,
 				NULL);
 
@@ -1650,10 +1650,10 @@ static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
 			return -ENOMEM;
 		}
 		sctx->wr_curr_bio->sctx = sctx;
-		sctx->wr_curr_bio->page_count = 0;
+		sctx->wr_curr_bio->sector_count = 0;
 	}
 	sbio = sctx->wr_curr_bio;
-	if (sbio->page_count == 0) {
+	if (sbio->sector_count == 0) {
 		struct bio *bio;
 
 		ret = fill_writer_pointer_gap(sctx,
@@ -1668,7 +1668,7 @@ static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
 		sbio->dev = sctx->wr_tgtdev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_bio_alloc(sctx->pages_per_bio);
+			bio = btrfs_bio_alloc(sctx->sectors_per_bio);
 			sbio->bio = bio;
 		}
 
@@ -1678,9 +1678,9 @@ static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
 		bio->bi_iter.bi_sector = sbio->physical >> 9;
 		bio->bi_opf = REQ_OP_WRITE;
 		sbio->status = 0;
-	} else if (sbio->physical + sbio->page_count * sectorsize !=
+	} else if (sbio->physical + sbio->sector_count * sectorsize !=
 		   ssector->physical_for_dev_replace ||
-		   sbio->logical + sbio->page_count * sectorsize !=
+		   sbio->logical + sbio->sector_count * sectorsize !=
 		   ssector->logical) {
 		scrub_wr_submit(sctx);
 		goto again;
@@ -1688,7 +1688,7 @@ static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
 
 	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
 	if (ret != sectorsize) {
-		if (sbio->page_count < 1) {
+		if (sbio->sector_count < 1) {
 			bio_put(sbio->bio);
 			sbio->bio = NULL;
 			mutex_unlock(&sctx->wr_lock);
@@ -1698,10 +1698,10 @@ static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
 		goto again;
 	}
 
-	sbio->pagev[sbio->page_count] = ssector;
+	sbio->sectorv[sbio->sector_count] = ssector;
 	scrub_sector_get(ssector);
-	sbio->page_count++;
-	if (sbio->page_count == sctx->pages_per_bio)
+	sbio->sector_count++;
+	if (sbio->sector_count == sctx->sectors_per_bio)
 		scrub_wr_submit(sctx);
 	mutex_unlock(&sctx->wr_lock);
 
@@ -1726,7 +1726,7 @@ static void scrub_wr_submit(struct scrub_ctx *sctx)
 	btrfsic_submit_bio(sbio->bio);
 
 	if (btrfs_is_zoned(sctx->fs_info))
-		sctx->write_pointer = sbio->physical + sbio->page_count *
+		sctx->write_pointer = sbio->physical + sbio->sector_count *
 			sctx->fs_info->sectorsize;
 }
 
@@ -1748,21 +1748,21 @@ static void scrub_wr_bio_end_io_worker(struct btrfs_work *work)
 	struct scrub_ctx *sctx = sbio->sctx;
 	int i;
 
-	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
+	ASSERT(sbio->sector_count <= SCRUB_SECTORS_PER_BIO);
 	if (sbio->status) {
 		struct btrfs_dev_replace *dev_replace =
 			&sbio->sctx->fs_info->dev_replace;
 
-		for (i = 0; i < sbio->page_count; i++) {
-			struct scrub_sector *ssector = sbio->pagev[i];
+		for (i = 0; i < sbio->sector_count; i++) {
+			struct scrub_sector *ssector = sbio->sectorv[i];
 
 			ssector->io_error = 1;
 			atomic64_inc(&dev_replace->num_write_errors);
 		}
 	}
 
-	for (i = 0; i < sbio->page_count; i++)
-		scrub_sector_put(sbio->pagev[i]);
+	for (i = 0; i < sbio->sector_count; i++)
+		scrub_sector_put(sbio->sectorv[i]);
 
 	bio_put(sbio->bio);
 	kfree(sbio);
@@ -2078,7 +2078,7 @@ static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
 		if (sctx->curr != -1) {
 			sctx->first_free = sctx->bios[sctx->curr]->next_free;
 			sctx->bios[sctx->curr]->next_free = -1;
-			sctx->bios[sctx->curr]->page_count = 0;
+			sctx->bios[sctx->curr]->sector_count = 0;
 			spin_unlock(&sctx->list_lock);
 		} else {
 			spin_unlock(&sctx->list_lock);
@@ -2086,7 +2086,7 @@ static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
 		}
 	}
 	sbio = sctx->bios[sctx->curr];
-	if (sbio->page_count == 0) {
+	if (sbio->sector_count == 0) {
 		struct bio *bio;
 
 		sbio->physical = ssector->physical;
@@ -2094,7 +2094,7 @@ static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
 		sbio->dev = ssector->dev;
 		bio = sbio->bio;
 		if (!bio) {
-			bio = btrfs_bio_alloc(sctx->pages_per_bio);
+			bio = btrfs_bio_alloc(sctx->sectors_per_bio);
 			sbio->bio = bio;
 		}
 
@@ -2104,19 +2104,19 @@ static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
 		bio->bi_iter.bi_sector = sbio->physical >> 9;
 		bio->bi_opf = REQ_OP_READ;
 		sbio->status = 0;
-	} else if (sbio->physical + sbio->page_count * sectorsize !=
+	} else if (sbio->physical + sbio->sector_count * sectorsize !=
 		   ssector->physical ||
-		   sbio->logical + sbio->page_count * sectorsize !=
+		   sbio->logical + sbio->sector_count * sectorsize !=
 		   ssector->logical ||
 		   sbio->dev != ssector->dev) {
 		scrub_submit(sctx);
 		goto again;
 	}
 
-	sbio->pagev[sbio->page_count] = ssector;
+	sbio->sectorv[sbio->sector_count] = ssector;
 	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
 	if (ret != sectorsize) {
-		if (sbio->page_count < 1) {
+		if (sbio->sector_count < 1) {
 			bio_put(sbio->bio);
 			sbio->bio = NULL;
 			return -EIO;
@@ -2127,8 +2127,8 @@ static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
 
 	scrub_block_get(sblock); /* one for the page added to the bio */
 	atomic_inc(&sblock->outstanding_sectors);
-	sbio->page_count++;
-	if (sbio->page_count == sctx->pages_per_bio)
+	sbio->sector_count++;
+	if (sbio->sector_count == sctx->sectors_per_bio)
 		scrub_submit(sctx);
 
 	return 0;
@@ -2362,10 +2362,10 @@ static void scrub_bio_end_io_worker(struct btrfs_work *work)
 	struct scrub_ctx *sctx = sbio->sctx;
 	int i;
 
-	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
+	ASSERT(sbio->sector_count <= SCRUB_SECTORS_PER_BIO);
 	if (sbio->status) {
-		for (i = 0; i < sbio->page_count; i++) {
-			struct scrub_sector *ssector = sbio->pagev[i];
+		for (i = 0; i < sbio->sector_count; i++) {
+			struct scrub_sector *ssector = sbio->sectorv[i];
 
 			ssector->io_error = 1;
 			ssector->sblock->no_io_error_seen = 0;
@@ -2373,8 +2373,8 @@ static void scrub_bio_end_io_worker(struct btrfs_work *work)
 	}
 
 	/* now complete the scrub_block items that have all pages completed */
-	for (i = 0; i < sbio->page_count; i++) {
-		struct scrub_sector *ssector = sbio->pagev[i];
+	for (i = 0; i < sbio->sector_count; i++) {
+		struct scrub_sector *ssector = sbio->sectorv[i];
 		struct scrub_block *sblock = ssector->sblock;
 
 		if (atomic_dec_and_test(&sblock->outstanding_sectors))
-- 
2.35.1



* Re: [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev
  2022-03-11  7:34 ` [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev Qu Wenruo
@ 2022-03-11 17:49   ` David Sterba
  2022-03-11 23:17     ` Qu Wenruo
  0 siblings, 1 reply; 8+ messages in thread
From: David Sterba @ 2022-03-11 17:49 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Fri, Mar 11, 2022 at 03:34:18PM +0800, Qu Wenruo wrote:
> The following will be renamed in this patch:
> 
> - scrub_block::pagev -> sectorv

I think you can come up with a different naming scheme and not copy the
original, e.g. something closer to what we've been using elsewhere. Here
'pagev' is a page vector, but 'sectors' is IMHO fine and also
understandable.

> 
> - scrub_block::page_count -> sector_count
> 
> - SCRUB_MAX_PAGES_PER_BLOCK -> SCRUB_MAX_SECTORS_PER_BLOCK
> 
> - page_num -> sector_num to iterate scrub_block::sectorv
> 
> For now scrub_page is not yet renamed, as the current changeset is
> already large enough.
> 
> The rename for scrub_page will come in a separate patch.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/scrub.c | 220 +++++++++++++++++++++++------------------------
>  1 file changed, 110 insertions(+), 110 deletions(-)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 11089568b287..fd67e1acdba6 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -52,7 +52,7 @@ struct scrub_ctx;
>   * The following value times PAGE_SIZE needs to be large enough to match the
>   * largest node/leaf/sector size that shall be supported.
>   */
> -#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
> +#define SCRUB_MAX_SECTORS_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
>  
>  struct scrub_recover {
>  	refcount_t		refs;
> @@ -94,8 +94,8 @@ struct scrub_bio {
>  };
>  
>  struct scrub_block {
> -	struct scrub_page	*pagev[SCRUB_MAX_PAGES_PER_BLOCK];
> -	int			page_count;
> +	struct scrub_page	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
> +	int			sector_count;
>  	atomic_t		outstanding_pages;
>  	refcount_t		refs; /* free mem on transition to zero */
>  	struct scrub_ctx	*sctx;
> @@ -728,16 +728,16 @@ static void scrub_print_warning(const char *errstr, struct scrub_block *sblock)
>  	u8 ref_level = 0;
>  	int ret;
>  
> -	WARN_ON(sblock->page_count < 1);
> -	dev = sblock->pagev[0]->dev;
> +	WARN_ON(sblock->sector_count < 1);
> +	dev = sblock->sectorv[0]->dev;
>  	fs_info = sblock->sctx->fs_info;
>  
>  	path = btrfs_alloc_path();
>  	if (!path)
>  		return;
>  
> -	swarn.physical = sblock->pagev[0]->physical;
> -	swarn.logical = sblock->pagev[0]->logical;
> +	swarn.physical = sblock->sectorv[0]->physical;
> +	swarn.logical = sblock->sectorv[0]->logical;
>  	swarn.errstr = errstr;
>  	swarn.dev = NULL;
>  
> @@ -817,16 +817,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	struct scrub_block *sblock_bad;
>  	int ret;
>  	int mirror_index;
> -	int page_num;
> +	int sector_num;
>  	int success;
>  	bool full_stripe_locked;
>  	unsigned int nofs_flag;
>  	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
>  				      DEFAULT_RATELIMIT_BURST);
>  
> -	BUG_ON(sblock_to_check->page_count < 1);
> +	BUG_ON(sblock_to_check->sector_count < 1);
>  	fs_info = sctx->fs_info;
> -	if (sblock_to_check->pagev[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
> +	if (sblock_to_check->sectorv[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
>  		/*
>  		 * if we find an error in a super block, we just report it.
>  		 * They will get written with the next transaction commit
> @@ -837,13 +837,13 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  		spin_unlock(&sctx->stat_lock);
>  		return 0;
>  	}
> -	logical = sblock_to_check->pagev[0]->logical;
> -	BUG_ON(sblock_to_check->pagev[0]->mirror_num < 1);
> -	failed_mirror_index = sblock_to_check->pagev[0]->mirror_num - 1;
> -	is_metadata = !(sblock_to_check->pagev[0]->flags &
> +	logical = sblock_to_check->sectorv[0]->logical;
> +	BUG_ON(sblock_to_check->sectorv[0]->mirror_num < 1);
> +	failed_mirror_index = sblock_to_check->sectorv[0]->mirror_num - 1;
> +	is_metadata = !(sblock_to_check->sectorv[0]->flags &
>  			BTRFS_EXTENT_FLAG_DATA);
> -	have_csum = sblock_to_check->pagev[0]->have_csum;
> -	dev = sblock_to_check->pagev[0]->dev;
> +	have_csum = sblock_to_check->sectorv[0]->have_csum;
> +	dev = sblock_to_check->sectorv[0]->dev;
>  
>  	if (!sctx->is_dev_replace && btrfs_repair_one_zone(fs_info, logical))
>  		return 0;
> @@ -1011,25 +1011,25 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  			continue;
>  
>  		/* raid56's mirror can be more than BTRFS_MAX_MIRRORS */
> -		if (!scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
> +		if (!scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
>  			if (mirror_index >= BTRFS_MAX_MIRRORS)
>  				break;
> -			if (!sblocks_for_recheck[mirror_index].page_count)
> +			if (!sblocks_for_recheck[mirror_index].sector_count)
>  				break;
>  
>  			sblock_other = sblocks_for_recheck + mirror_index;
>  		} else {
> -			struct scrub_recover *r = sblock_bad->pagev[0]->recover;
> +			struct scrub_recover *r = sblock_bad->sectorv[0]->recover;
>  			int max_allowed = r->bioc->num_stripes - r->bioc->num_tgtdevs;
>  
>  			if (mirror_index >= max_allowed)
>  				break;
> -			if (!sblocks_for_recheck[1].page_count)
> +			if (!sblocks_for_recheck[1].sector_count)
>  				break;
>  
>  			ASSERT(failed_mirror_index == 0);
>  			sblock_other = sblocks_for_recheck + 1;
> -			sblock_other->pagev[0]->mirror_num = 1 + mirror_index;
> +			sblock_other->sectorv[0]->mirror_num = 1 + mirror_index;
>  		}
>  
>  		/* build and submit the bios, check checksums */
> @@ -1078,16 +1078,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	 * area are unreadable.
>  	 */
>  	success = 1;
> -	for (page_num = 0; page_num < sblock_bad->page_count;
> -	     page_num++) {
> -		struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
> +	for (sector_num = 0; sector_num < sblock_bad->sector_count;

This is simple indexing, so while sector_num is accurate, a plain 'i'
would work too. It would also make some lines shorter.

> +	     sector_num++) {
> +		struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
>  		struct scrub_block *sblock_other = NULL;
>  
>  		/* skip no-io-error page in scrub */
>  		if (!spage_bad->io_error && !sctx->is_dev_replace)
>  			continue;
>  
> -		if (scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
> +		if (scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
>  			/*
>  			 * In case of dev replace, if raid56 rebuild process
>  			 * didn't work out correct data, then copy the content
> @@ -1100,10 +1100,10 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  			/* try to find no-io-error page in mirrors */
>  			for (mirror_index = 0;
>  			     mirror_index < BTRFS_MAX_MIRRORS &&
> -			     sblocks_for_recheck[mirror_index].page_count > 0;
> +			     sblocks_for_recheck[mirror_index].sector_count > 0;
>  			     mirror_index++) {
>  				if (!sblocks_for_recheck[mirror_index].
> -				    pagev[page_num]->io_error) {
> +				    sectorv[sector_num]->io_error) {
>  					sblock_other = sblocks_for_recheck +
>  						       mirror_index;
>  					break;
> @@ -1125,7 +1125,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  				sblock_other = sblock_bad;
>  
>  			if (scrub_write_page_to_dev_replace(sblock_other,
> -							    page_num) != 0) {
> +							    sector_num) != 0) {
>  				atomic64_inc(
>  					&fs_info->dev_replace.num_write_errors);
>  				success = 0;
> @@ -1133,7 +1133,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  		} else if (sblock_other) {
>  			ret = scrub_repair_page_from_good_copy(sblock_bad,
>  							       sblock_other,
> -							       page_num, 0);
> +							       sector_num, 0);
>  			if (0 == ret)
>  				spage_bad->io_error = 0;
>  			else
> @@ -1186,18 +1186,18 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  			struct scrub_block *sblock = sblocks_for_recheck +
>  						     mirror_index;
>  			struct scrub_recover *recover;
> -			int page_index;
> +			int sector_index;
>  
> -			for (page_index = 0; page_index < sblock->page_count;
> -			     page_index++) {
> -				sblock->pagev[page_index]->sblock = NULL;
> -				recover = sblock->pagev[page_index]->recover;
> +			for (sector_index = 0; sector_index < sblock->sector_count;
> +			     sector_index++) {
> +				sblock->sectorv[sector_index]->sblock = NULL;
> +				recover = sblock->sectorv[sector_index]->recover;
>  				if (recover) {
>  					scrub_put_recover(fs_info, recover);
> -					sblock->pagev[page_index]->recover =
> +					sblock->sectorv[sector_index]->recover =
>  									NULL;
>  				}
> -				scrub_page_put(sblock->pagev[page_index]);
> +				scrub_page_put(sblock->sectorv[sector_index]);
>  			}
>  		}
>  		kfree(sblocks_for_recheck);
> @@ -1255,18 +1255,18 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  {
>  	struct scrub_ctx *sctx = original_sblock->sctx;
>  	struct btrfs_fs_info *fs_info = sctx->fs_info;
> -	u64 length = original_sblock->page_count * fs_info->sectorsize;
> -	u64 logical = original_sblock->pagev[0]->logical;
> -	u64 generation = original_sblock->pagev[0]->generation;
> -	u64 flags = original_sblock->pagev[0]->flags;
> -	u64 have_csum = original_sblock->pagev[0]->have_csum;
> +	u64 length = original_sblock->sector_count * fs_info->sectorsize;

						>> fs_info->sectorsize_bits

> +	u64 logical = original_sblock->sectorv[0]->logical;
> +	u64 generation = original_sblock->sectorv[0]->generation;
> +	u64 flags = original_sblock->sectorv[0]->flags;
> +	u64 have_csum = original_sblock->sectorv[0]->have_csum;
>  	struct scrub_recover *recover;
>  	struct btrfs_io_context *bioc;
>  	u64 sublen;
>  	u64 mapped_length;
>  	u64 stripe_offset;
>  	int stripe_index;
> -	int page_index = 0;
> +	int sector_index = 0;
>  	int mirror_index;
>  	int nmirrors;
>  	int ret;
> @@ -1306,7 +1306,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  		recover->bioc = bioc;
>  		recover->map_length = mapped_length;
>  
> -		ASSERT(page_index < SCRUB_MAX_PAGES_PER_BLOCK);
> +		ASSERT(sector_index < SCRUB_MAX_SECTORS_PER_BLOCK);
>  
>  		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
>  
> @@ -1328,7 +1328,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  				return -ENOMEM;
>  			}
>  			scrub_page_get(spage);
> -			sblock->pagev[page_index] = spage;
> +			sblock->sectorv[sector_index] = spage;
>  			spage->sblock = sblock;
>  			spage->flags = flags;
>  			spage->generation = generation;
> @@ -1336,7 +1336,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  			spage->have_csum = have_csum;
>  			if (have_csum)
>  				memcpy(spage->csum,
> -				       original_sblock->pagev[0]->csum,
> +				       original_sblock->sectorv[0]->csum,
>  				       sctx->fs_info->csum_size);
>  
>  			scrub_stripe_index_and_offset(logical,
> @@ -1352,13 +1352,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  					 stripe_offset;
>  			spage->dev = bioc->stripes[stripe_index].dev;
>  
> -			BUG_ON(page_index >= original_sblock->page_count);
> +			BUG_ON(sector_index >= original_sblock->sector_count);
>  			spage->physical_for_dev_replace =
> -				original_sblock->pagev[page_index]->
> +				original_sblock->sectorv[sector_index]->
>  				physical_for_dev_replace;
>  			/* for missing devices, dev->bdev is NULL */
>  			spage->mirror_num = mirror_index + 1;
> -			sblock->page_count++;
> +			sblock->sector_count++;
>  			spage->page = alloc_page(GFP_NOFS);
>  			if (!spage->page)
>  				goto leave_nomem;
> @@ -1369,7 +1369,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  		scrub_put_recover(fs_info, recover);
>  		length -= sublen;
>  		logical += sublen;
> -		page_index++;
> +		sector_index++;
>  	}
>  
>  	return 0;
> @@ -1392,7 +1392,7 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>  	bio->bi_private = &done;
>  	bio->bi_end_io = scrub_bio_wait_endio;
>  
> -	mirror_num = spage->sblock->pagev[0]->mirror_num;
> +	mirror_num = spage->sblock->sectorv[0]->mirror_num;
>  	ret = raid56_parity_recover(bio, spage->recover->bioc,
>  				    spage->recover->map_length,
>  				    mirror_num, 0);
> @@ -1406,9 +1406,9 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>  static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>  					  struct scrub_block *sblock)
>  {
> -	struct scrub_page *first_page = sblock->pagev[0];
> +	struct scrub_page *first_page = sblock->sectorv[0];
>  	struct bio *bio;
> -	int page_num;
> +	int sector_num;

Also 'i'

>  	/* All pages in sblock belong to the same stripe on the same device. */
>  	ASSERT(first_page->dev);
> @@ -1418,8 +1418,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>  	bio = btrfs_bio_alloc(BIO_MAX_VECS);
>  	bio_set_dev(bio, first_page->dev->bdev);
>  
> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
> -		struct scrub_page *spage = sblock->pagev[page_num];
> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
> +		struct scrub_page *spage = sblock->sectorv[sector_num];
>  
>  		WARN_ON(!spage->page);
>  		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
> @@ -1436,8 +1436,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>  
>  	return;
>  out:
> -	for (page_num = 0; page_num < sblock->page_count; page_num++)
> -		sblock->pagev[page_num]->io_error = 1;
> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++)
> +		sblock->sectorv[sector_num]->io_error = 1;
>  
>  	sblock->no_io_error_seen = 0;
>  }
> @@ -1453,17 +1453,17 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>  				struct scrub_block *sblock,
>  				int retry_failed_mirror)
>  {
> -	int page_num;
> +	int sector_num;

And here too

>  	sblock->no_io_error_seen = 1;
>  
>  	/* short cut for raid56 */
> -	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->pagev[0]))
> +	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->sectorv[0]))
>  		return scrub_recheck_block_on_raid56(fs_info, sblock);
>  
> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>  		struct bio *bio;
> -		struct scrub_page *spage = sblock->pagev[page_num];
> +		struct scrub_page *spage = sblock->sectorv[sector_num];
>  
>  		if (spage->dev->bdev == NULL) {
>  			spage->io_error = 1;
> @@ -1507,7 +1507,7 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
>  	sblock->checksum_error = 0;
>  	sblock->generation_error = 0;
>  
> -	if (sblock->pagev[0]->flags & BTRFS_EXTENT_FLAG_DATA)
> +	if (sblock->sectorv[0]->flags & BTRFS_EXTENT_FLAG_DATA)
>  		scrub_checksum_data(sblock);
>  	else
>  		scrub_checksum_tree_block(sblock);
> @@ -1516,15 +1516,15 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
>  static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>  					     struct scrub_block *sblock_good)
>  {
> -	int page_num;
> +	int sector_num;

i

>  	int ret = 0;
>  
> -	for (page_num = 0; page_num < sblock_bad->page_count; page_num++) {
> +	for (sector_num = 0; sector_num < sblock_bad->sector_count; sector_num++) {
>  		int ret_sub;
>  
>  		ret_sub = scrub_repair_page_from_good_copy(sblock_bad,
>  							   sblock_good,
> -							   page_num, 1);
> +							   sector_num, 1);
>  		if (ret_sub)
>  			ret = ret_sub;
>  	}
> @@ -1534,10 +1534,10 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>  
>  static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
>  					    struct scrub_block *sblock_good,
> -					    int page_num, int force_write)
> +					    int sector_num, int force_write)
>  {
> -	struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
> -	struct scrub_page *spage_good = sblock_good->pagev[page_num];
> +	struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
> +	struct scrub_page *spage_good = sblock_good->sectorv[sector_num];
>  	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
>  	const u32 sectorsize = fs_info->sectorsize;
>  
> @@ -1581,7 +1581,7 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
>  static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
>  {
>  	struct btrfs_fs_info *fs_info = sblock->sctx->fs_info;
> -	int page_num;
> +	int sector_num;

i

>  	/*
>  	 * This block is used for the check of the parity on the source device,
> @@ -1590,19 +1590,19 @@ static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
>  	if (sblock->sparity)
>  		return;
>  
> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>  		int ret;
>  
> -		ret = scrub_write_page_to_dev_replace(sblock, page_num);
> +		ret = scrub_write_page_to_dev_replace(sblock, sector_num);
>  		if (ret)
>  			atomic64_inc(&fs_info->dev_replace.num_write_errors);
>  	}
>  }
>  
>  static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
> -					   int page_num)
> +					   int sector_num)
>  {
> -	struct scrub_page *spage = sblock->pagev[page_num];
> +	struct scrub_page *spage = sblock->sectorv[sector_num];
>  
>  	BUG_ON(spage->page == NULL);
>  	if (spage->io_error)
> @@ -1786,8 +1786,8 @@ static int scrub_checksum(struct scrub_block *sblock)
>  	sblock->generation_error = 0;
>  	sblock->checksum_error = 0;
>  
> -	WARN_ON(sblock->page_count < 1);
> -	flags = sblock->pagev[0]->flags;
> +	WARN_ON(sblock->sector_count < 1);
> +	flags = sblock->sectorv[0]->flags;
>  	ret = 0;
>  	if (flags & BTRFS_EXTENT_FLAG_DATA)
>  		ret = scrub_checksum_data(sblock);
> @@ -1812,8 +1812,8 @@ static int scrub_checksum_data(struct scrub_block *sblock)
>  	struct scrub_page *spage;
>  	char *kaddr;
>  
> -	BUG_ON(sblock->page_count < 1);
> -	spage = sblock->pagev[0];
> +	BUG_ON(sblock->sector_count < 1);
> +	spage = sblock->sectorv[0];
>  	if (!spage->have_csum)
>  		return 0;
>  
> @@ -1852,12 +1852,12 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>  	struct scrub_page *spage;
>  	char *kaddr;
>  
> -	BUG_ON(sblock->page_count < 1);
> +	BUG_ON(sblock->sector_count < 1);
>  
> -	/* Each member in pagev is just one block, not a full page */
> -	ASSERT(sblock->page_count == num_sectors);
> +	/* Each member in pagev is just one sector , not a full page */
> +	ASSERT(sblock->sector_count == num_sectors);
>  
> -	spage = sblock->pagev[0];
> +	spage = sblock->sectorv[0];
>  	kaddr = page_address(spage->page);
>  	h = (struct btrfs_header *)kaddr;
>  	memcpy(on_disk_csum, h->csum, sctx->fs_info->csum_size);
> @@ -1888,7 +1888,7 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>  			    sectorsize - BTRFS_CSUM_SIZE);
>  
>  	for (i = 1; i < num_sectors; i++) {
> -		kaddr = page_address(sblock->pagev[i]->page);
> +		kaddr = page_address(sblock->sectorv[i]->page);
>  		crypto_shash_update(shash, kaddr, sectorsize);
>  	}
>  
> @@ -1911,8 +1911,8 @@ static int scrub_checksum_super(struct scrub_block *sblock)
>  	int fail_gen = 0;
>  	int fail_cor = 0;
>  
> -	BUG_ON(sblock->page_count < 1);
> -	spage = sblock->pagev[0];
> +	BUG_ON(sblock->sector_count < 1);
> +	spage = sblock->sectorv[0];
>  	kaddr = page_address(spage->page);
>  	s = (struct btrfs_super_block *)kaddr;
>  
> @@ -1966,8 +1966,8 @@ static void scrub_block_put(struct scrub_block *sblock)
>  		if (sblock->sparity)
>  			scrub_parity_put(sblock->sparity);
>  
> -		for (i = 0; i < sblock->page_count; i++)
> -			scrub_page_put(sblock->pagev[i]);
> +		for (i = 0; i < sblock->sector_count; i++)
> +			scrub_page_put(sblock->sectorv[i]);
>  		kfree(sblock);
>  	}
>  }
> @@ -2155,8 +2155,8 @@ static void scrub_missing_raid56_worker(struct btrfs_work *work)
>  	u64 logical;
>  	struct btrfs_device *dev;
>  
> -	logical = sblock->pagev[0]->logical;
> -	dev = sblock->pagev[0]->dev;
> +	logical = sblock->sectorv[0]->logical;
> +	dev = sblock->sectorv[0]->dev;
>  
>  	if (sblock->no_io_error_seen)
>  		scrub_recheck_block_checksum(sblock);
> @@ -2193,8 +2193,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>  {
>  	struct scrub_ctx *sctx = sblock->sctx;
>  	struct btrfs_fs_info *fs_info = sctx->fs_info;
> -	u64 length = sblock->page_count * PAGE_SIZE;
> -	u64 logical = sblock->pagev[0]->logical;
> +	u64 length = sblock->sector_count * fs_info->sectorsize;
> +	u64 logical = sblock->sectorv[0]->logical;
>  	struct btrfs_io_context *bioc = NULL;
>  	struct bio *bio;
>  	struct btrfs_raid_bio *rbio;
> @@ -2227,8 +2227,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>  	if (!rbio)
>  		goto rbio_out;
>  
> -	for (i = 0; i < sblock->page_count; i++) {
> -		struct scrub_page *spage = sblock->pagev[i];
> +	for (i = 0; i < sblock->sector_count; i++) {
> +		struct scrub_page *spage = sblock->sectorv[i];
>  
>  		raid56_add_scrub_pages(rbio, spage->page, spage->logical);
>  	}
> @@ -2290,9 +2290,9 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  			scrub_block_put(sblock);
>  			return -ENOMEM;
>  		}
> -		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
> +		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
>  		scrub_page_get(spage);
> -		sblock->pagev[index] = spage;
> +		sblock->sectorv[index] = spage;
>  		spage->sblock = sblock;
>  		spage->dev = dev;
>  		spage->flags = flags;
> @@ -2307,7 +2307,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		} else {
>  			spage->have_csum = 0;
>  		}
> -		sblock->page_count++;
> +		sblock->sector_count++;
>  		spage->page = alloc_page(GFP_KERNEL);
>  		if (!spage->page)
>  			goto leave_nomem;
> @@ -2317,7 +2317,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		physical_for_dev_replace += l;
>  	}
>  
> -	WARN_ON(sblock->page_count == 0);
> +	WARN_ON(sblock->sector_count == 0);
>  	if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state)) {
>  		/*
>  		 * This case should only be hit for RAID 5/6 device replace. See
> @@ -2325,8 +2325,8 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		 */
>  		scrub_missing_raid56_pages(sblock);
>  	} else {
> -		for (index = 0; index < sblock->page_count; index++) {
> -			struct scrub_page *spage = sblock->pagev[index];
> +		for (index = 0; index < sblock->sector_count; index++) {
> +			struct scrub_page *spage = sblock->sectorv[index];
>  			int ret;
>  
>  			ret = scrub_add_page_to_rd_bio(sctx, spage);
> @@ -2456,8 +2456,8 @@ static void scrub_block_complete(struct scrub_block *sblock)
>  	}
>  
>  	if (sblock->sparity && corrupted && !sblock->data_corrected) {
> -		u64 start = sblock->pagev[0]->logical;
> -		u64 end = sblock->pagev[sblock->page_count - 1]->logical +
> +		u64 start = sblock->sectorv[0]->logical;
> +		u64 end = sblock->sectorv[sblock->sector_count - 1]->logical +
>  			  sblock->sctx->fs_info->sectorsize;
>  
>  		ASSERT(end - start <= U32_MAX);
> @@ -2624,10 +2624,10 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  			scrub_block_put(sblock);
>  			return -ENOMEM;
>  		}
> -		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
> +		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
>  		/* For scrub block */
>  		scrub_page_get(spage);
> -		sblock->pagev[index] = spage;
> +		sblock->sectorv[index] = spage;
>  		/* For scrub parity */
>  		scrub_page_get(spage);
>  		list_add_tail(&spage->list, &sparity->spages);
> @@ -2644,7 +2644,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  		} else {
>  			spage->have_csum = 0;
>  		}
> -		sblock->page_count++;
> +		sblock->sector_count++;
>  		spage->page = alloc_page(GFP_KERNEL);
>  		if (!spage->page)
>  			goto leave_nomem;
> @@ -2656,9 +2656,9 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  		physical += sectorsize;
>  	}
>  
> -	WARN_ON(sblock->page_count == 0);
> -	for (index = 0; index < sblock->page_count; index++) {
> -		struct scrub_page *spage = sblock->pagev[index];
> +	WARN_ON(sblock->sector_count == 0);
> +	for (index = 0; index < sblock->sector_count; index++) {
> +		struct scrub_page *spage = sblock->sectorv[index];
>  		int ret;
>  
>  		ret = scrub_add_page_to_rd_bio(sctx, spage);
> @@ -4058,18 +4058,18 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>  	}
>  
>  	if (fs_info->nodesize >
> -	    PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK ||
> -	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK) {
> +	    SCRUB_MAX_SECTORS_PER_BLOCK * fs_info->sectorsize ||
> +	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_SECTORS_PER_BLOCK) {
>  		/*
>  		 * would exhaust the array bounds of pagev member in
>  		 * struct scrub_block
>  		 */
>  		btrfs_err(fs_info,
> -			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_PAGES_PER_BLOCK (%d <= %d && %d <= %d) fails",
> +			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_SECTORS_PER_BLOCK (%d <= %d && %d <= %d) fails",
>  		       fs_info->nodesize,
> -		       SCRUB_MAX_PAGES_PER_BLOCK,
> +		       SCRUB_MAX_SECTORS_PER_BLOCK,
>  		       fs_info->sectorsize,
> -		       SCRUB_MAX_PAGES_PER_BLOCK);
> +		       SCRUB_MAX_SECTORS_PER_BLOCK);
>  		return -EINVAL;
>  	}
>  
> -- 
> 2.35.1

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector
  2022-03-11  7:34 ` [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector Qu Wenruo
@ 2022-03-11 18:01   ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2022-03-11 18:01 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Fri, Mar 11, 2022 at 03:34:19PM +0800, Qu Wenruo wrote:
> Since the introduction of subpage support in scrub, a scrub_page in
> fact represents just one sector.
> 
> Thus the name scrub_page is no longer correct, rename it to
> scrub_sector.
> 
> This also renames related short names such as spage -> ssector, and
> the functions which take a scrub_page as argument.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/scrub.c | 460 +++++++++++++++++++++++------------------------
>  1 file changed, 230 insertions(+), 230 deletions(-)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index fd67e1acdba6..c9198c9af4c4 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -60,7 +60,7 @@ struct scrub_recover {
>  	u64			map_length;
>  };
>  
> -struct scrub_page {
> +struct scrub_sector {
>  	struct scrub_block	*sblock;
>  	struct page		*page;
>  	struct btrfs_device	*dev;
> @@ -87,16 +87,16 @@ struct scrub_bio {
>  	blk_status_t		status;
>  	u64			logical;
>  	u64			physical;
> -	struct scrub_page	*pagev[SCRUB_PAGES_PER_BIO];
> +	struct scrub_sector	*pagev[SCRUB_PAGES_PER_BIO];
>  	int			page_count;
>  	int			next_free;
>  	struct btrfs_work	work;
>  };
>  
>  struct scrub_block {
> -	struct scrub_page	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
> +	struct scrub_sector	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
>  	int			sector_count;
> -	atomic_t		outstanding_pages;
> +	atomic_t		outstanding_sectors;
>  	refcount_t		refs; /* free mem on transition to zero */
>  	struct scrub_ctx	*sctx;
>  	struct scrub_parity	*sparity;
> @@ -129,7 +129,7 @@ struct scrub_parity {
>  
>  	refcount_t		refs;
>  
> -	struct list_head	spages;
> +	struct list_head	ssectors;

This is also an opportunity to rename it to something more suitable;
I really don't like ssectors. We've been using 'list', or you may find
something better.

>  	/* Work of parity check and repair */
>  	struct btrfs_work	work;
> @@ -212,24 +212,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>  static void scrub_recheck_block_checksum(struct scrub_block *sblock);
>  static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>  					     struct scrub_block *sblock_good);
> -static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
> +static int scrub_repair_sector_from_good_copy(struct scrub_block *sblock_bad,
>  					    struct scrub_block *sblock_good,
> -					    int page_num, int force_write);
> +					    int sector_num, int force_write);
>  static void scrub_write_block_to_dev_replace(struct scrub_block *sblock);
> -static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
> -					   int page_num);
> +static int scrub_write_sector_to_dev_replace(struct scrub_block *sblock,
> +					     int sector_num);
>  static int scrub_checksum_data(struct scrub_block *sblock);
>  static int scrub_checksum_tree_block(struct scrub_block *sblock);
>  static int scrub_checksum_super(struct scrub_block *sblock);
>  static void scrub_block_put(struct scrub_block *sblock);
> -static void scrub_page_get(struct scrub_page *spage);
> -static void scrub_page_put(struct scrub_page *spage);
> +static void scrub_sector_get(struct scrub_sector *ssector);
> +static void scrub_sector_put(struct scrub_sector *ssector);
>  static void scrub_parity_get(struct scrub_parity *sparity);
>  static void scrub_parity_put(struct scrub_parity *sparity);
> -static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
> -		       u64 physical, struct btrfs_device *dev, u64 flags,
> -		       u64 gen, int mirror_num, u8 *csum,
> -		       u64 physical_for_dev_replace);
> +static int scrub_sectors(struct scrub_ctx *sctx, u64 logical, u32 len,
> +			 u64 physical, struct btrfs_device *dev, u64 flags,
> +			 u64 gen, int mirror_num, u8 *csum,
> +			 u64 physical_for_dev_replace);
>  static void scrub_bio_end_io(struct bio *bio);
>  static void scrub_bio_end_io_worker(struct btrfs_work *work);
>  static void scrub_block_complete(struct scrub_block *sblock);
> @@ -238,17 +238,17 @@ static void scrub_remap_extent(struct btrfs_fs_info *fs_info,
>  			       u64 *extent_physical,
>  			       struct btrfs_device **extent_dev,
>  			       int *extent_mirror_num);
> -static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
> -				    struct scrub_page *spage);
> +static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
> +				      struct scrub_sector *ssector);
>  static void scrub_wr_submit(struct scrub_ctx *sctx);
>  static void scrub_wr_bio_end_io(struct bio *bio);
>  static void scrub_wr_bio_end_io_worker(struct btrfs_work *work);
>  static void scrub_put_ctx(struct scrub_ctx *sctx);
>  
> -static inline int scrub_is_page_on_raid56(struct scrub_page *spage)
> +static inline int scrub_is_page_on_raid56(struct scrub_sector *ssector)
>  {
> -	return spage->recover &&
> -	       (spage->recover->bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
> +	return ssector->recover &&
> +	       (ssector->recover->bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
>  }
>  
>  static void scrub_pending_bio_inc(struct scrub_ctx *sctx)
> @@ -798,8 +798,8 @@ static inline void scrub_put_recover(struct btrfs_fs_info *fs_info,
>  
>  /*
>   * scrub_handle_errored_block gets called when either verification of the
> - * pages failed or the bio failed to read, e.g. with EIO. In the latter
> - * case, this function handles all pages in the bio, even though only one
> + * sectors failed or the bio failed to read, e.g. with EIO. In the latter
> + * case, this function handles all sectors in the bio, even though only one
>   * may be bad.
>   * The goal of this function is to repair the errored block by using the
>   * contents of one of the mirrors.
> @@ -854,7 +854,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	 * might be waiting the scrub task to pause (which needs to wait for all
>  	 * the worker tasks to complete before pausing).
>  	 * We do allocations in the workers through insert_full_stripe_lock()
> -	 * and scrub_add_page_to_wr_bio(), which happens down the call chain of
> +	 * and scrub_add_sector_to_wr_bio(), which happens down the call chain of
>  	 * this function.
>  	 */
>  	nofs_flag = memalloc_nofs_save();
> @@ -918,7 +918,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  		goto out;
>  	}
>  
> -	/* setup the context, map the logical blocks and alloc the pages */
> +	/* setup the context, map the logical blocks and alloc the sectors */
>  	ret = scrub_setup_recheck_block(sblock_to_check, sblocks_for_recheck);
>  	if (ret) {
>  		spin_lock(&sctx->stat_lock);
> @@ -937,7 +937,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	if (!sblock_bad->header_error && !sblock_bad->checksum_error &&
>  	    sblock_bad->no_io_error_seen) {
>  		/*
> -		 * the error disappeared after reading page by page, or
> +		 * the error disappeared after reading sector by sector, or
>  		 * the area was part of a huge bio and other parts of the
>  		 * bio caused I/O errors, or the block layer merged several
>  		 * read requests into one and the error is caused by a
> @@ -998,10 +998,10 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	 * that is known to contain an error is rewritten. Afterwards
>  	 * the block is known to be corrected.
>  	 * If a mirror is found which is completely correct, and no
> -	 * checksum is present, only those pages are rewritten that had
> +	 * checksum is present, only those sectors are rewritten that had
>  	 * an I/O error in the block to be repaired, since it cannot be
> -	 * determined, which copy of the other pages is better (and it
> -	 * could happen otherwise that a correct page would be
> +	 * determined, which copy of the other sectors is better (and it
> +	 * could happen otherwise that a correct sector would be
>  	 * overwritten by a bad one).
>  	 */
>  	for (mirror_index = 0; ;mirror_index++) {
> @@ -1080,11 +1080,11 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	success = 1;
>  	for (sector_num = 0; sector_num < sblock_bad->sector_count;
>  	     sector_num++) {
> -		struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
> +		struct scrub_sector *ssector_bad = sblock_bad->sectorv[sector_num];
>  		struct scrub_block *sblock_other = NULL;
>  
> -		/* skip no-io-error page in scrub */
> -		if (!spage_bad->io_error && !sctx->is_dev_replace)
> +		/* skip no-io-error sectors in scrub */

Comments that get updated should also follow the preferred style, i.e.
start with an uppercase letter.
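I.e. the updated comment above would then read:

```c
/* Skip no-io-error sectors in scrub */
```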

> +		if (!ssector_bad->io_error && !sctx->is_dev_replace)
>  			continue;
>  
>  		if (scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
> @@ -1096,8 +1096,8 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  			 * sblock_for_recheck array to target device.
>  			 */
>  			sblock_other = NULL;
> -		} else if (spage_bad->io_error) {
> -			/* try to find no-io-error page in mirrors */
> +		} else if (ssector_bad->io_error) {
> +			/* try to find no-io-error sector in mirrors */
>  			for (mirror_index = 0;
>  			     mirror_index < BTRFS_MAX_MIRRORS &&
>  			     sblocks_for_recheck[mirror_index].sector_count > 0;
> @@ -1115,27 +1115,27 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  
>  		if (sctx->is_dev_replace) {
>  			/*
> -			 * did not find a mirror to fetch the page
> -			 * from. scrub_write_page_to_dev_replace()
> -			 * handles this case (page->io_error), by
> +			 * Did not find a mirror to fetch the sector
> +			 * from. scrub_write_sector_to_dev_replace()
> +			 * handles this case (sector->io_error), by
>  			 * filling the block with zeros before
>  			 * submitting the write request
>  			 */
>  			if (!sblock_other)
>  				sblock_other = sblock_bad;
>  
> -			if (scrub_write_page_to_dev_replace(sblock_other,
> -							    sector_num) != 0) {
> +			if (scrub_write_sector_to_dev_replace(sblock_other,
> +							      sector_num) != 0) {
>  				atomic64_inc(
>  					&fs_info->dev_replace.num_write_errors);
>  				success = 0;
>  			}
>  		} else if (sblock_other) {
> -			ret = scrub_repair_page_from_good_copy(sblock_bad,
> -							       sblock_other,
> -							       sector_num, 0);
> +			ret = scrub_repair_sector_from_good_copy(sblock_bad,
> +								 sblock_other,
> +								 sector_num, 0);
>  			if (0 == ret)
> -				spage_bad->io_error = 0;
> +				ssector_bad->io_error = 0;
>  			else
>  				success = 0;
>  		}
> @@ -1197,7 +1197,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  					sblock->sectorv[sector_index]->recover =
>  									NULL;
>  				}
> -				scrub_page_put(sblock->sectorv[sector_index]);
> +				scrub_sector_put(sblock->sectorv[sector_index]);
>  			}
>  		}
>  		kfree(sblocks_for_recheck);
> @@ -1272,7 +1272,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  	int ret;
>  
>  	/*
> -	 * note: the two members refs and outstanding_pages
> +	 * note: the two members refs and outstanding_sectors
>  	 * are not used (and not set) in the blocks that are used for
>  	 * the recheck procedure
>  	 */
> @@ -1313,13 +1313,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  		for (mirror_index = 0; mirror_index < nmirrors;
>  		     mirror_index++) {
>  			struct scrub_block *sblock;
> -			struct scrub_page *spage;
> +			struct scrub_sector *ssector;
>  
>  			sblock = sblocks_for_recheck + mirror_index;
>  			sblock->sctx = sctx;
>  
> -			spage = kzalloc(sizeof(*spage), GFP_NOFS);
> -			if (!spage) {
> +			ssector = kzalloc(sizeof(*ssector), GFP_NOFS);
> +			if (!ssector) {
>  leave_nomem:
>  				spin_lock(&sctx->stat_lock);
>  				sctx->stat.malloc_errors++;
> @@ -1327,15 +1327,15 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  				scrub_put_recover(fs_info, recover);
>  				return -ENOMEM;
>  			}
> -			scrub_page_get(spage);
> -			sblock->sectorv[sector_index] = spage;
> -			spage->sblock = sblock;
> -			spage->flags = flags;
> -			spage->generation = generation;
> -			spage->logical = logical;
> -			spage->have_csum = have_csum;
> +			scrub_sector_get(ssector);
> +			sblock->sectorv[sector_index] = ssector;
> +			ssector->sblock = sblock;
> +			ssector->flags = flags;
> +			ssector->generation = generation;
> +			ssector->logical = logical;
> +			ssector->have_csum = have_csum;
>  			if (have_csum)
> -				memcpy(spage->csum,
> +				memcpy(ssector->csum,
>  				       original_sblock->sectorv[0]->csum,
>  				       sctx->fs_info->csum_size);
>  
> @@ -1348,23 +1348,23 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>  						      mirror_index,
>  						      &stripe_index,
>  						      &stripe_offset);
> -			spage->physical = bioc->stripes[stripe_index].physical +
> +			ssector->physical = bioc->stripes[stripe_index].physical +
>  					 stripe_offset;
> -			spage->dev = bioc->stripes[stripe_index].dev;
> +			ssector->dev = bioc->stripes[stripe_index].dev;
>  
>  			BUG_ON(sector_index >= original_sblock->sector_count);
> -			spage->physical_for_dev_replace =
> +			ssector->physical_for_dev_replace =
>  				original_sblock->sectorv[sector_index]->
>  				physical_for_dev_replace;
>  			/* for missing devices, dev->bdev is NULL */
> -			spage->mirror_num = mirror_index + 1;
> +			ssector->mirror_num = mirror_index + 1;
>  			sblock->sector_count++;
> -			spage->page = alloc_page(GFP_NOFS);
> -			if (!spage->page)
> +			ssector->page = alloc_page(GFP_NOFS);
> +			if (!ssector->page)
>  				goto leave_nomem;
>  
>  			scrub_get_recover(recover);
> -			spage->recover = recover;
> +			ssector->recover = recover;
>  		}
>  		scrub_put_recover(fs_info, recover);
>  		length -= sublen;
> @@ -1382,19 +1382,19 @@ static void scrub_bio_wait_endio(struct bio *bio)
>  
>  static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>  					struct bio *bio,
> -					struct scrub_page *spage)
> +					struct scrub_sector *ssector)
>  {
>  	DECLARE_COMPLETION_ONSTACK(done);
>  	int ret;
>  	int mirror_num;
>  
> -	bio->bi_iter.bi_sector = spage->logical >> 9;
> +	bio->bi_iter.bi_sector = ssector->logical >> 9;
>  	bio->bi_private = &done;
>  	bio->bi_end_io = scrub_bio_wait_endio;
>  
> -	mirror_num = spage->sblock->sectorv[0]->mirror_num;
> -	ret = raid56_parity_recover(bio, spage->recover->bioc,
> -				    spage->recover->map_length,
> +	mirror_num = ssector->sblock->sectorv[0]->mirror_num;
> +	ret = raid56_parity_recover(bio, ssector->recover->bioc,
> +				    ssector->recover->map_length,
>  				    mirror_num, 0);
>  	if (ret)
>  		return ret;
> @@ -1406,26 +1406,26 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>  static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>  					  struct scrub_block *sblock)
>  {
> -	struct scrub_page *first_page = sblock->sectorv[0];
> +	struct scrub_sector *first_sector = sblock->sectorv[0];
>  	struct bio *bio;
>  	int sector_num;
>  
> -	/* All pages in sblock belong to the same stripe on the same device. */
> -	ASSERT(first_page->dev);
> -	if (!first_page->dev->bdev)
> +	/* All sectors in sblock belong to the same stripe on the same device. */
> +	ASSERT(first_sector->dev);
> +	if (!first_sector->dev->bdev)
>  		goto out;
>  
>  	bio = btrfs_bio_alloc(BIO_MAX_VECS);
> -	bio_set_dev(bio, first_page->dev->bdev);
> +	bio_set_dev(bio, first_sector->dev->bdev);
>  
>  	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
> -		struct scrub_page *spage = sblock->sectorv[sector_num];
> +		struct scrub_sector *ssector = sblock->sectorv[sector_num];
>  
> -		WARN_ON(!spage->page);
> -		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
> +		WARN_ON(!ssector->page);
> +		bio_add_page(bio, ssector->page, PAGE_SIZE, 0);
>  	}
>  
> -	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) {
> +	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_sector)) {
>  		bio_put(bio);
>  		goto out;
>  	}
> @@ -1444,10 +1444,10 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>  
>  /*
>   * this function will check the on disk data for checksum errors, header
> - * errors and read I/O errors. If any I/O errors happen, the exact pages
> + * errors and read I/O errors. If any I/O errors happen, the exact sectors
>   * which are errored are marked as being bad. The goal is to enable scrub
> - * to take those pages that are not errored from all the mirrors so that
> - * the pages that are errored in the just handled mirror can be repaired.
> + * to take those sectors that are not errored from all the mirrors so that
> + * the sectors that are errored in the just handled mirror can be repaired.
>   */
>  static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>  				struct scrub_block *sblock,
> @@ -1463,24 +1463,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>  
>  	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>  		struct bio *bio;
> -		struct scrub_page *spage = sblock->sectorv[sector_num];
> +		struct scrub_sector *ssector = sblock->sectorv[sector_num];
>  
> -		if (spage->dev->bdev == NULL) {
> -			spage->io_error = 1;
> +		if (ssector->dev->bdev == NULL) {
> +			ssector->io_error = 1;
>  			sblock->no_io_error_seen = 0;
>  			continue;
>  		}
>  
> -		WARN_ON(!spage->page);
> +		WARN_ON(!ssector->page);
>  		bio = btrfs_bio_alloc(1);
> -		bio_set_dev(bio, spage->dev->bdev);
> +		bio_set_dev(bio, ssector->dev->bdev);
>  
> -		bio_add_page(bio, spage->page, fs_info->sectorsize, 0);
> -		bio->bi_iter.bi_sector = spage->physical >> 9;
> +		bio_add_page(bio, ssector->page, fs_info->sectorsize, 0);
> +		bio->bi_iter.bi_sector = ssector->physical >> 9;
>  		bio->bi_opf = REQ_OP_READ;
>  
>  		if (btrfsic_submit_bio_wait(bio)) {
> -			spage->io_error = 1;
> +			ssector->io_error = 1;
>  			sblock->no_io_error_seen = 0;
>  		}
>  
> @@ -1492,9 +1492,9 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>  }
>  
>  static inline int scrub_check_fsid(u8 fsid[],
> -				   struct scrub_page *spage)
> +				   struct scrub_sector *ssector)
>  {
> -	struct btrfs_fs_devices *fs_devices = spage->dev->fs_devices;
> +	struct btrfs_fs_devices *fs_devices = ssector->dev->fs_devices;
>  	int ret;
>  
>  	ret = memcmp(fsid, fs_devices->fsid, BTRFS_FSID_SIZE);
> @@ -1522,9 +1522,9 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>  	for (sector_num = 0; sector_num < sblock_bad->sector_count; sector_num++) {
>  		int ret_sub;
>  
> -		ret_sub = scrub_repair_page_from_good_copy(sblock_bad,
> -							   sblock_good,
> -							   sector_num, 1);
> +		ret_sub = scrub_repair_sector_from_good_copy(sblock_bad,
> +							     sblock_good,
> +							     sector_num, 1);
>  		if (ret_sub)
>  			ret = ret_sub;
>  	}
> @@ -1532,41 +1532,41 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>  	return ret;
>  }
>  
> -static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
> -					    struct scrub_block *sblock_good,
> -					    int sector_num, int force_write)
> +static int scrub_repair_sector_from_good_copy(struct scrub_block *sblock_bad,
> +					      struct scrub_block *sblock_good,
> +					      int sector_num, int force_write)
>  {
> -	struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
> -	struct scrub_page *spage_good = sblock_good->sectorv[sector_num];
> +	struct scrub_sector *ssector_bad = sblock_bad->sectorv[sector_num];
> +	struct scrub_sector *ssector_good = sblock_good->sectorv[sector_num];
>  	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
>  	const u32 sectorsize = fs_info->sectorsize;
>  
> -	BUG_ON(spage_bad->page == NULL);
> -	BUG_ON(spage_good->page == NULL);
> +	BUG_ON(ssector_bad->page == NULL);
> +	BUG_ON(ssector_good->page == NULL);
>  	if (force_write || sblock_bad->header_error ||
> -	    sblock_bad->checksum_error || spage_bad->io_error) {
> +	    sblock_bad->checksum_error || ssector_bad->io_error) {
>  		struct bio *bio;
>  		int ret;
>  
> -		if (!spage_bad->dev->bdev) {
> +		if (!ssector_bad->dev->bdev) {
>  			btrfs_warn_rl(fs_info,
>  				"scrub_repair_page_from_good_copy(bdev == NULL) is unexpected");
>  			return -EIO;
>  		}
>  
>  		bio = btrfs_bio_alloc(1);
> -		bio_set_dev(bio, spage_bad->dev->bdev);
> -		bio->bi_iter.bi_sector = spage_bad->physical >> 9;
> +		bio_set_dev(bio, ssector_bad->dev->bdev);
> +		bio->bi_iter.bi_sector = ssector_bad->physical >> 9;
>  		bio->bi_opf = REQ_OP_WRITE;
>  
> -		ret = bio_add_page(bio, spage_good->page, sectorsize, 0);
> +		ret = bio_add_page(bio, ssector_good->page, sectorsize, 0);
>  		if (ret != sectorsize) {
>  			bio_put(bio);
>  			return -EIO;
>  		}
>  
>  		if (btrfsic_submit_bio_wait(bio)) {
> -			btrfs_dev_stat_inc_and_print(spage_bad->dev,
> +			btrfs_dev_stat_inc_and_print(ssector_bad->dev,
>  				BTRFS_DEV_STAT_WRITE_ERRS);
>  			atomic64_inc(&fs_info->dev_replace.num_write_errors);
>  			bio_put(bio);
> @@ -1593,22 +1593,22 @@ static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
>  	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>  		int ret;
>  
> -		ret = scrub_write_page_to_dev_replace(sblock, sector_num);
> +		ret = scrub_write_sector_to_dev_replace(sblock, sector_num);
>  		if (ret)
>  			atomic64_inc(&fs_info->dev_replace.num_write_errors);
>  	}
>  }
>  
> -static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
> -					   int sector_num)
> +static int scrub_write_sector_to_dev_replace(struct scrub_block *sblock,
> +					     int sector_num)
>  {
> -	struct scrub_page *spage = sblock->sectorv[sector_num];
> +	struct scrub_sector *ssector = sblock->sectorv[sector_num];
>  
> -	BUG_ON(spage->page == NULL);
> -	if (spage->io_error)
> -		clear_page(page_address(spage->page));
> +	BUG_ON(ssector->page == NULL);
> +	if (ssector->io_error)
> +		clear_page(page_address(ssector->page));
>  
> -	return scrub_add_page_to_wr_bio(sblock->sctx, spage);
> +	return scrub_add_sector_to_wr_bio(sblock->sctx, ssector);
>  }
>  
>  static int fill_writer_pointer_gap(struct scrub_ctx *sctx, u64 physical)
> @@ -1633,8 +1633,8 @@ static int fill_writer_pointer_gap(struct scrub_ctx *sctx, u64 physical)
>  	return ret;
>  }
>  
> -static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
> -				    struct scrub_page *spage)
> +static int scrub_add_sector_to_wr_bio(struct scrub_ctx *sctx,
> +				    struct scrub_sector *ssector)
>  {
>  	struct scrub_bio *sbio;
>  	int ret;
> @@ -1657,14 +1657,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
>  		struct bio *bio;
>  
>  		ret = fill_writer_pointer_gap(sctx,
> -					      spage->physical_for_dev_replace);
> +					      ssector->physical_for_dev_replace);
>  		if (ret) {
>  			mutex_unlock(&sctx->wr_lock);
>  			return ret;
>  		}
>  
> -		sbio->physical = spage->physical_for_dev_replace;
> -		sbio->logical = spage->logical;
> +		sbio->physical = ssector->physical_for_dev_replace;
> +		sbio->logical = ssector->logical;
>  		sbio->dev = sctx->wr_tgtdev;
>  		bio = sbio->bio;
>  		if (!bio) {
> @@ -1679,14 +1679,14 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
>  		bio->bi_opf = REQ_OP_WRITE;
>  		sbio->status = 0;
>  	} else if (sbio->physical + sbio->page_count * sectorsize !=
> -		   spage->physical_for_dev_replace ||
> +		   ssector->physical_for_dev_replace ||
>  		   sbio->logical + sbio->page_count * sectorsize !=
> -		   spage->logical) {
> +		   ssector->logical) {
>  		scrub_wr_submit(sctx);
>  		goto again;
>  	}
>  
> -	ret = bio_add_page(sbio->bio, spage->page, sectorsize, 0);
> +	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
>  	if (ret != sectorsize) {
>  		if (sbio->page_count < 1) {
>  			bio_put(sbio->bio);
> @@ -1698,8 +1698,8 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
>  		goto again;
>  	}
>  
> -	sbio->pagev[sbio->page_count] = spage;
> -	scrub_page_get(spage);
> +	sbio->pagev[sbio->page_count] = ssector;
> +	scrub_sector_get(ssector);
>  	sbio->page_count++;
>  	if (sbio->page_count == sctx->pages_per_bio)
>  		scrub_wr_submit(sctx);
> @@ -1754,15 +1754,15 @@ static void scrub_wr_bio_end_io_worker(struct btrfs_work *work)
>  			&sbio->sctx->fs_info->dev_replace;
>  
>  		for (i = 0; i < sbio->page_count; i++) {
> -			struct scrub_page *spage = sbio->pagev[i];
> +			struct scrub_sector *ssector = sbio->pagev[i];
>  
> -			spage->io_error = 1;
> +			ssector->io_error = 1;
>  			atomic64_inc(&dev_replace->num_write_errors);
>  		}
>  	}
>  
>  	for (i = 0; i < sbio->page_count; i++)
> -		scrub_page_put(sbio->pagev[i]);
> +		scrub_sector_put(sbio->pagev[i]);
>  
>  	bio_put(sbio->bio);
>  	kfree(sbio);
> @@ -1809,26 +1809,26 @@ static int scrub_checksum_data(struct scrub_block *sblock)
>  	struct btrfs_fs_info *fs_info = sctx->fs_info;
>  	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
>  	u8 csum[BTRFS_CSUM_SIZE];
> -	struct scrub_page *spage;
> +	struct scrub_sector *ssector;
>  	char *kaddr;
>  
>  	BUG_ON(sblock->sector_count < 1);
> -	spage = sblock->sectorv[0];
> -	if (!spage->have_csum)
> +	ssector = sblock->sectorv[0];
> +	if (!ssector->have_csum)
>  		return 0;
>  
> -	kaddr = page_address(spage->page);
> +	kaddr = page_address(ssector->page);
>  
>  	shash->tfm = fs_info->csum_shash;
>  	crypto_shash_init(shash);
>  
>  	/*
> -	 * In scrub_pages() and scrub_pages_for_parity() we ensure each spage
> +	 * In scrub_sectors() and scrub_sectors_for_parity() we ensure each ssector
>  	 * only contains one sector of data.
>  	 */
>  	crypto_shash_digest(shash, kaddr, fs_info->sectorsize, csum);
>  
> -	if (memcmp(csum, spage->csum, fs_info->csum_size))
> +	if (memcmp(csum, ssector->csum, fs_info->csum_size))
>  		sblock->checksum_error = 1;
>  	return sblock->checksum_error;
>  }
> @@ -1849,16 +1849,16 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>  	const u32 sectorsize = sctx->fs_info->sectorsize;
>  	const int num_sectors = fs_info->nodesize >> fs_info->sectorsize_bits;
>  	int i;
> -	struct scrub_page *spage;
> +	struct scrub_sector *ssector;
>  	char *kaddr;
>  
>  	BUG_ON(sblock->sector_count < 1);
>  
> -	/* Each member in pagev is just one sector , not a full page */
> +	/* Each member in sectorv is just one sector */
>  	ASSERT(sblock->sector_count == num_sectors);
>  
> -	spage = sblock->sectorv[0];
> -	kaddr = page_address(spage->page);
> +	ssector = sblock->sectorv[0];
> +	kaddr = page_address(ssector->page);
>  	h = (struct btrfs_header *)kaddr;
>  	memcpy(on_disk_csum, h->csum, sctx->fs_info->csum_size);
>  
> @@ -1867,15 +1867,15 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>  	 * a) don't have an extent buffer and
>  	 * b) the page is already kmapped
>  	 */
> -	if (spage->logical != btrfs_stack_header_bytenr(h))
> +	if (ssector->logical != btrfs_stack_header_bytenr(h))
>  		sblock->header_error = 1;
>  
> -	if (spage->generation != btrfs_stack_header_generation(h)) {
> +	if (ssector->generation != btrfs_stack_header_generation(h)) {
>  		sblock->header_error = 1;
>  		sblock->generation_error = 1;
>  	}
>  
> -	if (!scrub_check_fsid(h->fsid, spage))
> +	if (!scrub_check_fsid(h->fsid, ssector))
>  		sblock->header_error = 1;
>  
>  	if (memcmp(h->chunk_tree_uuid, fs_info->chunk_tree_uuid,
> @@ -1906,23 +1906,23 @@ static int scrub_checksum_super(struct scrub_block *sblock)
>  	struct btrfs_fs_info *fs_info = sctx->fs_info;
>  	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
>  	u8 calculated_csum[BTRFS_CSUM_SIZE];
> -	struct scrub_page *spage;
> +	struct scrub_sector *ssector;
>  	char *kaddr;
>  	int fail_gen = 0;
>  	int fail_cor = 0;
>  
>  	BUG_ON(sblock->sector_count < 1);
> -	spage = sblock->sectorv[0];
> -	kaddr = page_address(spage->page);
> +	ssector = sblock->sectorv[0];
> +	kaddr = page_address(ssector->page);
>  	s = (struct btrfs_super_block *)kaddr;
>  
> -	if (spage->logical != btrfs_super_bytenr(s))
> +	if (ssector->logical != btrfs_super_bytenr(s))
>  		++fail_cor;
>  
> -	if (spage->generation != btrfs_super_generation(s))
> +	if (ssector->generation != btrfs_super_generation(s))
>  		++fail_gen;
>  
> -	if (!scrub_check_fsid(s->fsid, spage))
> +	if (!scrub_check_fsid(s->fsid, ssector))
>  		++fail_cor;
>  
>  	shash->tfm = fs_info->csum_shash;
> @@ -1943,10 +1943,10 @@ static int scrub_checksum_super(struct scrub_block *sblock)
>  		++sctx->stat.super_errors;
>  		spin_unlock(&sctx->stat_lock);
>  		if (fail_cor)
> -			btrfs_dev_stat_inc_and_print(spage->dev,
> +			btrfs_dev_stat_inc_and_print(ssector->dev,
>  				BTRFS_DEV_STAT_CORRUPTION_ERRS);
>  		else
> -			btrfs_dev_stat_inc_and_print(spage->dev,
> +			btrfs_dev_stat_inc_and_print(ssector->dev,
>  				BTRFS_DEV_STAT_GENERATION_ERRS);
>  	}
>  
> @@ -1967,22 +1967,22 @@ static void scrub_block_put(struct scrub_block *sblock)
>  			scrub_parity_put(sblock->sparity);
>  
>  		for (i = 0; i < sblock->sector_count; i++)
> -			scrub_page_put(sblock->sectorv[i]);
> +			scrub_sector_put(sblock->sectorv[i]);
>  		kfree(sblock);
>  	}
>  }
>  
> -static void scrub_page_get(struct scrub_page *spage)
> +static void scrub_sector_get(struct scrub_sector *ssector)
>  {
> -	atomic_inc(&spage->refs);
> +	atomic_inc(&ssector->refs);
>  }
>  
> -static void scrub_page_put(struct scrub_page *spage)
> +static void scrub_sector_put(struct scrub_sector *ssector)
>  {
> -	if (atomic_dec_and_test(&spage->refs)) {
> -		if (spage->page)
> -			__free_page(spage->page);
> -		kfree(spage);
> +	if (atomic_dec_and_test(&ssector->refs)) {
> +		if (ssector->page)
> +			__free_page(ssector->page);
> +		kfree(ssector);
>  	}
>  }
>  
> @@ -2060,10 +2060,10 @@ static void scrub_submit(struct scrub_ctx *sctx)
>  	btrfsic_submit_bio(sbio->bio);
>  }
>  
> -static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
> -				    struct scrub_page *spage)
> +static int scrub_add_sector_to_rd_bio(struct scrub_ctx *sctx,
> +				      struct scrub_sector *ssector)
>  {
> -	struct scrub_block *sblock = spage->sblock;
> +	struct scrub_block *sblock = ssector->sblock;
>  	struct scrub_bio *sbio;
>  	const u32 sectorsize = sctx->fs_info->sectorsize;
>  	int ret;
> @@ -2089,9 +2089,9 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
>  	if (sbio->page_count == 0) {
>  		struct bio *bio;
>  
> -		sbio->physical = spage->physical;
> -		sbio->logical = spage->logical;
> -		sbio->dev = spage->dev;
> +		sbio->physical = ssector->physical;
> +		sbio->logical = ssector->logical;
> +		sbio->dev = ssector->dev;
>  		bio = sbio->bio;
>  		if (!bio) {
>  			bio = btrfs_bio_alloc(sctx->pages_per_bio);
> @@ -2105,16 +2105,16 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
>  		bio->bi_opf = REQ_OP_READ;
>  		sbio->status = 0;
>  	} else if (sbio->physical + sbio->page_count * sectorsize !=
> -		   spage->physical ||
> +		   ssector->physical ||
>  		   sbio->logical + sbio->page_count * sectorsize !=
> -		   spage->logical ||
> -		   sbio->dev != spage->dev) {
> +		   ssector->logical ||
> +		   sbio->dev != ssector->dev) {
>  		scrub_submit(sctx);
>  		goto again;
>  	}
>  
> -	sbio->pagev[sbio->page_count] = spage;
> -	ret = bio_add_page(sbio->bio, spage->page, sectorsize, 0);
> +	sbio->pagev[sbio->page_count] = ssector;
> +	ret = bio_add_page(sbio->bio, ssector->page, sectorsize, 0);
>  	if (ret != sectorsize) {
>  		if (sbio->page_count < 1) {
>  			bio_put(sbio->bio);
> @@ -2126,7 +2126,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
>  	}
>  
>  	scrub_block_get(sblock); /* one for the page added to the bio */
> -	atomic_inc(&sblock->outstanding_pages);
> +	atomic_inc(&sblock->outstanding_sectors);
>  	sbio->page_count++;
>  	if (sbio->page_count == sctx->pages_per_bio)
>  		scrub_submit(sctx);
> @@ -2228,9 +2228,9 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>  		goto rbio_out;
>  
>  	for (i = 0; i < sblock->sector_count; i++) {
> -		struct scrub_page *spage = sblock->sectorv[i];
> +		struct scrub_sector *ssector = sblock->sectorv[i];
>  
> -		raid56_add_scrub_pages(rbio, spage->page, spage->logical);
> +		raid56_add_scrub_pages(rbio, ssector->page, ssector->logical);
>  	}
>  
>  	btrfs_init_work(&sblock->work, scrub_missing_raid56_worker, NULL, NULL);
> @@ -2249,7 +2249,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>  	spin_unlock(&sctx->stat_lock);
>  }
>  
> -static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
> +static int scrub_sectors(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		       u64 physical, struct btrfs_device *dev, u64 flags,
>  		       u64 gen, int mirror_num, u8 *csum,
>  		       u64 physical_for_dev_replace)
> @@ -2273,7 +2273,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  	sblock->no_io_error_seen = 1;
>  
>  	for (index = 0; len > 0; index++) {
> -		struct scrub_page *spage;
> +		struct scrub_sector *ssector;
>  		/*
>  		 * Here we will allocate one page for one sector to scrub.
>  		 * This is fine if PAGE_SIZE == sectorsize, but will cost
> @@ -2281,8 +2281,8 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		 */
>  		u32 l = min(sectorsize, len);
>  
> -		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
> -		if (!spage) {
> +		ssector = kzalloc(sizeof(*ssector), GFP_KERNEL);
> +		if (!ssector) {
>  leave_nomem:
>  			spin_lock(&sctx->stat_lock);
>  			sctx->stat.malloc_errors++;
> @@ -2291,25 +2291,25 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  			return -ENOMEM;
>  		}
>  		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
> -		scrub_page_get(spage);
> -		sblock->sectorv[index] = spage;
> -		spage->sblock = sblock;
> -		spage->dev = dev;
> -		spage->flags = flags;
> -		spage->generation = gen;
> -		spage->logical = logical;
> -		spage->physical = physical;
> -		spage->physical_for_dev_replace = physical_for_dev_replace;
> -		spage->mirror_num = mirror_num;
> +		scrub_sector_get(ssector);
> +		sblock->sectorv[index] = ssector;
> +		ssector->sblock = sblock;
> +		ssector->dev = dev;
> +		ssector->flags = flags;
> +		ssector->generation = gen;
> +		ssector->logical = logical;
> +		ssector->physical = physical;
> +		ssector->physical_for_dev_replace = physical_for_dev_replace;
> +		ssector->mirror_num = mirror_num;
>  		if (csum) {
> -			spage->have_csum = 1;
> -			memcpy(spage->csum, csum, sctx->fs_info->csum_size);
> +			ssector->have_csum = 1;
> +			memcpy(ssector->csum, csum, sctx->fs_info->csum_size);
>  		} else {
> -			spage->have_csum = 0;
> +			ssector->have_csum = 0;
>  		}
>  		sblock->sector_count++;
> -		spage->page = alloc_page(GFP_KERNEL);
> -		if (!spage->page)
> +		ssector->page = alloc_page(GFP_KERNEL);
> +		if (!ssector->page)
>  			goto leave_nomem;
>  		len -= l;
>  		logical += l;
> @@ -2326,10 +2326,10 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>  		scrub_missing_raid56_pages(sblock);
>  	} else {
>  		for (index = 0; index < sblock->sector_count; index++) {
> -			struct scrub_page *spage = sblock->sectorv[index];
> +			struct scrub_sector *ssector = sblock->sectorv[index];
>  			int ret;
>  
> -			ret = scrub_add_page_to_rd_bio(sctx, spage);
> +			ret = scrub_add_sector_to_rd_bio(sctx, ssector);
>  			if (ret) {
>  				scrub_block_put(sblock);
>  				return ret;
> @@ -2365,19 +2365,19 @@ static void scrub_bio_end_io_worker(struct btrfs_work *work)
>  	ASSERT(sbio->page_count <= SCRUB_PAGES_PER_BIO);
>  	if (sbio->status) {
>  		for (i = 0; i < sbio->page_count; i++) {
> -			struct scrub_page *spage = sbio->pagev[i];
> +			struct scrub_sector *ssector = sbio->pagev[i];
>  
> -			spage->io_error = 1;
> -			spage->sblock->no_io_error_seen = 0;
> +			ssector->io_error = 1;
> +			ssector->sblock->no_io_error_seen = 0;
>  		}
>  	}
>  
>  	/* now complete the scrub_block items that have all pages completed */
>  	for (i = 0; i < sbio->page_count; i++) {
> -		struct scrub_page *spage = sbio->pagev[i];
> -		struct scrub_block *sblock = spage->sblock;
> +		struct scrub_sector *ssector = sbio->pagev[i];
> +		struct scrub_block *sblock = ssector->sblock;
>  
> -		if (atomic_dec_and_test(&sblock->outstanding_pages))
> +		if (atomic_dec_and_test(&sblock->outstanding_sectors))
>  			scrub_block_complete(sblock);
>  		scrub_block_put(sblock);
>  	}
> @@ -2571,7 +2571,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
>  			if (have_csum == 0)
>  				++sctx->stat.no_csum;
>  		}
> -		ret = scrub_pages(sctx, logical, l, physical, dev, flags, gen,
> +		ret = scrub_sectors(sctx, logical, l, physical, dev, flags, gen,
>  				  mirror_num, have_csum ? csum : NULL,
>  				  physical_for_dev_replace);
>  		if (ret)
> @@ -2584,7 +2584,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
>  	return 0;
>  }
>  
> -static int scrub_pages_for_parity(struct scrub_parity *sparity,
> +static int scrub_sectors_for_parity(struct scrub_parity *sparity,
>  				  u64 logical, u32 len,
>  				  u64 physical, struct btrfs_device *dev,
>  				  u64 flags, u64 gen, int mirror_num, u8 *csum)
> @@ -2613,10 +2613,10 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  	scrub_parity_get(sparity);
>  
>  	for (index = 0; len > 0; index++) {
> -		struct scrub_page *spage;
> +		struct scrub_sector *ssector;
>  
> -		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
> -		if (!spage) {
> +		ssector = kzalloc(sizeof(*ssector), GFP_KERNEL);
> +		if (!ssector) {
>  leave_nomem:
>  			spin_lock(&sctx->stat_lock);
>  			sctx->stat.malloc_errors++;
> @@ -2626,27 +2626,27 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  		}
>  		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
>  		/* For scrub block */
> -		scrub_page_get(spage);
> -		sblock->sectorv[index] = spage;
> +		scrub_sector_get(ssector);
> +		sblock->sectorv[index] = ssector;
>  		/* For scrub parity */
> -		scrub_page_get(spage);
> -		list_add_tail(&spage->list, &sparity->spages);
> -		spage->sblock = sblock;
> -		spage->dev = dev;
> -		spage->flags = flags;
> -		spage->generation = gen;
> -		spage->logical = logical;
> -		spage->physical = physical;
> -		spage->mirror_num = mirror_num;
> +		scrub_sector_get(ssector);
> +		list_add_tail(&ssector->list, &sparity->ssectors);
> +		ssector->sblock = sblock;
> +		ssector->dev = dev;
> +		ssector->flags = flags;
> +		ssector->generation = gen;
> +		ssector->logical = logical;
> +		ssector->physical = physical;
> +		ssector->mirror_num = mirror_num;
>  		if (csum) {
> -			spage->have_csum = 1;
> -			memcpy(spage->csum, csum, sctx->fs_info->csum_size);
> +			ssector->have_csum = 1;
> +			memcpy(ssector->csum, csum, sctx->fs_info->csum_size);
>  		} else {
> -			spage->have_csum = 0;
> +			ssector->have_csum = 0;
>  		}
>  		sblock->sector_count++;
> -		spage->page = alloc_page(GFP_KERNEL);
> -		if (!spage->page)
> +		ssector->page = alloc_page(GFP_KERNEL);
> +		if (!ssector->page)
>  			goto leave_nomem;
>  
>  
> @@ -2658,17 +2658,17 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>  
>  	WARN_ON(sblock->sector_count == 0);
>  	for (index = 0; index < sblock->sector_count; index++) {
> -		struct scrub_page *spage = sblock->sectorv[index];
> +		struct scrub_sector *ssector = sblock->sectorv[index];
>  		int ret;
>  
> -		ret = scrub_add_page_to_rd_bio(sctx, spage);
> +		ret = scrub_add_sector_to_rd_bio(sctx, ssector);
>  		if (ret) {
>  			scrub_block_put(sblock);
>  			return ret;
>  		}
>  	}
>  
> -	/* last one frees, either here or in bio completion for last page */
> +	/* last one frees, either here or in bio completion for last sector */
>  	scrub_block_put(sblock);
>  	return 0;
>  }
> @@ -2707,7 +2707,7 @@ static int scrub_extent_for_parity(struct scrub_parity *sparity,
>  			if (have_csum == 0)
>  				goto skip;
>  		}
> -		ret = scrub_pages_for_parity(sparity, logical, l, physical, dev,
> +		ret = scrub_sectors_for_parity(sparity, logical, l, physical, dev,
>  					     flags, gen, mirror_num,
>  					     have_csum ? csum : NULL);
>  		if (ret)
> @@ -2767,7 +2767,7 @@ static int get_raid56_logic_offset(u64 physical, int num,
>  static void scrub_free_parity(struct scrub_parity *sparity)
>  {
>  	struct scrub_ctx *sctx = sparity->sctx;
> -	struct scrub_page *curr, *next;
> +	struct scrub_sector *curr, *next;
>  	int nbits;
>  
>  	nbits = bitmap_weight(sparity->ebitmap, sparity->nsectors);
> @@ -2778,9 +2778,9 @@ static void scrub_free_parity(struct scrub_parity *sparity)
>  		spin_unlock(&sctx->stat_lock);
>  	}
>  
> -	list_for_each_entry_safe(curr, next, &sparity->spages, list) {
> +	list_for_each_entry_safe(curr, next, &sparity->ssectors, list) {
>  		list_del_init(&curr->list);
> -		scrub_page_put(curr);
> +		scrub_sector_put(curr);
>  	}
>  
>  	kfree(sparity);
> @@ -2943,7 +2943,7 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
>  	sparity->logic_start = logic_start;
>  	sparity->logic_end = logic_end;
>  	refcount_set(&sparity->refs, 1);
> -	INIT_LIST_HEAD(&sparity->spages);
> +	INIT_LIST_HEAD(&sparity->ssectors);
>  	sparity->dbitmap = sparity->bitmap;
>  	sparity->ebitmap = (void *)sparity->bitmap + bitmap_len;
>  
> @@ -3940,9 +3940,9 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
>  		if (!btrfs_check_super_location(scrub_dev, bytenr))
>  			continue;
>  
> -		ret = scrub_pages(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
> -				  scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
> -				  NULL, bytenr);
> +		ret = scrub_sectors(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
> +				    scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
> +				    NULL, bytenr);
>  		if (ret)
>  			return ret;
>  	}
> @@ -4061,7 +4061,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>  	    SCRUB_MAX_SECTORS_PER_BLOCK * fs_info->sectorsize ||
>  	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_SECTORS_PER_BLOCK) {
>  		/*
> -		 * would exhaust the array bounds of pagev member in
> +		 * would exhaust the array bounds of sectorv member in
>  		 * struct scrub_block
>  		 */
>  		btrfs_err(fs_info,
> @@ -4137,7 +4137,7 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>  	/*
>  	 * In order to avoid deadlock with reclaim when there is a transaction
>  	 * trying to pause scrub, make sure we use GFP_NOFS for all the
> -	 * allocations done at btrfs_scrub_pages() and scrub_pages_for_parity()
> +	 * allocations done at btrfs_scrub_sectors() and scrub_sectors_for_parity()
>  	 * invoked by our callees. The pausing request is done when the
>  	 * transaction commit starts, and it blocks the transaction until scrub
>  	 * is paused (done at specific points at scrub_stripe() or right above
> -- 
> 2.35.1


* Re: [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev
  2022-03-11 17:49   ` David Sterba
@ 2022-03-11 23:17     ` Qu Wenruo
  2022-03-14 19:17       ` David Sterba
  0 siblings, 1 reply; 8+ messages in thread
From: Qu Wenruo @ 2022-03-11 23:17 UTC (permalink / raw)
  To: dsterba, Qu Wenruo, linux-btrfs



On 2022/3/12 01:49, David Sterba wrote:
> On Fri, Mar 11, 2022 at 03:34:18PM +0800, Qu Wenruo wrote:
>> The following will be renamed in this patch:
>>
>> - scrub_block::pagev -> sectorv
>
> I think you can come up with a different naming scheme and not copy the
> original, eg. something closer what we've been using elsewhere. Here the
> 'pagev' is a page vector, but 'sectors' is IMHO fine and also
> understandable.

OK, I would go with 'sectors', as I'm not a fan of the 'v' suffix either.

>
>>
>> - scrub_block::page_count -> sector_count
>>
>> - SCRUB_MAX_PAGES_PER_BLOCK -> SCRUB_MAX_SECTORS_PER_BLOCK
>>
>> - page_num -> sector_num to iterate scrub_block::sectorv
>>
>> For now scrub_page is not yet renamed, as the current changeset is
>> already large enough.
>>
>> The rename for scrub_page will come in a separate patch.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>>   fs/btrfs/scrub.c | 220 +++++++++++++++++++++++------------------------
>>   1 file changed, 110 insertions(+), 110 deletions(-)
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index 11089568b287..fd67e1acdba6 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -52,7 +52,7 @@ struct scrub_ctx;
>>    * The following value times PAGE_SIZE needs to be large enough to match the
>>    * largest node/leaf/sector size that shall be supported.
>>    */
>> -#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
>> +#define SCRUB_MAX_SECTORS_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
>>
>>   struct scrub_recover {
>>   	refcount_t		refs;
>> @@ -94,8 +94,8 @@ struct scrub_bio {
>>   };
>>
>>   struct scrub_block {
>> -	struct scrub_page	*pagev[SCRUB_MAX_PAGES_PER_BLOCK];
>> -	int			page_count;
>> +	struct scrub_page	*sectorv[SCRUB_MAX_SECTORS_PER_BLOCK];
>> +	int			sector_count;
>>   	atomic_t		outstanding_pages;
>>   	refcount_t		refs; /* free mem on transition to zero */
>>   	struct scrub_ctx	*sctx;
>> @@ -728,16 +728,16 @@ static void scrub_print_warning(const char *errstr, struct scrub_block *sblock)
>>   	u8 ref_level = 0;
>>   	int ret;
>>
>> -	WARN_ON(sblock->page_count < 1);
>> -	dev = sblock->pagev[0]->dev;
>> +	WARN_ON(sblock->sector_count < 1);
>> +	dev = sblock->sectorv[0]->dev;
>>   	fs_info = sblock->sctx->fs_info;
>>
>>   	path = btrfs_alloc_path();
>>   	if (!path)
>>   		return;
>>
>> -	swarn.physical = sblock->pagev[0]->physical;
>> -	swarn.logical = sblock->pagev[0]->logical;
>> +	swarn.physical = sblock->sectorv[0]->physical;
>> +	swarn.logical = sblock->sectorv[0]->logical;
>>   	swarn.errstr = errstr;
>>   	swarn.dev = NULL;
>>
>> @@ -817,16 +817,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   	struct scrub_block *sblock_bad;
>>   	int ret;
>>   	int mirror_index;
>> -	int page_num;
>> +	int sector_num;
>>   	int success;
>>   	bool full_stripe_locked;
>>   	unsigned int nofs_flag;
>>   	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
>>   				      DEFAULT_RATELIMIT_BURST);
>>
>> -	BUG_ON(sblock_to_check->page_count < 1);
>> +	BUG_ON(sblock_to_check->sector_count < 1);
>>   	fs_info = sctx->fs_info;
>> -	if (sblock_to_check->pagev[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
>> +	if (sblock_to_check->sectorv[0]->flags & BTRFS_EXTENT_FLAG_SUPER) {
>>   		/*
>>   		 * if we find an error in a super block, we just report it.
>>   		 * They will get written with the next transaction commit
>> @@ -837,13 +837,13 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   		spin_unlock(&sctx->stat_lock);
>>   		return 0;
>>   	}
>> -	logical = sblock_to_check->pagev[0]->logical;
>> -	BUG_ON(sblock_to_check->pagev[0]->mirror_num < 1);
>> -	failed_mirror_index = sblock_to_check->pagev[0]->mirror_num - 1;
>> -	is_metadata = !(sblock_to_check->pagev[0]->flags &
>> +	logical = sblock_to_check->sectorv[0]->logical;
>> +	BUG_ON(sblock_to_check->sectorv[0]->mirror_num < 1);
>> +	failed_mirror_index = sblock_to_check->sectorv[0]->mirror_num - 1;
>> +	is_metadata = !(sblock_to_check->sectorv[0]->flags &
>>   			BTRFS_EXTENT_FLAG_DATA);
>> -	have_csum = sblock_to_check->pagev[0]->have_csum;
>> -	dev = sblock_to_check->pagev[0]->dev;
>> +	have_csum = sblock_to_check->sectorv[0]->have_csum;
>> +	dev = sblock_to_check->sectorv[0]->dev;
>>
>>   	if (!sctx->is_dev_replace && btrfs_repair_one_zone(fs_info, logical))
>>   		return 0;
>> @@ -1011,25 +1011,25 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   			continue;
>>
>>   		/* raid56's mirror can be more than BTRFS_MAX_MIRRORS */
>> -		if (!scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
>> +		if (!scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
>>   			if (mirror_index >= BTRFS_MAX_MIRRORS)
>>   				break;
>> -			if (!sblocks_for_recheck[mirror_index].page_count)
>> +			if (!sblocks_for_recheck[mirror_index].sector_count)
>>   				break;
>>
>>   			sblock_other = sblocks_for_recheck + mirror_index;
>>   		} else {
>> -			struct scrub_recover *r = sblock_bad->pagev[0]->recover;
>> +			struct scrub_recover *r = sblock_bad->sectorv[0]->recover;
>>   			int max_allowed = r->bioc->num_stripes - r->bioc->num_tgtdevs;
>>
>>   			if (mirror_index >= max_allowed)
>>   				break;
>> -			if (!sblocks_for_recheck[1].page_count)
>> +			if (!sblocks_for_recheck[1].sector_count)
>>   				break;
>>
>>   			ASSERT(failed_mirror_index == 0);
>>   			sblock_other = sblocks_for_recheck + 1;
>> -			sblock_other->pagev[0]->mirror_num = 1 + mirror_index;
>> +			sblock_other->sectorv[0]->mirror_num = 1 + mirror_index;
>>   		}
>>
>>   		/* build and submit the bios, check checksums */
>> @@ -1078,16 +1078,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   	 * area are unreadable.
>>   	 */
>>   	success = 1;
>> -	for (page_num = 0; page_num < sblock_bad->page_count;
>> -	     page_num++) {
>> -		struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
>> +	for (sector_num = 0; sector_num < sblock_bad->sector_count;
>
> This is a simple indexing, so while sector_num is accurate a plain 'i'
> would work too. It would also make some lines shorter.

Here I intentionally avoided using a single letter, because the
existing code follows a pretty bad practice of having a double for loop.

Here we're running two different loops: one iterates over all the
sectors, the other over all the mirrors.

Thus we need to distinguish them, or it can easily get screwed up by
mixing up the loop indexes.

>
>> +	     sector_num++) {
>> +		struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
>>   		struct scrub_block *sblock_other = NULL;
>>
>>   		/* skip no-io-error page in scrub */
>>   		if (!spage_bad->io_error && !sctx->is_dev_replace)
>>   			continue;
>>
>> -		if (scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
>> +		if (scrub_is_page_on_raid56(sblock_bad->sectorv[0])) {
>>   			/*
>>   			 * In case of dev replace, if raid56 rebuild process
>>   			 * didn't work out correct data, then copy the content
>> @@ -1100,10 +1100,10 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   			/* try to find no-io-error page in mirrors */
>>   			for (mirror_index = 0;
>>   			     mirror_index < BTRFS_MAX_MIRRORS &&
>> -			     sblocks_for_recheck[mirror_index].page_count > 0;
>> +			     sblocks_for_recheck[mirror_index].sector_count > 0;

See, this is the second loop iterator.

I guess that's why the original code uses @mirror_index and
@page_index.

>>   			     mirror_index++) {
>>   				if (!sblocks_for_recheck[mirror_index].
>> -				    pagev[page_num]->io_error) {
>> +				    sectorv[sector_num]->io_error) {
>>   					sblock_other = sblocks_for_recheck +
>>   						       mirror_index;
>>   					break;
>> @@ -1125,7 +1125,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   				sblock_other = sblock_bad;
>>
>>   			if (scrub_write_page_to_dev_replace(sblock_other,
>> -							    page_num) != 0) {
>> +							    sector_num) != 0) {
>>   				atomic64_inc(
>>   					&fs_info->dev_replace.num_write_errors);
>>   				success = 0;
>> @@ -1133,7 +1133,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   		} else if (sblock_other) {
>>   			ret = scrub_repair_page_from_good_copy(sblock_bad,
>>   							       sblock_other,
>> -							       page_num, 0);
>> +							       sector_num, 0);
>>   			if (0 == ret)
>>   				spage_bad->io_error = 0;
>>   			else
>> @@ -1186,18 +1186,18 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>   			struct scrub_block *sblock = sblocks_for_recheck +
>>   						     mirror_index;
>>   			struct scrub_recover *recover;
>> -			int page_index;
>> +			int sector_index;
>>
>> -			for (page_index = 0; page_index < sblock->page_count;
>> -			     page_index++) {
>> -				sblock->pagev[page_index]->sblock = NULL;
>> -				recover = sblock->pagev[page_index]->recover;
>> +			for (sector_index = 0; sector_index < sblock->sector_count;
>> +			     sector_index++) {
>> +				sblock->sectorv[sector_index]->sblock = NULL;
>> +				recover = sblock->sectorv[sector_index]->recover;
>>   				if (recover) {
>>   					scrub_put_recover(fs_info, recover);
>> -					sblock->pagev[page_index]->recover =
>> +					sblock->sectorv[sector_index]->recover =
>>   									NULL;
>>   				}
>> -				scrub_page_put(sblock->pagev[page_index]);
>> +				scrub_page_put(sblock->sectorv[sector_index]);
>>   			}
>>   		}
>>   		kfree(sblocks_for_recheck);
>> @@ -1255,18 +1255,18 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   {
>>   	struct scrub_ctx *sctx = original_sblock->sctx;
>>   	struct btrfs_fs_info *fs_info = sctx->fs_info;
>> -	u64 length = original_sblock->page_count * fs_info->sectorsize;
>> -	u64 logical = original_sblock->pagev[0]->logical;
>> -	u64 generation = original_sblock->pagev[0]->generation;
>> -	u64 flags = original_sblock->pagev[0]->flags;
>> -	u64 have_csum = original_sblock->pagev[0]->have_csum;
>> +	u64 length = original_sblock->sector_count * fs_info->sectorsize;
>
> 						>> fs_info->sectorsize_bits

Well, that's why I kept everything as just a rename, as the suggested
shift is in the wrong direction...

>
>> +	u64 logical = original_sblock->sectorv[0]->logical;
>> +	u64 generation = original_sblock->sectorv[0]->generation;
>> +	u64 flags = original_sblock->sectorv[0]->flags;
>> +	u64 have_csum = original_sblock->sectorv[0]->have_csum;
>>   	struct scrub_recover *recover;
>>   	struct btrfs_io_context *bioc;
>>   	u64 sublen;
>>   	u64 mapped_length;
>>   	u64 stripe_offset;
>>   	int stripe_index;
>> -	int page_index = 0;
>> +	int sector_index = 0;
>>   	int mirror_index;
>>   	int nmirrors;
>>   	int ret;
>> @@ -1306,7 +1306,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   		recover->bioc = bioc;
>>   		recover->map_length = mapped_length;
>>
>> -		ASSERT(page_index < SCRUB_MAX_PAGES_PER_BLOCK);
>> +		ASSERT(sector_index < SCRUB_MAX_SECTORS_PER_BLOCK);
>>
>>   		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
>>
>> @@ -1328,7 +1328,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   				return -ENOMEM;
>>   			}
>>   			scrub_page_get(spage);
>> -			sblock->pagev[page_index] = spage;
>> +			sblock->sectorv[sector_index] = spage;
>>   			spage->sblock = sblock;
>>   			spage->flags = flags;
>>   			spage->generation = generation;
>> @@ -1336,7 +1336,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   			spage->have_csum = have_csum;
>>   			if (have_csum)
>>   				memcpy(spage->csum,
>> -				       original_sblock->pagev[0]->csum,
>> +				       original_sblock->sectorv[0]->csum,
>>   				       sctx->fs_info->csum_size);
>>
>>   			scrub_stripe_index_and_offset(logical,
>> @@ -1352,13 +1352,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   					 stripe_offset;
>>   			spage->dev = bioc->stripes[stripe_index].dev;
>>
>> -			BUG_ON(page_index >= original_sblock->page_count);
>> +			BUG_ON(sector_index >= original_sblock->sector_count);
>>   			spage->physical_for_dev_replace =
>> -				original_sblock->pagev[page_index]->
>> +				original_sblock->sectorv[sector_index]->
>>   				physical_for_dev_replace;
>>   			/* for missing devices, dev->bdev is NULL */
>>   			spage->mirror_num = mirror_index + 1;
>> -			sblock->page_count++;
>> +			sblock->sector_count++;
>>   			spage->page = alloc_page(GFP_NOFS);
>>   			if (!spage->page)
>>   				goto leave_nomem;
>> @@ -1369,7 +1369,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
>>   		scrub_put_recover(fs_info, recover);
>>   		length -= sublen;
>>   		logical += sublen;
>> -		page_index++;
>> +		sector_index++;
>>   	}
>>
>>   	return 0;
>> @@ -1392,7 +1392,7 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>>   	bio->bi_private = &done;
>>   	bio->bi_end_io = scrub_bio_wait_endio;
>>
>> -	mirror_num = spage->sblock->pagev[0]->mirror_num;
>> +	mirror_num = spage->sblock->sectorv[0]->mirror_num;
>>   	ret = raid56_parity_recover(bio, spage->recover->bioc,
>>   				    spage->recover->map_length,
>>   				    mirror_num, 0);
>> @@ -1406,9 +1406,9 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
>>   static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>>   					  struct scrub_block *sblock)
>>   {
>> -	struct scrub_page *first_page = sblock->pagev[0];
>> +	struct scrub_page *first_page = sblock->sectorv[0];
>>   	struct bio *bio;
>> -	int page_num;
>> +	int sector_num;
>
> Also 'i'

Using a single letter is much safer here: the function only has a
single loop, with no nested loop to cause problems.

>
>>   	/* All pages in sblock belong to the same stripe on the same device. */
>>   	ASSERT(first_page->dev);
>> @@ -1418,8 +1418,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>>   	bio = btrfs_bio_alloc(BIO_MAX_VECS);
>>   	bio_set_dev(bio, first_page->dev->bdev);
>>
>> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
>> -		struct scrub_page *spage = sblock->pagev[page_num];
>> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>> +		struct scrub_page *spage = sblock->sectorv[sector_num];
>>
>>   		WARN_ON(!spage->page);
>>   		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
>> @@ -1436,8 +1436,8 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
>>
>>   	return;
>>   out:
>> -	for (page_num = 0; page_num < sblock->page_count; page_num++)
>> -		sblock->pagev[page_num]->io_error = 1;
>> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++)
>> +		sblock->sectorv[sector_num]->io_error = 1;
>>
>>   	sblock->no_io_error_seen = 0;
>>   }
>> @@ -1453,17 +1453,17 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
>>   				struct scrub_block *sblock,
>>   				int retry_failed_mirror)
>>   {
>> -	int page_num;
>> +	int sector_num;
>
> And here too

The remaining uses of single letters are all fine.

I'll update the patch with those single-letter and comment updates.

Thanks,
Qu

>
>>   	sblock->no_io_error_seen = 1;
>>
>>   	/* short cut for raid56 */
>> -	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->pagev[0]))
>> +	if (!retry_failed_mirror && scrub_is_page_on_raid56(sblock->sectorv[0]))
>>   		return scrub_recheck_block_on_raid56(fs_info, sblock);
>>
>> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
>> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>>   		struct bio *bio;
>> -		struct scrub_page *spage = sblock->pagev[page_num];
>> +		struct scrub_page *spage = sblock->sectorv[sector_num];
>>
>>   		if (spage->dev->bdev == NULL) {
>>   			spage->io_error = 1;
>> @@ -1507,7 +1507,7 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
>>   	sblock->checksum_error = 0;
>>   	sblock->generation_error = 0;
>>
>> -	if (sblock->pagev[0]->flags & BTRFS_EXTENT_FLAG_DATA)
>> +	if (sblock->sectorv[0]->flags & BTRFS_EXTENT_FLAG_DATA)
>>   		scrub_checksum_data(sblock);
>>   	else
>>   		scrub_checksum_tree_block(sblock);
>> @@ -1516,15 +1516,15 @@ static void scrub_recheck_block_checksum(struct scrub_block *sblock)
>>   static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>>   					     struct scrub_block *sblock_good)
>>   {
>> -	int page_num;
>> +	int sector_num;
>
> i
>
>>   	int ret = 0;
>>
>> -	for (page_num = 0; page_num < sblock_bad->page_count; page_num++) {
>> +	for (sector_num = 0; sector_num < sblock_bad->sector_count; sector_num++) {
>>   		int ret_sub;
>>
>>   		ret_sub = scrub_repair_page_from_good_copy(sblock_bad,
>>   							   sblock_good,
>> -							   page_num, 1);
>> +							   sector_num, 1);
>>   		if (ret_sub)
>>   			ret = ret_sub;
>>   	}
>> @@ -1534,10 +1534,10 @@ static int scrub_repair_block_from_good_copy(struct scrub_block *sblock_bad,
>>
>>   static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
>>   					    struct scrub_block *sblock_good,
>> -					    int page_num, int force_write)
>> +					    int sector_num, int force_write)
>>   {
>> -	struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
>> -	struct scrub_page *spage_good = sblock_good->pagev[page_num];
>> +	struct scrub_page *spage_bad = sblock_bad->sectorv[sector_num];
>> +	struct scrub_page *spage_good = sblock_good->sectorv[sector_num];
>>   	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
>>   	const u32 sectorsize = fs_info->sectorsize;
>>
>> @@ -1581,7 +1581,7 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
>>   static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
>>   {
>>   	struct btrfs_fs_info *fs_info = sblock->sctx->fs_info;
>> -	int page_num;
>> +	int sector_num;
>
> i
>
>>   	/*
>>   	 * This block is used for the check of the parity on the source device,
>> @@ -1590,19 +1590,19 @@ static void scrub_write_block_to_dev_replace(struct scrub_block *sblock)
>>   	if (sblock->sparity)
>>   		return;
>>
>> -	for (page_num = 0; page_num < sblock->page_count; page_num++) {
>> +	for (sector_num = 0; sector_num < sblock->sector_count; sector_num++) {
>>   		int ret;
>>
>> -		ret = scrub_write_page_to_dev_replace(sblock, page_num);
>> +		ret = scrub_write_page_to_dev_replace(sblock, sector_num);
>>   		if (ret)
>>   			atomic64_inc(&fs_info->dev_replace.num_write_errors);
>>   	}
>>   }
>>
>>   static int scrub_write_page_to_dev_replace(struct scrub_block *sblock,
>> -					   int page_num)
>> +					   int sector_num)
>>   {
>> -	struct scrub_page *spage = sblock->pagev[page_num];
>> +	struct scrub_page *spage = sblock->sectorv[sector_num];
>>
>>   	BUG_ON(spage->page == NULL);
>>   	if (spage->io_error)
>> @@ -1786,8 +1786,8 @@ static int scrub_checksum(struct scrub_block *sblock)
>>   	sblock->generation_error = 0;
>>   	sblock->checksum_error = 0;
>>
>> -	WARN_ON(sblock->page_count < 1);
>> -	flags = sblock->pagev[0]->flags;
>> +	WARN_ON(sblock->sector_count < 1);
>> +	flags = sblock->sectorv[0]->flags;
>>   	ret = 0;
>>   	if (flags & BTRFS_EXTENT_FLAG_DATA)
>>   		ret = scrub_checksum_data(sblock);
>> @@ -1812,8 +1812,8 @@ static int scrub_checksum_data(struct scrub_block *sblock)
>>   	struct scrub_page *spage;
>>   	char *kaddr;
>>
>> -	BUG_ON(sblock->page_count < 1);
>> -	spage = sblock->pagev[0];
>> +	BUG_ON(sblock->sector_count < 1);
>> +	spage = sblock->sectorv[0];
>>   	if (!spage->have_csum)
>>   		return 0;
>>
>> @@ -1852,12 +1852,12 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>>   	struct scrub_page *spage;
>>   	char *kaddr;
>>
>> -	BUG_ON(sblock->page_count < 1);
>> +	BUG_ON(sblock->sector_count < 1);
>>
>> -	/* Each member in pagev is just one block, not a full page */
>> -	ASSERT(sblock->page_count == num_sectors);
>> +	/* Each member in sectorv is just one sector, not a full page */
>> +	ASSERT(sblock->sector_count == num_sectors);
>>
>> -	spage = sblock->pagev[0];
>> +	spage = sblock->sectorv[0];
>>   	kaddr = page_address(spage->page);
>>   	h = (struct btrfs_header *)kaddr;
>>   	memcpy(on_disk_csum, h->csum, sctx->fs_info->csum_size);
>> @@ -1888,7 +1888,7 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
>>   			    sectorsize - BTRFS_CSUM_SIZE);
>>
>>   	for (i = 1; i < num_sectors; i++) {
>> -		kaddr = page_address(sblock->pagev[i]->page);
>> +		kaddr = page_address(sblock->sectorv[i]->page);
>>   		crypto_shash_update(shash, kaddr, sectorsize);
>>   	}
>>
>> @@ -1911,8 +1911,8 @@ static int scrub_checksum_super(struct scrub_block *sblock)
>>   	int fail_gen = 0;
>>   	int fail_cor = 0;
>>
>> -	BUG_ON(sblock->page_count < 1);
>> -	spage = sblock->pagev[0];
>> +	BUG_ON(sblock->sector_count < 1);
>> +	spage = sblock->sectorv[0];
>>   	kaddr = page_address(spage->page);
>>   	s = (struct btrfs_super_block *)kaddr;
>>
>> @@ -1966,8 +1966,8 @@ static void scrub_block_put(struct scrub_block *sblock)
>>   		if (sblock->sparity)
>>   			scrub_parity_put(sblock->sparity);
>>
>> -		for (i = 0; i < sblock->page_count; i++)
>> -			scrub_page_put(sblock->pagev[i]);
>> +		for (i = 0; i < sblock->sector_count; i++)
>> +			scrub_page_put(sblock->sectorv[i]);
>>   		kfree(sblock);
>>   	}
>>   }
>> @@ -2155,8 +2155,8 @@ static void scrub_missing_raid56_worker(struct btrfs_work *work)
>>   	u64 logical;
>>   	struct btrfs_device *dev;
>>
>> -	logical = sblock->pagev[0]->logical;
>> -	dev = sblock->pagev[0]->dev;
>> +	logical = sblock->sectorv[0]->logical;
>> +	dev = sblock->sectorv[0]->dev;
>>
>>   	if (sblock->no_io_error_seen)
>>   		scrub_recheck_block_checksum(sblock);
>> @@ -2193,8 +2193,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>>   {
>>   	struct scrub_ctx *sctx = sblock->sctx;
>>   	struct btrfs_fs_info *fs_info = sctx->fs_info;
>> -	u64 length = sblock->page_count * PAGE_SIZE;
>> -	u64 logical = sblock->pagev[0]->logical;
>> +	u64 length = sblock->sector_count * fs_info->sectorsize;
>> +	u64 logical = sblock->sectorv[0]->logical;
>>   	struct btrfs_io_context *bioc = NULL;
>>   	struct bio *bio;
>>   	struct btrfs_raid_bio *rbio;
>> @@ -2227,8 +2227,8 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
>>   	if (!rbio)
>>   		goto rbio_out;
>>
>> -	for (i = 0; i < sblock->page_count; i++) {
>> -		struct scrub_page *spage = sblock->pagev[i];
>> +	for (i = 0; i < sblock->sector_count; i++) {
>> +		struct scrub_page *spage = sblock->sectorv[i];
>>
>>   		raid56_add_scrub_pages(rbio, spage->page, spage->logical);
>>   	}
>> @@ -2290,9 +2290,9 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>>   			scrub_block_put(sblock);
>>   			return -ENOMEM;
>>   		}
>> -		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
>> +		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
>>   		scrub_page_get(spage);
>> -		sblock->pagev[index] = spage;
>> +		sblock->sectorv[index] = spage;
>>   		spage->sblock = sblock;
>>   		spage->dev = dev;
>>   		spage->flags = flags;
>> @@ -2307,7 +2307,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>>   		} else {
>>   			spage->have_csum = 0;
>>   		}
>> -		sblock->page_count++;
>> +		sblock->sector_count++;
>>   		spage->page = alloc_page(GFP_KERNEL);
>>   		if (!spage->page)
>>   			goto leave_nomem;
>> @@ -2317,7 +2317,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>>   		physical_for_dev_replace += l;
>>   	}
>>
>> -	WARN_ON(sblock->page_count == 0);
>> +	WARN_ON(sblock->sector_count == 0);
>>   	if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state)) {
>>   		/*
>>   		 * This case should only be hit for RAID 5/6 device replace. See
>> @@ -2325,8 +2325,8 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
>>   		 */
>>   		scrub_missing_raid56_pages(sblock);
>>   	} else {
>> -		for (index = 0; index < sblock->page_count; index++) {
>> -			struct scrub_page *spage = sblock->pagev[index];
>> +		for (index = 0; index < sblock->sector_count; index++) {
>> +			struct scrub_page *spage = sblock->sectorv[index];
>>   			int ret;
>>
>>   			ret = scrub_add_page_to_rd_bio(sctx, spage);
>> @@ -2456,8 +2456,8 @@ static void scrub_block_complete(struct scrub_block *sblock)
>>   	}
>>
>>   	if (sblock->sparity && corrupted && !sblock->data_corrected) {
>> -		u64 start = sblock->pagev[0]->logical;
>> -		u64 end = sblock->pagev[sblock->page_count - 1]->logical +
>> +		u64 start = sblock->sectorv[0]->logical;
>> +		u64 end = sblock->sectorv[sblock->sector_count - 1]->logical +
>>   			  sblock->sctx->fs_info->sectorsize;
>>
>>   		ASSERT(end - start <= U32_MAX);
>> @@ -2624,10 +2624,10 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>>   			scrub_block_put(sblock);
>>   			return -ENOMEM;
>>   		}
>> -		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
>> +		ASSERT(index < SCRUB_MAX_SECTORS_PER_BLOCK);
>>   		/* For scrub block */
>>   		scrub_page_get(spage);
>> -		sblock->pagev[index] = spage;
>> +		sblock->sectorv[index] = spage;
>>   		/* For scrub parity */
>>   		scrub_page_get(spage);
>>   		list_add_tail(&spage->list, &sparity->spages);
>> @@ -2644,7 +2644,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>>   		} else {
>>   			spage->have_csum = 0;
>>   		}
>> -		sblock->page_count++;
>> +		sblock->sector_count++;
>>   		spage->page = alloc_page(GFP_KERNEL);
>>   		if (!spage->page)
>>   			goto leave_nomem;
>> @@ -2656,9 +2656,9 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
>>   		physical += sectorsize;
>>   	}
>>
>> -	WARN_ON(sblock->page_count == 0);
>> -	for (index = 0; index < sblock->page_count; index++) {
>> -		struct scrub_page *spage = sblock->pagev[index];
>> +	WARN_ON(sblock->sector_count == 0);
>> +	for (index = 0; index < sblock->sector_count; index++) {
>> +		struct scrub_page *spage = sblock->sectorv[index];
>>   		int ret;
>>
>>   		ret = scrub_add_page_to_rd_bio(sctx, spage);
>> @@ -4058,18 +4058,18 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
>>   	}
>>
>>   	if (fs_info->nodesize >
>> -	    PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK ||
>> -	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_PAGES_PER_BLOCK) {
>> +	    SCRUB_MAX_SECTORS_PER_BLOCK * fs_info->sectorsize ||
>> +	    fs_info->sectorsize > PAGE_SIZE * SCRUB_MAX_SECTORS_PER_BLOCK) {
>>   		/*
>>   		 * would exhaust the array bounds of pagev member in
>>   		 * struct scrub_block
>>   		 */
>>   		btrfs_err(fs_info,
>> -			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_PAGES_PER_BLOCK (%d <= %d && %d <= %d) fails",
>> +			  "scrub: size assumption nodesize and sectorsize <= SCRUB_MAX_SECTORS_PER_BLOCK (%d <= %d && %d <= %d) fails",
>>   		       fs_info->nodesize,
>> -		       SCRUB_MAX_PAGES_PER_BLOCK,
>> +		       SCRUB_MAX_SECTORS_PER_BLOCK,
>>   		       fs_info->sectorsize,
>> -		       SCRUB_MAX_PAGES_PER_BLOCK);
>> +		       SCRUB_MAX_SECTORS_PER_BLOCK);
>>   		return -EINVAL;
>>   	}
>>
>> --
>> 2.35.1


* Re: [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev
  2022-03-11 23:17     ` Qu Wenruo
@ 2022-03-14 19:17       ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2022-03-14 19:17 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: dsterba, Qu Wenruo, linux-btrfs

On Sat, Mar 12, 2022 at 07:17:42AM +0800, Qu Wenruo wrote:
> >> @@ -1078,16 +1078,16 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
> >>   	 * area are unreadable.
> >>   	 */
> >>   	success = 1;
> >> -	for (page_num = 0; page_num < sblock_bad->page_count;
> >> -	     page_num++) {
> >> -		struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
> >> +	for (sector_num = 0; sector_num < sblock_bad->sector_count;
> >
> > This is a simple indexing, so while sector_num is accurate a plain 'i'
> > would work too. It would also make some lines shorter.
> 
> Here I intentionally avoid using a single letter, because the existing
> code follows a pretty bad practice by using a double for loop.
> 
> Here we're doing two different loops: one iterates over all the sectors,
> the other iterates over all the mirrors.
> 
> Thus we need to distinguish them, or it can easily get screwed up by
> using the wrong loop index.

Yeah in this case it makes more sense to keep the descriptive name.


end of thread

Thread overview: 8+ messages
-- links below jump to the message on this page --
2022-03-11  7:34 [PATCH v2 0/3] btrfs: scrub: big renaming to address the page and sector difference Qu Wenruo
2022-03-11  7:34 ` [PATCH v2 1/3] btrfs: scrub: rename members related to scrub_block::pagev Qu Wenruo
2022-03-11 17:49   ` David Sterba
2022-03-11 23:17     ` Qu Wenruo
2022-03-14 19:17       ` David Sterba
2022-03-11  7:34 ` [PATCH v2 2/3] btrfs: scrub: rename scrub_page to scrub_sector Qu Wenruo
2022-03-11 18:01   ` David Sterba
2022-03-11  7:34 ` [PATCH v2 3/3] btrfs: scrub: rename scrub_bio::pagev and related members Qu Wenruo
