linux-btrfs.vger.kernel.org archive mirror
* [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version)
@ 2020-10-26  7:11 Qu Wenruo
  2020-10-26  7:11 ` [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page Qu Wenruo
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

To my surprise, the scrub functionality is completely independent, thus
it can support subpage much more easily than the subpage read/write path
itself.

So here comes the independent subpage scrub support.

== BACKGROUND ==
With my experimental subpage read/write support, scrub always reports
my subpage fs as completely corrupted: every tree block and every data
sector is reported as corrupted.

Thus there must be something wrong in how scrub interacts with subpage.

== CAUSE ==
It turns out that scrub hard codes PAGE_SIZE all over the place, and
always assumes PAGE_SIZE == sectorsize.
The scrub_page structure is in fact more like a scrub_sector, as it only
stores one sector.

But there is also some good news: since scrub submits its own read/write
bios, it avoids all the hassle of handling page-unaligned sectors.

== WORKAROUND ==
The workaround is pretty straightforward: always store just one sector
in each scrub_page, and teach the scrub_checksum_*() functions to follow
the sector size of each scrub_page.

The cost is pretty obvious on 64K page size systems.
With a 4K sector size, we need a full 64K page to scrub each 4K sector,
so we end up allocating 16 times more page memory than a 4K page size
system does for the same data.
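
As a concrete example (just restating the arithmetic above): scrubbing
64K of data with a 4K sector size takes 16 scrub_pages, each backed by
its own 64K page, i.e. 1MiB of page memory, versus 64K of page memory
on a 4K page size system, hence the factor of 16.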

But still, the cost should be more or less acceptable for now.

== TODO ==
To properly handle the case, we should get rid of scrub_page completely.

The main purpose of scrub_page is just to store the per-sector csum.
In fact all the members like logical/physical/physical_for_dev_replace
can be calculated from scrub_block.

If we can store pages/csums/csums_bitmap in scrub_block, we can easily
do a proper page-based csum check for both data and metadata, and take
advantage of the much larger page size.

But that work is beyond the scope of subpage support; I will take it up
after the subpage functionality is fully complete.
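
For illustration only, here is a minimal sketch of the direction (all
field names below are hypothetical, not taken from any existing patch):

	/* Hypothetical layout, only to illustrate the TODO above */
	struct scrub_block {
		/* Full pages backing the whole block, not one page per sector */
		struct page	*pages[SCRUB_MAX_PAGES_PER_BLOCK];
		int		page_count;
		int		sector_count;

		/* One csum slot per sector, each csum_size bytes */
		u8		*csums;
		/* Which sectors actually have a csum */
		unsigned long	csum_bitmap;

		/*
		 * Per-sector logical/physical/physical_for_dev_replace can
		 * be derived from these plus the sector index.
		 */
		u64		logical;
		u64		physical;
		u64		physical_for_dev_replace;

		/* Existing members (sctx, refs, error flags, ...) stay as-is */
	};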

== PATCHSET STRUCTURE ==
Patch 01~04:	Small refactors and cleanups spotted during the
		development.
Patch 05~08:	Support for subpage scrub.

All these patches will also be included in the next subpage patchset
update, but considering they are far more independent than the rest of
the subpage patchset, it's still worth submitting them separately.


The support doesn't change anything for the existing sectorsize ==
PAGE_SIZE cases.
Both 4K and 64K page size systems have been tested.

For subpage, only basic scrub and repair have been tested, and there are
still some blockers for a full fstests run.

Qu Wenruo (8):
  btrfs: scrub: distinguish scrub_page from regular page
  btrfs: scrub: remove the @force parameter of scrub_pages()
  btrfs: scrub: use flexible array for scrub_page::csums
  btrfs: scrub: refactor scrub_find_csum()
  btrfs: scrub: introduce scrub_page::page_len for subpage support
  btrfs: scrub: always allocate one full page for one sector for RAID56
  btrfs: scrub: support subpage tree block scrub
  btrfs: scrub: support subpage data scrub

 fs/btrfs/scrub.c | 292 +++++++++++++++++++++++++++++++----------------
 1 file changed, 192 insertions(+), 100 deletions(-)

-- 
2.29.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26 14:13   ` Josef Bacik
  2020-10-26  7:11 ` [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages() Qu Wenruo
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

There are several call sites where we declare something like
"struct scrub_page *page".

This is just asking for trouble when reading the code, as we also have
the scrub_page::page member.

To avoid confusion, use "spage" for scrub_page structure pointers.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 102 +++++++++++++++++++++++------------------------
 1 file changed, 51 insertions(+), 51 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 806523515d2f..0e25f9391b93 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -256,10 +256,10 @@ static void __scrub_blocked_if_needed(struct btrfs_fs_info *fs_info);
 static void scrub_blocked_if_needed(struct btrfs_fs_info *fs_info);
 static void scrub_put_ctx(struct scrub_ctx *sctx);
 
-static inline int scrub_is_page_on_raid56(struct scrub_page *page)
+static inline int scrub_is_page_on_raid56(struct scrub_page *spage)
 {
-	return page->recover &&
-	       (page->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
+	return spage->recover &&
+	       (spage->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
 }
 
 static void scrub_pending_bio_inc(struct scrub_ctx *sctx)
@@ -1092,11 +1092,11 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	success = 1;
 	for (page_num = 0; page_num < sblock_bad->page_count;
 	     page_num++) {
-		struct scrub_page *page_bad = sblock_bad->pagev[page_num];
+		struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
 		struct scrub_block *sblock_other = NULL;
 
 		/* skip no-io-error page in scrub */
-		if (!page_bad->io_error && !sctx->is_dev_replace)
+		if (!spage_bad->io_error && !sctx->is_dev_replace)
 			continue;
 
 		if (scrub_is_page_on_raid56(sblock_bad->pagev[0])) {
@@ -1108,7 +1108,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 			 * sblock_for_recheck array to target device.
 			 */
 			sblock_other = NULL;
-		} else if (page_bad->io_error) {
+		} else if (spage_bad->io_error) {
 			/* try to find no-io-error page in mirrors */
 			for (mirror_index = 0;
 			     mirror_index < BTRFS_MAX_MIRRORS &&
@@ -1147,7 +1147,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 							       sblock_other,
 							       page_num, 0);
 			if (0 == ret)
-				page_bad->io_error = 0;
+				spage_bad->io_error = 0;
 			else
 				success = 0;
 		}
@@ -1325,13 +1325,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		for (mirror_index = 0; mirror_index < nmirrors;
 		     mirror_index++) {
 			struct scrub_block *sblock;
-			struct scrub_page *page;
+			struct scrub_page *spage;
 
 			sblock = sblocks_for_recheck + mirror_index;
 			sblock->sctx = sctx;
 
-			page = kzalloc(sizeof(*page), GFP_NOFS);
-			if (!page) {
+			spage = kzalloc(sizeof(*spage), GFP_NOFS);
+			if (!spage) {
 leave_nomem:
 				spin_lock(&sctx->stat_lock);
 				sctx->stat.malloc_errors++;
@@ -1339,15 +1339,15 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 				scrub_put_recover(fs_info, recover);
 				return -ENOMEM;
 			}
-			scrub_page_get(page);
-			sblock->pagev[page_index] = page;
-			page->sblock = sblock;
-			page->flags = flags;
-			page->generation = generation;
-			page->logical = logical;
-			page->have_csum = have_csum;
+			scrub_page_get(spage);
+			sblock->pagev[page_index] = spage;
+			spage->sblock = sblock;
+			spage->flags = flags;
+			spage->generation = generation;
+			spage->logical = logical;
+			spage->have_csum = have_csum;
 			if (have_csum)
-				memcpy(page->csum,
+				memcpy(spage->csum,
 				       original_sblock->pagev[0]->csum,
 				       sctx->csum_size);
 
@@ -1360,23 +1360,23 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 						      mirror_index,
 						      &stripe_index,
 						      &stripe_offset);
-			page->physical = bbio->stripes[stripe_index].physical +
+			spage->physical = bbio->stripes[stripe_index].physical +
 					 stripe_offset;
-			page->dev = bbio->stripes[stripe_index].dev;
+			spage->dev = bbio->stripes[stripe_index].dev;
 
 			BUG_ON(page_index >= original_sblock->page_count);
-			page->physical_for_dev_replace =
+			spage->physical_for_dev_replace =
 				original_sblock->pagev[page_index]->
 				physical_for_dev_replace;
 			/* for missing devices, dev->bdev is NULL */
-			page->mirror_num = mirror_index + 1;
+			spage->mirror_num = mirror_index + 1;
 			sblock->page_count++;
-			page->page = alloc_page(GFP_NOFS);
-			if (!page->page)
+			spage->page = alloc_page(GFP_NOFS);
+			if (!spage->page)
 				goto leave_nomem;
 
 			scrub_get_recover(recover);
-			page->recover = recover;
+			spage->recover = recover;
 		}
 		scrub_put_recover(fs_info, recover);
 		length -= sublen;
@@ -1394,19 +1394,19 @@ static void scrub_bio_wait_endio(struct bio *bio)
 
 static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info,
 					struct bio *bio,
-					struct scrub_page *page)
+					struct scrub_page *spage)
 {
 	DECLARE_COMPLETION_ONSTACK(done);
 	int ret;
 	int mirror_num;
 
-	bio->bi_iter.bi_sector = page->logical >> 9;
+	bio->bi_iter.bi_sector = spage->logical >> 9;
 	bio->bi_private = &done;
 	bio->bi_end_io = scrub_bio_wait_endio;
 
-	mirror_num = page->sblock->pagev[0]->mirror_num;
-	ret = raid56_parity_recover(fs_info, bio, page->recover->bbio,
-				    page->recover->map_length,
+	mirror_num = spage->sblock->pagev[0]->mirror_num;
+	ret = raid56_parity_recover(fs_info, bio, spage->recover->bbio,
+				    spage->recover->map_length,
 				    mirror_num, 0);
 	if (ret)
 		return ret;
@@ -1431,10 +1431,10 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 	bio_set_dev(bio, first_page->dev->bdev);
 
 	for (page_num = 0; page_num < sblock->page_count; page_num++) {
-		struct scrub_page *page = sblock->pagev[page_num];
+		struct scrub_page *spage = sblock->pagev[page_num];
 
-		WARN_ON(!page->page);
-		bio_add_page(bio, page->page, PAGE_SIZE, 0);
+		WARN_ON(!spage->page);
+		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
 	}
 
 	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) {
@@ -1475,24 +1475,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 
 	for (page_num = 0; page_num < sblock->page_count; page_num++) {
 		struct bio *bio;
-		struct scrub_page *page = sblock->pagev[page_num];
+		struct scrub_page *spage = sblock->pagev[page_num];
 
-		if (page->dev->bdev == NULL) {
-			page->io_error = 1;
+		if (spage->dev->bdev == NULL) {
+			spage->io_error = 1;
 			sblock->no_io_error_seen = 0;
 			continue;
 		}
 
-		WARN_ON(!page->page);
+		WARN_ON(!spage->page);
 		bio = btrfs_io_bio_alloc(1);
-		bio_set_dev(bio, page->dev->bdev);
+		bio_set_dev(bio, spage->dev->bdev);
 
-		bio_add_page(bio, page->page, PAGE_SIZE, 0);
-		bio->bi_iter.bi_sector = page->physical >> 9;
+		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
+		bio->bi_iter.bi_sector = spage->physical >> 9;
 		bio->bi_opf = REQ_OP_READ;
 
 		if (btrfsic_submit_bio_wait(bio)) {
-			page->io_error = 1;
+			spage->io_error = 1;
 			sblock->no_io_error_seen = 0;
 		}
 
@@ -1548,36 +1548,36 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
 					    struct scrub_block *sblock_good,
 					    int page_num, int force_write)
 {
-	struct scrub_page *page_bad = sblock_bad->pagev[page_num];
-	struct scrub_page *page_good = sblock_good->pagev[page_num];
+	struct scrub_page *spage_bad = sblock_bad->pagev[page_num];
+	struct scrub_page *spage_good = sblock_good->pagev[page_num];
 	struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info;
 
-	BUG_ON(page_bad->page == NULL);
-	BUG_ON(page_good->page == NULL);
+	BUG_ON(spage_bad->page == NULL);
+	BUG_ON(spage_good->page == NULL);
 	if (force_write || sblock_bad->header_error ||
-	    sblock_bad->checksum_error || page_bad->io_error) {
+	    sblock_bad->checksum_error || spage_bad->io_error) {
 		struct bio *bio;
 		int ret;
 
-		if (!page_bad->dev->bdev) {
+		if (!spage_bad->dev->bdev) {
 			btrfs_warn_rl(fs_info,
 				"scrub_repair_page_from_good_copy(bdev == NULL) is unexpected");
 			return -EIO;
 		}
 
 		bio = btrfs_io_bio_alloc(1);
-		bio_set_dev(bio, page_bad->dev->bdev);
-		bio->bi_iter.bi_sector = page_bad->physical >> 9;
+		bio_set_dev(bio, spage_bad->dev->bdev);
+		bio->bi_iter.bi_sector = spage_bad->physical >> 9;
 		bio->bi_opf = REQ_OP_WRITE;
 
-		ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0);
+		ret = bio_add_page(bio, spage_good->page, PAGE_SIZE, 0);
 		if (PAGE_SIZE != ret) {
 			bio_put(bio);
 			return -EIO;
 		}
 
 		if (btrfsic_submit_bio_wait(bio)) {
-			btrfs_dev_stat_inc_and_print(page_bad->dev,
+			btrfs_dev_stat_inc_and_print(spage_bad->dev,
 				BTRFS_DEV_STAT_WRITE_ERRS);
 			atomic64_inc(&fs_info->dev_replace.num_write_errors);
 			bio_put(bio);
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages()
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
  2020-10-26  7:11 ` [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26 14:20   ` Josef Bacik
  2020-10-26  7:11 ` [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums Qu Wenruo
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

The @force parameter of scrub_pages() indicates whether we want to
force bio submission.

Currently it's only used for super block scrub, and it can easily be
determined from @flags.

So remove the parameter to make the parameter list a little shorter.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 0e25f9391b93..3cb51d1f061f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -237,7 +237,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 				    struct scrub_page *spage);
 static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		       u64 physical, struct btrfs_device *dev, u64 flags,
-		       u64 gen, int mirror_num, u8 *csum, int force,
+		       u64 gen, int mirror_num, u8 *csum,
 		       u64 physical_for_dev_replace);
 static void scrub_bio_end_io(struct bio *bio);
 static void scrub_bio_end_io_worker(struct btrfs_work *work);
@@ -2152,12 +2152,16 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock)
 
 static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		       u64 physical, struct btrfs_device *dev, u64 flags,
-		       u64 gen, int mirror_num, u8 *csum, int force,
+		       u64 gen, int mirror_num, u8 *csum,
 		       u64 physical_for_dev_replace)
 {
 	struct scrub_block *sblock;
+	bool force_submit = false;
 	int index;
 
+	if (flags & BTRFS_EXTENT_FLAG_SUPER)
+		force_submit = true;
+
 	sblock = kzalloc(sizeof(*sblock), GFP_KERNEL);
 	if (!sblock) {
 		spin_lock(&sctx->stat_lock);
@@ -2231,7 +2235,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 			}
 		}
 
-		if (force)
+		if (force_submit)
 			scrub_submit(sctx);
 	}
 
@@ -2442,7 +2446,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map,
 				++sctx->stat.no_csum;
 		}
 		ret = scrub_pages(sctx, logical, l, physical, dev, flags, gen,
-				  mirror_num, have_csum ? csum : NULL, 0,
+				  mirror_num, have_csum ? csum : NULL,
 				  physical_for_dev_replace);
 		if (ret)
 			return ret;
@@ -3707,7 +3711,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
 
 		ret = scrub_pages(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
 				  scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
-				  NULL, 1, bytenr);
+				  NULL, bytenr);
 		if (ret)
 			return ret;
 	}
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
  2020-10-26  7:11 ` [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page Qu Wenruo
  2020-10-26  7:11 ` [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages() Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26 14:23   ` Josef Bacik
  2020-10-26  7:11 ` [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum() Qu Wenruo
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

There are several factors affecting how many checksum bytes are needed
for one scrub_page:

- Sector size and page size
  For the subpage case, one page can contain several sectors, thus the
  csum size will differ.

- Checksum size
  Btrfs now supports different csum sizes, which can vary from 4 bytes
  for CRC32 to 32 bytes for SHA256.

So instead of using the fixed BTRFS_CSUM_SIZE, use a flexible array for
scrub_page::csums, and determine its size at scrub_page allocation time.

This not only provides the basis for later subpage scrub support, but
also reduces memory usage for the default btrfs configuration on x86_64.
Since the default CRC32 csum only uses 4 bytes, we save 28 bytes for
each scrub_page.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 41 ++++++++++++++++++++++++++++++-----------
 1 file changed, 30 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 3cb51d1f061f..321d6d457942 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -76,9 +76,14 @@ struct scrub_page {
 		unsigned int	have_csum:1;
 		unsigned int	io_error:1;
 	};
-	u8			csum[BTRFS_CSUM_SIZE];
-
 	struct scrub_recover	*recover;
+
+	/*
+	 * The csums size for the page is determined by page size,
+	 * sector size and csum size.
+	 * Thus the length has to be determined at runtime.
+	 */
+	u8			csums[];
 };
 
 struct scrub_bio {
@@ -207,6 +212,19 @@ struct full_stripe_lock {
 	struct mutex mutex;
 };
 
+static struct scrub_page *alloc_scrub_page(struct scrub_ctx *sctx, gfp_t mask)
+{
+	u32 sectorsize = sctx->fs_info->sectorsize;
+	size_t size;
+
+	/* No support for multi-page sector size yet */
+	ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize));
+
+	size = sizeof(struct scrub_page);
+	size += (PAGE_SIZE / sectorsize) * sctx->csum_size;
+	return kzalloc(size, mask);
+}
+
 static void scrub_pending_bio_inc(struct scrub_ctx *sctx);
 static void scrub_pending_bio_dec(struct scrub_ctx *sctx);
 static int scrub_handle_errored_block(struct scrub_block *sblock_to_check);
@@ -1330,7 +1348,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 			sblock = sblocks_for_recheck + mirror_index;
 			sblock->sctx = sctx;
 
-			spage = kzalloc(sizeof(*spage), GFP_NOFS);
+			spage = alloc_scrub_page(sctx, GFP_NOFS);
 			if (!spage) {
 leave_nomem:
 				spin_lock(&sctx->stat_lock);
@@ -1347,8 +1365,8 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 			spage->logical = logical;
 			spage->have_csum = have_csum;
 			if (have_csum)
-				memcpy(spage->csum,
-				       original_sblock->pagev[0]->csum,
+				memcpy(spage->csums,
+				       original_sblock->pagev[0]->csums,
 				       sctx->csum_size);
 
 			scrub_stripe_index_and_offset(logical,
@@ -1800,7 +1818,7 @@ static int scrub_checksum_data(struct scrub_block *sblock)
 	crypto_shash_init(shash);
 	crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum);
 
-	if (memcmp(csum, spage->csum, sctx->csum_size))
+	if (memcmp(csum, spage->csums, sctx->csum_size))
 		sblock->checksum_error = 1;
 
 	return sblock->checksum_error;
@@ -2180,7 +2198,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		struct scrub_page *spage;
 		u64 l = min_t(u64, len, PAGE_SIZE);
 
-		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
+		spage = alloc_scrub_page(sctx, GFP_KERNEL);
 		if (!spage) {
 leave_nomem:
 			spin_lock(&sctx->stat_lock);
@@ -2202,7 +2220,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		spage->mirror_num = mirror_num;
 		if (csum) {
 			spage->have_csum = 1;
-			memcpy(spage->csum, csum, sctx->csum_size);
+			memcpy(spage->csums, csum, sctx->csum_size);
 		} else {
 			spage->have_csum = 0;
 		}
@@ -2487,7 +2505,9 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		struct scrub_page *spage;
 		u64 l = min_t(u64, len, PAGE_SIZE);
 
-		spage = kzalloc(sizeof(*spage), GFP_KERNEL);
+		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
+
+		spage = alloc_scrub_page(sctx, GFP_KERNEL);
 		if (!spage) {
 leave_nomem:
 			spin_lock(&sctx->stat_lock);
@@ -2496,7 +2516,6 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
 		/* For scrub block */
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;
@@ -2512,7 +2531,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		spage->mirror_num = mirror_num;
 		if (csum) {
 			spage->have_csum = 1;
-			memcpy(spage->csum, csum, sctx->csum_size);
+			memcpy(spage->csums, csum, sctx->csum_size);
 		} else {
 			spage->have_csum = 0;
 		}
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum()
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
                   ` (2 preceding siblings ...)
  2020-10-26  7:11 ` [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26 14:39   ` Josef Bacik
  2020-10-26  7:11 ` [PATCH 5/8] btrfs: scrub: introduce scrub_page::page_len for subpage support Qu Wenruo
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

Function scrub_find_csum() locates the csum for bytenr @logical from
sctx->csum_list.

However it lacks comments explaining things like how the csum_list is
organized and why we need to drop csum ranges which are before us.

Refactor the function by:
- Adding more comments explaining the behavior
- Adding a comment explaining why we need to drop the csum range
- Putting the csum copy in the main loop
  This is mostly for the incoming patches, to make scrub_find_csum()
  able to find multiple checksums.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 70 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 321d6d457942..0d078393f986 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2386,37 +2386,69 @@ static void scrub_block_complete(struct scrub_block *sblock)
 	}
 }
 
+static void drop_csum_range(struct scrub_ctx *sctx,
+			    struct btrfs_ordered_sum *sum)
+{
+	u32 sectorsize = sctx->fs_info->sectorsize;
+
+	sctx->stat.csum_discards += sum->len / sectorsize;
+	list_del(&sum->list);
+	kfree(sum);
+}
+
+/*
+ * Find the desired csum for range [@logical, @logical + sectorsize), and
+ * store the csum into @csum.
+ *
+ * The search source is sctx->csum_list, which is a pre-populated list
+ * storing bytenr ordered csum ranges.
+ * We're responsible to cleanup any range that is before @logical.
+ *
+ * Return 0 if there is no csum for the range.
+ * Return 1 if there is csum for the range and copied to @csum.
+ */
 static int scrub_find_csum(struct scrub_ctx *sctx, u64 logical, u8 *csum)
 {
-	struct btrfs_ordered_sum *sum = NULL;
-	unsigned long index;
-	unsigned long num_sectors;
+	u32 sectorsize = sctx->fs_info->sectorsize;
+	u32 csum_size = sctx->csum_size;
+	bool found = false;
 
 	while (!list_empty(&sctx->csum_list)) {
+		struct btrfs_ordered_sum *sum = NULL;
+		unsigned long index;
+		unsigned long num_sectors;
+
 		sum = list_first_entry(&sctx->csum_list,
 				       struct btrfs_ordered_sum, list);
+		/* The current csum range is beyond our range, no csum found */
 		if (sum->bytenr > logical)
-			return 0;
-		if (sum->bytenr + sum->len > logical)
 			break;
 
-		++sctx->stat.csum_discards;
-		list_del(&sum->list);
-		kfree(sum);
-		sum = NULL;
-	}
-	if (!sum)
-		return 0;
+		/*
+		 * The current sum is before our bytenr, since scrub is
+		 * always done in bytenr order, the csum will never be used
+		 * anymore, clean it up so that later calls won't bother the
+		 * range, and continue search the next range.
+		 */
+		if (sum->bytenr + sum->len <= logical) {
+			drop_csum_range(sctx, sum);
+			continue;
+		}
 
-	index = div_u64(logical - sum->bytenr, sctx->fs_info->sectorsize);
-	ASSERT(index < UINT_MAX);
+		/* Now the csum range covers our bytenr, copy the csum */
+		found = true;
+		index = div_u64(logical - sum->bytenr, sectorsize);
+		num_sectors = sum->len / sectorsize;
 
-	num_sectors = sum->len / sctx->fs_info->sectorsize;
-	memcpy(csum, sum->sums + index * sctx->csum_size, sctx->csum_size);
-	if (index == num_sectors - 1) {
-		list_del(&sum->list);
-		kfree(sum);
+		memcpy(csum, sum->sums + index * csum_size, csum_size);
+
+		/* Cleanup the range if we're at the end of the csum range */
+		if (index == num_sectors - 1)
+			drop_csum_range(sctx, sum);
+		break;
 	}
+	if (!found)
+		return 0;
 	return 1;
 }
 
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 5/8] btrfs: scrub: introduce scrub_page::page_len for subpage support
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
                   ` (3 preceding siblings ...)
  2020-10-26  7:11 ` [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum() Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26  7:11 ` [PATCH 6/8] btrfs: scrub: always allocate one full page for one sector for RAID56 Qu Wenruo
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

Currently scrub_page only has one csum for each page. This is fine if
page size == sector size, as each page then has exactly one csum.

But for subpage support, we can have cases where only part of the page
is utilized, e.g. one 4K sector read into a 64K page.
In that case, we need a way to determine which range is really utilized.

This patch introduces scrub_page::page_len so that we know where the
utilized range ends.

This is especially important for subpage, as a write bio could overwrite
existing data if we just submitted a full page bio.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 0d078393f986..bad88c651dd4 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -72,9 +72,15 @@ struct scrub_page {
 	u64			physical_for_dev_replace;
 	atomic_t		refs;
 	struct {
-		unsigned int	mirror_num:8;
-		unsigned int	have_csum:1;
-		unsigned int	io_error:1;
+		/*
+		 * For subpage case, where only part of the page is utilized
+		 * Note that 16 bits can only go 65535, not 65536, thus we have
+		 * to use 17 bits here.
+		 */
+		u32	page_len:17;
+		u32	mirror_num:8;
+		u32	have_csum:1;
+		u32	io_error:1;
 	};
 	struct scrub_recover	*recover;
 
@@ -217,6 +223,11 @@ static struct scrub_page *alloc_scrub_page(struct scrub_ctx *sctx, gfp_t mask)
 	u32 sectorsize = sctx->fs_info->sectorsize;
 	size_t size;
 
+	/*
+	 * The bits in scrub_page::page_len only supports up to 64K page size.
+	 */
+	BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
+
 	/* No support for multi-page sector size yet */
 	ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize));
 
@@ -1359,6 +1370,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 			}
 			scrub_page_get(spage);
 			sblock->pagev[page_index] = spage;
+			spage->page_len = sublen;
 			spage->sblock = sblock;
 			spage->flags = flags;
 			spage->generation = generation;
@@ -1452,7 +1464,7 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info,
 		struct scrub_page *spage = sblock->pagev[page_num];
 
 		WARN_ON(!spage->page);
-		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
+		bio_add_page(bio, spage->page, spage->page_len, 0);
 	}
 
 	if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) {
@@ -1505,7 +1517,7 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info,
 		bio = btrfs_io_bio_alloc(1);
 		bio_set_dev(bio, spage->dev->bdev);
 
-		bio_add_page(bio, spage->page, PAGE_SIZE, 0);
+		bio_add_page(bio, spage->page, spage->page_len, 0);
 		bio->bi_iter.bi_sector = spage->physical >> 9;
 		bio->bi_opf = REQ_OP_READ;
 
@@ -1588,8 +1600,8 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad,
 		bio->bi_iter.bi_sector = spage_bad->physical >> 9;
 		bio->bi_opf = REQ_OP_WRITE;
 
-		ret = bio_add_page(bio, spage_good->page, PAGE_SIZE, 0);
-		if (PAGE_SIZE != ret) {
+		ret = bio_add_page(bio, spage_good->page, spage_good->page_len, 0);
+		if (ret != spage_good->page_len) {
 			bio_put(bio);
 			return -EIO;
 		}
@@ -1685,8 +1697,8 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx,
 		goto again;
 	}
 
-	ret = bio_add_page(sbio->bio, spage->page, PAGE_SIZE, 0);
-	if (ret != PAGE_SIZE) {
+	ret = bio_add_page(sbio->bio, spage->page, spage->page_len, 0);
+	if (ret != spage->page_len) {
 		if (sbio->page_count < 1) {
 			bio_put(sbio->bio);
 			sbio->bio = NULL;
@@ -2033,8 +2045,8 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
 	}
 
 	sbio->pagev[sbio->page_count] = spage;
-	ret = bio_add_page(sbio->bio, spage->page, PAGE_SIZE, 0);
-	if (ret != PAGE_SIZE) {
+	ret = bio_add_page(sbio->bio, spage->page, spage->page_len, 0);
+	if (ret != spage->page_len) {
 		if (sbio->page_count < 1) {
 			bio_put(sbio->bio);
 			sbio->bio = NULL;
@@ -2210,6 +2222,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;
+		spage->page_len = l;
 		spage->sblock = sblock;
 		spage->dev = dev;
 		spage->flags = flags;
@@ -2554,6 +2567,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 		/* For scrub parity */
 		scrub_page_get(spage);
 		list_add_tail(&spage->list, &sparity->spages);
+		spage->page_len = l;
 		spage->sblock = sblock;
 		spage->dev = dev;
 		spage->flags = flags;
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 6/8] btrfs: scrub: always allocate one full page for one sector for RAID56
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
                   ` (4 preceding siblings ...)
  2020-10-26  7:11 ` [PATCH 5/8] btrfs: scrub: introduce scrub_page::page_len for subpage support Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26  7:11 ` [PATCH 7/8] btrfs: scrub: support subpage tree block scrub Qu Wenruo
  2020-10-26  7:11 ` [PATCH 8/8] btrfs: scrub: support subpage data scrub Qu Wenruo
  7 siblings, 0 replies; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

For scrub_pages() and scrub_pages_for_parity(), we currently allocate
one scrub_page structure for one page.

This is fine if we only read/write one sector at a time.
But for cases like scrubbing RAID56, we need to read/write the full
stripe, which is 64K in size.

For subpage, we would submit the read in just one page, which is
normally a good thing, but the RAID56 code only expects to see one
sector, not the full stripe, in its endio function.
This could lead to wrong parity checksums for RAID56 on subpage.

To make the existing code work well for the subpage case, here we take
a shortcut by always allocating a full page for one sector.

This should provide the basis to make RAID56 work for the subpage case.

The cost is pretty obvious: for one RAID56 stripe we now always need 16
pages. For the supported subpage situation (64K page size, 4K sector
size), this means we need a full megabyte to scrub just one RAID56
stripe.

And for data scrub, each 4K sector will also need one 64K page.

This is mostly just a workaround; the proper fix is a much larger
project, using scrub_block to replace scrub_page, and allowing
scrub_block to handle multiple pages, csums and a csum_bitmap to avoid
allocating one page for each sector.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index bad88c651dd4..10836aec389f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2186,6 +2186,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 		       u64 physical_for_dev_replace)
 {
 	struct scrub_block *sblock;
+	u32 sectorsize = sctx->fs_info->sectorsize;
 	bool force_submit = false;
 	int index;
 
@@ -2208,7 +2209,15 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
 
 	for (index = 0; len > 0; index++) {
 		struct scrub_page *spage;
-		u64 l = min_t(u64, len, PAGE_SIZE);
+		/*
+		 * Here we will allocate one page for one sector to scrub.
+		 * This is fine if PAGE_SIZE == sectorsize, but will cost
+		 * more memory for PAGE_SIZE > sectorsize case.
+		 *
+		 * TODO: Make scrub_block to handle multiple pages and csums,
+		 * so that we don't need scrub_page structure at all.
+		 */
+		u32 l = min_t(u32, sectorsize, len);
 
 		spage = alloc_scrub_page(sctx, GFP_KERNEL);
 		if (!spage) {
@@ -2528,8 +2537,11 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 {
 	struct scrub_ctx *sctx = sparity->sctx;
 	struct scrub_block *sblock;
+	u32 sectorsize = sctx->fs_info->sectorsize;
 	int index;
 
+	ASSERT(IS_ALIGNED(len, sectorsize));
+
 	sblock = kzalloc(sizeof(*sblock), GFP_KERNEL);
 	if (!sblock) {
 		spin_lock(&sctx->stat_lock);
@@ -2548,7 +2560,8 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity,
 
 	for (index = 0; len > 0; index++) {
 		struct scrub_page *spage;
-		u64 l = min_t(u64, len, PAGE_SIZE);
+		/* Check scrub_pages() for the reason why we use sectorsize */
+		u32 l = sectorsize;
 
 		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
 
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 7/8] btrfs: scrub: support subpage tree block scrub
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
                   ` (5 preceding siblings ...)
  2020-10-26  7:11 ` [PATCH 6/8] btrfs: scrub: always allocate one full page for one sector for RAID56 Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  2020-10-26  7:11 ` [PATCH 8/8] btrfs: scrub: support subpage data scrub Qu Wenruo
  7 siblings, 0 replies; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

To support subpage tree block scrub, scrub_checksum_tree_block() only
needs to learn 2 new tricks:
- Follow scrub_page::page_len
  Now that scrub_page only represents one sector, we need to follow it
  properly.

- Run the checksum on all sectors
  Since scrub_page only represents one sector, we need to run the hash
  on all sectors, no longer just the first (nodesize >> PAGE_SHIFT)
  pages.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 10836aec389f..13355cc2b1ae 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1841,15 +1841,21 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 	struct scrub_ctx *sctx = sblock->sctx;
 	struct btrfs_header *h;
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+	u32 sectorsize = sctx->fs_info->sectorsize;
+	u32 nodesize = sctx->fs_info->nodesize;
 	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
 	u8 calculated_csum[BTRFS_CSUM_SIZE];
 	u8 on_disk_csum[BTRFS_CSUM_SIZE];
-	const int num_pages = sctx->fs_info->nodesize >> PAGE_SHIFT;
+	const int num_sectors = nodesize / sectorsize;
 	int i;
 	struct scrub_page *spage;
 	char *kaddr;
 
 	BUG_ON(sblock->page_count < 1);
+
+	/* Each pagev[] is in fact just one sector, not a full page */
+	ASSERT(sblock->page_count == num_sectors);
+
 	spage = sblock->pagev[0];
 	kaddr = page_address(spage->page);
 	h = (struct btrfs_header *)kaddr;
@@ -1878,11 +1884,11 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
 	shash->tfm = fs_info->csum_shash;
 	crypto_shash_init(shash);
 	crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE,
-			    PAGE_SIZE - BTRFS_CSUM_SIZE);
+			    spage->page_len - BTRFS_CSUM_SIZE);
 
-	for (i = 1; i < num_pages; i++) {
+	for (i = 1; i < num_sectors; i++) {
 		kaddr = page_address(sblock->pagev[i]->page);
-		crypto_shash_update(shash, kaddr, PAGE_SIZE);
+		crypto_shash_update(shash, kaddr, sblock->pagev[i]->page_len);
 	}
 
 	crypto_shash_final(shash, calculated_csum);
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 8/8] btrfs: scrub: support subpage data scrub
  2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
                   ` (6 preceding siblings ...)
  2020-10-26  7:11 ` [PATCH 7/8] btrfs: scrub: support subpage tree block scrub Qu Wenruo
@ 2020-10-26  7:11 ` Qu Wenruo
  7 siblings, 0 replies; 13+ messages in thread
From: Qu Wenruo @ 2020-10-26  7:11 UTC (permalink / raw)
  To: linux-btrfs

Btrfs scrub is in fact much more flexible than the buffered data write
path, as we can read unaligned subpage data into page offset 0.

This ability makes subpage support much easier: we just need to check
each scrub_page::page_len and ensure we only calculate the hash for
[0, page_len) of a page, and call it a day for subpage scrub support.

There is one small thing to notice: for the subpage case, we still scrub
sector by sector.
This means we will submit a read bio for each sector to scrub, resulting
in the same number of read bios as on 4K page systems.

This behavior can be considered a good thing, if we want everything to
behave the same as on 4K page systems.
But it also means we're wasting the ability to submit larger bios on
64K page size systems.
This is another problem to consider in the future.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 13355cc2b1ae..55a357e9416e 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1824,15 +1824,19 @@ static int scrub_checksum_data(struct scrub_block *sblock)
 	if (!spage->have_csum)
 		return 0;
 
+	/*
+	 * In scrub_pages() and scrub_pages_for_parity() we ensure
+	 * each spage only contains just one sector of data.
+	 */
+	ASSERT(spage->page_len == sctx->fs_info->sectorsize);
 	kaddr = page_address(spage->page);
 
 	shash->tfm = fs_info->csum_shash;
 	crypto_shash_init(shash);
-	crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum);
+	crypto_shash_digest(shash, kaddr, spage->page_len, csum);
 
 	if (memcmp(csum, spage->csums, sctx->csum_size))
 		sblock->checksum_error = 1;
-
 	return sblock->checksum_error;
 }
 
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page
  2020-10-26  7:11 ` [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page Qu Wenruo
@ 2020-10-26 14:13   ` Josef Bacik
  0 siblings, 0 replies; 13+ messages in thread
From: Josef Bacik @ 2020-10-26 14:13 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 10/26/20 3:11 AM, Qu Wenruo wrote:
> There are several call sites where we declare something like
> "struct scrub_page *page".
> 
> This is just asking for troubles when read the code, as we also have
> scrub_page::page member.
> 
> To avoid confusion, use "spage" for scrub_page structure pointers.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages()
  2020-10-26  7:11 ` [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages() Qu Wenruo
@ 2020-10-26 14:20   ` Josef Bacik
  0 siblings, 0 replies; 13+ messages in thread
From: Josef Bacik @ 2020-10-26 14:20 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 10/26/20 3:11 AM, Qu Wenruo wrote:
> The @force parameter for scrub_pages() is to indicate whether we want to
> force bio submission.
> 
> Currently it's only used for super block scrub, and it can be easily
> determined by the @flags.
> 
> So remove the parameter to make the parameter a little shorter.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums
  2020-10-26  7:11 ` [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums Qu Wenruo
@ 2020-10-26 14:23   ` Josef Bacik
  0 siblings, 0 replies; 13+ messages in thread
From: Josef Bacik @ 2020-10-26 14:23 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 10/26/20 3:11 AM, Qu Wenruo wrote:
> There are several factors affecting how many checksum bytes are needed
> for one scrub_page:
> 
> - Sector size and page size
>    For subpage case, one page can contain several sectors, thus the csum
>    size will differ.
> 
> - Checksum size
>    Since btrfs supports different csum size now, which can vary from 4
>    bytes for CRC32 to 32 bytes for SHA256.
> 
> So instead of using fixed BTRFS_CSUM_SIZE, now use flexible array for
> scrub_page::csums, and determine the size at scrub_page allocation time.
> 
> This does not only provide the basis for later subpage scrub support,
> but also reduce the memory usage for default btrfs on x86_64.
> As the default CRC32 only uses 4 bytes, thus we can save 28 bytes for
> each scrub page.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>   fs/btrfs/scrub.c | 41 ++++++++++++++++++++++++++++++-----------
>   1 file changed, 30 insertions(+), 11 deletions(-)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 3cb51d1f061f..321d6d457942 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -76,9 +76,14 @@ struct scrub_page {
>   		unsigned int	have_csum:1;
>   		unsigned int	io_error:1;
>   	};
> -	u8			csum[BTRFS_CSUM_SIZE];
> -
>   	struct scrub_recover	*recover;
> +
> +	/*
> +	 * The csums size for the page is determined by page size,
> +	 * sector size and csum size.
> +	 * Thus the length has to be determined at runtime.
> +	 */
> +	u8			csums[];
>   };
>   
>   struct scrub_bio {
> @@ -207,6 +212,19 @@ struct full_stripe_lock {
>   	struct mutex mutex;
>   };
>   
> +static struct scrub_page *alloc_scrub_page(struct scrub_ctx *sctx, gfp_t mask)
> +{
> +	u32 sectorsize = sctx->fs_info->sectorsize;
> +	size_t size;
> +
> +	/* No support for multi-page sector size yet */
> +	ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize));

checkpatch complains about this

WARNING: Comparisons should place the constant on the right side of the test
#32: FILE: fs/btrfs/scrub.c:221:
+       ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize));


Once you fix that you can add

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum()
  2020-10-26  7:11 ` [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum() Qu Wenruo
@ 2020-10-26 14:39   ` Josef Bacik
  0 siblings, 0 replies; 13+ messages in thread
From: Josef Bacik @ 2020-10-26 14:39 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 10/26/20 3:11 AM, Qu Wenruo wrote:
> Function scrub_find_csum() is to locate the csum for bytenr @logical
> from sctx->csum_list.
> 
> However it lacks a lot of comments to explaining things like how the
> csum_list is organized and why we need to drop csum range which is
> before us.
> 
> Refactor the function by:
> - Add more comment explaining the behavior
> - Add comment explaining why we need to drop the csum range
> - Put the csum copy in the main loop
>    This is mostly for the incoming patches to make scrub_find_csum() able
>    to find multiple checksums.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>   fs/btrfs/scrub.c | 70 +++++++++++++++++++++++++++++++++++-------------
>   1 file changed, 51 insertions(+), 19 deletions(-)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 321d6d457942..0d078393f986 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -2386,37 +2386,69 @@ static void scrub_block_complete(struct scrub_block *sblock)
>   	}
>   }
>   
> +static void drop_csum_range(struct scrub_ctx *sctx,
> +			    struct btrfs_ordered_sum *sum)
> +{
> +	u32 sectorsize = sctx->fs_info->sectorsize;
> +
> +	sctx->stat.csum_discards += sum->len / sectorsize;
> +	list_del(&sum->list);
> +	kfree(sum);
> +}
> +
> +/*
> + * Find the desired csum for range [@logical, @logical + sectorsize), and
> + * store the csum into @csum.
> + *
> + * The search source is sctx->csum_list, which is a pre-populated list
> + * storing bytenr ordered csum ranges.
> + * We're responsible to cleanup any range that is before @logical.
> + *
> + * Return 0 if there is no csum for the range.
> + * Return 1 if there is csum for the range and copied to @csum.
> + */
>   static int scrub_find_csum(struct scrub_ctx *sctx, u64 logical, u8 *csum)
>   {
> -	struct btrfs_ordered_sum *sum = NULL;
> -	unsigned long index;
> -	unsigned long num_sectors;
> +	u32 sectorsize = sctx->fs_info->sectorsize;
> +	u32 csum_size = sctx->csum_size;
> +	bool found = false;
>   
>   	while (!list_empty(&sctx->csum_list)) {
> +		struct btrfs_ordered_sum *sum = NULL;
> +		unsigned long index;
> +		unsigned long num_sectors;
> +
>   		sum = list_first_entry(&sctx->csum_list,
>   				       struct btrfs_ordered_sum, list);
> +		/* The current csum range is beyond our range, no csum found */
>   		if (sum->bytenr > logical)
> -			return 0;
> -		if (sum->bytenr + sum->len > logical)
>   			break;
>   
> -		++sctx->stat.csum_discards;
> -		list_del(&sum->list);
> -		kfree(sum);
> -		sum = NULL;
> -	}
> -	if (!sum)
> -		return 0;
> +		/*
> +		 * The current sum is before our bytenr, since scrub is
> +		 * always done in bytenr order, the csum will never be used
> +		 * anymore, clean it up so that later calls won't bother the
> +		 * range, and continue search the next range.
> +		 */
> +		if (sum->bytenr + sum->len <= logical) {
> +			drop_csum_range(sctx, sum);
> +			continue;
> +		}
>   
> -	index = div_u64(logical - sum->bytenr, sctx->fs_info->sectorsize);
> -	ASSERT(index < UINT_MAX);
> +		/* Now the csum range covers our bytenr, copy the csum */
> +		found = true;
> +		index = div_u64(logical - sum->bytenr, sectorsize);
> +		num_sectors = sum->len / sectorsize;
>   
> -	num_sectors = sum->len / sctx->fs_info->sectorsize;
> -	memcpy(csum, sum->sums + index * sctx->csum_size, sctx->csum_size);
> -	if (index == num_sectors - 1) {
> -		list_del(&sum->list);
> -		kfree(sum);
> +		memcpy(csum, sum->sums + index * csum_size, csum_size);
> +
> +		/* Cleanup the range if we're at the end of the csum range */
> +		if (index == num_sectors - 1)
> +			drop_csum_range(sctx, sum);
> +		break;
>   	}
> +	if (!found)
> +		return 0;
>   	return 1;
>   }

If it's just a bool we're returning, change this to

static bool scrub_find_csum()

and do

return found.

Thanks,

Josef

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-10-26 14:40 UTC | newest]

Thread overview: 13+ messages
2020-10-26  7:11 [PATCH 0/8] btrfs: scrub: support subpage scrub (completely independent version) Qu Wenruo
2020-10-26  7:11 ` [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page Qu Wenruo
2020-10-26 14:13   ` Josef Bacik
2020-10-26  7:11 ` [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages() Qu Wenruo
2020-10-26 14:20   ` Josef Bacik
2020-10-26  7:11 ` [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums Qu Wenruo
2020-10-26 14:23   ` Josef Bacik
2020-10-26  7:11 ` [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum() Qu Wenruo
2020-10-26 14:39   ` Josef Bacik
2020-10-26  7:11 ` [PATCH 5/8] btrfs: scrub: introduce scrub_page::page_len for subpage support Qu Wenruo
2020-10-26  7:11 ` [PATCH 6/8] btrfs: scrub: always allocate one full page for one sector for RAID56 Qu Wenruo
2020-10-26  7:11 ` [PATCH 7/8] btrfs: scrub: support subpage tree block scrub Qu Wenruo
2020-10-26  7:11 ` [PATCH 8/8] btrfs: scrub: support subpage data scrub Qu Wenruo
