* [PATCH 0/2] btrfs: scrub: cleanup scrub_raid56_parity() using find_first_extent_item() helper
@ 2022-01-07 7:09 Qu Wenruo
2022-01-07 7:09 ` [PATCH 1/2] btrfs: refactor scrub_raid56_parity() Qu Wenruo
2022-01-07 7:09 ` [PATCH 2/2] btrfs: use find_first_extent_item() to replace the open-coded extent item search Qu Wenruo
0 siblings, 2 replies; 5+ messages in thread
From: Qu Wenruo @ 2022-01-07 7:09 UTC (permalink / raw)
To: linux-btrfs
This patchset depends on the previous patchset titled "btrfs: refactor
scrub entrances for each profile", and can be fetched from GitHub:
https://github.com/adam900710/linux/tree/refactor_scrub
This patchset does the remaining cleanup for the extent item iteration
in scrub_raid56_parity(), as that function is mostly a variant of
scrub_simple_mirror() with different target operations.
It refactors the double while loop into a single while loop with a
helper, then cleans up the open-coded extent item lookup code.
With this patchset, all extent item iteration code will share the new
find_first_extent_item() based solution.
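Stripped of the btrfs details, the core of the refactor is purely structural: a nested "for each stripe, for each item" double loop becomes an outer loop that calls a per-stripe helper. A minimal, self-contained C sketch of that shape (scan_one_stripe(), scan_range() and STRIPE_LEN are invented stand-ins for illustration, not the kernel code):

```c
#include <assert.h>

/*
 * Illustrative sketch only. The inner "for each item" scan lives in its
 * own helper, so the outer routine is reduced to one loop that advances
 * stripe by stripe, mirroring the for loop the patch introduces.
 */
#define STRIPE_LEN 4

/* Count items falling inside the stripe [start, start + STRIPE_LEN). */
static int scan_one_stripe(const int *items, int nr_items, int start)
{
	int found = 0;

	for (int i = 0; i < nr_items; i++) {
		if (items[i] >= start && items[i] < start + STRIPE_LEN)
			found++;
	}
	return found;
}

/* Outer loop: one helper call per stripe, no nested item iteration. */
static int scan_range(const int *items, int nr_items, int start, int end)
{
	int total = 0;

	for (int cur = start; cur < end; cur += STRIPE_LEN)
		total += scan_one_stripe(items, nr_items, cur);
	return total;
}
```

The kernel helper additionally owns the btrfs_path and the error marking, but the control-flow reduction is the same.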
Qu Wenruo (2):
btrfs: refactor scrub_raid56_parity()
btrfs: use find_first_extent_item() to replace the open-coded extent
item search
fs/btrfs/scrub.c | 299 +++++++++++++++++++----------------------------
1 file changed, 118 insertions(+), 181 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH 1/2] btrfs: refactor scrub_raid56_parity()
2022-01-07 7:09 [PATCH 0/2] btrfs: scrub: cleanup scrub_raid56_parity() using find_first_extent_item() helper Qu Wenruo
@ 2022-01-07 7:09 ` Qu Wenruo
2022-01-07 8:20 ` Damien Le Moal
2022-01-07 7:09 ` [PATCH 2/2] btrfs: use find_first_extent_item() to replace the open-coded extent item search Qu Wenruo
1 sibling, 1 reply; 5+ messages in thread
From: Qu Wenruo @ 2022-01-07 7:09 UTC (permalink / raw)
To: linux-btrfs
Currently scrub_raid56_parity() has a large double loop, handling the
following things at the same time:
- Iterate each data stripe
- Iterate each extent item in one data stripe
Refactor this mess by:
- Introduce a new helper to handle data stripe iteration
The new helper is scrub_raid56_data_stripe_for_parity(), which
only has one while() loop handling the extent items inside the
data stripe.
The code is still mostly the same as the old code.
- Call cond_resched() for each extent
Previously we only called cond_resched() under a complex if () check.
I see no special reason for that, and other scrub functions, like
scrub_simple_mirror(), already do the same cond_resched() after
scrubbing one extent.
- Add extra comments
To improve readability
Please note that this patch only addresses the double loop; there are
incoming patches to do further cleanup.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/scrub.c | 347 +++++++++++++++++++++++------------------------
1 file changed, 166 insertions(+), 181 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 9c6954d7f604..572852416b29 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3019,6 +3019,166 @@ static bool does_range_cross_boundary(u64 extent_start, u64 extent_len,
extent_start + extent_len > boundary_start + boudary_len);
}
+static int scrub_raid56_data_stripe_for_parity(struct scrub_ctx *sctx,
+ struct scrub_parity *sparity,
+ struct map_lookup *map,
+ struct btrfs_device *sdev,
+ struct btrfs_path *path,
+ u64 logical)
+{
+ struct btrfs_fs_info *fs_info = sctx->fs_info;
+ struct btrfs_root *extent_root = btrfs_extent_root(fs_info, logical);
+ struct btrfs_root *csum_root = btrfs_csum_root(fs_info, logical);
+ struct btrfs_key key;
+ int ret;
+
+ ASSERT(map->type & BTRFS_BLOCK_GROUP_RAID56_MASK);
+
+ /* Path should not be populated */
+ ASSERT(!path->nodes[0]);
+
+ if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
+ key.type = BTRFS_METADATA_ITEM_KEY;
+ else
+ key.type = BTRFS_EXTENT_ITEM_KEY;
+ key.objectid = logical;
+ key.offset = (u64)-1;
+
+ ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0);
+ if (ret < 0)
+ return ret;
+
+ if (ret > 0) {
+ ret = btrfs_previous_extent_item(extent_root, path, 0);
+ if (ret < 0)
+ return ret;
+ if (ret > 0) {
+ btrfs_release_path(path);
+ ret = btrfs_search_slot(NULL, extent_root, &key, path,
+ 0, 0);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ while (1) {
+ struct btrfs_io_context *bioc = NULL;
+ struct btrfs_device *extent_dev;
+ struct btrfs_extent_item *ei;
+ struct extent_buffer *l;
+ int slot;
+ u64 mapped_length;
+ u64 extent_size;
+ u64 extent_flags;
+ u64 extent_gen;
+ u64 extent_start;
+ u64 extent_physical;
+ u64 extent_mirror_num;
+
+ l = path->nodes[0];
+ slot = path->slots[0];
+ if (slot >= btrfs_header_nritems(l)) {
+ ret = btrfs_next_leaf(extent_root, path);
+ if (ret == 0)
+ continue;
+
+ /* No more extent items or error, exit */
+ break;
+ }
+ btrfs_item_key_to_cpu(l, &key, slot);
+
+ if (key.type != BTRFS_EXTENT_ITEM_KEY &&
+ key.type != BTRFS_METADATA_ITEM_KEY)
+ goto next;
+
+ if (key.type == BTRFS_METADATA_ITEM_KEY)
+ extent_size = fs_info->nodesize;
+ else
+ extent_size = key.offset;
+
+ if (key.objectid + extent_size <= logical)
+ goto next;
+
+ /* Beyond this data stripe */
+ if (key.objectid >= logical + map->stripe_len)
+ break;
+
+ ei = btrfs_item_ptr(l, slot, struct btrfs_extent_item);
+ extent_flags = btrfs_extent_flags(l, ei);
+ extent_gen = btrfs_extent_generation(l, ei);
+
+ if ((extent_flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) &&
+ (key.objectid < logical || key.objectid + extent_size >
+ logical + map->stripe_len)) {
+ btrfs_err(fs_info,
+ "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
+ key.objectid, logical);
+ spin_lock(&sctx->stat_lock);
+ sctx->stat.uncorrectable_errors++;
+ spin_unlock(&sctx->stat_lock);
+ goto next;
+ }
+
+ extent_start = key.objectid;
+ ASSERT(extent_size <= U32_MAX);
+
+ /* Truncate the range inside the data stripe */
+ if (extent_start < logical) {
+ extent_size -= logical - extent_start;
+ extent_start = logical;
+ }
+ if (extent_start + extent_size > logical + map->stripe_len)
+ extent_size = logical + map->stripe_len - extent_start;
+
+ scrub_parity_mark_sectors_data(sparity, extent_start, extent_size);
+
+ mapped_length = extent_size;
+ ret = btrfs_map_block(fs_info, BTRFS_MAP_READ, extent_start,
+ &mapped_length, &bioc, 0);
+ if (!ret) {
+ if (!bioc || mapped_length < extent_size)
+ ret = -EIO;
+ }
+ if (ret) {
+ btrfs_put_bioc(bioc);
+ scrub_parity_mark_sectors_error(sparity, extent_start,
+ extent_size);
+ break;
+ }
+ extent_physical = bioc->stripes[0].physical;
+ extent_mirror_num = bioc->mirror_num;
+ extent_dev = bioc->stripes[0].dev;
+ btrfs_put_bioc(bioc);
+
+ ret = btrfs_lookup_csums_range(csum_root, extent_start,
+ extent_start + extent_size - 1,
+ &sctx->csum_list, 1);
+ if (ret) {
+ scrub_parity_mark_sectors_error(sparity, extent_start,
+ extent_size);
+ break;
+ }
+
+ ret = scrub_extent_for_parity(sparity, extent_start,
+ extent_size, extent_physical,
+ extent_dev, extent_flags,
+ extent_gen, extent_mirror_num);
+ scrub_free_csums(sctx);
+
+ if (ret) {
+ scrub_parity_mark_sectors_error(sparity, extent_start,
+ extent_size);
+ break;
+ }
+
+ cond_resched();
+next:
+ path->slots[0]++;
+ }
+ btrfs_release_path(path);
+ return ret;
+}
+
static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
struct map_lookup *map,
struct btrfs_device *sdev,
@@ -3026,28 +3186,12 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
u64 logic_end)
{
struct btrfs_fs_info *fs_info = sctx->fs_info;
- struct btrfs_root *root = btrfs_extent_root(fs_info, logic_start);
- struct btrfs_root *csum_root;
- struct btrfs_extent_item *extent;
- struct btrfs_io_context *bioc = NULL;
struct btrfs_path *path;
- u64 flags;
+ u64 cur_logical;
int ret;
- int slot;
- struct extent_buffer *l;
- struct btrfs_key key;
- u64 generation;
- u64 extent_logical;
- u64 extent_physical;
- /* Check the comment in scrub_stripe() for why u32 is enough here */
- u32 extent_len;
- u64 mapped_length;
- struct btrfs_device *extent_dev;
struct scrub_parity *sparity;
int nsectors;
int bitmap_len;
- int extent_mirror_num;
- int stop_loop = 0;
path = btrfs_alloc_path();
if (!path) {
@@ -3085,173 +3229,14 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
sparity->ebitmap = (void *)sparity->bitmap + bitmap_len;
ret = 0;
- while (logic_start < logic_end) {
- if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
- key.type = BTRFS_METADATA_ITEM_KEY;
- else
- key.type = BTRFS_EXTENT_ITEM_KEY;
- key.objectid = logic_start;
- key.offset = (u64)-1;
-
- ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+ for (cur_logical = logic_start; cur_logical < logic_end;
+ cur_logical += map->stripe_len) {
+ ret = scrub_raid56_data_stripe_for_parity(sctx, sparity, map,
+ sdev, path, cur_logical);
if (ret < 0)
- goto out;
-
- if (ret > 0) {
- ret = btrfs_previous_extent_item(root, path, 0);
- if (ret < 0)
- goto out;
- if (ret > 0) {
- btrfs_release_path(path);
- ret = btrfs_search_slot(NULL, root, &key,
- path, 0, 0);
- if (ret < 0)
- goto out;
- }
- }
-
- stop_loop = 0;
- while (1) {
- u64 bytes;
-
- l = path->nodes[0];
- slot = path->slots[0];
- if (slot >= btrfs_header_nritems(l)) {
- ret = btrfs_next_leaf(root, path);
- if (ret == 0)
- continue;
- if (ret < 0)
- goto out;
-
- stop_loop = 1;
- break;
- }
- btrfs_item_key_to_cpu(l, &key, slot);
-
- if (key.type != BTRFS_EXTENT_ITEM_KEY &&
- key.type != BTRFS_METADATA_ITEM_KEY)
- goto next;
-
- if (key.type == BTRFS_METADATA_ITEM_KEY)
- bytes = fs_info->nodesize;
- else
- bytes = key.offset;
-
- if (key.objectid + bytes <= logic_start)
- goto next;
-
- if (key.objectid >= logic_end) {
- stop_loop = 1;
- break;
- }
-
- while (key.objectid >= logic_start + map->stripe_len)
- logic_start += map->stripe_len;
-
- extent = btrfs_item_ptr(l, slot,
- struct btrfs_extent_item);
- flags = btrfs_extent_flags(l, extent);
- generation = btrfs_extent_generation(l, extent);
-
- if ((flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) &&
- (key.objectid < logic_start ||
- key.objectid + bytes >
- logic_start + map->stripe_len)) {
- btrfs_err(fs_info,
- "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
- key.objectid, logic_start);
- spin_lock(&sctx->stat_lock);
- sctx->stat.uncorrectable_errors++;
- spin_unlock(&sctx->stat_lock);
- goto next;
- }
-again:
- extent_logical = key.objectid;
- ASSERT(bytes <= U32_MAX);
- extent_len = bytes;
-
- if (extent_logical < logic_start) {
- extent_len -= logic_start - extent_logical;
- extent_logical = logic_start;
- }
-
- if (extent_logical + extent_len >
- logic_start + map->stripe_len)
- extent_len = logic_start + map->stripe_len -
- extent_logical;
-
- scrub_parity_mark_sectors_data(sparity, extent_logical,
- extent_len);
-
- mapped_length = extent_len;
- bioc = NULL;
- ret = btrfs_map_block(fs_info, BTRFS_MAP_READ,
- extent_logical, &mapped_length, &bioc,
- 0);
- if (!ret) {
- if (!bioc || mapped_length < extent_len)
- ret = -EIO;
- }
- if (ret) {
- btrfs_put_bioc(bioc);
- goto out;
- }
- extent_physical = bioc->stripes[0].physical;
- extent_mirror_num = bioc->mirror_num;
- extent_dev = bioc->stripes[0].dev;
- btrfs_put_bioc(bioc);
-
- csum_root = btrfs_csum_root(fs_info, extent_logical);
- ret = btrfs_lookup_csums_range(csum_root,
- extent_logical,
- extent_logical + extent_len - 1,
- &sctx->csum_list, 1);
- if (ret)
- goto out;
-
- ret = scrub_extent_for_parity(sparity, extent_logical,
- extent_len,
- extent_physical,
- extent_dev, flags,
- generation,
- extent_mirror_num);
-
- scrub_free_csums(sctx);
-
- if (ret)
- goto out;
-
- if (extent_logical + extent_len <
- key.objectid + bytes) {
- logic_start += map->stripe_len;
-
- if (logic_start >= logic_end) {
- stop_loop = 1;
- break;
- }
-
- if (logic_start < key.objectid + bytes) {
- cond_resched();
- goto again;
- }
- }
-next:
- path->slots[0]++;
- }
-
- btrfs_release_path(path);
-
- if (stop_loop)
break;
-
- logic_start += map->stripe_len;
- }
-out:
- if (ret < 0) {
- ASSERT(logic_end - logic_start <= U32_MAX);
- scrub_parity_mark_sectors_error(sparity, logic_start,
- logic_end - logic_start);
}
+
scrub_parity_put(sparity);
scrub_submit(sctx);
mutex_lock(&sctx->wr_lock);
--
2.34.1
* [PATCH 2/2] btrfs: use find_first_extent_item() to replace the open-coded extent item search
2022-01-07 7:09 [PATCH 0/2] btrfs: scrub: cleanup scrub_raid56_parity() using find_first_extent_item() helper Qu Wenruo
2022-01-07 7:09 ` [PATCH 1/2] btrfs: refactor scrub_raid56_parity() Qu Wenruo
@ 2022-01-07 7:09 ` Qu Wenruo
1 sibling, 0 replies; 5+ messages in thread
From: Qu Wenruo @ 2022-01-07 7:09 UTC (permalink / raw)
To: linux-btrfs
Since we have find_first_extent_item() to iterate the extent items of a
certain range, there is no need to use the open-coded version.
Replace the final scrub call site with find_first_extent_item().
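The iteration idiom the patch converts to can be sketched outside the kernel as a "find first item overlapping a range, then advance past it" loop. In the sketch below, find_first_in_range() and count_items() are invented stand-ins for find_first_extent_item() and its caller loop, not the actual btrfs API:

```c
#include <assert.h>

/*
 * Return 0 and set *idx to the first item overlapping
 * [search_start, search_start + search_len), or return 1 if there is
 * none -- mirroring the ret > 0 "no more items" convention in the patch.
 */
static int find_first_in_range(const int *starts, const int *lens, int n,
			       int search_start, int search_len, int *idx)
{
	for (int i = 0; i < n; i++) {
		if (starts[i] + lens[i] > search_start &&
		    starts[i] < search_start + search_len) {
			*idx = i;
			return 0;
		}
	}
	return 1;
}

/* Caller skips any hole, then advances past the item, as the new loop does. */
static int count_items(const int *starts, const int *lens, int n,
		       int start, int end)
{
	int count = 0;

	for (int cur = start; cur < end; ) {
		int idx;

		if (find_first_in_range(starts, lens, n, cur, end - cur, &idx))
			break;	/* no more items in this range */
		count++;
		if (starts[idx] > cur)	/* skip the hole before the item */
			cur = starts[idx];
		cur += lens[idx];	/* then skip the item itself */
	}
	return count;
}
```

With the search helper doing the key handling, the caller never touches btrfs_key, btrfs_next_leaf() or slot bookkeeping directly, which is where the 48-line reduction in this patch comes from.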
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/scrub.c | 100 ++++++++++++-----------------------------------
1 file changed, 26 insertions(+), 74 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 572852416b29..14e6a13edc19 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3029,44 +3029,16 @@ static int scrub_raid56_data_stripe_for_parity(struct scrub_ctx *sctx,
struct btrfs_fs_info *fs_info = sctx->fs_info;
struct btrfs_root *extent_root = btrfs_extent_root(fs_info, logical);
struct btrfs_root *csum_root = btrfs_csum_root(fs_info, logical);
- struct btrfs_key key;
+ u64 cur_logical = logical;
int ret;
ASSERT(map->type & BTRFS_BLOCK_GROUP_RAID56_MASK);
-
/* Path should not be populated */
ASSERT(!path->nodes[0]);
- if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
- key.type = BTRFS_METADATA_ITEM_KEY;
- else
- key.type = BTRFS_EXTENT_ITEM_KEY;
- key.objectid = logical;
- key.offset = (u64)-1;
-
- ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0);
- if (ret < 0)
- return ret;
-
- if (ret > 0) {
- ret = btrfs_previous_extent_item(extent_root, path, 0);
- if (ret < 0)
- return ret;
- if (ret > 0) {
- btrfs_release_path(path);
- ret = btrfs_search_slot(NULL, extent_root, &key, path,
- 0, 0);
- if (ret < 0)
- return ret;
- }
- }
-
- while (1) {
+ while (cur_logical < logical + map->stripe_len) {
struct btrfs_io_context *bioc = NULL;
struct btrfs_device *extent_dev;
- struct btrfs_extent_item *ei;
- struct extent_buffer *l;
- int slot;
u64 mapped_length;
u64 extent_size;
u64 extent_flags;
@@ -3075,60 +3047,40 @@ static int scrub_raid56_data_stripe_for_parity(struct scrub_ctx *sctx,
u64 extent_physical;
u64 extent_mirror_num;
- l = path->nodes[0];
- slot = path->slots[0];
- if (slot >= btrfs_header_nritems(l)) {
- ret = btrfs_next_leaf(extent_root, path);
- if (ret == 0)
- continue;
-
- /* No more extent items or error, exit */
+ ret = find_first_extent_item(extent_root, path, cur_logical,
+ logical + map->stripe_len - cur_logical);
+ /* No more extent item in this data stripe */
+ if (ret > 0) {
+ ret = 0;
break;
}
- btrfs_item_key_to_cpu(l, &key, slot);
-
- if (key.type != BTRFS_EXTENT_ITEM_KEY &&
- key.type != BTRFS_METADATA_ITEM_KEY)
- goto next;
-
- if (key.type == BTRFS_METADATA_ITEM_KEY)
- extent_size = fs_info->nodesize;
- else
- extent_size = key.offset;
-
- if (key.objectid + extent_size <= logical)
- goto next;
-
- /* Beyond this data stripe */
- if (key.objectid >= logical + map->stripe_len)
+ if (ret < 0)
break;
+ get_extent_info(path, &extent_start, &extent_size,
+ &extent_flags, &extent_gen);
- ei = btrfs_item_ptr(l, slot, struct btrfs_extent_item);
- extent_flags = btrfs_extent_flags(l, ei);
- extent_gen = btrfs_extent_generation(l, ei);
-
+ /* Metadata should not cross stripe boundaries */
if ((extent_flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) &&
- (key.objectid < logical || key.objectid + extent_size >
- logical + map->stripe_len)) {
+ does_range_cross_boundary(extent_start, extent_size,
+ logical, map->stripe_len)) {
btrfs_err(fs_info,
- "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
- key.objectid, logical);
+ "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
+ extent_start, logical);
spin_lock(&sctx->stat_lock);
sctx->stat.uncorrectable_errors++;
spin_unlock(&sctx->stat_lock);
- goto next;
+ cur_logical += extent_size;
+ continue;
}
- extent_start = key.objectid;
- ASSERT(extent_size <= U32_MAX);
+ /* Skip hole range which doesn't have any extent */
+ cur_logical = max(extent_start, cur_logical);
- /* Truncate the range inside the data stripe */
- if (extent_start < logical) {
- extent_size -= logical - extent_start;
- extent_start = logical;
- }
- if (extent_start + extent_size > logical + map->stripe_len)
- extent_size = logical + map->stripe_len - extent_start;
+ /* Truncate the range inside this data stripe */
+ extent_size = min(extent_start + extent_size,
+ logical + map->stripe_len) - cur_logical;
+ extent_start = cur_logical;
+ ASSERT(extent_size <= U32_MAX);
scrub_parity_mark_sectors_data(sparity, extent_start, extent_size);
@@ -3172,8 +3124,7 @@ static int scrub_raid56_data_stripe_for_parity(struct scrub_ctx *sctx,
}
cond_resched();
-next:
- path->slots[0]++;
+ cur_logical += extent_size;
}
btrfs_release_path(path);
return ret;
@@ -3369,6 +3320,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
break;
get_extent_info(&path, &extent_start, &extent_len,
&extent_flags, &extent_gen);
+
/* Skip hole range which doesn't have any extent */
cur_logical = max(extent_start, cur_logical);
--
2.34.1
* Re: [PATCH 1/2] btrfs: refactor scrub_raid56_parity()
2022-01-07 7:09 ` [PATCH 1/2] btrfs: refactor scrub_raid56_parity() Qu Wenruo
@ 2022-01-07 8:20 ` Damien Le Moal
2022-01-07 9:52 ` Qu Wenruo
0 siblings, 1 reply; 5+ messages in thread
From: Damien Le Moal @ 2022-01-07 8:20 UTC (permalink / raw)
To: Qu Wenruo, linux-btrfs
On 1/7/22 16:09, Qu Wenruo wrote:
> Currently scrub_raid56_parity() has a large double loop, handling the
> following things at the same time:
>
> - Iterate each data stripe
> - Iterate each extent item in one data stripe
>
> Refactor this mess by:
>
> - Introduce a new helper to handle data stripe iteration
> The new helper is scrub_raid56_data_stripe_for_parity(), which
> only has one while() loop handling the extent items inside the
> data stripe.
>
> The code is still mostly the same as the old code.
>
> - Call cond_resched() for each extent
> Previously we only call cond_resched() under a complex if () check.
> I see no special reason to do that, and for other scrub functions,
> like scrub_simple_mirror() we're already doing the same cond_resched()
> after scrubbing one extent.
>
> - Add extra comments
> To improve the readability
>
> Please note that, this patch is only to address the double loop, there
> are incoming patches to do extra cleanup.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/scrub.c | 347 +++++++++++++++++++++++------------------------
> 1 file changed, 166 insertions(+), 181 deletions(-)
>
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 9c6954d7f604..572852416b29 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3019,6 +3019,166 @@ static bool does_range_cross_boundary(u64 extent_start, u64 extent_len,
> extent_start + extent_len > boundary_start + boudary_len);
> }
>
> +static int scrub_raid56_data_stripe_for_parity(struct scrub_ctx *sctx,
> + struct scrub_parity *sparity,
> + struct map_lookup *map,
> + struct btrfs_device *sdev,
> + struct btrfs_path *path,
> + u64 logical)
> +{
> + struct btrfs_fs_info *fs_info = sctx->fs_info;
> + struct btrfs_root *extent_root = btrfs_extent_root(fs_info, logical);
> + struct btrfs_root *csum_root = btrfs_csum_root(fs_info, logical);
> + struct btrfs_key key;
> + int ret;
> +
> + ASSERT(map->type & BTRFS_BLOCK_GROUP_RAID56_MASK);
> +
> + /* Path should not be populated */
> + ASSERT(!path->nodes[0]);
> +
> + if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
> + key.type = BTRFS_METADATA_ITEM_KEY;
> + else
> + key.type = BTRFS_EXTENT_ITEM_KEY;
> + key.objectid = logical;
> + key.offset = (u64)-1;
> +
> + ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0);
> + if (ret < 0)
> + return ret;
> +
> + if (ret > 0) {
> + ret = btrfs_previous_extent_item(extent_root, path, 0);
> + if (ret < 0)
> + return ret;
> + if (ret > 0) {
> + btrfs_release_path(path);
> + ret = btrfs_search_slot(NULL, extent_root, &key, path,
> + 0, 0);
> + if (ret < 0)
> + return ret;
> + }
> + }
> +
> + while (1) {
> + struct btrfs_io_context *bioc = NULL;
> + struct btrfs_device *extent_dev;
> + struct btrfs_extent_item *ei;
> + struct extent_buffer *l;
> + int slot;
> + u64 mapped_length;
> + u64 extent_size;
> + u64 extent_flags;
> + u64 extent_gen;
> + u64 extent_start;
> + u64 extent_physical;
> + u64 extent_mirror_num;
> +
> + l = path->nodes[0];
> + slot = path->slots[0];
> + if (slot >= btrfs_header_nritems(l)) {
> + ret = btrfs_next_leaf(extent_root, path);
> + if (ret == 0)
> + continue;
> +
> + /* No more extent items or error, exit */
> + break;
> + }
> + btrfs_item_key_to_cpu(l, &key, slot);
> +
> + if (key.type != BTRFS_EXTENT_ITEM_KEY &&
> + key.type != BTRFS_METADATA_ITEM_KEY)
> + goto next;
> +
> + if (key.type == BTRFS_METADATA_ITEM_KEY)
> + extent_size = fs_info->nodesize;
> + else
> + extent_size = key.offset;
> +
> + if (key.objectid + extent_size <= logical)
> + goto next;
> +
> + /* Beyond this data stripe */
> + if (key.objectid >= logical + map->stripe_len)
> + break;
> +
> + ei = btrfs_item_ptr(l, slot, struct btrfs_extent_item);
> + extent_flags = btrfs_extent_flags(l, ei);
> + extent_gen = btrfs_extent_generation(l, ei);
> +
> + if ((extent_flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) &&
> + (key.objectid < logical || key.objectid + extent_size >
> + logical + map->stripe_len)) {
> + btrfs_err(fs_info,
> + "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
> + key.objectid, logical);
> + spin_lock(&sctx->stat_lock);
> + sctx->stat.uncorrectable_errors++;
> + spin_unlock(&sctx->stat_lock);
> + goto next;
> + }
> +
> + extent_start = key.objectid;
> + ASSERT(extent_size <= U32_MAX);
> +
> + /* Truncate the range inside the data stripe */
> + if (extent_start < logical) {
> + extent_size -= logical - extent_start;
> + extent_start = logical;
> + }
> + if (extent_start + extent_size > logical + map->stripe_len)
> + extent_size = logical + map->stripe_len - extent_start;
> +
> + scrub_parity_mark_sectors_data(sparity, extent_start, extent_size);
> +
> + mapped_length = extent_size;
> + ret = btrfs_map_block(fs_info, BTRFS_MAP_READ, extent_start,
> + &mapped_length, &bioc, 0);
> + if (!ret) {
> + if (!bioc || mapped_length < extent_size)
> + ret = -EIO;
> + }
The double "if" looks a little weird... You can simplify this:
if ((!ret) && (!bioc || mapped_length < extent_size))
ret = -EIO;
> + if (ret) {
> + btrfs_put_bioc(bioc);
> + scrub_parity_mark_sectors_error(sparity, extent_start,
> + extent_size);
> + break;
> + }
> + extent_physical = bioc->stripes[0].physical;
> + extent_mirror_num = bioc->mirror_num;
> + extent_dev = bioc->stripes[0].dev;
> + btrfs_put_bioc(bioc);
> +
> + ret = btrfs_lookup_csums_range(csum_root, extent_start,
> + extent_start + extent_size - 1,
> + &sctx->csum_list, 1);
> + if (ret) {
> + scrub_parity_mark_sectors_error(sparity, extent_start,
> + extent_size);
> + break;
> + }
> +
> + ret = scrub_extent_for_parity(sparity, extent_start,
> + extent_size, extent_physical,
> + extent_dev, extent_flags,
> + extent_gen, extent_mirror_num);
> + scrub_free_csums(sctx);
> +
> + if (ret) {
> + scrub_parity_mark_sectors_error(sparity, extent_start,
> + extent_size);
> + break;
> + }
> +
It would be nice to have the entire code above factored into one or more
functions to make reading the loop easier.
> + cond_resched();
> +next:
> + path->slots[0]++;
> + }
> + btrfs_release_path(path);
> + return ret;
> +}
> +
> static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
> struct map_lookup *map,
> struct btrfs_device *sdev,
> @@ -3026,28 +3186,12 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
> u64 logic_end)
> {
> struct btrfs_fs_info *fs_info = sctx->fs_info;
> - struct btrfs_root *root = btrfs_extent_root(fs_info, logic_start);
> - struct btrfs_root *csum_root;
> - struct btrfs_extent_item *extent;
> - struct btrfs_io_context *bioc = NULL;
> struct btrfs_path *path;
> - u64 flags;
> + u64 cur_logical;
> int ret;
> - int slot;
> - struct extent_buffer *l;
> - struct btrfs_key key;
> - u64 generation;
> - u64 extent_logical;
> - u64 extent_physical;
> - /* Check the comment in scrub_stripe() for why u32 is enough here */
> - u32 extent_len;
> - u64 mapped_length;
> - struct btrfs_device *extent_dev;
> struct scrub_parity *sparity;
> int nsectors;
> int bitmap_len;
> - int extent_mirror_num;
> - int stop_loop = 0;
>
> path = btrfs_alloc_path();
> if (!path) {
> @@ -3085,173 +3229,14 @@ static noinline_for_stack int scrub_raid56_parity(struct scrub_ctx *sctx,
> sparity->ebitmap = (void *)sparity->bitmap + bitmap_len;
>
> ret = 0;
> - while (logic_start < logic_end) {
> - if (btrfs_fs_incompat(fs_info, SKINNY_METADATA))
> - key.type = BTRFS_METADATA_ITEM_KEY;
> - else
> - key.type = BTRFS_EXTENT_ITEM_KEY;
> - key.objectid = logic_start;
> - key.offset = (u64)-1;
> -
> - ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
> + for (cur_logical = logic_start; cur_logical < logic_end;
> + cur_logical += map->stripe_len) {
> + ret = scrub_raid56_data_stripe_for_parity(sctx, sparity, map,
> + sdev, path, cur_logical);
> if (ret < 0)
> - goto out;
> -
> - if (ret > 0) {
> - ret = btrfs_previous_extent_item(root, path, 0);
> - if (ret < 0)
> - goto out;
> - if (ret > 0) {
> - btrfs_release_path(path);
> - ret = btrfs_search_slot(NULL, root, &key,
> - path, 0, 0);
> - if (ret < 0)
> - goto out;
> - }
> - }
> -
> - stop_loop = 0;
> - while (1) {
> - u64 bytes;
> -
> - l = path->nodes[0];
> - slot = path->slots[0];
> - if (slot >= btrfs_header_nritems(l)) {
> - ret = btrfs_next_leaf(root, path);
> - if (ret == 0)
> - continue;
> - if (ret < 0)
> - goto out;
> -
> - stop_loop = 1;
> - break;
> - }
> - btrfs_item_key_to_cpu(l, &key, slot);
> -
> - if (key.type != BTRFS_EXTENT_ITEM_KEY &&
> - key.type != BTRFS_METADATA_ITEM_KEY)
> - goto next;
> -
> - if (key.type == BTRFS_METADATA_ITEM_KEY)
> - bytes = fs_info->nodesize;
> - else
> - bytes = key.offset;
> -
> - if (key.objectid + bytes <= logic_start)
> - goto next;
> -
> - if (key.objectid >= logic_end) {
> - stop_loop = 1;
> - break;
> - }
> -
> - while (key.objectid >= logic_start + map->stripe_len)
> - logic_start += map->stripe_len;
> -
> - extent = btrfs_item_ptr(l, slot,
> - struct btrfs_extent_item);
> - flags = btrfs_extent_flags(l, extent);
> - generation = btrfs_extent_generation(l, extent);
> -
> - if ((flags & BTRFS_EXTENT_FLAG_TREE_BLOCK) &&
> - (key.objectid < logic_start ||
> - key.objectid + bytes >
> - logic_start + map->stripe_len)) {
> - btrfs_err(fs_info,
> - "scrub: tree block %llu spanning stripes, ignored. logical=%llu",
> - key.objectid, logic_start);
> - spin_lock(&sctx->stat_lock);
> - sctx->stat.uncorrectable_errors++;
> - spin_unlock(&sctx->stat_lock);
> - goto next;
> - }
> -again:
> - extent_logical = key.objectid;
> - ASSERT(bytes <= U32_MAX);
> - extent_len = bytes;
> -
> - if (extent_logical < logic_start) {
> - extent_len -= logic_start - extent_logical;
> - extent_logical = logic_start;
> - }
> -
> - if (extent_logical + extent_len >
> - logic_start + map->stripe_len)
> - extent_len = logic_start + map->stripe_len -
> - extent_logical;
> -
> - scrub_parity_mark_sectors_data(sparity, extent_logical,
> - extent_len);
> -
> - mapped_length = extent_len;
> - bioc = NULL;
> - ret = btrfs_map_block(fs_info, BTRFS_MAP_READ,
> - extent_logical, &mapped_length, &bioc,
> - 0);
> - if (!ret) {
> - if (!bioc || mapped_length < extent_len)
> - ret = -EIO;
> - }
> - if (ret) {
> - btrfs_put_bioc(bioc);
> - goto out;
> - }
> - extent_physical = bioc->stripes[0].physical;
> - extent_mirror_num = bioc->mirror_num;
> - extent_dev = bioc->stripes[0].dev;
> - btrfs_put_bioc(bioc);
> -
> - csum_root = btrfs_csum_root(fs_info, extent_logical);
> - ret = btrfs_lookup_csums_range(csum_root,
> - extent_logical,
> - extent_logical + extent_len - 1,
> - &sctx->csum_list, 1);
> - if (ret)
> - goto out;
> -
> - ret = scrub_extent_for_parity(sparity, extent_logical,
> - extent_len,
> - extent_physical,
> - extent_dev, flags,
> - generation,
> - extent_mirror_num);
> -
> - scrub_free_csums(sctx);
> -
> - if (ret)
> - goto out;
> -
> - if (extent_logical + extent_len <
> - key.objectid + bytes) {
> - logic_start += map->stripe_len;
> -
> - if (logic_start >= logic_end) {
> - stop_loop = 1;
> - break;
> - }
> -
> - if (logic_start < key.objectid + bytes) {
> - cond_resched();
> - goto again;
> - }
> - }
> -next:
> - path->slots[0]++;
> - }
> -
> - btrfs_release_path(path);
> -
> - if (stop_loop)
> break;
> -
> - logic_start += map->stripe_len;
> - }
> -out:
> - if (ret < 0) {
> - ASSERT(logic_end - logic_start <= U32_MAX);
> - scrub_parity_mark_sectors_error(sparity, logic_start,
> - logic_end - logic_start);
> }
> +
> scrub_parity_put(sparity);
> scrub_submit(sctx);
> mutex_lock(&sctx->wr_lock);
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 1/2] btrfs: refactor scrub_raid56_parity()
2022-01-07 8:20 ` Damien Le Moal
@ 2022-01-07 9:52 ` Qu Wenruo
0 siblings, 0 replies; 5+ messages in thread
From: Qu Wenruo @ 2022-01-07 9:52 UTC (permalink / raw)
To: Damien Le Moal, Qu Wenruo, linux-btrfs
On 2022/1/7 16:20, Damien Le Moal wrote:
> On 1/7/22 16:09, Qu Wenruo wrote:
>> + if (!ret) {
>> + if (!bioc || mapped_length < extent_size)
>> + ret = -EIO;
>> + }
>
> The double "if" looks a little weird... You can simplify this:
>
> if ((!ret) && (!bioc || mapped_length < extent_size))
> ret = -EIO;
Sure, this is indeed better.
[...]
>> + ret = scrub_extent_for_parity(sparity, extent_start,
>> + extent_size, extent_physical,
>> + extent_dev, extent_flags,
>> + extent_gen, extent_mirror_num);
>> + scrub_free_csums(sctx);
>> +
>> + if (ret) {
>> + scrub_parity_mark_sectors_error(sparity, extent_start,
>> + extent_size);
>> + break;
>> + }
>> +
>
> It would be nice to have the entire code above factored in one or more
> functions to make reading the loop easier.
Originally there was an if (ret < 0) check before the return, and the
mark_sectors_error() call was made there.
Since in this patch extent_start/extent_size are defined inside the
loop, it is harder to keep that branch out of the main loop.
But since you mentioned this, I'd better move the
scrub_parity_mark_sectors_error() calls out of the loop, as they make
the code harder to read.
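A generic sketch of that cleanup (not kernel code; process() and mark_error() are invented stand-ins): each break site inside the loop only records the failure, and a single call after the loop does the marking.

```c
#include <assert.h>

static int nr_marked;

/* Stand-in for scrub_parity_mark_sectors_error(). */
static void mark_error(int start, int len)
{
	(void)start;
	(void)len;
	nr_marked++;
}

static int process(const int *vals, int n)
{
	int ret = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (vals[i] < 0) {	/* any failure stops the scan */
			ret = -1;
			break;		/* no marking at the break site... */
		}
	}
	/* ...one marking site after the loop, instead of one per break */
	if (ret < 0)
		mark_error(i, n - i);
	return ret;
}
```

The trade-off is that the post-loop site needs the failed position to still be in scope, which is why the range variables would have to move out of the loop body.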
Thanks,
Qu
end of thread, other threads:[~2022-01-07 9:52 UTC | newest]
Thread overview: 5+ messages
2022-01-07 7:09 [PATCH 0/2] btrfs: scrub: cleanup scrub_raid56_parity() using find_first_extent_item() helper Qu Wenruo
2022-01-07 7:09 ` [PATCH 1/2] btrfs: refactor scrub_raid56_parity() Qu Wenruo
2022-01-07 8:20 ` Damien Le Moal
2022-01-07 9:52 ` Qu Wenruo
2022-01-07 7:09 ` [PATCH 2/2] btrfs: use find_first_extent_item() to replace the open-coded extent item search Qu Wenruo