* [PATCH RFC v2 00/12] btrfs: make read repair work in synchronous mode
@ 2022-04-27  7:18 Qu Wenruo
  2022-04-28  7:01 ` Qu Wenruo
  0 siblings, 1 reply; 2+ messages in thread
From: Qu Wenruo @ 2022-04-27  7:18 UTC (permalink / raw)
  To: linux-btrfs

[CHANGELOG]
RFC v1 -> RFC v2:
- Assemble a bio list for read/write bios and submit them in one go
  This requires fewer submit bio hooks, while still allowing us to wait
  for them all to finish.

- Completely remove the io_failure_tree infrastructure
  Now we don't need to remember which mirror we hit an error on.
  At end_bio_extent_readpage() we have either got good data and finished
  the repair already, or there aren't enough mirrors for us to recover
  all the data.

  This is mostly trading on-stack memory in end_bio_extent_readpage()
  for btrfs_inode::io_failure_tree.
  The latter tree has a much longer lifespan, so I think it's still a
  win overall.

[RFC POINTS]
- How to improve read_repair_get_sector()?
  Currently we always iterate the whole bio to grab the target
  page/pgoff.

  Is there any better cached method to avoid such iteration?

- Is this new code logically more reader-friendly?
  It's certainly more straightforward, but I doubt whether it's any
  easier to read than the old code.

- btrfs/157 failure
  Need extra checking to find out why btrfs/157 failed.
  In theory we should just iterate through all mirrors; my guess is that
  we have no way to exhaust all combinations, thus the extra 2 "mirrors"
  can give us a wrong result for RAID6.

[BEFORE]
For corrupted sectors, we just record the logical bytenr and mirror
number into io_failure_tree, then re-queue the same block with different
mirror number and call it a day.

The re-queued read will enter the same endio function, with extra
failrec handling to either continue re-queuing (csum mismatch/read
failure), or clear the current failrec and submit a write to fix the
corrupted mirror (read succeeded and csum matches/no csum).

This is harder to read, as we need to enter the same river twice or even
more.

[AFTER]
For corrupted sectors, we record the following things into an on-stack
structure in end_bio_extent_readpage():

- The original bio

- The original file offset of the bio
  This is for the direct IO case, as we cannot grab the file offset just
  using page_offset()

- Offset inside the bio of the corrupted sector

- Corrupted mirror

Then in the new btrfs_read_repair_ctrl structure, we hold that info
like:

Original bio logical = X, file_offset = Y, inode=(R/I)

Offset inside bio: 0  4k 8K 12K 16K
cur_bad_bitmap     | X| X|  | X|

Each set bit will indicate we have a corrupted sector inside the
original bio.

During endio function, we only populate the cur_bad_bitmap.

After we have iterated over all sectors of the original bio, we call
btrfs_read_repair_finish() to do the real repair by:

- Build a list of bios for cur_bad_bitmap
  For the above case, bio offset [0, 8K) will be inside one bio, with
  another bio for bio offset [12K, 16K).

  The page/pgoff will be extracted from the original bio.

  This is a little different from the old behavior, which submits a new
  bio for each sector.
  The new behavior saves some btrfs_map_bio() calls.

- Submit all the bios in the bio list and wait for them to finish

- Re-verify the read result

- Submit writes to fix the corrupted mirror
  Currently a write is still submitted for each sector, and we wait for
  each sector to finish.
  This needs some optimization.

  For repaired sectors, clear their bits in @cur_bad_bitmap.

- Do the same loop until either 1) we have tried all mirrors, or
  2) there are no more corrupted sectors
  
- Handle the remaining corrupted sectors
  Either mark them as errors for buffered read, or just return an error
  for direct IO.

With this we can:
- Remove the re-entry behavior of the endio function
  Now everything is handled inside end_bio_extent_readpage().

- Remove the io_failure_tree completely
  As we don't need to record which mirror has failed.

- Slightly reduce overhead on read repair
  Now we won't call btrfs_map_bio() for each corrupted sector, as we can
  merge the sectors into a much larger bio.


Qu Wenruo (12):
  btrfs: introduce a pure data checksum checking helper
  btrfs: always save bio::bi_iter into btrfs_bio::iter before submitting
  btrfs: remove duplicated parameters from submit_data_read_repair()
  btrfs: add btrfs_read_repair_ctrl to record corrupted sectors
  btrfs: add a helper to queue a corrupted sector for read repair
  btrfs: introduce a helper to repair from one mirror
  btrfs: allow btrfs read repair to submit all writes in one go
  btrfs: switch buffered read to the new btrfs_read_repair_* based
    repair routine
  btrfs: switch direct IO routine to use btrfs_read_repair_ctrl
  btrfs: cleanup btrfs_repair_one_sector()
  btrfs: remove io_failure_record infrastructure completely
  btrfs: remove btrfs_inode::io_failure_tree

 fs/btrfs/btrfs_inode.h       |   5 -
 fs/btrfs/compression.c       |  12 +-
 fs/btrfs/ctree.h             |   2 +
 fs/btrfs/extent-io-tree.h    |  15 -
 fs/btrfs/extent_io.c         | 744 ++++++++++++++++++-----------------
 fs/btrfs/extent_io.h         |  89 +++--
 fs/btrfs/inode.c             | 108 +++--
 include/trace/events/btrfs.h |   1 -
 8 files changed, 518 insertions(+), 458 deletions(-)

-- 
2.36.0



* Re: [PATCH RFC v2 00/12] btrfs: make read repair work in synchronous mode
  2022-04-27  7:18 [PATCH RFC v2 00/12] btrfs: make read repair work in synchronous mode Qu Wenruo
@ 2022-04-28  7:01 ` Qu Wenruo
  0 siblings, 0 replies; 2+ messages in thread
From: Qu Wenruo @ 2022-04-28  7:01 UTC (permalink / raw)
  To: linux-btrfs



On 2022/4/27 15:18, Qu Wenruo wrote:
> [CHANGELOG]
> RFC v1 -> RFC v2:
> - Assemble a bio list for read/write bios and submit them in one go
>    This requires fewer submit bio hooks, while still allowing us to wait
>    for them all to finish.
> 
> - Completely remove the io_failure_tree infrastructure
>    Now we don't need to remember which mirror we hit an error on.
>    At end_bio_extent_readpage() we have either got good data and finished
>    the repair already, or there aren't enough mirrors for us to recover
>    all the data.
> 
>    This is mostly trading on-stack memory in end_bio_extent_readpage()
>    for btrfs_inode::io_failure_tree.
>    The latter tree has a much longer lifespan, so I think it's still a
>    win overall.
> 
> [RFC POINTS]
> - How to improve read_repair_get_sector()?
>    Currently we always iterate the whole bio to grab the target
>    page/pgoff.
> 
>    Is there any better cached method to avoid such iteration?
> 
> - Is this new code logically more reader-friendly?
>    It's certainly more straightforward, but I doubt whether it's any
>    easier to read than the old code.
> 
> - btrfs/157 failure
>    Need extra checking to find out why btrfs/157 failed.
>    In theory we should just iterate through all mirrors; my guess is that
>    we have no way to exhaust all combinations, thus the extra 2 "mirrors"
>    can give us a wrong result for RAID6.

This is related to the writeback behavior for bad mirrors.

For RAID56, the mirror_num does not really indicate a mirror, but a
hint on how to rebuild the data.

For RAID6 it can be as large as the number of stripes. The data stripe
where our read is located is always corrupted (or we wouldn't need to
rebuild). The mirror number can be used to iterate through all
combinations of the next possible corrupted stripe.

When we get the correct data using a specific mirror number, we should
not write the correct data back for RAID56.

Doing so would trigger RMW, and RMW always reads the data stripes from
disk (including the other corrupted data in RAID6), thus the writeback
would in fact make the other corruption permanent.

In that case, we just need to skip the unnecessary writeback for RAID56.

This will not write the correct full stripe back to disk, but it's far
better than corrupting the data further.

The fix is already updated in my github repo.

Thanks,
Qu



