From: Graham Cobb <g.btrfs@cobb.uk.net>
To: Qu Wenruo <wqu@suse.com>, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 00/14] btrfs: scrub: introduce SCRUB_LOGICAL flag
Date: Mon, 3 Jul 2023 13:58:37 +0100
Message-ID: <fda86f3c-528c-9cd6-6228-d0278a73f138@cobb.uk.net>
In-Reply-To: <cover.1688368617.git.wqu@suse.com>

On 03/07/2023 08:32, Qu Wenruo wrote:
> [CHANGELOG]
> RFC->v1:
> - Add RAID56 support
>   Which is the biggest advantage of the new scrub mode.
> 
> - More basic tests around various repair situations
> 
> [REPO]
> Please fetch from github repo:
> https://github.com/adam900710/linux/tree/scrub_logical
> 
> This is based on David's misc-next, but still has one extra regression
> fix which is not yet merged. ("btrfs: raid56: always verify the P/Q
> contents for scrub")
> 
> [BACKGROUND]
> Scrub originally works on a per-device basis, which means that if we
> want to scrub the full fs, we queue a scrub for each writeable device.
> 
> This is normally fine, but some behavior is not ideal, like the
> following:
> 		X	X+16K	X+32K
>  Mirror 1	|XXXXXXX|       |
>  Mirror 2	|       |XXXXXXX|
> 
> When scrubbing mirror 1, we find that [X, X+16K) is corrupted, so we
> read from mirror 2 and repair it using the correct data.
> 
> The same applies to range [X+16K, X+32K) of mirror 2, so the good copy
> ends up being read twice, which may or may not be a big deal.
> 
> But the problem gets much worse for RAID56, as both its repair and its
> P/Q verification require the full stripe to be read out, e.g.:
> 
> 		X	X+16K	X+32K
>  Data stripe 1	|XXXXXXX|       |	|	|
>  Data stripe 2	|       |XXXXXXX|	|	|
>  Parity stripe	|       |	|XXXXXXX|	|
> 
> In the above case, if we're scrubbing all mirrors, we would read the
> same contents again and again:
> 
> Scrub on mirror 1:
> - Read data stripe 1 for the initial read.
> - Read data stripes 1 + 2 + parity for the rebuild.
> 
> Scrub on mirror 2:
> - Read data stripe 2 for the initial read.
> - Read data stripes 1 + 2 + parity for the rebuild.
> 
> Scrub on parity:
> - Read data stripes 1 + 2 for the data stripe verification.
> - Read data stripes 1 + 2 + parity for the data rebuild.
>   This step is already improved by a recent optimization to use cached
>   stripes.
> - Read the parity stripe for the final verification.
> 
> So data stripes are read *five* times, while parity stripes are read
> *three* times.
> 
> [ENHANCEMENT]
> Instead of the existing per-device scrub, this patchset introduces a
> new flag, SCRUB_LOGICAL, to do logical address space based scrub.
> 
> Unlike per-device scrub, this new flag would make scrub a per-fs
> operation.
> 
> This allows us to scrub the different mirrors in one go and avoid
> unnecessary reads when repairing corruption.
> 
> This means that, no matter the profile, the same data is always read
> just once.
> 
> This also makes user space handling much simpler: just one ioctl to
> scrub the full fs, without the current multi-threaded scrub code.
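As a rough illustration of the user-space side being described, a single
ioctl call could look like the sketch below. The flag name follows the
series, but its numeric value and the exact UAPI details are assumptions
here, not the final interface:

/*
 * Sketch only: scrub the whole fs with one BTRFS_IOC_SCRUB call,
 * assuming the series adds a BTRFS_SCRUB_LOGICAL flag to
 * struct btrfs_ioctl_scrub_args.  The flag value below is a placeholder.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

#ifndef BTRFS_SCRUB_LOGICAL
#define BTRFS_SCRUB_LOGICAL	(1ULL << 1)	/* assumed value */
#endif

static int scrub_logical_whole_fs(const char *mnt)
{
	struct btrfs_ioctl_scrub_args args;
	int fd, ret;

	fd = open(mnt, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return -1;
	}

	memset(&args, 0, sizeof(args));
	args.start = 0;
	args.end = (__u64)-1;			/* whole logical address space */
	args.flags = BTRFS_SCRUB_LOGICAL;	/* per-fs, not per-device */

	/* One ioctl for the whole filesystem, no per-device threads. */
	ret = ioctl(fd, BTRFS_IOC_SCRUB, &args);
	if (ret < 0)
		perror("BTRFS_IOC_SCRUB");
	close(fd);
	return ret;
}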

I have a comment on terminology. If I understand correctly, this flag
changes scrub from an operation performed in parallel on all devices, to
one done sequentially, with less parallelism.

The original code scrubs every device at the same time. In very rough
terms, for a filesystem with more devices than copies, the duration for
a scrub with no errors is the time taken to read every block on the most
occupied device. Other disks will finish earlier.

In the same case, the new code will take the time needed to read one copy
of every block in the filesystem (not just those on the most occupied
disk). It is not clear to me how much parallelism will occur in this case.
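To put entirely made-up numbers on that concern (a 4-device RAID1
filesystem with 2 TiB of allocated data per device, so 4 TiB of unique
data, each device reading at 200 MiB/s), a small sketch of the two
estimates:

/* Hypothetical figures only, to illustrate the duration comparison
 * above; these are not measurements. */
#include <stdio.h>

int main(void)
{
	const double tib_in_mib = 1024.0 * 1024.0;
	const double speed = 200.0;		/* MiB/s per device */

	/* Per-device scrub: devices run in parallel, bounded by the most
	 * occupied device (2 TiB). */
	double per_device_h = 2.0 * tib_in_mib / speed / 3600.0;

	/* Logical scrub, worst case: one copy of every block (4 TiB) with
	 * no overlap between devices. */
	double logical_h = 4.0 * tib_in_mib / speed / 3600.0;

	printf("per-device scrub:           ~%.1f h\n", per_device_h);
	printf("logical scrub (worst case): ~%.1f h\n", logical_h);
	return 0;
}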

I am not saying it isn't worth doing, only that it may be best to explain
it in terms of parallel vs. sequential rather than physical vs. logical.
That certainly applies to making the user documentation and the scrub
command line clear to the user, and possibly even to the naming of the
flag (e.g. SCRUB_SEQUENTIAL instead of SCRUB_LOGICAL).

> 
> [TODO]
> - More testing
>   Only functional tests have been done so far, no stress tests yet.
> 
> - Better progs integration
>   In theory we can silently try SCRUB_LOGICAL first and, if the kernel
>   doesn't support it, fall back to the old multi-device scrub.
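For reference, such a silent fallback in btrfs-progs could look roughly
like the sketch below. The scrub_one_device() helper and the errno values
checked are assumptions, not existing code or confirmed kernel behaviour:

/*
 * Hypothetical fallback: try the per-fs logical scrub first and fall
 * back to the existing per-device scrub if the kernel rejects the flag.
 * scrub_one_device() stands in for the current multi-threaded
 * per-device path.
 */
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

#ifndef BTRFS_SCRUB_LOGICAL
#define BTRFS_SCRUB_LOGICAL	(1ULL << 1)	/* assumed value */
#endif

int scrub_one_device(int fd, __u64 devid);	/* placeholder for existing path */

static int scrub_with_fallback(int fd, const __u64 *devids, int ndev)
{
	struct btrfs_ioctl_scrub_args args = {
		.start = 0,
		.end = (__u64)-1,
		.flags = BTRFS_SCRUB_LOGICAL,
	};
	int i;

	if (ioctl(fd, BTRFS_IOC_SCRUB, &args) == 0)
		return 0;

	/* Assume an unsupported flag comes back as one of these. */
	if (errno != EOPNOTSUPP && errno != EINVAL && errno != ENOTTY)
		return -errno;

	for (i = 0; i < ndev; i++) {
		int ret = scrub_one_device(fd, devids[i]);
		if (ret)
			return ret;
	}
	return 0;
}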

Maybe not, if my understanding is correct. On filesystems with more disks
than copies, the new code may take noticeably longer?

Or do I misunderstand?

Graham

> 
>   But for current testing, a dedicated option is still assigned to the
>   scrub subcommand.
> 
>   And currently it only supports a full report, with no summary or
>   scrub-file support.
> 
> Qu Wenruo (14):
>   btrfs: raid56: remove unnecessary parameter for
>     raid56_parity_alloc_scrub_rbio()
>   btrfs: raid56: allow scrub operation to update both P and Q stripes
>   btrfs: raid56: allow caching P/Q stripes
>   btrfs: raid56: add the interfaces to submit recovery rbio
>   btrfs: add the ability to read P/Q stripes directly
>   btrfs: scrub: make scrub_ctx::stripes dynamically allocated
>   btrfs: scrub: introduce the skeleton for logical-scrub
>   btrfs: scrub: extract the common preparation before scrubbing a block
>     group
>   btrfs: scrub: implement the chunk iteration code for scrub_logical
>   btrfs: scrub: implement the basic extent iteration code
>   btrfs: scrub: implement the repair part of scrub logical
>   btrfs: scrub: add RAID56 support for queue_scrub_logical_stripes()
>   btrfs: scrub: introduce the RAID56 data recovery path for scrub
>     logical
>   btrfs: scrub: implement the RAID56 P/Q scrub code
> 
>  fs/btrfs/disk-io.c         |    1 +
>  fs/btrfs/fs.h              |   12 +
>  fs/btrfs/ioctl.c           |    6 +-
>  fs/btrfs/raid56.c          |  134 +++-
>  fs/btrfs/raid56.h          |   17 +-
>  fs/btrfs/scrub.c           | 1198 ++++++++++++++++++++++++++++++------
>  fs/btrfs/scrub.h           |    2 +
>  fs/btrfs/volumes.c         |   50 +-
>  fs/btrfs/volumes.h         |   16 +
>  include/uapi/linux/btrfs.h |   11 +-
>  10 files changed, 1206 insertions(+), 241 deletions(-)
> 

