From: Anthony Youngman <antlists@youngman.org.uk>
To: Adam Goryachev <mailinglists@websitemanagers.com.au>,
Mikael Abrahamsson <swmike@swm.pp.se>
Cc: Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: Filesystem corruption on RAID1
Date: Mon, 21 Aug 2017 15:03:59 +0100 [thread overview]
Message-ID: <de4af3a8-aee2-4632-6038-4adb006bed92@youngman.org.uk> (raw)
In-Reply-To: <bdb354cb-62f4-a254-11f6-6cdb00639341@websitemanagers.com.au>
On 21/08/17 00:11, Adam Goryachev wrote:
> On 21/08/17 02:10, Wols Lists wrote:
>> On 20/08/17 16:48, Mikael Abrahamsson wrote:
>>> On Mon, 21 Aug 2017, Adam Goryachev wrote:
>>>
>>>> data (even where it is wrong). So just do a check/repair which will
>>>> ensure both drives are consistent, then you can safely do the fsck.
>>>> (Assuming you fixed the problem causing random write errors first).
>>> This involves manual intervention.
>>>
>>> While I don't know how to implement this, let's at least see if we can
>>> architect something for throwing ideas around.
>>>
>>> What about having an option for any raid level that would do "repair
>>> on read", settable to "0" or "1"? For RAID1 it would read all copies
>>> and, if there is an inconsistency, pick one and write it to all of
>>> them. It could also be some kind of IOCTL option, I guess. For
>>> RAID5/6, read all data drives and check the parity; if the parity is
>>> wrong, rewrite the parity.
>>>
>>> This could mean that if filesystem developers wanted to do repair
>>> (as a userspace option or a mount option), the fsck-like tools would
>>> use the aforementioned option to make sure metadata was consistent
>>> while checking (the details would differ between filesystems that
>>> need to be mounted for fsck and those with an offline fsck). The
>>> array could then go back to normal operation for everything else, so
>>> that mismatches would hopefully cause individual file corruption
>>> rather than catastrophic filesystem failure.
>>>
>> Look for the thread "RFC Raid error detection and auto-recovery",
>> 10th May.
>>
>> Basically, that proposed a three-way flag - "default" is the current
>> "read the data section", "check" would read the entire stripe and
>> compare a mirror or calculate parity on a raid and return a read error
>> if it couldn't work out the correct data, and "fix" would write the
>> correct data back if it could work it out.
>>
>> So basically, on a two-disk raid-1, or raid 4 or 5, both "check" and
>> "fix" would return read errors if there's a problem and you're SOL
>> without a backup.
>>
>> With a three-disk or more raid-1, or raid-6, it would return the correct
>> data (and fix the stripe) if it could, otherwise again you're SOL.
>
> From memory, the main sticking point was in implementing this for
> RAID6: the argument was that you might not be able to choose the
> "right" pieces of data because there wasn't enough information to know
> which block was corrupted.
That was the impression I got, but I really don't understand the
problem. If *ANY* one block in a stripe is corrupted, we have two
unknowns (which block is bad, and what its contents should be), two
parity blocks, and we can recalculate the missing block.
If two or more blocks are corrupt, the recovery will return garbage
(which is detectable) and we return a read error. We DO NOT attempt to
rewrite the stripe! In your words, if we can't choose the "right" piece
of data, we bail and do nothing.
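To make that concrete, here is a toy sketch (mine, not the md code) of
how P and Q locate *and* repair a single bad block: recompute both
parities, and the ratio of the two syndromes names the bad disk. It
assumes the same GF(2^8) generator polynomial (0x11D) that the kernel's
raid6 code uses; all function and variable names are made up for
illustration.

```python
# GF(2^8) log/antilog tables, generator polynomial x^8+x^4+x^3+x^2+1 (0x11D).
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):            # doubled table avoids a mod in gmul()
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def pq(blocks):
    """P is the plain XOR of the data blocks; Q weights block i by g**i."""
    n = len(blocks[0])
    P, Q = bytearray(n), bytearray(n)
    for i, blk in enumerate(blocks):
        for j, byte in enumerate(blk):
            P[j] ^= byte
            Q[j] ^= gmul(EXP[i], byte)
    return bytes(P), bytes(Q)

def locate_single_error(blocks, P, Q):
    """Return (index, repaired_block) if exactly one data block disagrees
    with both parities; return None (bail, do nothing) otherwise."""
    P2, Q2 = pq(blocks)
    sp = bytes(a ^ b for a, b in zip(P, P2))   # syndrome e
    sq = bytes(a ^ b for a, b in zip(Q, Q2))   # syndrome g**z * e
    if not any(sp) and not any(sq):
        return None                            # stripe is clean
    idx = None
    for a, b in zip(sp, sq):
        if a == 0 and b == 0:
            continue                           # this byte is intact
        if a == 0 or b == 0:
            return None                        # not a single-block error
        z = (LOG[b] - LOG[a]) % 255            # g**z = sq/sp
        if idx is None:
            idx = z
        elif idx != z:
            return None                        # bytes disagree on the culprit
    if idx is None or idx >= len(blocks):
        return None
    repaired = bytes(x ^ s for x, s in zip(blocks[idx], sp))
    return idx, repaired
```

Note the sketch bails in exactly the cases argued above: a clean
stripe, an error the syndromes can't pin on one block, or bytes that
point at different blocks.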
As I understood it, the worry was that we would run the recovery
algorithm and then overwrite the data with garbage, but nobody ever gave
me a plausible scenario where that could happen. The only plausible
scenario is one where multiple blocks are corrupted in such a way that
the recovery algorithm is fooled into thinking only one block is
affected. And if I read that paper correctly, the odds of that happening
are very low.
Short summary - if just one block is corrupted, then my proposal will
fix it and return CORRECT data. If, however, more than one block is
corrupted, then my proposal will with near-perfect accuracy bail and do
nothing (apart from returning a read error). As I say, the only risk to
the data is if a multi-block error looks like a single-block problem,
and that's unlikely.
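The default/check/fix semantics are easy to sketch. This is purely
illustrative (the class and method names are all invented): the
"recover" step stands in for mirror voting on a 3-or-more-way RAID1 or
for P/Q error location on RAID6. "check" fails the read on an
unresolvable mismatch, "fix" additionally writes the recovered data
back, and neither touches the disks when the bad member can't be
identified.

```python
from collections import Counter
from enum import Enum

class ReadPolicy(Enum):
    DEFAULT = "default"   # read only the data blocks (current behaviour)
    CHECK = "check"       # verify redundancy, fail the read on mismatch
    FIX = "fix"           # as CHECK, but rewrite when recovery succeeds

class MirrorStripe:
    """Toy 3-way RAID1 stripe: a majority vote identifies the bad copy."""
    def __init__(self, copies):
        self.copies = list(copies)

    def data(self):
        return self.copies[0]

    def consistent(self):
        return len(set(self.copies)) == 1

    def recover(self):
        # Majority vote; None when no copy has a majority (all differ).
        val, n = Counter(self.copies).most_common(1)[0]
        return val if n > len(self.copies) // 2 else None

    def rewrite(self, fixed):
        self.copies = [fixed] * len(self.copies)

def raid_read(stripe, policy):
    """Illustrative read path for the proposed three-way flag."""
    if policy is ReadPolicy.DEFAULT:
        return stripe.data()                  # no extra reads, no checking
    if stripe.consistent():
        return stripe.data()
    fixed = stripe.recover()
    if fixed is None:
        # Can't identify the bad member: bail, return a read error.
        raise IOError("redundancy mismatch, cannot identify bad member")
    if policy is ReadPolicy.FIX:
        stripe.rewrite(fixed)                 # write the corrected data back
    return fixed
```

On a two-disk mirror the vote can never reach a majority, so "check"
and "fix" both degrade to a read error on any mismatch, matching the
two-disk RAID1 / RAID4/5 case described above.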
I've had enough data-loss scenarios in my career to be rather paranoid
about scribbling over stuff when I don't know what I'm doing ... (I do
understand concerns about "using the wrong tool to fix the wrong
problem", but you don't refuse to sell a punter a wheel-wrench because
he might not be able to tell the difference between a flat tyre and a
mis-firing engine).
Cheers,
Wol
Thread overview: 46+ messages
2017-07-13 15:35 Filesystem corruption on RAID1 Gionatan Danti
2017-07-13 16:48 ` Roman Mamedov
2017-07-13 21:28 ` Gionatan Danti
2017-07-13 21:34 ` Reindl Harald
2017-07-13 22:34 ` Gionatan Danti
2017-07-14 0:32 ` Reindl Harald
2017-07-14 0:52 ` Anthony Youngman
2017-07-14 1:10 ` Reindl Harald
2017-07-14 10:46 ` Gionatan Danti
2017-07-14 10:58 ` Reindl Harald
2017-08-17 8:23 ` Gionatan Danti
2017-08-17 12:41 ` Roger Heflin
2017-08-17 14:31 ` Gionatan Danti
2017-08-17 17:33 ` Wols Lists
2017-08-17 20:50 ` Gionatan Danti
2017-08-17 21:01 ` Roger Heflin
2017-08-17 21:21 ` Gionatan Danti
2017-08-17 21:23 ` Gionatan Danti
2017-08-17 22:51 ` Wols Lists
2017-08-18 12:26 ` Gionatan Danti
2017-08-18 12:54 ` Roger Heflin
2017-08-18 19:42 ` Gionatan Danti
2017-08-20 7:14 ` Mikael Abrahamsson
2017-08-20 7:24 ` Gionatan Danti
2017-08-20 10:43 ` Mikael Abrahamsson
2017-08-20 13:07 ` Wols Lists
2017-08-20 15:38 ` Adam Goryachev
2017-08-20 15:48 ` Mikael Abrahamsson
2017-08-20 16:10 ` Wols Lists
2017-08-20 23:11 ` Adam Goryachev
2017-08-21 14:03 ` Anthony Youngman [this message]
2017-08-20 19:11 ` Gionatan Danti
2017-08-20 19:03 ` Gionatan Danti
2017-08-20 19:01 ` Gionatan Danti
2017-08-31 22:55 ` Robert L Mathews
2017-09-01 5:39 ` Reindl Harald
2017-09-01 23:14 ` Robert L Mathews
2017-08-20 23:22 ` Chris Murphy
2017-08-21 5:57 ` Gionatan Danti
2017-08-21 8:37 ` Mikael Abrahamsson
2017-08-21 12:28 ` Gionatan Danti
2017-08-21 14:09 ` Anthony Youngman
2017-08-21 17:33 ` Chris Murphy
2017-08-21 17:52 ` Reindl Harald
2017-07-14 1:48 ` Chris Murphy
2017-07-14 7:22 ` Roman Mamedov