From: DanglingPointer <danglingpointerexception@gmail.com>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>, linux-btrfs@vger.kernel.org
Subject: Re: RAID56 Warning on "multiple serious data-loss bugs"
Date: Tue, 29 Jan 2019 09:07:44 +1100 [thread overview]
Message-ID: <a18f2081-cf5d-d914-bb16-dba51c949c30@gmail.com> (raw)
In-Reply-To: <59a60289-1130-27b4-960b-9014fc8d68e8@gmx.com>
Thanks Qu!
I thought as much from following the mailing list and your great work
over the years!
Would it be possible to get the wiki updated to reflect the current
"real" status?
From Qu's statement and perspective, btrfs RAID56 is no different from
the other non-btrfs software RAID56 implementations out there that are
marked as stable (ZFS excepted).
Also, there are no "multiple serious data-loss bugs".
Please do consider my proposal, as it would reduce the amount of
unwarranted paranoia in the community.
It would be enough for the wiki to describe the current state
accurately, along with the mitigation options: backup power, perhaps
RAID1 for metadata, or anything else you consider appropriate.
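To make the mitigation concrete, here is a rough sketch of the commands
involved (device names and mount point are placeholders, not from the
thread; adjust for your own system):

```shell
# Hypothetical devices /dev/sdb..sdd -- substitute your own.
# Use raid5 for data but raid1 for metadata, so a write-hole event
# cannot corrupt the filesystem trees themselves:
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# After any unclean shutdown, scrub before trusting the array, so
# parity inconsistencies are found while all disks are still healthy
# (-B runs the scrub in the foreground):
btrfs scrub start -B /mnt/array
```

Combined with a UPS to avoid unclean shutdowns in the first place, this
addresses the failure mode the wiki describes.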
Thanks,
DP
On 28/1/19 11:52 am, Qu Wenruo wrote:
>
> On 2019/1/26 下午7:45, DanglingPointer wrote:
>>
>> Hi All,
>>
>> For clarity for the masses, what are the "multiple serious data-loss
>> bugs" as mentioned in the btrfs wiki?
>> The bullet points on this page:
>> https://btrfs.wiki.kernel.org/index.php/RAID56
>> don't enumerate the bugs, not even at a high level. If anything, the
>> closest thing to a bug, issue, or "missing resilience use-case" would
>> be the first point on that page.
>>
>> "Parity may be inconsistent after a crash (the "write hole"). The
>> problem arises when a disk failure happens after "an unclean
>> shutdown". But these are *two* distinct failures. Together they break
>> the btrfs raid5 redundancy. If you run a scrub after "an unclean
>> shutdown" (with no disk failure in between), the data which match
>> their checksums can still be read out, while the mismatched data are
>> lost forever."
>>
>> So in a nutshell; "What are the multiple serious data-loss bugs?".
> There used to be two: a scrub race (minor), and corrupting the good
> copy during recovery (major).
>
> Although these two should already be fixed.
>
> So for the current upstream kernel, there should be no major problem
> apart from the write hole.
>
> Thanks,
> Qu
>
>> If
>> there aren't any, perhaps the wiki should be updated to something
>> less "dramatic".
>>
>>
>>
Thread overview: 13+ messages
2019-01-26 11:45 RAID56 Warning on "multiple serious data-loss bugs" DanglingPointer
2019-01-26 12:07 ` waxhead
2019-01-26 14:05 ` Remi Gauvin
2019-01-28 0:52 ` Qu Wenruo
2019-01-28 15:23 ` Supercilious Dude
2019-01-28 16:24 ` Adam Borowski
2019-01-28 22:07 ` DanglingPointer [this message]
2019-01-28 22:52 ` Remi Gauvin
2019-01-29 19:02 ` Chris Murphy
2019-01-29 19:47 ` Goffredo Baroncelli
2019-01-30 1:41 ` DanglingPointer
2019-02-01 18:45 ` Remi Gauvin
2019-01-29 1:46 ` Qu Wenruo