Linux-BTRFS Archive on
From: DanglingPointer <>
To: linux-btrfs <>
Cc: Chris Murphy <>,
	Remi Gauvin <>
Subject: Re: RAID56 Warning on "multiple serious data-loss bugs"
Date: Wed, 30 Jan 2019 12:41:27 +1100
Message-ID: <> (raw)
In-Reply-To: <>

Going back to my original email, would the BTRFS wiki admins consider a 
better, more accurate update of the RAID56 status page?

It still states "multiple serious data-loss bugs", which, as Qu Wenruo 
has already clarified, is not the case.  The only "bug" left is the 
write hole edge-case problem.
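For anyone unfamiliar with the write hole edge case, a minimal sketch can illustrate it: RAID5 parity is the byte-wise XOR of a stripe's data blocks, and a power loss that lands between a data-block write and the matching parity update leaves the stripe internally inconsistent. The names below are purely illustrative, not BTRFS internals:

```python
# Sketch of the RAID5 "write hole": parity = XOR of the stripe's data
# blocks.  If power fails after one data block hits disk but before the
# parity update does, a later reconstruction from that stale parity
# silently returns the wrong data.

def xor_parity(blocks):
    """Parity block = byte-wise XOR of all blocks given."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Consistent stripe: two data blocks plus their parity.
d0, d1 = b"\x01\x02", b"\x03\x04"
parity = xor_parity([d0, d1])

# Sanity check: with consistent parity, d1 is recoverable from d0.
assert xor_parity([d0, parity]) == d1

# Simulated crash: d0 is rewritten on disk, the parity update never lands.
d0_new = b"\xff\xfe"

# Later the disk holding d1 fails; rebuilding d1 from d0_new and the
# stale parity does NOT give back the original d1 -- that is the hole.
rebuilt_d1 = xor_parity([d0_new, parity])
assert rebuilt_d1 != d1
```

A scrub after the unclean shutdown would detect and repair the stale parity before any such mis-reconstruction can happen, which is why the mitigation discussion below centres on when and how to trigger one.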

On 30/1/19 6:47 am, Goffredo Baroncelli wrote:
> On 29/01/2019 20.02, Chris Murphy wrote:
>> On Mon, Jan 28, 2019 at 3:52 PM Remi Gauvin <> wrote:
>>> On 2019-01-28 5:07 p.m., DanglingPointer wrote:
>>>>  From Qu's statement and perspective, there's no difference from other
>>>> non-BTRFS software RAID56 implementations out there that are marked as
>>>> stable (except ZFS).
>>>> Also, there are no "multiple serious data-loss bugs".
>>>> Please do consider my proposal as it will decrease the amount of
>>>> incorrect paranoia that exists in the community.
>>>> As long as the Wiki properly mentions the current state with the options
>>>> for mitigation; like backup power and perhaps RAID1 for metadata or
>>>> anything else you believe as appropriate.
>>> BTRFS should implement some way to automatically scrub after an unclean
>>> shutdown.  BTRFS is the only (to my knowledge) RAID implementation that
>>> will not automatically detect an unclean shutdown and fix the affected
>>> parity blocks (either by some form of write journal/write-intent map,
>>> or a full resync).
>> There's no dirty bit set on mount, and thus no dirty bit to unset on
>> clean mount, from which to infer a dirty unmount if it's present at
>> the next mount.
> It would be sufficient to use the log, which BTRFS already has. During each transaction, when an area is touched by a read-modify-write cycle, it would have to be tracked in the log.
> In case of an unclean shutdown, a way to replay the log is already implemented, so it would be sufficient to track a scrub of these areas as "log replay".
> Of course, I am not speaking as a BTRFS developer, so the reality could be more complex: e.g. I don't know how easy it would be to run a scrub process on a per-area basis.
> BR
> G.Baroncelli
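Goffredo's idea can be sketched in miniature: persist the identity of each stripe *before* its read-modify-write begins, clear the entry once data and parity are both committed, and on the next mount scrub only the stripes still listed. This is a conceptual toy model under my own assumptions, not BTRFS code:

```python
# Toy model of a write-intent log for targeted post-crash scrubbing.
# In a real implementation the dirty set would live on stable storage
# and be flushed before the data writes it covers.

class WriteIntentLog:
    def __init__(self):
        self.dirty_stripes = set()  # stripes with an in-flight RMW

    def before_rmw(self, stripe):
        # Persist the intent BEFORE touching data or parity, so a crash
        # at any later point leaves the stripe marked for scrubbing.
        self.dirty_stripes.add(stripe)

    def after_commit(self, stripe):
        # Data and parity are both on disk: stripe is consistent again.
        self.dirty_stripes.discard(stripe)

    def replay_on_mount(self):
        # After an unclean shutdown, only these stripes need their
        # parity re-checked -- no full-array resync required.
        return sorted(self.dirty_stripes)

log = WriteIntentLog()
log.before_rmw(7)
log.after_commit(7)   # clean write: nothing left to scrub
log.before_rmw(42)    # ...crash strikes mid-write here
print(log.replay_on_mount())  # -> [42]
```

The trade-off, as with md's write-intent bitmap, is an extra persistent write per touched region before the real write, in exchange for a bounded scrub instead of a whole-array pass after a crash.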

Thread overview: 13+ messages
2019-01-26 11:45 DanglingPointer
2019-01-26 12:07 ` waxhead
2019-01-26 14:05   ` Remi Gauvin
2019-01-28  0:52 ` Qu Wenruo
2019-01-28 15:23   ` Supercilious Dude
2019-01-28 16:24     ` Adam Borowski
2019-01-28 22:07   ` DanglingPointer
2019-01-28 22:52     ` Remi Gauvin
2019-01-29 19:02       ` Chris Murphy
2019-01-29 19:47         ` Goffredo Baroncelli
2019-01-30  1:41           ` DanglingPointer [this message]
2019-02-01 18:45         ` Remi Gauvin
2019-01-29  1:46     ` Qu Wenruo

