Subject: Exactly what is wrong with RAID5/6
From: waxhead @ 2017-06-20 22:57 UTC
To: linux-btrfs

I am trying to piece together the actual status of the RAID5/6 bit of BTRFS.
The wiki refers to kernel 3.19, which was released in February 2015, so I
assume that the information there is a tad outdated (the last update to the
wiki page was July 2016):
https://btrfs.wiki.kernel.org/index.php/RAID56

Now there are four problems listed:

1. Parity may be inconsistent after a crash (the "write hole").
Is this still true? If yes, would this not apply to RAID1/RAID10 as well?
How was it solved there, and why can't the same be done for RAID5/6?
(I have tried to illustrate my understanding of the problem in a sketch
after this list.)

2. Parity data is not checksummed.
Why is this a problem? Does it have to do with the design of BTRFS somehow?
Parity is, after all, just data, and BTRFS does checksum data, so what is
the reason this is a problem? (See the second sketch after this list.)

3. No support for discard? (possibly -- needs confirmation with cmason)
Does this really matter that much? Is there an update on this?

4. The algorithm uses as many devices as are available: no support for a
fixed-width stripe.
What is the plan for this one? There were patches on the mailing list by
the SnapRAID author to support up to six parity devices. Will the
(re?)design of btrfs raid5/6 support a scheme that allows for multiple
parity devices? (See the third sketch after this list.)
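
To make sure I understand problem 1, here is a minimal sketch of the
write hole as I picture it, in Python, assuming plain XOR parity over a
3-device stripe (2 data blocks + 1 parity block). The layout is
illustrative only, not btrfs's actual on-disk format:

    # Minimal write-hole sketch: XOR parity, 2 data blocks + 1 parity.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"\xaa" * 4, b"\x55" * 4
    parity = xor(d0, d1)              # consistent stripe: parity = d0 ^ d1

    # Updating d0 and updating parity are two separate device writes.
    d0 = b"\x0f" * 4                  # new data reaches the disk...
    # ...crash here: parity is never rewritten.
    assert xor(d0, d1) != parity      # stripe is now inconsistent

    # If d1's device later fails, reconstruction silently returns garbage
    # for a block that was never even written to:
    assert xor(d0, parity) != d1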
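And problem 2, as I understand it: if checksums cover only data blocks
(which is what the wiki seems to imply), a rotted parity block passes a
scrub unnoticed, and the damage only surfaces during reconstruction,
when the good copy is already gone. Again a sketch under that
assumption, with crc32 standing in for the real checksum:

    # Unchecksummed-parity sketch: csums exist for data blocks only.
    import zlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"\xaa" * 4, b"\x55" * 4
    parity = xor(d0, d1)
    csums = {"d0": zlib.crc32(d0), "d1": zlib.crc32(d1)}  # none for parity

    parity = b"\x00" * 4  # bitrot on the parity device goes undetected

    # A scrub that verifies only data csums reports a clean filesystem:
    assert zlib.crc32(d0) == csums["d0"] and zlib.crc32(d1) == csums["d1"]

    # Only when d1's device dies does it surface: the rebuilt block fails
    # its csum, and by then the original d1 is gone.
    assert zlib.crc32(xor(d0, parity)) != csums["d1"]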
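For problem 4, my reading of "fixed-width stripe" is roughly the
following: each stripe touches only a fixed number of devices, rotated
across the array, instead of spanning every device. The rotation scheme
here is made up by me and the numbers are arbitrary:

    # Fixed-width striping sketch: stripes are `width` devices wide even
    # though `n` devices exist; the starting device rotates per stripe.
    def stripe_devices(stripe_no: int, n: int, width: int) -> list[int]:
        start = (stripe_no * width) % n
        return [(start + i) % n for i in range(width)]

    n, width = 6, 4  # 6 devices, stripes 4 wide (e.g. 3 data + 1 parity)
    for s in range(3):
        print(s, stripe_devices(s, n, width))
    # 0 [0, 1, 2, 3]
    # 1 [4, 5, 0, 1]
    # 2 [2, 3, 4, 5]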

I do have a few other questions as well...

5. BTRFS still (as of kernel 4.9) does not seem to use the device ID to
communicate with devices.

If you yank a device, for example /dev/sdg, out of a multi-device
filesystem and it reappears as, say, /dev/sdx, btrfs will still happily
try to write to /dev/sdg even though btrfs fi sh /mnt shows the correct
device ID. What is the status of getting BTRFS to properly understand
that a device is missing?

6. RAID1 needs to be able to make two copies, always. E.g. if you have
three disks you can lose one and it should still work. What about
RAID10? If you have, for example, a six-disk RAID10 array, lose one disk
and reboot (due to #5 above), will RAID10 recognize that the array is
now a five-disk array and stripe+mirror over 2 disks (or possibly 2.5
disks?) instead of 3? In other words, will it work as long as it can
create a RAID10 profile, which requires a minimum of four disks? (I
sketch my guess at the allocation below.)
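
For reference, here is how I imagine the RAID10 allocation degrading,
assuming a chunk must span an even number of devices (two copies of
each stripe) and uses as many devices as it can. This is my guess, not
a statement about what the actual allocator does:

    # Guess at btrfs-style RAID10 chunk width: the largest even device
    # count available, with a hard minimum of 4 devices.
    def raid10_chunk_width(n_devices: int) -> int:
        if n_devices < 4:
            raise ValueError("RAID10 needs at least 4 devices")
        return n_devices - (n_devices % 2)

    print(raid10_chunk_width(6))  # 6 -> 3 stripes, each with 2 copies
    print(raid10_chunk_width(5))  # 4 -> 2 stripes, each with 2 copies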
