From: antlists <antlists@youngman.org.uk>
To: Brian Allen Vanderburg II <brianvanderburg2@aim.com>,
	linux-raid@vger.kernel.org
Subject: Re: Linux raid-like idea
Date: Sat, 12 Sep 2020 17:19:18 +0100
Message-ID: <9220ea81-3a81-bb98-22e3-be1a123113a1@youngman.org.uk>
In-Reply-To: <38f9595b-963e-b1f5-3c29-ad8981e677a7@aim.com>

On 11/09/2020 21:14, Brian Allen Vanderburg II wrote:
> That's right, I get the various combinations confused.  So does raid61
> allow for losing 4 disks in any order and still recovering? Or would
> some order of failures mean that losing just 3 disks is already fatal?
> Interesting nonetheless, and I'll have to look into it.  Obviously it's
> not intended as a replacement for backing up important data, but, for
> me anyway, just a way to minimize loss of any trivial bulk data/files.

Yup. Raid 6 has two parity disks, and that's mirrored to give four 
parity disks. So as an *absolute* *minimum*, raid-61 could lose four 
disks with no data loss.

Throw in the guarantee that, with a mirror, you can lose one entire 
half with no data loss, and that means - with luck and a following 
wind - you could lose half your disks, PLUS the two parities in the 
remaining half, and still recover your data. So with a raid-6+1, if I 
had twelve disks, I could lose EIGHT disks and still have a *chance* 
of recovering my array. I'm not quite sure what difference raid-61 
would make.
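
Just to sanity-check the twelve-disk case, here's a quick brute force 
I would sketch (untested, and it assumes "raid-6+1" means two six-disk 
raid-6 arrays mirrored, which is only one way of reading it):

/* Brute-force every failure pattern of a hypothetical 12-disk raid-6+1:
 * disks 0-5 form one raid-6 leg, disks 6-11 the mirrored leg.
 * A leg survives if it has lost at most 2 of its 6 disks;
 * the data survives if at least one leg survives.
 */
#include <stdio.h>

#define NDISKS 12
#define LEG    6    /* disks per raid-6 leg  */
#define PARITY 2    /* parity disks per leg  */

static int survives(unsigned mask)  /* mask: bit i set = disk i failed */
{
	int lost[2] = { 0, 0 };

	for (int i = 0; i < NDISKS; i++)
		if (mask & (1u << i))
			lost[i / LEG]++;
	return lost[0] <= PARITY || lost[1] <= PARITY;
}

int main(void)
{
	/* For each number of failed disks, count surviving patterns. */
	long ok[NDISKS + 1] = { 0 }, total[NDISKS + 1] = { 0 };

	for (unsigned mask = 0; mask < (1u << NDISKS); mask++) {
		int n = __builtin_popcount(mask);

		total[n]++;
		if (survives(mask))
			ok[n]++;
	}

	for (int n = 0; n <= NDISKS; n++)
		printf("%2d failed: %4ld of %4ld patterns recoverable\n",
		       n, ok[n], total[n]);
	return 0;
}

If I've got that right, it should report every pattern of up to five 
failures as recoverable, some patterns right up to eight, and nothing 
beyond eight - which squares with the EIGHT above, and means the "four 
disks" figure earlier is if anything conservative.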

(That says to me, if I have a raid-61, I need as a minimum a complete 
set of data disks. That also says to me, if I've splatted an 8+2 raid-61 
across 11 disks, I only need 7 for a full recovery despite needing a 
minimum of 8, so something isn't quite right here... I suspect the 7 
would be enough but I did say my mind goes Whooaaa!!!!)
> 
> It would be nice if the raid modules had support for schemes that
> could tolerate a larger number of disks, lost in any order, without
> losing data.  SnapRAID's source states that it uses a Cauchy matrix
> algorithm which in theory could lose up to 6 disks, in any order, if
> using 6 parity disks, and still be able to restore the data.  I'm not
> familiar with the math behind it so can't speak to the accuracy of
> that claim.

That's easy enough; it's just a question of whether it's worth it. 
Look at the maths behind raid-6. The "one parity disk" levels, raid-4 
and raid-5, just use XOR. But that only works once - a second XOR 
parity disk adds no new redundancy and is worthless. I'm guessing 
raid-6 uses something like that Cauchy method you mention - certainly 
that family of codes can generate as many parity disks as you like ... 
so the claim is plausible, even if raid-6 doesn't use that particular 
technique.
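
For reference, the raid-6 maths as I understand it from hpa's "The 
mathematics of RAID-6" paper looks roughly like this - P is plain XOR, 
Q is a Reed-Solomon-style syndrome over GF(2^8). A sketch only, with 
my own names, not the kernel code:

/* Raid-6 P/Q syndromes for ONE byte offset across n data disks,
 * over GF(2^8) with the polynomial x^8+x^4+x^3+x^2+1 (0x11d):
 *   P = d0 ^ d1 ^ ... ^ d(n-1)
 *   Q = d0 + g*d1 + g^2*d2 + ...  with g = 2 (Horner's rule below)
 */
#include <stdint.h>
#include <stddef.h>

static uint8_t gf_mul2(uint8_t x)      /* multiply by g = 2 in GF(2^8) */
{
	return (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1d : 0));
}

void raid6_pq(const uint8_t *d, size_t n, uint8_t *p, uint8_t *q)
{
	uint8_t pp = 0, qq = 0;

	/* Walk from the highest-numbered data disk down, multiplying
	 * the running Q by g at each step (Horner's rule). */
	for (size_t i = n; i-- > 0; ) {
		qq = gf_mul2(qq) ^ d[i];
		pp ^= d[i];
	}
	*p = pp;
	*q = qq;
}

The point is that P and Q are genuinely independent equations, which 
is why any two failures can be solved for; a second plain XOR disk 
would just repeat the first equation.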

If someone wants to, mod'ing raid-6 to use 3 parity disks shouldn't be 
that hard ...
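
Something like this, perhaps - by analogy with Q, a third syndrome 
using powers of 4 (i.e. g squared) would do the job; that's reportedly 
what ZFS's raidz3 does. Again just a sketch with my own names, and 
nothing md actually has today:

/* Hypothetical third syndrome for triple parity, over GF(2^8), g = 2:
 *   R = d0 + (g^2)*d1 + (g^2)^2*d2 + ...
 * Together with P (powers of 1) and Q (powers of g) that gives three
 * independent equations, so any three lost data disks can be rebuilt.
 */
#include <stdint.h>
#include <stddef.h>

static uint8_t gf_mul2(uint8_t x)      /* multiply by 2 in GF(2^8) */
{
	return (uint8_t)((x << 1) ^ ((x & 0x80) ? 0x1d : 0));
}

uint8_t raid_triple_r(const uint8_t *d, size_t n)
{
	uint8_t r = 0;

	for (size_t i = n; i-- > 0; )
		r = gf_mul2(gf_mul2(r)) ^ d[i];  /* r = 4*r + next byte */
	return r;
}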


But going back to your original idea, I've been thinking about it. And 
it struck me - you NEED to regenerate parity EVERY TIME you write data 
to disk! Otherwise, writing one file on one disk instantly trashes your 
ability to recover all the other files in the same position on the other 
disks. WHOOPS! But if you think it's a good idea, by all means try and 
do it.
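
To spell that out: with XOR parity the parity block is a function of 
*every* data block in the stripe, so even an isolated write has to be 
a read-modify-write of the parity as well. Something like this sketch 
(my names, buffer handling hand-waved):

/* Why every data write must also update parity (single-parity case).
 * P = d0 ^ d1 ^ ... ^ d(n-1), so changing one data block changes P:
 *   P_new = P_old ^ d_old ^ d_new
 * Skip this and P no longer matches the stripe, so every other block
 * "protected" by it becomes unrecoverable.
 */
#include <stdint.h>
#include <stddef.h>

void parity_rmw(uint8_t *parity, const uint8_t *d_old,
		const uint8_t *d_new, size_t len)
{
	for (size_t i = 0; i < len; i++)
		parity[i] ^= d_old[i] ^ d_new[i];  /* old data out, new in */
}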

The other thing I'd suggest here is to try and make it more like 
raid-5 than raid-4. You have X disks, let's say 5, numbered 0, 1, 2, 
3, 4. As part of formatting each disk ready for raid, you create a 
file containing every block whose LBA mod 5 equals that disk's number. 
So as you recalculate your parities, that's where they go - the parity 
for each stripe rotates across the disks instead of all living on one 
dedicated parity disk.
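
In other words the parity home just rotates with the stripe number, 
raid-5 style. As a trivial sketch of the mapping (my names, assuming 
one parity block per stripe):

/* Rotating (raid-5 style) parity placement across ndisks disks:
 * the parity block for a given stripe lives on disk (stripe % ndisks),
 * matching the "LBA mod 5 == disk number" reservation described above.
 */
#include <stdio.h>

static unsigned parity_disk(unsigned stripe, unsigned ndisks)
{
	return stripe % ndisks;
}

int main(void)
{
	const unsigned ndisks = 5;

	/* Show where parity lands for the first few stripes. */
	for (unsigned stripe = 0; stripe < 10; stripe++)
		printf("stripe %2u -> parity on disk %u\n",
		       stripe, parity_disk(stripe, ndisks));
	return 0;
}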

Cheers,
Wol
