From: "John Stoffel" <john@stoffel.org>
To: antlists <antlists@youngman.org.uk>
Cc: Brian Allen Vanderburg II <brianvanderburg2@aim.com>,
	linux-raid@vger.kernel.org
Subject: Re: Linux raid-like idea
Date: Sat, 12 Sep 2020 13:28:30 -0400	[thread overview]
Message-ID: <24413.1342.676749.275674@quad.stoffel.home> (raw)
In-Reply-To: <9220ea81-3a81-bb98-22e3-be1a123113a1@youngman.org.uk>

>>>>> "antlists" == antlists  <antlists@youngman.org.uk> writes:

antlists> On 11/09/2020 21:14, Brian Allen Vanderburg II wrote:
>> That's right, I get the various combinations confused.  So does raid61
>> allow for losing 4 disks in any order and still recovering? Or would
>> some order of disks make it where losing just 3 disks would be bad?
>> Interesting nonetheless, and I'll have to look into it.  Obviously it's
>> not intended as a replacement for backing up important data but, for
>> me anyway, just a way to minimize loss of any trivial bulk data/files.

antlists> Yup. Raid 6 has two parity disks, and that's mirrored to give four 
antlists> parity disks. So as an *absolute* *minimum*, raid-61 could lose four 
antlists> disks with no data loss.

antlists> Throw in the guarantee that, with a mirror, you can lose an entire 
antlists> mirror with no data-loss, that means - with luck and a following wind - 
antlists> you could lose half your disks, PLUS the two parities in the remaining 
antlists> disks, and still recover your data. So with a raid-6+1, if I had twelve 
antlists> disks, I could lose EIGHT disks and still have a *chance* of recovering 
antlists> my array. I'm not quite sure what difference raid-61 would make.

Of course your useful storage is 12/2 - 2 = 4 disks, so only 33%
usable space.  Not very good.  At that point, I'd just go with RAID 1
pairs striped together with RAID 0 (RAID 1+0), for only a 50% loss
of space.  Now if I lose the wrong four disks I'm screwed, as opposed
to before, where I could lose *any* four disks.
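
To make that capacity math concrete, a rough sketch in Python (the
layout helpers are just illustrative, not md's actual accounting):

    # Back-of-the-envelope usable capacity, assuming raid-61 is two
    # mirrored RAID6 halves and raid-10 is mirrored pairs striped together.
    def raid61_usable(disks, parity=2):
        return disks // 2 - parity      # one half's disks, minus parity

    def raid10_usable(disks):
        return disks // 2               # one disk per mirrored pair

    disks = 12
    print(raid61_usable(disks), "data disks,",
          100 * raid61_usable(disks) // disks, "%")   # 4 disks, 33 %
    print(raid10_usable(disks), "data disks,",
          100 * raid10_usable(disks) // disks, "%")   # 6 disks, 50 %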

The problem with RAID6 is that random small IO writes have terrible
performance.  RAID1+0 gives you much better performance.  Ars Technica
had a great article on this earlier in the year that did actual disk
testing.

I think (though I haven't done the math) that erasure coding gives you
better protection as you scale.  And I've even thought about glusterfs
or cephfs for home storage, using a bunch of small single-board
computers each talking to one disk.  But... it's hard to justify.

These days, I think it's better, performance-wise, for your main
storage to be a three-way mirror for the important stuff, and RAID6
with a hot spare for your large streaming collections like videos and
such.  But even then... it's hard to justify since it costs a lot.

antlists> (That says to me, if I have a raid-61, I need as a minimum a complete 
antlists> set of data disks. That also says to me, if I've splatted an 8+2 raid-61 
antlists> across 11 disks, I only need 7 for a full recovery despite needing a 
antlists> minimum of 8, so something isn't quite right here... I suspect the 7 
antlists> would be enough but I did say my mind goes Whooaaa!!!!)


>> It would be nice if the raid modules had support for methods that could
>> survive a total of more disks lost, in any order, without losing data.
>> Snapraid's source states that it uses a Cauchy matrix algorithm which,
>> in theory, could lose up to 6 disks in any order if using 6 parity
>> disks, and still be able to restore the data.  I'm not familiar with
>> the math behind it so can't speak to the accuracy of that claim.

antlists> That's easy, it's just whether it's worth it. Look at the
antlists> maths behind raid-6. The "one parity disk" methods, 4 or 5,
antlists> just use XOR. But that only works once, a second XOR parity
antlists> disk adds no new redundancy and is worthless. I'm guessing
antlists> raid-6 uses that Cauchy method you talk about - certainly it
antlists> can generate as many parity disks as you like ... so that
antlists> claim is good, even if raid-6 doesn't use that particular
antlists> technique.

antlists> If someone wants to, mod'ing raid-6 to use 3 parity disks
antlists> shouldn't be that hard ...

It's not, but there are diminishing returns, because you now have to
do the RMW cycle across even more disks, which is slow.
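
The "second XOR parity is worthless" point above is easy to
demonstrate.  A toy sketch in Python, where one byte stands in for a
whole block:

    from functools import reduce
    from operator import xor

    data = [0x1A, 0x2B, 0x3C, 0x4D]     # four "disks"
    p = reduce(xor, data)               # XOR parity block

    # Lose any ONE block: XOR of the survivors plus parity recovers it.
    recovered = reduce(xor, data[:2] + data[3:]) ^ p
    assert recovered == data[2]

    # A second XOR parity is byte-for-byte identical to the first, so
    # losing TWO blocks leaves one equation with two unknowns.
    assert reduce(xor, data) == p

That's exactly why RAID6's second parity uses Galois-field
coefficients rather than plain XOR.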

antlists> But going back to your original idea, I've been thinking
antlists> about it. And it struck me - you NEED to regenerate parity
antlists> EVERY TIME you write data to disk! Otherwise, writing one
antlists> file on one disk instantly trashes your ability to recover
antlists> all the other files in the same position on the other
antlists> disks. WHOOPS! But if you think it's a good idea, by all
antlists> means try and do it.

Correct.  When you compute parity, you do it across blocks.  And the
parity calculation is effectively free these days.  The cost comes
from (on spinning disks, at least) the rotational latency to read the
entire stripe across all the disks, modify one to N bytes in that
stripe, then re-write the stripe back to all the disks.  That's a lot
of IO.

With RAID1, you just make two writes, one to each disk.  Done.  Even
with a three-way mirror, it's simpler.

Now, RAID6 works better if you are replacing the entire stripe; then
you can cut your IOs in half, but you still need to write chunks
to different disks.
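
Counting the IOs makes the trade-off obvious.  A rough sketch,
ignoring caching and assuming the short-cut parity update (read old
data plus old parities, write back new ones):

    # IOs to update ONE chunk via read-modify-write with p parities:
    # read old data + p old parities, write new data + p new parities.
    def small_write_ios(parities):
        return 2 * (1 + parities)

    # Replacing an entire stripe needs no reads at all.
    def full_stripe_ios(data_disks, parities):
        return data_disks + parities

    print(small_write_ios(2))       # RAID6 small write: 6 IOs
    print(full_stripe_ios(8, 2))    # 8+2 RAID6 full stripe: 10 writes, 0 reads
    print(2)                        # RAID1: two writes, done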

This is why the big vendors (Netapp, EMC, Isilon, etc.) have log-based
filesystems with battery-backed RAM caches, so they can A) tell the
client the writes are done, B) collect small changes into bigger
chunks, and C) write them in linear fashion down to the disk.
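
A minimal sketch of the idea, not any vendor's actual implementation
(append_to_log is a hypothetical sequential-write helper):

    class NvramLog:
        """Toy write-back log: ack on buffer, flush as one linear write."""
        def __init__(self, flush_threshold=64):
            self.buffer = []
            self.flush_threshold = flush_threshold

        def write(self, block):
            self.buffer.append(block)   # (A) ack as soon as it's in NVRAM
            if len(self.buffer) >= self.flush_threshold:
                self.flush()

        def flush(self):
            # (B) coalesce small writes, (C) lay them down sequentially --
            # ideally a full stripe, so the RAID6 write needs no reads.
            append_to_log(b"".join(self.buffer))  # hypothetical helper
            self.buffer.clear()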

Log-based filesystems are great for this.  Until they get fragmented.
SSDs help in that they really don't have a seek cost at all, so you
can handle fragmentation better.  BUT!  SSDs are generally written to
assuming 512-byte blocks, while the underlying NAND flash now
generally uses 4k pages, so there's another layer of fragmentation,
wear levelling, and other stuff happening outside your control there
as well.
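
That mismatch is easy to quantify; in the worst case, every 512-byte
host write rewrites a whole 4k NAND page:

    PAGE = 4096       # NAND page size
    LOGICAL = 512     # block size the host thinks it's writing
    print(PAGE // LOGICAL)   # 8x write amplification, before wear
                             # levelling adds its own overhead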

antlists> The other thing I'd suggest here, is try and make it more
antlists> like raid-5 than raid-4. You have X disks, let's say 5. So
antlists> one disk each is numbered 0, 1, 2, 3, 4. As part of
antlists> formatting the disk ready for raid, you create a file
antlists> containing every block where LBA mod 5 equals disk
antlists> number. So as you recalculate your parities, that's where
antlists> they go.

RAID4 suffers from the parity disk becoming a super hot spot, since it
needs to get written to no matter what.  No one uses it.
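
That "LBA mod 5" trick is essentially what RAID5 does to fix this:
rotate the parity so the hot spot is spread across all disks.  A
simplified sketch (md's real left-symmetric layout differs in detail):

    # Which disk holds parity for a given stripe: round-robin rotation,
    # so no single disk becomes a RAID4-style write hot spot.
    def parity_disk(stripe, ndisks):
        return stripe % ndisks

    for stripe in range(6):
        print("stripe", stripe, "-> parity on disk", parity_disk(stripe, 5))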

Until we can get back to cost-effective SSDs using SLC NAND, RAID is
here to stay.  And so is mirroring, since it does help protect from a
lot of issues, both permanent and temporary.

I had one of my 4tb disks fall out of my main VG, but I didn't lose
any data; I just checked the disk and added it back in.  I've got a
new 4tb disk on order along with a drive cage so I can balance things
better.

It's almost to the point where it's cheaper to buy a pair of 8tb
drives to replace the 4x4tb drives I'm using now.  But I probably
won't.

I could write for hours here... it's a tough problem space to work
through.

John

