linux-lvm.redhat.com archive mirror
From: "John Stoffel" <john@stoffel.org>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
Date: Sun, 8 Dec 2019 17:51:39 -0500	[thread overview]
Message-ID: <24045.32379.341578.79820@quad.stoffel.home> (raw)
In-Reply-To: <alpine.LRH.2.21.1912071804540.9108@fairfax.gathman.org>

>>>>> "Stuart" == Stuart D Gathman <stuart@gathman.org> writes:

Stuart> On Sat, 7 Dec 2019, John Stoffel wrote:
>> The biggest harm to performance here is really the RAID5, and if you
>> can instead move to RAID 10 (mirror then stripe across mirrors) then
>> you should see a performance boost.

Stuart> Yeah, that's what I do: RAID10, and use LVM to join them
Stuart> together as a JBOD.  I forgot about the RAID5 bottleneck part, sorry.

Yeah, it's not ideal, and I don't know enough about the code to figure
out if it's even possible to fix that issue without major
restructuring.  
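
For reference, a rough sketch of the RAID10 side with mdadm might look
like this (device names and chunk size are just placeholders, adjust
to the actual hardware):

  # 4-device RAID10, "near" layout with 2 copies of each block
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        --layout=n2 --chunk=512 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1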

>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>> single thread for RAID5 is a big bottleneck.

>> I assume he wants to use LVM so he can create volume(s) larger than
>> individual RAID5 volumes, so in that case, I'd probably just build a
>> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully

Stuart> Wait, that's what I suggested!

Must have missed that, sorry!  Again, let's see if the original poster
can provide more details of the setup. 
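
Just so there's something concrete in the archive, the non-striped VG
idea above would be roughly this (the md names are made up, substitute
the real arrays):

  pvcreate /dev/md0 /dev/md1 /dev/md2
  vgcreate bigvg /dev/md0 /dev/md1 /dev/md2
  # linear (non-striped) LV spanning all the arrays
  lvcreate -n data -l 100%FREE bigvg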

>> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
>> though you do have the problem where a double disk failure could kill
>> your data if it happens to both halves of a mirror.

Stuart> No worse than raid5.  In fact, better because the 2nd fault
Stuart> always kills the raid5, but only has a 33% or less chance of
Stuart> killing the raid10.  (And in either case, it is usually just
Stuart> specific sectors, not the entire drive, and other manual
Stuart> recovery techniques can come into play.)
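
To put rough numbers on that: with two mirror pairs (four drives), a
second failure lands on the surviving half of the degraded pair with
probability 1/3 (~33%); with three pairs it's 1/5, and so on.  A second
failure in RAID5 always takes out the array.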

I don't know the failure modes of NVMe drives, but a bunch of SSDs
didn't so much fail single sectors as just up and die instantly,
without any chance of recovery.  So I worry about NVMe drive failure
modes, and I'd want some hot spares in the system if at all possible,
because you know they're going to fail just as you get home and stop
checking email... so having the array rebuild automatically is a big
help.  If your business can afford it.  Can it afford not to?  :-)
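
For what it's worth, adding a hot spare to an existing md array is as
simple as (again, a placeholder device name):

  # the new device sits as a spare until a member fails, then md
  # rebuilds onto it automatically
  mdadm /dev/md0 --add /dev/nvme4n1

and mdadm --monitor (e.g. "mdadm --monitor --scan --daemonise") can
mail you when that happens.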

John

Thread overview: 14+ messages
2019-10-29  8:47 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
2019-12-07 16:16 ` Anatoly Pugachev
2019-12-07 17:37   ` Roberto Fastec
2019-12-07 20:34     ` Stuart D. Gathman
2019-12-07 22:44       ` John Stoffel
2019-12-07 23:14         ` Stuart D. Gathman
2019-12-08 11:57           ` Gionatan Danti
2019-12-08 22:51           ` John Stoffel [this message]
2019-12-09 10:40         ` Guoqing Jiang
2019-12-09 10:26 Daniel Janzon
2019-12-09 14:26 ` Marian Csontos
2019-12-10 11:23 ` Gionatan Danti
2019-12-10 21:29   ` John Stoffel
2019-12-16  8:22 Daniel Janzon
