From: "John Stoffel" <john@stoffel.org>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
Date: Sat, 7 Dec 2019 17:44:02 -0500	[thread overview]
Message-ID: <24044.11058.338208.602498@quad.stoffel.home> (raw)
In-Reply-To: <alpine.LRH.2.21.1912071532450.27214@fairfax.gathman.org>

>>>>> "Stuart" == Stuart D Gathman <stuart@gathman.org> writes:

Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon <daniel.janzon@edgeware.tv> wrote:
>> I have a server with very high load using four NVMe SSDs and
>> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
>> Using one RAID5 volume does not work well since the driver can only
>> utilize one CPU core which spikes at 100% and harms performance.
>> Therefore I created 8 partitions on each disk, and 8 RAID5s across
>> the four disks.

>> Now I want to bring them together with LVM. If I do not use a striped
>> volume I get high performance (in expected magnitude according to disk
>> specs). But when I use a striped volume, performance drops to a
>> magnitude below. The reason I am looking for a striped setup is to

Stuart> The mdadm layer already does the striping.  So doing it again
Stuart> in the LVM layer completely screws it up.  You want plain JBOD
Stuart> (Just a Bunch Of Disks).

Umm... not really.  The problem here is more the MD layer not being
able to run RAID5 across multiple cores at the same time, which is why
he split things the way he did.
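(For anyone following along, I read Daniel's layout as roughly the
below -- device names are my guess, his post doesn't give them:

  # one RAID5 per partition "slice" across the four NVMe drives
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1
  # ...repeated for p2 through p8, ending up with /dev/md0../dev/md7

so each md device gets its own raid5 kernel thread.)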

But we don't know the kernel version, the LVM version, or the OS
release, so it's hard to give better ideas of what to do.
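Even just the output of something like:

  uname -r
  lvm version
  cat /etc/os-release
  cat /proc/mdstat

would narrow things down a lot.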

The biggest harm to performance here is really the RAID5, and if you
can instead move to RAID10 (mirror pairs with a stripe across them)
then you should see a performance boost.

As Daniel says, he's got lots of disk load, but plenty of CPU, so the
single thread for RAID5 is a big bottleneck.

I assume he wants to use LVM so he can create volume(s) larger than
the individual RAID5 volumes, so in that case I'd probably just build
a regular non-striped (linear) LVM VG holding all eight RAID5
devices.  The parity should be rotated across all the partitions (the
md RAID5 default layout does that), and NVMe drives should have
enough IOPS capacity to mask the read-modify-write cost of RAID5 to a
degree.

In any case, I'd just build it like:

  pvcreate /dev/md#              (do this for each of the 8 RAID5 MD devices)
  vgcreate datavg /dev/md[#-#]   (give all 8 RAID5 MD devices here)
  lvcreate -n "name" -L <size> datavg
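Once that's done, something like the following (names match the
sketch above) should confirm the LV is allocated linearly across all
eight md devices:

  pvs
  vgs datavg
  lvs -o +devices datavg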

And then test your performance.  Since you only have four disks, the 8
RAID5 volumes in your VG are all going to suck for small writes, but
NVMe SSDs will mask that to an extent.
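If you want repeatable numbers, one suggestion: run fio against the
new LV *before* you put a filesystem or data on it, since writing to
the raw device is destructive.  Something along these lines, with the
LV path assuming the names used above:

  # 4k random writes, which is where RAID5 hurts most
  fio --name=randwrite --filename=/dev/datavg/name --direct=1 \
      --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
      --numjobs=4 --runtime=60 --time_based --group_reporting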

If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
though you do have the problem where a double disk failure could kill
your data if it happens to both halves of a mirror.
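If you did go that way, a whole-disk RAID10 across the four drives
would look roughly like this (again, device names are my assumption):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

and you'd put the PV directly on that, no partition games needed.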

But, numbers talk, BS walks.  So if the original poster can provide
some details and numbers... then maybe we can help more.

John


Thread overview: 14+ messages
2019-10-29  8:47 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
2019-12-07 16:16 ` Anatoly Pugachev
2019-12-07 17:37   ` Roberto Fastec
2019-12-07 20:34     ` Stuart D. Gathman
2019-12-07 22:44       ` John Stoffel [this message]
2019-12-07 23:14         ` Stuart D. Gathman
2019-12-08 11:57           ` Gionatan Danti
2019-12-08 22:51           ` John Stoffel
2019-12-09 10:40         ` Guoqing Jiang
2019-12-09 10:26 Daniel Janzon
2019-12-09 14:26 ` Marian Csontos
2019-12-10 11:23 ` Gionatan Danti
2019-12-10 21:29   ` John Stoffel
2019-12-16  8:22 Daniel Janzon
