From: "John Stoffel" <>
To: LVM general discussion and development <>
Cc: Daniel Janzon <>
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
Date: Tue, 10 Dec 2019 16:29:18 -0500	[thread overview]
Message-ID: <24048.3630.705344.759092@quad.stoffel.home> (raw)
In-Reply-To: <>

>>>>> "Gionatan" == Gionatan Danti <> writes:

Gionatan> On 09/12/19 11:26, Daniel Janzon wrote:
>> Exactly. The md driver executes on a single core, but with a bunch of RAID5s
>> I can distribute the load over many cores. That's also why I cannot join the
>> bunch of RAID5's with a RAID0 (as someone suggested) because then again
>> all data is pulled through a single core.

Gionatan> MD RAID0 is extremely fast; using a single core at the
Gionatan> striping level should pose no problem. Did you actually
Gionatan> try this setup?

Gionatan> Anyway, the suggestion from Guoqing Jiang sounds promising. Let me quote him:

>> Perhaps set "/sys/block/mdx/md/group_thread_cnt" could help here,
>> see below commits:
>> commit b721420e8719131896b009b11edbbd27d9b85e98
>> Author: Shaohua Li <>
>> Date:   Tue Aug 27 17:50:42 2013 +0800
>> raid5: sysfs entry to control worker thread number
>> commit 851c30c9badfc6b294c98e887624bff53644ad21
>> Author: Shaohua Li <>
>> Date:   Wed Aug 28 14:30:16 2013 +0800
>> raid5: offload stripe handle to workqueue

I think this requires a much newer kernel; since he's running RHEL7
with its 3.10.x kernel (plus Red Hat patches and such), that feature
doesn't exist.  I just checked on one of my RHEL7.6 systems and I
don't see that option, and a four-device RAID5 array I just created
doesn't have it either.
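For anyone on a kernel that does carry those patches (3.12 or later),
checking and setting the knob looks roughly like this. This is a
sketch: md0 is a placeholder for one of your RAID5 arrays, and the
writes need root:

```shell
# Check whether this kernel exposes the RAID5 worker-thread count;
# the sysfs entry only exists on kernels with the patches quoted above.
cat /sys/block/md0/md/group_thread_cnt   # 0 means no workqueue offload

# Offload stripe handling to 4 worker threads:
echo 4 > /sys/block/md0/md/group_thread_cnt
```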

So I think maybe you need to try something like (note that mdadm -C
needs the array device name and a device count):

  mdadm -C /dev/md/md_stripe -l 0 -n 8 -c 64 /dev/md_raid5[1-8]

But thinking some more, maybe you want to pin the RAID5 threads for
each of your RAID5s to a separate CPU using cpusets?  Maybe that will
help performance?
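A rough sketch of that idea, assuming eight arrays md1..md8 whose
kernel threads are named mdN_raid5 (the thread names, array names, and
sensible CPU numbering will all vary on your system; needs root):

```shell
# Pin each RAID5 kernel thread to its own CPU core.
for i in $(seq 1 8); do
  pid=$(pgrep -x "md${i}_raid5") || continue   # skip arrays that don't exist
  taskset -cp "$((i - 1))" "$pid"              # array mdN -> CPU N-1
done
```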

But wait, why use an MD stripe on top of the RAID5 setup?  Or are
you?

Can you please provide the setup of the system?

cat /proc/mdstat
vgs -av
pvs -av
lvs -av

Just so we can look at what you're doing?

Also, what's the queue depth of your devices?  Maybe with NVMe you can
bump it up higher?  Or maybe it wants to be lower... something else to
experiment with.


Thread overview: 14+ messages
2019-12-09 10:26 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
2019-12-09 14:26 ` Marian Csontos
2019-12-10 11:23 ` Gionatan Danti
2019-12-10 21:29   ` John Stoffel [this message]
  -- strict thread matches above, loose matches on Subject: below --
2019-12-16  8:22 Daniel Janzon
2019-10-29  8:47 Daniel Janzon
2019-12-07 16:16 ` Anatoly Pugachev
2019-12-07 17:37   ` Roberto Fastec
2019-12-07 20:34     ` Stuart D. Gathman
2019-12-07 22:44       ` John Stoffel
2019-12-07 23:14         ` Stuart D. Gathman
2019-12-08 11:57           ` Gionatan Danti
2019-12-08 22:51           ` John Stoffel
2019-12-09 10:40         ` Guoqing Jiang
