linux-lvm.redhat.com archive mirror
* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
@ 2019-12-09 10:26 Daniel Janzon
  2019-12-09 14:26 ` Marian Csontos
  2019-12-10 11:23 ` Gionatan Danti
  0 siblings, 2 replies; 14+ messages in thread
From: Daniel Janzon @ 2019-12-09 10:26 UTC (permalink / raw)
  To: linux-lvm


> From: "John Stoffel" <john@stoffel.org>

> Stuart> The mdadm layer already does the striping.  So doing it again
> Stuart> in the LVM layer completely screws it up.  You want plain JBOD
> Stuart> (Just a Bunch Of Disks).

> Umm... not really.  The problem here is more the MD layer not being
> able to run RAID5 across multiple cores at the same time, which is why
> he split things the way he did.

Exactly. The md driver executes on a single core, but with a bunch of RAID5s
I can distribute the load over many cores. That's also why I cannot join the
bunch of RAID5s with a RAID0 (as someone suggested), because then again
all data is pulled through a single core.

> But we don't know the Kernel version, the LVM version, or the OS
> release so as to give better ideas of what to do.

It is Red Hat 7, kernel 3.10; the scheduler seems to be "[none] mq-deadline kyber"
according to /sys/block/nvme0n1/queue/scheduler, i.e. "none" is selected.
The LVM version is 2.02.185(2)-RHEL7.

But I wonder if fine-tuning e.g. the I/O scheduler is going to cut it, since I am
looking for something like a 10x improvement.

> The biggest harm to performance here is really the RAID5, and if you
> can instead move to RAID 10 (mirror then stripe across mirrors) then
> you should see a performance boost.

The origin of my problem is indeed the poor performance of RAID5,
which maxes out the single core the driver runs on. But if I accept that
as a given, the next problem is LVM striping, since I do get 10x better
performance with linear allocation.

> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
> single thread for RAID5 is a big bottleneck.

Yes. That should be fixed since NVMe SSDs now outperform a single
CPU core. But that's a topic for another mailing list I suppose.

> I assume he wants to use LVM so he can create volume(s) larger than
> individual RAID5 volumes, so in that case, I'd probably just build a
> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully
> the parity is spread across all the partitions, though NVMe
> drives should have enough IOPS capacity to mask the RMW cost of RAID5
> to a degree.

The problem is the linear allocation of LVM. It will tend to fill the first
RAID5 first, then the next, etc. The access pattern is such that files
written close in time will be read close in time. We have live video
streams being written and read 24/7. What I want to avoid is that at
some point a majority of all reads end up on a single RAID5, which
will then fail to perform. That is bound to happen in an always-on system.

> In any case, I'd just build it like:
>
>  pvcreate /dev/md#     (do for each of 8 RAID5 MD devices)
>  vgcreate datavg /dev/md[#-#]   (give all 8 RAID5 MD devices here)
>  lvcreate -n "name" -L <size> datavg

I think this is basically what I did, what I refer to as a "linearly allocated"
group as compared to a striped one. It does indeed perform well most of
the time, but has, I believe, a poor guarantee for the worst case.


> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
> though you do have the problem where a double disk failure could kill
> your data if it happens to both halves of a mirror.

Yes, throwing money at the problem is a good way to solve it. I was
hoping to avoid that for this application since I thought I had just done
something wrong with the stripes.

Kind Regards,
Daniel


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-09 10:26 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
@ 2019-12-09 14:26 ` Marian Csontos
  2019-12-10 11:23 ` Gionatan Danti
  1 sibling, 0 replies; 14+ messages in thread
From: Marian Csontos @ 2019-12-09 14:26 UTC (permalink / raw)
  To: LVM general discussion and development, Daniel Janzon

On 12/9/19 11:26 AM, Daniel Janzon wrote:
> 

> The origin of my problem is indeed the poor performance of RAID5,
> which maxes out the single core the driver runs on. But if I accept that
> as a given, the next problem is LVM striping, since I do get 10x better

What stripe size was used for the striped LV? IIRC the default is 64k.

IIUC you are serving mostly large files. I have no numbers, and no HW to
test the hypothesis, but using a larger stripe size could help here, as this
would still spread the load over multiple RAID5 volumes while not splitting
the IOs too early into too many small requests.

-- Marian


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-09 10:26 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
  2019-12-09 14:26 ` Marian Csontos
@ 2019-12-10 11:23 ` Gionatan Danti
  2019-12-10 21:29   ` John Stoffel
  1 sibling, 1 reply; 14+ messages in thread
From: Gionatan Danti @ 2019-12-10 11:23 UTC (permalink / raw)
  To: LVM general discussion and development, Daniel Janzon

On 09/12/19 11:26, Daniel Janzon wrote:
> Exactly. The md driver executes on a single core, but with a bunch of RAID5s
> I can distribute the load over many cores. That's also why I cannot join the
> bunch of RAID5s with a RAID0 (as someone suggested), because then again
> all data is pulled through a single core.

MD RAID0 is extremely fast; using a single core at the striping level
should pose no problem. Did you actually try this setup?

Anyway, the suggestion from Guoqing Jiang sounds promising. Let me quote him:

> Perhaps set "/sys/block/mdx/md/group_thread_cnt" could help here,
> see below commits:
> 
> commit b721420e8719131896b009b11edbbd27d9b85e98
> Author: Shaohua Li <shli@kernel.org>
> Date:   Tue Aug 27 17:50:42 2013 +0800
> 
>      raid5: sysfs entry to control worker thread number
> 
> commit 851c30c9badfc6b294c98e887624bff53644ad21
> Author: Shaohua Li <shli@kernel.org>
> Date:   Wed Aug 28 14:30:16 2013 +0800
> 
>      raid5: offload stripe handle to workqueue

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-10 11:23 ` Gionatan Danti
@ 2019-12-10 21:29   ` John Stoffel
  0 siblings, 0 replies; 14+ messages in thread
From: John Stoffel @ 2019-12-10 21:29 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Daniel Janzon

>>>>> "Gionatan" == Gionatan Danti <g.danti@assyoma.it> writes:

Gionatan> On 09/12/19 11:26, Daniel Janzon wrote:
>> Exactly. The md driver executes on a single core, but with a bunch of RAID5s
>> I can distribute the load over many cores. That's also why I cannot join the
>> bunch of RAID5s with a RAID0 (as someone suggested), because then again
>> all data is pulled through a single core.

Gionatan> MD RAID0 is extremely fast; using a single core at the
Gionatan> striping level should pose no problem. Did you actually
Gionatan> try this setup?

Gionatan> Anyway, the suggestion from Guoqing Jiang sounds promising. Let me quote him:

>> Perhaps set "/sys/block/mdx/md/group_thread_cnt" could help here,
>> see below commits:
>> 
>> commit b721420e8719131896b009b11edbbd27d9b85e98
>> Author: Shaohua Li <shli@kernel.org>
>> Date:   Tue Aug 27 17:50:42 2013 +0800
>> 
>> raid5: sysfs entry to control worker thread number
>> 
>> commit 851c30c9badfc6b294c98e887624bff53644ad21
>> Author: Shaohua Li <shli@kernel.org>
>> Date:   Wed Aug 28 14:30:16 2013 +0800
>> 
>> raid5: offload stripe handle to workqueue

I think this requires a much newer kernel, but since he's running on
RHEL7 using kernel 3.10.x with RH patches and such, that feature
doesn't exist.  I just checked on one of my RHEL7.6 systems and I
don't see that option.  And I just set up a four-device RAID5 and
it doesn't have that option.

So I think maybe you need to try:

  mdadm -C /dev/md/md_stripe -l 0 -c 64 -n 8 /dev/md_raid5[1-8]

But thinking some more, maybe you want to pin the RAID5 threads for
each of your RAID5s to a separate CPU using cpusets?  Maybe that will
help performance?
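
Something along these lines, perhaps, using taskset on the per-array
kernel threads rather than full cpusets (the array names and CPU numbers
below are just placeholders, untested):

  # pin each RAID5 kernel thread (mdN_raid5) to its own core
  for i in 1 2 3 4 5 6 7 8; do
      taskset -pc "$i" "$(pgrep -x md${i}_raid5)"
  done

No idea if it buys anything over letting the scheduler spread them out,
but it is cheap to try.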

But wait, why use an MD stripe on top of the RAID5 setup?  Or
are you?

Can you please provide the setup of the system?

cat /proc/mdstat
vgs -av
pvs -av
lvs -av

Just so we can look at what you're doing?

Also, what's the queue depth of your devices?  Maybe with NVMe you can
bump it up higher?  Or maybe it wants to be lower... something else to
check.
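
For example, the block-layer side of that can be checked and tweaked per
device via sysfs (nvme0n1 and the value 1023 are just placeholders to
experiment with):

  cat /sys/block/nvme0n1/queue/nr_requests
  echo 1023 > /sys/block/nvme0n1/queue/nr_requests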

John


* [linux-lvm] Best way to run LVM over multiple SW RAIDs?
@ 2019-12-16  8:22 Daniel Janzon
  0 siblings, 0 replies; 14+ messages in thread
From: Daniel Janzon @ 2019-12-16  8:22 UTC (permalink / raw)
  To: linux-lvm

> From: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
>>On 12/7/19 11:44 PM, John Stoffel wrote:
>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>> single thread for RAID5 is a big bottleneck.

>Perhaps set "/sys/block/mdx/md/group_thread_cnt" could help here,

Now I finally had a chance to test this. It turns out to work great! It's not as
fast as a non-raided, linearly allocated LVM volume (about half the performance,
but without the fat tail of high read/write response times). So there is a price
for redundancy, but it is worth it in my application. It's now in the same order
of magnitude.

Thanks a lot, Guoqing! You really helped me here. I'd also like to thank John Stoffel for his valuable input.


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 22:44       ` John Stoffel
  2019-12-07 23:14         ` Stuart D. Gathman
@ 2019-12-09 10:40         ` Guoqing Jiang
  1 sibling, 0 replies; 14+ messages in thread
From: Guoqing Jiang @ 2019-12-09 10:40 UTC (permalink / raw)
  To: LVM general discussion and development, John Stoffel



On 12/7/19 11:44 PM, John Stoffel wrote:
>>>>>> "Stuart" == Stuart D Gathman<stuart@gathman.org>  writes:
> Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon<daniel.janzon@edgeware.tv>  wrote:
>>> I have a server with very high load using four NVMe SSDs and
>>> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
>>> Using one RAID5 volume does not work well since the driver can only
>>> utilize one CPU core which spikes at 100% and harms performance.
>>> Therefore I created 8 partitions on each disk, and 8 RAID5s across
>>> the four disks.
>>> Now I want to bring them together with LVM. If I do not use a striped
>>> volume I get high performance (in expected magnitude according to disk
>>> specs). But when I use a striped volume, performance drops to a
>>> magnitude below. The reason I am looking for a striped setup is to
> Stuart> The mdadm layer already does the striping.  So doing it again
> Stuart> in the LVM layer completely screws it up.  You want plain JBOD
> Stuart> (Just a Bunch Of Disks).
>
> Umm... not really.  The problem here is more the MD layer not being
> able to run RAID5 across multiple cores at the same time, which is why
> he split things the way he did.
>
> But we don't know the Kernel version, the LVM version, or the OS
> release so as to give better ideas of what to do.
>
> The biggest harm to performance here is really the RAID5, and if you
> can instead move to RAID 10 (mirror then stripe across mirrors) then
> you should see a performance boost.
>
> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
> single thread for RAID5 is a big bottleneck.

Perhaps set "/sys/block/mdx/md/group_thread_cnt" could help here,
see below commits:

commit b721420e8719131896b009b11edbbd27d9b85e98
Author: Shaohua Li <shli@kernel.org>
Date:   Tue Aug 27 17:50:42 2013 +0800

     raid5: sysfs entry to control worker thread number

commit 851c30c9badfc6b294c98e887624bff53644ad21
Author: Shaohua Li <shli@kernel.org>
Date:   Wed Aug 28 14:30:16 2013 +0800

     raid5: offload stripe handle to workqueue
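
In practice it is just a sysfs write per array, something like the
following (md1..md8 and the count of 4 are only examples):

  for md in md1 md2 md3 md4 md5 md6 md7 md8; do
      echo 4 > /sys/block/$md/md/group_thread_cnt
  done

Note the value is not persistent across reboots, so it would need to be
reapplied from a udev rule or an init script.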

Thanks,
Guoqing


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 23:14         ` Stuart D. Gathman
  2019-12-08 11:57           ` Gionatan Danti
@ 2019-12-08 22:51           ` John Stoffel
  1 sibling, 0 replies; 14+ messages in thread
From: John Stoffel @ 2019-12-08 22:51 UTC (permalink / raw)
  To: LVM general discussion and development

>>>>> "Stuart" == Stuart D Gathman <stuart@gathman.org> writes:

Stuart> On Sat, 7 Dec 2019, John Stoffel wrote:
>> The biggest harm to performance here is really the RAID5, and if you
>> can instead move to RAID 10 (mirror then stripe across mirrors) then
>> you should see a performance boost.

Stuart> Yeah, that's what I do: RAID10, and use LVM to join them together as JBOD.
Stuart> I forgot about the RAID5 bottleneck part, sorry.

Yeah, it's not ideal, and I don't know enough about the code to figure
out if it's even possible to fix that issue without major
restructuring.  

>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>> single thread for RAID5 is a big bottleneck.

>> I assume he wants to use LVM so he can create volume(s) larger than
>> individual RAID5 volumes, so in that case, I'd probably just build a
>> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully

Stuart> Wait, that's what I suggested!

Must have missed that, sorry!  Again, let's see if the original poster
can provide more details of the setup. 

>> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
>> though you do have the problem where a double disk failure could kill
>> your data if it happens to both halves of a mirror.

Stuart> No worse than raid5.  In fact, better because the 2nd fault
Stuart> always kills the raid5, but only has a 33% or less chance of
Stuart> killing the raid10.  (And in either case, it is usually just
Stuart> specific sectors, not the entire drive, and other manual
Stuart> recovery techniques can come into play.)

I don't know the failure mode of NVMe drives, but a bunch of SSDs didn't
so much fail single sectors as just up and die instantly, without any
chance of recovery.  So I worry about the NVMe drive failure modes, and
I'd want some hot spares in the system if at all possible, because you
know they're going to fail just as you get home and stop checking
email... so having it rebuild automatically is a big help.  If your
business can afford it.  Can it afford not to?  :-)

John


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 23:14         ` Stuart D. Gathman
@ 2019-12-08 11:57           ` Gionatan Danti
  2019-12-08 22:51           ` John Stoffel
  1 sibling, 0 replies; 14+ messages in thread
From: Gionatan Danti @ 2019-12-08 11:57 UTC (permalink / raw)
  To: LVM general discussion and development

On 08-12-2019 00:14 Stuart D. Gathman wrote:
> On Sat, 7 Dec 2019, John Stoffel wrote:
> 
>> The biggest harm to performance here is really the RAID5, and if you
>> can instead move to RAID 10 (mirror then stripe across mirrors) then
>> you should see a performance boost.
> 
> Yeah, that's what I do: RAID10, and use LVM to join them together as JBOD.
> I forgot about the RAID5 bottleneck part, sorry.
> 
>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>> single thread for RAID5 is a big bottleneck.
> 
>> I assume he wants to use LVM so he can create volume(s) larger than
>> individual RAID5 volumes, so in that case, I'd probably just build a
>> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully
> 
> Wait, that's what I suggested!
> 
>> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
>> though you do have the problem where a double disk failure could kill
>> your data if it happens to both halves of a mirror.
> 
> No worse than raid5.  In fact, better because the 2nd fault always
> kills the raid5, but only has a 33% or less chance of killing the
> raid10.  (And in either case, it is usually just specific sectors,
> not the entire drive, and other manual recovery techniques can come
> into play.)

While I agree with both (especially regarding RAID10), I propose another 
setup: an MD RAID0 of the eight MD RAID5 arrays.
If I remember correctly, LVM's striping code is based on device mapper
rather than on the MD RAID code. Maybe the latter is more efficient at striping
on fast NVMe drives?

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 22:44       ` John Stoffel
@ 2019-12-07 23:14         ` Stuart D. Gathman
  2019-12-08 11:57           ` Gionatan Danti
  2019-12-08 22:51           ` John Stoffel
  2019-12-09 10:40         ` Guoqing Jiang
  1 sibling, 2 replies; 14+ messages in thread
From: Stuart D. Gathman @ 2019-12-07 23:14 UTC (permalink / raw)
  To: LVM general discussion and development

On Sat, 7 Dec 2019, John Stoffel wrote:

> The biggest harm to performance here is really the RAID5, and if you
> can instead move to RAID 10 (mirror then stripe across mirrors) then
> you should see a performance boost.

Yeah, that's what I do: RAID10, and use LVM to join them together as JBOD.
I forgot about the RAID5 bottleneck part, sorry.

> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
> single thread for RAID5 is a big bottleneck.

> I assume he wants to use LVM so he can create volume(s) larger than
> individual RAID5 volumes, so in that case, I'd probably just build a
> regular non-striped LVM VG holding all your RAID5 disks.  Hopefully

Wait, that's what I suggested!

> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
> though you do have the problem where a double disk failure could kill
> your data if it happens to both halves of a mirror.

No worse than raid5.  In fact, better because the 2nd fault always
kills the raid5, but only has a 33% or less chance of killing the
raid10.  (And in either case, it is usually just specific sectors,
not the entire drive, and other manual recovery techniques can come into
play.)

-- 
 	      Stuart D. Gathman <stuart@gathman.org>
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 20:34     ` Stuart D. Gathman
@ 2019-12-07 22:44       ` John Stoffel
  2019-12-07 23:14         ` Stuart D. Gathman
  2019-12-09 10:40         ` Guoqing Jiang
  0 siblings, 2 replies; 14+ messages in thread
From: John Stoffel @ 2019-12-07 22:44 UTC (permalink / raw)
  To: LVM general discussion and development

>>>>> "Stuart" == Stuart D Gathman <stuart@gathman.org> writes:

Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon <daniel.janzon@edgeware.tv> wrote:
>> I have a server with very high load using four NVMe SSDs and
>> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
>> Using one RAID5 volume does not work well since the driver can only
>> utilize one CPU core which spikes at 100% and harms performance.
>> Therefore I created 8 partitions on each disk, and 8 RAID5s across
>> the four disks.

>> Now I want to bring them together with LVM. If I do not use a striped
>> volume I get high performance (in expected magnitude according to disk
>> specs). But when I use a striped volume, performance drops to a
>> magnitude below. The reason I am looking for a striped setup is to

Stuart> The mdadm layer already does the striping.  So doing it again
Stuart> in the LVM layer completely screws it up.  You want plain JBOD
Stuart> (Just a Bunch Of Disks).

Umm... not really.  The problem here is more the MD layer not being
able to run RAID5 across multiple cores at the same time, which is why
he split things the way he did.

But we don't know the Kernel version, the LVM version, or the OS
release so as to give better ideas of what to do.

The biggest harm to performance here is really the RAID5, and if you
can instead move to RAID 10 (mirror then stripe across mirrors) then
you should see a performance boost.

As Daniel says, he's got lots of disk load, but plenty of CPU, so the
single thread for RAID5 is a big bottleneck.

I assume he wants to use LVM so he can create volume(s) larger than
individual RAID5 volumes, so in that case, I'd probably just build a
regular non-striped LVM VG holding all your RAID5 disks.  Hopefully
the parity is spread across all the partitions, though NVMe
drives should have enough IOPS capacity to mask the RMW cost of RAID5
to a degree.

In any case, I'd just build it like:

  pvcreate /dev/md#     (do for each of 8 RAID5 MD devices)
  vgcreate datavg /dev/md[#-#]   (give all 8 RAID5 MD devices here)
  lvcreate -n "name" -L <size> datavg

And then test your performance.  Since you only have four disks, the 8
RAID5 volumes in your VG are all going to suck for small writes, but
NVMe SSDs will mask that to an extent.
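
If it helps, a direct-I/O fio run against the LV gives numbers that are
comparable between layouts. The path and job parameters below are only
placeholders, and it overwrites the LV, so only run it before real data
goes on:

  fio --name=seqwrite --filename=/dev/datavg/name --direct=1 --rw=write \
      --bs=1M --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting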

If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
though you do have the problem where a double disk failure could kill
your data if it happens to both halves of a mirror.

But, numbers talk, BS walks.  So if the original poster can provide
some details and numbers... then maybe we can help more.

John


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 17:37   ` Roberto Fastec
@ 2019-12-07 20:34     ` Stuart D. Gathman
  2019-12-07 22:44       ` John Stoffel
  0 siblings, 1 reply; 14+ messages in thread
From: Stuart D. Gathman @ 2019-12-07 20:34 UTC (permalink / raw)
  To: LVM general discussion and development

On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon <daniel.janzon@edgeware.tv> wrote:
> I have a server with very high load using four NVMe SSDs and
> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
> Using one RAID5 volume does not work well since the driver can only
> utilize one CPU core which spikes at 100% and harms performance.
> Therefore I created 8 partitions on each disk, and 8 RAID5s across
> the four disks.

> Now I want to bring them together with LVM. If I do not use a striped
> volume I get high performance (in expected magnitude according to disk
> specs). But when I use a striped volume, performance drops to a
> magnitude below. The reason I am looking for a striped setup is to

The mdadm layer already does the striping.  So doing it again in the LVM
layer completely screws it up.  You want plain JBOD (Just a Bunch
Of Disks).

-- 
 	      Stuart D. Gathman <stuart@gathman.org>
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.


* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-12-07 16:16 ` Anatoly Pugachev
@ 2019-12-07 17:37   ` Roberto Fastec
  2019-12-07 20:34     ` Stuart D. Gathman
  0 siblings, 1 reply; 14+ messages in thread
From: Roberto Fastec @ 2019-12-07 17:37 UTC (permalink / raw)
  To: LVM general discussion and development

Have you thought about RAID 50?




  Original message


From: matorola@gmail.com
Sent: 7 December 2019 17:17
To: linux-lvm@redhat.com
Reply to: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?


On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon
<daniel.janzon@edgeware.tv> wrote:
>
> Hello,
>
> I have a server with very high load using four NVMe SSDs and therefore no HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5 volume does not work well since the driver can only utilize one CPU core which spikes at 100% and harms performance. Therefore I created 8 partitions on each disk, and 8 RAID5s across the four disks.
>
> Now I want to bring them together with LVM. If I do not use a striped volume I get high performance (in expected magnitude according to disk specs). But when I use a striped volume, performance drops to a magnitude below. The reason I am looking for a striped setup is to ensure that data is spread well over the drive to guarantee a good worst-case performance. With linear allocation rather than striped, if load is directed to files on the first PV (a SW RAID) the system is again exposed to the 1-core limitation.
>
> I tried "--stripes 8 --stripesize 512", and would appreciate any ideas of other things to try. I guess the performance hit can be in the file system as well. I tried XFS and EXT4 with default settings.

Daniel,

A bit more about your system? Like kernel version, I/O scheduler, etc.
Have you tried recent kernels' MQ (multi-queue) schedulers (noop,
deadline)?




* Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
  2019-10-29  8:47 Daniel Janzon
@ 2019-12-07 16:16 ` Anatoly Pugachev
  2019-12-07 17:37   ` Roberto Fastec
  0 siblings, 1 reply; 14+ messages in thread
From: Anatoly Pugachev @ 2019-12-07 16:16 UTC (permalink / raw)
  To: LVM general discussion and development

On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon
<daniel.janzon@edgeware.tv> wrote:
>
> Hello,
>
> I have a server with very high load using four NVMe SSDs and therefore no HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5 volume does not work well since the driver can only utilize one CPU core which spikes at 100% and harms performance. Therefore I created 8 partitions on each disk, and 8 RAID5s across the four disks.
>
> Now I want to bring them together with LVM. If I do not use a striped volume I get high performance (in expected magnitude according to disk specs). But when I use a striped volume, performance drops to a magnitude below. The reason I am looking for a striped setup is to ensure that data is spread well over the drive to guarantee a good worst-case performance. With linear allocation rather than striped, if load is directed to files on the first PV (a SW RAID) the system is again exposed to the 1-core limitation.
>
> I tried "--stripes 8 --stripesize 512", and would appreciate any ideas of other things to try. I guess the performance hit can be in the file system as well. I tried XFS and EXT4 with default settings.

Daniel,

A bit more about your system? Like kernel version, I/O scheduler, etc.
Have you tried recent kernels' MQ (multi-queue) schedulers (noop,
deadline)?
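
For reference, the active scheduler can be checked and switched per
device like this (nvme0n1 is just an example; needs root):

  cat /sys/block/nvme0n1/queue/scheduler
  echo none > /sys/block/nvme0n1/queue/scheduler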


* [linux-lvm] Best way to run LVM over multiple SW RAIDs?
@ 2019-10-29  8:47 Daniel Janzon
  2019-12-07 16:16 ` Anatoly Pugachev
  0 siblings, 1 reply; 14+ messages in thread
From: Daniel Janzon @ 2019-10-29  8:47 UTC (permalink / raw)
  To: linux-lvm


Hello,

I have a server with very high load using four NVMe SSDs and therefore no HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5 volume does not work well since the driver can only utilize one CPU core which spikes at 100% and harms performance. Therefore I created 8 partitions on each disk, and 8 RAID5s across the four disks.

Now I want to bring them together with LVM. If I do not use a striped volume I get high performance (in expected magnitude according to disk specs). But when I use a striped volume, performance drops to a magnitude below. The reason I am looking for a striped setup is to ensure that data is spread well over the drive to guarantee a good worst-case performance. With linear allocation rather than striped, if load is directed to files on the first PV (a SW RAID) the system is again exposed to the 1-core limitation.

I tried "--stripes 8 --stripesize 512", and would appreciate any ideas of other things to try. I guess the performance hit can be in the file system as well. I tried XFS and EXT4 with default settings.
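
For concreteness, the striped volume was created with something along
these lines (the VG/LV names here are just placeholders):

  lvcreate -n data -l 100%FREE --stripes 8 --stripesize 512 vg0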

Kind Regards,
Daniel




Thread overview: 14+ messages
2019-12-09 10:26 [linux-lvm] Best way to run LVM over multiple SW RAIDs? Daniel Janzon
2019-12-09 14:26 ` Marian Csontos
2019-12-10 11:23 ` Gionatan Danti
2019-12-10 21:29   ` John Stoffel
  -- strict thread matches above, loose matches on Subject: below --
2019-12-16  8:22 Daniel Janzon
2019-10-29  8:47 Daniel Janzon
2019-12-07 16:16 ` Anatoly Pugachev
2019-12-07 17:37   ` Roberto Fastec
2019-12-07 20:34     ` Stuart D. Gathman
2019-12-07 22:44       ` John Stoffel
2019-12-07 23:14         ` Stuart D. Gathman
2019-12-08 11:57           ` Gionatan Danti
2019-12-08 22:51           ` John Stoffel
2019-12-09 10:40         ` Guoqing Jiang
