linux-kernel.vger.kernel.org archive mirror
* slow write performance with software RAID on nvme storage
@ 2019-03-29 20:55 Rick Warner
  2020-04-17  3:34 ` Rick Warner
  0 siblings, 1 reply; 2+ messages in thread
From: Rick Warner @ 2019-03-29 20:55 UTC (permalink / raw)
  To: linux-kernel

Hi All,

We've been testing a 24-drive NVMe software RAID and getting far lower
write speeds than expected.  The drives are connected through PLX chips
such that 12 drives share one x16 connection and the other 12 drives use
another x16 link.  The system is a Supermicro 2029U-TN24R4T.  The drives
are Intel DC P4500 1TB.

We're testing with fio using 8 jobs.
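The jobs look roughly like this (block size, queue depth, and file size
here are representative values, not the exact settings from every run):

  # 8 parallel sequential writers against the mounted filesystem
  fio --name=seqwrite --directory=/mnt/md0 --rw=write --bs=1M \
      --ioengine=libaio --iodepth=32 --direct=1 --numjobs=8 \
      --size=16G --group_reporting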

Using all defaults with RAID0 I can only get 4 or 5 GB/s write speeds
but can hit ~24GB/s read speeds.  The drives have over 1GB/s write speed
each, so we should be able to hit at least 20GB/s write speed.
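For reference, the RAID0 array is created with mdadm defaults, roughly:

  # 24-drive RAID0, mdadm default chunk; device names illustrative
  mdadm --create /dev/md0 --level=0 --raid-devices=24 /dev/nvme{0..23}n1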

Testing with RAID6 and defaults gave significantly lower results (down
around 1.5GB/s).  Using a 64k chunk and increasing group_thread_cnt
brought the results up to ~4GB/s.
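Concretely, the tuned setup looks roughly like this (the thread count
shown is just an example value; stripe_cache_size is a related raid6
knob included here for completeness):

  # 24-drive RAID6 with a 64k chunk
  mdadm --create /dev/md0 --level=6 --raid-devices=24 --chunk=64 \
      /dev/nvme{0..23}n1
  # spread stripe handling across more threads, enlarge the stripe cache
  echo 8    > /sys/block/md0/md/group_thread_cnt
  echo 8192 > /sys/block/md0/md/stripe_cache_size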

dmesg shows the RAID parity calc speed being ~40GB/s:
[    4.215386] raid6: using algorithm avx512x2 gen() 41397 MB/s


I've played around with filesystem choices and block queue tuning but
haven't seen any significant improvements.
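The sort of thing I've tried on the member drives looks like this (the
values shown are examples, not a recommendation):

  # per-member NVMe queue settings
  for q in /sys/block/nvme*n1/queue; do
      echo none > $q/scheduler      # no I/O scheduler on the NVMe drives
      echo 0    > $q/wbt_lat_usec   # disable writeback throttling
  done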

What is the bottleneck here? If it's not known, what should I do to
determine it?

I've done a variety of other tests with this system and am happy to
elaborate further if any other information is needed.

Thanks,
Rick Warner


* Re: slow write performance with software RAID on nvme storage
  2019-03-29 20:55 slow write performance with software RAID on nvme storage Rick Warner
@ 2020-04-17  3:34 ` Rick Warner
  0 siblings, 0 replies; 2+ messages in thread
From: Rick Warner @ 2020-04-17  3:34 UTC (permalink / raw)
  To: linux-kernel

Additional testing with fio has shown near-theoretical write speeds if I
test directly against the /dev/md device instead of going through either
XFS or ext4.
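By "direct to the /dev/md device" I mean a job along these lines (the
size and offset values are illustrative):

  # 8 writers against the raw md device, each in its own region
  fio --name=rawwrite --filename=/dev/md0 --rw=write --bs=1M \
      --ioengine=libaio --iodepth=32 --direct=1 --numjobs=8 \
      --size=100G --offset_increment=100G --group_reporting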

I've tested different queue settings without significant changes.

Is it possible to get a single XFS or ext4 filesystem performing with
>10GB/s write speeds?
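
For what it's worth, the filesystems were created aligned to the array
geometry, along these lines (assuming the 64k chunk and 22 data disks
of the RAID6 layout; adjust for other geometries):

  # XFS: stripe unit = chunk size, stripe width = number of data disks
  mkfs.xfs -d su=64k,sw=22 /dev/md0
  # ext4 equivalent: stride in 4k blocks (64k / 4k = 16), width = 16 * 22
  mkfs.ext4 -E stride=16,stripe-width=352 /dev/md0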

On 2019-03-29 16:55, Rick Warner wrote:
> Hi All,
>
> We've been testing a 24-drive NVMe software RAID and getting far lower
> write speeds than expected.  The drives are connected through PLX chips
> such that 12 drives share one x16 connection and the other 12 drives use
> another x16 link.  The system is a Supermicro 2029U-TN24R4T.  The drives
> are Intel DC P4500 1TB.
>
> We're testing with fio using 8 jobs.
>
> Using all defaults with RAID0 I can only get 4 or 5 GB/s write speeds
> but can hit ~24GB/s read speeds.  The drives have over 1GB/s write speed
> each, so we should be able to hit at least 20GB/s write speed.
>
> Testing with RAID6 and defaults gave significantly lower results (down
> around 1.5GB/s).  Using a 64k chunk and increasing group_thread_cnt
> brought the results up to ~4GB/s.
>
> dmesg shows the RAID parity calc speed being ~40GB/s:
> [    4.215386] raid6: using algorithm avx512x2 gen() 41397 MB/s
>
>
> I've played around with filesystem choices and block queue tuning but
> haven't seen any significant improvements.
>
> What is the bottleneck here? If it's not known, what should I do to
> determine it?
>
> I've done a variety of other tests with this system and am happy to
> elaborate further if any other information is needed.
>
> Thanks,
> Rick Warner


