From: Nikolay Kichukov <nikolay@oldum.net>
To: "Finlayson, James M CIV (USA)" <james.m.finlayson4.civ@mail.mil>,
"'linux-raid@vger.kernel.org'" <linux-raid@vger.kernel.org>
Subject: Re: Nr_requests mdraid
Date: Fri, 20 Nov 2020 11:42:44 +0100 [thread overview]
Message-ID: <fa14ca859160872fece3e2d3efc0a21c42bb9a4a.camel@oldum.net> (raw)
In-Reply-To: <5EAED86C53DED2479E3E145969315A23856EEA12@UMECHPA7B.easf.csd.disa.mil>
Hello all,
On Mon, 2020-11-16 at 16:51 +0000, Finlayson, James M CIV (USA) wrote:
> On Wed, Oct 28, 2020 at 6:39 PM Vitaly Mayatskih <
> v.mayatskih@gmail.com> wrote:
> >
> > On Thu, Oct 22, 2020 at 2:56 AM Finlayson, James M CIV (USA) <
> > james.m.finlayson4.civ@mail.mil> wrote:
> > >
> > > All,
> > > I'm working on creating RAID-5 or RAID-6 arrays of 800K-IOPS NVMe
> > > drives. Each drive performs well with a queue depth of 128, and I
> > > set it to 1023 where allowed. To max out the queue depth on each
> > > RAID member, I'd like to set the sysfs nr_requests on the md
> > > device to something greater than 128, such as (#raid members *
> > > 128). Even though /sys/block/md127/queue/nr_requests is mode 644,
> > > any attempt to change nr_requests as root fails with "write
> > > error: invalid argument". When I hit the md device with random
> > > reads, my NVMe drives are 100% utilized but only doing 160K IOPS
> > > because they have no queue depth.
> > > Am I doing something silly?
> >
> > It only works for blk-mq block devices. MD is not blk-mq.
Would it be possible to implement something similar to dm_mod's
use_blk_mq parameter in md_mod?
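For anyone else hitting the EINVAL: a blk-mq device exposes a per-device
mq/ directory in sysfs, while md devices (bio-based, not blk-mq) do not,
which is why writes to their nr_requests are rejected. A minimal sketch
to tell the two apart (the device names are only examples, and the
SYSFS_ROOT override is my own addition so the check can be exercised
against a fake tree):

```shell
#!/bin/sh
# Report whether a block device is blk-mq by checking for the
# /sys/block/<dev>/mq directory that multi-queue drivers expose.
is_blk_mq() {
    dev="$1"
    sysfs="${SYSFS_ROOT:-/sys}"    # overridable root, for testing only
    if [ -d "$sysfs/block/$dev/mq" ]; then
        echo "$dev: blk-mq (nr_requests is tunable)"
    else
        echo "$dev: not blk-mq (nr_requests writes fail with EINVAL)"
    fi
}

# Example: compare an NVMe member with the md array stacked on it.
is_blk_mq nvme0n1
is_blk_mq md127
```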
> >
> > You can exchange simplicity for performance: instead of creating one
> > RAID-5/6 array you can partition drives in N equal sized partitions,
> > create N RAID-5/6 arrays using one partition from every disk, then
> > stripe them into top-level RAID-0. So that would be RAID-5+0 (or
> > 6+0).
> >
> > It is awful, but it simulates multi-queue and performs better under
> > parallel loads, especially for writes (on RAID-5/6).
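To make the layout concrete, here is a dry-run sketch of the RAID-5+0
construction described above: it only prints the mdadm commands rather
than running them, and the drive names, partition count, and md device
numbers are placeholders of my own, not anything from the original
mails.

```shell
#!/bin/sh
# Print the mdadm commands for a RAID-5+0 layout: each drive is split
# into N partitions, N RAID-5 arrays are built (one partition from
# every drive each), then the RAID-5 arrays are striped into a
# top-level RAID-0. Nothing is executed; inspect, adjust, then run.
gen_raid50() {
    n="$1"; shift                  # partitions per drive
    ndrives=$#                     # remaining args are the drives
    drives="$*"
    i=1
    while [ "$i" -le "$n" ]; do
        members=""
        for d in $drives; do
            members="$members /dev/${d}p${i}"
        done
        echo "mdadm --create /dev/md$i --level=5 --raid-devices=$ndrives$members"
        i=$((i + 1))
    done
    legs=""
    i=1
    while [ "$i" -le "$n" ]; do
        legs="$legs /dev/md$i"
        i=$((i + 1))
    done
    echo "mdadm --create /dev/md100 --level=0 --raid-devices=$n$legs"
}

# Example: 2 partitions per drive across 3 NVMe drives.
gen_raid50 2 nvme0n1 nvme1n1 nvme2n1
```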
> >
> >
> > --
> > wbr, Vitaly
>
> Vitaly,
> Thank you for the tip. My RAID-5 performance (after creating 32
> partitions per SSD and running 64 9+1 (2 in reality) stripes) is up
> to 11.4M 4K random read IOPS, out of the 17M the box is capable of,
> which I'm happy with, because I can't NUMA-pin the RAID stripes the
> way I would the individual SSDs themselves. However, when I add the
> RAID-0 striping on top to make the "RAID-50 from hell", performance
> drops to 7.1M 4K random read IOPS. Any suggestions? The top-level
> RAID-50, again, won't let me generate the queue depth.
>
> Thanks in advance,
> Jim
>
>
>
Thread overview: 6+ messages
2020-11-16 16:51 Nr_requests mdraid Finlayson, James M CIV (USA)
2020-11-20 10:42 ` Nikolay Kichukov [this message]
2020-11-20 13:04 ` Vitaly Mayatskih
2020-11-20 13:20 ` Vitaly Mayatskih
-- strict thread matches above, loose matches on Subject: below --
2020-10-21 20:29 Finlayson, James M CIV (USA)
2020-10-28 18:39 ` Vitaly Mayatskih