linux-nvme.lists.infradead.org archive mirror
 help / color / mirror / Atom feed
From: Jens Axboe <axboe@kernel.dk>
To: Sagi Grimberg <sagi@grimberg.me>,
	Max Gurtovoy <mgurtovoy@nvidia.com>,
	linux-nvme@lists.infradead.org
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
	Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
	linux-block@vger.kernel.org, Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH rfc] nvme: support io stats on the mpath device
Date: Thu, 29 Sep 2022 18:08:16 -0600	[thread overview]
Message-ID: <c2cab5be-658d-3c50-b1a0-1d7d86e12e0b@kernel.dk> (raw)
In-Reply-To: <91ebc84d-c0e3-b792-4f92-79612271eb91@grimberg.me>

On 9/29/22 10:25 AM, Sagi Grimberg wrote:
> 
>>>> 3. Do you have some performance numbers (we're touching the fast path here) ?
>>>
>>> This is pretty light-weight, accounting is per-cpu and only wrapped by
>>> preemption disable. This is a very small price to pay for what we gain.
>>
>> Is it? Enabling IO stats for normal devices has a very noticeable impact
>> on performance at the higher end of the scale.
> 
> Interesting, I didn't think this would be that noticeable. How much
> would you quantify the impact in terms of %?

If we take it to the extreme - my usual peak benchmark, which is drive
limited at 122M IOPS, run at 113M IOPS if I have iostats enabled. If I
lower the queue depth (128 -> 16), then peak goes from 46M to 44M. Not
as dramatic, but still quite noticeable. This is just using a single
thread on a single CPU core per drive, so not throwing tons of CPU at
it.

Now, I have no idea how well nvme multipath currently scales or works.
Would be interesting to test that separately. But if you were to double
(or more, I guess 3x if you're doing the exposed device and then adding
stats to at least two below?) the overhead, that'd certainly not be
free.

> I don't have any insight on this for blk-mq, probably because I've never
> seen any user turn IO stats off (or at least don't remember).

Most people don't care, but some certainly do. As per the above, it's
noticeable enough that it makes a difference if you're chasing latencies
or peak performance.

> My (very limited) testing did not show any noticeable differences for
> nvme-loop. All I'm saying that we need to have IO stats for the mpath
> device node. If there is a clever way to collect this from the hidden
> devices just for nvme, great, but we need to expose these stats.

 From a previous message, sounds like that's just some qemu setup? Hard
to measure anything there with precision in my experience, and it's not
really peak performance territory either.

>> So much so that I've contemplated how we can make this less expensive
>> than it currently is.
> 
> Then nvme-mpath would benefit from that as well.

Yeah, it'd be a win all around for sure...

-- 
Jens Axboe


  reply	other threads:[~2022-09-30  0:16 UTC|newest]

Thread overview: 23+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2022-09-28 19:55 [PATCH rfc 0/1] nvme-mpath: Add IO stats support Sagi Grimberg
2022-09-28 19:55 ` [PATCH rfc] nvme: support io stats on the mpath device Sagi Grimberg
2022-09-29  9:42   ` Max Gurtovoy
2022-09-29  9:59     ` Sagi Grimberg
2022-09-29 10:25       ` Max Gurtovoy
2022-09-29 15:03       ` Keith Busch
2022-09-29 16:14         ` Sagi Grimberg
2022-09-30 15:21           ` Keith Busch
2022-10-03  8:09             ` Sagi Grimberg
2022-10-25 15:30               ` Christoph Hellwig
2022-10-25 15:58                 ` Sagi Grimberg
2022-10-30 16:22                   ` Christoph Hellwig
2022-09-29 16:32         ` Sagi Grimberg
2022-09-30 15:16           ` Keith Busch
2022-10-03  8:02             ` Sagi Grimberg
2022-10-03  9:32               ` Sagi Grimberg
2022-09-29 15:05       ` Jens Axboe
2022-09-29 16:25         ` Sagi Grimberg
2022-09-30  0:08           ` Jens Axboe [this message]
2022-10-03  8:35             ` Sagi Grimberg
2022-09-29 10:04   ` Sagi Grimberg
2022-09-29 15:07     ` Jens Axboe
2022-10-03  8:38       ` Sagi Grimberg

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=c2cab5be-658d-3c50-b1a0-1d7d86e12e0b@kernel.dk \
    --to=axboe@kernel.dk \
    --cc=Chaitanya.Kulkarni@wdc.com \
    --cc=hare@suse.de \
    --cc=hch@lst.de \
    --cc=kbusch@kernel.org \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=mgurtovoy@nvidia.com \
    --cc=sagi@grimberg.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).