linux-nvme.lists.infradead.org archive mirror
From: Sagi Grimberg <sagi@grimberg.me>
To: Jens Axboe <axboe@kernel.dk>, Max Gurtovoy <mgurtovoy@nvidia.com>,
	linux-nvme@lists.infradead.org
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
	Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
	linux-block@vger.kernel.org, Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH rfc] nvme: support io stats on the mpath device
Date: Thu, 29 Sep 2022 19:25:48 +0300	[thread overview]
Message-ID: <91ebc84d-c0e3-b792-4f92-79612271eb91@grimberg.me> (raw)
In-Reply-To: <04b39974-6b55-7aca-70de-4a567f2eac8f@kernel.dk>


>>> 3. Do you have some performance numbers (we're touching the fast path here) ?
>>
>> This is pretty light-weight, accounting is per-cpu and only wrapped by
>> preemption disable. This is a very small price to pay for what we gain.
> 
> Is it? Enabling IO stats for normal devices has a very noticeable impact
> on performance at the higher end of the scale.

Interesting, I didn't think this would be that noticeable. Roughly how
large is the impact, in percentage terms?
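
To make the cost argument concrete, here is a rough user-space model of
the per-cpu scheme quoted above (purely illustrative, not the kernel
code; the CPU bound and helper names are made up for the sketch): the
hot path is two plain increments on a per-CPU slot, and only readers pay
to sum the slots.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#define NR_CPUS 64              /* made-up bound for the sketch */

struct io_stat {
    unsigned long long ios;     /* completed I/Os */
    unsigned long long sectors; /* sectors transferred */
};

static struct io_stat percpu_stat[NR_CPUS];

/* Hot path: two plain increments on this CPU's private slot, no atomics
 * and no shared cacheline. In the kernel, staying on one slot for the
 * duration of the update is the preemption disable quoted above. */
static void account_io(unsigned int sectors)
{
    int cpu = sched_getcpu();   /* stand-in for "current CPU" */

    if (cpu < 0 || cpu >= NR_CPUS)
        cpu = 0;
    percpu_stat[cpu].ios++;
    percpu_stat[cpu].sectors += sectors;
}

/* Readers pay the aggregation cost instead of the I/O path. */
static struct io_stat read_stat(void)
{
    struct io_stat sum = { 0, 0 };

    for (int i = 0; i < NR_CPUS; i++) {
        sum.ios += percpu_stat[i].ios;
        sum.sectors += percpu_stat[i].sectors;
    }
    return sum;
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        account_io(8);          /* pretend: 1000 x 4KiB I/Os */

    struct io_stat s = read_stat();
    printf("ios=%llu sectors=%llu\n", s.ios, s.sectors);
    return 0;
}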

I don't have any insight on this for blk-mq, probably because I've never
seen a user turn IO stats off (or at least I don't remember one).
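
For context, the knob I mean is the per-queue iostats attribute in
sysfs; a minimal way to flip it from C (the device name is only an
example, and writing it needs root):

#include <stdio.h>

int main(void)
{
    /* "nvme0n1" is only an example device name. */
    FILE *f = fopen("/sys/block/nvme0n1/queue/iostats", "w");

    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("0\n", f);            /* 0 = accounting off, 1 = on */
    return fclose(f) ? 1 : 0;
}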

My (very limited) testing did not show any noticeable differences for
nvme-loop. All I'm saying is that we need to have IO stats for the mpath
device node. If there is a clever way to collect this from the hidden
devices just for nvme, great, but we need to expose these stats.
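
To spell out what "expose these stats" means in practice: the per-path
devices are hidden, so anything reading the visible mpath node's stat
file sees no accounting today. A minimal reader of that file (the device
name is only an example; newer kernels append discard and flush fields
after the ones parsed here):

#include <stdio.h>

int main(void)
{
    unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
    unsigned long long wr_ios, wr_merges, wr_sectors, wr_ticks;
    unsigned long long in_flight, io_ticks, time_in_queue;

    /* "nvme0n1" is only an example; with native multipath this is the
     * visible mpath node, while the per-path nvmeXcYnZ nodes are hidden. */
    FILE *f = fopen("/sys/block/nvme0n1/stat", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    /* First 11 fields of /sys/block/<dev>/stat. */
    if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
               &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
               &wr_ios, &wr_merges, &wr_sectors, &wr_ticks,
               &in_flight, &io_ticks, &time_in_queue) != 11) {
        fprintf(stderr, "unexpected stat format\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("read ios=%llu write ios=%llu io_ticks(ms)=%llu\n",
           rd_ios, wr_ios, io_ticks);
    return 0;
}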

> So much so that I've contemplated how we can make this less expensive
> than it currently is.

Then nvme-mpath would benefit from that as well.



Thread overview: 23+ messages
2022-09-28 19:55 [PATCH rfc 0/1] nvme-mpath: Add IO stats support Sagi Grimberg
2022-09-28 19:55 ` [PATCH rfc] nvme: support io stats on the mpath device Sagi Grimberg
2022-09-29  9:42   ` Max Gurtovoy
2022-09-29  9:59     ` Sagi Grimberg
2022-09-29 10:25       ` Max Gurtovoy
2022-09-29 15:03       ` Keith Busch
2022-09-29 16:14         ` Sagi Grimberg
2022-09-30 15:21           ` Keith Busch
2022-10-03  8:09             ` Sagi Grimberg
2022-10-25 15:30               ` Christoph Hellwig
2022-10-25 15:58                 ` Sagi Grimberg
2022-10-30 16:22                   ` Christoph Hellwig
2022-09-29 16:32         ` Sagi Grimberg
2022-09-30 15:16           ` Keith Busch
2022-10-03  8:02             ` Sagi Grimberg
2022-10-03  9:32               ` Sagi Grimberg
2022-09-29 15:05       ` Jens Axboe
2022-09-29 16:25         ` Sagi Grimberg [this message]
2022-09-30  0:08           ` Jens Axboe
2022-10-03  8:35             ` Sagi Grimberg
2022-09-29 10:04   ` Sagi Grimberg
2022-09-29 15:07     ` Jens Axboe
2022-10-03  8:38       ` Sagi Grimberg
