From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>, linux-nvme@lists.infradead.org
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH rfc] nvme: support io stats on the mpath device
Date: Thu, 29 Sep 2022 12:42:08 +0300 [thread overview]
Message-ID: <760a7129-945c-35fa-6bd6-aa315d717bc5@nvidia.com> (raw)
In-Reply-To: <20220928195510.165062-2-sagi@grimberg.me>
Hi Sagi,
On 9/28/2022 10:55 PM, Sagi Grimberg wrote:
> Our mpath stack device is just a shim that selects a bottom namespace
> and submits the bio to it without any fancy splitting. This also means
> that we don't clone the bio or have any context to the bio beyond
> submission. However it really sucks that we don't see the mpath device
> io stats.
>
> Given that the mpath device can't do that without adding some context
> to it, we let the bottom device do it on its behalf (somewhat similar
> to the approach taken in nvme_trace_bio_complete);
Can you please paste the output of an application that shows the
benefit of this commit?
>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
> drivers/nvme/host/apple.c | 2 +-
> drivers/nvme/host/core.c | 10 ++++++++++
> drivers/nvme/host/fc.c | 2 +-
> drivers/nvme/host/multipath.c | 18 ++++++++++++++++++
> drivers/nvme/host/nvme.h | 12 ++++++++++++
> drivers/nvme/host/pci.c | 2 +-
> drivers/nvme/host/rdma.c | 2 +-
> drivers/nvme/host/tcp.c | 2 +-
> drivers/nvme/target/loop.c | 2 +-
> 9 files changed, 46 insertions(+), 6 deletions(-)
Several questions:
1. I guess that for the non-mpath case we get this for free from the
block layer for each bio?
2. Now we have doubled the accounting, haven't we?
3. Do you have some performance numbers (we're touching the fast path
here)?
4. Should we enable this by default?
The implementation looks good.
Thread overview: 23+ messages
2022-09-28 19:55 [PATCH rfc 0/1] nvme-mpath: Add IO stats support Sagi Grimberg
2022-09-28 19:55 ` [PATCH rfc] nvme: support io stats on the mpath device Sagi Grimberg
2022-09-29 9:42 ` Max Gurtovoy [this message]
2022-09-29 9:59 ` Sagi Grimberg
2022-09-29 10:25 ` Max Gurtovoy
2022-09-29 15:03 ` Keith Busch
2022-09-29 16:14 ` Sagi Grimberg
2022-09-30 15:21 ` Keith Busch
2022-10-03 8:09 ` Sagi Grimberg
2022-10-25 15:30 ` Christoph Hellwig
2022-10-25 15:58 ` Sagi Grimberg
2022-10-30 16:22 ` Christoph Hellwig
2022-09-29 16:32 ` Sagi Grimberg
2022-09-30 15:16 ` Keith Busch
2022-10-03 8:02 ` Sagi Grimberg
2022-10-03 9:32 ` Sagi Grimberg
2022-09-29 15:05 ` Jens Axboe
2022-09-29 16:25 ` Sagi Grimberg
2022-09-30 0:08 ` Jens Axboe
2022-10-03 8:35 ` Sagi Grimberg
2022-09-29 10:04 ` Sagi Grimberg
2022-09-29 15:07 ` Jens Axboe
2022-10-03 8:38 ` Sagi Grimberg