From: Keith Busch <kbusch@kernel.org>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Jens Axboe <axboe@kernel.dk>,
Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
Hannes Reinecke <hare@suse.de>,
linux-block@vger.kernel.org
Subject: Re: [PATCH v2 2/2] nvme: support io stats on the mpath device
Date: Tue, 29 Nov 2022 07:42:53 -0700 [thread overview]
Message-ID: <Y4YabR09emDGRRpP@kbusch-mbp.dhcp.thefacebook.com> (raw)
In-Reply-To: <20221003094344.242593-3-sagi@grimberg.me>
On Mon, Oct 03, 2022 at 12:43:44PM +0300, Sagi Grimberg wrote:
> Our mpath stack device is just a shim that selects a bottom namespace
> and submits the bio to it without any fancy splitting. This also means
> that we don't clone the bio or have any context to the bio beyond
> submission. However it really sucks that we don't see the mpath device
> io stats.
>
> Given that the mpath device can't do that without adding some context
> to it, we let the bottom device do it on its behalf (somewhat similar
> to the approach taken in nvme_trace_bio_complete).
>
> When the IO starts, we account the request for multipath IO stats using
> the NVME_MPATH_IO_STATS nvme_request flag, so that completion accounting
> stays balanced even if queue io stats are disabled in the middle of the
> request.
An unfortunate side effect is that a successful error failover will get
accounted for twice in the mpath device, but cloning to create a
separate context just to track iostats for that unusual condition is
much worse.
Reviewed-by: Keith Busch <kbusch@kernel.org>
> +void nvme_mpath_start_request(struct request *rq)
> +{
> + struct nvme_ns *ns = rq->q->queuedata;
> + struct gendisk *disk = ns->head->disk;
> +
> + if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
> + return;
> +
> + nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
> + nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
> + blk_rq_bytes(rq) >> SECTOR_SHIFT,
> + req_op(rq), jiffies);
> +}
> +
> +void nvme_mpath_end_request(struct request *rq)
> +{
> + struct nvme_ns *ns = rq->q->queuedata;
> +
> + if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
> + return;
> + bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
> + nvme_req(rq)->start_time);
> +}
I think these also can be static inline.
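For reference, a rough sketch of what that suggestion could look like if the
helpers moved next to the other CONFIG_NVME_MULTIPATH inlines in
drivers/nvme/host/nvme.h (untested here; placement and the !multipath stubs
are assumptions, and the header would then need the block accounting
declarations in scope):

```
/* drivers/nvme/host/nvme.h -- hypothetical placement, not compiled here */
#ifdef CONFIG_NVME_MULTIPATH
static inline void nvme_mpath_start_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;
	struct gendisk *disk = ns->head->disk;

	if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
		return;

	nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
	nvme_req(rq)->start_time = bdev_start_io_acct(disk->part0,
					blk_rq_bytes(rq) >> SECTOR_SHIFT,
					req_op(rq), jiffies);
}

static inline void nvme_mpath_end_request(struct request *rq)
{
	struct nvme_ns *ns = rq->q->queuedata;

	if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
		return;
	bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
			 nvme_req(rq)->start_time);
}
#else
static inline void nvme_mpath_start_request(struct request *rq) { }
static inline void nvme_mpath_end_request(struct request *rq) { }
#endif
```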
Thread overview: 14+ messages
2022-10-03 9:43 [PATCH v2 0/2] nvme-mpath: Add IO stats support Sagi Grimberg
2022-10-03 9:43 ` [PATCH v2 1/2] nvme: introduce nvme_start_request Sagi Grimberg
2022-10-04 6:09 ` Hannes Reinecke
2022-11-29 14:28 ` Keith Busch
2022-11-29 14:32 ` Jens Axboe
2022-11-29 14:31 ` Jens Axboe
2022-10-03 9:43 ` [PATCH v2 2/2] nvme: support io stats on the mpath device Sagi Grimberg
2022-10-04 6:11 ` Hannes Reinecke
2022-10-04 8:19 ` Sagi Grimberg
2022-11-29 14:33 ` Jens Axboe
2022-11-29 14:33 ` Jens Axboe
2022-11-29 14:42 ` Keith Busch [this message]
2022-11-29 14:45 ` [PATCH v2 0/2] nvme-mpath: Add IO stats support Christoph Hellwig
2022-11-29 16:43 ` Keith Busch