From: Ming Lei <ming.lei@redhat.com>
To: Weiping Zhang <zwp10758@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>, Mike Snitzer <snitzer@redhat.com>,
mpatocka@redhat.com, linux-block@vger.kernel.org
Subject: Re: [PATCH RFC] block: fix inaccurate io_ticks
Date: Fri, 23 Oct 2020 17:11:03 +0800
Message-ID: <20201023091103.GE1698172@T590>
In-Reply-To: <CAA70yB6HQhaoGatAHhPnNbdMZfD3SdoEdpU+ip63JPXAvbL2iA@mail.gmail.com>
On Fri, Oct 23, 2020 at 04:56:08PM +0800, Weiping Zhang wrote:
> On Fri, Oct 23, 2020 at 4:49 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Fri, Oct 23, 2020 at 02:46:32PM +0800, Weiping Zhang wrote:
> > > Do not add io_ticks if there is no in-flight IO when starting a new IO,
> > > otherwise an extra 1 jiffy will be added to this IO.
> > >
> > > I ran the following command on a host, with different kernel versions.
> > >
> > > fio -name=test -ioengine=sync -bs=4K -rw=write
> > > -filename=/home/test.fio.log -size=100M -time_based=1 -direct=1
> > > -runtime=300 -rate=2m,2m
> > >
> > > If we run fio in sync direct IO mode, IOs will be processed one by one;
> > > you can see that 512 IOs are completed in one second.
> > >
> > > kernel: 4.19.0
> > >
> > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> > > vda 0.00 0.00 0.00 512.00 0.00 2.00 8.00 0.21 0.40 0.00 0.40 0.40 20.60
> > >
> > > The average IO latency is 0.4ms, so the disk time spent in one second
> > > should be 0.4 * 512 = 204.8ms; that means %util should be about 20%.
> > >
> > > Because update_io_ticks adds an extra 1 jiffy (1ms) for every IO, the
> > > accounted IO latency becomes 1 + 0.4 = 1.4ms, and
> > > 1.4 * 512 = 716.8ms, so %util shows about 72%.
> > >
> > > Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> > > vda 0.00 512.00 0.00 2.00 0.00 0.00 0.00 0.00 0.00 0.40 0.20 0.00 4.00 1.41 72.10
> > >
> > > After this patch:
> > > Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> > > vda 0.00 512.00 0.00 2.00 0.00 0.00 0.00 0.00 0.00 0.40 0.20 0.00 4.00 0.39 20.00
> > >
> > > Fixes: 5b18b5a73760 ("block: delete part_round_stats and switch to less precise counting")
> > > Signed-off-by: Weiping Zhang <zhangweiping@didiglobal.com>
> > > ---
> > > block/blk-core.c | 19 ++++++++++++++-----
> > > block/blk.h | 1 +
> > > block/genhd.c | 2 +-
> > > 3 files changed, 16 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/block/blk-core.c b/block/blk-core.c
> > > index ac00d2fa4eb4..789a5c40b6a6 100644
> > > --- a/block/blk-core.c
> > > +++ b/block/blk-core.c
> > > @@ -1256,14 +1256,14 @@ unsigned int blk_rq_err_bytes(const struct request *rq)
> > > }
> > > EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
> > >
> > > -static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
> > > +static void update_io_ticks(struct hd_struct *part, unsigned long now, bool inflight)
> > > {
> > > unsigned long stamp;
> > > again:
> > > stamp = READ_ONCE(part->stamp);
> > > if (unlikely(stamp != now)) {
> > > - if (likely(cmpxchg(&part->stamp, stamp, now) == stamp))
> > > - __part_stat_add(part, io_ticks, end ? now - stamp : 1);
> > > + if (likely(cmpxchg(&part->stamp, stamp, now) == stamp) && inflight)
> > > + __part_stat_add(part, io_ticks, now - stamp);
> > > }
> > > if (part->partno) {
> > > part = &part_to_disk(part)->part0;
> > > @@ -1310,13 +1310,20 @@ void blk_account_io_done(struct request *req, u64 now)
> > >
> > > void blk_account_io_start(struct request *rq)
> > > {
> > > + struct hd_struct *part;
> > > + struct request_queue *q;
> > > + int inflight;
> > > +
> > > if (!blk_do_io_stat(rq))
> > > return;
> > >
> > > rq->part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
> > >
> > > part_stat_lock();
> > > - update_io_ticks(rq->part, jiffies, false);
> > > + part = rq->part;
> > > + q = part_to_disk(part)->queue;
> > > + inflight = blk_mq_in_flight(q, part);
> > > + update_io_ticks(part, jiffies, inflight > 0 ? true : false);
> >
> > Yeah, this accounting issue can be fixed by applying such 'inflight' info.
> > However, blk_mq_in_flight() isn't cheap enough; I did get a soft lockup
> > report because of blk_mq_in_flight() being called in the I/O path.
> >
> > BTW, this way is just like reverting 5b18b5a73760 ("block: delete
> > part_round_stats and switch to less precise counting").
> >
> >
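For reference, blk_mq_in_flight() at that time was implemented by walking
every busy tag on the queue, which is what makes it expensive when called
on every IO start. A simplified sketch of the v5.9-era code (helper and
field names may differ across kernel versions):

struct mq_inflight {
	struct hd_struct *part;
	unsigned int inflight[2];
};

static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
				  struct request *rq, void *priv,
				  bool reserved)
{
	struct mq_inflight *mi = priv;

	/* Count only requests targeting the partition of interest. */
	if (rq->part == mi->part)
		mi->inflight[rq_data_dir(rq)]++;

	return true;
}

unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
{
	struct mq_inflight mi = { .part = part };

	/* Iterates every busy tag across all hw queues: O(tags) per call. */
	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);

	return mi.inflight[0] + mi.inflight[1];
}
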
> Hello Ming,
>
> Shall we switch it to an atomic mode, i.e. update an inflight count on
> start/done for every IO?
That is more expensive than blk_mq_in_flight().
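
For reference, the "atomic mode" proposed above would essentially revive
the shared-counter scheme that commit 5b18b5a73760 moved away from: every
IO start/done does an atomic RMW on a shared cache line (two when a
partition is involved). A rough sketch, modeled on the old
part_inc_in_flight()/part_dec_in_flight() helpers (not current code):

static inline void part_inc_in_flight(struct hd_struct *part, int rw)
{
	atomic_inc(&part->in_flight[rw]);
	/* Partition IO is accounted against the whole disk as well. */
	if (part->partno)
		atomic_inc(&part_to_disk(part)->part0.in_flight[rw]);
}

static inline void part_dec_in_flight(struct hd_struct *part, int rw)
{
	atomic_dec(&part->in_flight[rw]);
	if (part->partno)
		atomic_dec(&part_to_disk(part)->part0.in_flight[rw]);
}
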
> Or is there any other cheaper way?
I guess it is hard to find a cheaper way to compute the in-flight IO
count, especially in the case of multiple CPU cores and millions of
IOPS.
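
For completeness, the per-cpu scheme used for bio-based accounting shifts
the cost from writers to readers: updates touch only CPU-local counters,
but every read has to sum over all possible CPUs, so sampling it on each
IO start would not be free either. A simplified sketch of the reader side
(as in v5.8-era block/genhd.c; details vary by version):

static unsigned int part_in_flight(struct hd_struct *part)
{
	unsigned int inflight = 0;
	int cpu;

	/* Sum the CPU-local in-flight counters for both directions. */
	for_each_possible_cpu(cpu)
		inflight += part_stat_local_read_cpu(part, in_flight[0], cpu) +
			    part_stat_local_read_cpu(part, in_flight[1], cpu);

	/* Per-cpu inc/dec can race transiently; clamp negative sums. */
	if ((int)inflight < 0)
		inflight = 0;

	return inflight;
}
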
Thanks,
Ming