From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: Song Liu <song@kernel.org>,
	linux-block@vger.kernel.org,
	linux-raid <linux-raid@vger.kernel.org>,
	stable@vger.kernel.org, Larkin Lowrey <llowrey@nuclearwinter.com>,
	Wilson Jonathan <i400sjon@gmail.com>,
	Roger Heflin <rogerheflin@gmail.com>
Subject: Re: [PATCH] block: check more requests for multiple_queues in blk_attempt_plug_merge
Date: Fri, 11 Mar 2022 09:32:35 +0800
Message-ID: <YiqmsypjvdPN/K3w@T590>
In-Reply-To: <0d7bb070-11a3-74b1-22d5-86001818018b@kernel.dk>

On Thu, Mar 10, 2022 at 06:21:33PM -0700, Jens Axboe wrote:
> On 3/10/22 6:14 PM, Ming Lei wrote:
> > On Thu, Mar 10, 2022 at 05:36:44PM -0700, Jens Axboe wrote:
> >> On 3/10/22 5:31 PM, Song Liu wrote:
> >>> On Thu, Mar 10, 2022 at 4:07 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>>>
> >>>> On 3/10/22 4:33 PM, Song Liu wrote:
> >>>>> On Thu, Mar 10, 2022 at 3:02 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>>>>>
> >>>>>> On 3/10/22 3:37 PM, Song Liu wrote:
> >>>>>>> On Thu, Mar 10, 2022 at 2:15 PM Jens Axboe <axboe@kernel.dk> wrote:
> >>>>>>>>
> >>>>>>>> On 3/8/22 11:42 PM, Song Liu wrote:
> >>>>>>>>> RAID array check/repair operations benefit a lot from merging requests.
> >>>>>>>>> If we only check the previous entry for a merge attempt, many merges
> >>>>>>>>> will be missed. As a result, a significant regression is observed for
> >>>>>>>>> RAID check and repair.
> >>>>>>>>>
> >>>>>>>>> Fix this by checking more than just the previous entry when
> >>>>>>>>> plug->multiple_queues == true.
> >>>>>>>>>
> >>>>>>>>> This improves the check/repair speed of a 20-HDD raid6 from 19 MB/s to
> >>>>>>>>> 103 MB/s.
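
(For reference, the idea of the patch in sketch form is roughly the
following. This is not the literal patch, and the cap on how many plug
entries get checked is an assumption of the sketch.)

/*
 * Sketch: when the plug spans multiple queues, walk a few recent
 * entries looking for a same-queue merge candidate instead of only
 * checking the most recent request.
 */
bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
			    unsigned int nr_segs)
{
	struct blk_plug *plug = current->plug;
	struct request *rq;
	int budget;

	if (!plug || rq_list_empty(plug->mq_list))
		return false;

	/* check a handful of entries when the plug holds multiple queues */
	budget = plug->multiple_queues ? 4 : 1;

	rq_list_for_each(&plug->mq_list, rq) {
		if (rq->q == q &&
		    blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
							BIO_MERGE_OK)
			return true;
		if (--budget == 0)
			break;
	}
	return false;
}
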
> >>>>>>>>
> >>>>>>>> Do the underlying disks not have an IO scheduler attached? Curious why
> >>>>>>>> the merges aren't being done there, would be trivial when the list is
> >>>>>>>> flushed out. Because if the perf difference is that big, then other
> >>>>>>>> workloads would be suffering too, if they are that sensitive to being
> >>>>>>>> within a plug's worth of IO.
> >>>>>>>
> >>>>>>> The disks have mq-deadline by default. I also tried kyber, the result
> >>>>>>> is the same. Raid repair work sends IOs to all the HDDs in a
> >>>>>>> round-robin manner. If we only check the previous request, there isn't
> >>>>>>> much opportunity for merging. I guess other workloads may have different
> >>>>>>> behavior?
> >>>>>>
> >>>>>> Round robin, one at a time? I feel like there's something odd or
> >>>>>> suboptimal with the raid rebuild, if it's that sensitive to plug
> >>>>>> merging.
> >>>>>
> >>>>> It is not one request at a time, but more like (for raid456):
> >>>>>    read 4kB from HDD1, HDD2, HDD3...,
> >>>>>    then read another 4kB from HDD1, HDD2, HDD3, ...
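
(So within one plug the request stream looks roughly like the pseudocode
below; consecutive requests sit on different queues, which is why
checking only the previous plug entry rarely finds a merge candidate.
submit_read() is a made-up stand-in for the real raid456 IO path.)

	/* made-up pseudocode for the per-stripe read pattern */
	for (sector = 0; sector < end; sector += STRIPE_SECTORS)
		for (d = 0; d < nr_disks; d++)
			submit_read(disk[d], sector, 4096);
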
> >>>>
> >>>> Ehm, that very much looks like one-at-a-time from each drive, which is
> >>>> pretty much the worst way to do it :-)
> >>>>
> >>>> Is there a reason for that? Why isn't it using 64k chunks or something
> >>>> like that? You could still do that as a kind of read-ahead, even if
> >>>> you're still processing in chunks of 4k.
> >>>
> >>> raid456 handles logic at the granularity of a stripe. Each stripe is 4kB
> >>> from every HDD in the array. AFAICT, we need some non-trivial changes to
> >>> enable read-ahead.
> >>
> >> Right, you'd need to stick some sort of caching in between, so that
> >> instead of reading 4k directly, you ask the cache for 4k and it can
> >> manage read-ahead.
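
(A minimal user-space-style sketch of that caching idea, with made-up
names, just to illustrate: read one large chunk per disk and serve the
4kB pieces from the buffer. Error handling is omitted.)

#include <unistd.h>
#include <sys/types.h>

#define RA_CHUNK	65536	/* read-ahead unit, an arbitrary choice */
#define RA_PAGE		4096

struct ra_cache {
	off_t	base;		/* disk offset the buffer starts at */
	ssize_t	valid;		/* bytes currently buffered */
	char	buf[RA_CHUNK];
};

/* Serve the 4kB at 'off', refilling the buffer in RA_CHUNK units. */
static char *ra_read(int fd, struct ra_cache *c, off_t off)
{
	if (off < c->base || off + RA_PAGE > c->base + c->valid) {
		c->base = off;
		c->valid = pread(fd, c->buf, RA_CHUNK, off);
	}
	return c->buf + (off - c->base);
}
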
> >>
> >>>>>> Plug merging is mainly meant to reduce the overhead of merging, and to
> >>>>>> complement what the scheduler would do. If there's a big drop in
> >>>>>> performance just from not getting as efficient merging on the plug side,
> >>>>>> that points to an issue with something else.
> >>>>>
> >>>>> We introduced blk_plug_max_rq_count() to give md more opportunities to
> >>>>> merge on the plug side, so I guess the behavior has been like this for a
> >>>>> long time. I will take a look at the scheduler side and see whether we
> >>>>> can just merge later, but I am not very optimistic about it.
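
(For reference, from memory the helper is roughly the below; I may be
misremembering the exact multiplier.)

/* block/blk-mq.c, quoted from memory: a plug that spans multiple
 * queues is allowed to hold several times more requests before it is
 * flushed. */
static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
{
	if (plug->multiple_queues)
		return BLK_MAX_REQUEST_COUNT * 4;
	return BLK_MAX_REQUEST_COUNT;
}
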
> >>>>
> >>>> Yeah I remember, and that also kind of felt like a work-around for some
> >>>> underlying issue. Maybe there's something about how the IO is issued
> >>>> that makes it go straight to disk and we never get any merging? Is it
> >>>> because they are sync reads?
> >>>>
> >>>> In any case, just doing larger reads would likely help quite a bit, but
> >>>> it would still be nice to get to the bottom of why we're not seeing the
> >>>> level of merging we expect.
> >>>
> >>> Let me look more into this. Maybe we messed something up in the
> >>> scheduler.
> >>
> >> I'm assuming you have a plug setup for doing the reads, which is why you
> >> see the big difference (or there would be none). But
> >> blk_mq_flush_plug_list() should really take care of this when the plug
> >> is flushed, requests should be merged at that point. And from your
> >> description, it doesn't sound like they are at all.
> > 
> > Requests are shared, and when we run out of requests, the plug list
> > will be flushed early.
> 
> That is true, but I don't think that's the problem here with the round
> robin approach. Seems like it'd drive a pretty low queue depth, even
> considering SATA.

Another issue may be that the plug list is not sorted before requests
are inserted into the scheduler in blk_mq_flush_plug_list(), which it
looks like you already mentioned.
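
Something along the lines of the old plug_rq_cmp() + list_sort() step,
quoting from memory of the pre-5.16 code (the current rq_list would
need an adapted version of this):

/* From memory of the old code: sort plugged requests so that
 * same-queue requests end up adjacent and in LBA order, which makes
 * them easy to merge once they reach the scheduler. */
static int plug_rq_cmp(void *priv, const struct list_head *a,
		       const struct list_head *b)
{
	struct request *rqa = container_of(a, struct request, queuelist);
	struct request *rqb = container_of(b, struct request, queuelist);

	if (rqa->mq_hctx != rqb->mq_hctx)
		return rqa->mq_hctx > rqb->mq_hctx;
	return blk_rq_pos(rqa) > blk_rq_pos(rqb);
}

/* ... followed by: list_sort(NULL, &plug->mq_list, plug_rq_cmp); */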

Thanks,
Ming


Thread overview: 24+ messages
2022-03-09  6:42 [PATCH] block: check more requests for multiple_queues in blk_attempt_plug_merge Song Liu
2022-03-10  6:48 ` Christoph Hellwig
2022-03-10  7:23   ` Song Liu
2022-03-10 22:10     ` Song Liu
2022-03-10 22:12       ` Song Liu
2022-03-10 22:15 ` Jens Axboe
2022-03-10 22:37   ` Song Liu
2022-03-10 23:02     ` Jens Axboe
2022-03-10 23:33       ` Song Liu
2022-03-11  0:07         ` Jens Axboe
2022-03-11  0:31           ` Song Liu
2022-03-11  0:36             ` Jens Axboe
2022-03-11  0:38               ` Jens Axboe
2022-03-11  1:14               ` Ming Lei
2022-03-11  1:21                 ` Jens Axboe
2022-03-11  1:32                   ` Ming Lei [this message]
2022-03-11  1:35                     ` Jens Axboe
2022-03-11  8:09                       ` Wols Lists
2022-03-11 14:16           ` Jens Axboe
2022-03-11 16:59             ` Song Liu
2022-03-11 21:41               ` Paul Menzel
2022-03-11 22:40                 ` Song Liu
2022-03-11 11:30     ` Wilson Jonathan
2022-03-11 15:58       ` Wols Lists
