From: Jan Kara <jack@suse.cz>
To: Paolo Valente <paolo.valente@linaro.org>
Cc: "Jan Kara" <jack@suse.cz>, "Michal Koutný" <mkoutny@suse.com>,
	"Jens Axboe" <axboe@kernel.dk>,
	linux-block@vger.kernel.org
Subject: Re: [PATCH 0/3 v2] bfq: Limit number of allocated scheduler tags per cgroup
Date: Mon, 20 Sep 2021 11:28:15 +0200
Message-ID: <20210920092815.GA6607@quack2.suse.cz>
In-Reply-To: <AF7AAF20-7D9E-4305-901A-86A3717A9CFB@linaro.org>

On Sat 18-09-21 12:58:34, Paolo Valente wrote:
> > Il giorno 15 set 2021, alle ore 15:15, Jan Kara <jack@suse.cz> ha scritto:
> > 
> > On Tue 31-08-21 11:59:30, Michal Koutný wrote:
> >> Hello Paolo.
> >> 
> >> On Fri, Aug 27, 2021 at 12:07:20PM +0200, Paolo Valente <paolo.valente@linaro.org> wrote:
> >>> Before discussing your patches in detail, I need a little help on this
> >>> point.  You state that the number of scheduler tags must be larger
> >>> than the number of device tags.  So, I expected some of your patches
> >>> to somehow address this issue, e.g., by increasing the number of
> >>> scheduler tags.  Yet I have not found such a change.  Did I miss
> >>> something?
> >> 
> >> I believe Jan's conclusions so far are based on "manual" modifications
> >> of available scheduler tags by /sys/block/$dev/queue/nr_requests.
> >> Finding a good default value may be an additional change.
> > 
> > Exactly. So far I have been increasing nr_requests manually. I agree that
> > improving the default nr_requests value selection would be desirable as
> > well, so that manual tuning is not needed, but for now I've left that aside.
> > 
> 
> Ok. So, IIUC, to recover control over bandwidth you need to
> (1) increase nr_requests manually
> and
> (2) apply your patch
> 
> If you don't do (1), then (2) is not sufficient, and vice versa. Correct?

Correct, although 1) depends on HW capabilities - e.g. for a standard SATA
NCQ drive with a queue depth of 32, the current nr_requests setting of 256 is
fine and 2) alone is enough to recover control. If you run on top of a virtio
device or a storage controller card with a queue depth of 1024, you need to
bump up the nr_requests setting.
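
To be explicit, the manual tuning in 1) is just a sysfs write. A rough
sketch (sdX is a placeholder device; the queue_depth attribute is what
SCSI devices expose, other drivers report their depth elsewhere; 2048 is
an example value, not a recommendation):

  # HW queue depth the device advertises (32 for a typical SATA NCQ disk)
  cat /sys/block/sdX/device/queue_depth
  # Number of scheduler tags currently available
  cat /sys/block/sdX/queue/nr_requests
  # Raise the scheduler tag count well above a deep HW queue (needs root)
  echo 2048 > /sys/block/sdX/queue/nr_requests

The point is simply that nr_requests (scheduler tags) must stay comfortably
above the HW queue depth, otherwise BFQ never sees enough queued requests
to schedule across cgroups.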

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

Thread overview: 10+ messages
2021-07-15 13:30 [PATCH 0/3 v2] bfq: Limit number of allocated scheduler tags per cgroup Jan Kara
2021-07-15 13:30 ` [PATCH 1/3] block: Provide icq in request allocation data Jan Kara
2021-07-15 13:30 ` [PATCH 2/3] bfq: Track number of allocated requests in bfq_entity Jan Kara
2021-07-15 13:30 ` [PATCH 3/3] bfq: Limit number of requests consumed by each cgroup Jan Kara
2021-08-27 10:07 ` [PATCH 0/3 v2] bfq: Limit number of allocated scheduler tags per cgroup Paolo Valente
2021-08-31  9:59   ` Michal Koutný
2021-09-15 13:15     ` Jan Kara
2021-09-18 10:58       ` Paolo Valente
2021-09-20  9:28         ` Jan Kara [this message]
2021-09-22 14:33           ` Jan Kara
