From: "yukuai (C)" <yukuai3@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: <axboe@kernel.dk>, <linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <yi.zhang@huawei.com>,
	<zhangxiaoxu5@huawei.com>
Subject: Re: [PATCH 1/3] blk-mq: allow hardware queue to get more tag while sharing a tag set
Date: Mon, 28 Dec 2020 09:56:15 +0800	[thread overview]
Message-ID: <04c39621-0c4a-e593-5545-c4bd274c5fc2@huawei.com> (raw)
In-Reply-To: <20201227115859.GA3282759@T590>

Hi,

On 2020/12/27 19:58, Ming Lei wrote:
> Hi Yu Kuai,
> 
> On Sat, Dec 26, 2020 at 06:28:06PM +0800, Yu Kuai wrote:
>> When sharing a tag set, most disks may be issuing a small amount of IO
>> while only a few issue a large amount. The current approach limits the
>> maximum number of tags a disk can get to an equal share of the total
>> tags. Thus the few heavily loaded disks can't get enough tags while many
>> tags are still free in the tag set.
> 
> Yeah, the current approach just allocates the same share for each active
> queue, which is re-evaluated in each timeout period.
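
For reference, the per-queue limit is computed roughly as below. This is a
simplified, illustrative sketch loosely based on hctx_may_queue() in the
blk-mq code; the helper name and signature here are not the exact kernel
code, and the details vary by kernel version:

  /*
   * Simplified sketch: with a shared tag set, each active queue is only
   * allowed an equal slice of the driver tags (but at least 4).
   */
  static bool may_queue(unsigned int total_tags, unsigned int active_queues,
                        unsigned int nr_active)
  {
          unsigned int depth;

          if (!active_queues)
                  return true;

          /* equal share of the driver tags, rounded up */
          depth = (total_tags + active_queues - 1) / active_queues;
          if (depth < 4)
                  depth = 4;

          /* only allow another tag if this queue is under its share */
          return nr_active < depth;
  }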
> 
> That said, you are trying to improve the following case:
> - heavy IO on one or several disks, where the average share for these
>   disks becomes the bottleneck of IO performance
> - a small amount of IO on the other disks attached to the same host, with
>   all IOs submitted to those disks within a <30 second period.
> 
> Just wondering if you could share the workload you are trying to optimize,
> or is it just a theoretical improvement? What are the disks (HDD, SSD or
> NVMe) and the host? How many disks are in your setup? And how deep is the
> tag set?

The details of the environment in which we found the problem are as follows:

  total driver tags: 128
  number of disks: 13 (network drives, which together form a dm-multipath
                    device)
  default queue_depth: 32
  disk performance: with 4k randread and a single thread, IOPS is about
                    300, and can reach about 4000 with 32 threads.
  test cmd: fio -ioengine=psync -numjobs=32 ...

We found that mpath issues SG_IO periodically (about every 15s), which
causes active_queues to be set to 13 for about 5s out of every 15s.
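
With the fair-share limit sketched above, the arithmetic for those windows
is roughly (assuming all 13 queues count as active):

  per-queue limit = max((128 + 13 - 1) / 13, 4) = max(10, 4) = 10 tags

so each hardware queue is capped at about 10 in-flight requests during
those 5s windows, well below the configured queue_depth of 32.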

By the way, I'm not sure this is a common scenario; however, the legacy
single-queue (sq) path doesn't have this problem.

Thanks
Yu Kuai

Thread overview: 10+ messages
2020-12-26 10:28 [PATCH 0/3] fix the performance fluctuation due to shared tagset Yu Kuai
2020-12-26 10:28 ` [PATCH 1/3] blk-mq: allow hardware queue to get more tag while sharing a tag set Yu Kuai
2020-12-27 11:58   ` Ming Lei
2020-12-28  1:56     ` yukuai (C) [this message]
2020-12-28  8:28       ` Ming Lei
2020-12-28  9:02         ` yukuai (C)
2020-12-29  1:15           ` Ming Lei
2020-12-29  2:37             ` yukuai (C)
2020-12-26 10:28 ` [PATCH 2/3] blk-mq: clear 'active_queues' immediately when 'nr_active' is decreased to 0 Yu Kuai
2020-12-26 10:28 ` [PATCH 3/3] blk-mq: decrease pending_queues when it expires Yu Kuai
