From: John Garry <firstname.lastname@example.org>
To: Ming Lei <email@example.com>
Cc: Jens Axboe <firstname.lastname@example.org>, Christoph Hellwig <email@example.com>,
<firstname.lastname@example.org>, Yanhui Ma <email@example.com>,
Bart Van Assche <firstname.lastname@example.org>,
Subject: Re: [PATCH] blk-mq: plug request for shared sbitmap
Date: Tue, 18 May 2021 12:42:22 +0100 [thread overview]
Message-ID: <email@example.com>
On 18/05/2021 12:16, Ming Lei wrote:
> On Tue, May 18, 2021 at 10:44:43AM +0100, John Garry wrote:
>> On 14/05/2021 03:20, Ming Lei wrote:
>>> Since commit 32bc15afed04 ("blk-mq: Facilitate a shared sbitmap per
>>> tagset"), requests are no longer held in the plug list when a shared
>>> sbitmap is used. This makes request merging from the flushed plug
>>> list and batched submission impossible, causing a performance
>>> regression.
>>> Yanhui reports a performance regression when running a sequential IO
>>> test (libaio, 16 jobs, queue depth 8 per job) in a VM whose disk is
>>> emulated with an image stored on xfs/megaraid_sas.
>>> Fix the issue by restoring the original behaviour of holding
>>> requests in the plug list.
>> Hi Ming,
>> Testing v5.13-rc2, I noticed that this patch made the hang I was
>> seeing disappear:
>> I don't think that problem is solved, though.
> This kind of hang or lockup is usually related to CPU utilization, and
> this patch may reduce CPU utilization in the submission context.
>> So I wonder about throughput performance (I had hoped to test before
>> it was merged). I only have 6x SAS SSDs at hand, but I see some
>> significant changes (good and bad) for mq-deadline on hisi_sas:
>> Before: 620K IOPS (read), 300K IOPS (randread)
>> After: 430K IOPS (read), 460-490K IOPS (randread)
> The 'Before 620K' figure could be caused by the queue being busy when
> batched submission isn't applied, which increases the chance of
> merging. This patch applies batched submission, so the queue no longer
> stays busy enough.
> BTW, what is the queue depth of the sdev, and what is can_queue of the
> shost, for your Hisilicon SAS?
The sdev queue depth is 64 (see hisi_sas_slave_configure()) and the host
depth is 4096 - 96 (for reserved tags) = 4000.
IIRC, megaraid_sas and mpt3sas use 256 for the SAS sdev queue depth.
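For reference, the per-device depth comes from the LLDD's
.slave_configure hook. A minimal sketch of that pattern, with
example_slave_configure being a made-up name (the real hisi_sas hook
does more surrounding setup):

#include <scsi/scsi_device.h>

/*
 * Minimal sketch of an LLDD's .slave_configure hook pinning the
 * per-device queue depth; hisi_sas uses 64 here. The function name
 * is hypothetical and extra driver setup is omitted.
 */
static int example_slave_configure(struct scsi_device *sdev)
{
	scsi_change_queue_depth(sdev, 64);
	return 0;
}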
>> The none IO scheduler always gives about 450K IOPS (read) and 500K
>> IOPS (randread).
>> Do you guys have any figures? Are my results as expected?
> In Yanhui's virt workload (qemu, libaio, dio, high queue depth, single
> job), the patch improves throughput significantly (>50%) when running
> a sequential write test (dio, libaio, 16 jobs) to XFS. And it is
> observed that IO merging recovers to the level seen with host tags
> disabled.
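For anyone trying to reproduce this, the workload should look roughly
like the fio job below. The engine, job count, and queue depth come
from this thread; bs, size, and filename are guesses, since the thread
doesn't give them:

; Sketch of a fio job approximating the reported workload
; (libaio, direct IO, 16 jobs, QD 8, sequential write to a file
; on XFS). bs, size, and filename are placeholder assumptions.
[seq-write]
ioengine=libaio
direct=1
rw=write
bs=4k
iodepth=8
numjobs=16
size=1g
filename=/mnt/xfs/testfile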
Thread overview: 15+ messages
2021-05-14 2:20 [PATCH] blk-mq: plug request for shared sbitmap Ming Lei
2021-05-14 14:59 ` Jens Axboe
2021-05-18 9:44 ` John Garry
2021-05-18 11:16 ` Ming Lei
2021-05-18 11:42 ` John Garry [this message]
2021-05-18 12:00 ` Ming Lei
2021-05-18 12:51 ` John Garry
2021-05-18 16:01 ` John Garry
2021-05-19 0:21 ` Ming Lei
2021-05-19 8:41 ` John Garry
2021-05-20 1:23 ` Ming Lei
2021-05-20 8:21 ` John Garry
2021-05-18 11:54 ` Hannes Reinecke
2021-05-18 12:37 ` John Garry
2021-05-18 13:22 ` Hannes Reinecke