From: "yukuai (C)" <>
To: Bart Van Assche <>, <>,
Cc: <>, <>,
Subject: Re: [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set
Date: Tue, 3 Aug 2021 10:57:05 +0800	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 2021/08/03 0:17, Bart Van Assche wrote:
> On 8/2/21 6:34 AM, yukuai (C) wrote:
>> I run a test on both null_blk and nvme, results show that there are no
>> performance degradation:
>> test platform: x86
>> test cpu: 2 nodes, total 72
>> test scheduler: none
>> test device: null_blk / nvme
>> test cmd: fio -filename=/dev/xxx -name=test -ioengine=libaio -direct=1
>> -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G
>> -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting
>> -runtime=120
>> test results: iops
>> 1) null_blk before this patch: 280k
>> 2) null_blk after this patch: 282k
>> 3) nvme before this patch: 378k
>> 4) nvme after this patch: 384k
> Please use io_uring for performance tests.
> The null_blk numbers seem way too low to me. If I run a null_blk 
> performance test inside a VM with 6 CPU cores (Xeon W-2135 CPU) I see 
> about 6 million IOPS for synchronous I/O and about 4.4 million IOPS when 
> using libaio. The options I used and that are not in the above command 
> line are: --thread --gtod_reduce=1 --ioscheduler=none.

Hi, Bart

The CPU I'm testing on is an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz,
and after switching to io_uring with "--thread --gtod_reduce=1
--ioscheduler=none", the numbers increase to 330k, still far behind
6000k.
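For reference, the adjusted command line (the original fio invocation above with the ioengine switched to io_uring and your suggested options appended) would look roughly like:

```shell
fio -filename=/dev/xxx -name=test -ioengine=io_uring -direct=1 \
    -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G \
    -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting \
    -runtime=120 --thread --gtod_reduce=1 --ioscheduler=none
```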

The only new atomic operation in the hot path is the atomic_read() in
hctx_may_queue(), and the atomic variable changes in only two cases:

a. failing to get a driver tag while dbusy is not set: increase the
count and set dbusy.
b. if dbusy is set when the queue switches from busy to idle: decrease
the count and clear dbusy.

During one "idle -> busy -> idle" period of a device, the new atomic
variable is written at most twice, which means it is almost read-only
in the above test situation. So I guess the impact on performance is
minimal?



Thread overview: 13+ messages
2021-07-12  3:18 [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set Yu Kuai
2021-07-12  3:19 ` yukuai (C)
2021-07-20 12:33 ` yukuai (C)
2021-07-31  7:13   ` yukuai (C)
2021-07-31 17:15 ` Bart Van Assche
2021-08-02 13:34   ` yukuai (C)
2021-08-02 16:17     ` Bart Van Assche
2021-08-03  2:57       ` yukuai (C) [this message]
2021-08-03 18:38         ` Bart Van Assche
2021-08-06  1:50           ` yukuai (C)
2021-08-06  2:43             ` Bart Van Assche
2021-08-14  9:43               ` yukuai (C)
2021-08-16  4:16                 ` Bart Van Assche
