From: "yukuai (C)" <yukuai3@huawei.com>
To: Bart Van Assche <bvanassche@acm.org>, <axboe@kernel.dk>,
<ming.lei@redhat.com>
Cc: <linux-block@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<yi.zhang@huawei.com>
Subject: Re: [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set
Date: Fri, 6 Aug 2021 09:50:23 +0800
Message-ID: <a63fbd36-5a43-e412-c0a2-a06730945a13@huawei.com>
In-Reply-To: <07d2e6ba-d016-458a-a2ce-877fd7b72ed0@acm.org>
On 2021/08/04 2:38, Bart Van Assche wrote:
> On 8/2/21 7:57 PM, yukuai (C) wrote:
>> The CPU I'm testing on is an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, and
>> after switching to io_uring with "--thread --gtod_reduce=1
>> --ioscheduler=none", the numbers increase to 330k, which is still
>> far below 6000k.
>
> On
> https://ark.intel.com/content/www/us/en/ark/products/120485/intel-xeon-gold-6140-processor-24-75m-cache-2-30-ghz.html
> I found the following information about that CPU:
> 18 CPU cores
> 36 hyperthreads
>
> so 36 fio jobs should be sufficient. Maybe IOPS are lower than expected
> because of how null_blk has been configured? This is the configuration
> that I used in my test:
>
> modprobe null_blk nr_devices=0 &&
> udevadm settle &&
> cd /sys/kernel/config/nullb &&
> mkdir nullb0 &&
> cd nullb0 &&
> echo 0 > completion_nsec &&
> echo 512 > blocksize &&
> echo 0 > home_node &&
> echo 0 > irqmode &&
> echo 1024 > size &&
> echo 0 > memory_backed &&
> echo 2 > queue_mode &&
> echo 1 > power ||
> exit $?
hi Bart,
After applying this configuration, the IOPS of null_blk on my machine
increase to about 650k (330k before). Is this still too low?
By the way, I see no performance degradation with the patch applied.
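For reference, the measurement is roughly the following fio invocation
(a sketch only; the device path, job count, queue depth and runtime here
are illustrative, not necessarily the exact values used):

# random 512-byte reads against the configfs null_blk device created above
fio --name=nullb-randread --filename=/dev/nullb0 \
    --ioengine=io_uring --direct=1 --rw=randread --bs=512 \
    --numjobs=36 --iodepth=64 --thread --gtod_reduce=1 \
    --ioscheduler=none --group_reporting --time_based --runtime=30

The --thread, --gtod_reduce=1 and --ioscheduler=none switches match the
options mentioned earlier in the thread, and bs=512 matches the null_blk
blocksize configured above.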
Thanks
Kuai
>
>> The new atomic operation in the hot path is the atomic_read() in
>> hctx_may_queue(), and the atomic variable only changes in two
>> situations:
>>
>> a. failing to get a driver tag while dbusy is not set: increase the
>> counter and set dbusy.
>> b. if dbusy is set when the queue switches from busy to idle: decrease
>> the counter and clear dbusy.
>>
>> During the period a device goes "idle -> busy -> idle", the new atomic
>> variable is written at most twice, which means it is almost read-only
>> in the above test situation. So I guess the impact on performance is
>> minimal?
>
> Please measure the performance impact of your patch.
>
> Thanks,
>
> Bart.
>