From: Wang You <wangyoua@uniontech.com>
To: bvanassche@acm.org
Cc: axboe@kernel.dk, fio@vger.kernel.org, hch@lst.de,
jaegeuk@kernel.org, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, ming.lei@redhat.com,
wangxiaohua@uniontech.com, wangyoua@uniontech.com
Subject: Re: [PATCH 2/2] block/mq-deadline: Prioritize first request
Date: Fri, 22 Jul 2022 11:34:47 +0800 [thread overview]
Message-ID: <20220722033447.342887-1-wangyoua@uniontech.com> (raw)
In-Reply-To: <c4da04a9-3f3d-3e22-a59a-1ab2867a5649@acm.org>
>> The test hardware is:
>> Kunpeng-920, HW-SAS3508+(MG04ACA400N * 2), RAID0.
> Please also provide performance numbers for a single hard disk and with
> no RAID controller between the host and the hard disk.
> Thanks,
> Bart.
Hi,
Yesterday I found another server without a RAID controller and tested the same
HDD there, but the performance (the numbers were unstable, so I ran the test
many times) was not what I expected.
I also tested an SSD behind a RAID controller on a previous Kunpeng server and
the performance improved, but that is not always the case with SSDs on
other servers.
This may indicate that the RAID controller plays an important role here,
so I'm not sure whether this patch really has the desired effect.
Thanks,
Wang.
The test hardware is:
Hygon C86, MG04ACA400N
The test command is:
fio -ioengine=psync -lockmem=1G -buffered=0 -time_based=1 -direct=1 -iodepth=1 \
    -thread -bs=512B -size=110g -numjobs=32 -runtime=300 -group_reporting \
    -name=read -filename=/dev/sdc -ioscheduler=mq-deadline -rw=read[,write,rw]
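For reproduction, the device setup before launching fio would look roughly like
the sketch below. The scheduler path is the standard block-layer sysfs location;
the location and name of the nr_sched_batch attribute come from patch 1/2 of
this series and are an assumption here, not a confirmed path:

```shell
# Hypothetical target device; substitute your own.
DEV=sdc

# Select the mq-deadline I/O scheduler for the device (standard sysfs path).
echo mq-deadline > /sys/block/$DEV/queue/scheduler

# Assumed attribute added by the nr_sched_batch patch; the exact
# path and name depend on how the sysfs interface is wired up.
echo 1 > /sys/block/$DEV/queue/nr_sched_batch
```

fio can then be run against /dev/$DEV with the command above.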
The following is the test data:
origin/master:
read iops: 15463 write iops: 5949 rw iops: 574,576
nr_sched_batch = 1:
read iops: 15082 write iops: 6283 rw iops: 783,786
nr_sched_batch = 1, use deadline_head_request:
read iops: 15368 write iops: 6575 rw iops: 907,906
The test hardware is:
Kunpeng-920, HW-SAS3508 + Samsung SSD 780, RAID0.
The test command is:
fio -ioengine=psync -lockmem=1G -buffered=0 -time_based=1 -direct=1 -iodepth=1 \
    -thread -bs=512B -size=110g -numjobs=16 -runtime=300 -group_reporting \
    -name=read -filename=/dev/sda -ioscheduler=mq-deadline -rw=read[,write,rw]
The following is the test data:
origin/master:
read iops: 115399 write iops: 136801 rw iops: 58082,58084
nr_sched_batch = 1, use deadline_head_request:
read iops: 136473 write iops: 184646 rw iops: 56460,56454
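To put the two tables in perspective, the relative IOPS changes between
origin/master and the deadline_head_request variant can be computed from the
figures above (a quick sketch; the numbers are copied verbatim from this
message, with read and write summed for the mixed rw workloads):

```python
def pct_change(before: int, after: int) -> float:
    """Relative change in percent from 'before' to 'after'."""
    return (after - before) / before * 100.0

# HDD (Hygon C86, MG04ACA400N): mixed rw IOPS (read + write summed).
hdd_rw = pct_change(574 + 576, 907 + 906)

# SSD behind HW-SAS3508 RAID0: sequential write IOPS.
ssd_write = pct_change(136801, 184646)

# SSD behind HW-SAS3508 RAID0: mixed rw IOPS (read + write summed).
ssd_rw = pct_change(58082 + 58084, 56460 + 56454)

print(f"HDD rw:    {hdd_rw:+.1f}%")    # large gain on the mixed HDD workload
print(f"SSD write: {ssd_write:+.1f}%")
print(f"SSD rw:    {ssd_rw:+.1f}%")    # small regression on the mixed SSD workload
```

This only summarizes the reported averages; it says nothing about run-to-run
variance, which was significant on the HDD as noted above.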
Thread overview: 9+ messages
2022-07-20 9:30 [PATCH 0/2] Improve mq-deadline performance in HDD Wang You
2022-07-20 9:30 ` [PATCH 1/2] block: Introduce nr_sched_batch sys interface Wang You
2022-07-20 16:20 ` Bart Van Assche
2022-07-22 8:07 ` Wang You
2022-07-20 9:30 ` [PATCH 2/2] block/mq-deadline: Prioritize first request Wang You
2022-07-20 16:22 ` Bart Van Assche
2022-07-22 3:34 ` Wang You [this message]
2022-07-20 16:18 ` [PATCH 0/2] Improve mq-deadline performance in HDD Bart Van Assche
2022-07-22 7:57 ` Wang You