Linux-Block Archive on lore.kernel.org
From: Paolo Valente <paolo.valente@linaro.org>
To: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>,
	newella@fb.com, clm@fb.com, Josef Bacik <josef@toxicpanda.com>,
	dennisz@fb.com, Li Zefan <lizefan@huawei.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	linux-block <linux-block@vger.kernel.org>,
	kernel-team@fb.com, cgroups@vger.kernel.org, ast@kernel.org,
	daniel@iogearbox.net, kafai@fb.com, songliubraving@fb.com,
	yhs@fb.com, bpf@vger.kernel.org
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller
Date: Mon, 2 Sep 2019 21:43:25 +0200
Message-ID: <D9F6BC6D-FEB3-40CA-A33C-F501AE4434F0@linaro.org>
In-Reply-To: <20190902155652.GH2263813@devbig004.ftw2.facebook.com>



> On 2 Sep 2019, at 17:56, Tejun Heo <tj@kernel.org> wrote:
> 
> On Mon, Sep 02, 2019 at 05:45:50PM +0200, Paolo Valente wrote:
>> Thanks for these extra explanations.  It is a little difficult for
>> me to understand how the min/max tweaks work exactly, but you did
>> give me the general idea.
> 
> It just limits how high and how low the IO issue rate, measured in
> cost, can go.  I.e., if max is at 200%, the controller won't issue
> more than twice what the cost model says 100% is.
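
Ok.  So, if I get the semantics right, with the values reported in my
runs below (min=1.00 max=10000.00) the clamp is effectively a no-op,
whereas a write like the following (hypothetical values, assuming
min/max are set the same way as the other qos parameters) would keep
the issue rate between half and twice the model's 100% point:

  echo "8:0 enable=1 min=50.00 max=200.00" > /cgroup/io.cost.qos
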
> 
>> Are these results in line with your expectations?  If they are, then
>> I'd like to extend the benchmarks to more workload mixes.  Or should
>> I try some other QoS configuration first?
> 
> They aren't.  Can you please include the content of io.cost.qos and
> io.cost.model before each run?  Note that partial writes to a subset
> of parameters don't clear the other parameters.
> 
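
Understood.  So a run could silently inherit leftover values from a
previous experiment.  To rule that out, the safe pattern is probably
to rewrite the full parameter set on every run, e.g. (a sketch that
just echoes back the complete line the kernel reports):

  echo "8:0 enable=1 ctrl=user rpct=95.00 rlat=2500 wpct=95.00 wlat=5000 min=1.00 max=10000.00" > /cgroup/io.cost.qos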

Yep.  I've added the printing of the two files to the script, and I'm
pasting the whole output below, in case you can also glean some other
useful information from it.
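
(The printing itself is nothing fancy; essentially a dump of the two
control files right after the QoS write, along the lines of the
one-liner below, which is my paraphrase rather than the script's
exact code:)

  for f in /cgroup/io.cost.qos /cgroup/io.cost.model; do echo "$f $(cat $f)"; done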

$ sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 7 -d 20
Switching to none for sda
echo "8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000" > /cgroup/io.cost.qos
/cgroup/io.cost.qos 8:0 enable=1 ctrl=user rpct=95.00 rlat=2500 wpct=95.00 wlat=5000 min=1.00 max=10000.00
/cgroup/io.cost.model 8:0 ctrl=auto model=linear rbps=488636629 rseqiops=8932 rrandiops=8518 wbps=427891549 wseqiops=28755 wrandiops=21940
Not changing weight/limits for interferer group 0
Not changing weight/limits for interferer group 1
Not changing weight/limits for interferer group 2
Not changing weight/limits for interferer group 3
Not changing weight/limits for interferer group 4
Not changing weight/limits for interferer group 5
Not changing weight/limits for interferer group 6
Not changing weight/limits for interfered
Starting Interferer group 0
start_fio_jobs InterfererGroup0 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile0
Starting Interferer group 1
start_fio_jobs InterfererGroup1 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile1
Starting Interferer group 2
start_fio_jobs InterfererGroup2 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile2
Starting Interferer group 3
start_fio_jobs InterfererGroup3 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile3
Starting Interferer group 4
start_fio_jobs InterfererGroup4 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile4
Starting Interferer group 5
start_fio_jobs InterfererGroup5 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile5
Starting Interferer group 6
start_fio_jobs InterfererGroup6 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile6
Linux 5.3.0-rc6+ (paolo-ThinkPad-W520) 	02/09/2019 	_x86_64_	(8 CPU)

02/09/2019 21:39:11
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              66.53         5.22         0.10       1385         27

start_fio_jobs interfered 20 default randread MAX poisson 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile_interfered0
02/09/2019 21:39:14
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             154.67        20.63         0.05         61          0

02/09/2019 21:39:17
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             453.00        64.27         0.00        192          0

02/09/2019 21:39:20
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             675.33        95.99         0.00        287          0

02/09/2019 21:39:23
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            1907.67       348.61         0.00       1045          0

02/09/2019 21:39:26
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2414.67       462.98         0.00       1388          0

02/09/2019 21:39:29
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2429.67       438.71         0.00       1316          0

02/09/2019 21:39:32
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2437.00       475.79         0.00       1427          0

02/09/2019 21:39:35
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda            2162.33       346.97         0.00       1040          0

Results for one rand reader against 7 seq readers (I/O depth 1), weight-none with weights: (default, default)
Aggregated throughput:
         min         max         avg     std_dev     conf99%
       64.27      475.79     319.046     171.233     1011.97
Read throughput:
         min         max         avg     std_dev     conf99%
       64.27      475.79     319.046     171.233     1011.97
Write throughput:
         min         max         avg     std_dev     conf99%
           0           0           0           0           0
Interfered total throughput:
         min         max         avg     std_dev
       1.032       4.455       2.266    0.742696
Interfered per-request total latency:
         min         max         avg     std_dev
        0.11      12.005      1.7545    0.878281
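
As a sanity check on the absolute numbers: the model line above
reports rbps=488636629, i.e. roughly 489 MB/s of sequential-read
capacity, and the aggregated read throughput does saturate just below
that, peaking at ~476 MB/s.  (The throughput figures are in MB/s,
matching the iostat columns.)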

Thanks,
Paolo


> Thanks.
> 
> -- 
> tejun

