From: Paolo Valente <paolo.valente@linaro.org>
To: Fam Zheng <zhengfeiran@bytedance.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
axboe@kernel.dk, Fam Zheng <fam@euphon.net>,
duanxiongchun@bytedance.com, cgroups@vger.kernel.org,
zhangjiachen.jc@bytedance.com, tj@kernel.org,
linux-block@vger.kernel.org
Subject: Re: [PATCH v3 0/3] Implement BFQ per-device weight interface
Date: Fri, 6 Sep 2019 16:28:42 +0200 [thread overview]
Message-ID: <A9FD0759-F1A7-4675-A2B3-26E5F11019EF@linaro.org> (raw)
In-Reply-To: <20190828035453.18129-1-zhengfeiran@bytedance.com>
Hi Jens,
is this patch series fine now?
Thanks,
Paolo
> On 28 Aug 2019, at 05:54, Fam Zheng <zhengfeiran@bytedance.com> wrote:
>
> v3: Pick up rev-by and ack-by from Paolo and Tejun.
> Add commit message to patch 3.
>
> (Revision numbering starts from v2, since v1 was circulated off-list)
>
> Hi Paolo and others,
>
> This adds to BFQ the missing per-device weight interfaces:
> blkio.bfq.weight_device on the legacy (v1) hierarchy and io.bfq.weight on the
> unified (v2) hierarchy. The implementation closely resembles what we had in
> CFQ, and the parsing code is largely reused.
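[Editor's note: for illustration, the interfaces named above can be exercised as below. This is a sketch for a typical setup, not from the original posting: the cgroup mount points, the existing `test1` group, and sda being device 8:0 are all assumptions to check on your system.]

```shell
# Assumes a cgroup named 'test1' already exists under the usual mount
# points, and that sda is major:minor 8:0 (verify with: cat /proc/partitions).

# cgroup v1 (legacy hierarchy): per-device BFQ weight for sda
echo "8:0 500" > /sys/fs/cgroup/blkio/test1/blkio.bfq.weight_device

# cgroup v2 (unified hierarchy): same setting via io.bfq.weight
echo "8:0 500" > /sys/fs/cgroup/test1/io.bfq.weight

# Default weight, used for any device without a per-device entry
echo "100" > /sys/fs/cgroup/test1/io.bfq.weight
```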
>
> Tests
> =====
>
> Using two cgroups and three block devices, with weights set up as:
>
> Cgroup test1 test2
> ============================================
> default 100 500
> sda 500 100
> sdb default default
> sdc 200 200
>
> cgroup v1 runs
> --------------
>
> sda.test1.out: READ: bw=913MiB/s
> sda.test2.out: READ: bw=183MiB/s
>
> sdb.test1.out: READ: bw=213MiB/s
> sdb.test2.out: READ: bw=1054MiB/s
>
> sdc.test1.out: READ: bw=650MiB/s
> sdc.test2.out: READ: bw=650MiB/s
>
> cgroup v2 runs
> --------------
>
> sda.test1.out: READ: bw=915MiB/s
> sda.test2.out: READ: bw=184MiB/s
>
> sdb.test1.out: READ: bw=216MiB/s
> sdb.test2.out: READ: bw=1069MiB/s
>
> sdc.test1.out: READ: bw=621MiB/s
> sdc.test2.out: READ: bw=622MiB/s
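[Editor's note: as a quick cross-check, not part of the original posting, the measured bandwidths above are proportional to the configured weights on every device; sdb has no per-device entry, so it falls back to the defaults of 100 (test1) and 500 (test2). Taking the cgroup v1 numbers:]

```python
# Per-device (test1, test2) weights from the table above;
# sdb inherits the default weights 100/500.
weights = {"sda": (500, 100), "sdb": (100, 500), "sdc": (200, 200)}

# Measured READ bandwidths (MiB/s) from the cgroup v1 runs above.
measured = {"sda": (913, 183), "sdb": (213, 1054), "sdc": (650, 650)}

for dev, (bw1, bw2) in measured.items():
    w1, w2 = weights[dev]
    # A value near 1.0 means bandwidth tracks the weight ratio.
    ratio = (bw1 / bw2) / (w1 / w2)
    print(f"{dev}: {ratio:.2f}")
```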
>
> Fam Zheng (3):
> bfq: Fix the missing barrier in __bfq_entity_update_weight_prio
> bfq: Extract bfq_group_set_weight from bfq_io_set_weight_legacy
> bfq: Add per-device weight
>
> block/bfq-cgroup.c | 151 +++++++++++++++++++++++++++++++++-----------
> block/bfq-iosched.h | 3 +
> block/bfq-wf2q.c | 2 +
> 3 files changed, 119 insertions(+), 37 deletions(-)
>
> --
> 2.22.1
>
Thread overview: 6+ messages
2019-08-28 3:54 [PATCH v3 0/3] Implement BFQ per-device weight interface Fam Zheng
2019-08-28 3:54 ` [PATCH v3 1/3] bfq: Fix the missing barrier in __bfq_entity_update_weight_prio Fam Zheng
2019-08-28 3:54 ` [PATCH v3 2/3] bfq: Extract bfq_group_set_weight from bfq_io_set_weight_legacy Fam Zheng
2019-08-28 3:54 ` [PATCH v3 3/3] bfq: Add per-device weight Fam Zheng
2019-09-06 14:28 ` Paolo Valente [this message]
2019-09-06 20:34 ` [PATCH v3 0/3] Implement BFQ per-device weight interface Jens Axboe