linux-kernel.vger.kernel.org archive mirror
From: Paolo VALENTE <paolo.valente@unimore.it>
To: Yu Kuai <yukuai1@huaweicloud.com>
Cc: Tejun Heo <tj@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Jan Kara <jack@suse.cz>,
	cgroups@vger.kernel.org,
	linux-block <linux-block@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	yi.zhang@huawei.com, "yukuai (C)" <yukuai3@huawei.com>
Subject: Re: [patch v11 0/6] support concurrent sync io for bfq on a special occasion
Date: Tue, 25 Oct 2022 08:34:44 +0200	[thread overview]
Message-ID: <A1E2F76F-2D27-4DCE-9B97-C8011CBE2A9E@unimore.it> (raw)
In-Reply-To: <82f1e969-742d-d3c3-63ca-961c755b5c35@huaweicloud.com>



> Il giorno 18 ott 2022, alle ore 06:00, Yu Kuai <yukuai1@huaweicloud.com> ha scritto:
> 
> Hi, Paolo
> 
> 在 2022/10/11 17:36, Yu Kuai 写道:
>>>>> Your patches seem ok to me now (thanks for your contribution and, above all, for your patience). I have only a high-level concern: what do you mean when you say that service guarantees are still preserved? What test did you run exactly? This point is very important to me. I'd like to see some convincing test with differentiated weights. In case you don't have other tools for executing such tests quickly, you may want to use the bandwidth-latency test in my simple S benchmark suite (for which I'm willing to help).
>>>> 
>>>> Is there any test that you wish me to try?
>>>> 
>>>> By the way, I think that in the case where multiple groups are
>>>> activated (specifically, num_groups_with_pending_rqs > 1), the io
>>>> path in bfq is the same with or without this patchset.
>> I only ran the test once; the results are a little inconsistent. Do
>> you think they are within the normal fluctuation range?
> 
> I reran the test manually 5 times; here are the average results:
> 
> without this patchset / with this patchset:
> 
> | --------------- | ------------- | ------------- | -------------- | ------------- | -------------- |
> | cg1 weight      | 10            | 20            | 30             | 40            | 50             |
> | cg2 weight      | 90            | 80            | 70             | 60            | 50             |
> | cg1 bw MiB/s    | 21.4 / 21.74  | 42.72 / 46.6  | 63.82 / 61.52  | 94.74 / 90.92 | 140 / 138.2    |
> | cg2 bw MiB/s    | 197.2 / 197.4 | 182 / 181.2   | 171.2 / 173.44 | 162 / 156.8   | 138.6 / 137.04 |
> | cg2 bw / cg1 bw | 9.22 / 9.08   | 4.26 / 3.89   | 2.68 / 2.82    | 1.71 / 1.72   | 0.99 / 0.99    |
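As a quick sanity check on the table above: under BFQ's proportional sharing, the cg2/cg1 bandwidth ratio should track the cg2/cg1 weight ratio. The sketch below (an addition for illustration, not part of the original exchange; the "measured" values are the with-patchset ratios copied from the last row) compares the two:

```shell
# Compare measured cg2/cg1 bandwidth ratios against the ratios the
# configured weights imply (weight-proportional sharing).
report=$(awk 'BEGIN {
  split("10 20 30 40 50", w1)
  split("90 80 70 60 50", w2)
  # with-patchset cg2/cg1 ratios, copied from the table above
  split("9.08 3.89 2.82 1.72 0.99", m)
  for (i = 1; i <= 5; i++)
    printf "weights %s:%s expected %.2f measured %s\n", w1[i], w2[i], w2[i] / w1[i], m[i]
}')
printf '%s\n' "$report"
```

The measured ratios stay close to the expected ones across all weight pairs, which is the service-guarantee property being checked.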

Great!  Results are (statistically) the same, with and without your
patchset.  For me your patches are ok.  Thank you very much for this
contribution, and sorry again for my delay.

Acked-by: Paolo Valente <paolo.valente@linaro.org>

Thanks,
Paolo

> 
>> test script:
>> fio -filename=/dev/nullb0 -ioengine=libaio -ioscheduler=bfq -numjobs=1 -iodepth=64 -direct=1 -bs=4k -rw=randread -runtime=60 -name=test
>> without this patchset:
>> |                 |      |      |      |      |      |
>> | --------------- | ---- | ---- | ---- | ---- | ---- |
>> | cg1 weight      | 10   | 20   | 30   | 40   | 50   |
>> | cg2 weight      | 90   | 80   | 70   | 60   | 50   |
>> | cg1 bw MiB/s    | 25.8 | 51.0 | 80.1 | 90.5 | 138  |
>> | cg2 bw MiB/s    | 193  | 179  | 162  | 127  | 136  |
>> | cg2 bw / cg1 bw | 7.48 | 3.51 | 2.02 | 1.40 | 0.98 |
>> with this patchset
>> |                 |      |      |      |      |      |
>> | --------------- | ---- | ---- | ---- | ---- | ---- |
>> | cg1 weight      | 10   | 20   | 30   | 40   | 50   |
>> | cg2 weight      | 90   | 80   | 70   | 60   | 50   |
>> | cg1 bw MiB/s    | 21.5 | 43.9 | 62.7 | 87.4 | 136  |
>> | cg2 bw MiB/s    | 195  | 185  | 173  | 138  | 141  |
>> | cg2 bw / cg1 bw | 9.07 | 4.21 | 2.75 | 1.57 | 0.96 |
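For reference, a weighted two-cgroup fio run like the one above is typically set up along the following lines. This is only a sketch: it assumes cgroup v2 with the io controller enabled and BFQ's io.bfq.weight interface; the paths, the chosen weight column, and the dry-run wrapper are assumptions, not details taken from this thread.

```shell
#!/bin/sh
# Sketch of the two-cgroup setup the tables above assume (cgroup v2,
# BFQ on /dev/nullb0). Dry-run by default: commands are echoed, not
# executed. Set RUN= (empty) and run as root to execute for real.
RUN=${RUN:-echo}
CG=/sys/fs/cgroup              # assumed cgroup v2 mount point
W1=10 W2=90                    # one weight column from the tables above

$RUN mkdir -p "$CG/cg1" "$CG/cg2"
$RUN sh -c "echo $W1 > $CG/cg1/io.bfq.weight"
$RUN sh -c "echo $W2 > $CG/cg2/io.bfq.weight"

# Move each fio job into its cgroup before it starts, then run both
# readers concurrently against the same null_blk device.
for g in cg1 cg2; do
  $RUN sh -c "echo \$\$ > $CG/$g/cgroup.procs && exec fio \
    -filename=/dev/nullb0 -ioengine=libaio -ioscheduler=bfq -numjobs=1 \
    -iodepth=64 -direct=1 -bs=4k -rw=randread -runtime=60 -name=$g" &
done
wait
```

Repeating this for each weight pair (10/90 through 50/50) reproduces one column of the tables per run.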
>>>> 
>>> 
>>> The tests cases you mentioned are ok for me (whatever tool or personal
>>> code you use to run them).  Just show me your results with and without
>>> your patchset applied.
>>> 
>>> Thanks,
>>> Paolo
>>> 
>>>> Thanks,
>>>> Kuai
>>>>> Thanks,
>>>>> Paolo
>>>>>> Previous versions:
>>>>>> RFC: https://lore.kernel.org/all/20211127101132.486806-1-yukuai3@huawei.com/ 
>>>>>> v1: https://lore.kernel.org/all/20220305091205.4188398-1-yukuai3@huawei.com/ 
>>>>>> v2: https://lore.kernel.org/all/20220416093753.3054696-1-yukuai3@huawei.com/ 
>>>>>> v3: https://lore.kernel.org/all/20220427124722.48465-1-yukuai3@huawei.com/
>>>>>> v4: https://lore.kernel.org/all/20220428111907.3635820-1-yukuai3@huawei.com/ 
>>>>>> v5: https://lore.kernel.org/all/20220428120837.3737765-1-yukuai3@huawei.com/ 
>>>>>> v6: https://lore.kernel.org/all/20220523131818.2798712-1-yukuai3@huawei.com/ 
>>>>>> v7: https://lore.kernel.org/all/20220528095020.186970-1-yukuai3@huawei.com/ 
>>>>>> 
>>>>>> 
>>>>>> Yu Kuai (6):
>>>>>>   block, bfq: support to track if bfqq has pending requests
>>>>>>   block, bfq: record how many queues have pending requests
>>>>>>   block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
>>>>>>   block, bfq: do not idle if only one group is activated
>>>>>>   block, bfq: cleanup bfq_weights_tree add/remove apis
>>>>>>   block, bfq: cleanup __bfq_weights_tree_remove()
>>>>>> 
>>>>>> block/bfq-cgroup.c  | 10 +++++++
>>>>>> block/bfq-iosched.c | 71 +++++++--------------------------------------
>>>>>> block/bfq-iosched.h | 30 +++++++++----------
>>>>>> block/bfq-wf2q.c    | 69 ++++++++++++++++++++++++++-----------------
>>>>>> 4 files changed, 76 insertions(+), 104 deletions(-)
>>>>>> 
>>>>>> -- 
>>>>>> 2.31.1
>>>>>> 
>>>>> .
>>> 
>>> .
>>> 
>> .
> 



Thread overview: 21+ messages
2022-09-16  7:19 [patch v11 0/6] support concurrent sync io for bfq on a special occasion Yu Kuai
2022-09-16  7:19 ` [patch v11 1/6] block, bfq: support to track if bfqq has pending requests Yu Kuai
2022-09-16  7:19 ` [patch v11 2/6] block, bfq: record how many queues have " Yu Kuai
2022-09-16  7:19 ` [patch v11 3/6] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
2022-09-27 16:32   ` Paolo Valente
2022-09-27 16:33     ` Paolo VALENTE
2022-09-16  7:19 ` [patch v11 4/6] block, bfq: do not idle if only one group is activated Yu Kuai
2022-09-16  7:19 ` [patch v11 5/6] block, bfq: cleanup bfq_weights_tree add/remove apis Yu Kuai
2022-09-19  8:46   ` Jan Kara
2022-09-27 16:19     ` Paolo Valente
2022-09-16  7:19 ` [patch v11 6/6] block, bfq: cleanup __bfq_weights_tree_remove() Yu Kuai
2022-09-27 16:38 ` [patch v11 0/6] support concurrent sync io for bfq on a special occasion Paolo Valente
2022-09-28  1:07   ` Yu Kuai
2022-10-11  8:11   ` Yu Kuai
2022-10-11  8:21     ` Paolo Valente
2022-10-11  9:36       ` Yu Kuai
2022-10-18  4:00         ` Yu Kuai
2022-10-25  6:34           ` Paolo VALENTE [this message]
2022-10-25  7:31             ` Yu Kuai
2022-11-01 11:32 ` Yu Kuai
2022-11-01 13:10 ` Jens Axboe
