From: "yukuai (C)" <yukuai3@huawei.com>
To: <jack@suse.cz>, <paolo.valente@linaro.org>, <axboe@kernel.dk>,
	<tj@kernel.org>
Cc: <linux-block@vger.kernel.org>, <cgroups@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <yi.zhang@huawei.com>
Subject: Re: [PATCH -next v2 0/5] support concurrent sync io for bfq on a special occasion
Date: Sun, 24 Apr 2022 10:44:28 +0800	[thread overview]
Message-ID: <c120ed31-860d-99d2-5c97-4023766fea96@huawei.com> (raw)
In-Reply-To: <20220416093753.3054696-1-yukuai3@huawei.com>

friendly ping ...

On 2022/04/16 17:37, Yu Kuai wrote:
> Changes in v2:
>   - Use a different approach to count the root group, which is much simpler.
> 
> Currently, bfq can't handle sync I/O concurrently as long as it is
> not issued from the root group. This is because
> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
> bfq_asymmetric_scenario().
> 
> How a bfqg is counted into 'num_groups_with_pending_reqs':
> 
> Before this patchset:
>   1) The root group is never counted.
>   2) A bfqg is counted if it or any of its child bfqgs has pending requests.
>   3) A bfqg stops being counted once it and all of its child bfqgs have
>      completed all their requests.
> 
> After this patchset:
>   1) The root group is counted.
>   2) A bfqg is counted only if it has pending requests itself.
> This is because, for example, if sync I/O is issued from cgroup
> /root/c1/c2, then root, c1 and c2 would otherwise all be counted into
> 'num_groups_with_pending_reqs', which makes it impossible to handle
> sync I/O concurrently.
> 
>   3) A bfqg stops being counted once it has completed all of its requests.
> This is because, for example, if t1 issues sync I/O on the root group
> while t2 and t3 issue sync I/O on the same child group,
> num_groups_with_pending_reqs is 2. After t1 stops,
> num_groups_with_pending_reqs would otherwise still be 2, and sync I/O
> from t2 and t3 still could not be handled concurrently.
> 
> fio test script (startdelay is used to avoid queue merging):
> [global]
> filename=/dev/nvme0n1
> allow_mounted_write=0
> ioengine=psync
> direct=1
> ioscheduler=bfq
> offset_increment=10g
> group_reporting
> rw=randwrite
> bs=4k
> 
> [test1]
> numjobs=1
> 
> [test2]
> startdelay=1
> numjobs=1
> 
> [test3]
> startdelay=2
> numjobs=1
> 
> [test4]
> startdelay=3
> numjobs=1
> 
> [test5]
> startdelay=4
> numjobs=1
> 
> [test6]
> startdelay=5
> numjobs=1
> 
> [test7]
> startdelay=6
> numjobs=1
> 
> [test8]
> startdelay=7
> numjobs=1
> 
> test results:
> running fio on the root cgroup
> v5.18-rc1:	   550 MiB/s
> v5.18-rc1-patched: 550 MiB/s
> 
> running fio on a non-root cgroup
> v5.18-rc1:	   349 MiB/s
> v5.18-rc1-patched: 550 MiB/s
> 
> Yu Kuai (5):
>    block, bfq: cleanup bfq_weights_tree add/remove apis
>    block, bfq: add fake weight_counter for weight-raised queue
>    bfq, block: record how many queues have pending requests in bfq_group
>    block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
>    block, bfq: do not idle if only one cgroup is activated
> 
>   block/bfq-cgroup.c  |  1 +
>   block/bfq-iosched.c | 90 +++++++++++++++++++--------------------------
>   block/bfq-iosched.h | 26 ++++++-------
>   block/bfq-wf2q.c    | 30 +++------------
>   4 files changed, 56 insertions(+), 91 deletions(-)
> 
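
To make the new counting rule concrete, below is a minimal, self-contained
user-space C sketch. It is not the actual bfq code: the struct and helper
names (struct group, group_add_request(), group_complete_request()) are made
up for illustration, and it only models the behaviour described in the cover
letter above, i.e. a group (the root group included) is counted while it has
pending requests of its own and is uncounted as soon as it has completed
them all:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a bfq group; not the real kernel structure. */
struct group {
	int pending_reqs;	/* requests pending in this group itself */
	bool counted;		/* currently counted in num_groups_with_pending_reqs */
};

static int num_groups_with_pending_reqs;

/* Count a group (root included) when its own first request arrives. */
static void group_add_request(struct group *g)
{
	if (g->pending_reqs++ == 0) {
		g->counted = true;
		num_groups_with_pending_reqs++;
	}
}

/* Uncount a group as soon as it has completed all of its own requests. */
static void group_complete_request(struct group *g)
{
	if (--g->pending_reqs == 0 && g->counted) {
		g->counted = false;
		num_groups_with_pending_reqs--;
	}
}

int main(void)
{
	struct group root = { 0 }, c2 = { 0 };

	/* Sync I/O issued from /root/c1/c2: only c2 itself is counted,
	 * instead of every group on the path as before the patchset. */
	group_add_request(&c2);
	printf("after issue from c2:   %d\n", num_groups_with_pending_reqs);	/* 1 */

	/* t1 also issues I/O on the root group: two groups are counted. */
	group_add_request(&root);
	printf("after issue from root: %d\n", num_groups_with_pending_reqs);	/* 2 */

	/* As soon as the root group completes its requests it is uncounted,
	 * so the remaining sync I/O confined to c2 is no longer counted twice. */
	group_complete_request(&root);
	printf("after root completes:  %d\n", num_groups_with_pending_reqs);	/* 1 */

	group_complete_request(&c2);
	printf("after c2 completes:    %d\n", num_groups_with_pending_reqs);	/* 0 */
	return 0;
}

Compiled as plain user-space C, this prints 1, 2, 1, 0, matching the intent
that sync I/O confined to a single group no longer inflates
'num_groups_with_pending_reqs'.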

Thread overview: 36+ messages
2022-04-16  9:37 [PATCH -next v2 0/5] support concurrent sync io for bfq on a special occasion Yu Kuai
2022-04-16  9:37 ` Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 1/5] block, bfq: cleanup bfq_weights_tree add/remove apis Yu Kuai
2022-04-16  9:37   ` Yu Kuai
2022-04-25  9:37   ` Jan Kara
2022-04-25  9:37     ` Jan Kara
2022-04-16  9:37 ` [PATCH -next v2 2/5] block, bfq: add fake weight_counter for weight-raised queue Yu Kuai
2022-04-16  9:37   ` Yu Kuai
2022-04-25  9:48   ` Jan Kara
2022-04-25  9:48     ` Jan Kara
2022-04-25 13:34     ` yukuai (C)
2022-04-25 13:34       ` yukuai (C)
2022-04-25 13:55       ` yukuai (C)
2022-04-25 13:55         ` yukuai (C)
2022-04-25 16:26         ` Jan Kara
2022-04-25 16:26           ` Jan Kara
2022-04-25 16:16       ` Jan Kara
2022-04-25 16:16         ` Jan Kara
2022-04-26  1:49         ` yukuai (C)
2022-04-26  1:49           ` yukuai (C)
2022-04-26  7:40           ` Jan Kara
2022-04-26  8:27             ` yukuai (C)
2022-04-26  8:27               ` yukuai (C)
2022-04-26  9:15               ` Jan Kara
2022-04-26  9:15                 ` Jan Kara
2022-04-26 11:27                 ` yukuai (C)
2022-04-26 11:27                   ` yukuai (C)
2022-04-26 14:04                 ` Paolo Valente
2022-04-16  9:37 ` [PATCH -next v2 3/5] bfq, block: record how many queues have pending requests in bfq_group Yu Kuai
2022-04-16  9:37   ` Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 4/5] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
2022-04-16  9:37   ` Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 5/5] block, bfq: do not idle if only one cgroup is activated Yu Kuai
2022-04-16  9:37   ` Yu Kuai
2022-04-24  2:44 ` yukuai (C) [this message]
2022-04-24  2:44   ` [PATCH -next v2 0/5] support concurrent sync io for bfq on a special occasion yukuai (C)
