From: Yu Kuai <yukuai3@huawei.com>
To: <paolo.valente@linaro.org>, <axboe@kernel.dk>
Cc: <linux-block@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<yukuai3@huawei.com>, <yi.zhang@huawei.com>
Subject: [PATCH 3/3] block, bfq: consider request size in bfq_asymmetric_scenario()
Date: Wed, 14 Jul 2021 17:45:29 +0800 [thread overview]
Message-ID: <20210714094529.758808-4-yukuai3@huawei.com> (raw)
In-Reply-To: <20210714094529.758808-1-yukuai3@huawei.com>
There is a special case in which bfq does not need to idle even when
more than one group is active:
1) all active queues have the same weight,
2) all active queues have the same request size,
3) all active queues belong to the same I/O-priority class.
Each time a request is dispatched, bfq can safely switch the in-service
queue, since the throughput of each active queue is guaranteed to be
equivalent.
Test procedure:
run "fio -numjobs=1 -ioengine=psync -bs=4k -direct=1 -rw=randread..." in
different cgroups (not the root cgroup).
Test result: total bandwidth (MiB/s)
| total jobs | before this patch | after this patch      |
| ---------- | ----------------- | --------------------- |
| 1          | 33.8              | 33.8                  |
| 2          | 33.8              | 65.4 (32.7 each job)  |
| 4          | 33.8              | 106.8 (26.7 each job) |
| 8          | 33.8              | 126.4 (15.8 each job) |
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/bfq-iosched.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index e5a1093ec30a..b78fe8a1537e 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -268,6 +268,15 @@ static struct kmem_cache *bfq_pool;
*/
#define BFQ_RATE_SHIFT 16
+/*
+ * Consider dispatched requests to have the same size only if:
+ * 1) bfq has kept dispatching requests of the same size for at least
+ *    one second, and
+ * 2) bfq has dispatched at least 1024 such requests.
+ * VARIED_REQUEST_SIZE() evaluates to true otherwise.
+ */
+#define VARIED_REQUEST_SIZE(bfqd) ((bfqd)->dispatch_count < 1024 ||\
+ time_before(jiffies, (bfqd)->dispatch_time + HZ))
/*
* When configured for computing the duration of the weight-raising
* for interactive queues automatically (see the comments at the
@@ -724,7 +733,8 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
bool multiple_classes_busy;
#ifdef CONFIG_BFQ_GROUP_IOSCHED
- if (bfqd->num_groups_with_pending_reqs > 1)
+ if (bfqd->num_groups_with_pending_reqs > 1 &&
+ VARIED_REQUEST_SIZE(bfqd))
return true;
#endif
--
2.31.1
next prev parent reply other threads:[~2021-07-14 9:37 UTC|newest]
Thread overview: 10+ messages / expand[flat|nested] mbox.gz Atom feed top
2021-07-14 9:45 [PATCH 0/3] optimize the queue idle judgment Yu Kuai
2021-07-14 9:45 ` [PATCH 1/3] block, bfq: do not idle if only one cgroup is activated Yu Kuai
2021-07-24 7:12 ` Paolo Valente
2021-07-26 1:15 ` yukuai (C)
2021-07-31 7:10 ` yukuai (C)
2021-08-03 7:07 ` Paolo Valente
2021-08-03 11:30 ` yukuai (C)
2021-07-14 9:45 ` [PATCH 2/3] block, bfq: add support to record request size information Yu Kuai
2021-07-14 9:45 ` Yu Kuai [this message]
2021-07-20 12:33 ` [PATCH 0/3] optimize the queue idle judgment yukuai (C)