From: "yukuai (C)" <yukuai3@huawei.com>
To: Jan Kara <jack@suse.cz>
Cc: <paolo.valente@linaro.org>, <axboe@kernel.dk>, <tj@kernel.org>,
	<linux-block@vger.kernel.org>, <cgroups@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <yi.zhang@huawei.com>
Subject: Re: [PATCH -next v2 2/5] block, bfq: add fake weight_counter for weight-raised queue
Date: Tue, 26 Apr 2022 19:27:24 +0800	[thread overview]
Message-ID: <6a829b09-4546-990a-52b6-bbd398f864bb@huawei.com> (raw)
In-Reply-To: <20220426091556.qzryd552gzo6dikf@quack3.lan>

On 2022/04/26 17:15, Jan Kara wrote:
> On Tue 26-04-22 16:27:46, yukuai (C) wrote:
>> On 2022/04/26 15:40, Jan Kara wrote:
>>> On Tue 26-04-22 09:49:04, yukuai (C) wrote:
>>>> On 2022/04/26 0:16, Jan Kara wrote:
>>>>> Hello!
>>>>>
>>>>> On Mon 25-04-22 21:34:16, yukuai (C) wrote:
>>>>>> On 2022/04/25 17:48, Jan Kara wrote:
>>>>>>> On Sat 16-04-22 17:37:50, Yu Kuai wrote:
>>>>>>>> A weight-raised queue is not inserted into weights_tree, which makes it
>>>>>>>> impossible to track how many queues have pending requests through
>>>>>>>> weights_tree insertion and removal. This patch adds a fake weight_counter
>>>>>>>> for weight-raised queues to do that.
>>>>>>>>
>>>>>>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>>>>>>
>>>>>>> This is a bit hacky. I was looking into a better place where to hook to
>>>>>>> count entities in a bfq_group with requests and I think bfq_add_bfqq_busy()
>>>>>>> and bfq_del_bfqq_busy() are ideal for this. It also makes better sense
>>>>>>> conceptually than hooking into weights tree handling.
>>>>>>
>>>>>> bfq_del_bfqq_busy() will be called when all the reqs in the bfqq are
>>>>>> dispatched, however there might still be some reqs that aren't
>>>>>> completed yet.
>>>>>>
>>>>>> Here what we want to track is how many bfqqs have pending reqs,
>>>>>> specifically whether the bfqq has reqs that aren't completed.
>>>>>>
>>>>>> Thus I think bfq_del_bfqq_busy() is not the right place to do that.
>>>>>
>>>>> Yes, I'm aware there will be a difference. But note that bfqq can stay busy
>>>>> with only dispatched requests because the logic in __bfq_bfqq_expire() will
>>>>> not call bfq_del_bfqq_busy() if idling is needed for service guarantees. So
>>>>> I think using bfq_add/del_bfqq_busy() would work OK.
>>>> Hi,
>>>>
>>>> I didn't think of that before. If the bfqq stays busy after dispatching
>>>> all the requests, there are two other places where the bfqq can clear busy:
>>>>
>>>> 1) bfq_remove_request(), bfqq has to insert a new req while it's not in
>>>> service.
>>>
>>> Yes and the request then would have to be dispatched or merged. Which
>>> generally means another bfqq from the same bfqg is currently active and
>>> thus this should have no impact on service guarantees we are interested in.
>>>
>>>> 2) bfq_release_process_ref(), user thread is gone / moved, or old bfqq
>>>> is gone due to merge / ioprio change.
>>>
>>> Yes, here there's no new IO for the bfqq so no point in maintaining any
>>> service guarantees to it.
>>>
>>>> I wonder, will bfq_del_bfqq_busy() be called immediately when requests
>>>> are completed? (It seems not, to me...) For example, if a user thread
>>>> issues a sync io just once and keeps running without issuing new io,
>>>> when does the bfqq clear the busy state?
>>>
>>> No, when bfqq is kept busy, it will get scheduled as in-service queue in
>>> the future. Then what happens depends on whether it will get more requests
>>> or not. But generally its busy state will get cleared once it is expired
>>> for a reason other than preemption.
>>
>> Thanks for your explanation.
>>
>> I think in the normal case using bfq_add/del_bfqq_busy() is fine.
>>
>> There is one last situation that I'm worried about: if some disk is so
>> slow that the dispatched reqs are not yet completed when the bfqq is
>> rescheduled as the in-service queue, the busy state can be cleared
>> while reqs are not completed.
>>
>> Using bfq_del_bfqq_busy() will change behaviour in this special case;
>> do you think service guarantees will be broken?
> 
> Well, I don't think so. Because slow disks don't tend to do a lot of
> internal scheduling (or have deep IO queues for that matter). Also note
> that generally bfq_select_queue() will not even expire a queue (despite it
> not having any requests to dispatch) when we should not dispatch other
> requests to maintain service guarantees. So I think service guarantees will
> be generally preserved. Obviously I could be wrong, we will not know
> until we try it :).

Thanks a lot for your explanation, I'll do some tests. And I'll send a
new version if the tests look good.


Thread overview: 36+ messages
2022-04-16  9:37 [PATCH -next v2 0/5] support concurrent sync io for bfq on a specail occasion Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 1/5] block, bfq: cleanup bfq_weights_tree add/remove apis Yu Kuai
2022-04-25  9:37   ` Jan Kara
2022-04-16  9:37 ` [PATCH -next v2 2/5] block, bfq: add fake weight_counter for weight-raised queue Yu Kuai
2022-04-25  9:48   ` Jan Kara
2022-04-25 13:34     ` yukuai (C)
2022-04-25 13:55       ` yukuai (C)
2022-04-25 16:26         ` Jan Kara
2022-04-25 16:16       ` Jan Kara
2022-04-26  1:49         ` yukuai (C)
2022-04-26  7:40           ` Jan Kara
2022-04-26  8:27             ` yukuai (C)
2022-04-26  9:15               ` Jan Kara
2022-04-26 11:27                 ` yukuai (C) [this message]
2022-04-26 14:04                 ` Paolo Valente
2022-04-16  9:37 ` [PATCH -next v2 3/5] bfq, block: record how many queues have pending requests in bfq_group Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 4/5] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
2022-04-16  9:37 ` [PATCH -next v2 5/5] block, bfq: do not idle if only one cgroup is activated Yu Kuai
2022-04-24  2:44 ` [PATCH -next v2 0/5] support concurrent sync io for bfq on a specail occasion yukuai (C)
