linux-block.vger.kernel.org archive mirror
* [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
@ 2017-03-01  2:07 Hou Tao
  2017-03-02 10:29 ` Jan Kara
  0 siblings, 1 reply; 6+ messages in thread
From: Hou Tao @ 2017-03-01  2:07 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, jmoyer, jack, stable

When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
as the delay of the cfq_group's vdisktime if other cfq_groups are
already present.

When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
from jiffies to nanoseconds") results in an excessively large delay,
because the nanosecond-based CFQ_IDLE_DELAY is added to a vdisktime
that counts requests rather than time. This leads to an abnormally long
I/O scheduling delay for the added cfq_group. To fix it, we just need
to revert to the old CFQ_IDLE_DELAY value, HZ / 5, when iops mode is
enabled.

Cc: <stable@vger.kernel.org> # 4.8+
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 block/cfq-iosched.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 1379447..fdeb70b 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1361,6 +1361,14 @@ cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg)
 	cfqg->vfraction = max_t(unsigned, vfr, 1);
 }
 
+static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
+{
+	if (!iops_mode(cfqd))
+		return CFQ_IDLE_DELAY;
+	else
+		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
+}
+
 static void
 cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
 {
@@ -1380,7 +1388,8 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
 	n = rb_last(&st->rb);
 	if (n) {
 		__cfqg = rb_entry_cfqg(n);
-		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
+		cfqg->vdisktime = __cfqg->vdisktime +
+			cfq_get_cfqg_vdisktime_delay(cfqd);
 	} else
 		cfqg->vdisktime = st->min_vdisktime;
 	cfq_group_service_tree_add(st, cfqg);
-- 
2.5.0


* Re: [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
  2017-03-01  2:07 [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode Hou Tao
@ 2017-03-02 10:29 ` Jan Kara
  2017-03-03 13:20   ` Hou Tao
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Kara @ 2017-03-02 10:29 UTC (permalink / raw)
  To: Hou Tao; +Cc: axboe, linux-block, jmoyer, jack, stable

On Wed 01-03-17 10:07:44, Hou Tao wrote:
> When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
> as the delay of the cfq_group's vdisktime if other cfq_groups are
> already present.
> 
> When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
> from jiffies to nanoseconds") results in an excessively large delay,
> because the nanosecond-based CFQ_IDLE_DELAY is added to a vdisktime
> that counts requests rather than time. This leads to an abnormally long
> I/O scheduling delay for the added cfq_group. To fix it, we just need
> to revert to the old CFQ_IDLE_DELAY value, HZ / 5, when iops mode is
> enabled.
> 
> Cc: <stable@vger.kernel.org> # 4.8+
> Signed-off-by: Hou Tao <houtao1@huawei.com>

OK, I agree my commit broke the logic in this case. Thanks for the fix.
Please also add the tag:

Fixes: 9a7f38c42c2b92391d9dabaf9f51df7cfe5608e4

I somewhat disagree with the fix though. See below:

> +static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
> +{
> +	if (!iops_mode(cfqd))
> +		return CFQ_IDLE_DELAY;
> +	else
> +		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
> +}
> +

So using nsecs_to_jiffies64(CFQ_IDLE_DELAY) when in iops mode just does not
make any sense. AFAIU the code in cfq_group_notify_queue_add(), we just want
to add the cfqg as the last one in the tree. So returning 1 from
cfq_get_cfqg_vdisktime_delay() in iops mode should be fine as well.

Frankly, vdisktime is in fixed-point precision shifted by
CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in
any case, and just adding 1 to the maximum vdisktime should be fine in
all cases. But that would require more testing to make sure I did not
miss anything subtle.

								Honza

>  static void
>  cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>  {
> @@ -1380,7 +1388,8 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>  	n = rb_last(&st->rb);
>  	if (n) {
>  		__cfqg = rb_entry_cfqg(n);
> -		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
> +		cfqg->vdisktime = __cfqg->vdisktime +
> +			cfq_get_cfqg_vdisktime_delay(cfqd);
>  	} else
>  		cfqg->vdisktime = st->min_vdisktime;
>  	cfq_group_service_tree_add(st, cfqg);
> -- 
> 2.5.0
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
  2017-03-02 10:29 ` Jan Kara
@ 2017-03-03 13:20   ` Hou Tao
  2017-03-03 19:53     ` Vivek Goyal
  0 siblings, 1 reply; 6+ messages in thread
From: Hou Tao @ 2017-03-03 13:20 UTC (permalink / raw)
  To: Jan Kara; +Cc: axboe, linux-block, jmoyer, stable, Vivek Goyal


On 2017/3/2 18:29, Jan Kara wrote:
> On Wed 01-03-17 10:07:44, Hou Tao wrote:
>> When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
>> as the delay of the cfq_group's vdisktime if other cfq_groups are
>> already present.
>>
>> When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
>> from jiffies to nanoseconds") results in an excessively large delay,
>> because the nanosecond-based CFQ_IDLE_DELAY is added to a vdisktime
>> that counts requests rather than time. This leads to an abnormally long
>> I/O scheduling delay for the added cfq_group. To fix it, we just need
>> to revert to the old CFQ_IDLE_DELAY value, HZ / 5, when iops mode is
>> enabled.
>>
>> Cc: <stable@vger.kernel.org> # 4.8+
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
> 
> OK, I agree my commit broke the logic in this case. Thanks for the fix.
> Please also add the tag:
> 
> Fixes: 9a7f38c42c2b92391d9dabaf9f51df7cfe5608e4
> 
> I somewhat disagree with the fix though. See below:
> 
>> +static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
>> +{
>> +	if (!iops_mode(cfqd))
>> +		return CFQ_IDLE_DELAY;
>> +	else
>> +		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
>> +}
>> +
> 
> So using nsecs_to_jiffies64(CFQ_IDLE_DELAY) when in iops mode just does not
> make any sense. AFAIU the code in cfq_group_notify_queue_add(), we just want
> to add the cfqg as the last one in the tree. So returning 1 from
> cfq_get_cfqg_vdisktime_delay() in iops mode should be fine as well.
Yes, nsecs_to_jiffies64(CFQ_IDLE_DELAY) is odd here; a better way would be
to define a new macro with a value of 1 or 200 and use it directly. I still
prefer 200, to stay consistent with the no-hrtimer configuration.

> Frankly, vdisktime is in fixed-point precision shifted by
> CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in
> any case, and just adding 1 to the maximum vdisktime should be fine in
> all cases. But that would require more testing to make sure I did not
> miss anything subtle.
Although that is what the current implementation does, I don't think we
should always add the cfq_group as the last one in the service tree. In
some test cases, I found that the delayed vdisktime of a cfq_group was
smaller than its vdisktime at the time it was removed from the service
tree, which hurts fairness. Maybe we could learn from CFS and calculate
the delay dynamically, but that would be a topic for another thread.

Regards,

Tao

> 
>>  static void
>>  cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>>  {
>> @@ -1380,7 +1388,8 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>>  	n = rb_last(&st->rb);
>>  	if (n) {
>>  		__cfqg = rb_entry_cfqg(n);
>> -		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
>> +		cfqg->vdisktime = __cfqg->vdisktime +
>> +			cfq_get_cfqg_vdisktime_delay(cfqd);
>>  	} else
>>  		cfqg->vdisktime = st->min_vdisktime;
>>  	cfq_group_service_tree_add(st, cfqg);
>> -- 
>> 2.5.0
>>


* Re: [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
  2017-03-03 13:20   ` Hou Tao
@ 2017-03-03 19:53     ` Vivek Goyal
  2017-03-06  8:55       ` Hou Tao
  0 siblings, 1 reply; 6+ messages in thread
From: Vivek Goyal @ 2017-03-03 19:53 UTC (permalink / raw)
  To: Hou Tao; +Cc: Jan Kara, axboe, linux-block, jmoyer, stable

On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:

[..]
> > Frankly, vdisktime is in fixed-point precision shifted by
> > CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in
> > any case, and just adding 1 to the maximum vdisktime should be fine in
> > all cases. But that would require more testing to make sure I did not
> > miss anything subtle.

I think even 1 will work. But in the beginning, IIRC, I took the idea
from the CPU scheduler. Adding a value bigger than 1 allows you to later
add some other group before this group (if you want to give that group
higher priority).

Thanks
Vivek


* Re: [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
  2017-03-03 19:53     ` Vivek Goyal
@ 2017-03-06  8:55       ` Hou Tao
  2017-03-06 13:45         ` Vivek Goyal
  0 siblings, 1 reply; 6+ messages in thread
From: Hou Tao @ 2017-03-06  8:55 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Jan Kara, axboe, linux-block, jmoyer, stable

Hi Vivek,

On 2017/3/4 3:53, Vivek Goyal wrote:
> On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:
> 
> [..]
>>> Frankly, vdisktime is in fixed-point precision shifted by
>>> CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in
>>> any case, and just adding 1 to the maximum vdisktime should be fine in
>>> all cases. But that would require more testing to make sure I did not
>>> miss anything subtle.
> 
> I think even 1 will work. But in the beginning, IIRC, I took the idea
> from the CPU scheduler. Adding a value bigger than 1 allows you to later
> add some other group before this group (if you want to give that group
> higher priority).
I still don't understand why using a value bigger than 1 allows a
later-added group to have a vdisktime less than that of the first-added
group. Could you explain it in more detail?

Regards,

Tao

> Thanks
> Vivek
> 
> 


* Re: [PATCH] cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode
  2017-03-06  8:55       ` Hou Tao
@ 2017-03-06 13:45         ` Vivek Goyal
  0 siblings, 0 replies; 6+ messages in thread
From: Vivek Goyal @ 2017-03-06 13:45 UTC (permalink / raw)
  To: Hou Tao; +Cc: Jan Kara, axboe, linux-block, jmoyer, stable

On Mon, Mar 06, 2017 at 04:55:25PM +0800, Hou Tao wrote:
> Hi Vivek,
> 
> On 2017/3/4 3:53, Vivek Goyal wrote:
> > On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:
> > 
> > [..]
> >>> Frankly, vdisktime is in fixed-point precision shifted by
> >>> CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in
> >>> any case, and just adding 1 to the maximum vdisktime should be fine in
> >>> all cases. But that would require more testing to make sure I did not
> >>> miss anything subtle.
> > 
> > I think even 1 will work. But in the beginning, IIRC, I took the idea
> > from the CPU scheduler. Adding a value bigger than 1 allows you to later
> > add some other group before this group (if you want to give that group
> > higher priority).
> I still don't understand why using a value bigger than 1 allows a
> later-added group to have a vdisktime less than that of the first-added
> group. Could you explain it in more detail?

The way I thought about this was as follows.

Assume Idle delay value is 5.

Say group A is the last group in the tree and has vdisktime=100. Now a
new group B gets IO and gets added to the tree, say with value 105
(100 + 5). Now another group C gets IO and gets added to the tree.
Assume we want to give C a little higher priority than group B (but not
higher than A). So we could assign it a value between 100 and 105 and
it will work. But if we had always added 1, then group A will have
vdisktime 100, B will have 101, and now C can't be put between A and B.

But this is such a corner case that I doubt it is going to matter, so
changing it to 1 might not show any effect at all.

We had the issue that groups which were not continuously backlogged
would lose their share. So I had tried implementing something that, when
adding a group, gave it a smaller vdisktime (scaled based on its
weight). But that did not help much, which is why a comment was left in
there.

Vivek

