linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
@ 2021-05-06 11:09 Xuewen Yan
  2021-05-06 12:28 ` Vincent Donnefort
  0 siblings, 1 reply; 7+ messages in thread
From: Xuewen Yan @ 2021-05-06 11:09 UTC (permalink / raw)
  To: vincent.donnefort, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann
  Cc: rostedt, bsegall, mgorman, bristot, linux-kernel, zhang.lyra, xuewyan

From: Xuewen Yan <xuewen.yan@unisoc.com>

The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
has changed, while the updated enqueued value is equal to task_util() with
the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
last_enqueued_diff.

Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")

Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e5e457fa9dc8..94d77b4fa601 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
 		return;
 
-	last_enqueued_diff = ue.enqueued;
+	last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
 
 	/*
 	 * Reset EWMA on utilization increases, the moving average is used only
-- 
2.29.0



* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-06 11:09 [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff Xuewen Yan
@ 2021-05-06 12:28 ` Vincent Donnefort
  2021-05-06 12:46   ` Xuewen Yan
  0 siblings, 1 reply; 7+ messages in thread
From: Vincent Donnefort @ 2021-05-06 12:28 UTC (permalink / raw)
  To: Xuewen Yan
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, linux-kernel, zhang.lyra,
	xuewyan

On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> From: Xuewen Yan <xuewen.yan@unisoc.com>
> 
> The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> has changed, while the updated enqueued value is equal to task_util() with
> the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> last_enqueued_diff.
> 
> Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")
> 
> Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e5e457fa9dc8..94d77b4fa601 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
>  	if (ue.enqueued & UTIL_AVG_UNCHANGED)
>  		return;
>  
> -	last_enqueued_diff = ue.enqueued;
> +	last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
>  
>  	/*
>  	 * Reset EWMA on utilization increases, the moving average is used only
> -- 
> 2.29.0
> 

Hi,

We do indeed, for the diff, use the flag for the updated value and no flag for the
value before the update. However, last_enqueued_diff is only used for the margin
check, which is a heuristic and not an accurate value (~1%), and as we know
we already lose the LSB in util_est, so I'm not sure this is really necessary.

-- 
Vincent



* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-06 12:28 ` Vincent Donnefort
@ 2021-05-06 12:46   ` Xuewen Yan
  2021-05-06 16:26     ` Vincent Donnefort
  0 siblings, 1 reply; 7+ messages in thread
From: Xuewen Yan @ 2021-05-06 12:46 UTC (permalink / raw)
  To: Vincent Donnefort
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang, Ryan Y

Hi
On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
<vincent.donnefort@arm.com> wrote:
>
> On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > From: Xuewen Yan <xuewen.yan@unisoc.com>
> >
> > The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> > has changed, while the updated enqueued value is equal to task_util() with
> > the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> > last_enqueued_diff.
> >
> > Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")
> >
> > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > ---
> >  kernel/sched/fair.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e5e457fa9dc8..94d77b4fa601 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> >       if (ue.enqueued & UTIL_AVG_UNCHANGED)
> >               return;
> >
> > -     last_enqueued_diff = ue.enqueued;
> > +     last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
> >
> >       /*
> >        * Reset EWMA on utilization increases, the moving average is used only
> > --
> > 2.29.0
> >
>
> Hi,
>
> We do indeed, for the diff, use the flag for the updated value and no flag for the
> value before the update. However, last_enqueued_diff is only used for the margin
> check, which is a heuristic and not an accurate value (~1%), and as we know
The last_enqueued_diff is compared with the UTIL_EST_MARGIN, which is
"1024/100 = 10",
and the LSB may cause a ~10% error relative to that margin.
> we already lose the LSB in util_est, so I'm not sure this is really necessary.
I'm also not completely sure; maybe the calculation would be more rigorous
with the flag?
>
> --
> Vincent
>


* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-06 12:46   ` Xuewen Yan
@ 2021-05-06 16:26     ` Vincent Donnefort
  2021-05-07  1:34       ` Xuewen Yan
  0 siblings, 1 reply; 7+ messages in thread
From: Vincent Donnefort @ 2021-05-06 16:26 UTC (permalink / raw)
  To: Xuewen Yan
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang, Ryan Y

On Thu, May 06, 2021 at 08:46:08PM +0800, Xuewen Yan wrote:
> Hi
> On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
> <vincent.donnefort@arm.com> wrote:
> >
> > On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > > From: Xuewen Yan <xuewen.yan@unisoc.com>
> > >
> > > The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> > > has changed, while the updated enqueued value is equal to task_util() with
> > > the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> > > last_enqueued_diff.

Could we change the description here a bit? I don't think it accurately
explains the issue. It would probably be worth mentioning that by not
setting the flag, which is the LSB, we add +1 to the diff, therefore
slightly reducing UTIL_EST_MARGIN.

> > >
> > > Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")
> > >
> > > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > > ---
> > >  kernel/sched/fair.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index e5e457fa9dc8..94d77b4fa601 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > >       if (ue.enqueued & UTIL_AVG_UNCHANGED)
> > >               return;
> > >
> > > -     last_enqueued_diff = ue.enqueued;
> > > +     last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
> > >
> > >       /*
> > >        * Reset EWMA on utilization increases, the moving average is used only
> > > --
> > > 2.29.0
> > >
> >
> > Hi,
> >
> > We do indeed, for the diff, use the flag for the updated value and no flag for the
> > value before the update. However, last_enqueued_diff is only used for the margin
> > check, which is a heuristic and not an accurate value (~1%), and as we know
> The last_enqueued_diff is compared with the UTIL_EST_MARGIN, which is
> "1024/100 = 10",
> and the LSB may cause a ~10% error relative to that margin.

I meant that ~1% is the original margin. With the bit set, we would use 0.87% instead
of 0.97%.

> > we already lose the LSB in util_est, so I'm not sure this is really necessary.
> I'm also not completely sure; maybe the calculation would be more rigorous
> with the flag?
> >
> > --
> > Vincent
> >


* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-06 16:26     ` Vincent Donnefort
@ 2021-05-07  1:34       ` Xuewen Yan
  2021-05-07  6:53         ` Vincent Guittot
  0 siblings, 1 reply; 7+ messages in thread
From: Xuewen Yan @ 2021-05-07  1:34 UTC (permalink / raw)
  To: Vincent Donnefort
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang, Ryan Y

On Fri, May 7, 2021 at 12:26 AM Vincent Donnefort
<vincent.donnefort@arm.com> wrote:
>
> On Thu, May 06, 2021 at 08:46:08PM +0800, Xuewen Yan wrote:
> > Hi
> > On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
> > <vincent.donnefort@arm.com> wrote:
> > >
> > > On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > > > From: Xuewen Yan <xuewen.yan@unisoc.com>
> > > >
> > > > The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> > > > has changed, while the updated enqueued value is equal to task_util() with
> > > > the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> > > > last_enqueued_diff.
>
> Could we change the description here a bit? I don't think it accurately
> explains the issue. It would probably be worth mentioning that by not
> setting the flag, which is the LSB, we add +1 to the diff, therefore
> slightly reducing UTIL_EST_MARGIN.

OK, if you agree with this patch, I'll change it in v2.
>
> > > >
> > > > Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")
> > > >
> > > > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > > > ---
> > > >  kernel/sched/fair.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index e5e457fa9dc8..94d77b4fa601 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > > >       if (ue.enqueued & UTIL_AVG_UNCHANGED)
> > > >               return;
> > > >
> > > > -     last_enqueued_diff = ue.enqueued;
> > > > +     last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
> > > >
> > > >       /*
> > > >        * Reset EWMA on utilization increases, the moving average is used only
> > > > --
> > > > 2.29.0
> > > >
> > >
> > > Hi,
> > >
> > > We do indeed, for the diff, use the flag for the updated value and no flag for the
> > > value before the update. However, last_enqueued_diff is only used for the margin
> > > check, which is a heuristic and not an accurate value (~1%), and as we know
> > The last_enqueued_diff is compared with the UTIL_EST_MARGIN, which is
> > "1024/100 = 10",
> > and the LSB may cause a ~10% error relative to that margin.
>
> I meant that ~1% is the original margin. With the bit set, we would use 0.87% instead
> of 0.97%.

Because within_margin() does not include "=", the margin without the flag
on enqueued may be 0.97% (10/1024); with the flag, I think it would be
1.07% (11/1024) rather than 0.87%.
>
> > > we already lose the LSB in util_est, so I'm not sure this is really necessary.
> > I'm also not completely sure; maybe the calculation would be more rigorous
> > with the flag?
> > >
> > > --
> > > Vincent
> > >


* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-07  1:34       ` Xuewen Yan
@ 2021-05-07  6:53         ` Vincent Guittot
  2021-05-07 10:26           ` Xuewen Yan
  0 siblings, 1 reply; 7+ messages in thread
From: Vincent Guittot @ 2021-05-07  6:53 UTC (permalink / raw)
  To: Xuewen Yan
  Cc: Vincent Donnefort, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang, Ryan Y

On Fri, 7 May 2021 at 03:36, Xuewen Yan <xuewen.yan94@gmail.com> wrote:
>
> On Fri, May 7, 2021 at 12:26 AM Vincent Donnefort
> <vincent.donnefort@arm.com> wrote:
> >
> > On Thu, May 06, 2021 at 08:46:08PM +0800, Xuewen Yan wrote:
> > > Hi
> > > On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
> > > <vincent.donnefort@arm.com> wrote:
> > > >
> > > > On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > > > > From: Xuewen Yan <xuewen.yan@unisoc.com>
> > > > >
> > > > > The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> > > > > has changed, while the updated enqueued value is equal to task_util() with
> > > > > the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> > > > > last_enqueued_diff.
> >
> > Could we change the description here a bit? I don't think it accurately
> > explains the issue. It would probably be worth mentioning that by not
> > setting the flag, which is the LSB, we add +1 to the diff, therefore
> > slightly reducing UTIL_EST_MARGIN.
>
> OK, if you agree with this patch, I'll change it in v2.

Although the impact is not significant, it's worth having an accurate
computation, so the patch makes sense to me. Please submit a v2.

> >
> > > > >
> > > > > Fixes: b89997aa88f0 ("sched/pelt: Fix task util_est update filtering")
> > > > >
> > > > > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > > > > ---
> > > > >  kernel/sched/fair.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index e5e457fa9dc8..94d77b4fa601 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > > > >       if (ue.enqueued & UTIL_AVG_UNCHANGED)
> > > > >               return;
> > > > >
> > > > > -     last_enqueued_diff = ue.enqueued;
> > > > > +     last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
> > > > >
> > > > >       /*
> > > > >        * Reset EWMA on utilization increases, the moving average is used only
> > > > > --
> > > > > 2.29.0
> > > > >
> > > >
> > > > Hi,
> > > >
> > > > We do indeed, for the diff, use the flag for the updated value and no flag for the
> > > > value before the update. However, last_enqueued_diff is only used for the margin
> > > > check, which is a heuristic and not an accurate value (~1%), and as we know
> > > The last_enqueued_diff is compared with the UTIL_EST_MARGIN, which is
> > > "1024/100 = 10",
> > > and the LSB may cause a ~10% error relative to that margin.
> >
> > I meant that ~1% is the original margin. With the bit set, we would use 0.87% instead
> > of 0.97%.
>
> Because within_margin() does not include "=", the margin without the flag
> on enqueued may be 0.97% (10/1024); with the flag, I think it would be
> 1.07% (11/1024) rather than 0.87%.
> >
> > > > we already lose the LSB in util_est, so I'm not sure this is really necessary.
> > > I'm also not completely sure; maybe the calculation would be more rigorous
> > > with the flag?
> > > >
> > > > --
> > > > Vincent
> > > >


* Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
  2021-05-07  6:53         ` Vincent Guittot
@ 2021-05-07 10:26           ` Xuewen Yan
  0 siblings, 0 replies; 7+ messages in thread
From: Xuewen Yan @ 2021-05-07 10:26 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Vincent Donnefort, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang, Ryan Y

On Fri, May 7, 2021 at 2:53 PM Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> On Fri, 7 May 2021 at 03:36, Xuewen Yan <xuewen.yan94@gmail.com> wrote:
> >
> > On Fri, May 7, 2021 at 12:26 AM Vincent Donnefort
> > <vincent.donnefort@arm.com> wrote:
> > >
> > > On Thu, May 06, 2021 at 08:46:08PM +0800, Xuewen Yan wrote:
> > > > Hi
> > > > On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
> > > > <vincent.donnefort@arm.com> wrote:
> > > > >
> > > > > On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > > > > > From: Xuewen Yan <xuewen.yan@unisoc.com>
> > > > > >
> > > > > > The UTIL_AVG_UNCHANGED flag has already been cleared when the task's util
> > > > > > has changed, while the updated enqueued value is equal to task_util() with
> > > > > > the flag set, so it is better to also set the UTIL_AVG_UNCHANGED flag on
> > > > > > last_enqueued_diff.
> > >
> > > Could we change the description here a bit? I don't think it accurately
> > > explains the issue. It would probably be worth mentioning that by not
> > > setting the flag, which is the LSB, we add +1 to the diff, therefore
> > > slightly reducing UTIL_EST_MARGIN.
> >
> > OK, if you agree with this patch, I'll change it in v2.
>
> Although the impact is not significant, it's worth having an accurate
> computation, so the patch makes sense to me. Please submit a v2.

Okay, I'll submit a v2 later.


end of thread

Thread overview: 7+ messages
2021-05-06 11:09 [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff Xuewen Yan
2021-05-06 12:28 ` Vincent Donnefort
2021-05-06 12:46   ` Xuewen Yan
2021-05-06 16:26     ` Vincent Donnefort
2021-05-07  1:34       ` Xuewen Yan
2021-05-07  6:53         ` Vincent Guittot
2021-05-07 10:26           ` Xuewen Yan
