linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method
@ 2022-04-14  1:59 Kuyo Chang
  2022-04-14  7:37 ` Vincent Guittot
  2022-04-14  9:02 ` Dietmar Eggemann
  0 siblings, 2 replies; 5+ messages in thread
From: Kuyo Chang @ 2022-04-14  1:59 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matthias Brugger
  Cc: wsd_upstream, kuyo chang, linux-kernel, linux-arm-kernel, linux-mediatek

From: kuyo chang <kuyo.chang@mediatek.com>

I hit the warning below at cfs_rq_is_decayed():

SCHED_WARN_ON(cfs_rq->avg.load_avg ||
		    cfs_rq->avg.util_avg ||
		    cfs_rq->avg.runnable_avg)

Following is the call trace.

Call trace:
__update_blocked_fair
update_blocked_averages
newidle_balance
pick_next_task_fair
__schedule
schedule
pipe_read
vfs_read
ksys_read

After code analysis and some debug messages, I found there exists a corner
case in attach_entity_load_avg() which can cause load_sum to be zero while
load_avg is not.
Consider se_weight(se) is 88761, according to the sched_prio_to_weight
table, and assume get_pelt_divider() returns 47742 and se->avg.load_avg
is 1. The calculation of se->avg.load_sum below then becomes zero:
se->avg.load_sum =
	div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
se->avg.load_sum = 1*47742/88761 = 0.

enqueue_load_avg() then executes the code below:
cfs_rq->avg.load_avg += se->avg.load_avg;
cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;

Then the load_avg for cfs_rq will be 1 while the load_sum for cfs_rq is 0,
so it will hit the warning message.

In order to fix this corner case, make sure se->avg.load_avg and
se->avg.load_sum are consistent before enqueue_load_avg().

After long-term testing, the kernel warning is gone and the system runs
as well as before.

Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
---

v1->v2:

(1) Thanks for the suggestions from Peter Zijlstra & Vincent Guittot.
(2) As suggested by Vincent Guittot, rework the se->avg.load_sum
calculation to fix the corner case, making sure se->avg.load_avg and
se->avg.load_sum are consistent before enqueue_load_avg().
(3) Rework changelog.

 kernel/sched/fair.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..159274482c4e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 	se->avg.runnable_sum = se->avg.runnable_avg * divider;
 
-	se->avg.load_sum = divider;
-	if (se_weight(se)) {
+	se->avg.load_sum = se->avg.load_avg * divider;
+	if (se_weight(se) < se->avg.load_sum) {
 		se->avg.load_sum =
-			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
+			div_u64(se->avg.load_sum, se_weight(se));
+	} else {
+		se->avg.load_sum = 1;
 	}
 
 	enqueue_load_avg(cfs_rq, se);
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method
  2022-04-14  1:59 [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method Kuyo Chang
@ 2022-04-14  7:37 ` Vincent Guittot
  2022-04-14  8:25   ` Kuyo Chang
  2022-04-14  9:02 ` Dietmar Eggemann
  1 sibling, 1 reply; 5+ messages in thread
From: Vincent Guittot @ 2022-04-14  7:37 UTC (permalink / raw)
  To: Kuyo Chang
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matthias Brugger, wsd_upstream,
	linux-kernel, linux-arm-kernel, linux-mediatek

On Thu, 14 Apr 2022 at 03:59, Kuyo Chang <kuyo.chang@mediatek.com> wrote:
>
> From: kuyo chang <kuyo.chang@mediatek.com>
>
> I hit the warning below at cfs_rq_is_decayed():
>
> SCHED_WARN_ON(cfs_rq->avg.load_avg ||
>                     cfs_rq->avg.util_avg ||
>                     cfs_rq->avg.runnable_avg)
>
> Following is the call trace.
>
> Call trace:
> __update_blocked_fair
> update_blocked_averages
> newidle_balance
> pick_next_task_fair
> __schedule
> schedule
> pipe_read
> vfs_read
> ksys_read
>
> After code analysis and some debug messages, I found there exists a corner
> case in attach_entity_load_avg() which can cause load_sum to be zero while
> load_avg is not.
> Consider se_weight(se) is 88761, according to the sched_prio_to_weight
> table, and assume get_pelt_divider() returns 47742 and se->avg.load_avg
> is 1. The calculation of se->avg.load_sum below then becomes zero:
> se->avg.load_sum =
>         div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> se->avg.load_sum = 1*47742/88761 = 0.
>
> enqueue_load_avg() then executes the code below:
> cfs_rq->avg.load_avg += se->avg.load_avg;
> cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
>
> Then the load_avg for cfs_rq will be 1 while the load_sum for cfs_rq is 0,
> so it will hit the warning message.
>
> In order to fix this corner case, make sure se->avg.load_avg and
> se->avg.load_sum are consistent before enqueue_load_avg().
>
> After long-term testing, the kernel warning is gone and the system runs
> as well as before.

This needs a fix tag:
Fixes: f207934fb79d ("sched/fair: Align PELT windows between cfs_rq and its se")

>
> Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>
> v1->v2:
>
> (1) Thanks for the suggestions from Peter Zijlstra & Vincent Guittot.
> (2) As suggested by Vincent Guittot, rework the se->avg.load_sum
> calculation to fix the corner case, making sure se->avg.load_avg and
> se->avg.load_sum are consistent before enqueue_load_avg().
> (3) Rework changelog.
>
>  kernel/sched/fair.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..159274482c4e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>
>         se->avg.runnable_sum = se->avg.runnable_avg * divider;
>
> -       se->avg.load_sum = divider;
> -       if (se_weight(se)) {
> +       se->avg.load_sum = se->avg.load_avg * divider;
> +       if (se_weight(se) < se->avg.load_sum) {
>                 se->avg.load_sum =
> -                       div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> +                       div_u64(se->avg.load_sum, se_weight(se));
> +       } else {
> +               se->avg.load_sum = 1;
>         }
>
>         enqueue_load_avg(cfs_rq, se);
> --
> 2.18.0
>


* Re: [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method
  2022-04-14  7:37 ` Vincent Guittot
@ 2022-04-14  8:25   ` Kuyo Chang
  0 siblings, 0 replies; 5+ messages in thread
From: Kuyo Chang @ 2022-04-14  8:25 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matthias Brugger, wsd_upstream,
	linux-kernel, linux-arm-kernel, linux-mediatek

On Thu, 2022-04-14 at 09:37 +0200, Vincent Guittot wrote:
> On Thu, 14 Apr 2022 at 03:59, Kuyo Chang <kuyo.chang@mediatek.com>
> wrote:
> > 
> > From: kuyo chang <kuyo.chang@mediatek.com>
> > 
> > I hit the warning below at cfs_rq_is_decayed():
> > 
> > SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> >                     cfs_rq->avg.util_avg ||
> >                     cfs_rq->avg.runnable_avg)
> > 
> > Following is the call trace.
> > 
> > Call trace:
> > __update_blocked_fair
> > update_blocked_averages
> > newidle_balance
> > pick_next_task_fair
> > __schedule
> > schedule
> > pipe_read
> > vfs_read
> > ksys_read
> > 
> > After code analysis and some debug messages, I found there exists a
> > corner case in attach_entity_load_avg() which can cause load_sum to
> > be zero while load_avg is not.
> > Consider se_weight(se) is 88761, according to the
> > sched_prio_to_weight table, and assume get_pelt_divider() returns
> > 47742 and se->avg.load_avg is 1. The calculation of se->avg.load_sum
> > below then becomes zero:
> > se->avg.load_sum =
> >         div_u64(se->avg.load_avg * se->avg.load_sum,
> > se_weight(se));
> > se->avg.load_sum = 1*47742/88761 = 0.
> > 
> > enqueue_load_avg() then executes the code below:
> > cfs_rq->avg.load_avg += se->avg.load_avg;
> > cfs_rq->avg.load_sum += se_weight(se) * se->avg.load_sum;
> > 
> > Then the load_avg for cfs_rq will be 1 while the load_sum for
> > cfs_rq is 0, so it will hit the warning message.
> > 
> > In order to fix this corner case, make sure se->avg.load_avg and
> > se->avg.load_sum are consistent before enqueue_load_avg().
> > 
> > After long-term testing, the kernel warning is gone and the system
> > runs as well as before.
> 
> This needs a fix tag:
> Fixes: f207934fb79d ("sched/fair: Align PELT windows between cfs_rq
> and its se")

Thanks for your friendly reminder.


> > 
> > Signed-off-by: kuyo chang <kuyo.chang@mediatek.com>
> 
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> 
> > ---
> > 
> > v1->v2:
> > 
> > (1) Thanks for the suggestions from Peter Zijlstra & Vincent Guittot.
> > (2) As suggested by Vincent Guittot, rework the se->avg.load_sum
> > calculation to fix the corner case, making sure se->avg.load_avg and
> > se->avg.load_sum are consistent before enqueue_load_avg().
> > (3) Rework changelog.
> > 
> >  kernel/sched/fair.c | 8 +++++---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d4bd299d67ab..159274482c4e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct
> > cfs_rq *cfs_rq, struct sched_entity *s
> > 
> >         se->avg.runnable_sum = se->avg.runnable_avg * divider;
> > 
> > -       se->avg.load_sum = divider;
> > -       if (se_weight(se)) {
> > +       se->avg.load_sum = se->avg.load_avg * divider;
> > +       if (se_weight(se) < se->avg.load_sum) {
> >                 se->avg.load_sum =
> > -                       div_u64(se->avg.load_avg * se-
> > >avg.load_sum, se_weight(se));
> > +                       div_u64(se->avg.load_sum, se_weight(se));
> > +       } else {
> > +               se->avg.load_sum = 1;
> >         }
> > 
> >         enqueue_load_avg(cfs_rq, se);
> > --
> > 2.18.0
> > 



* Re: [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method
  2022-04-14  1:59 [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method Kuyo Chang
  2022-04-14  7:37 ` Vincent Guittot
@ 2022-04-14  9:02 ` Dietmar Eggemann
  2022-04-14  9:29   ` Kuyo Chang
  1 sibling, 1 reply; 5+ messages in thread
From: Dietmar Eggemann @ 2022-04-14  9:02 UTC (permalink / raw)
  To: Kuyo Chang, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matthias Brugger
  Cc: wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek

On 14/04/2022 03:59, Kuyo Chang wrote:
> From: kuyo chang <kuyo.chang@mediatek.com>

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..159274482c4e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  
>  	se->avg.runnable_sum = se->avg.runnable_avg * divider;
>  
> -	se->avg.load_sum = divider;
> -	if (se_weight(se)) {
> +	se->avg.load_sum = se->avg.load_avg * divider;
> +	if (se_weight(se) < se->avg.load_sum) {
>  		se->avg.load_sum =
> -			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> +			div_u64(se->avg.load_sum, se_weight(se));

Seems that this will fit on one line now. No braces needed then.


> +	} else {
> +		se->avg.load_sum = 1;
>  	}
>  
>  	enqueue_load_avg(cfs_rq, se);

Looks like taskgroups are not affected, since they always come online
with cpu.shares/weight = 1024 (cgroup v1):

cpu_cgroup_css_online() -> online_fair_sched_group() ->
attach_entity_cfs_rq() -> attach_entity_load_avg()

And reweight_entity() does not have this issue.

Tested with `qemu-system-x86_64 ... cores=64 ... -enable-kvm` and
weight=88761 for nice=0 tasks plus forcing se->avg.load_avg = 1 before
the div_u64() in attach_entity_load_avg().

Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>


* Re: [PATCH 1/1] [PATCH v2]sched/pelt: Refine the enqueue_load_avg calculate method
  2022-04-14  9:02 ` Dietmar Eggemann
@ 2022-04-14  9:29   ` Kuyo Chang
  0 siblings, 0 replies; 5+ messages in thread
From: Kuyo Chang @ 2022-04-14  9:29 UTC (permalink / raw)
  To: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matthias Brugger
  Cc: wsd_upstream, linux-kernel, linux-arm-kernel, linux-mediatek

On Thu, 2022-04-14 at 11:02 +0200, Dietmar Eggemann wrote:
> On 14/04/2022 03:59, Kuyo Chang wrote:
> > From: kuyo chang <kuyo.chang@mediatek.com>
> 
> [...]
> 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d4bd299d67ab..159274482c4e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct
> > cfs_rq *cfs_rq, struct sched_entity *s
> >  
> >  	se->avg.runnable_sum = se->avg.runnable_avg * divider;
> >  
> > -	se->avg.load_sum = divider;
> > -	if (se_weight(se)) {
> > +	se->avg.load_sum = se->avg.load_avg * divider;
> > +	if (se_weight(se) < se->avg.load_sum) {
> >  		se->avg.load_sum =
> > -			div_u64(se->avg.load_avg * se->avg.load_sum,
> > se_weight(se));
> > +			div_u64(se->avg.load_sum, se_weight(se));
> 
> Seems that this will fit on one line now. No braces needed then.

Thanks for your friendly reminder.

> 
> > +	} else {
> > +		se->avg.load_sum = 1;
> >  	}
> >  
> >  	enqueue_load_avg(cfs_rq, se);
> 
> Looks like taskgroups are not affected since they get always online
> with cpu.shares/weight = 1024 (cgroup v1):
> 
> cpu_cgroup_css_online() -> online_fair_sched_group() ->
> attach_entity_cfs_rq() -> attach_entity_load_avg()
> 
> And reweight_entity() does not have this issue.
> 
> Tested with `qemu-system-x86_64 ... cores=64 ... -enable-kvm` and
> weight=88761 for nice=0 tasks plus forcing se->avg.load_avg = 1
> before
> the div_u64() in attach_entity_load_avg().
> 
> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>


