[PATCH v3] sched: revise the initial value of the util_avg.
From: Xuewen Yan @ 2020-11-06  3:22 UTC
  To: mingo, peterz, juri.lelli, vincent.guittot
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, bristot,
	linux-kernel, xuewen.yan, xuewyan

According to the original code logic:
		cfs_rq->avg.util_avg
sa->util_avg  = -------------------- * se->load.weight
		cfs_rq->avg.load_avg
but for fair_sched_class on 64-bit platforms:
se->load.weight = 1024 * sched_prio_to_weight[prio];
	cfs_rq->avg.util_avg
so the  -------------------- must be extremely small for the
	cfs_rq->avg.load_avg
condition "sa->util_avg < cap" to ever hold.
This is unfair to tasks with a smaller nice value.

Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
---
changes since V2 (drop the sched_class check and use se_weight()
unconditionally, as Vincent suggested; the V2 diff and his review
comments are kept below for reference):

*kernel/sched/fair.c | 6 +++++-
* 1 file changed, 5 insertions(+), 1 deletion(-)
*
*diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
*index 290f9e3..079760b 100644
*--- a/kernel/sched/fair.c
*+++ b/kernel/sched/fair.c
*@@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)
*
*        if (cap > 0) {
*                if (cfs_rq->avg.util_avg != 0) {
*-                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
*+                       if (p->sched_class == &fair_sched_class)
*+                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
*+                       else
*+                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
*+
*                        sa->util_avg /= (cfs_rq->avg.load_avg + 1);
*
*                        if (sa->util_avg > cap)
*
---
comment from Vincent Guittot <vincent.guittot@linaro.org>:
>
> According to the original code logic:
>                 cfs_rq->avg.util_avg
> sa->util_avg  = -------------------- * se->load.weight
>                 cfs_rq->avg.load_avg

this should have been scale_load_down(se->load.weight) from the beginning

> but for fair_sched_class:
> se->load.weight = 1024 * sched_prio_to_weight[prio];

This is only true on 64-bit platforms; otherwise scale_load() and
scale_load_down() are nops.
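
[ Editor's note: for reference, the helpers involved look roughly like
  this, simplified from kernel/sched/sched.h and kernel/sched/fair.c of
  that era; the real scale_load_down() also clamps nonzero weights to a
  minimum of 2: ]

	/* kernel/sched/sched.h (simplified) */
	#ifdef CONFIG_64BIT
	# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)	/* w * 1024 */
	# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)	/* w / 1024 */
	#else
	# define scale_load(w)		(w)	/* nop on 32-bit */
	# define scale_load_down(w)	(w)	/* nop on 32-bit */
	#endif

	/* kernel/sched/fair.c */
	#define se_weight(se)		scale_load_down((se)->load.weight)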

>         cfs_rq->avg.util_avg
> so the  -------------------- must be extremely small for the
>         cfs_rq->avg.load_avg
> condition "sa->util_avg < cap" to ever hold.
> This is unfair to tasks with a smaller nice value.
>
> Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> ---
>  kernel/sched/fair.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e3..079760b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)
>
>         if (cap > 0) {
>                 if (cfs_rq->avg.util_avg != 0) {

We should now use cpu_util(), which takes other classes into account,
instead of cfs_rq->avg.util_avg.

> -                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> +                       if (p->sched_class == &fair_sched_class)
> +                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
> +                       else
> +                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;

Why does this else keep using se->load.weight?

Either we use sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
for all classes

Or we want a different init value for other classes. But in this case
se->load.weight is meaningless and we should simply set it to 0,
although we could probably compute a value based on bandwidth for the
deadline class.
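
[ Editor's note: purely as an illustration of Vincent's second option,
  not proposed code; the shape might be something like: ]

	if (p->sched_class == &fair_sched_class) {
		sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
		sa->util_avg /= (cfs_rq->avg.load_avg + 1);
	} else {
		/*
		 * se->load.weight is meaningless outside the fair class,
		 * so start from 0; for SCHED_DEADLINE one could instead
		 * derive an estimate from the task's reserved bandwidth
		 * (runtime/period), as suggested above.
		 */
		sa->util_avg = 0;
	}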

---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..c6186cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -794,7 +794,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
-			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+			sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
 			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
 
 			if (sa->util_avg > cap)
-- 
1.9.1



Re: [PATCH v3] sched: revise the initial value of the util_avg.
From: Tao Zhou @ 2020-11-06 14:58 UTC
  To: Xuewen Yan
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, linux-kernel, xuewen.yan,
	xuewyan, t1zhou

On Fri, Nov 06, 2020 at 11:22:03AM +0800, Xuewen Yan wrote:

> According to the original code logic:
> 		cfs_rq->avg.util_avg
> sa->util_avg  = -------------------- * se->load.weight
> 		cfs_rq->avg.load_avg
> but for fair_sched_class on 64-bit platforms:
> se->load.weight = 1024 * sched_prio_to_weight[prio];
> 	cfs_rq->avg.util_avg
> so the  -------------------- must be extremely small for the
> 	cfs_rq->avg.load_avg
> condition "sa->util_avg < cap" to ever hold.
> This is unfair to tasks with a smaller nice value.
> 
> Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> ---
> changes since V2 (drop the sched_class check and use se_weight()
> unconditionally, as Vincent suggested; the V2 diff and his review
> comments are kept below for reference):
> 
> *kernel/sched/fair.c | 6 +++++-
> * 1 file changed, 5 insertions(+), 1 deletion(-)
> *
> *diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> *index 290f9e3..079760b 100644
> *--- a/kernel/sched/fair.c
> *+++ b/kernel/sched/fair.c
> *@@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)
> *
> *        if (cap > 0) {
> *                if (cfs_rq->avg.util_avg != 0) {
> *-                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> *+                       if (p->sched_class == &fair_sched_class)
> *+                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
> *+                       else
> *+                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> *+
> *                        sa->util_avg /= (cfs_rq->avg.load_avg + 1);
> *
> *                        if (sa->util_avg > cap)
> *
> ---
> comment from Vincent Guittot <vincent.guittot@linaro.org>:
> >
> > According to the original code logic:
> >                 cfs_rq->avg.util_avg
> > sa->util_avg  = -------------------- * se->load.weight
> >                 cfs_rq->avg.load_avg
> 
> this should have been scale_load_down(se->load.weight) from the beginning
> 
> > but for fair_sched_class:
> > se->load.weight = 1024 * sched_prio_to_weight[prio];
> 
> This is only true on 64-bit platforms; otherwise scale_load() and
> scale_load_down() are nops.
> 
> >         cfs_rq->avg.util_avg
> > so the  -------------------- must be extremely small for the
> >         cfs_rq->avg.load_avg
> > condition "sa->util_avg < cap" to ever hold.
> > This is unfair to tasks with a smaller nice value.
> >
> > Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
> > ---
> >  kernel/sched/fair.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 290f9e3..079760b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)
> >
> >         if (cap > 0) {
> >                 if (cfs_rq->avg.util_avg != 0) {
> 
> We should now use cpu_util(), which takes other classes into account,
> instead of cfs_rq->avg.util_avg.
> 
> > -                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> > +                       if (p->sched_class == &fair_sched_class)
> > +                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
> > +                       else
> > +                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> 
> Why does this else keep using se->load.weight?
> 
> Either we use sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
> for all classes
> 
> Or we want a different init value for other classes. But in this case
> se->load.weight is meaningless and we should simply set it to 0,
> although we could probably compute a value based on bandwidth for the
> deadline class.
> 
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e3..c6186cc 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -794,7 +794,7 @@ void post_init_entity_util_avg(struct task_struct *p)
>  
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> -			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> +			sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);

Please refer to Message-ID 20161208012722.GA4128@geo on the lkml web site
if you want. Just a note, nothing critical here; my head is not working
well right now and I can't remember more details from back then.

>  			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
>  
>  			if (sa->util_avg > cap)
> -- 
> 1.9.1
> 


