* [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
@ 2021-06-04 10:23 Odin Ugedal
  2021-06-07 13:29   ` Vincent Guittot
  2021-06-08 16:39 ` Michal Koutný
  0 siblings, 2 replies; 8+ messages in thread
From: Odin Ugedal @ 2021-06-04 10:23 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira
  Cc: cgroups, linux-kernel, Odin Ugedal

This fixes an issue where fairness is decreased since cfs_rq's can
end up not being decayed properly. For two sibling control groups with
the same priority, this can often lead to a load ratio of 99/1 (!!).

This happens because when a cfs_rq is throttled, all the descendant
cfs_rq's are removed from the leaf list. When the initial cfs_rq is
unthrottled, it will currently only re-add descendant cfs_rq's if they
have one or more entities enqueued. This is not a perfect heuristic.

Instead, insert all cfs_rq's that contain one or more enqueued
entities, or whose load is not completely decayed.

This can often lead to situations like the following for equally
weighted control groups:

$ ps u -C stress
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       10009 88.8  0.0   3676   100 pts/1    R+   11:04   0:13 stress --cpu 1
root       10023  3.0  0.0   3676   104 pts/1    R+   11:04   0:00 stress --cpu 1

Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
Signed-off-by: Odin Ugedal <odin@uged.al>
---
Changes since v1:
 - Replaced cfs_rq field with using tg_load_avg_contrib
 - Went from 3 to 1 patches; one is merged and one is replaced
   by a new patchset.
Changes since v2:
 - Use !cfs_rq_is_decayed() instead of tg_load_avg_contrib
 - Moved cfs_rq_is_decayed to above its new use
Changes since v3:
 - (hopefully) Fix config for !CONFIG_SMP
 kernel/sched/fair.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 794c2cb945f8..eec32f214ff8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -712,6 +712,25 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	return calc_delta_fair(sched_slice(cfs_rq, se), se);
 }
 
+static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
+{
+	if (cfs_rq->load.weight)
+		return false;
+
+#ifdef CONFIG_SMP
+	if (cfs_rq->avg.load_sum)
+		return false;
+
+	if (cfs_rq->avg.util_sum)
+		return false;
+
+	if (cfs_rq->avg.runnable_sum)
+		return false;
+#endif
+
+	return true;
+}
+
 #include "pelt.h"
 #ifdef CONFIG_SMP
 
@@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
 
-		/* Add cfs_rq with already running entity in the list */
-		if (cfs_rq->nr_running >= 1)
+		/* Add cfs_rq with load or one or more already running entities to the list */
+		if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
 			list_add_leaf_cfs_rq(cfs_rq);
 	}
 
@@ -7895,23 +7914,6 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 
-static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
-{
-	if (cfs_rq->load.weight)
-		return false;
-
-	if (cfs_rq->avg.load_sum)
-		return false;
-
-	if (cfs_rq->avg.util_sum)
-		return false;
-
-	if (cfs_rq->avg.runnable_sum)
-		return false;
-
-	return true;
-}
-
 static bool __update_blocked_fair(struct rq *rq, bool *done)
 {
 	struct cfs_rq *cfs_rq, *pos;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
@ 2021-06-07 13:29   ` Vincent Guittot
  0 siblings, 0 replies; 8+ messages in thread
From: Vincent Guittot @ 2021-06-07 13:29 UTC (permalink / raw)
  To: Odin Ugedal
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, open list:CONTROL GROUP (CGROUP),
	linux-kernel

On Fri, 4 Jun 2021 at 12:26, Odin Ugedal <odin@uged.al> wrote:
>
> This fixes an issue where fairness is decreased since cfs_rq's can
> end up not being decayed properly. For two sibling control groups with
> the same priority, this can often lead to a load ratio of 99/1 (!!).
>
> This happens because when a cfs_rq is throttled, all the descendant
> cfs_rq's are removed from the leaf list. When the initial cfs_rq is
> unthrottled, it will currently only re-add descendant cfs_rq's if they
> have one or more entities enqueued. This is not a perfect heuristic.
>
> Instead, insert all cfs_rq's that contain one or more enqueued
> entities, or whose load is not completely decayed.
>
> This can often lead to situations like the following for equally
> weighted control groups:
>
> $ ps u -C stress
> USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root       10009 88.8  0.0   3676   100 pts/1    R+   11:04   0:13 stress --cpu 1
> root       10023  3.0  0.0   3676   104 pts/1    R+   11:04   0:00 stress --cpu 1
>
> Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
> Signed-off-by: Odin Ugedal <odin@uged.al>
> ---
> Changes since v1:
>  - Replaced cfs_rq field with using tg_load_avg_contrib
>  - Went from 3 to 1 patches; one is merged and one is replaced
>    by a new patchset.
> Changes since v2:
>  - Use !cfs_rq_is_decayed() instead of tg_load_avg_contrib
>  - Moved cfs_rq_is_decayed to above its new use
> Changes since v3:
>  - (hopefully) Fix config for !CONFIG_SMP
>  kernel/sched/fair.c | 40 +++++++++++++++++++++-------------------
>  1 file changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 794c2cb945f8..eec32f214ff8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -712,6 +712,25 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>         return calc_delta_fair(sched_slice(cfs_rq, se), se);
>  }
>
> +static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)

It's not the best place for this function:
- the pelt.h header file is included below, but cfs_rq_is_decayed() uses PELT
- CONFIG_SMP is already checked a few lines below
- cfs_rq_is_decayed() is only used with CONFIG_FAIR_GROUP_SCHED, and
now with CONFIG_CFS_BANDWIDTH, which depends on the former

so moving cfs_rq_is_decayed() just above update_tg_load_avg(), with the
other functions used for propagating and updating tg load, seems a
better place

> +{
> +       if (cfs_rq->load.weight)
> +               return false;
> +
> +#ifdef CONFIG_SMP
> +       if (cfs_rq->avg.load_sum)
> +               return false;
> +
> +       if (cfs_rq->avg.util_sum)
> +               return false;
> +
> +       if (cfs_rq->avg.runnable_sum)
> +               return false;
> +#endif
> +
> +       return true;
> +}
> +
>  #include "pelt.h"
>  #ifdef CONFIG_SMP
>
> @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>                 cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>                                              cfs_rq->throttled_clock_task;
>
> -               /* Add cfs_rq with already running entity in the list */
> -               if (cfs_rq->nr_running >= 1)
> +               /* Add cfs_rq with load or one or more already running entities to the list */
> +               if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
>                         list_add_leaf_cfs_rq(cfs_rq);
>         }
>
> @@ -7895,23 +7914,6 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>
> -static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> -{
> -       if (cfs_rq->load.weight)
> -               return false;
> -
> -       if (cfs_rq->avg.load_sum)
> -               return false;
> -
> -       if (cfs_rq->avg.util_sum)
> -               return false;
> -
> -       if (cfs_rq->avg.runnable_sum)
> -               return false;
> -
> -       return true;
> -}
> -
>  static bool __update_blocked_fair(struct rq *rq, bool *done)
>  {
>         struct cfs_rq *cfs_rq, *pos;
> --
> 2.31.1
>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
@ 2021-06-07 13:36     ` Odin Ugedal
  0 siblings, 0 replies; 8+ messages in thread
From: Odin Ugedal @ 2021-06-07 13:36 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Odin Ugedal, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, open list:CONTROL GROUP (CGROUP),
	linux-kernel

> It's not the best place for this function:
> - pelt.h header file is included below but cfs_rq_is_decayed() uses PELT
> - CONFIG_SMP is already defined few lines below
> - cfs_rq_is_decayed() is only used with CONFIG_FAIR_GROUP_SCHED and
> now with CONFIG_CFS_BANDWIDTH which depends on the former
>
> so moving cfs_rq_is_decayed() just above update_tg_load_avg() with
> other functions used for propagating and updating tg load seems a
> better place

Ack. When looking at it now, your suggestion makes more sense. Will fix it.

Thanks
Odin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
  2021-06-04 10:23 [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle Odin Ugedal
  2021-06-07 13:29   ` Vincent Guittot
@ 2021-06-08 16:39 ` Michal Koutný
  2021-06-10  6:49     ` Vincent Guittot
  1 sibling, 1 reply; 8+ messages in thread
From: Michal Koutný @ 2021-06-08 16:39 UTC (permalink / raw)
  To: Odin Ugedal
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, cgroups, linux-kernel,
	Giovanni Gherdovich

[-- Attachment #1: Type: text/plain, Size: 840 bytes --]

Hello.

On Fri, Jun 04, 2021 at 12:23:14PM +0200, Odin Ugedal <odin@uged.al> wrote:

> @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>  		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>  					     cfs_rq->throttled_clock_task;
>  
> -		/* Add cfs_rq with already running entity in the list */
> -		if (cfs_rq->nr_running >= 1)
> +		/* Add cfs_rq with load or one or more already running entities to the list */
> +		if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
>  			list_add_leaf_cfs_rq(cfs_rq);
>  	}

Can there be a decayed cfs_rq with positive nr_running?
I.e. can the condition be simplified to just the decayed check?

(I'm looking at account_entity_enqueue() but I don't know if an entity's
weight can be zero in some singular cases.)

Thanks,
Michal

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle
  2021-06-08 16:39 ` Michal Koutný
@ 2021-06-10  6:49     ` Vincent Guittot
  0 siblings, 0 replies; 8+ messages in thread
From: Vincent Guittot @ 2021-06-10  6:49 UTC (permalink / raw)
  To: Michal Koutný
  Cc: Odin Ugedal, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, open list:CONTROL GROUP (CGROUP),
	linux-kernel, Giovanni Gherdovich

On Tue, 8 Jun 2021 at 18:39, Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello.
>
> On Fri, Jun 04, 2021 at 12:23:14PM +0200, Odin Ugedal <odin@uged.al> wrote:
>
> > @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
> >               cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> >                                            cfs_rq->throttled_clock_task;
> >
> > -             /* Add cfs_rq with already running entity in the list */
> > -             if (cfs_rq->nr_running >= 1)
> > +             /* Add cfs_rq with load or one or more already running entities to the list */
> > +             if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
> >                       list_add_leaf_cfs_rq(cfs_rq);
> >       }
>
> Can there be a decayed cfs_rq with positive nr_running?
> I.e. can the condition be simplified to just the decayed check?

Yes; for example, nothing prevents a task with a null load from being
enqueued on a throttled cfs_rq

>
> (I'm looking at account_entity_enqueue() but I don't know if an entity's
> weight can be zero in some singular cases.)
>
> Thanks,
> Michal

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-06-10  6:50 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-04 10:23 [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle Odin Ugedal
2021-06-07 13:29 ` Vincent Guittot
2021-06-07 13:29   ` Vincent Guittot
2021-06-07 13:36   ` Odin Ugedal
2021-06-07 13:36     ` Odin Ugedal
2021-06-08 16:39 ` Michal Koutný
2021-06-10  6:49   ` Vincent Guittot
2021-06-10  6:49     ` Vincent Guittot
