* [PATCH] sched/fair: Return the busiest group only if imbalance is meaningful
@ 2022-04-06 11:23 zgpeng
  2022-04-06 12:59 ` Vincent Guittot
  0 siblings, 1 reply; 4+ messages in thread
From: zgpeng @ 2022-04-06 11:23 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, linux-kernel

When the calculate_imbalance() function computes the imbalance, it
may actually produce a negative number. In that case it is
meaningless to return the so-called busiest group and go on to
search for the busiest CPU later. Therefore, return the busiest
group only when the imbalance is greater than 0; otherwise return NULL.

Signed-off-by: zgpeng <zgpeng@tencent.com>
Reviewed-by: Samuel Liao <samuelliao@tencent.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 601f8bd..9f75303 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9639,7 +9639,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
 	calculate_imbalance(env, &sds);
-	return env->imbalance ? sds.busiest : NULL;
+	return env->imbalance > 0 ? sds.busiest : NULL;
 
 out_balanced:
 	env->imbalance = 0;
-- 
2.9.5



* Re: [PATCH] sched/fair: Return the busiest group only if imbalance is meaningful
  2022-04-06 11:23 [PATCH] sched/fair: Return the busiest group only if imbalance is meaningful zgpeng
@ 2022-04-06 12:59 ` Vincent Guittot
       [not found]   ` <CAE5vP3=ZPGV=PuYb-WJsoQ8tX4yDAjajT_WRU+7gAiW54XgX_g@mail.gmail.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Vincent Guittot @ 2022-04-06 12:59 UTC (permalink / raw)
  To: zgpeng
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, linux-kernel

On Wed, 6 Apr 2022 at 13:23, zgpeng <zgpeng.linux@gmail.com> wrote:
>
> When the calculate_imbalance() function computes the imbalance, it
> may actually produce a negative number. In that case it is

We should not return a negative imbalance, but I suppose this can
happen when we are using the avg_load metric to calculate the
imbalance. Have you faced a use case where the imbalance is negative?
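
For reference, the tail of calculate_imbalance() ends up doing roughly
the following when it falls back to avg_load (a simplified sketch from
memory, not the exact upstream code):

	env->imbalance = min(
		(busiest->avg_load - sds->avg_load) * busiest->group_capacity,
		(sds->avg_load - local->avg_load) * local->group_capacity
	) / SCHED_CAPACITY_SCALE;

If local->avg_load happens to be above sds->avg_load, the second term
is negative and the resulting imbalance can end up negative as well.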

> meaningless to return the so-called busiest group and go on to
> search for the busiest CPU later. Therefore, return the busiest
> group only when the imbalance is greater than 0; otherwise return NULL.
>
> Signed-off-by: zgpeng <zgpeng@tencent.com>
> Reviewed-by: Samuel Liao <samuelliao@tencent.com>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 601f8bd..9f75303 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9639,7 +9639,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>  force_balance:
>         /* Looks like there is an imbalance. Compute it */
>         calculate_imbalance(env, &sds);
> -       return env->imbalance ? sds.busiest : NULL;
> +       return env->imbalance > 0 ? sds.busiest : NULL;
>
>  out_balanced:
>         env->imbalance = 0;
> --
> 2.9.5
>


* Re: [PATCH] sched/fair: Return the busiest group only if imbalance is meaningful
       [not found]   ` <CAE5vP3=ZPGV=PuYb-WJsoQ8tX4yDAjajT_WRU+7gAiW54XgX_g@mail.gmail.com>
@ 2022-04-06 15:33     ` Vincent Guittot
  2022-04-06 17:25       ` Dietmar Eggemann
  0 siblings, 1 reply; 4+ messages in thread
From: Vincent Guittot @ 2022-04-06 15:33 UTC (permalink / raw)
  To: 彭志刚
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt,
	Benjamin Segall, mgorman, bristot, linux-kernel

On Wed, 6 Apr 2022 at 17:07, 彭志刚 <zgpeng.linux@gmail.com> wrote:
>
> Yes. The following is a concrete scenario where a negative value occurs:
>
> The sched domain contains four groups: groupA, groupB, groupC and
> groupD. Their types and avg_load values are as follows:
>
> GroupA    type=group_fully_busy    avg_load=10    [local group]
> GroupB    type=group_has_spare     avg_load=1
> GroupC    type=group_has_spare     avg_load=1
> GroupD    type=group_overloaded    avg_load=20    [busiest group]
>
> The CPU that calls load_balance is located in groupA, and
> update_sd_lb_stats selects the busiest group among groupB, groupC and
> groupD, i.e. groupD. Under this condition, the other checks in
> find_busiest_group are bypassed and calculate_imbalance is called. The
> checks in the middle of calculate_imbalance cannot stop this case
> either, so it reaches the imbalance calculation logic at the end of
> the function. At this point the domain's avg_load=8 but the local
> group's avg_load=10; the negative value is therefore generated.

I think that this case should be covered by an additional test in
calculate_imbalance, because we should not try to pull load into
groupA if it is already above the average load. Something like the
change below should cover this:

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9469,6 +9469,16 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s

                sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
                                sds->total_capacity;
+
+               /*
+                * Don't pull any tasks if this group is already above the
+                * domain average load.
+                */
+               if (local->avg_load >= sds->avg_load) {
+                       env->imbalance = 0;
+                       return;
+               }
+
                /*
                 * If the local group is more loaded than the selected
                 * busiest group don't try to pull any tasks.
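
In the scenario above, such a check would trigger immediately:
local->avg_load = 10 is already above the domain's avg_load = 8, so
env->imbalance would be set to 0 and find_busiest_group() would return
NULL, instead of falling through to the avg_load based computation
where the (8 - 10) delta produces the negative imbalance.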

>
>
> Vincent Guittot <vincent.guittot@linaro.org> wrote on Wed, 6 Apr 2022 at 20:59:
>>
>> On Wed, 6 Apr 2022 at 13:23, zgpeng <zgpeng.linux@gmail.com> wrote:
>> >
>> > When the calculate_imbalance() function computes the imbalance, it
>> > may actually produce a negative number. In that case it is
>>
>> We should not return a negative imbalance, but I suppose this can
>> happen when we are using the avg_load metric to calculate the
>> imbalance. Have you faced a use case where the imbalance is negative?
>>
>> > meaningless to return the so-called busiest group and go on to
>> > search for the busiest CPU later. Therefore, return the busiest
>> > group only when the imbalance is greater than 0; otherwise return NULL.
>> >
>> > Signed-off-by: zgpeng <zgpeng@tencent.com>
>> > Reviewed-by: Samuel Liao <samuelliao@tencent.com>
>> > ---
>> >  kernel/sched/fair.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 601f8bd..9f75303 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -9639,7 +9639,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>> >  force_balance:
>> >         /* Looks like there is an imbalance. Compute it */
>> >         calculate_imbalance(env, &sds);
>> > -       return env->imbalance ? sds.busiest : NULL;
>> > +       return env->imbalance > 0 ? sds.busiest : NULL;
>> >
>> >  out_balanced:
>> >         env->imbalance = 0;
>> > --
>> > 2.9.5
>> >


* Re: [PATCH] sched/fair: Return the busiest group only if imbalance is meaningful
  2022-04-06 15:33     ` Vincent Guittot
@ 2022-04-06 17:25       ` Dietmar Eggemann
  0 siblings, 0 replies; 4+ messages in thread
From: Dietmar Eggemann @ 2022-04-06 17:25 UTC (permalink / raw)
  To: Vincent Guittot, 彭志刚
  Cc: mingo, peterz, juri.lelli, rostedt, Benjamin Segall, mgorman,
	bristot, linux-kernel

On 06/04/2022 17:33, Vincent Guittot wrote:
> On Wed, 6 Apr 2022 at 17:07, 彭志刚 <zgpeng.linux@gmail.com> wrote:

[...]

> I think that this case should be covered by an additional test in
> calculate_imbalance because we should not try to pull load in groupA
> if it's already above  the average load. Something like below should
> cover this
> 
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9469,6 +9469,16 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> 
>                 sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
>                                 sds->total_capacity;
> +
> +               /*
> +                * Don't pull any tasks if this group is already above the
> +                * domain average load.
> +                */
> +               if (local->avg_load >= sds->avg_load) {
> +                       env->imbalance = 0;
> +                       return;
> +               }
> +

LGTM. Like for the `local->group_type == group_overloaded` case in
find_busiest_group() where we force `goto out_balanced`.
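
For reference, the existing check in find_busiest_group() that this
mirrors looks roughly like the following (a simplified sketch from
memory, not the exact upstream code):

	if (local->group_type == group_overloaded) {
		/*
		 * If the local group is more loaded than the selected
		 * busiest group don't try to pull any tasks.
		 */
		if (local->avg_load >= busiest->avg_load)
			goto out_balanced;
		...
		/*
		 * If the local group is more loaded than the average
		 * system load, don't try to pull any tasks.
		 */
		if (local->avg_load >= sds.avg_load)
			goto out_balanced;
	}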

[...]
