From: Vincent Guittot <vincent.guittot@linaro.org>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Phil Auld <pauld@redhat.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Quentin Perret <quentin.perret@arm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Hillf Danton <hdanton@sina.com>
Subject: Re: [PATCH v3 04/10] sched/fair: rework load_balance
Date: Tue, 1 Oct 2019 10:14:36 +0200
Message-ID: <CAKfTPtDUFMFnD+RZndx0+8A+V9HV9Hv0TN+p=mAge0VsqS6xmA@mail.gmail.com>
In-Reply-To: <9bfb3252-c268-8c0c-9c72-65f872e9c8b2@arm.com>

On Mon, 30 Sep 2019 at 18:24, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>
> Hi Vincent,
>
> On 19/09/2019 09:33, Vincent Guittot wrote:
>
> these are just some comments & questions based on a code study. Haven't
> run any tests with it yet.
>
> [...]
>
> > The type of sched_group has been extended to better reflect the type of
> > imbalance. We now have :
> >         group_has_spare
> >         group_fully_busy
> >         group_misfit_task
> >         group_asym_capacity
>
> s/group_asym_capacity/group_asym_packing

Yes, I forgot to change the commit message and the comments
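With the rename, the enum should read like this (just a sketch matching
the list above):

	enum group_type {
		group_has_spare = 0,
		group_fully_busy,
		group_misfit_task,
		group_asym_packing,
		group_imbalanced,
		group_overloaded
	};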

>
> >         group_imbalanced
> >         group_overloaded
> >
> > Based on the type fo sched_group, load_balance now sets what it wants to
>
> s/fo/for

s/fo/of/

>
> > move in order to fix the imbalance. It can be some load as before but also
> > some utilization, a number of task or a type of task:
> >         migrate_task
> >         migrate_util
> >         migrate_load
> >         migrate_misfit
> >
> > This new load_balance algorithm fixes several pending wrong tasks
> > placement:
> > - the 1 task per CPU case with asymmetrics system
>
> s/asymmetrics/asymmetric

yes

>
> This stands for ASYM_CPUCAPACITY and ASYM_PACKING, right?

yes

>
> [...]
>
> >   #define LBF_ALL_PINNED       0x01
> > @@ -7115,7 +7130,7 @@ struct lb_env {
> >         unsigned int            loop_max;
> >
> >         enum fbq_type           fbq_type;
> > -     enum group_type         src_grp_type;
> > +     enum migration_type     balance_type;
>
> Minor thing:
>
> Why not
>      enum migration_type migration_type;
> or
>      enum balance_type balance_type;
>
> We do the same for other enums like fbq_type or group_type.

yes, I can align

>
> >         struct list_head        tasks;
> >   };
> >
>
> The detach_tasks() comment still mentions only runnable load.

ok
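Something like this instead (sketch, wording may still change):

	/*
	 * detach_tasks() -- tries to detach up to imbalance
	 * load/util/tasks from busiest_rq, as part of a balancing
	 * operation within domain "sd".
	 *
	 * Returns number of detached tasks if successful and 0 otherwise.
	 */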

>
> > @@ -7347,7 +7362,7 @@ static int detach_tasks(struct lb_env *env)
> >   {
> >         struct list_head *tasks = &env->src_rq->cfs_tasks;
> >         struct task_struct *p;
> > -     unsigned long load;
> > +     unsigned long util, load;
>
> Minor: Order by length or reduce scope to while loop ?

I don't get your point here

>
> >         int detached = 0;
> >
> >         lockdep_assert_held(&env->src_rq->lock);
> > @@ -7380,19 +7395,53 @@ static int detach_tasks(struct lb_env *env)
> >                 if (!can_migrate_task(p, env))
> >                         goto next;
> >
> > -             load = task_h_load(p);
> > +             switch (env->balance_type) {
> > +             case migrate_load:
> > +                     load = task_h_load(p);
> >
> > -             if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
> > -                     goto next;
> > +                     if (sched_feat(LB_MIN) &&
> > +                         load < 16 && !env->sd->nr_balance_failed)
> > +                             goto next;
> >
> > -             if ((load / 2) > env->imbalance)
> > -                     goto next;
> > +                     if ((load / 2) > env->imbalance)
> > +                             goto next;
> > +
> > +                     env->imbalance -= load;
> > +                     break;
> > +
> > +             case migrate_util:
> > +                     util = task_util_est(p);
> > +
> > +                     if (util > env->imbalance)
> > +                             goto next;
> > +
> > +                     env->imbalance -= util;
> > +                     break;
> > +
> > +             case migrate_task:
> > +                     /* Migrate task */
>
> Minor: IMHO, this comment is not necessary.

yes

>
> > +                     env->imbalance--;
> > +                     break;
> > +
> > +             case migrate_misfit:
> > +                     load = task_h_load(p);
> > +
> > +                     /*
> > +                      * utilization of misfit task might decrease a bit
>
> This patch still uses load. IMHO this comments becomes true only with 08/10.

Even with 8/10 it's not correct, and the comment has been removed
there. I'm going to remove it in this patch as well.

>
> [...]
>
> > @@ -7707,13 +7755,11 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
> >         *sds = (struct sd_lb_stats){
> >                 .busiest = NULL,
> >                 .local = NULL,
> > -             .total_running = 0UL,
> >                 .total_load = 0UL,
> >                 .total_capacity = 0UL,
> >                 .busiest_stat = {
> > -                     .avg_load = 0UL,
>
> There is a sentence in the comment above explaining why avg_load has to
> be cleared. And IMHO local group isn't cleared anymore but set/initialized.

Yes, I have to update it
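Something along these lines (sketch):

	/*
	 * Skimp on the clearing to avoid duplicate work. We can avoid
	 * clearing local_stat because update_sg_lb_stats() does a full
	 * clear/assignment. We must however set busiest_stat::group_type
	 * and busiest_stat::idle_cpus to the worst busiest group because
	 * update_sd_pick_busiest() reads these before assignment.
	 */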

>
> > -                     .sum_h_nr_running = 0,
> > -                     .group_type = group_other,
> > +                     .idle_cpus = UINT_MAX,
> > +                     .group_type = group_has_spare,
> >                 },
> >         };
> >   }
>
> [...]
>
> > @@ -8042,14 +8104,24 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >                 }
> >         }
> >
> > -     /* Adjust by relative CPU capacity of the group */
> > +     /* Check if dst cpu is idle and preferred to this group */
>
> s/preferred to/preferred by ? or the preferred CPU of this group ?

dst_cpu doesn't belong to this group. We compare the asym_prefer_cpu of
this group with dst_cpu, which belongs to another group.
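The check ends up looking roughly like this (sketch, exact conditions
may differ):

	if (env->sd->flags & SD_ASYM_PACKING &&
	    env->idle != CPU_NOT_IDLE &&
	    sgs->sum_h_nr_running &&
	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
		sgs->group_asym_packing = 1;
	}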

>
> [...]
>
> > @@ -8283,69 +8363,133 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> >    */
> >   static inline void calculate_imbalance(struct lb_env *env, struct
> sd_lb_stats *sds)
> >   {
> > -     unsigned long max_pull, load_above_capacity = ~0UL;
> >         struct sg_lb_stats *local, *busiest;
> >
> >         local = &sds->local_stat;
> >         busiest = &sds->busiest_stat;
> >
> > -     if (busiest->group_asym_packing) {
> > +     if (busiest->group_type == group_misfit_task) {
> > +             /* Set imbalance to allow misfit task to be balanced. */
> > +             env->balance_type = migrate_misfit;
> > +             env->imbalance = busiest->group_misfit_task_load;
> > +             return;
> > +     }
> > +
> > +     if (busiest->group_type == group_asym_packing) {
> > +             /*
> > +              * In case of asym capacity, we will try to migrate all load to
>
> Does asym capacity stands for asym packing or asym cpu capacity?

busiest->group_type == group_asym_packing

will fix it
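i.e. the comment will become something like:

		/*
		 * In case of asym packing, we will try to migrate all load
		 * to the preferred CPU.
		 */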

>
> > +              * the preferred CPU.
> > +              */
> > +             env->balance_type = migrate_load;
> >                 env->imbalance = busiest->group_load;
> >                 return;
> >         }
> >
> > +     if (busiest->group_type == group_imbalanced) {
> > +             /*
> > +              * In the group_imb case we cannot rely on group-wide averages
> > +              * to ensure CPU-load equilibrium, try to move any task to fix
> > +              * the imbalance. The next load balance will take care of
> > +              * balancing back the system.
>
> balancing back ?

In the group_imbalanced case, we don't try to balance the system but
only try to get rid of the pinned-tasks problem. The system will still
be unbalanced after the migration, and the next load balance will take
care of balancing the system.

>
> > +              */
> > +             env->balance_type = migrate_task;
> > +             env->imbalance = 1;
> > +             return;
> > +     }
> > +
> >         /*
> > -      * Avg load of busiest sg can be less and avg load of local sg can
> > -      * be greater than avg load across all sgs of sd because avg load
> > -      * factors in sg capacity and sgs with smaller group_type are
> > -      * skipped when updating the busiest sg:
> > +      * Try to use spare capacity of local group without overloading it or
> > +      * emptying busiest
> >          */
> > -     if (busiest->group_type != group_misfit_task &&
> > -         (busiest->avg_load <= sds->avg_load ||
> > -          local->avg_load >= sds->avg_load)) {
> > -             env->imbalance = 0;
> > +     if (local->group_type == group_has_spare) {
> > +             if (busiest->group_type > group_fully_busy) {
>
> So this could be 'busiest->group_type == group_overloaded' here to match
> the comment below? Since you handle group_misfit_task,
> group_asym_packing, group_imbalanced above and return.

This is just to be more robust in case some new states are added later

>
> > +                     /*
> > +                      * If busiest is overloaded, try to fill spare
> > +                      * capacity. This might end up creating spare capacity
> > +                      * in busiest or busiest still being overloaded but
> > +                      * there is no simple way to directly compute the
> > +                      * amount of load to migrate in order to balance the
> > +                      * system.
> > +                      */
> > +                     env->balance_type = migrate_util;
> > +                     env->imbalance = max(local->group_capacity, local->group_util) -
> > +                                 local->group_util;
> > +                     return;
> > +             }
> > +
> > +             if (busiest->group_weight == 1 || sds->prefer_sibling) {
> > +                     /*
> > +                      * When prefer sibling, evenly spread running tasks on
> > +                      * groups.
> > +                      */
> > +                     env->balance_type = migrate_task;
> > +                     env->imbalance = (busiest->sum_h_nr_running - local->sum_h_nr_running) >> 1;
> > +                     return;
> > +             }
> > +
> > +             /*
> > +              * If there is no overload, we just want to even the number of
> > +              * idle cpus.
> > +              */
> > +             env->balance_type = migrate_task;
> > +             env->imbalance = max_t(long, 0, (local->idle_cpus - busiest->idle_cpus) >> 1);
>
> Why do we need a max_t(long, 0, ...) here and not for the 'if
> (busiest->group_weight == 1 || sds->prefer_sibling)' case?

For env->imbalance = (busiest->sum_h_nr_running - local->sum_h_nr_running) >> 1;

we only get there when either sds->prefer_sibling is set and
busiest->sum_nr_running > local->sum_nr_running + 1,

or busiest->group_weight == 1 and env->idle != CPU_NOT_IDLE, so the
local CPU is idle or newly idle.

In both cases the difference can't go negative, so no max_t() is
needed.
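For example (made-up numbers): with sds->prefer_sibling set, 4 running
tasks in busiest and 1 in local, the guard 4 > 1 + 1 holds and
env->imbalance = (4 - 1) >> 1 = 1, which is already non-negative
without any clamping.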

>
> >                 return;
> >         }
> >
> >         /*
> > -      * If there aren't any idle CPUs, avoid creating some.
> > +      * Local is fully busy but have to take more load to relieve the
>
> s/have/has
>
> > +      * busiest group
> >          */
>
> I thought that 'local->group_type == group_imbalanced' is allowed as
> well? So 'if (local->group_type < group_overloaded)' (further down)
> could include that?

Yes.
The imbalanced state is not very useful for the local group; it only
reflects that the group is not overloaded, so it is either fully busy
or has spare capacity.
In this case we assume the worst: fully_busy.
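And because the group types are ordered by increasing severity:

	group_has_spare < group_fully_busy < group_misfit_task <
	group_asym_packing < group_imbalanced < group_overloaded

the test 'local->group_type < group_overloaded' covers the
group_imbalanced case as well.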

>
> > -     if (busiest->group_type == group_overloaded &&
> > -         local->group_type   == group_overloaded) {
> > -             load_above_capacity = busiest->sum_h_nr_running * SCHED_CAPACITY_SCALE;
> > -             if (load_above_capacity > busiest->group_capacity) {
> > -                     load_above_capacity -= busiest->group_capacity;
> > -                     load_above_capacity *= scale_load_down(NICE_0_LOAD);
> > -                     load_above_capacity /= busiest->group_capacity;
> > -             } else
> > -                     load_above_capacity = ~0UL;
> > +     if (local->group_type < group_overloaded) {
> > +             /*
> > +              * Local will become overloaded so the avg_load metrics are
> > +              * finally needed.
> > +              */
>
> How does this relate to the decision_matrix[local, busiest] (dm[])? E.g.
> dm[overload, overload] == avg_load or dm[fully_busy, overload] == force.
> It would be nice to be able to match all allowed fields of dm to code sections.

decision_matrix describes how we decide between balanced and unbalanced.
In the case of dm[overload, overload], we use the avg_load to decide
whether it is balanced or not.
In the case of dm[fully_busy, overload], the groups are unbalanced
because fully_busy < overload and we force the balance. Then
calculate_imbalance() uses the avg_load to decide how much will be
moved.

A "force" entry means that we force the balance and compute the
imbalance later; avg_load may be used to calculate that imbalance.
An "avg_load" entry means that we compare the avg_loads to decide
whether we need to balance load between the groups.
An "nr_idle" entry means that we compare the number of idle cpus to
decide whether we need to balance. In fact this is no longer true with
patch 7, because we also take into account the number of nr_h_running
when weight == 1.
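As a sketch (the exact check in find_busiest_group() may differ), an
nr_idle entry maps to something like:

	if (busiest->group_type != group_overloaded &&
	    (env->idle == CPU_NOT_IDLE ||
	     local->idle_cpus <= (busiest->idle_cpus + 1)))
		/* Balanced for these 2 groups: no need to pull */
		goto out_balanced;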

>
> [...]
>
> >   /******* find_busiest_group() helpers end here *********************/
> >
> > +/*
> > + * Decision matrix according to the local and busiest group state
>
> Minor s/state/type ?

ok

>
> > + *
> > + * busiest \ local has_spare fully_busy misfit asym imbalanced overloaded
> > + * has_spare        nr_idle   balanced   N/A    N/A  balanced   balanced
> > + * fully_busy       nr_idle   nr_idle    N/A    N/A  balanced   balanced
> > + * misfit_task      force     N/A        N/A    N/A  force      force
> > + * asym_capacity    force     force      N/A    N/A  force      force
>
> s/asym_capacity/asym_packing

yes

>
> > + * imbalanced       force     force      N/A    N/A  force      force
> > + * overloaded       force     force      N/A    N/A  force      avg_load
> > + *
> > + * N/A :      Not Applicable because already filtered while updating
> > + *            statistics.
> > + * balanced : The system is balanced for these 2 groups.
> > + * force :    Calculate the imbalance as load migration is probably needed.
> > + * avg_load : Only if imbalance is significant enough.
> > + * nr_idle :  dst_cpu is not busy and the number of idle cpus is quite
> > + *            different in groups.
> > + */
> > +
> >   /**
> >    * find_busiest_group - Returns the busiest group within the sched_domain
> >    * if there is an imbalance.
> > @@ -8380,17 +8524,17 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
> >         local = &sds.local_stat;
> >         busiest = &sds.busiest_stat;
> >
> > -     /* ASYM feature bypasses nice load balance check */
> > -     if (busiest->group_asym_packing)
> > -             goto force_balance;
> > -
> >         /* There is no busy sibling group to pull tasks from */
> > -     if (!sds.busiest || busiest->sum_h_nr_running == 0)
> > +     if (!sds.busiest)
> >                 goto out_balanced;
> >
> > -     /* XXX broken for overlapping NUMA groups */
> > -     sds.avg_load = (SCHED_CAPACITY_SCALE * sds.total_load)
> > -                                             / sds.total_capacity;
> > +     /* Misfit tasks should be dealt with regardless of the avg load */
> > +     if (busiest->group_type == group_misfit_task)
> > +             goto force_balance;
> > +
> > +     /* ASYM feature bypasses nice load balance check */
>
> Minor: s/ASYM feature/ASYM_PACKING ... to distinguish clearly from
> ASYM_CPUCAPACITY.

yes

>
> > +     if (busiest->group_type == group_asym_packing)
> > +             goto force_balance;
>
> [...]
>
