* [PATCH 0/3] sched/fair: some fixes for asym_packing @ 2018-08-07 15:56 Vincent Guittot 2018-08-07 15:56 ` [PATCH 1/3] sched/fair: fix rounding issue for asym packing Vincent Guittot ` (3 more replies) 0 siblings, 4 replies; 7+ messages in thread From: Vincent Guittot @ 2018-08-07 15:56 UTC (permalink / raw) To: peterz, mingo, linux-kernel Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot During the review of the misfit task patchset, Morten and Valentin raised some problems with the use of the SD_ASYM_PACKING flag on asymmetric systems like the hikey960 arm64 big/LITTLE platform. The study of the use cases has shown some problems that can happen on any system that uses the flag. The 3 patches fix the problems raised for lmbench and for the rt-app use case that creates 2 tasks that start as small tasks and then suddenly become always-running tasks. (I can provide the rt-app json if needed) Vincent Guittot (3): sched/fair: fix rounding issue for asym packing sched/fair: trigger asym_packing during idle load balance sched/fair: fix unnecessary increase of balance interval kernel/sched/fair.c | 33 +++++++++++++++++++++++++-------- 1 file changed, 25 insertions(+), 8 deletions(-) -- 2.7.4 ^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH 1/3] sched/fair: fix rounding issue for asym packing 2018-08-07 15:56 [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot @ 2018-08-07 15:56 ` Vincent Guittot 2018-08-07 15:56 ` [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance Vincent Guittot ` (2 subsequent siblings) 3 siblings, 0 replies; 7+ messages in thread From: Vincent Guittot @ 2018-08-07 15:56 UTC (permalink / raw) To: peterz, mingo, linux-kernel Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot When check_asym_packing() is triggered, the imbalance is set to: busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE busiest_stat.avg_load also comes from a division, and the final rounding can make the imbalance slightly lower than the weighted load of the cfs_rq. This is enough for find_busiest_queue() to skip the rq and prevent the asym migration from happening. Add 1 to avg_load to make sure that the targeted cpu will not be skipped unexpectedly. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> --- kernel/sched/fair.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 309c93f..c376cd0 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -7780,6 +7780,12 @@ static inline void update_sg_lb_stats(struct lb_env *env, /* Adjust by relative CPU capacity of the group */ sgs->group_capacity = group->sgc->capacity; sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity; + /* + * Prevent division rounding to make the computation of imbalance + * slightly less than original value and to prevent the rq to be then + * selected as busiest queue + */ + sgs->avg_load += 1; if (sgs->sum_nr_running) sgs->load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running; -- 2.7.4 ^ permalink raw reply related [flat|nested] 7+ messages in thread
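The rounding issue can be reproduced outside the kernel with a standalone model of the two integer divisions involved (the helper names and the example numbers are illustrative, not taken from the patch):

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Model of the division in update_sg_lb_stats(): truncates toward zero. */
static unsigned long calc_avg_load(unsigned long group_load, unsigned long capacity)
{
	return (group_load * SCHED_CAPACITY_SCALE) / capacity;
}

/* Model of the imbalance computed from avg_load: truncates a second time. */
static unsigned long calc_imbalance(unsigned long avg_load, unsigned long capacity)
{
	return (avg_load * capacity) / SCHED_CAPACITY_SCALE;
}
```

For example, with group_load = 515 and capacity = 1000, avg_load truncates to 527 and the imbalance to 514, strictly below the weighted load of 515, so the rq would be skipped in find_busiest_queue(); adding 1 to avg_load lifts the imbalance back to 515.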
* [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance 2018-08-07 15:56 [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot 2018-08-07 15:56 ` [PATCH 1/3] sched/fair: fix rounding issue for asym packing Vincent Guittot @ 2018-08-07 15:56 ` Vincent Guittot 2018-08-07 15:56 ` [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval Vincent Guittot 2018-09-10 12:40 ` [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot 3 siblings, 0 replies; 7+ messages in thread From: Vincent Guittot @ 2018-08-07 15:56 UTC (permalink / raw) To: peterz, mingo, linux-kernel Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot Newly idle load balance is not always triggered when a cpu becomes idle, so the scheduler can miss the chance to migrate a task for asym packing. Enable active migration because of asym packing during the idle load balance too. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> --- kernel/sched/fair.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index c376cd0..5f1b6c6 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8364,7 +8364,7 @@ static int need_active_balance(struct lb_env *env) { struct sched_domain *sd = env->sd; - if (env->idle == CPU_NEWLY_IDLE) { + if (env->idle != CPU_NOT_IDLE) { /* * ASYM_PACKING needs to force migrate tasks from busy but -- 2.7.4 ^ permalink raw reply related [flat|nested] 7+ messages in thread
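The one-line change widens the set of idle states that allow the asym_packing forced migration. A minimal sketch of the before/after predicate (the enum mirrors the kernel's cpu_idle_type ordering; the helper functions are illustrative):

```c
#include <assert.h>

/* Same ordering as the kernel's enum cpu_idle_type. */
enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE, CPU_MAX_IDLE_TYPES };

/* Before the patch: only the newly-idle balance could trigger asym packing. */
static int old_allows_asym(enum cpu_idle_type idle)
{
	return idle == CPU_NEWLY_IDLE;
}

/* After the patch: the periodic idle load balance qualifies as well. */
static int new_allows_asym(enum cpu_idle_type idle)
{
	return idle != CPU_NOT_IDLE;
}
```

The only behavioral difference is for CPU_IDLE, i.e. the periodic balance run on an already idle CPU, which is exactly the case the commit message says was being missed.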
* [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval 2018-08-07 15:56 [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot 2018-08-07 15:56 ` [PATCH 1/3] sched/fair: fix rounding issue for asym packing Vincent Guittot 2018-08-07 15:56 ` [PATCH 2/3] sched/fair: trigger asym_packing during idle load balance Vincent Guittot @ 2018-08-07 15:56 ` Vincent Guittot 2018-12-13 13:52 ` Peter Zijlstra 2018-09-10 12:40 ` [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot 3 siblings, 1 reply; 7+ messages in thread From: Vincent Guittot @ 2018-08-07 15:56 UTC (permalink / raw) To: peterz, mingo, linux-kernel Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot In case of active balance, we increase the balance interval to cover pinned-task cases not covered by the all_pinned logic. Nevertheless, the active migration triggered by asym packing should be treated as the normal unbalanced case and reset the interval to its default value; otherwise, active migration for asym_packing can easily be delayed for hundreds of ms because of this all_pinned detection mechanism. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> --- kernel/sched/fair.c | 27 +++++++++++++++++++-------- 1 file changed, 19 insertions(+), 8 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 5f1b6c6..ceb6bed 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8360,22 +8360,32 @@ static struct rq *find_busiest_queue(struct lb_env *env, */ #define MAX_PINNED_INTERVAL 512 -static int need_active_balance(struct lb_env *env) +static inline bool +asym_active_balance(enum cpu_idle_type idle, unsigned int flags, int dst, int src) { - struct sched_domain *sd = env->sd; - - if (env->idle != CPU_NOT_IDLE) { + if (idle != CPU_NOT_IDLE) { /* * ASYM_PACKING needs to force migrate tasks from busy but * lower priority CPUs in order to pack all tasks in the * highest priority CPUs.
*/ - if ((sd->flags & SD_ASYM_PACKING) && - sched_asym_prefer(env->dst_cpu, env->src_cpu)) - return 1; + if ((flags & SD_ASYM_PACKING) && + sched_asym_prefer(dst, src)) + return true; } + return false; +} + +static int need_active_balance(struct lb_env *env) +{ + struct sched_domain *sd = env->sd; + + + if (asym_active_balance(env->idle, sd->flags, env->dst_cpu, env->src_cpu)) + return 1; + /* * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task. * It's worth migrating the task if the src_cpu's capacity is reduced @@ -8650,7 +8660,8 @@ static int load_balance(int this_cpu, struct rq *this_rq, } else sd->nr_balance_failed = 0; - if (likely(!active_balance)) { + if (likely(!active_balance) || + asym_active_balance(env.idle, sd->flags, env.dst_cpu, env.src_cpu)) { /* We were unbalanced, so reset the balancing interval */ sd->balance_interval = sd->min_interval; } else { -- 2.7.4 ^ permalink raw reply related [flat|nested] 7+ messages in thread
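The delay this patch removes can be illustrated with a toy model of the interval update at the end of load_balance() (the cap constant matches MAX_PINNED_INTERVAL from the patch, but the millisecond figures and the helper itself are illustrative, not the exact kernel tunables):

```c
#include <assert.h>

#define MAX_PINNED_INTERVAL 512	/* cap assumed for the doubling, in ms */

static unsigned int next_interval(unsigned int interval, unsigned int min_interval,
				  int active_balance, int asym_active)
{
	/* A normal imbalance, or an asym_packing active balance, resets the interval. */
	if (!active_balance || asym_active)
		return min_interval;
	/* Otherwise the pinned-task heuristic doubles it, delaying the next attempt. */
	if (interval < MAX_PINNED_INTERVAL)
		interval *= 2;
	return interval;
}
```

Starting from 8 ms, a handful of consecutive active balances without the asym exception push the interval up to 512 ms (the "hundreds of ms" from the commit message), while the patched path keeps resetting it to 8 ms.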
* Re: [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval 2018-08-07 15:56 ` [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval Vincent Guittot @ 2018-12-13 13:52 ` Peter Zijlstra 2018-12-13 15:36 ` Vincent Guittot 0 siblings, 1 reply; 7+ messages in thread From: Peter Zijlstra @ 2018-12-13 13:52 UTC (permalink / raw) To: Vincent Guittot; +Cc: mingo, linux-kernel, valentin.schneider, Morten.Rasmussen On Tue, Aug 07, 2018 at 05:56:27PM +0200, Vincent Guittot wrote: > +static inline bool > +asym_active_balance(enum cpu_idle_type idle, unsigned int flags, int dst, int src) > { > + if (idle != CPU_NOT_IDLE) { > > /* > * ASYM_PACKING needs to force migrate tasks from busy but > * lower priority CPUs in order to pack all tasks in the > * highest priority CPUs. > */ > + if ((flags & SD_ASYM_PACKING) && > + sched_asym_prefer(dst, src)) > + return true; > } > > + return false; > +} > + > +static int need_active_balance(struct lb_env *env) > +{ > + struct sched_domain *sd = env->sd; > + > + > + if (asym_active_balance(env->idle, sd->flags, env->dst_cpu, env->src_cpu)) > + return 1; > + > /* > * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task. > * It's worth migrating the task if the src_cpu's capacity is reduced > @@ -8650,7 +8660,8 @@ static int load_balance(int this_cpu, struct rq *this_rq, > } else > sd->nr_balance_failed = 0; > > + if (likely(!active_balance) || > + asym_active_balance(env.idle, sd->flags, env.dst_cpu, env.src_cpu)) { Perhaps like the below? --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8857,21 +8857,24 @@ static struct rq *find_busiest_queue(str */ #define MAX_PINNED_INTERVAL 512 +static inline bool +asym_active_balance(struct lb_env *env) +{ + /* + * ASYM_PACKING needs to force migrate tasks from busy but + * lower priority CPUs in order to pack all tasks in the + * highest priority CPUs. 
+ */ + return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) && + sched_asym_prefer(env->dst_cpu, env->src_cpu); +} + static int need_active_balance(struct lb_env *env) { struct sched_domain *sd = env->sd; - if (env->idle != CPU_NOT_IDLE) { - - /* - * ASYM_PACKING needs to force migrate tasks from busy but - * lower priority CPUs in order to pack all tasks in the - * highest priority CPUs. - */ - if ((sd->flags & SD_ASYM_PACKING) && - sched_asym_prefer(env->dst_cpu, env->src_cpu)) - return 1; - } + if (asym_active_balance(env)) + return 1; /* * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task. @@ -9150,7 +9153,7 @@ static int load_balance(int this_cpu, st } else sd->nr_balance_failed = 0; - if (likely(!active_balance)) { + if (likely(!active_balance) || asym_active_balance(&env)) { /* We were unbalanced, so reset the balancing interval */ sd->balance_interval = sd->min_interval; } else { ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval 2018-12-13 13:52 ` Peter Zijlstra @ 2018-12-13 15:36 ` Vincent Guittot 0 siblings, 0 replies; 7+ messages in thread From: Vincent Guittot @ 2018-12-13 15:36 UTC (permalink / raw) To: Peter Zijlstra Cc: Ingo Molnar, linux-kernel, Valentin Schneider, Morten Rasmussen On Thu, 13 Dec 2018 at 14:52, Peter Zijlstra <peterz@infradead.org> wrote: > > On Tue, Aug 07, 2018 at 05:56:27PM +0200, Vincent Guittot wrote: > > +static inline bool > > +asym_active_balance(enum cpu_idle_type idle, unsigned int flags, int dst, int src) > > { > > + if (idle != CPU_NOT_IDLE) { > > > > /* > > * ASYM_PACKING needs to force migrate tasks from busy but > > * lower priority CPUs in order to pack all tasks in the > > * highest priority CPUs. > > */ > > + if ((flags & SD_ASYM_PACKING) && > > + sched_asym_prefer(dst, src)) > > + return true; > > } > > > > + return false; > > +} > > + > > +static int need_active_balance(struct lb_env *env) > > +{ > > + struct sched_domain *sd = env->sd; > > + > > + > > + if (asym_active_balance(env->idle, sd->flags, env->dst_cpu, env->src_cpu)) > > + return 1; > > + > > /* > > * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task. > > * It's worth migrating the task if the src_cpu's capacity is reduced > > @@ -8650,7 +8660,8 @@ static int load_balance(int this_cpu, struct rq *this_rq, > > } else > > sd->nr_balance_failed = 0; > > > > + if (likely(!active_balance) || > > + asym_active_balance(env.idle, sd->flags, env.dst_cpu, env.src_cpu)) { > > Perhaps like the below? Yes. 
Far more simple and readable > > --- a/kernel/sched/fair.c > +++ b/kernel/sched/fair.c > @@ -8857,21 +8857,24 @@ static struct rq *find_busiest_queue(str > */ > #define MAX_PINNED_INTERVAL 512 > > +static inline bool > +asym_active_balance(struct lb_env *env) > +{ > + /* > + * ASYM_PACKING needs to force migrate tasks from busy but > + * lower priority CPUs in order to pack all tasks in the > + * highest priority CPUs. > + */ > + return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) && > + sched_asym_prefer(env->dst_cpu, env->src_cpu); > +} > + > static int need_active_balance(struct lb_env *env) > { > struct sched_domain *sd = env->sd; > > - if (env->idle != CPU_NOT_IDLE) { > - > - /* > - * ASYM_PACKING needs to force migrate tasks from busy but > - * lower priority CPUs in order to pack all tasks in the > - * highest priority CPUs. > - */ > - if ((sd->flags & SD_ASYM_PACKING) && > - sched_asym_prefer(env->dst_cpu, env->src_cpu)) > - return 1; > - } > + if (asym_active_balance(env)) > + return 1; > > /* > * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task. > @@ -9150,7 +9153,7 @@ static int load_balance(int this_cpu, st > } else > sd->nr_balance_failed = 0; > > - if (likely(!active_balance)) { > + if (likely(!active_balance) || asym_active_balance(&env)) { > /* We were unbalanced, so reset the balancing interval */ > sd->balance_interval = sd->min_interval; > } else { ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH 0/3] sched/fair: some fixes for asym_packing 2018-08-07 15:56 [PATCH 0/3] sched/fair: some fixes for asym_packing Vincent Guittot ` (2 preceding siblings ...) 2018-08-07 15:56 ` [PATCH 3/3] sched/fair: fix unnecessary increase of balance interval Vincent Guittot @ 2018-09-10 12:40 ` Vincent Guittot 3 siblings, 0 replies; 7+ messages in thread From: Vincent Guittot @ 2018-09-10 12:40 UTC (permalink / raw) To: Peter Zijlstra, Ingo Molnar, linux-kernel Cc: Valentin Schneider, Morten Rasmussen Hi, On Tue, 7 Aug 2018 at 17:56, Vincent Guittot <vincent.guittot@linaro.org> wrote: > > During the review of misfit task patchset, Morten and Valentin raised some > problems with the use of SD_ASYM_PACKING flag on asymetric system like > hikey960 arm64 big/LITTLE platform. The study of the use cases has shown > some problems that can happen for every systems that use the flag. > > The 3 patches fixes the problems raised for lmbench and the rt-app UC that > creates 2 tasks that start as small tasks and then become suddenly always > running tasks. (I can provide the rt-app json isf needed) Gentle ping on this patchset > > Vincent Guittot (3): > sched/fair: fix rounding issue for asym packing > sched/fair: trigger asym_packing during idle load balance > sched/fair: fix unnecessary increase of balance interval > > kernel/sched/fair.c | 33 +++++++++++++++++++++++++-------- > 1 file changed, 25 insertions(+), 8 deletions(-) > > -- > 2.7.4 > ^ permalink raw reply [flat|nested] 7+ messages in thread