* [PATCH v4 0/3] sched/fair: some fixes for asym_packing
@ 2019-01-17 17:44 Vincent Guittot
From: Vincent Guittot @ 2019-01-17 17:44 UTC
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
During the review of the misfit task patchset, Morten and Valentin raised some
problems with the use of the SD_ASYM_PACKING flag on asymmetric systems like
the hikey960 arm64 big.LITTLE platform. The study of the use cases has shown
problems that can happen on every system that uses the flag.
These 3 patches fix the problems raised for lmbench and for the rt-app use
case that creates 2 tasks which start as small tasks and then suddenly become
always-running tasks. (I can provide the rt-app json if needed)
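For reference, a rough plain-C equivalent of that rt-app use case (a sketch
only: the phase lengths and duty cycle are made-up values, the rt-app json is
the real reproducer):

#include <pthread.h>
#include <unistd.h>

static void *task(void *unused)
{
	volatile unsigned long v = 0;
	unsigned long i, j;

	/* small phase: short busy runs with long sleeps, low utilization */
	for (i = 0; i < 200; i++) {
		for (j = 0; j < 100000; j++)
			v++;
		usleep(9000);
	}

	/* sudden change: the task becomes always running */
	for (;;)
		v++;

	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, task, NULL);
	pthread_create(&t2, NULL, task, NULL);
	pthread_join(t1, NULL);	/* never returns */
	return 0;
}

Run on a big.LITTLE board, the always-running phase is where asym_packing is
expected to migrate the tasks to the higher priority CPUs.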
Changes since v3:
- simplify imbalance computation
Changes since v2:
- include other active balance reasons
- set imbalance to avg_load as suggested by Valentin
Changes since v1:
- rebase on tip/sched/core
- changes asym_active_balance() as suggested by Peter
Vincent Guittot (3):
sched/fair: fix rounding issue for asym packing
sched/fair: trigger asym_packing during idle load balance
sched/fair: fix unnecessary increase of balance interval
kernel/sched/fair.c | 44 ++++++++++++++++++++++++++++----------------
1 file changed, 28 insertions(+), 16 deletions(-)
--
2.7.4
* [PATCH v4 1/3] sched/fair: fix rounding issue for asym packing
From: Vincent Guittot @ 2019-01-17 17:44 UTC
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
When check_asym_packing() is triggered, the imbalance is set to:
  busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
but busiest_stat.avg_load equals:
  sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
These divisions can introduce a rounding error that makes the imbalance
slightly lower than the weighted load of the cfs_rq. This is enough for
find_busiest_queue() to skip the rq and prevents the asym migration from
happening.
Directly set the imbalance to the busiest group's sgs->group_load to remove
the rounding error.
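A standalone userspace sketch of the rounding (the values are made up, and
DIV_ROUND_CLOSEST() is re-implemented here for positive operands only):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define DIV_ROUND_CLOSEST(x, d)	(((x) + ((d) / 2)) / (d))

int main(void)
{
	unsigned long group_load = 601, group_capacity = 2048;

	/* update_sg_lb_stats(): truncating division, 300.5 becomes 300 */
	unsigned long avg_load = group_load * SCHED_CAPACITY_SCALE /
				 group_capacity;

	/* old check_asym_packing(): scaling back yields 600, not 601 */
	unsigned long imbalance = DIV_ROUND_CLOSEST(avg_load * group_capacity,
						    SCHED_CAPACITY_SCALE);

	printf("group_load=%lu imbalance=%lu\n", group_load, imbalance);
	return 0;
}

Since the computed imbalance (600) ends up below the group load (601), the rq
is skipped even though it should be picked.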
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
---
kernel/sched/fair.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca46964..1e4bed4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
return 0;
- env->imbalance = DIV_ROUND_CLOSEST(
- sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
- SCHED_CAPACITY_SCALE);
+ env->imbalance = sds->busiest_stat.group_load;
return 1;
}
--
2.7.4
* [PATCH v4 2/3] sched/fair: trigger asym_packing during idle load balance
From: Vincent Guittot @ 2019-01-17 17:44 UTC
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
A newly idle load balance is not always triggered when a CPU becomes idle.
This prevents the scheduler from getting a chance to migrate tasks for asym
packing.
Enable active migration because of asym packing during the idle load balance
too.
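To make the change concrete (a simplified sketch; the real enum cpu_idle_type
lives in the scheduler headers):

enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

/* before: asym packing could only force a migration on newly idle balance */
static int asym_could_migrate_old(enum cpu_idle_type idle)
{
	return idle == CPU_NEWLY_IDLE;
}

/* after: both the newly idle and the periodic idle balance can trigger it */
static int asym_could_migrate_new(enum cpu_idle_type idle)
{
	return idle != CPU_NOT_IDLE;
}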
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1e4bed4..bd3b0ac 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8853,7 +8853,7 @@ static int need_active_balance(struct lb_env *env)
{
struct sched_domain *sd = env->sd;
- if (env->idle == CPU_NEWLY_IDLE) {
+ if (env->idle != CPU_NOT_IDLE) {
/*
* ASYM_PACKING needs to force migrate tasks from busy but
--
2.7.4
* [PATCH v4 3/3] sched/fair: fix unnecessary increase of balance interval
From: Vincent Guittot @ 2019-01-17 17:44 UTC
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
In the case of an active balance, we increase the balance interval to cover
pinned-task cases not covered by the all_pinned logic. Nevertheless, the
active migration triggered by asym packing should be treated as the normal
unbalanced case and should reset the interval to the default value; otherwise
the active migration for asym_packing can easily be delayed for hundreds of
ms because of this pinned-task detection mechanism.
The same happens for the other conditions tested in need_active_balance(),
like misfit tasks and when the capacity of src_cpu is reduced compared to
dst_cpu (see the comments in need_active_balance() for details).
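To see how the delay builds up, a standalone sketch of the interval doubling
done in load_balance() (the 8ms starting value is a made-up but plausible
min_interval):

#include <stdio.h>

#define MAX_PINNED_INTERVAL	512

int main(void)
{
	unsigned int interval = 8;	/* ms, assumed sd->min_interval */
	int round = 0;

	/* same growth as load_balance() while the rq looks pinned */
	while (interval < MAX_PINNED_INTERVAL) {
		interval *= 2;
		printf("round %d: balance_interval = %u ms\n", ++round,
		       interval);
	}
	return 0;
}

After only six rounds the interval has grown to 512ms, which matches the
"hundreds of ms" delay described above for the asym_packing migration.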
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 40 +++++++++++++++++++++++++++-------------
1 file changed, 27 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd3b0ac..0e17991 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8849,21 +8849,25 @@ static struct rq *find_busiest_queue(struct lb_env *env,
*/
#define MAX_PINNED_INTERVAL 512
-static int need_active_balance(struct lb_env *env)
+static inline bool
+asym_active_balance(struct lb_env *env)
{
- struct sched_domain *sd = env->sd;
+ /*
+ * ASYM_PACKING needs to force migrate tasks from busy but
+ * lower priority CPUs in order to pack all tasks in the
+ * highest priority CPUs.
+ */
+ return env->idle != CPU_NOT_IDLE && (env->sd->flags & SD_ASYM_PACKING) &&
+ sched_asym_prefer(env->dst_cpu, env->src_cpu);
+}
- if (env->idle != CPU_NOT_IDLE) {
+static inline bool
+voluntary_active_balance(struct lb_env *env)
+{
+ struct sched_domain *sd = env->sd;
- /*
- * ASYM_PACKING needs to force migrate tasks from busy but
- * lower priority CPUs in order to pack all tasks in the
- * highest priority CPUs.
- */
- if ((sd->flags & SD_ASYM_PACKING) &&
- sched_asym_prefer(env->dst_cpu, env->src_cpu))
- return 1;
- }
+ if (asym_active_balance(env))
+ return 1;
/*
* The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
@@ -8881,6 +8885,16 @@ static int need_active_balance(struct lb_env *env)
if (env->src_grp_type == group_misfit_task)
return 1;
+ return 0;
+}
+
+static int need_active_balance(struct lb_env *env)
+{
+ struct sched_domain *sd = env->sd;
+
+ if (voluntary_active_balance(env))
+ return 1;
+
return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
}
@@ -9142,7 +9156,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
} else
sd->nr_balance_failed = 0;
- if (likely(!active_balance)) {
+ if (likely(!active_balance) || voluntary_active_balance(&env)) {
/* We were unbalanced, so reset the balancing interval */
sd->balance_interval = sd->min_interval;
} else {
--
2.7.4