* [RESEND PATCH 0/3] sched/fair: some fixes for asym_packing
From: Vincent Guittot @ 2018-10-02 7:26 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
During the review of the misfit task patchset, Morten and Valentin raised some
problems with the use of the SD_ASYM_PACKING flag on asymmetric systems like
the hikey960 arm64 big.LITTLE platform. The study of the use cases has shown
problems that can happen on every system that uses the flag.
These 3 patches fix the problems raised for lmbench and for an rt-app use case
that creates 2 tasks which start as small tasks and then suddenly become
always-running tasks. (I can provide the rt-app json if needed)
- Rebase on latest tip/sched/core
Vincent Guittot (3):
sched/fair: fix rounding issue for asym packing
sched/fair: trigger asym_packing during idle load balance
sched/fair: fix unnecessary increase of balance interval
kernel/sched/fair.c | 33 +++++++++++++++++++++++++--------
1 file changed, 25 insertions(+), 8 deletions(-)
--
2.7.4
* [RESEND PATCH 1/3] sched/fair: fix rounding issue for asym packing
From: Vincent Guittot @ 2018-10-02 7:26 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
When check_asym_packing() is triggered, the imbalance is set to:
busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
busiest_stat.avg_load also comes from a division, and the final rounding can
make the imbalance slightly lower than the weighted load of the cfs_rq. This
is enough to skip the rq in find_busiest_queue() and prevents the asym
migration from happening.
Add 1 to avg_load to make sure that the targeted CPU will not be skipped
unexpectedly.
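As an illustration with made-up numbers (the exact values and the rounding
mode used for the imbalance depend on the platform and kernel version): with
group_load = 1000 and group_capacity = 700,
	avg_load  = 1000 * 1024 / 700  = 1462	(1462.86 truncated)
	imbalance = 1462 * 700  / 1024 = 999	(999.41)
so the imbalance ends up just below the weighted load of 1000 and the rq is
skipped in find_busiest_queue(). With avg_load bumped to 1463, the imbalance
comes back to 1000 and the rq can be selected.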
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6bd142d..0ed99ad2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7824,6 +7824,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
/* Adjust by relative CPU capacity of the group */
sgs->group_capacity = group->sgc->capacity;
sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
+ /*
+ * Add 1 to compensate for the rounding of the division above, which can
+ * make the computed imbalance slightly lower than the weighted load and
+ * cause the rq to be skipped when selecting the busiest queue.
+ */
+ sgs->avg_load += 1;
if (sgs->sum_nr_running)
sgs->load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
--
2.7.4
* [RESEND PATCH 2/3] sched/fair: trigger asym_packing during idle load balance
From: Vincent Guittot @ 2018-10-02 7:26 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
Newly idle load balance is not always triggered when a CPU becomes idle. This
prevents the scheduler from getting a chance to migrate tasks for asym packing.
Enable active migration because of asym packing during idle load balance too.
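For reference, env->idle is an enum cpu_idle_type, which in kernels of that
era looks roughly like the sketch below (see the scheduler headers for the
authoritative definition):

enum cpu_idle_type {
	CPU_IDLE,		/* periodic balance on an idle CPU */
	CPU_NOT_IDLE,		/* periodic balance on a busy CPU */
	CPU_NEWLY_IDLE,		/* balance run when a CPU has just become idle */
	CPU_MAX_IDLE_TYPES
};

so checking "env->idle != CPU_NOT_IDLE" covers both the newly idle and the
idle (periodic/nohz) balance paths instead of only the newly idle one.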
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ed99ad2..00f2171 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8460,7 +8460,7 @@ static int need_active_balance(struct lb_env *env)
{
struct sched_domain *sd = env->sd;
- if (env->idle == CPU_NEWLY_IDLE) {
+ if (env->idle != CPU_NOT_IDLE) {
/*
* ASYM_PACKING needs to force migrate tasks from busy but
--
2.7.4
* [RESEND PATCH 3/3] sched/fair: fix unnecessary increase of balance interval
From: Vincent Guittot @ 2018-10-02 7:26 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: valentin.schneider, Morten.Rasmussen, Vincent Guittot
In case of active balance, we increase the balance interval to cover pinned
task cases not covered by the all_pinned logic. Nevertheless, the active
migration triggered by asym packing should be treated as the normal unbalanced
case and reset the interval to its default value; otherwise active migration
for asym_packing can easily be delayed for hundreds of ms because of this
all_pinned detection mechanism.
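To see where the hundreds of ms can come from, here is a simplified sketch of
how the next balance interval is derived in mainline kernels of that era (the
real helper is get_sd_balance_interval(); the tunable values quoted are the
defaults and may differ per topology):

static unsigned long next_interval(struct sched_domain *sd, int cpu_busy)
{
	unsigned long interval = sd->balance_interval;	/* in ms */

	if (cpu_busy)
		interval *= sd->busy_factor;	/* 32 with the defaults of that era */

	/* convert ms to jiffies and clamp to the global maximum */
	interval = msecs_to_jiffies(interval);
	return clamp(interval, 1UL, max_load_balance_interval);
}

Once active balancing has doubled balance_interval up to max_interval (twice
the domain weight in ms with the default tunables), a busy destination CPU
only retries every few hundred ms, e.g. 16 ms * 32 = 512 ms on an 8-CPU
domain.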
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 27 +++++++++++++++++++--------
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 00f2171..4b6a226 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8456,22 +8456,32 @@ static struct rq *find_busiest_queue(struct lb_env *env,
*/
#define MAX_PINNED_INTERVAL 512
-static int need_active_balance(struct lb_env *env)
+static inline bool
+asym_active_balance(enum cpu_idle_type idle, unsigned int flags, int dst, int src)
{
- struct sched_domain *sd = env->sd;
-
- if (env->idle != CPU_NOT_IDLE) {
+ if (idle != CPU_NOT_IDLE) {
/*
* ASYM_PACKING needs to force migrate tasks from busy but
* lower priority CPUs in order to pack all tasks in the
* highest priority CPUs.
*/
- if ((sd->flags & SD_ASYM_PACKING) &&
- sched_asym_prefer(env->dst_cpu, env->src_cpu))
- return 1;
+ if ((flags & SD_ASYM_PACKING) &&
+ sched_asym_prefer(dst, src))
+ return true;
}
+ return false;
+}
+
+static int need_active_balance(struct lb_env *env)
+{
+ struct sched_domain *sd = env->sd;
+
+
+ if (asym_active_balance(env->idle, sd->flags, env->dst_cpu, env->src_cpu))
+ return 1;
+
/*
* The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
* It's worth migrating the task if the src_cpu's capacity is reduced
@@ -8749,7 +8759,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
} else
sd->nr_balance_failed = 0;
- if (likely(!active_balance)) {
+ if (likely(!active_balance) ||
+ asym_active_balance(env.idle, sd->flags, env.dst_cpu, env.src_cpu)) {
/* We were unbalanced, so reset the balancing interval */
sd->balance_interval = sd->min_interval;
} else {
--
2.7.4