* [PATCH] sched/fair: prefer prev cpu in asymmetric wakeup path
@ 2020-10-22 13:43 Vincent Guittot
  2020-10-22 14:53 ` Valentin Schneider
  0 siblings, 1 reply; 7+ messages in thread
From: Vincent Guittot @ 2020-10-22 13:43 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel, valentin.schneider, morten.rasmussen
  Cc: Vincent Guittot

In the fast wakeup path, the scheduler always checks whether the local or
previous CPUs are good candidates for the task before looking at other CPUs
in the domain. With
  commit b7a331615d25 ("sched/fair: Add asymmetric CPU capacity wakeup scan")
heterogeneous systems gained a dedicated path, but that path doesn't try to
reuse the previous CPU whenever possible. If the previous CPU is idle and
belongs to the asymmetric domain, it should be checked first, before looking
for another CPU, because it remains one of the best candidates and it
stabilizes task placement on the system.

This change aligns the behavior of the asymmetric path with the symmetric
one and reduces the cases where a task migrates across all CPUs of the
sd_asym_cpucapacity domain at wakeup.

This change does not impact the normal EAS mode; it only affects the
overloaded case and the case where EAS is not used.

On hikey960 with the performance governor (EAS disabled):

./perf bench sched pipe -T -l 150000
               mainline            w/ patch
# migrations   299811              3
ops/sec        154535(+/-0.13%)    181754(+/-0.29%)   +17%
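
Both figures above can be collected in a single run by counting task
migrations while the benchmark executes; a minimal sketch, assuming the
migration count is taken from the sched:sched_migrate_task tracepoint:

  ./perf stat -e sched:sched_migrate_task -- \
      ./perf bench sched pipe -T -l 150000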

Fixes: b7a331615d25 ("sched/fair: Add asymmetric CPU capacity wakeup scan")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa4c6227cd6d..f39638fe6b94 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6170,7 +6170,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
  * maximize capacity.
  */
 static int
-select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
+select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int prev, int target)
 {
 	unsigned long best_cap = 0;
 	int cpu, best_cpu = -1;
@@ -6178,9 +6178,22 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 
 	sync_entity_load_avg(&p->se);
 
+	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
+	    task_fits_capacity(p, capacity_of(target)))
+		return target;
+
 	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
+	/*
+	 * If the previous CPU belongs to this asymmetric domain and is idle,
+	 * check it 1st as it's the best candidate.
+	 */
+	if (prev != target && cpumask_test_cpu(prev, cpus) &&
+	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
+	    task_fits_capacity(p, capacity_of(prev)))
+		return prev;
+
 	for_each_cpu_wrap(cpu, cpus, target) {
 		unsigned long cpu_cap = capacity_of(cpu);
 
@@ -6223,7 +6236,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		if (!sd)
 			goto symmetric;
 
-		i = select_idle_capacity(p, sd, target);
+		i = select_idle_capacity(p, sd, prev, target);
 		return ((unsigned)i < nr_cpumask_bits) ? i : target;
 	}
 
-- 
2.17.1



Thread overview: 7+ messages
2020-10-22 13:43 [PATCH] sched/fair: prefer prev cpu in asymmetric wakeup path Vincent Guittot
2020-10-22 14:53 ` Valentin Schneider
2020-10-22 15:33   ` Vincent Guittot
2020-10-22 17:45     ` Valentin Schneider
2020-10-23  7:15       ` Vincent Guittot
2020-10-23 17:14     ` Dietmar Eggemann
2020-10-26  8:27       ` Vincent Guittot
