* [PATCH] sched/fair: improve spreading of utilization
@ 2020-03-12 16:54 Vincent Guittot
  2020-03-13 10:26 ` Peter Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Vincent Guittot @ 2020-03-12 16:54 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel
  Cc: Vincent Guittot

During load balancing, a group with spare capacity will try to pull some
utilization from an overloaded group. In such a case, the load balancer
looks for the runqueue with the highest utilization. Nevertheless, it
should also ensure that there are pending tasks to pull; otherwise the
load balance will fail to pull a task and the spreading of the load will
be delayed.
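
To illustrate the problem, here is a toy user-space model of the
selection logic (the struct fields and utilization numbers are made up
for illustration only; the real code is in find_busiest_queue(), see the
diff below):

  #include <stdio.h>

  struct rq { unsigned int nr_running; unsigned long util; };

  static int find_busiest(const struct rq *rqs, int nr_cpus)
  {
  	unsigned long busiest_util = 0;
  	int busiest = -1, i;

  	for (i = 0; i < nr_cpus; i++) {
  		/* A CPU with a single running task has nothing to pull. */
  		if (rqs[i].nr_running <= 1)
  			continue;

  		if (busiest_util < rqs[i].util) {
  			busiest_util = rqs[i].util;
  			busiest = i;
  		}
  	}
  	return busiest;
  }

  int main(void)
  {
  	/* CPU0 runs one heavy task; CPU1 runs two lighter ones. */
  	struct rq rqs[] = { { 1, 900 }, { 2, 600 } };

  	/*
  	 * Without the nr_running check, CPU0 would be picked and the
  	 * pull would fail; with it, CPU1 is picked instead.
  	 */
  	printf("busiest: CPU%d\n", find_busiest(rqs, 2));
  	return 0;
  }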

This situation is quite transient, but the effect can be highlighted with
a short sysbench run, where the time taken to spread the tasks
significantly impacts the overall result.

Below are the average results for 15 iterations on an arm64 octo core:
sysbench --test=cpu --num-threads=8  --max-requests=1000 run

                           tip/sched/core  +patchset
total time:                172ms           158ms
per-request statistics:
         avg:                1.337ms         1.244ms
         max:               21.191ms        10.753ms

The average max doesn't fully reflect the wide spread of the values,
which range from 1.350ms to more than 41ms for tip/sched/core and from
1.350ms to 21ms with the patch.

Other factors, like waiting for an idle load balance or cache hotness,
can delay the spreading of the tasks, which explains why we can still
see up to 21ms with the patch.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c8a379c357e..97a0307312d9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9025,6 +9025,14 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		case migrate_util:
 			util = cpu_util(cpu_of(rq));
 
+			/*
+			 * Don't try to pull utilization from a CPU with one
+			 * running task. Whatever its utilization, we will fail
+			 * to detach the task.
+			 */
+			if (nr_running <= 1)
+				continue;
+
 			if (busiest_util < util) {
 				busiest_util = util;
 				busiest = rq;
-- 
2.17.1



Thread overview: 14+ messages
2020-03-12 16:54 [PATCH] sched/fair: improve spreading of utilization Vincent Guittot
2020-03-13 10:26 ` Peter Zijlstra
2020-03-13 11:00 ` Valentin Schneider
2020-03-13 11:24   ` Vincent Guittot
2020-03-13 11:28     ` Valentin Schneider
2020-03-13 12:42       ` Valentin Schneider
2020-03-13 12:55         ` Vincent Guittot
2020-03-13 14:26           ` Vincent Guittot
2020-03-13 15:47             ` Valentin Schneider
2020-03-13 16:09               ` Vincent Guittot
2020-03-13 16:57                 ` Valentin Schneider
2020-03-13 17:12                   ` Vincent Guittot
2020-03-13 17:34                     ` Valentin Schneider
2020-03-20 12:58 ` [tip: sched/core] sched/fair: Improve spreading of utilization tip-bot2 for Vincent Guittot
