* [PATCH v2] sched/fair: update scale invariance of PELT
From: Vincent Guittot @ 2017-04-10  9:18 UTC (permalink / raw)
  To: peterz, mingo, linux-kernel
  Cc: dietmar.eggemann, Morten.Rasmussen, yuyang.du, pjt, bsegall,
	Vincent Guittot

The current implementation of load tracking invariance scales the
contribution with the current frequency and uarch performance (the
latter only for utilization) of the CPU. One main result of this formula
is that the figures are capped by the current capacity of the CPU.
Another is that load_avg is not invariant, because it is not scaled with
uarch performance.
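
To make the cap concrete, here is a normalized, standalone sketch (plain
floating point, not the kernel's fixed-point code) of the effect of the
mainline scaling; the logic it mimics is the cap_scale()/scale_cpu code
removed by the diff below:

#include <stdio.h>

int main(void)
{
	const double y = 0.978572;	/* PELT decay factor: y^32 ~= 0.5 */
	double util = 0.0;
	double cap = 512.0;		/* always running at half capacity */
	int i;

	/*
	 * Mainline scales the contribution: each 1024us period of an
	 * always-running task contributes cap instead of 1024, so the
	 * geometric average converges to cap and can never reach 1024.
	 */
	for (i = 0; i < 1000; i++)
		util = util * y + cap * (1.0 - y);

	printf("util_avg converges to ~%.0f (not 1024)\n", util);
	return 0;
}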

The util_avg of a periodic task that runs r time slots every p time slots
varies in the range:

    U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)

where U is the max util_avg value (= SCHED_CAPACITY_SCALE) and y^i the
decay after i periods of idle time (i reaches p - r just before the task
wakes up again).

At a lower capacity, the range becomes:

    U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)

where C reflects the ratio between the current compute capacity and the
max capacity, and r' and i' are the running and idle times stretched by
running at the lower capacity.

So C tries to compensate for the change in (1-y^r'), but it can't be
accurate.
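
As a concrete example, here is a standalone sketch (not kernel code)
evaluating these bounds for made-up figures of r = 8 and p = 16 periods
at max capacity (build with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double U = 1024.0;		/* SCHED_CAPACITY_SCALE */
	const double y = pow(0.5, 1.0 / 32.0);	/* PELT: y^32 == 0.5 */
	double r = 8.0, p = 16.0;		/* runs 8 periods every 16 */
	double i = p - r;			/* idle time before wake-up */
	double hi = U * (1.0 - pow(y, r)) / (1.0 - pow(y, p));
	double lo = hi * pow(y, i);

	/* in steady state, util_avg oscillates inside [lo, hi] */
	printf("util_avg in [%.0f, %.0f]\n", lo, hi);
	return 0;
}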

Instead of scaling the contribution value of the PELT algorithm, we
should scale the running time. The PELT signal aims to track the amount
of computation of tasks and/or rqs, so it seems more correct to scale
the running time to reflect the effective amount of computation done
since the last update.

In order to be fully invariant, we need to apply the same amounts of
running time and idle time whatever the current capacity. Because
running at lower capacity implies that the task will run longer, we have
to track the amount of "stolen" idle time and apply it when the task
becomes idle.
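
For example, 8ms of wall-clock running time at half capacity counts as
4ms of computation, and the remaining 4ms is given back as extra idle
time once the task sleeps. A minimal userspace sketch of this
bookkeeping (the helper names are made up; scale_time() below is the
real implementation):

#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10

static unsigned long long stolen_idle_time;

/* running: scale the wall-clock delta down to effective computation */
static unsigned long long account_running(unsigned long long delta,
					  unsigned long cap)
{
	stolen_idle_time += delta;
	delta = (delta * cap) >> SCHED_CAPACITY_SHIFT;
	stolen_idle_time -= delta;	/* the remainder was "stolen" */
	return delta;
}

/* idle: give the stolen time back as additional idle time */
static unsigned long long account_idle(unsigned long long delta)
{
	delta += stolen_idle_time;
	stolen_idle_time = 0;
	return delta;
}

int main(void)
{
	/* 8192us running at cap = 512 -> 4096us of computation */
	printf("running: %llu us\n", account_running(8192, 512));
	/* 8192us idle -> 8192us + 4096us stolen = 12288us of decay */
	printf("idle:    %llu us\n", account_idle(8192));
	return 0;
}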

But once the maximum utilization value (SCHED_CAPACITY_SCALE) has been
reached, the task is seen as an always-running task whatever the
capacity of the CPU (even at max compute capacity). In this case, we can
discard the "stolen" idle time, which becomes meaningless. In order to
cope with the rounding effects of the PELT algorithm, we take a margin
and consider a task with a utilization greater than 1000 (vs the 1024
max) as an always-running task; this is the
sa->util_sum < (LOAD_AVG_MAX * 1000) check in scale_time() below.

Then, we can use the same algorithm for both utilization and load, and
simplify __update_load_avg() now that the load of a task no longer has
to be capped by the CPU uarch capacity.

This new algorithm improves the responsiveness of PELT when the CPU is
not running at max capacity. The table below shows how long it takes to
reach some typical utilization values, depending on the capacity of the
CPU, with the current implementation and with this patch.

Util (%)     max capacity   half capacity (mainline)   half capacity (w/ patch)
972 (95%)    138ms          not reachable              276ms
486 (47.5%)   30ms          138ms                       60ms
256 (25%)     13ms           32ms                       26ms
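
With the patch, the half-capacity figures are simply the max-capacity
times stretched by the capacity ratio, as this standalone sketch (not
kernel code) reproduces within 1-2ms; PELT really accumulates in
discrete 1024us periods, so the continuous formula rounds slightly
differently (build with -lm):

#include <math.h>
#include <stdio.h>

/* time (ms) for an always-running task starting from zero utilization
 * to reach 'target' at compute capacity 'cap', with the new algorithm */
static double time_to_util(double target, double cap)
{
	/* util_avg(t) = 1024 * (1 - y^t), with y^32 == 0.5 and t in ms */
	double t = 32.0 * log2(1024.0 / (1024.0 - target));

	return t * 1024.0 / cap;	/* time stretches at lower capacity */
}

int main(void)
{
	double targets[] = { 972.0, 486.0, 256.0 };
	int i;

	for (i = 0; i < 3; i++)
		printf("%3.0f: %3.0fms @max, %3.0fms @half\n", targets[i],
		       time_to_util(targets[i], 1024.0),
		       time_to_util(targets[i], 512.0));
	return 0;
}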

On my Hikey (octo-core ARM platform) with the schedutil governor, the
time to reach the max OPP when starting from a null utilization
decreases from 223ms with the current scale invariance down to 121ms
with the new algorithm. For this test, I have enabled
arch_scale_freq_capacity() for arm64.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

Update since v1:
- rebase on latest tip/sched/core, which includes
  "Optimize __update_sched_avg()" plus the patch: https://lkml.org/lkml/2017/3/31/308


 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 53 ++++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 47 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d67eee8..ca9d00f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -313,6 +313,7 @@ struct load_weight {
  */
 struct sched_avg {
 	u64				last_update_time;
+	u64				stolen_idle_time;
 	u64				load_sum;
 	u32				util_sum;
 	u32				period_contrib;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1e5f580..b6f4253 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -734,6 +734,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 	struct sched_avg *sa = &se->avg;
 
 	sa->last_update_time = 0;
+	sa->stolen_idle_time = 0;
 	/*
 	 * sched_avg's period_contrib should be strictly less then 1024, so
 	 * we give it 1023 to make sure it is almost a period (1024us), and
@@ -2819,15 +2820,12 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
  *                         n=1
  */
 static __always_inline u32
-accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
+accumulate_sum(u64 delta, struct sched_avg *sa,
 	       unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
-	unsigned long scale_freq, scale_cpu;
 	u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */
 	u64 periods;
 
-	scale_freq = arch_scale_freq_capacity(NULL, cpu);
-	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
 
 	delta += sa->period_contrib;
 	periods = delta / 1024; /* A period is 1024us (~1ms) */
@@ -2852,19 +2850,54 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 	}
 	sa->period_contrib = delta;
 
-	contrib = cap_scale(contrib, scale_freq);
 	if (weight) {
 		sa->load_sum += weight * contrib;
 		if (cfs_rq)
 			cfs_rq->runnable_load_sum += weight * contrib;
 	}
 	if (running)
-		sa->util_sum += contrib * scale_cpu;
+		sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
 
 	return periods;
 }
 
 /*
+ * Scale the time to reflect the effective amount of computation done during
+ * this delta time.
+ */
+static __always_inline u64
+scale_time(u64 delta, int cpu, struct sched_avg *sa,
+		unsigned long weight, int running)
+{
+	if (running) {
+		sa->stolen_idle_time += delta;
+		/*
+		 * scale the elapsed time to reflect the real amount of
+		 * computation
+		 */
+		delta = cap_scale(delta, arch_scale_freq_capacity(NULL, cpu));
+		delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu));
+
+		/*
+		 * Track the amount of stolen idle time due to running at
+		 * lower capacity
+		 */
+		sa->stolen_idle_time -= delta;
+	} else if (!weight) {
+		if (sa->util_sum < (LOAD_AVG_MAX * 1000)) {
+			/*
+			 * Add the idle time stolen by running at lower compute
+			 * capacity
+			 */
+			delta += sa->stolen_idle_time;
+		}
+		sa->stolen_idle_time = 0;
+	}
+
+	return delta;
+}
+
+/*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series.  To do this we sub-divide our runnable
  * history into segments of approximately 1ms (1024us); label the segment that
@@ -2918,13 +2951,19 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	sa->last_update_time = now;
 
 	/*
+	 * Scale time to reflect the amount of computation effectively done
+	 * during the time slot at the current capacity
+	 */
+	delta = scale_time(delta, cpu, sa, weight, running);
+
+	/*
 	 * Now we know we crossed measurement unit boundaries. The *_avg
 	 * accrues by two steps:
 	 *
 	 * Step 1: accumulate *_sum since last_update_time. If we haven't
 	 * crossed period boundaries, finish.
 	 */
-	if (!accumulate_sum(delta, cpu, sa, weight, running, cfs_rq))
+	if (!accumulate_sum(delta, sa, weight, running, cfs_rq))
 		return 0;
 
 	/*
-- 
2.7.4
