From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 13 Sep 2015 04:03:23 -0700
From: tip-bot for Dietmar Eggemann
Cc: mingo@kernel.org, Dietmar.Eggemann@arm.com, tglx@linutronix.de,
    linux-kernel@vger.kernel.org, hpa@zytor.com, peterz@infradead.org,
    efault@gmx.de, torvalds@linux-foundation.org, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, Juri.Lelli@arm.com, morten.rasmussen@arm.com
Reply-To: Juri.Lelli@arm.com, morten.rasmussen@arm.com,
    vincent.guittot@linaro.org, efault@gmx.de, torvalds@linux-foundation.org,
    dietmar.eggemann@arm.com, linux-kernel@vger.kernel.org,
    peterz@infradead.org, hpa@zytor.com, Dietmar.Eggemann@arm.com,
    mingo@kernel.org, tglx@linutronix.de
In-Reply-To: <1439569394-11974-2-git-send-email-morten.rasmussen@arm.com>
References: <1439569394-11974-2-git-send-email-morten.rasmussen@arm.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] sched/fair: Make load tracking frequency scale-invariant
Git-Commit-ID: e0f5f3afd2cffa96291cd852056d83ff4e2e99c7

Commit-ID:  e0f5f3afd2cffa96291cd852056d83ff4e2e99c7
Gitweb:     http://git.kernel.org/tip/e0f5f3afd2cffa96291cd852056d83ff4e2e99c7
Author:     Dietmar Eggemann
AuthorDate: Fri, 14 Aug 2015 17:23:09 +0100
Committer:  Ingo Molnar
CommitDate: Sun, 13 Sep 2015 09:52:55 +0200

sched/fair: Make load tracking frequency scale-invariant

Apply a frequency scaling correction factor to per-entity load tracking
to make it frequency invariant. Currently, load appears bigger when the
CPU is running slower, which affects load-balancing decisions.

Each segment of the sched_avg.load_sum geometric series is now scaled
by the current frequency so that the sched_avg.load_avg of each sched
entity is invariant to frequency scaling.

Moreover, cfs_rq.runnable_load_sum is scaled by the current frequency
as well.

Signed-off-by: Dietmar Eggemann
Signed-off-by: Morten Rasmussen
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Juri Lelli
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: daniel.lezcano@linaro.org
Cc: mturquette@baylibre.com
Cc: pang.xunlei@zte.com.cn
Cc: rjw@rjwysocki.net
Cc: sgurrappadi@nvidia.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1439569394-11974-2-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar
---
 include/linux/sched.h |  6 +++---
 kernel/sched/fair.c   | 27 +++++++++++++++++----------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index a4ab9da..c8d923b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1177,9 +1177,9 @@ struct load_weight {
 
 /*
  * The load_avg/util_avg accumulates an infinite geometric series.
- * 1) load_avg factors the amount of time that a sched_entity is
- * runnable on a rq into its weight. For cfs_rq, it is the aggregated
- * such weights of all runnable and blocked sched_entities.
+ * 1) load_avg factors frequency scaling into the amount of time that a
+ * sched_entity is runnable on a rq into its weight. For cfs_rq, it is the
+ * aggregated such weights of all runnable and blocked sched_entities.
  * 2) util_avg factors frequency scaling into the amount of time
  * that a sched_entity is running on a CPU, in the range [0..SCHED_LOAD_SCALE].
  * For cfs_rq, it is the aggregated such times of all runnable and
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 47ece22..86cb27c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2515,6 +2515,8 @@ static u32 __compute_runnable_contrib(u64 n)
 	return contrib + runnable_avg_yN_sum[n];
 }
 
+#define scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 /*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series. To do this we sub-divide our runnable
@@ -2547,9 +2549,9 @@ static __always_inline int
 __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
-	u64 delta, periods;
+	u64 delta, scaled_delta, periods;
 	u32 contrib;
-	int delta_w, decayed = 0;
+	int delta_w, scaled_delta_w, decayed = 0;
 	unsigned long scale_freq = arch_scale_freq_capacity(NULL, cpu);
 
 	delta = now - sa->last_update_time;
@@ -2585,13 +2587,16 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		 * period and accrue it.
 		 */
 		delta_w = 1024 - delta_w;
+		scaled_delta_w = scale(delta_w, scale_freq);
 		if (weight) {
-			sa->load_sum += weight * delta_w;
-			if (cfs_rq)
-				cfs_rq->runnable_load_sum += weight * delta_w;
+			sa->load_sum += weight * scaled_delta_w;
+			if (cfs_rq) {
+				cfs_rq->runnable_load_sum +=
+						weight * scaled_delta_w;
+			}
 		}
 		if (running)
-			sa->util_sum += delta_w * scale_freq >> SCHED_CAPACITY_SHIFT;
+			sa->util_sum += scaled_delta_w;
 
 		delta -= delta_w;
 
@@ -2608,23 +2613,25 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 
 		/* Efficiently calculate \sum (1..n_period) 1024*y^i */
 		contrib = __compute_runnable_contrib(periods);
+		contrib = scale(contrib, scale_freq);
 		if (weight) {
 			sa->load_sum += weight * contrib;
 			if (cfs_rq)
 				cfs_rq->runnable_load_sum += weight * contrib;
 		}
 		if (running)
-			sa->util_sum += contrib * scale_freq >> SCHED_CAPACITY_SHIFT;
+			sa->util_sum += contrib;
 	}
 
 	/* Remainder of delta accrued against u_0` */
+	scaled_delta = scale(delta, scale_freq);
 	if (weight) {
-		sa->load_sum += weight * delta;
+		sa->load_sum += weight * scaled_delta;
 		if (cfs_rq)
-			cfs_rq->runnable_load_sum += weight * delta;
+			cfs_rq->runnable_load_sum += weight * scaled_delta;
 	}
 	if (running)
-		sa->util_sum += delta * scale_freq >> SCHED_CAPACITY_SHIFT;
+		sa->util_sum += scaled_delta;
 
 	sa->period_contrib += delta;
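
To make the effect of the new scale() macro concrete, here is a small
stand-alone userspace sketch, not kernel code: the weight, segment length
and capacity values below are made up for illustration, and it assumes
SCHED_CAPACITY_SHIFT is 10 (full capacity 1024), as in the tree this
patch applies to.

#include <stdio.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

/* Same shape as the macro added by the patch. */
#define scale(v, s)	((v) * (s) >> SCHED_CAPACITY_SHIFT)

int main(void)
{
	uint64_t weight = 1024;		/* illustrative nice-0 task weight */
	uint64_t delta  = 1024;		/* one full 1024us accumulation segment */

	/* Hypothetical current-frequency capacities, full down to quarter. */
	unsigned long freqs[] = { 1024, 768, 512, 256 };

	for (size_t i = 0; i < sizeof(freqs) / sizeof(freqs[0]); i++) {
		/* Discount the segment by current capacity, then weight it. */
		uint64_t scaled_delta = scale(delta, freqs[i]);
		uint64_t contrib      = weight * scaled_delta;

		printf("capacity %4lu/1024: segment %4llu -> load_sum += %llu\n",
		       freqs[i],
		       (unsigned long long)scaled_delta,
		       (unsigned long long)contrib);
	}
	return 0;
}

At capacity 512 the same 1024us segment adds only half as much to
load_sum as it does at full capacity, which is the frequency invariance
the changelog describes: time spent runnable at a lower frequency no
longer inflates the tracked load.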