From: Douglas RAILLARD <douglas.raillard@arm.com>
To: linux-kernel@vger.kernel.org, rjw@rjwysocki.net,
	viresh.kumar@linaro.org, peterz@infradead.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org
Cc: douglas.raillard@arm.com, dietmar.eggemann@arm.com,
	qperret@google.com, linux-pm@vger.kernel.org
Subject: [RFC PATCH v4 4/6] sched/cpufreq: Introduce sugov_cpu_ramp_boost
Date: Wed, 22 Jan 2020 17:35:36 +0000
Message-ID: <20200122173538.1142069-5-douglas.raillard@arm.com>
In-Reply-To: <20200122173538.1142069-1-douglas.raillard@arm.com>

Use the dynamics of the utilization signals to detect when the
utilization of a set of tasks starts increasing because of a change in
the tasks' behaviour. This allows detecting when spending extra power
for a faster frequency ramp-up would benefit the reactivity of the
system.

The ramp boost is computed as the difference between util_avg and
util_est_enqueued. This value gives a rough lower bound on how much
extra utilization the tasks are actually generating, compared to our
best current stable knowledge of it (which is util_est_enqueued).

When the set of runnable tasks changes, the boost is disabled, since
the contribution of blocked utilization to util_avg makes the delta
with util_est_enqueued uninformative.
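
For illustration only, here is a minimal standalone sketch of the
update rule described above (names and numbers are made up and this is
not part of the patch):

	/* Illustrative userspace model of the ramp boost rule. */
	#include <stdio.h>

	struct cpu_state {
		unsigned long util_est_enqueued;
		unsigned long util_avg;
	};

	static unsigned long ramp_boost_update(struct cpu_state *prev,
					       unsigned long util_est_enqueued,
					       unsigned long util_avg)
	{
		unsigned long boost = 0;

		/*
		 * Boost only while the enqueued estimate is stable and
		 * util_avg keeps rising above it.
		 */
		if (util_est_enqueued == prev->util_est_enqueued &&
		    util_avg >= prev->util_avg &&
		    util_avg > util_est_enqueued)
			boost = util_avg - util_est_enqueued;

		prev->util_est_enqueued = util_est_enqueued;
		prev->util_avg = util_avg;
		return boost;
	}

	int main(void)
	{
		struct cpu_state s = { .util_est_enqueued = 300, .util_avg = 300 };

		/* Same enqueued set, util_avg ramps past util_est_enqueued: prints 80 */
		printf("%lu\n", ramp_boost_update(&s, 300, 380));
		/* Enqueued set changed (new estimate): boost resets, prints 0 */
		printf("%lu\n", ramp_boost_update(&s, 350, 380));
		return 0;
	}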

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
---
 kernel/sched/cpufreq_schedutil.c | 43 ++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 608963da4916..25a410a1ff6a 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -61,6 +61,10 @@ struct sugov_cpu {
 	unsigned long		bw_dl;
 	unsigned long		max;
 
+	unsigned long		ramp_boost;
+	unsigned long		util_est_enqueued;
+	unsigned long		util_avg;
+
 	/* The field below is for single-CPU policies only: */
 #ifdef CONFIG_NO_HZ_COMMON
 	unsigned long		saved_idle_calls;
@@ -183,6 +187,42 @@ static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
 	}
 }
 
+static unsigned long sugov_cpu_ramp_boost(struct sugov_cpu *sg_cpu)
+{
+	return READ_ONCE(sg_cpu->ramp_boost);
+}
+
+static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
+{
+	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	unsigned long util_est_enqueued;
+	unsigned long util_avg;
+	unsigned long boost = 0;
+
+	util_est_enqueued = READ_ONCE(rq->cfs.avg.util_est.enqueued);
+	util_avg = READ_ONCE(rq->cfs.avg.util_avg);
+
+	/*
+	 * Boost when util_avg becomes higher than the previous stable
+	 * knowledge of the enqueued tasks' set util, which is CPU's
+	 * util_est_enqueued.
+	 *
+	 * We try to spot changes in the workload itself, so we want to
+	 * avoid the noise of tasks being enqueued/dequeued. To do that,
+	 * we only trigger boosting when the "amount of work" enqueued
+	 * is stable.
+	 */
+	if (util_est_enqueued == sg_cpu->util_est_enqueued &&
+	    util_avg >= sg_cpu->util_avg &&
+	    util_avg > util_est_enqueued)
+		boost = util_avg - util_est_enqueued;
+
+	sg_cpu->util_est_enqueued = util_est_enqueued;
+	sg_cpu->util_avg = util_avg;
+	WRITE_ONCE(sg_cpu->ramp_boost, boost);
+	return boost;
+}
+
 /**
  * get_next_freq - Compute a new frequency for a given cpufreq policy.
  * @sg_policy: schedutil policy object to compute the new frequency for.
@@ -514,6 +554,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu);
 
 	util = sugov_get_util(sg_cpu);
+	sugov_cpu_ramp_boost_update(sg_cpu);
 	max = sg_cpu->max;
 	util = sugov_iowait_apply(sg_cpu, time, util, max);
 	next_f = get_next_freq(sg_policy, util, max);
@@ -554,6 +595,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		unsigned long j_util, j_max;
 
 		j_util = sugov_get_util(j_sg_cpu);
+		if (j_sg_cpu == sg_cpu)
+			sugov_cpu_ramp_boost_update(sg_cpu);
 		j_max = j_sg_cpu->max;
 		j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
 
-- 
2.24.1

