[PATCH] cpufreq: schedutil: govern how frequently we change frequency with rate_limit
From: Viresh Kumar @ 2017-02-15 17:15 UTC
  To: Rafael Wysocki, Ingo Molnar, Peter Zijlstra
  Cc: linaro-kernel, linux-pm, linux-kernel, Vincent Guittot, Viresh Kumar

For an ideal system (where frequency change doesn't incur any penalty)
we would like to change the frequency as soon as the load changes for a
CPU. But the systems we have to work with are far from ideal and it
takes time to change the frequency of a CPU. For many ARM platforms
especially, it takes at least 1 ms. In order not to spend too much time
changing frequency, we earlier introduced a sysfs-controlled tunable for
the schedutil governor: rate_limit_us.

Currently, rate_limit_us controls how frequently we reevaluate frequency
for a set of CPUs controlled by a cpufreq policy. But that may not be
the ideal behavior we want.
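
Roughly, the current check works as follows (a simplified sketch of
sugov_should_update_freq(), with the work_in_progress and
need_freq_update handling elided):

static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
{
	s64 delta_ns = time - sg_policy->last_freq_update_time;

	/* Reevaluation is allowed only once rate_limit_us has passed. */
	return delta_ns >= sg_policy->freq_update_delay_ns;
}

Here freq_update_delay_ns is simply rate_limit_us converted to
nanoseconds.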

Consider for example the following scenario. The rate_limit_us tunable
is set to 10 ms. The CPU has a constant load X and that requires the
frequency to be set to Y. The schedutil governor changes the frequency
to Y, updates last_freq_update_time and we wait for 10 ms to reevaluate
the frequency again. After 10 ms, the schedutil governor reevaluates the
load and finds it to be the same. And so it doesn't update the
frequency, but updates last_freq_update_time before returning. Right
after this point, the scheduler puts more load on the CPU and the CPU
needs to go to a higher frequency Z. Because last_freq_update_time was
updated just now, the schedutil governor waits for an additional 10 ms
before reevaluating the load again.
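
To put rough numbers on it (an illustrative timeline, not measured data):

  t =  0 ms : load X  -> frequency set to Y, last_freq_update_time = 0 ms
  t = 10 ms : load X  -> no frequency change, last_freq_update_time = 10 ms
  t > 10 ms : load rises, frequency Z is needed -> the governor still
              waits until t = 20 ms before reevaluating

So in the worst case the response to the new load is delayed by almost a
full rate_limit_us, even though the frequency itself hasn't changed for
10 ms.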

Normally, the time it takes to reevaluate the frequency is negligible
compared to the time it takes to change it. And in the above scenario,
since we haven't actually updated the frequency for over 10 ms, we should
have changed the frequency as soon as the load changed.

This patch changes the way rate_limit_us is used, i.e. it now governs
"how frequently we change the frequency" instead of "how frequently we
reevaluate the frequency".

One may think that this change increases the number of times we
reevaluate the frequency once a period of rate_limit_us has expired since
the last change, if the load isn't changing. But the scheduler protects
against that, as it normally doesn't call into the schedutil governor
again within 1 ms of the last call (hint: "decayed" in
update_cfs_rq_load_avg()).
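
As a conceptual sketch only (this is not the kernel code, and the names
load_sig and load_signal_decayed() are made up for illustration): the
PELT signal is accumulated in 1024 us periods, so a "decayed" indication,
and hence a call into the governor, can happen at most roughly once per
millisecond:

#include <stdbool.h>
#include <stdint.h>

#define PELT_PERIOD_NS	(1024 * 1000ULL)	/* one PELT period: 1024 us */

struct load_sig {
	uint64_t period_end;	/* end of the last completed 1024 us period */
};

/* Returns true ("decayed") only when a full period has elapsed. */
static bool load_signal_decayed(struct load_sig *sig, uint64_t now_ns)
{
	uint64_t delta = now_ns - sig->period_end;

	if (delta < PELT_PERIOD_NS)
		return false;

	/* Advance to the end of the most recently completed period. */
	sig->period_end += (delta / PELT_PERIOD_NS) * PELT_PERIOD_NS;
	return true;
}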

Tests were performed with this patch on a dual-cluster (same frequency
domain), octa-core ARM64 platform (Hikey). Hackbench (Debian) and
Vellamo/Galleryfling (Android) didn't show much difference in
performance with or without this patch.

It's difficult to create a test case (rt-app was tried as well) where
this patch shows a lot of improvement, as it targets a real corner case:
the current load is X (resulting in a frequency change), the load after
rate_limit_us is also X, but right after that the load becomes Y.
Undoubtedly this patch would improve the responsiveness in such cases.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/cpufreq_schedutil.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index fd4659313640..306d97e7b57c 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -92,14 +92,13 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 {
 	struct cpufreq_policy *policy = sg_policy->policy;
 
-	sg_policy->last_freq_update_time = time;
-
 	if (policy->fast_switch_enabled) {
 		if (sg_policy->next_freq == next_freq) {
 			trace_cpu_frequency(policy->cur, smp_processor_id());
 			return;
 		}
 		sg_policy->next_freq = next_freq;
+		sg_policy->last_freq_update_time = time;
 		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
 		if (next_freq == CPUFREQ_ENTRY_INVALID)
 			return;
@@ -108,6 +107,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 		trace_cpu_frequency(next_freq, smp_processor_id());
 	} else if (sg_policy->next_freq != next_freq) {
 		sg_policy->next_freq = next_freq;
+		sg_policy->last_freq_update_time = time;
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
-- 
2.7.1.410.g6faf27b


