From: Peter Zijlstra <peterz@infradead.org>
To: Quentin Perret <quentin.perret@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
mingo@kernel.org, linux-kernel@vger.kernel.org,
rjw@rjwysocki.net, juri.lelli@redhat.com,
dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
viresh.kumar@linaro.org, valentin.schneider@arm.com,
patrick.bellasi@arm.com, joel@joelfernandes.org,
daniel.lezcano@linaro.org, Ingo Molnar <mingo@redhat.com>
Subject: Re: [PATCH v6 04/11] cpufreq/schedutil: use rt utilization tracking
Date: Fri, 22 Jun 2018 17:22:58 +0200 [thread overview]
Message-ID: <20180622152258.GF2512@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20180622113713.GJ2494@hirez.programming.kicks-ass.net>
On Fri, Jun 22, 2018 at 01:37:13PM +0200, Peter Zijlstra wrote:
> That is true.. So we could limit the scaling to the case where there is
> no idle time, something like:
>
> util = sg_cpu->util_cfs;
>
> cap_cfs = (1024 - (sg_cpu->util_rt + ...));
> if (util == cap_cfs)
> util = sg_cpu->max;
>
OK, it appears this is more or less what the patches do. And I think
there's a small risk/hole with this where util is not exactly equal to
cap_cfs but very close to it, due to some unaccounted time.
FWIW, when looking, I saw no reason why sugov_get_util() and
sugov_aggregate_util() were in fact separate functions.
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -53,11 +53,7 @@ struct sugov_cpu {
unsigned int iowait_boost_max;
u64 last_update;
- /* The fields below are only needed when sharing a policy: */
- unsigned long util_cfs;
unsigned long util_dl;
- unsigned long bw_dl;
- unsigned long util_rt;
unsigned long max;
/* The field below is for single-CPU policies only: */
@@ -181,44 +177,38 @@ static unsigned int get_next_freq(struct
return cpufreq_driver_resolve_freq(policy, freq);
}
-static void sugov_get_util(struct sugov_cpu *sg_cpu)
+static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
{
struct rq *rq = cpu_rq(sg_cpu->cpu);
+ unsigned long util, max;
- sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
- sg_cpu->util_cfs = cpu_util_cfs(rq);
- sg_cpu->util_dl = cpu_util_dl(rq);
- sg_cpu->bw_dl = cpu_bw_dl(rq);
- sg_cpu->util_rt = cpu_util_rt(rq);
-}
-
-static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
-{
- struct rq *rq = cpu_rq(sg_cpu->cpu);
- unsigned long util;
+ sg_cpu->max = max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
+ sg_cpu->util_dl = cpu_util_dl(rq);
if (rq->rt.rt_nr_running)
- return sg_cpu->max;
+ return max;
- util = sg_cpu->util_cfs;
- util += sg_cpu->util_rt;
+ util = cpu_util_cfs(rq);
+ util += cpu_util_rt(rq);
- if ((util + sg_cpu->util_dl) >= sg_cpu->max)
- return sg_cpu->max;
+ /*
+ * If there is no idle time, we should run at max frequency.
+ */
+ if ((util + cpu_util_dl(rq)) >= max)
+ return max;
/*
- * As there is still idle time on the CPU, we need to compute the
- * utilization level of the CPU.
* Bandwidth required by DEADLINE must always be granted while, for
* FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
* to gracefully reduce the frequency when no tasks show up for longer
* periods of time.
+ *
+ * Ideally we would like to set bw_dl as min/guaranteed freq and bw_dl
+ * + util as requested freq. However, cpufreq is not yet ready for such
+ * an interface. So, we only do the latter for now.
*/
- /* Add DL bandwidth requirement */
- util += sg_cpu->bw_dl;
-
- return min(sg_cpu->max, util);
+ return min(max, cpu_bw_dl(rq) + util);
}
/**
@@ -396,9 +386,8 @@ static void sugov_update_single(struct u
busy = sugov_cpu_is_busy(sg_cpu);
- sugov_get_util(sg_cpu);
+ util = sugov_get_util(sg_cpu);
max = sg_cpu->max;
- util = sugov_aggregate_util(sg_cpu);
sugov_iowait_apply(sg_cpu, time, &util, &max);
next_f = get_next_freq(sg_policy, util, max);
/*
@@ -437,9 +426,8 @@ static unsigned int sugov_next_freq_shar
struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
unsigned long j_util, j_max;
- sugov_get_util(j_sg_cpu);
+ j_util = sugov_get_util(j_sg_cpu);
j_max = j_sg_cpu->max;
- j_util = sugov_aggregate_util(j_sg_cpu);
sugov_iowait_apply(j_sg_cpu, time, &j_util, &j_max);
if (j_util * max > j_max * util) {