From: Juri Lelli <juri.lelli@arm.com>
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
tglx@linutronix.de, vincent.guittot@linaro.org,
rostedt@goodmis.org, luca.abeni@santannapisa.it,
claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
bristot@redhat.com, mathieu.poirier@linaro.org,
tkjos@android.com, joelaf@google.com, andresoportus@google.com,
morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
patrick.bellasi@arm.com, juri.lelli@arm.com,
Ingo Molnar <mingo@kernel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>
Subject: [RFC PATCH v1 2/8] sched/deadline: move cpu frequency selection triggering points
Date: Wed, 5 Jul 2017 09:58:59 +0100
Message-ID: <20170705085905.6558-3-juri.lelli@arm.com>
In-Reply-To: <20170705085905.6558-1-juri.lelli@arm.com>

Since SCHED_DEADLINE doesn't track a utilization signal (it instead
reserves a fraction of CPU bandwidth for each task admitted to the
system), there is no point in evaluating frequency changes at every
tick event.

Move the frequency selection triggering points to where running_bw
changes.

Co-authored-by: Claudio Scordino <claudio@evidence.eu.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
---
Changes from RFCv0:
- modify comment regarding periodic RT updates (Claudio)
---
 kernel/sched/deadline.c |  7 ++++---
 kernel/sched/sched.h    | 12 ++++++------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a84299f44b5d..6912f7f35f9b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -85,6 +85,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -97,6 +99,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -1135,9 +1139,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3086d1..d8798bb54ace 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2057,14 +2057,14 @@ DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
  *
  * The way cpufreq is currently arranged requires it to evaluate the CPU
  * performance state (frequency/voltage) on a regular basis to prevent it from
  * being stuck in a completely inadequate performance level for too long.
- * That is not guaranteed to happen if the updates are only triggered from CFS,
- * though, because they may not be coming in if RT or deadline tasks are active
- * all the time (or there are RT and DL tasks only).
+ * That is not guaranteed to happen if the updates are only triggered from CFS
+ * and DL, though, because they may not be coming in if only RT tasks are
+ * active all the time (or there are RT tasks only).
  *
- * As a workaround for that issue, this function is called by the RT and DL
- * sched classes to trigger extra cpufreq updates to prevent it from stalling,
+ * As a workaround for that issue, this function is called periodically by the
+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
  * but that really is a band-aid. Going forward it should be replaced with
- * solutions targeted more specifically at RT and DL tasks.
+ * solutions targeted more specifically at RT tasks.
  */
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {
--
2.11.0