From: Thara Gopinath <thara.gopinath@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, ionela.voinescu@arm.com,
vincent.guittot@linaro.org, rui.zhang@intel.com,
edubezval@gmail.com, qperret@google.com
Cc: linux-kernel@vger.kernel.org, amit.kachhap@gmail.com,
javi.merino@kernel.org, daniel.lezcano@linaro.org
Subject: [Patch v5 2/6] sched/fair: Add infrastructure to store and update instantaneous thermal pressure
Date: Tue, 5 Nov 2019 13:49:42 -0500 [thread overview]
Message-ID: <1572979786-20361-3-git-send-email-thara.gopinath@linaro.org> (raw)
In-Reply-To: <1572979786-20361-1-git-send-email-thara.gopinath@linaro.org>
Add interface APIs to initialize, update/average, track, accumulate
and decay thermal pressure on a per cpu basis. A per cpu variable
thermal_pressure is introduced to keep track of the instantaneous per
cpu thermal pressure. Thermal pressure is the delta between the maximum
capacity and the capped capacity due to a thermal event.
The API trigger_thermal_pressure_average is called for periodic
accumulation and decay of the thermal pressure. It passes the
instantaneous thermal pressure of a cpu on to update_thermal_load_avg,
which does the necessary accumulation, decay and averaging.
The API update_thermal_pressure is for the system to update the thermal
pressure by providing a capped maximum capacity.
Since trigger_thermal_pressure_average reads thermal_pressure and
update_thermal_pressure writes into thermal_pressure, one could argue
for some sort of locking mechanism to avoid reading a stale value.
However, trigger_thermal_pressure_average can be called from a system
critical path like the scheduler tick function, where a locking
mechanism is not ideal. This means it is possible that the
thermal_pressure value used to calculate the average thermal pressure
for a cpu can be stale for up to one tick period.
Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
v3->v4:
- Dropped the per cpu max_capacity_info struct and instead added a per
cpu delta_capacity variable to store the delta between the maximum
capacity and the capped capacity. The delta is now calculated when
thermal pressure is updated and not on every tick.
- Dropped the populate_max_capacity_info API as only the per cpu
delta capacity is stored.
- Renamed update_periodic_maxcap to
trigger_thermal_pressure_average and update_maxcap_capacity to
update_thermal_pressure.
v4->v5:
- As per Peter's review comments, folded thermal.c into fair.c.
- As per Ionela's review comments, revamped update_thermal_pressure
to take the maximum available capacity as input instead of a maximum
capped frequency ratio.
---
include/linux/sched.h | 9 +++++++++
kernel/sched/fair.c | 37 +++++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 263cf08..3c31084 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1993,6 +1993,15 @@ static inline void rseq_syscall(struct pt_regs *regs)
#endif
+#ifdef CONFIG_SMP
+void update_thermal_pressure(int cpu, unsigned long capped_capacity);
+#else
+static inline void
+update_thermal_pressure(int cpu, unsigned long capped_capacity)
+{
+}
+#endif
+
const struct sched_avg *sched_trace_cfs_rq_avg(struct cfs_rq *cfs_rq);
char *sched_trace_cfs_rq_path(struct cfs_rq *cfs_rq, char *str, int len);
int sched_trace_cfs_rq_cpu(struct cfs_rq *cfs_rq);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 682a754..2e907cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -86,6 +86,12 @@ static unsigned int normalized_sysctl_sched_wakeup_granularity = 1000000UL;
const_debug unsigned int sysctl_sched_migration_cost = 500000UL;
+/*
+ * Per-cpu instantaneous delta between maximum capacity
+ * and maximum available capacity due to thermal events.
+ */
+static DEFINE_PER_CPU(unsigned long, thermal_pressure);
+
#ifdef CONFIG_SMP
/*
* For asym packing, by default the lower numbered CPU has higher priority.
@@ -10401,6 +10407,37 @@ static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task
return rr_interval;
}
+#ifdef CONFIG_SMP
+/**
+ * update_thermal_pressure: Update thermal pressure
+ * @cpu: the cpu for which thermal pressure is to be updated
+ * @capped_capacity: maximum capacity of the cpu after the capping
+ * due to a thermal event.
+ *
+ * The delta between arch_scale_cpu_capacity and the capped max capacity is
+ * stored in the per cpu thermal_pressure variable.
+ */
+void update_thermal_pressure(int cpu, unsigned long capped_capacity)
+{
+ unsigned long delta;
+
+ delta = arch_scale_cpu_capacity(cpu) - capped_capacity;
+ per_cpu(thermal_pressure, cpu) = delta;
+}
+#endif
+
+/**
+ * trigger_thermal_pressure_average: Trigger the thermal pressure accumulate
+ * and average algorithm
+ */
+static void trigger_thermal_pressure_average(struct rq *rq)
+{
+#ifdef CONFIG_SMP
+ update_thermal_load_avg(rq_clock_task(rq), rq,
+ per_cpu(thermal_pressure, cpu_of(rq)));
+#endif
+}
+
/*
* All the scheduling class methods:
*/
--
2.1.4
2019-11-05 18:49 [Patch v5 0/6] Introduce Thermal Pressure Thara Gopinath
2019-11-05 18:49 ` [Patch v5 1/6] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
2019-11-06 8:24 ` Vincent Guittot
2019-11-06 12:50 ` Dietmar Eggemann
2019-11-06 17:00 ` Thara Gopinath
2019-11-07 16:39 ` Qais Yousef
2019-11-19 10:50 ` Amit Kucheria
2019-11-05 18:49 ` Thara Gopinath [this message]
2019-11-05 20:21 ` [Patch v5 2/6] sched/fair: Add infrastructure to store and update instantaneous " Ionela Voinescu
2019-11-05 21:02 ` Thara Gopinath
2019-11-05 21:15 ` Ionela Voinescu
2019-11-05 21:29 ` Thara Gopinath
2019-11-05 21:53 ` Ionela Voinescu
2019-11-06 12:50 ` Dietmar Eggemann
2019-11-06 17:53 ` Thara Gopinath
2019-11-07 9:32 ` Dietmar Eggemann
2019-11-07 10:48 ` Vincent Guittot
2019-11-07 11:36 ` Dietmar Eggemann
2019-11-06 8:27 ` Vincent Guittot
2019-11-06 17:00 ` Thara Gopinath
2019-11-19 10:51 ` Amit Kucheria
2019-11-05 18:49 ` [Patch v5 3/6] sched/fair: Enable periodic update of average " Thara Gopinath
2019-11-06 8:32 ` Vincent Guittot
2019-11-06 17:01 ` Thara Gopinath
2019-11-05 18:49 ` [Patch v5 4/6] sched/fair: update cpu_capcity to reflect " Thara Gopinath
2019-11-06 16:56 ` Qais Yousef
2019-11-06 17:31 ` Thara Gopinath
2019-11-06 17:41 ` Qais Yousef
2019-11-19 10:51 ` Amit Kucheria
2019-11-05 18:49 ` [Patch v5 5/6] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
2019-11-06 12:50 ` Dietmar Eggemann
2019-11-06 17:28 ` Thara Gopinath
2019-11-07 13:00 ` Dietmar Eggemann
2019-11-05 18:49 ` [Patch v5 6/6] sched/fair: Enable tuning of decay period Thara Gopinath
2019-11-07 10:49 ` Vincent Guittot
2019-11-08 10:53 ` Dietmar Eggemann
2019-11-19 10:52 ` Amit Kucheria
2019-11-12 11:21 ` [Patch v5 0/6] Introduce Thermal Pressure Lukasz Luba
2019-11-19 15:12 ` Lukasz Luba
2019-11-19 10:54 ` Amit Kucheria