* [Patch v6 0/7] Introduce Thermal Pressure
@ 2019-12-12  4:11 Thara Gopinath
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
                   ` (7 more replies)
  0 siblings, 8 replies; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Thermal governors can respond to an overheat event of a cpu by
capping the cpu's maximum possible frequency. This in turn
means that the maximum available compute capacity of the
cpu is restricted. But today in the kernel, the task scheduler is
not notified when the maximum frequency of a cpu is capped.
In other words, the scheduler is unaware of maximum capacity
restrictions placed on a cpu due to thermal activity.
This patch series attempts to address this issue.
The benefit identified is better task placement among available
cpus in the event of overheating, which in turn leads to better
performance numbers.

The reduction in the maximum possible capacity of a cpu due to a
thermal event can be considered as thermal pressure. Instantaneous
thermal pressure is hard to record and can sometimes be erroneous
as there can be a mismatch between the actual capping of capacity
and the scheduler recording it. Thus the solution is to have a
weighted average per cpu value for thermal pressure over time.
The weight reflects the amount of time the cpu has spent at a
capped maximum frequency. Since thermal pressure is recorded as
an average, it must be decayed periodically. The existing algorithm
in the kernel scheduler PELT framework is re-used to calculate
the weighted average. This patch series also defines a kernel
command line parameter to allow for a configurable decay period.
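
To make this concrete, here is a minimal illustrative sketch (not part of
the series; the helper name is made up) of how the instantaneous pressure
value feeding this average is derived from a capped frequency, using the
same arithmetic as the cpufreq cooling device in patch 6/7 and
arch_set_thermal_pressure() in patch 3/7:

/*
 * Sketch only: thermal pressure is the capacity lost to a thermal cap,
 * i.e. the delta between the original maximum capacity of a cpu and the
 * capacity corresponding to the thermally capped frequency.
 */
static unsigned long sketch_thermal_pressure(unsigned long max_capacity,
					     unsigned int capped_freq,
					     unsigned int max_freq)
{
	unsigned long capped_capacity;

	/* Scale the maximum capacity down to the capped frequency. */
	capped_capacity = (unsigned long)capped_freq * max_capacity / max_freq;

	/* The pressure is whatever capacity the thermal cap took away. */
	return max_capacity - capped_capacity;
}

This instantaneous value is then accumulated and decayed by the PELT
machinery; the decay window can be widened in powers of two with the
sched_thermal_decay_shift= boot parameter added in patch 7/7 (for
example, a shift of 2 selects a 128 ms decay period instead of the
default 32 ms).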

Regarding testing, basic build, boot and sanity testing have been
performed on a db845c platform with a Debian file system.
Further, dhrystone and hackbench tests have been
run with the thermal pressure algorithm. During testing, due to
constraints of the step-wise governor in dealing with big.LITTLE systems,
the trip point 0 temperature was made asymmetric between the cpus in the
little cluster and the big cluster; the idea being that the
big cores will heat up and the cpu cooling device will throttle the
frequency of the big cores faster, thereby limiting the maximum available
capacity, and the scheduler will spread out tasks to the little cores as
well.

Test Results

Hackbench: 1 group , 30000 loops, 10 runs
                                               Result         SD
                                               (Secs)     (% of mean)
 No Thermal Pressure                            14.03       2.69%
 Thermal Pressure PELT Algo. Decay : 32 ms      13.29       0.56%
 Thermal Pressure PELT Algo. Decay : 64 ms      12.57       1.56%
 Thermal Pressure PELT Algo. Decay : 128 ms     12.71       1.04%
 Thermal Pressure PELT Algo. Decay : 256 ms     12.29       1.42%
 Thermal Pressure PELT Algo. Decay : 512 ms     12.42       1.15%

Dhrystone Run Time  : 20 threads, 3000 MLOOPS
                                                 Result      SD
                                                 (Secs)    (% of mean)
 No Thermal Pressure                              9.452      4.49%
 Thermal Pressure PELT Algo. Decay : 32 ms        8.793      5.30%
 Thermal Pressure PELT Algo. Decay : 64 ms        8.981      5.29%
 Thermal Pressure PELT Algo. Decay : 128 ms       8.647      6.62%
 Thermal Pressure PELT Algo. Decay : 256 ms       8.774      6.45%
 Thermal Pressure PELT Algo. Decay : 512 ms       8.603      5.41%

A Brief History

The first version of this patch series was posted re-using the
PELT algorithm to decay the thermal pressure signal. The discussions
that followed were around whether an instantaneous thermal pressure
solution is better and whether a stand-alone algorithm to accumulate
and decay thermal pressure is more appropriate than re-using the
PELT framework.
Tests on Hikey960 showed the stand-alone algorithm performing slightly
better than re-using the PELT algorithm, and v2 was posted with the
stand-alone algorithm. Test results were shared as part of that series.
Discussions were around re-using the PELT algorithm and running
further tests with more granular decay periods.

For some time after this, development was impeded due to hardware
unavailability and some other unforeseen and possibly unfortunate events.
For this version, the h/w was switched from Hikey960 to db845c.
Also, instantaneous thermal pressure was never tested as part of this
cycle as it is clear that a weighted average is a better implementation.
In this round of testing, the non-PELT algorithm never gave any
conclusive results to prove that it is better than re-using the PELT
algorithm. Also, re-using the PELT algorithm means thermal pressure
tracks the other utilization signals in the scheduler.

v3->v4:
        - "Patch 3/7:sched: Initialize per cpu thermal pressure structure"
           is dropped as it is no longer needed following changes in the
           other patches.
        - rest of the change log mentioned in specific patches.

v5->v6:
	- Added arch_ interface APIs to access and update thermal pressure.
	  Moved the declaration of the per cpu thermal_pressure variable and
	  the infrastructure to update the variable to the topology files.

Thara Gopinath (7):
  sched/pelt.c: Add support to track thermal pressure
  sched: Add hook to read per cpu thermal pressure.
  Add infrastructure to store and update instantaneous thermal pressure
  sched/fair: Enable periodic update of average thermal pressure
  sched/fair: update cpu_capacity to reflect thermal pressure
  thermal/cpu-cooling: Update thermal pressure in case of a maximum
    frequency capping
  sched/fair: Enable tuning of decay period

 Documentation/admin-guide/kernel-parameters.txt |  5 +++
 arch/arm/include/asm/topology.h                 |  3 ++
 arch/arm64/include/asm/topology.h               |  3 ++
 drivers/base/arch_topology.c                    | 13 +++++++
 drivers/thermal/cpu_cooling.c                   | 19 +++++++++--
 include/linux/arch_topology.h                   | 11 ++++++
 include/linux/sched/topology.h                  |  8 +++++
 include/trace/events/sched.h                    |  4 +++
 kernel/sched/fair.c                             | 45 +++++++++++++++++++++++++
 kernel/sched/pelt.c                             | 22 ++++++++++++
 kernel/sched/pelt.h                             |  7 ++++
 kernel/sched/sched.h                            |  1 +
 12 files changed, 139 insertions(+), 2 deletions(-)

-- 
2.1.4



* [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-16 13:43   ` Peter Zijlstra
                     ` (2 more replies)
  2019-12-12  4:11 ` [Patch v6 2/7] sched: Add hook to read per cpu " Thara Gopinath
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Extrapolating on the existing framework to track rt/dl utilization using
PELT signals, add a similar mechanism to track thermal pressure. The
difference here from rt/dl utilization tracking is that, instead of
tracking time spent by a cpu running a rt/dl task through util_avg, the
average thermal pressure is tracked through load_avg. This is because the
thermal pressure signal is a weighted "delta" capacity and is not
binary (util_avg is binary). "Delta capacity" here means the delta between
the actual capacity of a cpu and the decreased capacity of a cpu due to a
thermal event.

In order to track average thermal pressure, a new sched_avg variable
avg_thermal is introduced. The function update_thermal_load_avg can be called
to do the periodic bookkeeping (accumulate, decay and average) of the
thermal pressure.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---

v5->v6:
	- added trace support for thermal pressure pelt signal.

 include/trace/events/sched.h |  4 ++++
 kernel/sched/pelt.c          | 22 ++++++++++++++++++++++
 kernel/sched/pelt.h          |  7 +++++++
 kernel/sched/sched.h         |  1 +
 4 files changed, 34 insertions(+)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 420e80e..a8fb667 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -613,6 +613,10 @@ DECLARE_TRACE(pelt_dl_tp,
 	TP_PROTO(struct rq *rq),
 	TP_ARGS(rq));
 
+DECLARE_TRACE(pelt_thermal_tp,
+	TP_PROTO(struct rq *rq),
+	TP_ARGS(rq));
+
 DECLARE_TRACE(pelt_irq_tp,
 	TP_PROTO(struct rq *rq),
 	TP_ARGS(rq));
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index a96db50..9aac3b7 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -353,6 +353,28 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 	return 0;
 }
 
+/*
+ * thermal:
+ *
+ *   load_sum = \Sum se->avg.load_sum
+ *
+ *   util_avg and runnable_load_avg are not supported and meaningless.
+ *
+ */
+int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+{
+	if (___update_load_sum(now, &rq->avg_thermal,
+			       capacity,
+			       capacity,
+			       capacity)) {
+		___update_load_avg(&rq->avg_thermal, 1, 1);
+		trace_pelt_thermal_tp(rq);
+		return 1;
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
 /*
  * irq:
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index afff644..c74226d 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -6,6 +6,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
 int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
 
 #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
 int update_irq_load_avg(struct rq *rq, u64 running);
@@ -159,6 +160,12 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 }
 
 static inline int
+update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+{
+	return 0;
+}
+
+static inline int
 update_irq_load_avg(struct rq *rq, u64 running)
 {
 	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 280a3c7..37bd7ef 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -944,6 +944,7 @@ struct rq {
 #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
 	struct sched_avg	avg_irq;
 #endif
+	struct sched_avg	avg_thermal;
 	u64			idle_stamp;
 	u64			avg_idle;
 
-- 
2.1.4



* [Patch v6 2/7] sched: Add hook to read per cpu thermal pressure.
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-16 14:35   ` Peter Zijlstra
  2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Introduce arch_scale_thermal_capacity to retrieve per cpu thermal
pressure.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 include/linux/sched/topology.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index f341163..f1e22f9 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -225,6 +225,14 @@ unsigned long arch_scale_cpu_capacity(int cpu)
 }
 #endif
 
+#ifndef arch_scale_thermal_capacity
+static __always_inline
+unsigned long arch_scale_thermal_capacity(int cpu)
+{
+	return 0;
+}
+#endif
+
 static inline int task_node(const struct task_struct *p)
 {
 	return cpu_to_node(task_cpu(p));
-- 
2.1.4



* [Patch v6 3/7] Add infrastructure to store and update instantaneous thermal pressure
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
  2019-12-12  4:11 ` [Patch v6 2/7] sched: Add hook to read per cpu " Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-16 14:37   ` Peter Zijlstra
                     ` (2 more replies)
  2019-12-12  4:11 ` [Patch v6 4/7] sched/fair: Enable periodic update of average " Thara Gopinath
                   ` (4 subsequent siblings)
  7 siblings, 3 replies; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Add architecture specific APIs to update and track thermal pressure on a
per cpu basis. A per cpu variable thermal_pressure is introduced to keep
track of instantaneous per cpu thermal pressure. Thermal pressure is the
delta between maximum capacity and capped capacity due to a thermal event.
capacity and capped capacity due to a thermal event.

topology_get_thermal_pressure can be hooked into the scheduler specified
arch_scale_thermal_capacity to retrieve instantaneius thermal pressure of
a cpu.

arch_set_thermal_presure can be used to update the thermal pressure by
providing a capped maximum capacity.

Considering topology_get_thermal_pressure reads thermal_pressure and
arch_set_thermal_pressure writes into thermal_pressure, one can argue for
some sort of locking mechanism to avoid a stale value.  But considering
topology_get_thermal_pressure_average can be called from a system critical
path like scheduler tick function, a locking mechanism is not ideal. This
means that it is possible the thermal_pressure value used to calculate
average thermal pressure for a cpu can be stale for upto 1 tick period.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v3->v4:
        - Dropped the per cpu max_capacity_info struct and instead added a
          per cpu delta_capacity variable to store the delta between maximum
          capacity and capped capacity. The delta is now calculated when
          thermal pressure is updated and not every tick.
        - Dropped populate_max_capacity_info api as only per cpu delta
          capacity is stored.
        - Renamed update_periodic_maxcap to
          trigger_thermal_pressure_average and update_maxcap_capacity to
          update_thermal_pressure.
v4->v5:
	- As per Peter's review comments folded thermal.c into fair.c.
	- As per Ionela's review comments revamped update_thermal_pressure
	  to take maximum available capacity as input instead of maximum
	  capped frequency ratio.
v5->v6:
	- As per review comments moved all the infrastructure to track
	  and retrieve instantaneous thermal pressure out of scheduler
	  to topology files.

 arch/arm/include/asm/topology.h   |  3 +++
 arch/arm64/include/asm/topology.h |  3 +++
 drivers/base/arch_topology.c      | 13 +++++++++++++
 include/linux/arch_topology.h     | 11 +++++++++++
 4 files changed, 30 insertions(+)

diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index 8a0fae9..90b18c3 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -16,6 +16,9 @@
 /* Enable topology flag updates */
 #define arch_update_cpu_topology topology_update_cpu_topology
 
+/* Replace task scheduler's defalut thermal pressure retrieve API */
+#define arch_scale_thermal_capacity topology_get_thermal_pressure
+
 #else
 
 static inline void init_cpu_topology(void) { }
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
index a4d945d..ccb277b 100644
--- a/arch/arm64/include/asm/topology.h
+++ b/arch/arm64/include/asm/topology.h
@@ -25,6 +25,9 @@ int pcibus_to_node(struct pci_bus *bus);
 /* Enable topology flag updates */
 #define arch_update_cpu_topology topology_update_cpu_topology
 
+/* Replace task scheduler's defalut thermal pressure retrieve API */
+#define arch_scale_thermal_capacity topology_get_thermal_pressure
+
 #include <asm-generic/topology.h>
 
 #endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 1eb81f11..3a91379 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -42,6 +42,19 @@ void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
 	per_cpu(cpu_scale, cpu) = capacity;
 }
 
+DEFINE_PER_CPU(unsigned long, thermal_pressure);
+
+void arch_set_thermal_pressure(struct cpumask *cpus,
+			       unsigned long capped_capacity)
+{
+	int cpu;
+	unsigned long delta = arch_scale_cpu_capacity(cpumask_first(cpus)) -
+					capped_capacity;
+
+	for_each_cpu(cpu, cpus)
+		WRITE_ONCE(per_cpu(thermal_pressure, cpu), delta);
+}
+
 static ssize_t cpu_capacity_show(struct device *dev,
 				 struct device_attribute *attr,
 				 char *buf)
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 3015ecb..7a04364 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -33,6 +33,17 @@ unsigned long topology_get_freq_scale(int cpu)
 	return per_cpu(freq_scale, cpu);
 }
 
+DECLARE_PER_CPU(unsigned long, thermal_pressure);
+
+static inline
+unsigned long topology_get_thermal_pressure(int cpu)
+{
+	return per_cpu(thermal_pressure, cpu);
+}
+
+void arch_set_thermal_pressure(struct cpumask *cpus,
+			       unsigned long capped_capacity);
+
 struct cpu_topology {
 	int thread_id;
 	int core_id;
-- 
2.1.4



* [Patch v6 4/7] sched/fair: Enable periodic update of average thermal pressure
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
                   ` (2 preceding siblings ...)
  2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-16 14:39   ` Peter Zijlstra
  2019-12-12  4:11 ` [Patch v6 5/7] sched/fair: update cpu_capacity to reflect " Thara Gopinath
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Introduce support in CFS periodic tick and other bookkeeping apis
to trigger the process of computing average thermal pressure for a
cpu. Also consider avg_thermal.load_avg in others_have_blocked
which allows for decay of pelt signals.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

v4->v5:
	- Updated both versions of update_blocked_averages to trigger the
	  process of computing average thermal pressure.
	- Updated others_have_blocked to consider avg_thermal.load_avg.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e..e12a375 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7462,6 +7462,9 @@ static inline bool others_have_blocked(struct rq *rq)
 	if (READ_ONCE(rq->avg_dl.util_avg))
 		return true;
 
+	if (READ_ONCE(rq->avg_thermal.load_avg))
+		return true;
+
 #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
 	if (READ_ONCE(rq->avg_irq.util_avg))
 		return true;
@@ -7487,6 +7490,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 {
 	const struct sched_class *curr_class;
 	u64 now = rq_clock_pelt(rq);
+	unsigned long thermal_pressure = arch_scale_thermal_capacity(cpu_of(rq));
 	bool decayed;
 
 	/*
@@ -7497,6 +7501,8 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 
 	decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
 		  update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
+		  update_thermal_load_avg(rq_clock_task(rq), rq,
+					  thermal_pressure) 			|
 		  update_irq_load_avg(rq, 0);
 
 	if (others_have_blocked(rq))
@@ -10263,6 +10269,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 {
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &curr->se;
+	unsigned long thermal_pressure = arch_scale_thermal_capacity(cpu_of(rq));
 
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
@@ -10274,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 
 	update_misfit_status(curr, rq);
 	update_overutilized_status(task_rq(curr));
+	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
 }
 
 /*
-- 
2.1.4



* [Patch v6 5/7] sched/fair: update cpu_capacity to reflect thermal pressure
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
                   ` (3 preceding siblings ...)
  2019-12-12  4:11 ` [Patch v6 4/7] sched/fair: Enable periodic update of average " Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-17 12:54   ` Dietmar Eggemann
  2019-12-23 17:52   ` Ionela Voinescu
  2019-12-12  4:11 ` [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

cpu_capacity relflects the maximum available capacity of a cpu. Thermal
pressure on a cpu means this maximum available capacity is reduced. This
patch subtracts the average thermal pressure for a cpu from its maximum
available capacity so that cpu_capacity reflects the actual
available capacity.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---
 kernel/sched/fair.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e12a375..4840655 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7725,8 +7725,15 @@ static unsigned long scale_rt_capacity(struct sched_domain *sd, int cpu)
 	if (unlikely(irq >= max))
 		return 1;
 
+	/*
+	 * avg_rt.util avg and avg_dl.util track binary signals
+	 * (running and not running) with weights 0 and 1024 respectively.
+	 * avg_thermal.load_avg tracks thermal pressure and the weighted
+	 * average uses the actual delta max capacity.
+	 */
 	used = READ_ONCE(rq->avg_rt.util_avg);
 	used += READ_ONCE(rq->avg_dl.util_avg);
+	used += READ_ONCE(rq->avg_thermal.load_avg);
 
 	if (unlikely(used >= max))
 		return 1;
-- 
2.1.4



* [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
                   ` (4 preceding siblings ...)
  2019-12-12  4:11 ` [Patch v6 5/7] sched/fair: update cpu_capacity to reflect " Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-23 17:54   ` Ionela Voinescu
  2019-12-12  4:11 ` [Patch v6 7/7] sched/fair: Enable tuning of decay period Thara Gopinath
  2019-12-16 14:56 ` [Patch v6 0/7] Introduce Thermal Pressure Peter Zijlstra
  7 siblings, 1 reply; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Thermal governors can request a cpu's maximum supported frequency to
be capped in case of an overheat event. This in turn means that the
maximum capacity available for tasks to run on the particular cpu is
reduced. The delta between the original maximum capacity and the capped
maximum capacity is known as thermal pressure. Enable the cpufreq cooling
device to update the thermal pressure in the event of a capped maximum
frequency.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v4->v5:
	- fixed issues in update_sched_max_capacity comment header.
	- Updated update_sched_max_capacity to calculate maximum available
	  capacity.
v5->v6:
	- Removed update_sched_max_capacity. Instead call directly into
	  arch_set_thermal_pressure to update thermal pressure.

 drivers/thermal/cpu_cooling.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
index 52569b2..c97c13e 100644
--- a/drivers/thermal/cpu_cooling.c
+++ b/drivers/thermal/cpu_cooling.c
@@ -430,6 +430,10 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
 				 unsigned long state)
 {
 	struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
+	struct cpumask *cpus;
+	unsigned int frequency;
+	unsigned long capacity;
+	int ret;
 
 	/* Request state should be less than max_level */
 	if (WARN_ON(state > cpufreq_cdev->max_level))
@@ -441,8 +445,19 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
 
 	cpufreq_cdev->cpufreq_state = state;
 
-	return freq_qos_update_request(&cpufreq_cdev->qos_req,
-				get_state_freq(cpufreq_cdev, state));
+	frequency = get_state_freq(cpufreq_cdev, state);
+
+	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+
+	if (ret > 0) {
+		cpus = cpufreq_cdev->policy->cpus;
+		capacity = frequency *
+				arch_scale_cpu_capacity(cpumask_first(cpus));
+		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
+		arch_set_thermal_pressure(cpus, capacity);
+	}
+
+	return ret;
 }
 
 /* Bind cpufreq callbacks to thermal cooling device ops */
-- 
2.1.4



* [Patch v6 7/7] sched/fair: Enable tuning of decay period
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
                   ` (5 preceding siblings ...)
  2019-12-12  4:11 ` [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
@ 2019-12-12  4:11 ` Thara Gopinath
  2019-12-23 17:55   ` Ionela Voinescu
  2019-12-16 14:56 ` [Patch v6 0/7] Introduce Thermal Pressure Peter Zijlstra
  7 siblings, 1 reply; 28+ messages in thread
From: Thara Gopinath @ 2019-12-12  4:11 UTC (permalink / raw)
  To: mingo, peterz, ionela.voinescu, vincent.guittot, rui.zhang,
	qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

Thermal pressure follows PELT signals, which means the decay period for
thermal pressure is the default PELT decay period. Depending on SoC
characteristics and thermal activity, it might be beneficial to decay
thermal pressure slower, but still in tune with the PELT signals. One way
to achieve this is to provide a command line parameter to set a decay
shift parameter to an integer between 0 and 10.

Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
---

v4->v5:
	- Changed _coeff to _shift as per review comments on the list.
v5->v6:
	- as per review comments introduced rq_clock_thermal to return
	  clock shifted by the decay_shift.

 Documentation/admin-guide/kernel-parameters.txt |  5 ++++
 kernel/sched/fair.c                             | 34 +++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ade4e6e..ffe456d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4330,6 +4330,11 @@
 			incurs a small amount of overhead in the scheduler
 			but is useful for debugging and performance tuning.
 
+	sched_thermal_decay_shift=
+			[KNL, SMP] Set decay shift for thermal pressure signal.
+			Format: integer between 0 and 10
+			Default is 0.
+
 	skew_tick=	[KNL] Offset the periodic timer tick per cpu to mitigate
 			xtime_lock contention on larger systems, and/or RCU lock
 			contention on all systems with CONFIG_MAXSMP set.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4840655..9154cf8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -86,6 +86,36 @@ static unsigned int normalized_sysctl_sched_wakeup_granularity	= 1000000UL;
 
 const_debug unsigned int sysctl_sched_migration_cost	= 500000UL;
 
+/**
+ * By default the decay is the default pelt decay period.
+ * The decay shift can change the decay period in
+ * multiples of 32.
+ *  Decay shift		Decay period(ms)
+ *	0			32
+ *	1			64
+ *	2			128
+ *	3			256
+ *	4			512
+ */
+static int sched_thermal_decay_shift;
+
+static inline u64 rq_clock_thermal(struct rq *rq)
+{
+	return rq_clock_task(rq) >> sched_thermal_decay_shift;
+}
+
+static int __init setup_sched_thermal_decay_shift(char *str)
+{
+	int _shift;
+
+	if (kstrtoint(str, 0, &_shift))
+		pr_warn("Unable to set scheduler thermal pressure decay shift parameter\n");
+
+	sched_thermal_decay_shift = clamp(_shift, 0, 10);
+	return 1;
+}
+__setup("sched_thermal_decay_shift=", setup_sched_thermal_decay_shift);
+
 #ifdef CONFIG_SMP
 /*
  * For asym packing, by default the lower numbered CPU has higher priority.
@@ -7501,7 +7531,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 
 	decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
 		  update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
-		  update_thermal_load_avg(rq_clock_task(rq), rq,
+		  update_thermal_load_avg(rq_clock_thermal(rq), rq,
 					  thermal_pressure) 			|
 		  update_irq_load_avg(rq, 0);
 
@@ -10288,7 +10318,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 
 	update_misfit_status(curr, rq);
 	update_overutilized_status(task_rq(curr));
-	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
+	update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
 }
 
 /*
-- 
2.1.4



* Re: [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
@ 2019-12-16 13:43   ` Peter Zijlstra
  2019-12-17 12:54   ` Dietmar Eggemann
  2019-12-23 17:56   ` Ionela Voinescu
  2 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2019-12-16 13:43 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wed, Dec 11, 2019 at 11:11:42PM -0500, Thara Gopinath wrote:
> The
> difference here from rt/dl utilization tracking is that, instead of
> tracking time spent by a cpu running a rt/dl task through util_avg, the
> average thermal pressure is tracked through load_avg. This is because the
> thermal pressure signal is a weighted "delta" capacity and is not
> binary (util_avg is binary). "Delta capacity" here means the delta between
> the actual capacity of a cpu and the decreased capacity of a cpu due to a
> thermal event.

I'm thinking that ^, would make a nice addition to that v.

> +/*
> + * thermal:
> + *
> + *   load_sum = \Sum se->avg.load_sum
> + *
> + *   util_avg and runnable_load_avg are not supported and meaningless.
> + *
> + */
> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
> +{
> +	if (___update_load_sum(now, &rq->avg_thermal,
> +			       capacity,
> +			       capacity,
> +			       capacity)) {
> +		___update_load_avg(&rq->avg_thermal, 1, 1);
> +		trace_pelt_thermal_tp(rq);
> +		return 1;
> +	}
> +
> +	return 0;
> +}


* Re: [Patch v6 2/7] sched: Add hook to read per cpu thermal pressure.
  2019-12-12  4:11 ` [Patch v6 2/7] sched: Add hook to read per cpu " Thara Gopinath
@ 2019-12-16 14:35   ` Peter Zijlstra
  2019-12-17 12:54     ` Dietmar Eggemann
  0 siblings, 1 reply; 28+ messages in thread
From: Peter Zijlstra @ 2019-12-16 14:35 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wed, Dec 11, 2019 at 11:11:43PM -0500, Thara Gopinath wrote:
> Introduce arch_scale_thermal_capacity to retrieve per cpu thermal
> pressure.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  include/linux/sched/topology.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index f341163..f1e22f9 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -225,6 +225,14 @@ unsigned long arch_scale_cpu_capacity(int cpu)
>  }
>  #endif
>  
> +#ifndef arch_scale_thermal_capacity
> +static __always_inline
> +unsigned long arch_scale_thermal_capacity(int cpu)
> +{
> +	return 0;
> +}
> +#endif

This is confusing, the return value is not a capacity, it is a reduction
in capacity. Either a comment or a better name would be appreciated.


* Re: [Patch v6 3/7] Add infrastructure to store and update instantaneous thermal pressure
  2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
@ 2019-12-16 14:37   ` Peter Zijlstra
  2019-12-17 12:54   ` Dietmar Eggemann
  2019-12-23 17:50   ` Ionela Voinescu
  2 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2019-12-16 14:37 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wed, Dec 11, 2019 at 11:11:44PM -0500, Thara Gopinath wrote:
> diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
> index 8a0fae9..90b18c3 100644
> --- a/arch/arm/include/asm/topology.h
> +++ b/arch/arm/include/asm/topology.h
> @@ -16,6 +16,9 @@
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */
> +#define arch_scale_thermal_capacity topology_get_thermal_pressure

Witness the previously observed confusion.

>  #else
>  
>  static inline void init_cpu_topology(void) { }
> diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
> index a4d945d..ccb277b 100644
> --- a/arch/arm64/include/asm/topology.h
> +++ b/arch/arm64/include/asm/topology.h
> @@ -25,6 +25,9 @@ int pcibus_to_node(struct pci_bus *bus);
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */
> +#define arch_scale_thermal_capacity topology_get_thermal_pressure
> +
>  #include <asm-generic/topology.h>
>  
>  #endif /* _ASM_ARM_TOPOLOGY_H */
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index 1eb81f11..3a91379 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -42,6 +42,19 @@ void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
>  	per_cpu(cpu_scale, cpu) = capacity;
>  }
>  
> +DEFINE_PER_CPU(unsigned long, thermal_pressure);
> +
> +void arch_set_thermal_pressure(struct cpumask *cpus,
> +			       unsigned long capped_capacity)
> +{
> +	int cpu;
> +	unsigned long delta = arch_scale_cpu_capacity(cpumask_first(cpus)) -
> +					capped_capacity;
> +
> +	for_each_cpu(cpu, cpus)
> +		WRITE_ONCE(per_cpu(thermal_pressure, cpu), delta);
> +}

Not a capacity.

>  static ssize_t cpu_capacity_show(struct device *dev,
>  				 struct device_attribute *attr,
>  				 char *buf)
> diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
> index 3015ecb..7a04364 100644
> --- a/include/linux/arch_topology.h
> +++ b/include/linux/arch_topology.h
> @@ -33,6 +33,17 @@ unsigned long topology_get_freq_scale(int cpu)
>  	return per_cpu(freq_scale, cpu);
>  }
>  
> +DECLARE_PER_CPU(unsigned long, thermal_pressure);
> +
> +static inline
> +unsigned long topology_get_thermal_pressure(int cpu)
> +{
> +	return per_cpu(thermal_pressure, cpu);
> +}


* Re: [Patch v6 4/7] sched/fair: Enable periodic update of average thermal pressure
  2019-12-12  4:11 ` [Patch v6 4/7] sched/fair: Enable periodic update of average " Thara Gopinath
@ 2019-12-16 14:39   ` Peter Zijlstra
  2019-12-16 17:59     ` Quentin Perret
  0 siblings, 1 reply; 28+ messages in thread
From: Peter Zijlstra @ 2019-12-16 14:39 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wed, Dec 11, 2019 at 11:11:45PM -0500, Thara Gopinath wrote:
> Introduce support in CFS periodic tick and other bookkeeping apis
> to trigger the process of computing average thermal pressure for a
> cpu. Also consider avg_thermal.load_avg in others_have_blocked
> which allows for decay of pelt signals.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  kernel/sched/fair.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> v4->v5:
> 	- Updated both versions of update_blocked_averages to trigger the
> 	  process of computing average thermal pressure.
> 	- Updated others_have_blocked to consider avg_thermal.load_avg.
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 08a233e..e12a375 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7462,6 +7462,9 @@ static inline bool others_have_blocked(struct rq *rq)
>  	if (READ_ONCE(rq->avg_dl.util_avg))
>  		return true;
>  
> +	if (READ_ONCE(rq->avg_thermal.load_avg))
> +		return true;
> +
>  #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
>  	if (READ_ONCE(rq->avg_irq.util_avg))
>  		return true;
> @@ -7487,6 +7490,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>  {
>  	const struct sched_class *curr_class;
>  	u64 now = rq_clock_pelt(rq);
> +	unsigned long thermal_pressure = arch_scale_thermal_capacity(cpu_of(rq));
>  	bool decayed;
>  
>  	/*
> @@ -7497,6 +7501,8 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>  
>  	decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
>  		  update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> +		  update_thermal_load_avg(rq_clock_task(rq), rq,
> +					  thermal_pressure) 			|
>  		  update_irq_load_avg(rq, 0);
>  
>  	if (others_have_blocked(rq))
> @@ -10263,6 +10269,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>  {
>  	struct cfs_rq *cfs_rq;
>  	struct sched_entity *se = &curr->se;
> +	unsigned long thermal_pressure = arch_scale_thermal_capacity(cpu_of(rq));
>  
>  	for_each_sched_entity(se) {
>  		cfs_rq = cfs_rq_of(se);
> @@ -10274,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>  
>  	update_misfit_status(curr, rq);
>  	update_overutilized_status(task_rq(curr));
> +	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
>  }

My objection here is that when the arch does not have support for it,
there is still code generated and runtime overhead associated with it.


* Re: [Patch v6 0/7] Introduce Thermal Pressure
  2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
                   ` (6 preceding siblings ...)
  2019-12-12  4:11 ` [Patch v6 7/7] sched/fair: Enable tuning of decay period Thara Gopinath
@ 2019-12-16 14:56 ` Peter Zijlstra
  2020-01-11 15:04   ` Thara Gopinath
  7 siblings, 1 reply; 28+ messages in thread
From: Peter Zijlstra @ 2019-12-16 14:56 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wed, Dec 11, 2019 at 11:11:41PM -0500, Thara Gopinath wrote:
> Test Results
> 
> Hackbench: 1 group , 30000 loops, 10 runs
>                                                Result         SD
>                                                (Secs)     (% of mean)
>  No Thermal Pressure                            14.03       2.69%
>  Thermal Pressure PELT Algo. Decay : 32 ms      13.29       0.56%
>  Thermal Pressure PELT Algo. Decay : 64 ms      12.57       1.56%
>  Thermal Pressure PELT Algo. Decay : 128 ms     12.71       1.04%
>  Thermal Pressure PELT Algo. Decay : 256 ms     12.29       1.42%
>  Thermal Pressure PELT Algo. Decay : 512 ms     12.42       1.15%
> 
> Dhrystone Run Time  : 20 threads, 3000 MLOOPS
>                                                  Result      SD
>                                                  (Secs)    (% of mean)
>  No Thermal Pressure                              9.452      4.49%
>  Thermal Pressure PELT Algo. Decay : 32 ms        8.793      5.30%
>  Thermal Pressure PELT Algo. Decay : 64 ms        8.981      5.29%
>  Thermal Pressure PELT Algo. Decay : 128 ms       8.647      6.62%
>  Thermal Pressure PELT Algo. Decay : 256 ms       8.774      6.45%
>  Thermal Pressure PELT Algo. Decay : 512 ms       8.603      5.41%

What is the conclusion, if any, from these results? Clearly thermal
pressure seems to help, but what window? ISTR we default to 32ms, which
is a wash for dhrystone, but sub-optimal for hackbench.


Anyway, the patches look more or less acceptable, just a bunch of nits,
the biggest being the fact that even if an architecture does not support
this there is still the code and runtime overhead.


* Re: [Patch v6 4/7] sched/fair: Enable periodic update of average thermal pressure
  2019-12-16 14:39   ` Peter Zijlstra
@ 2019-12-16 17:59     ` Quentin Perret
  2019-12-17 12:57       ` Dietmar Eggemann
  0 siblings, 1 reply; 28+ messages in thread
From: Quentin Perret @ 2019-12-16 17:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thara Gopinath, mingo, ionela.voinescu, vincent.guittot,
	rui.zhang, daniel.lezcano, viresh.kumar, linux-kernel,
	amit.kachhap, javi.merino, amit.kucheria

On Monday 16 Dec 2019 at 15:39:32 (+0100), Peter Zijlstra wrote:
> > @@ -10274,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
> >  
> >  	update_misfit_status(curr, rq);
> >  	update_overutilized_status(task_rq(curr));
> > +	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
> >  }
> 
> My objection here is that when the arch does not have support for it,
> there is still code generated and runtime overhead associated with it.

I guess this function could be stubbed for CONFIG_CPU_THERMAL=n ?
That is, reflecting the thermal pressure in the scheduler only makes
sense when the thermal infrastructure is enabled to begin with (which is
indeed not the case for most archs).

Thanks,
Quentin
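
A minimal sketch of such a stub, modelled on the CONFIG_HAVE_SCHED_AVG_IRQ
handling already present in kernel/sched/pelt.h (the CONFIG_CPU_THERMAL
guard is just the suggestion above, not a settled choice in this thread):

#ifdef CONFIG_CPU_THERMAL
int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
#else
static inline int
update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
{
	return 0;
}
#endif

With a stub like this, the thermal update calls added in fair.c become
no-ops that the compiler can optimize away on configurations without CPU
cooling, which is what the runtime-overhead objection is about.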


* Re: [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
  2019-12-16 13:43   ` Peter Zijlstra
@ 2019-12-17 12:54   ` Dietmar Eggemann
       [not found]     ` <CALD-y_xHS7CaZ8SU--VP5+2F5Y8cVb4sw0XuG+JUpP_jxE7yuQ@mail.gmail.com>
  2019-12-23 17:56   ` Ionela Voinescu
  2 siblings, 1 reply; 28+ messages in thread
From: Dietmar Eggemann @ 2019-12-17 12:54 UTC (permalink / raw)
  To: Thara Gopinath, mingo, peterz, ionela.voinescu, vincent.guittot,
	rui.zhang, qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

On 12/12/2019 05:11, Thara Gopinath wrote:

minor: in subject: s/sched/pelt.c/sched/pelt

[...]

> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..9aac3b7 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -353,6 +353,28 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
>  	return 0;
>  }
>  
> +/*
> + * thermal:
> + *
> + *   load_sum = \Sum se->avg.load_sum

Why not '\Sum rq->avg.load_sum' ?

> + *
> + *   util_avg and runnable_load_avg are not supported and meaningless.
> + *
> + */
> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
> +{
> +	if (___update_load_sum(now, &rq->avg_thermal,
> +			       capacity,
> +			       capacity,
> +			       capacity)) {
> +		___update_load_avg(&rq->avg_thermal, 1, 1);
> +		trace_pelt_thermal_tp(rq);
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +

[...]


* Re: [Patch v6 2/7] sched: Add hook to read per cpu thermal pressure.
  2019-12-16 14:35   ` Peter Zijlstra
@ 2019-12-17 12:54     ` Dietmar Eggemann
  0 siblings, 0 replies; 28+ messages in thread
From: Dietmar Eggemann @ 2019-12-17 12:54 UTC (permalink / raw)
  To: Peter Zijlstra, Thara Gopinath
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On 16/12/2019 15:35, Peter Zijlstra wrote:
> On Wed, Dec 11, 2019 at 11:11:43PM -0500, Thara Gopinath wrote:

minor: in subject: s/sched/sched/topology


>> Introduce arch_scale_thermal_capacity to retrieve per cpu thermal
>> pressure.
>>
>> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
>> ---
>>  include/linux/sched/topology.h | 8 ++++++++
>>  1 file changed, 8 insertions(+)
>>
>> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
>> index f341163..f1e22f9 100644
>> --- a/include/linux/sched/topology.h
>> +++ b/include/linux/sched/topology.h
>> @@ -225,6 +225,14 @@ unsigned long arch_scale_cpu_capacity(int cpu)
>>  }
>>  #endif
>>  
>> +#ifndef arch_scale_thermal_capacity
>> +static __always_inline
>> +unsigned long arch_scale_thermal_capacity(int cpu)
>> +{
>> +	return 0;
>> +}
>> +#endif
> 
> This is confusing, the return value is not a capacity, it is a reduction
> in capacity. Either a comment or a better name would be appreciated.

+1, the patch-set uses: thermal_capacity, thermal_pressure,
capped_capacity.


* Re: [Patch v6 3/7] Add infrastructure to store and update instantaneous thermal pressure
  2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
  2019-12-16 14:37   ` Peter Zijlstra
@ 2019-12-17 12:54   ` Dietmar Eggemann
  2019-12-23 17:50   ` Ionela Voinescu
  2 siblings, 0 replies; 28+ messages in thread
From: Dietmar Eggemann @ 2019-12-17 12:54 UTC (permalink / raw)
  To: Thara Gopinath, mingo, peterz, ionela.voinescu, vincent.guittot,
	rui.zhang, qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

On 12/12/2019 05:11, Thara Gopinath wrote:

Shouldn't the subject contain something like 'arm,arm64,drivers:' ?

> Add architecture specific APIs to update and track thermal pressure on a
> per cpu basis. A per cpu variable thermal_pressure is introduced to keep
> track of instantaneous per cpu thermal pressure. Thermal pressure is the
> delta between maximum capacity and capped capacity due to a thermal event.
> capacity and capped capacity due to a thermal event.
> 
> topology_get_thermal_pressure can be hooked into the scheduler specified
> arch_scale_thermal_capacity to retrieve instantaneius thermal pressure of

s/instantaneius/instantaneous

> a cpu.
> 
> arch_set_thermal_presure can be used to update the thermal pressure by

s/presure/pressure

> providing a capped maximum capacity.
> 
> Considering topology_get_thermal_pressure reads thermal_pressure and
> arch_set_thermal_pressure writes into thermal_pressure, one can argue for
> some sort of locking mechanism to avoid a stale value.  But considering
> topology_get_thermal_pressure_average can be called from a system critical

s/topology_get_thermal_pressure_average/topology_get_thermal_pressure ?

> path like scheduler tick function, a locking mechanism is not ideal. This
> means that it is possible the thermal_pressure value used to calculate
> average thermal pressure for a cpu can be stale for upto 1 tick period.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
> 
> v3->v4:
>         - Dropped per cpu max_capacity_info struct and instead added a per
>           delta_capacity variable to store the delta between maximum
>           capacity and capped capacity. The delta is now calculated when
>           thermal pressure is updated and not every tick.
>         - Dropped populate_max_capacity_info api as only per cpu delta
>           capacity is stored.
>         - Renamed update_periodic_maxcap to
>           trigger_thermal_pressure_average and update_maxcap_capacity to
>           update_thermal_pressure.
> v4->v5:
> 	- As per Peter's review comments folded thermal.c into fair.c.
> 	- As per Ionela's review comments revamped update_thermal_pressure
> 	  to take maximum available capacity as input instead of maximum
> 	  capped frequency ration.
> v5->v6:
> 	- As per review comments moved all the infrastructure to track
> 	  and retrieve instantaneous thermal pressure out of scheduler
> 	  to topology files.
> 
>  arch/arm/include/asm/topology.h   |  3 +++
>  arch/arm64/include/asm/topology.h |  3 +++
>  drivers/base/arch_topology.c      | 13 +++++++++++++
>  include/linux/arch_topology.h     | 11 +++++++++++
>  4 files changed, 30 insertions(+)
> 
> diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
> index 8a0fae9..90b18c3 100644
> --- a/arch/arm/include/asm/topology.h
> +++ b/arch/arm/include/asm/topology.h
> @@ -16,6 +16,9 @@
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */
> +#define arch_scale_thermal_capacity topology_get_thermal_pressure
> +
>  #else
>  
>  static inline void init_cpu_topology(void) { }
> diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
> index a4d945d..ccb277b 100644
> --- a/arch/arm64/include/asm/topology.h
> +++ b/arch/arm64/include/asm/topology.h
> @@ -25,6 +25,9 @@ int pcibus_to_node(struct pci_bus *bus);
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */
> +#define arch_scale_thermal_capacity topology_get_thermal_pressure
> +
>  #include <asm-generic/topology.h>
>  
>  #endif /* _ASM_ARM_TOPOLOGY_H */
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index 1eb81f11..3a91379 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -42,6 +42,19 @@ void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
>  	per_cpu(cpu_scale, cpu) = capacity;
>  }
>  
> +DEFINE_PER_CPU(unsigned long, thermal_pressure);
> +
> +void arch_set_thermal_pressure(struct cpumask *cpus,
> +			       unsigned long capped_capacity)
> +{
> +	int cpu;
> +	unsigned long delta = arch_scale_cpu_capacity(cpumask_first(cpus)) -
> +					capped_capacity;

 unsigned long delta;
 int cpu;

 delta = arch_scale_cpu_capacity(cpumask_first(cpus)) - capped_capacity;

Easier to read plus order local variable declarations from longest to
shortest line.

https://lore.kernel.org/r/20181107171149.165693799@linutronix.de

> +
> +	for_each_cpu(cpu, cpus)
> +		WRITE_ONCE(per_cpu(thermal_pressure, cpu), delta);
> +}
> +
>  static ssize_t cpu_capacity_show(struct device *dev,
>  				 struct device_attribute *attr,
>  				 char *buf)
> diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
> index 3015ecb..7a04364 100644
> --- a/include/linux/arch_topology.h
> +++ b/include/linux/arch_topology.h
> @@ -33,6 +33,17 @@ unsigned long topology_get_freq_scale(int cpu)
>  	return per_cpu(freq_scale, cpu);
>  }
>  
> +DECLARE_PER_CPU(unsigned long, thermal_pressure);
> +
> +static inline
> +unsigned long topology_get_thermal_pressure(int cpu)

Save one line:

static inline unsigned long topology_get_thermal_pressure(int cpu)

> +{
> +	return per_cpu(thermal_pressure, cpu);
> +}
> +
> +void arch_set_thermal_pressure(struct cpumask *cpus,
> +			       unsigned long capped_capacity);
> +
>  struct cpu_topology {
>  	int thread_id;
>  	int core_id;
> 


* Re: [Patch v6 5/7] sched/fair: update cpu_capacity to reflect thermal pressure
  2019-12-12  4:11 ` [Patch v6 5/7] sched/fair: update cpu_capacity to reflect " Thara Gopinath
@ 2019-12-17 12:54   ` Dietmar Eggemann
  2019-12-23 17:52   ` Ionela Voinescu
  1 sibling, 0 replies; 28+ messages in thread
From: Dietmar Eggemann @ 2019-12-17 12:54 UTC (permalink / raw)
  To: Thara Gopinath, mingo, peterz, ionela.voinescu, vincent.guittot,
	rui.zhang, qperret, daniel.lezcano, viresh.kumar
  Cc: linux-kernel, amit.kachhap, javi.merino, amit.kucheria

On 12/12/2019 05:11, Thara Gopinath wrote:
> cpu_capacity relflects the maximum available capacity of a cpu. Thermal

s/relflects/reflects

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e12a375..4840655 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7725,8 +7725,15 @@ static unsigned long scale_rt_capacity(struct sched_domain *sd, int cpu)
>  	if (unlikely(irq >= max))
>  		return 1;
>  
> +	/*
> +	 * avg_rt.util avg and avg_dl.util track binary signals
> +	 * (running and not running) with weights 0 and 1024 respectively.

What exactly is this weight here? I assume the 'unsigned long load'
parameter of ___update_load_avg(). At least this would match its use in
__update_load_avg_se().

[...]


* Re: [Patch v6 4/7] sched/fair: Enable periodic update of average thermal pressure
  2019-12-16 17:59     ` Quentin Perret
@ 2019-12-17 12:57       ` Dietmar Eggemann
  2019-12-27 15:22         ` Thara Gopinath
  0 siblings, 1 reply; 28+ messages in thread
From: Dietmar Eggemann @ 2019-12-17 12:57 UTC (permalink / raw)
  To: Quentin Perret, Peter Zijlstra
  Cc: Thara Gopinath, mingo, ionela.voinescu, vincent.guittot,
	rui.zhang, daniel.lezcano, viresh.kumar, linux-kernel,
	amit.kachhap, javi.merino, amit.kucheria

On 16/12/2019 18:59, Quentin Perret wrote:
> On Monday 16 Dec 2019 at 15:39:32 (+0100), Peter Zijlstra wrote:
>>> @@ -10274,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>>>  
>>>  	update_misfit_status(curr, rq);
>>>  	update_overutilized_status(task_rq(curr));
>>> +	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
>>>  }
>>
>> My objection here is that when the arch does not have support for it,
>> there is still code generated and runtime overhead associated with it.
> 
> I guess this function could be stubbed for CONFIG_CPU_THERMAL=n ?
> That is, reflecting the thermal pressure in the scheduler only makes
> sense when the thermal infrastructure is enabled to begin with (which is
> indeed not the case for most archs).

Makes sense to me. If we can agree that 'CPU cooling' is the only actor
for thermal (CPU capacity) capping.

thermal_sys-$(CONFIG_CPU_THERMAL)       += cpu_cooling.o


* Re: [Patch v6 3/7] Add infrastructure to store and update instantaneous thermal pressure
  2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
  2019-12-16 14:37   ` Peter Zijlstra
  2019-12-17 12:54   ` Dietmar Eggemann
@ 2019-12-23 17:50   ` Ionela Voinescu
  2 siblings, 0 replies; 28+ messages in thread
From: Ionela Voinescu @ 2019-12-23 17:50 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

Hi Thara,

On Wednesday 11 Dec 2019 at 23:11:44 (-0500), Thara Gopinath wrote:
> Add architecture specific APIs to update and track thermal pressure on a
> per cpu basis. A per cpu variable thermal_pressure is introduced to keep
> track of instantaneous per cpu thermal pressure. Thermal pressure is the
> delta between maximum capacity and capped capacity due to a thermal event.
> capacity and capped capacity due to a thermal event.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This line seems to be a duplicate (initially I thought I was seeing
double :) ).

> topology_get_thermal_pressure can be hooked into the scheduler specified
> arch_scale_thermal_capacity to retrieve instantaneius thermal pressure of
> a cpu.
> 
> arch_set_thermal_presure can be used to update the thermal pressure by
> providing a capped maximum capacity.
> 
> Considering topology_get_thermal_pressure reads thermal_pressure and
> arch_set_thermal_pressure writes into thermal_pressure, one can argue for
> some sort of locking mechanism to avoid a stale value.  But considering
> topology_get_thermal_pressure_average can be called from a system critical
> path like scheduler tick function, a locking mechanism is not ideal. This
> means that it is possible the thermal_pressure value used to calculate
> average thermal pressure for a cpu can be stale for upto 1 tick period.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
[...]
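As a rough sketch of the mechanism the changelog describes (the body
below is an illustration under that description, not the posted code):
the setter writes a per-cpu value that the getter reads locklessly, with
the pressure stored as the delta from the cpu's original capacity.

DEFINE_PER_CPU(unsigned long, thermal_pressure);

void arch_set_thermal_pressure(struct cpumask *cpus,
			       unsigned long capped_capacity)
{
	int cpu;

	for_each_cpu(cpu, cpus) {
		unsigned long delta;

		/* Pressure is the capacity lost to the thermal cap. */
		delta = arch_scale_cpu_capacity(cpu) - capped_capacity;
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), delta);
	}
}
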
> diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
> index 8a0fae9..90b18c3 100644
> --- a/arch/arm/include/asm/topology.h
> +++ b/arch/arm/include/asm/topology.h
> @@ -16,6 +16,9 @@
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */

s/defalut/default

> +#define arch_scale_thermal_capacity topology_get_thermal_pressure
> +

I also think this deserves a better name. I would drop the
'scale' part as well, since it is not used as a scale factor like
freq_scale or cpu_scale, but as a reduction in capacity
(thermal capacity pressure) due to a thermal event.

It might be too much but what do you think about:
arch_thermal_capacity_pressure?

>  #else
>  
>  static inline void init_cpu_topology(void) { }
> diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
> index a4d945d..ccb277b 100644
> --- a/arch/arm64/include/asm/topology.h
> +++ b/arch/arm64/include/asm/topology.h
> @@ -25,6 +25,9 @@ int pcibus_to_node(struct pci_bus *bus);
>  /* Enable topology flag updates */
>  #define arch_update_cpu_topology topology_update_cpu_topology
>  
> +/* Replace task scheduler's defalut thermal pressure retrieve API */

s/defalut/default

Regards,
Ionela.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 5/7] sched/fair: update cpu_capacity to reflect thermal pressure
  2019-12-12  4:11 ` [Patch v6 5/7] sched/fair: update cpu_capacity to reflect " Thara Gopinath
  2019-12-17 12:54   ` Dietmar Eggemann
@ 2019-12-23 17:52   ` Ionela Voinescu
  1 sibling, 0 replies; 28+ messages in thread
From: Ionela Voinescu @ 2019-12-23 17:52 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wednesday 11 Dec 2019 at 23:11:46 (-0500), Thara Gopinath wrote:
> cpu_capacity relflects the maximum available capacity of a cpu. Thermal
> pressure on a cpu means this maximum available capacity is reduced. This
> patch reduces the average thermal pressure for a cpu from its maximum

Maybe we can introduce two terms here: maximum possible capacity and
maximum currently available capacity.
I think the message becomes clearer using these two terms:

cpu_capacity initially reflects the maximum possible capacity of a cpu
(capacity orig). Thermal pressure on a cpu means this maximum
possible capacity of a cpu is not available due to thermal events.
This patch subtracts the average thermal pressure for a cpu from the
cpu's maximum possible capacity so that cpu_capacity reflects the
actual maximum currently available capacity.

Kind regards,
Ionela.

> available capacity so that cpu_capacity reflects the actual
> available capacity.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>  kernel/sched/fair.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e12a375..4840655 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7725,8 +7725,15 @@ static unsigned long scale_rt_capacity(struct sched_domain *sd, int cpu)
>  	if (unlikely(irq >= max))
>  		return 1;
>  
> +	/*
> +	 * avg_rt.util avg and avg_dl.util track binary signals
> +	 * (running and not running) with weights 0 and 1024 respectively.
> +	 * avg_thermal.load_avg tracks thermal pressure and the weighted
> +	 * average uses the actual delta max capacity.
> +	 */
>  	used = READ_ONCE(rq->avg_rt.util_avg);
>  	used += READ_ONCE(rq->avg_dl.util_avg);
> +	used += READ_ONCE(rq->avg_thermal.load_avg);
>  
>  	if (unlikely(used >= max))
>  		return 1;
> -- 
> 2.1.4
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping
  2019-12-12  4:11 ` [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
@ 2019-12-23 17:54   ` Ionela Voinescu
  0 siblings, 0 replies; 28+ messages in thread
From: Ionela Voinescu @ 2019-12-23 17:54 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wednesday 11 Dec 2019 at 23:11:47 (-0500), Thara Gopinath wrote:
[...]
> @@ -430,6 +430,10 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
>  				 unsigned long state)
>  {
>  	struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
> +	struct cpumask *cpus;
> +	unsigned int frequency;
> +	unsigned long capacity;
> +	int ret;
>  
>  	/* Request state should be less than max_level */
>  	if (WARN_ON(state > cpufreq_cdev->max_level))
> @@ -441,8 +445,19 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
>  
>  	cpufreq_cdev->cpufreq_state = state;
>  
> -	return freq_qos_update_request(&cpufreq_cdev->qos_req,
> -				get_state_freq(cpufreq_cdev, state));
> +	frequency = get_state_freq(cpufreq_cdev, state);
> +
> +	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
> +
> +	if (ret > 0) {
> +		cpus = cpufreq_cdev->policy->cpus;
> +		capacity = frequency *
> +				arch_scale_cpu_capacity(cpumask_first(cpus));
> +		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
> +		arch_set_thermal_pressure(cpus, capacity);

Given that you already get a CPU's capacity (orig) here, why don't
you pass thermal pressure directly to arch_set_thermal_pressure,
rather than passing the capped capacity and subtracting it later from
the same CPU capacity (arch_scale_cpu_capacity)?

If my math is correct, this would work nicely:
		pressure = cpufreq_cdev->policy->cpuinfo.max_freq;
		pressure -= frequency;
		pressure *= arch_scale_cpu_capacity(cpumask_first(cpus));
		pressure /= cpufreq_cdev->policy->cpuinfo.max_freq;
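
For example, with illustrative numbers (cpuinfo.max_freq = 2000000,
capped frequency = 1500000, arch_scale_cpu_capacity() = 1024): the capped
capacity is 1500000 * 1024 / 2000000 = 768, so both formulations end up
with a pressure of 1024 - 768 = 256.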

Thanks,
Ionela.

> +	}
> +
> +	return ret;
>  }
>  
>  /* Bind cpufreq callbacks to thermal cooling device ops */
> -- 
> 2.1.4
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 7/7] sched/fair: Enable tuning of decay period
  2019-12-12  4:11 ` [Patch v6 7/7] sched/fair: Enable tuning of decay period Thara Gopinath
@ 2019-12-23 17:55   ` Ionela Voinescu
  0 siblings, 0 replies; 28+ messages in thread
From: Ionela Voinescu @ 2019-12-23 17:55 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wednesday 11 Dec 2019 at 23:11:48 (-0500), Thara Gopinath wrote:
> Thermal pressure follows pelt signas which means the decay period for

s/signas/signals

I think if you run checkpatch on the patches it will show misspelled
words as well.

Regards,
Ionela.

> thermal pressure is the default pelt decay period. Depending on soc
> charecteristics and thermal activity, it might be beneficial to decay
> thermal pressure slower, but still in-tune with the pelt signals.  One way
> to achieve this is to provide a command line parameter to set a decay
> shift parameter to an integer between 0 and 10.
> 
> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
> ---
>
[...]
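For reference, one plausible shape for such a knob (the names and the
right-shift trick below are assumptions, not necessarily what the patch
does): right-shifting the clock handed to the PELT update stretches the
effective decay period by a factor of 2^shift.

static int sched_thermal_decay_shift;

static int __init setup_sched_thermal_decay_shift(char *str)
{
	int shift = 0;

	if (kstrtoint(str, 0, &shift))
		pr_warn("Invalid thermal decay shift, using 0\n");

	/* The changelog asks for an integer between 0 and 10. */
	sched_thermal_decay_shift = clamp(shift, 0, 10);
	return 1;
}
__setup("sched_thermal_decay_shift=", setup_sched_thermal_decay_shift);

/* A slower clock means a slower decay of the thermal pressure signal. */
static inline u64 rq_clock_thermal(struct rq *rq)
{
	return rq_clock_task(rq) >> sched_thermal_decay_shift;
}
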

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
  2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
  2019-12-16 13:43   ` Peter Zijlstra
  2019-12-17 12:54   ` Dietmar Eggemann
@ 2019-12-23 17:56   ` Ionela Voinescu
  2020-01-02 14:40     ` Thara Gopinath
  2 siblings, 1 reply; 28+ messages in thread
From: Ionela Voinescu @ 2019-12-23 17:56 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On Wednesday 11 Dec 2019 at 23:11:42 (-0500), Thara Gopinath wrote:
[...]
> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> index afff644..c74226d 100644
> --- a/kernel/sched/pelt.h
> +++ b/kernel/sched/pelt.h
> @@ -6,6 +6,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
>  int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
>  int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
>  int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
>

I know a lot of people have come up with suggestions for this name, but
how about update_th_rq_load_avg :)? Only if you're in the mood to change
it again.

Regards,
Ionela.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 4/7] sched/fair: Enable periodic update of average thermal pressure
  2019-12-17 12:57       ` Dietmar Eggemann
@ 2019-12-27 15:22         ` Thara Gopinath
  0 siblings, 0 replies; 28+ messages in thread
From: Thara Gopinath @ 2019-12-27 15:22 UTC (permalink / raw)
  To: Dietmar Eggemann, Quentin Perret, Peter Zijlstra
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On 12/17/2019 07:57 AM, Dietmar Eggemann wrote:
> On 16/12/2019 18:59, Quentin Perret wrote:
>> On Monday 16 Dec 2019 at 15:39:32 (+0100), Peter Zijlstra wrote:
>>>> @@ -10274,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>>>>  
>>>>  	update_misfit_status(curr, rq);
>>>>  	update_overutilized_status(task_rq(curr));
>>>> +	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
>>>>  }
>>>
>>> My objection here is that when the arch does not have support for it,
>>> there is still code generated and runtime overhead associated with it.
>>
>> I guess this function could be stubbed for CONFIG_CPU_THERMAL=n ?
>> That is, reflecting the thermal pressure in the scheduler only makes
>> sense when the thermal infrastructure is enabled to begin with (which is
>> indeed not the case for most archs).
> 
> Makes sense to me. If we can agree that 'CPU cooling' is the only actor
> for thermal (CPU capacity) capping.
> 
> thermal_sys-$(CONFIG_CPU_THERMAL)       += cpu_cooling.o
> 

Hi All,
Thanks for all the reviews!

The other option would be to have a separate
CONFIG_HAVE_SCHED_THERMAL_PRESSURE. This would ensure that we are not
tied to the cpu cooling thermal infrastructure. What say?
There is already a CONFIG_HAVE_SCHED_AVG_IRQ for the irq load average in
pelt.c.


-- 
Warm Regards
Thara

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
  2019-12-23 17:56   ` Ionela Voinescu
@ 2020-01-02 14:40     ` Thara Gopinath
  0 siblings, 0 replies; 28+ messages in thread
From: Thara Gopinath @ 2020-01-02 14:40 UTC (permalink / raw)
  To: Ionela Voinescu
  Cc: mingo, peterz, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On 12/23/2019 12:56 PM, Ionela Voinescu wrote:
> On Wednesday 11 Dec 2019 at 23:11:42 (-0500), Thara Gopinath wrote:
> [...]
>> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
>> index afff644..c74226d 100644
>> --- a/kernel/sched/pelt.h
>> +++ b/kernel/sched/pelt.h
>> @@ -6,6 +6,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
>>  int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
>>  int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
>>  int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
>> +int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
>>
> 
> I know a lot of people have come up with suggestions for this name, but
> how about update_th_rq_load_avg :)? Only if you're in the mood to change
> it again.
Hi!
I am rebasing! So will rename!
> 
> Regards,
> Ionela.
> 


-- 
Warm Regards
Thara

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure
       [not found]     ` <CALD-y_xHS7CaZ8SU--VP5+2F5Y8cVb4sw0XuG+JUpP_jxE7yuQ@mail.gmail.com>
@ 2020-01-09 12:24       ` Dietmar Eggemann
  0 siblings, 0 replies; 28+ messages in thread
From: Dietmar Eggemann @ 2020-01-09 12:24 UTC (permalink / raw)
  To: Thara Gopinath
  Cc: Ingo Molnar, Peter Zijlstra, Ionela Voinescu, Vincent Guittot,
	Zhang Rui, Quentin Perret, Daniel Lezcano, Viresh Kumar,
	linux-kernel, Amit Kachhap, Javi Merino, Amit Kucheria

On 08/01/2020 15:56, Thara Gopinath wrote:
> On Tue, 17 Dec 2019 at 07:54, Dietmar Eggemann <dietmar.eggemann@arm.com>
> wrote:
> 
>> On 12/12/2019 05:11, Thara Gopinath wrote:
>>
>> minor: in subject: s/sched/pelt.c/sched/pelt
>>
>> [...]
>>
>>> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
>>> index a96db50..9aac3b7 100644
>>> --- a/kernel/sched/pelt.c
>>> +++ b/kernel/sched/pelt.c
>>> @@ -353,6 +353,28 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq,
>> int running)
>>>       return 0;
>>>  }
>>>
>>> +/*
>>> + * thermal:
>>> + *
>>> + *   load_sum = \Sum se->avg.load_sum
>>
>> Why not '\Sum rq->avg.load_sum' ?
>>
> Hi Dietmar,
> 
> The header for all other update_*_load_avg api use se->avg. and not rq->avg.

True but at least they (rt_rq, dl_rq, irq) also say 'but
se->avg.util_sum is not tracked'.

I guess this comment originally came from the
'__update_load_avg_blocked_se(), __update_load_avg_se(),
__update_load_avg_cfs_rq()' block:

* cfq_rq:
 *
 *   load_sum = \Sum se_weight(se) * se->avg.load_sum

but for rt_rq, dl_rq, irq and thermal we don't have a relationship like
the one between se and cfs_rq, so that's why this comment confuses me.
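
For the thermal signal one could avoid the se analogy altogether and
describe it directly, e.g. (a sketch of possible wording, not the posted
comment):

/*
 * thermal:
 *
 *   load_sum = decayed sum of the thermal pressure seen by this rq
 *              (i.e. the delta between original and capped capacity)
 *   load_avg = load_sum / divider
 */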

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Patch v6 0/7] Introduce Thermal Pressure
  2019-12-16 14:56 ` [Patch v6 0/7] Introduce Thermal Pressure Peter Zijlstra
@ 2020-01-11 15:04   ` Thara Gopinath
  0 siblings, 0 replies; 28+ messages in thread
From: Thara Gopinath @ 2020-01-11 15:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, ionela.voinescu, vincent.guittot, rui.zhang, qperret,
	daniel.lezcano, viresh.kumar, linux-kernel, amit.kachhap,
	javi.merino, amit.kucheria

On 12/16/2019 09:56 AM, Peter Zijlstra wrote:
> On Wed, Dec 11, 2019 at 11:11:41PM -0500, Thara Gopinath wrote:
>> Test Results
>>
>> Hackbench: 1 group , 30000 loops, 10 runs
>>                                                Result         SD
>>                                                (Secs)     (% of mean)
>>  No Thermal Pressure                            14.03       2.69%
>>  Thermal Pressure PELT Algo. Decay : 32 ms      13.29       0.56%
>>  Thermal Pressure PELT Algo. Decay : 64 ms      12.57       1.56%
>>  Thermal Pressure PELT Algo. Decay : 128 ms     12.71       1.04%
>>  Thermal Pressure PELT Algo. Decay : 256 ms     12.29       1.42%
>>  Thermal Pressure PELT Algo. Decay : 512 ms     12.42       1.15%
>>
>> Dhrystone Run Time  : 20 threads, 3000 MLOOPS
>>                                                  Result      SD
>>                                                  (Secs)    (% of mean)
>>  No Thermal Pressure                              9.452      4.49%
>>  Thermal Pressure PELT Algo. Decay : 32 ms        8.793      5.30%
>>  Thermal Pressure PELT Algo. Decay : 64 ms        8.981      5.29%
>>  Thermal Pressure PELT Algo. Decay : 128 ms       8.647      6.62%
>>  Thermal Pressure PELT Algo. Decay : 256 ms       8.774      6.45%
>>  Thermal Pressure PELT Algo. Decay : 512 ms       8.603      5.41%
> 
> What is the conclusion, if any, from these results? Clearly thermal
> pressure seems to help, but what window? ISTR we default to 32ms, which
> is a wash for dhrystone, but sub-optimal for hackbench.
Hi Peter,

Thanks for the reviews. IMHO, the conclusion is that thermal pressure is
beneficial but the decay period to be used depends on the architecture
and/or use-cases. Sticking to 32ms should give some improvement but it
can be tuned depending on the system.

> 
> 
> Anyway, the patches look more or less acceptable, just a bunch of nits,
> the biggest being the fact that even if an architecture does not support
> this, there is still the code and runtime overhead.

I am fixing this and sending out a v7.

> 


-- 
Warm Regards
Thara

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2020-01-11 15:04 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-12  4:11 [Patch v6 0/7] Introduce Thermal Pressure Thara Gopinath
2019-12-12  4:11 ` [Patch v6 1/7] sched/pelt.c: Add support to track thermal pressure Thara Gopinath
2019-12-16 13:43   ` Peter Zijlstra
2019-12-17 12:54   ` Dietmar Eggemann
     [not found]     ` <CALD-y_xHS7CaZ8SU--VP5+2F5Y8cVb4sw0XuG+JUpP_jxE7yuQ@mail.gmail.com>
2020-01-09 12:24       ` Dietmar Eggemann
2019-12-23 17:56   ` Ionela Voinescu
2020-01-02 14:40     ` Thara Gopinath
2019-12-12  4:11 ` [Patch v6 2/7] sched: Add hook to read per cpu " Thara Gopinath
2019-12-16 14:35   ` Peter Zijlstra
2019-12-17 12:54     ` Dietmar Eggemann
2019-12-12  4:11 ` [Patch v6 3/7] Add infrastructure to store and update instantaneous " Thara Gopinath
2019-12-16 14:37   ` Peter Zijlstra
2019-12-17 12:54   ` Dietmar Eggemann
2019-12-23 17:50   ` Ionela Voinescu
2019-12-12  4:11 ` [Patch v6 4/7] sched/fair: Enable periodic update of average " Thara Gopinath
2019-12-16 14:39   ` Peter Zijlstra
2019-12-16 17:59     ` Quentin Perret
2019-12-17 12:57       ` Dietmar Eggemann
2019-12-27 15:22         ` Thara Gopinath
2019-12-12  4:11 ` [Patch v6 5/7] sched/fair: update cpu_capacity to reflect " Thara Gopinath
2019-12-17 12:54   ` Dietmar Eggemann
2019-12-23 17:52   ` Ionela Voinescu
2019-12-12  4:11 ` [Patch v6 6/7] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
2019-12-23 17:54   ` Ionela Voinescu
2019-12-12  4:11 ` [Patch v6 7/7] sched/fair: Enable tuning of decay period Thara Gopinath
2019-12-23 17:55   ` Ionela Voinescu
2019-12-16 14:56 ` [Patch v6 0/7] Introduce Thermal Pressure Peter Zijlstra
2020-01-11 15:04   ` Thara Gopinath
