* [PATCH v7 00/14] Energy Aware Scheduling
@ 2018-09-12  9:12 Quentin Perret
  2018-09-12  9:12 ` [PATCH v7 01/14] sched: Relocate arch_scale_cpu_capacity Quentin Perret
                   ` (13 more replies)
  0 siblings, 14 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:12 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

This patch series introduces Energy Aware Scheduling (EAS) for CFS tasks
on platforms with asymmetric CPU topologies (e.g. Arm big.LITTLE).

For more details about the ideas behind it and the overall design,
please refer to the cover letter of version 5 [1].


1. Version History
------------------

Changes v6[2]->v7:
- Replaced the sched_energy_present static key by a sched_feat
- Replaced the CPUFreq notifier in the dependency between sugov and EAS
  by a function call
- Squashed all sugov-refactoring patches into patch 02
- Clarified comment in em_pd_energy() to explain the choice of “energy”
  over “power”
- Added kerneldoc to structs in include/linux/energy_model.h
- Removed unnecessary memory barrier from the EM framework
- Fixed corner case in find_energy_efficient_cpu when prev_cpu is
  overutilized (and prev_energy = ULONG_MAX)

Changes v5[1]->v6:
- Rebased on Peter’s sched/core branch (that includes Morten's misfit
  patches [3] and the automatic detection of SD_ASYM_CPUCAPACITY [4])
- Removed patch 13/14 (not needed with the automatic flag detection)
- Added patch creating a dependency between sugov and EAS
- Renamed frequency domains to performance domains to avoid baking
  overly strong assumptions about the HW into the code
- Renamed the sd_ea shortcut to sd_asym_cpucapacity
- Added comment to explain why new tasks are not accounted when
  detecting the 'overutilized' flag
- Added comment explaining why forkees don’t go in
  find_energy_efficient_cpu()

Changes v4[5]->v5:
- Removed the RCU protection of the EM tables and the associated
  need for em_rescale_cpu_capacity().
- Factorized schedutil’s PELT aggregation function with EAS
- Improved comments/doc in the EM framework
- Added check on the uarch of CPUs in one fd in the EM framework
- Reduced CONFIG_ENERGY_MODEL ifdefery in kernel/sched/topology.c
- Cleaned-up update_sg_lb_stats parameters
- Improved comments in compute_energy() to explain the multi-rd
  scenarios

Changes v3[6]->v4:
- Replaced spinlock in EM framework by smp_store_release/READ_ONCE
- Fixed missing locks to protect rcu_assign_pointer in EM framework
- Fixed capacity calculation in EM framework on 32 bits system
- Fixed compilation issue for CONFIG_ENERGY_MODEL=n
- Removed cpumask from struct em_freq_domain, now dynamically allocated
- Power costs of the EM are specified in milliwatts
- Added example of CPUFreq driver modification
- Added doc/comments in the EM framework and better commit header
- Fixed integration issue with util_est in cpu_util_next()
- Changed scheduler topology code to have one freq. dom. list per rd
- Split sched topology patch in smaller patches
- Added doc/comments explaining the heuristic in the wake-up path
- Changed the energy threshold for migration from 1.5% to 6%

Changes v2[7]->v3:
- Removed the PM_OPP dependency by implementing a new EM framework
- Modified the scheduler topology code to take references on the EM data
  structures
- Simplified the overutilization mechanism into a system-wide flag
- Reworked the integration in the wake-up path using the sd_ea shortcut
- Rebased on tip/sched/core (247f2f6f3c70 "sched/core: Don't schedule
  threads on pre-empted vCPUs")

Changes v1[8]->v2:
- Reworked interface between fair.c and energy.[ch] (Remove #ifdef
  CONFIG_PM_OPP from energy.c) (Greg KH)
- Fixed licence & header issue in energy.[ch] (Greg KH)
- Reordered EAS path in select_task_rq_fair() (Joel)
- Avoid prev_cpu if not allowed in select_task_rq_fair() (Morten/Joel)
- Refactored compute_energy() (Patrick)
- Account for RT/IRQ pressure in task_fits() (Patrick)
- Use UTIL_EST and DL utilization during OPP estimation (Patrick/Juri)
- Optimize selection of CPU candidates in the energy-aware wake-up path
- Rebased on top of tip/sched/core (commit b720342849fe “sched/core:
  Update preempt_notifier_key to modern API”)


2. Test results
---------------

Two fundamentally different tests were executed. First, the energy test
case shows the impact of this patch-set on energy consumption using a
synthetic set of tasks. Second, the performance test case provides the
conventional hackbench metric numbers.

The tests were run on two arm64 big.LITTLE platforms: Hikey960 (4xA73 +
4xA53) and Juno r0 (2xA57 + 4xA53).

The base kernel is 4.18-rc6, with some Hikey960- and Juno-specific
patches, and Morten's patches from tip/sched/core for detecting the
SD_ASYM_CPUCAPACITY flag [9].

2.1 Energy test case

10 iterations of between 10 and 50 periodic rt-app tasks (16ms period,
5% duty-cycle) for 30 seconds with energy measurement. Unit is Joules.
The goal is to save energy, so lower is better.

2.1.1 Hikey960

Energy is measured with an ACME Cape on an instrumented board. Numbers
include consumption of big and little CPUs, LPDDR memory, GPU and most
of the other small components on the board. They do not include
consumption of the radio chip (turned-off anyway) and external
connectors.

+----------+-----------------+-------------------------+
|          | Without patches | With patches            |
+----------+--------+--------+------------------+------+
| Tasks nb |  Mean  | RSD*   | Mean             | RSD* |
+----------+--------+--------+------------------+------+
|       10 |  34.14 |   4.2% |  29.41 (-13.85%) | 2.0% |
|       20 |  52.83 |   1.5% |  44.57 (-15.64%) | 2.6% |
|       30 |  66.67 |   1.4% |  60.92  (-8.62%) | 3.7% |
|       40 |  93.21 |   3.7% |  86.36  (-7.35%) | 6.4% |
|       50 | 135.52 |   5.0% | 108.36 (-20.04%) | 2.1% |
+----------+--------+--------+------------------+------+

2.1.2 Juno r0

Energy is measured with the onboard energy meter. Numbers include
consumption of big and little CPUs.

+----------+-----------------+------------------------+
|          | Without patches | With patches           |
+----------+--------+--------+-----------------+------+
| Tasks nb |  Mean  | RSD*   | Mean            | RSD* |
+----------+--------+--------+-----------------+------+
|       10 |  10.83 |   6.7% |  7.79 (-28.07%) | 6.4% |
|       20 |  20.84 |   3.3% | 15.69 (-24.71%) | 4.1% |
|       30 |  32.41 |   2.8% | 23.85 (-26.41%) | 4.4% |
|       40 |  43.26 |   1.1% | 33.69 (-22.12%) | 4.1% |
|       50 |  56.26 |   2.5% | 47.95 (-14.77%) | 6.0% |
+----------+--------+--------+-----------------+------+


2.2 Performance test case

30 iterations of perf bench sched messaging --pipe --thread --group G
--loop L with G=[1 2 4 8] and L=50000 (Hikey960)/16000 (Juno r0).

2.2.1 Hikey960

The impact of thermal capping was mitigated thanks to a heatsink, a
fan, and a 30 sec delay between two successive executions. IPA (the
Intelligent Power Allocation thermal governor) was disabled to reduce
the stddev.

+----------------+-----------------+------------------------+
|                | Without patches | With patches           |
+--------+-------+---------+-------+----------------+-------+
| Groups | Tasks | Mean    | RSD*  | Mean           | RSD*  |
+--------+-------+---------+-------+----------------+-------+
|      1 |    40 |    8.03 | 0.99% |  8.23 (+2.49%) | 1.16% |
|      2 |    80 |   14.75 | 0.76% | 15.01 (+1.76%) | 1.03% |
|      4 |   160 |   31.51 | 0.46% | 31.61 (+0.32%) | 0.47% |
|      8 |   320 |   67.89 | 0.43% | 66.66 (-1.81%) | 0.41% |
+--------+-------+---------+-------+----------------+-------+

2.2.2 Juno r0

+----------------+-----------------+------------------------+
|                | Without patches | With patches           |
+--------+-------+---------+-------+----------------+-------+
| Groups | Tasks | Mean    | RSD*  | Mean           | RSD*  |
+--------+-------+---------+-------+----------------+-------+
|      1 |    40 |    8.23 | 1.14% |  8.24 (+0.12%) | 1.60% |
|      2 |    80 |   15.40 | 0.90% | 15.43 (+0.19%) | 1.23% |
|      4 |   160 |   29.66 | 1.30% | 29.70 (+0.10%) | 0.89% |
|      8 |   320 |   60.07 | 1.93% | 60.14 (+0.12%) | 1.70% |
+--------+-------+---------+-------+----------------+-------+

*RSD: Relative Standard Deviation (std dev / mean)


[1] https://marc.info/?l=linux-pm&m=153243513908731&w=2
[2] https://marc.info/?l=linux-kernel&m=153476300928169&w=2
[3] https://marc.info/?l=linux-kernel&m=153069968022982&w=2
[4] https://marc.info/?l=linux-kernel&m=153209362826476&w=2
[5] https://marc.info/?l=linux-kernel&m=153018606728533&w=2
[6] https://marc.info/?l=linux-kernel&m=152691273111941&w=2
[7] https://marc.info/?l=linux-kernel&m=152302902427143&w=2
[8] https://marc.info/?l=linux-kernel&m=152153905805048&w=2
[9] http://www.linux-arm.org/git?p=linux-qp.git;a=shortlog;h=refs/heads/upstream/eas_v7_04


Morten Rasmussen (1):
  sched: Add over-utilization/tipping point indicator

Quentin Perret (13):
  sched: Relocate arch_scale_cpu_capacity
  sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  PM: Introduce an Energy Model management framework
  PM / EM: Expose the Energy Model in sysfs
  sched: Introduce a sched_feat for Energy Aware Scheduling
  sched/topology: Reference the Energy Model of CPUs when available
  sched/topology: Lowest CPU asymmetry sched_domain level pointer
  sched/topology: Disable EAS on inappropriate platforms
  sched/fair: Clean-up update_sg_lb_stats parameters
  sched/fair: Introduce an energy estimation helper function
  sched/fair: Select an energy-efficient CPU on task wake-up
  sched/topology: Make Energy Aware Scheduling depend on schedutil
  OPTIONAL: cpufreq: dt: Register an Energy Model

 drivers/cpufreq/cpufreq-dt.c     |  48 ++++-
 drivers/cpufreq/cpufreq.c        |   2 +
 include/linux/energy_model.h     | 187 +++++++++++++++++++
 include/linux/sched/cpufreq.h    |  15 ++
 include/linux/sched/topology.h   |  19 ++
 kernel/power/Kconfig             |  15 ++
 kernel/power/Makefile            |   2 +
 kernel/power/energy_model.c      | 285 +++++++++++++++++++++++++++++
 kernel/sched/cpufreq_schedutil.c | 126 +++++++++----
 kernel/sched/fair.c              | 302 ++++++++++++++++++++++++++++---
 kernel/sched/features.h          |   7 +
 kernel/sched/sched.h             |  64 ++++---
 kernel/sched/topology.c          | 205 ++++++++++++++++++++-
 13 files changed, 1196 insertions(+), 81 deletions(-)
 create mode 100644 include/linux/energy_model.h
 create mode 100644 kernel/power/energy_model.c

-- 
2.18.0



* [PATCH v7 01/14] sched: Relocate arch_scale_cpu_capacity
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
@ 2018-09-12  9:12 ` Quentin Perret
  2018-09-12  9:12 ` [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling Quentin Perret
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:12 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

By default, arch_scale_cpu_capacity() is only visible from within the
kernel/sched folder. Relocate it to include/linux/sched/topology.h to
make it visible to other clients needing to know about the capacity of
CPUs, such as the Energy Model framework.
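
For illustration, this is the kind of call site the move enables outside
of kernel/sched (a hedged sketch; the EM framework introduced later in
this series makes the same call to check CPU capacities):

	#include <linux/sched/topology.h>

	/* Default implementation returns SCHED_CAPACITY_SCALE (1024). */
	unsigned long cap = arch_scale_cpu_capacity(NULL, cpu);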

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 include/linux/sched/topology.h | 19 +++++++++++++++++++
 kernel/sched/sched.h           | 21 ---------------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 4fe2e49ab13b..1aaa14462207 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -202,6 +202,17 @@ extern void set_sched_topology(struct sched_domain_topology_level *tl);
 # define SD_INIT_NAME(type)
 #endif
 
+#ifndef arch_scale_cpu_capacity
+static __always_inline
+unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
+{
+	if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
+		return sd->smt_gain / sd->span_weight;
+
+	return SCHED_CAPACITY_SCALE;
+}
+#endif
+
 #else /* CONFIG_SMP */
 
 struct sched_domain_attr;
@@ -217,6 +228,14 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
 	return true;
 }
 
+#ifndef arch_scale_cpu_capacity
+static __always_inline
+unsigned long arch_scale_cpu_capacity(void __always_unused *sd, int cpu)
+{
+	return SCHED_CAPACITY_SCALE;
+}
+#endif
+
 #endif	/* !CONFIG_SMP */
 
 static inline int task_node(const struct task_struct *p)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 85b3a2bf6c2b..481e6759441b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1761,27 +1761,6 @@ unsigned long arch_scale_freq_capacity(int cpu)
 }
 #endif
 
-#ifdef CONFIG_SMP
-#ifndef arch_scale_cpu_capacity
-static __always_inline
-unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
-{
-	if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) && (sd->span_weight > 1))
-		return sd->smt_gain / sd->span_weight;
-
-	return SCHED_CAPACITY_SCALE;
-}
-#endif
-#else
-#ifndef arch_scale_cpu_capacity
-static __always_inline
-unsigned long arch_scale_cpu_capacity(void __always_unused *sd, int cpu)
-{
-	return SCHED_CAPACITY_SCALE;
-}
-#endif
-#endif
-
 struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
 	__acquires(rq->lock);
 
-- 
2.18.0



* [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
  2018-09-12  9:12 ` [PATCH v7 01/14] sched: Relocate arch_scale_cpu_capacity Quentin Perret
@ 2018-09-12  9:12 ` Quentin Perret
  2018-09-12 14:56     ` Vincent Guittot
  2018-09-18 21:33     ` Rafael J. Wysocki
  2018-09-12  9:12 ` [PATCH v7 03/14] PM: Introduce an Energy Model management framework Quentin Perret
                   ` (11 subsequent siblings)
  13 siblings, 2 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:12 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Schedutil requests frequency by aggregating utilization signals from
the scheduler (CFS, RT, DL, IRQ) and applying a 25% margin on top of
them. Since Energy Aware Scheduling (EAS) needs to be able to predict
the frequency requests, it needs to forecast the decisions made by the
governor.

In order to prepare the introduction of EAS, introduce
schedutil_freq_util() to centralize the aforementioned signal
aggregation and make it available to both schedutil and EAS. Since
frequency selection and energy estimation still need to deal with RT and
DL signals slightly differently, schedutil_freq_util() is called with a
different 'type' parameter in those two contexts, and returns an
aggregated utilization signal accordingly. While at it, introduce the
map_util_freq() function, which makes schedutil's 25% margin easily
usable by both sugov and EAS.
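
As a worked example of the 25% margin (made-up numbers, for illustration
only): on a frequency-invariant platform with cpuinfo.max_freq = 2 GHz
and a CPU capacity of 1024, a CPU that is 50% utilized maps to:

	/* (freq + (freq >> 2)) * util / cap, as done by map_util_freq() */
	map_util_freq(512, 2000000, 1024)
		== (2000000 + 500000) * 512 / 1024
		== 1250000; /* kHz */

i.e. 1.25 GHz, 25% above the 1 GHz that would strictly be needed to
serve the load at maximum capacity.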

Since, by design, EAS can predict schedutil's frequency requests more
accurately than those of any other governor, it makes sense to ensure
that EAS cannot be used without schedutil. This dependency will be
added later, once EAS has actually been introduced.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 include/linux/sched/cpufreq.h    |  6 +++
 kernel/sched/cpufreq_schedutil.c | 89 +++++++++++++++++++++-----------
 kernel/sched/sched.h             | 14 +++++
 3 files changed, 80 insertions(+), 29 deletions(-)

diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index 59667444669f..afa940cd50dc 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -20,6 +20,12 @@ void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
                        void (*func)(struct update_util_data *data, u64 time,
 				    unsigned int flags));
 void cpufreq_remove_update_util_hook(int cpu);
+
+static inline unsigned long map_util_freq(unsigned long util,
+					unsigned long freq, unsigned long cap)
+{
+	return (freq + (freq >> 2)) * util / cap;
+}
 #endif /* CONFIG_CPU_FREQ */
 
 #endif /* _LINUX_SCHED_CPUFREQ_H */
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3fffad3bc8a8..8356cb0072a6 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -13,6 +13,7 @@
 
 #include "sched.h"
 
+#include <linux/sched/cpufreq.h>
 #include <trace/events/power.h>
 
 struct sugov_tunables {
@@ -167,7 +168,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
-	freq = (freq + (freq >> 2)) * util / max;
+	freq = map_util_freq(util, freq, max);
 
 	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
 		return sg_policy->next_freq;
@@ -197,15 +198,15 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
  * based on the task model parameters and gives the minimal utilization
  * required to meet deadlines.
  */
-static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
+unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
+				  enum schedutil_type type)
 {
-	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	struct rq *rq = cpu_rq(cpu);
 	unsigned long util, irq, max;
 
-	sg_cpu->max = max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
-	sg_cpu->bw_dl = cpu_bw_dl(rq);
+	max = arch_scale_cpu_capacity(NULL, cpu);
 
-	if (rt_rq_is_runnable(&rq->rt))
+	if (type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt))
 		return max;
 
 	/*
@@ -223,20 +224,33 @@ static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
 	 * utilization (PELT windows are synchronized) we can directly add them
 	 * to obtain the CPU's actual utilization.
 	 */
-	util = cpu_util_cfs(rq);
+	util = util_cfs;
 	util += cpu_util_rt(rq);
 
-	/*
-	 * We do not make cpu_util_dl() a permanent part of this sum because we
-	 * want to use cpu_bw_dl() later on, but we need to check if the
-	 * CFS+RT+DL sum is saturated (ie. no idle time) such that we select
-	 * f_max when there is no idle time.
-	 *
-	 * NOTE: numerical errors or stop class might cause us to not quite hit
-	 * saturation when we should -- something for later.
-	 */
-	if ((util + cpu_util_dl(rq)) >= max)
-		return max;
+	if (type == FREQUENCY_UTIL) {
+		/*
+		 * For frequency selection we do not make cpu_util_dl() a
+		 * permanent part of this sum because we want to use
+		 * cpu_bw_dl() later on, but we need to check if the
+		 * CFS+RT+DL sum is saturated (ie. no idle time) such
+		 * that we select f_max when there is no idle time.
+		 *
+		 * NOTE: numerical errors or stop class might cause us
+		 * to not quite hit saturation when we should --
+		 * something for later.
+		 */
+
+		if ((util + cpu_util_dl(rq)) >= max)
+			return max;
+	} else {
+		/*
+		 * OTOH, for energy computation we need the estimated
+		 * running time, so include util_dl and ignore dl_bw.
+		 */
+		util += cpu_util_dl(rq);
+		if (util >= max)
+			return max;
+	}
 
 	/*
 	 * There is still idle time; further improve the number by using the
@@ -250,17 +264,34 @@ static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
 	util = scale_irq_capacity(util, irq, max);
 	util += irq;
 
-	/*
-	 * Bandwidth required by DEADLINE must always be granted while, for
-	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
-	 * to gracefully reduce the frequency when no tasks show up for longer
-	 * periods of time.
-	 *
-	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
-	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
-	 * an interface. So, we only do the latter for now.
-	 */
-	return min(max, util + sg_cpu->bw_dl);
+	if (type == FREQUENCY_UTIL) {
+		/*
+		 * Bandwidth required by DEADLINE must always be granted
+		 * while, for FAIR and RT, we use blocked utilization of
+		 * IDLE CPUs as a mechanism to gracefully reduce the
+		 * frequency when no tasks show up for longer periods of
+		 * time.
+		 *
+		 * Ideally we would like to set bw_dl as min/guaranteed
+		 * freq and util + bw_dl as requested freq. However,
+		 * cpufreq is not yet ready for such an interface. So,
+		 * we only do the latter for now.
+		 */
+		util += cpu_bw_dl(rq);
+	}
+
+	return min(max, util);
+}
+
+static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
+{
+	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	unsigned long util = cpu_util_cfs(rq);
+
+	sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
+	sg_cpu->bw_dl = cpu_bw_dl(rq);
+
+	return schedutil_freq_util(sg_cpu->cpu, util, FREQUENCY_UTIL);
 }
 
 /**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 481e6759441b..d59effb34786 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2172,7 +2172,15 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 # define arch_scale_freq_invariant()	false
 #endif
 
+enum schedutil_type {
+	FREQUENCY_UTIL,
+	ENERGY_UTIL,
+};
+
 #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
+				  enum schedutil_type type);
+
 static inline unsigned long cpu_bw_dl(struct rq *rq)
 {
 	return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
@@ -2199,6 +2207,12 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
 {
 	return READ_ONCE(rq->avg_rt.util_avg);
 }
+#else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
+static inline unsigned long schedutil_freq_util(int cpu, unsigned long util,
+				  enum schedutil_type type)
+{
+	return util;
+}
 #endif
 
 #ifdef HAVE_SCHED_AVG_IRQ
-- 
2.18.0



* [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
  2018-09-12  9:12 ` [PATCH v7 01/14] sched: Relocate arch_scale_cpu_capacity Quentin Perret
  2018-09-12  9:12 ` [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling Quentin Perret
@ 2018-09-12  9:12 ` Quentin Perret
  2018-10-02 12:25   ` Peter Zijlstra
  2018-10-02 12:30   ` Peter Zijlstra
  2018-09-12  9:12 ` [PATCH v7 04/14] PM / EM: Expose the Energy Model in sysfs Quentin Perret
                   ` (10 subsequent siblings)
  13 siblings, 2 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:12 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Several subsystems in the kernel (task scheduler and/or thermal at the
time of writing) can benefit from knowing about the energy consumed by
CPUs. Yet, this information can come from different sources (DT or
firmware for example), in different formats, which makes it hard to
exploit without a standard API.

As an attempt to address this, introduce a centralized Energy Model
(EM) management framework which aggregates the power values provided
by drivers into a table for each performance domain in the system. The
power cost tables are made available to interested clients (e.g. task
scheduler or thermal) via platform-agnostic APIs. The overall design
is represented by the diagram below (focused on Arm-related drivers as
an example, but applicable to any architecture):

     +---------------+  +-----------------+  +-------------+
     | Thermal (IPA) |  | Scheduler (EAS) |  |    Other    |
     +---------------+  +-----------------+  +-------------+
             |                   | em_pd_energy()   |
             |                   | em_cpu_get()     |
             +-----------+       |         +--------+
                         |       |         |
                         v       v         v
                      +---------------------+
                      |                     |
                      |    Energy Model     |
                      |                     |
                      |     Framework       |
                      |                     |
                      +---------------------+
                         ^       ^       ^
                         |       |       | em_register_perf_domain()
              +----------+       |       +---------+
              |                  |                 |
      +---------------+  +---------------+  +--------------+
      |  cpufreq-dt   |  |   arm_scmi    |  |    Other     |
      +---------------+  +---------------+  +--------------+
              ^                  ^                 ^
              |                  |                 |
      +--------------+   +---------------+  +--------------+
      | Device Tree  |   |   Firmware    |  |      ?       |
      +--------------+   +---------------+  +--------------+

Drivers (typically, but not limited to, CPUFreq drivers) can register
data in the EM framework using the em_register_perf_domain() API. The
calling driver must provide a callback function with a standardized
signature that will be used by the EM framework to build the power
cost tables of the performance domain. This design should offer a lot of
flexibility to calling drivers, which are free to read information from
any location and to use any technique to compute power costs.
Moreover, the capacity states registered by drivers in the EM framework
are not required to match real performance states of the target. This
is particularly important on targets where the performance states are
not known by the OS.
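
Below is a hedged sketch (not part of this series) of what a driver-side
registration could look like with this API. The foo_* names, the OPP
table and the power numbers are made up for illustration only; the
optional cpufreq-dt patch at the end of the series provides a real
example.

	static int foo_active_power(unsigned long *power, unsigned long *freq,
				    int cpu)
	{
		/* Made-up OPPs: kHz / mW pairs, in ascending order. */
		static const unsigned long freqs[] = {  533000,  999000, 1402000 };
		static const unsigned long costs[] = {     100,     250,     510 };
		int i;

		/* Ceil *freq to the lowest OPP at or above it, as required. */
		for (i = 0; i < ARRAY_SIZE(freqs); i++) {
			if (freqs[i] >= *freq) {
				*freq = freqs[i];
				*power = costs[i];
				return 0;
			}
		}

		return -EINVAL;
	}

	static int foo_register_em(cpumask_t *span)
	{
		struct em_data_callback em_cb = EM_DATA_CB(foo_active_power);

		/* Three capacity states, one per OPP above. */
		return em_register_perf_domain(span, 3, &em_cb);
	}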

The power cost coefficients managed by the EM framework are specified in
milli-watts. Although the two potential users of those coefficients (IPA
and EAS) only need relative correctness, IPA specifically needs to
compare the power of CPUs with the power of other components (GPUs, for
example), which are still expressed in absolute terms in their
respective subsystems. Hence, specifying the power of CPUs in
milli-watts should help transition IPA to the EM framework without
introducing new problems, by keeping units comparable across
sub-systems.
In the longer term, the EM of devices other than CPUs could also be
managed by the EM framework, which would make it possible to remove the
absolute unit. However, this is not absolutely required as a first step,
so this extension of the EM framework is left for later.

On the client side, the EM framework offers APIs to access the power
cost tables of a CPU (em_cpu_get()), and to estimate the energy
consumed by the CPUs of a performance domain (em_pd_energy()). Clients
such as the task scheduler can then use these APIs to access the shared
data structures holding the Energy Model of CPUs.
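
A correspondingly hedged client-side sketch (max_util and sum_util are
assumed to be provided by the caller, as the scheduler does later in
this series):

	struct em_perf_domain *pd = em_cpu_get(cpu);
	unsigned long energy = 0;

	if (pd)
		energy = em_pd_energy(pd, max_util, sum_util);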

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 include/linux/energy_model.h | 185 +++++++++++++++++++++++++++++++++
 kernel/power/Kconfig         |  15 +++
 kernel/power/Makefile        |   2 +
 kernel/power/energy_model.c  | 195 +++++++++++++++++++++++++++++++++++
 4 files changed, 397 insertions(+)
 create mode 100644 include/linux/energy_model.h
 create mode 100644 kernel/power/energy_model.c

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
new file mode 100644
index 000000000000..a472076f9c80
--- /dev/null
+++ b/include/linux/energy_model.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_ENERGY_MODEL_H
+#define _LINUX_ENERGY_MODEL_H
+#include <linux/cpumask.h>
+#include <linux/jump_label.h>
+#include <linux/kobject.h>
+#include <linux/rcupdate.h>
+#include <linux/sched/cpufreq.h>
+#include <linux/sched/topology.h>
+#include <linux/types.h>
+
+#ifdef CONFIG_ENERGY_MODEL
+/**
+ * em_cap_state - Capacity state of a performance domain
+ * @frequency:	The CPU frequency in KHz, for consistency with CPUFreq
+ * @power:	The power consumed by 1 CPU at this level, in milli-watts
+ * @cost:	The cost coefficient associated with this level, used during
+ *		energy calculation. Equal to: power * max_frequency / frequency
+ */
+struct em_cap_state {
+	unsigned long frequency;
+	unsigned long power;
+	unsigned long cost;
+};
+
+/**
+ * em_perf_domain - Performance domain
+ * @table:		List of capacity states, in ascending order
+ * @nr_cap_states:	Number of capacity states
+ * @cpus:		Cpumask covering the CPUs of the domain
+ *
+ * A "performance domain" represents a group of CPUs whose performance is
+ * scaled together. All CPUs of a performance domain must have the same
+ * micro-architecture. Performance domains often have a 1-to-1 mapping with
+ * CPUFreq policies.
+ */
+struct em_perf_domain {
+	struct em_cap_state *table;
+	int nr_cap_states;
+	unsigned long cpus[0];
+};
+
+#define EM_CPU_MAX_POWER 0xFFFF
+
+struct em_data_callback {
+	/**
+	 * active_power() - Provide power at the next capacity state of a CPU
+	 * @power	: Active power at the capacity state in mW (modified)
+	 * @freq	: Frequency at the capacity state in kHz (modified)
+	 * @cpu		: CPU for which we do this operation
+	 *
+	 * active_power() must find the lowest capacity state of 'cpu' above
+	 * 'freq' and update 'power' and 'freq' to the matching active power
+	 * and frequency.
+	 *
+	 * The power is the one of a single CPU in the domain, expressed in
+	 * milli-watts. It is expected to fit in the [0, EM_CPU_MAX_POWER]
+	 * range.
+	 *
+	 * Return 0 on success.
+	 */
+	int (*active_power)(unsigned long *power, unsigned long *freq, int cpu);
+};
+#define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb }
+
+struct em_perf_domain *em_cpu_get(int cpu);
+int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
+						struct em_data_callback *cb);
+
+/**
+ * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
+ * @pd		: performance domain for which energy has to be estimated
+ * @max_util	: highest utilization among CPUs of the domain
+ * @sum_util	: sum of the utilization of all CPUs in the domain
+ *
+ * Return: the sum of the energy consumed by the CPUs of the domain assuming
+ * a capacity state satisfying the max utilization of the domain.
+ */
+static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
+				unsigned long max_util, unsigned long sum_util)
+{
+	unsigned long freq, scale_cpu;
+	struct em_cap_state *cs;
+	int i, cpu;
+
+	/*
+	 * In order to predict the capacity state, map the utilization of the
+	 * most utilized CPU of the performance domain to a requested frequency,
+	 * like schedutil.
+	 */
+	cpu = cpumask_first(to_cpumask(pd->cpus));
+	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+	cs = &pd->table[pd->nr_cap_states - 1];
+	freq = map_util_freq(max_util, cs->frequency, scale_cpu);
+
+	/*
+	 * Find the lowest capacity state of the Energy Model above the
+	 * requested frequency.
+	 */
+	for (i = 0; i < pd->nr_cap_states; i++) {
+		cs = &pd->table[i];
+		if (cs->frequency >= freq)
+			break;
+	}
+
+	/*
+	 * The capacity of a CPU in the domain at that capacity state (cs)
+	 * can be computed as:
+	 *
+	 *             cs->freq * scale_cpu
+	 *   cs->cap = --------------------                          (1)
+	 *                 cpu_max_freq
+	 *
+	 * So, the energy consumed by this CPU at that capacity state is:
+	 *
+	 *             cs->power * cpu_util
+	 *   cpu_nrg = --------------------                          (2)
+	 *                   cs->cap
+	 *
+	 * since 'cpu_util / cs->cap' represents its percentage of busy time.
+	 *
+	 *   NOTE: Although the result of this computation actually is in
+	 *         units of power, it can be manipulated as an energy value
+	 *         over a scheduling period, since it is assumed to be
+	 *         constant during that interval.
+	 *
+	 * By injecting (1) in (2), 'cpu_nrg' can be re-expressed as a product
+	 * of two terms:
+	 *
+	 *             cs->power * cpu_max_freq   cpu_util
+	 *   cpu_nrg = ------------------------ * ---------          (3)
+	 *                    cs->freq            scale_cpu
+	 *
+	 * The first term is static, and is stored in the em_cap_state struct
+	 * as 'cs->cost'.
+	 *
+	 * Since all CPUs of the domain have the same micro-architecture, they
+	 * share the same 'cs->cost', and the same CPU capacity. Hence, the
+	 * total energy of the domain (which is the simple sum of the energy of
+	 * all of its CPUs) can be factorized as:
+	 *
+	 *            cs->cost * \Sum cpu_util
+	 *   pd_nrg = ------------------------                       (4)
+	 *                  scale_cpu
+	 */
+	return cs->cost * sum_util / scale_cpu;
+}
+
+/**
+ * em_pd_nr_cap_states() - Get the number of capacity states of a perf. domain
+ * @pd		: performance domain for which this must be done
+ *
+ * Return: the number of capacity states in the performance domain table
+ */
+static inline int em_pd_nr_cap_states(struct em_perf_domain *pd)
+{
+	return pd->nr_cap_states;
+}
+
+#else
+struct em_perf_domain {};
+struct em_data_callback {};
+#define EM_DATA_CB(_active_power_cb) { }
+
+static inline int em_register_perf_domain(cpumask_t *span,
+			unsigned int nr_states, struct em_data_callback *cb)
+{
+	return -EINVAL;
+}
+static inline struct em_perf_domain *em_cpu_get(int cpu)
+{
+	return NULL;
+}
+static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
+			unsigned long max_util, unsigned long sum_util)
+{
+	return 0;
+}
+static inline int em_pd_nr_cap_states(struct em_perf_domain *pd)
+{
+	return 0;
+}
+#endif
+
+#endif
diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index e880ca22c5a5..6f6db452dc7d 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -297,3 +297,18 @@ config PM_GENERIC_DOMAINS_OF
 
 config CPU_PM
 	bool
+
+config ENERGY_MODEL
+	bool "Energy Model for CPUs"
+	depends on SMP
+	depends on CPU_FREQ
+	default n
+	help
+	  Several subsystems (thermal and/or the task scheduler for example)
+	  can leverage information about the energy consumed by CPUs to make
+	  smarter decisions. This config option enables the framework from
+	  which subsystems can access the energy models.
+
+	  The exact usage of the energy model is subsystem-dependent.
+
+	  If in doubt, say N.
diff --git a/kernel/power/Makefile b/kernel/power/Makefile
index a3f79f0eef36..e7e47d9be1e5 100644
--- a/kernel/power/Makefile
+++ b/kernel/power/Makefile
@@ -15,3 +15,5 @@ obj-$(CONFIG_PM_AUTOSLEEP)	+= autosleep.o
 obj-$(CONFIG_PM_WAKELOCKS)	+= wakelock.o
 
 obj-$(CONFIG_MAGIC_SYSRQ)	+= poweroff.o
+
+obj-$(CONFIG_ENERGY_MODEL)	+= energy_model.o
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
new file mode 100644
index 000000000000..d77cd295f3e9
--- /dev/null
+++ b/kernel/power/energy_model.c
@@ -0,0 +1,195 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Energy Model of CPUs
+ *
+ * Copyright (c) 2018, Arm ltd.
+ * Written by: Quentin Perret, Arm ltd.
+ */
+
+#define pr_fmt(fmt) "energy_model: " fmt
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/energy_model.h>
+#include <linux/sched/topology.h>
+#include <linux/slab.h>
+
+/* Mapping of each CPU to the performance domain to which it belongs. */
+static DEFINE_PER_CPU(struct em_perf_domain *, em_data);
+
+/*
+ * Mutex serializing the registrations of performance domains and letting
+ * callbacks defined by drivers sleep.
+ */
+static DEFINE_MUTEX(em_pd_mutex);
+
+static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states,
+						struct em_data_callback *cb)
+{
+	unsigned long opp_eff, prev_opp_eff = ULONG_MAX;
+	unsigned long power, freq, prev_freq = 0;
+	int i, ret, cpu = cpumask_first(span);
+	struct em_cap_state *table;
+	struct em_perf_domain *pd;
+	u64 fmax;
+
+	if (!cb->active_power)
+		return NULL;
+
+	pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL);
+	if (!pd)
+		return NULL;
+
+	table = kcalloc(nr_states, sizeof(*table), GFP_KERNEL);
+	if (!table)
+		goto free_pd;
+
+	/* Build the list of capacity states for this performance domain */
+	for (i = 0, freq = 0; i < nr_states; i++, freq++) {
+		/*
+		 * active_power() is a driver callback which ceils 'freq' to
+		 * lowest capacity state of 'cpu' above 'freq' and updates
+		 * 'power' and 'freq' accordingly.
+		 */
+		ret = cb->active_power(&power, &freq, cpu);
+		if (ret) {
+			pr_err("pd%d: invalid cap. state: %d\n", cpu, ret);
+			goto free_cs_table;
+		}
+
+		/*
+		 * We expect the driver callback to increase the frequency for
+		 * higher capacity states.
+		 */
+		if (freq <= prev_freq) {
+			pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq);
+			goto free_cs_table;
+		}
+
+		/*
+		 * The power returned by active_state() is expected to be
+		 * positive, in milli-watts and to fit into 16 bits.
+		 */
+		if (!power || power > EM_CPU_MAX_POWER) {
+			pr_err("pd%d: invalid power: %lu\n", cpu, power);
+			goto free_cs_table;
+		}
+
+		table[i].power = power;
+		table[i].frequency = prev_freq = freq;
+
+		/*
+		 * The hertz/watts efficiency ratio should decrease as the
+		 * frequency grows on sane platforms. But this isn't always
+		 * true in practice so warn the user if a higher OPP is more
+		 * power efficient than a lower one.
+		 */
+		opp_eff = freq / power;
+		if (opp_eff >= prev_opp_eff)
+			pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: OPP%d >= OPP%d\n",
+					cpu, i, i - 1);
+		prev_opp_eff = opp_eff;
+	}
+
+	/* Compute the cost of each capacity_state. */
+	fmax = (u64) table[nr_states - 1].frequency;
+	for (i = 0; i < nr_states; i++) {
+		table[i].cost = div64_u64(fmax * table[i].power,
+					  table[i].frequency);
+	}
+
+	pd->table = table;
+	pd->nr_cap_states = nr_states;
+	cpumask_copy(to_cpumask(pd->cpus), span);
+
+	return pd;
+
+free_cs_table:
+	kfree(table);
+free_pd:
+	kfree(pd);
+
+	return NULL;
+}
+
+/**
+ * em_cpu_get() - Return the performance domain for a CPU
+ * @cpu : CPU to find the performance domain for
+ *
+ * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
+ * exist.
+ */
+struct em_perf_domain *em_cpu_get(int cpu)
+{
+	return READ_ONCE(per_cpu(em_data, cpu));
+}
+EXPORT_SYMBOL_GPL(em_cpu_get);
+
+/**
+ * em_register_perf_domain() - Register the Energy Model of a performance domain
+ * @span	: Mask of CPUs in the performance domain
+ * @nr_states	: Number of capacity states to register
+ * @cb		: Callback functions providing the data of the Energy Model
+ *
+ * Create Energy Model tables for a performance domain using the callbacks
+ * defined in cb.
+ *
+ * If multiple clients register the same performance domain, all but the first
+ * registration will be ignored.
+ *
+ * Return 0 on success
+ */
+int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
+						struct em_data_callback *cb)
+{
+	unsigned long cap, prev_cap = 0;
+	struct em_perf_domain *pd;
+	int cpu, ret = 0;
+
+	if (!span || !nr_states || !cb)
+		return -EINVAL;
+
+	/*
+	 * Use a mutex to serialize the registration of performance domains and
+	 * let the driver-defined callback functions sleep.
+	 */
+	mutex_lock(&em_pd_mutex);
+
+	for_each_cpu(cpu, span) {
+		/* Make sure we don't register again an existing domain. */
+		if (READ_ONCE(per_cpu(em_data, cpu))) {
+			ret = -EEXIST;
+			goto unlock;
+		}
+
+		/*
+		 * All CPUs of a domain must have the same micro-architecture
+		 * since they all share the same table.
+		 */
+		cap = arch_scale_cpu_capacity(NULL, cpu);
+		if (prev_cap && prev_cap != cap) {
+			pr_err("CPUs of %*pbl must have the same capacity\n",
+							cpumask_pr_args(span));
+			ret = -EINVAL;
+			goto unlock;
+		}
+		prev_cap = cap;
+	}
+
+	/* Create the performance domain and add it to the Energy Model. */
+	pd = em_create_pd(span, nr_states, cb);
+	if (!pd) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	for_each_cpu(cpu, span)
+		WRITE_ONCE(per_cpu(em_data, cpu), pd);
+
+	pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span));
+unlock:
+	mutex_unlock(&em_pd_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(em_register_perf_domain);
-- 
2.18.0



* [PATCH v7 04/14] PM / EM: Expose the Energy Model in sysfs
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (2 preceding siblings ...)
  2018-09-12  9:12 ` [PATCH v7 03/14] PM: Introduce an Energy Model management framework Quentin Perret
@ 2018-09-12  9:12 ` Quentin Perret
  2018-09-12  9:13 ` [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling Quentin Perret
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:12 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Expose the Energy Model (read-only) of all performance domains in sysfs
for convenience. To do so, add a kobject to the CPU subsystem under the
umbrella of which a kobject for each performance domain is attached.

For example, the resulting hierarchy is as follows on a platform with
two performance domains:

   /sys/devices/system/cpu/energy_model
   ├── pd0
   │   ├── cost
   │   ├── cpus
   │   ├── frequency
   │   └── power
   └── pd4
       ├── cost
       ├── cpus
       ├── frequency
       └── power

In this implementation, the kobject abstraction is only used as a
convenient way of exposing data to sysfs. However, it could also be
used in the future to allocate and release performance domains in a more
dynamic way using reference counting.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 include/linux/energy_model.h |  2 +
 kernel/power/energy_model.c  | 90 ++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index a472076f9c80..69184f4f4ae0 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -27,6 +27,7 @@ struct em_cap_state {
  * em_perf_domain - Performance domain
  * @table:		List of capacity states, in ascending order
  * @nr_cap_states:	Number of capacity states
+ * @kobj:		Kobject used to expose the domain in sysfs
  * @cpus:		Cpumask covering the CPUs of the domain
  *
  * A "performance domain" represents a group of CPUs whose performance is
@@ -37,6 +38,7 @@ struct em_cap_state {
 struct em_perf_domain {
 	struct em_cap_state *table;
 	int nr_cap_states;
+	struct kobject kobj;
 	unsigned long cpus[0];
 };
 
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index d77cd295f3e9..2934ebb09e9d 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -23,6 +23,82 @@ static DEFINE_PER_CPU(struct em_perf_domain *, em_data);
  */
 static DEFINE_MUTEX(em_pd_mutex);
 
+static struct kobject *em_kobject;
+
+/* Getters for the attributes of em_perf_domain objects */
+struct em_pd_attr {
+	struct attribute attr;
+	ssize_t (*show)(struct em_perf_domain *pd, char *buf);
+	ssize_t (*store)(struct em_perf_domain *pd, const char *buf, size_t s);
+};
+
+#define EM_ATTR_LEN 13
+#define show_table_attr(_attr) \
+static ssize_t show_##_attr(struct em_perf_domain *pd, char *buf) \
+{ \
+	ssize_t cnt = 0; \
+	int i; \
+	for (i = 0; i < pd->nr_cap_states; i++) { \
+		if (cnt >= (ssize_t) (PAGE_SIZE / sizeof(char) \
+				      - (EM_ATTR_LEN + 2))) \
+			goto out; \
+		cnt += scnprintf(&buf[cnt], EM_ATTR_LEN + 1, "%lu ", \
+				 pd->table[i]._attr); \
+	} \
+out: \
+	cnt += sprintf(&buf[cnt], "\n"); \
+	return cnt; \
+}
+
+show_table_attr(power);
+show_table_attr(frequency);
+show_table_attr(cost);
+
+static ssize_t show_cpus(struct em_perf_domain *pd, char *buf)
+{
+	return sprintf(buf, "%*pbl\n", cpumask_pr_args(to_cpumask(pd->cpus)));
+}
+
+#define pd_attr(_name) em_pd_##_name##_attr
+#define define_pd_attr(_name) static struct em_pd_attr pd_attr(_name) = \
+		__ATTR(_name, 0444, show_##_name, NULL)
+
+define_pd_attr(power);
+define_pd_attr(frequency);
+define_pd_attr(cost);
+define_pd_attr(cpus);
+
+static struct attribute *em_pd_default_attrs[] = {
+	&pd_attr(power).attr,
+	&pd_attr(frequency).attr,
+	&pd_attr(cost).attr,
+	&pd_attr(cpus).attr,
+	NULL
+};
+
+#define to_pd(k) container_of(k, struct em_perf_domain, kobj)
+#define to_pd_attr(a) container_of(a, struct em_pd_attr, attr)
+
+static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
+{
+	struct em_perf_domain *pd = to_pd(kobj);
+	struct em_pd_attr *pd_attr = to_pd_attr(attr);
+	ssize_t ret;
+
+	ret = pd_attr->show(pd, buf);
+
+	return ret;
+}
+
+static const struct sysfs_ops em_pd_sysfs_ops = {
+	.show	= show,
+};
+
+static struct kobj_type ktype_em_pd = {
+	.sysfs_ops	= &em_pd_sysfs_ops,
+	.default_attrs	= em_pd_default_attrs,
+};
+
 static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states,
 						struct em_data_callback *cb)
 {
@@ -102,6 +178,11 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states,
 	pd->nr_cap_states = nr_states;
 	cpumask_copy(to_cpumask(pd->cpus), span);
 
+	ret = kobject_init_and_add(&pd->kobj, &ktype_em_pd, em_kobject,
+				   "pd%u", cpu);
+	if (ret)
+		pr_err("pd%d: failed kobject_init_and_add(): %d\n", cpu, ret);
+
 	return pd;
 
 free_cs_table:
@@ -155,6 +236,15 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
 	 */
 	mutex_lock(&em_pd_mutex);
 
+	if (!em_kobject) {
+		em_kobject = kobject_create_and_add("energy_model",
+						&cpu_subsys.dev_root->kobj);
+		if (!em_kobject) {
+			ret = -ENODEV;
+			goto unlock;
+		}
+	}
+
 	for_each_cpu(cpu, span) {
 		/* Make sure we don't register again an existing domain. */
 		if (READ_ONCE(per_cpu(em_data, cpu))) {
-- 
2.18.0



* [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (3 preceding siblings ...)
  2018-09-12  9:12 ` [PATCH v7 04/14] PM / EM: Expose the Energy Model in sysfs Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-02 12:34   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available Quentin Perret
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

In order to make sure Energy Aware Scheduling (EAS) doesn't hurt
systems not using it, add a new sched_feat, called ENERGY_AWARE,
guarding access to the EAS code paths.
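
As a hedged illustration, later patches in this series are expected to
guard their energy-aware paths roughly as follows (rd->pd is the list of
performance domains attached to the root domain by patch 06):

	if (sched_feat(ENERGY_AWARE) && rcu_dereference(rd->pd)) {
		/* Take the energy-aware path. */
	}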

Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/features.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 85ae8488039c..26ff6752e492 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -90,3 +90,10 @@ SCHED_FEAT(WA_BIAS, true)
  * UtilEstimation. Use estimated CPU utilization.
  */
 SCHED_FEAT(UTIL_EST, true)
+
+/*
+ * Energy-Aware Scheduling. Whether or not tasks will be placed into an
+ * energy-aware fashion depends on this feature being enabled and on the
+ * root domain having an Energy Model attached.
+ */
+SCHED_FEAT(ENERGY_AWARE, false)
-- 
2.18.0



* [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (4 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-02 12:36   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 07/14] sched/topology: Lowest CPU asymmetry sched_domain level pointer Quentin Perret
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

The existing scheduling domain hierarchy is defined to map to the cache
topology of the system. However, Energy Aware Scheduling (EAS) requires
more knowledge about the platform, and specifically needs to know about
the span of Performance Domains (PD), which do not always align with
caches.

To address this issue, use the Energy Model (EM) of the system to extend
the scheduler topology code with a representation of the PDs, alongside
the scheduling domains. More specifically, a linked list of PDs is
attached to each root domain. When multiple root domains are in use,
each list contains only the PDs covering the CPUs of its root domain. If
a PD spans the CPUs of two different root domains, it will be
duplicated in both lists.

The lists are fully maintained by the scheduler from
partition_sched_domains() in order to cope with hotplug and cpuset
changes. As for scheduling domains, the lists are protected by RCU to
ensure safe concurrent updates.

The linked lists should be used only from code paths protected by the
ENERGY_AWARE sched_feat.
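
For illustration, a hedged sketch of how such a list is expected to be
walked from an RCU read-side critical section (as the EAS wake-up path
does later in this series):

	struct perf_domain *pd;

	rcu_read_lock();
	for (pd = rcu_dereference(cpu_rq(cpu)->rd->pd); pd; pd = pd->next) {
		/* Use perf_domain_span(pd), em_pd_energy(pd->obj, ...), etc. */
	}
	rcu_read_unlock();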

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/sched.h    |  21 +++++++
 kernel/sched/topology.c | 134 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 151 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d59effb34786..9922615592a8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
 #include <linux/ctype.h>
 #include <linux/debugfs.h>
 #include <linux/delayacct.h>
+#include <linux/energy_model.h>
 #include <linux/init_task.h>
 #include <linux/kprobes.h>
 #include <linux/kthread.h>
@@ -700,6 +701,12 @@ static inline bool sched_asym_prefer(int a, int b)
 	return arch_asym_cpu_priority(a) > arch_asym_cpu_priority(b);
 }
 
+struct perf_domain {
+	struct em_perf_domain *obj;
+	struct perf_domain *next;
+	struct rcu_head rcu;
+};
+
 /*
  * We add the notion of a root-domain which will be used to define per-domain
  * variables. Each exclusive cpuset essentially defines an island domain by
@@ -752,6 +759,12 @@ struct root_domain {
 	struct cpupri		cpupri;
 
 	unsigned long		max_cpu_capacity;
+
+	/*
+	 * NULL-terminated list of performance domains intersecting with the
+	 * CPUs of the rd. Protected by RCU.
+	 */
+	struct perf_domain	*pd;
 };
 
 extern struct root_domain def_root_domain;
@@ -2242,3 +2255,11 @@ unsigned long scale_irq_capacity(unsigned long util, unsigned long irq, unsigned
 	return util;
 }
 #endif
+
+#ifdef CONFIG_SMP
+#ifdef CONFIG_ENERGY_MODEL
+#define perf_domain_span(pd) (to_cpumask(((pd)->obj->cpus)))
+#else
+#define perf_domain_span(pd) NULL
+#endif
+#endif
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index c4444ed77a55..8019a9bf281d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -201,6 +201,116 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
 	return 1;
 }
 
+#ifdef CONFIG_ENERGY_MODEL
+static void free_pd(struct perf_domain *pd)
+{
+	struct perf_domain *tmp;
+
+	while (pd) {
+		tmp = pd->next;
+		kfree(pd);
+		pd = tmp;
+	}
+}
+
+static struct perf_domain *find_pd(struct perf_domain *pd, int cpu)
+{
+	while (pd) {
+		if (cpumask_test_cpu(cpu, perf_domain_span(pd)))
+			return pd;
+		pd = pd->next;
+	}
+
+	return NULL;
+}
+
+static struct perf_domain *pd_init(int cpu)
+{
+	struct em_perf_domain *obj = em_cpu_get(cpu);
+	struct perf_domain *pd;
+
+	if (!obj) {
+		if (sched_debug())
+			pr_info("%s: no EM found for CPU%d\n", __func__, cpu);
+		return NULL;
+	}
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		return NULL;
+	pd->obj = obj;
+
+	return pd;
+}
+
+static void perf_domain_debug(const struct cpumask *cpu_map,
+						struct perf_domain *pd)
+{
+	if (!sched_debug() || !pd)
+		return;
+
+	printk(KERN_DEBUG "root_domain %*pbl: ", cpumask_pr_args(cpu_map));
+
+	while (pd) {
+		printk(KERN_CONT " pd%d:{ cpus=%*pbl nr_cstate=%d }",
+				cpumask_first(perf_domain_span(pd)),
+				cpumask_pr_args(perf_domain_span(pd)),
+				em_pd_nr_cap_states(pd->obj));
+		pd = pd->next;
+	}
+
+	printk(KERN_CONT "\n");
+}
+
+static void destroy_perf_domain_rcu(struct rcu_head *rp)
+{
+	struct perf_domain *pd;
+
+	pd = container_of(rp, struct perf_domain, rcu);
+	free_pd(pd);
+}
+
+static void build_perf_domains(const struct cpumask *cpu_map)
+{
+	struct perf_domain *pd = NULL, *tmp;
+	int cpu = cpumask_first(cpu_map);
+	struct root_domain *rd = cpu_rq(cpu)->rd;
+	int i;
+
+	for_each_cpu(i, cpu_map) {
+		/* Skip already covered CPUs. */
+		if (find_pd(pd, i))
+			continue;
+
+		/* Create the new pd and add it to the local list. */
+		tmp = pd_init(i);
+		if (!tmp)
+			goto free;
+		tmp->next = pd;
+		pd = tmp;
+	}
+
+	perf_domain_debug(cpu_map, pd);
+
+	/* Attach the new list of performance domains to the root domain. */
+	tmp = rd->pd;
+	rcu_assign_pointer(rd->pd, pd);
+	if (tmp)
+		call_rcu(&tmp->rcu, destroy_perf_domain_rcu);
+
+	return;
+
+free:
+	free_pd(pd);
+	tmp = rd->pd;
+	rcu_assign_pointer(rd->pd, NULL);
+	if (tmp)
+		call_rcu(&tmp->rcu, destroy_perf_domain_rcu);
+}
+#else
+static void free_pd(struct perf_domain *pd) { }
+#endif /* CONFIG_ENERGY_MODEL */
+
 static void free_rootdomain(struct rcu_head *rcu)
 {
 	struct root_domain *rd = container_of(rcu, struct root_domain, rcu);
@@ -211,6 +321,7 @@ static void free_rootdomain(struct rcu_head *rcu)
 	free_cpumask_var(rd->rto_mask);
 	free_cpumask_var(rd->online);
 	free_cpumask_var(rd->span);
+	free_pd(rd->pd);
 	kfree(rd);
 }
 
@@ -1964,8 +2075,8 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	/* Destroy deleted domains: */
 	for (i = 0; i < ndoms_cur; i++) {
 		for (j = 0; j < n && !new_topology; j++) {
-			if (cpumask_equal(doms_cur[i], doms_new[j])
-			    && dattrs_equal(dattr_cur, i, dattr_new, j))
+			if (cpumask_equal(doms_cur[i], doms_new[j]) &&
+			    dattrs_equal(dattr_cur, i, dattr_new, j))
 				goto match1;
 		}
 		/* No match - a current sched domain not in new doms_new[] */
@@ -1985,8 +2096,8 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	/* Build new domains: */
 	for (i = 0; i < ndoms_new; i++) {
 		for (j = 0; j < n && !new_topology; j++) {
-			if (cpumask_equal(doms_new[i], doms_cur[j])
-			    && dattrs_equal(dattr_new, i, dattr_cur, j))
+			if (cpumask_equal(doms_new[i], doms_cur[j]) &&
+			    dattrs_equal(dattr_new, i, dattr_cur, j))
 				goto match2;
 		}
 		/* No match - add a new doms_new */
@@ -1995,6 +2106,21 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 		;
 	}
 
+#ifdef CONFIG_ENERGY_MODEL
+	/* Build perf. domains: */
+	for (i = 0; i < ndoms_new; i++) {
+		for (j = 0; j < n; j++) {
+			if (cpumask_equal(doms_new[i], doms_cur[j]) &&
+			    cpu_rq(cpumask_first(doms_cur[j]))->rd->pd)
+				goto match3;
+		}
+		/* No match - add perf. domains for a new rd */
+		build_perf_domains(doms_new[i]);
+match3:
+		;
+	}
+#endif
+
 	/* Remember the new sched domains: */
 	if (doms_cur != &fallback_doms)
 		free_sched_domains(doms_cur, ndoms_cur);
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 07/14] sched/topology: Lowest CPU asymmetry sched_domain level pointer
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (5 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-09-12  9:13 ` [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms Quentin Perret
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Add another member to the family of per-cpu sched_domain shortcut
pointers. This one, sd_asym_cpucapacity, points to the lowest level
at which the SD_ASYM_CPUCAPACITY flag is set. While at it, rename the
sd_asym shortcut to sd_asym_packing to avoid confusion.

Generally speaking, the largest opportunity to save energy via
scheduling comes from a smarter exploitation of heterogeneous platforms
(i.e. big.LITTLE). Consequently, the sd_asym_cpucapacity shortcut will
be used at first as the lowest domain where Energy-Aware Scheduling
(EAS) should be applied. For example, it is possible to apply EAS within
a socket on a multi-socket system, as long as each socket has an
asymmetric topology. Cross-socket wake-up balancing will only happen
when the system is over-utilized, or this_cpu and prev_cpu are in
different sockets.
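
As a usage sketch (hedged: this simply mirrors how a later patch in this
series dereferences the new shortcut; it is not part of this patch):

	struct sched_domain *sd;

	rcu_read_lock();
	/* Lowest domain with SD_ASYM_CPUCAPACITY covering this CPU, if any. */
	sd = rcu_dereference(*this_cpu_ptr(&sd_asym_cpucapacity));
	if (sd) {
		/* e.g. restrict an energy-aware CPU search to its span. */
	}
	rcu_read_unlock();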

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/fair.c     | 2 +-
 kernel/sched/sched.h    | 3 ++-
 kernel/sched/topology.c | 8 ++++++--
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c60616796483..c28c7adcc7b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9167,7 +9167,7 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym, cpu));
+	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
 		for_each_cpu(i, sched_domain_span(sd)) {
 			if (i == cpu ||
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9922615592a8..7ced27701454 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1203,7 +1203,8 @@ DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
 DECLARE_PER_CPU(struct sched_domain_shared *, sd_llc_shared);
 DECLARE_PER_CPU(struct sched_domain *, sd_numa);
-DECLARE_PER_CPU(struct sched_domain *, sd_asym);
+DECLARE_PER_CPU(struct sched_domain *, sd_asym_packing);
+DECLARE_PER_CPU(struct sched_domain *, sd_asym_cpucapacity);
 extern struct static_key_false sched_asym_cpucapacity;
 
 struct sched_group_capacity {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8019a9bf281d..10e37ffea19a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -508,7 +508,8 @@ DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
 DEFINE_PER_CPU(struct sched_domain_shared *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain *, sd_numa);
-DEFINE_PER_CPU(struct sched_domain *, sd_asym);
+DEFINE_PER_CPU(struct sched_domain *, sd_asym_packing);
+DEFINE_PER_CPU(struct sched_domain *, sd_asym_cpucapacity);
 DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
 
 static void update_top_cache_domain(int cpu)
@@ -534,7 +535,10 @@ static void update_top_cache_domain(int cpu)
 	rcu_assign_pointer(per_cpu(sd_numa, cpu), sd);
 
 	sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
-	rcu_assign_pointer(per_cpu(sd_asym, cpu), sd);
+	rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
+
+	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY);
+	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
 }
 
 /*
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (6 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 07/14] sched/topology: Lowest CPU asymmetry sched_domain level pointer Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-03 16:27   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 09/14] sched/fair: Clean-up update_sg_lb_stats parameters Quentin Perret
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Energy Aware Scheduling (EAS) in its current form is most relevant on
platforms with asymmetric CPU topologies (e.g. Arm big.LITTLE) since
this is where there is a lot of potential for saving energy through
scheduling. This is particularly true since the Energy Model only
includes the active power costs of CPUs, hence not providing enough data
to compare packing-vs-spreading strategies.

As such, disable EAS on root domains where the SD_ASYM_CPUCAPACITY flag
is not set. While at it, disable EAS on systems where the complexity of
the Energy Model is too high since that could lead to unacceptable
scheduling overhead.

All in all, EAS can be used on a root domain if and only if:
  1. the ENERGY_AWARE sched_feat is enabled;
  2. the root domain has an asymmetric CPU capacity topology;
  3. the complexity of the root domain's EM is low enough to keep
     scheduling overheads low.
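
As a rough worked example of the complexity check introduced below
(hedged: the topologies are invented): a big.LITTLE system with 8 CPUs
split into 2 performance domains of 5 OPPs each gives

	C = nr_pd * (nr_cpus + nr_cs) = 2 * (8 + 10) = 36

which is far below the 2048 threshold, so EAS stays enabled. A 64-CPU
server with per-CPU DVFS and 16 OPPs per CPU gives
C = 64 * (64 + 1024) = 69632, so EAS is disabled there.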

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/topology.c | 50 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 10e37ffea19a..0d18d69b719c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -270,12 +270,45 @@ static void destroy_perf_domain_rcu(struct rcu_head *rp)
 	free_pd(pd);
 }
 
+/*
+ * EAS can be used on a root domain if it meets all the following conditions:
+ *    1. the ENERGY_AWARE sched_feat is enabled;
+ *    2. the SD_ASYM_CPUCAPACITY flag is set in the sched_domain hierarchy.
+ *    3. the EM complexity is low enough to keep scheduling overheads low;
+ *
+ * The complexity of the Energy Model is defined as:
+ *
+ *              C = nr_pd * (nr_cpus + nr_cs)
+ *
+ * with parameters defined as:
+ *  - nr_pd:    the number of performance domains
+ *  - nr_cpus:  the number of CPUs
+ *  - nr_cs:    the sum of the number of capacity states of all performance
+ *              domains (for example, on a system with 2 performance domains,
+ *              with 10 capacity states each, nr_cs = 2 * 10 = 20).
+ *
+ * It is generally not a good idea to use such a model in the wake-up path on
+ * very complex platforms because of the associated scheduling overheads. The
+ * arbitrary constraint below prevents that. It makes EAS usable up to 16 CPUs
+ * with per-CPU DVFS and less than 8 capacity states each, for example.
+ */
+#define EM_MAX_COMPLEXITY 2048
+
 static void build_perf_domains(const struct cpumask *cpu_map)
 {
+	int i, nr_pd = 0, nr_cs = 0, nr_cpus = cpumask_weight(cpu_map);
 	struct perf_domain *pd = NULL, *tmp;
 	int cpu = cpumask_first(cpu_map);
 	struct root_domain *rd = cpu_rq(cpu)->rd;
-	int i;
+
+	/* EAS is enabled for asymmetric CPU capacity topologies. */
+	if (!per_cpu(sd_asym_cpucapacity, cpu)) {
+		if (sched_debug()) {
+			pr_info("rd %*pbl: CPUs do not have asymmetric capacities\n",
+					cpumask_pr_args(cpu_map));
+		}
+		goto free;
+	}
 
 	for_each_cpu(i, cpu_map) {
 		/* Skip already covered CPUs. */
@@ -288,6 +321,21 @@ static void build_perf_domains(const struct cpumask *cpu_map)
 			goto free;
 		tmp->next = pd;
 		pd = tmp;
+
+		/*
+		 * Count performance domains and capacity states for the
+		 * complexity check.
+		 */
+		nr_pd++;
+		nr_cs += em_pd_nr_cap_states(pd->obj);
+	}
+
+	/* Bail out if the Energy Model complexity is too high. */
+	if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) {
+		if (sched_debug())
+			pr_info("rd %*pbl: EM complexity is too high\n",
+						cpumask_pr_args(cpu_map));
+		goto free;
 	}
 
 	perf_domain_debug(cpu_map, pd);
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 09/14] sched/fair: Clean-up update_sg_lb_stats parameters
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (7 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-09-12  9:13 ` [PATCH v7 10/14] sched: Add over-utilization/tipping point indicator Quentin Perret
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

In preparation for the introduction of a new root domain flag which can
be set during load balance (the 'overutilized' flag), clean up the set
of parameters passed to update_sg_lb_stats(). More specifically, the
'local_group' and 'load_idx' parameters can be removed since they can
easily be reconstructed from within the function.

While at it, transform the 'overload' parameter into a flag stored in
the 'sg_status' parameter hence facilitating the definition of new flags
when needed.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/fair.c  | 27 +++++++++++----------------
 kernel/sched/sched.h |  3 +++
 2 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c28c7adcc7b3..23381feae4ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7782,16 +7782,16 @@ static bool update_nohz_stats(struct rq *rq, bool force)
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
  * @env: The load balancing environment.
  * @group: sched_group whose statistics are to be updated.
- * @load_idx: Load index of sched_domain of this_cpu for load calc.
- * @local_group: Does group contain this_cpu.
  * @sgs: variable to hold the statistics for this group.
- * @overload: Indicate pullable load (e.g. >1 runnable task).
+ * @sg_status: Holds flag indicating the status of the sched_group
  */
 static inline void update_sg_lb_stats(struct lb_env *env,
-			struct sched_group *group, int load_idx,
-			int local_group, struct sg_lb_stats *sgs,
-			bool *overload)
+				      struct sched_group *group,
+				      struct sg_lb_stats *sgs,
+				      int *sg_status)
 {
+	int local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
+	int load_idx = get_sd_load_idx(env->sd, env->idle);
 	unsigned long load;
 	int i, nr_running;
 
@@ -7815,7 +7815,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 		nr_running = rq->nr_running;
 		if (nr_running > 1)
-			*overload = true;
+			*sg_status |= SG_OVERLOAD;
 
 #ifdef CONFIG_NUMA_BALANCING
 		sgs->nr_numa_running += rq->nr_numa_running;
@@ -7831,7 +7831,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
 		    sgs->group_misfit_task_load < rq->misfit_task_load) {
 			sgs->group_misfit_task_load = rq->misfit_task_load;
-			*overload = 1;
+			*sg_status |= SG_OVERLOAD;
 		}
 	}
 
@@ -7976,17 +7976,14 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 	struct sched_group *sg = env->sd->groups;
 	struct sg_lb_stats *local = &sds->local_stat;
 	struct sg_lb_stats tmp_sgs;
-	int load_idx;
-	bool overload = false;
 	bool prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
+	int sg_status = 0;
 
 #ifdef CONFIG_NO_HZ_COMMON
 	if (env->idle == CPU_NEWLY_IDLE && READ_ONCE(nohz.has_blocked))
 		env->flags |= LBF_NOHZ_STATS;
 #endif
 
-	load_idx = get_sd_load_idx(env->sd, env->idle);
-
 	do {
 		struct sg_lb_stats *sgs = &tmp_sgs;
 		int local_group;
@@ -8001,8 +7998,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 				update_group_capacity(env->sd, env->dst_cpu);
 		}
 
-		update_sg_lb_stats(env, sg, load_idx, local_group, sgs,
-						&overload);
+		update_sg_lb_stats(env, sg, sgs, &sg_status);
 
 		if (local_group)
 			goto next_group;
@@ -8052,8 +8048,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 	if (!env->sd->parent) {
 		/* update overload indicator if we are at root domain */
-		if (READ_ONCE(env->dst_rq->rd->overload) != overload)
-			WRITE_ONCE(env->dst_rq->rd->overload, overload);
+		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
 	}
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7ced27701454..bee902d46d35 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -707,6 +707,9 @@ struct perf_domain {
 	struct rcu_head rcu;
 };
 
+/* Scheduling group status flags */
+#define SG_OVERLOAD		0x1 /* More than one runnable task on a CPU. */
+
 /*
  * We add the notion of a root-domain which will be used to define per-domain
  * variables. Each exclusive cpuset essentially defines an island domain by
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 10/14] sched: Add over-utilization/tipping point indicator
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (8 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 09/14] sched/fair: Clean-up update_sg_lb_stats parameters Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-09-12  9:13 ` [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function Quentin Perret
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

From: Morten Rasmussen <morten.rasmussen@arm.com>

Energy-aware scheduling is only meant to be active while the system is
_not_ over-utilized. That is, there are spare cycles available to shift
tasks around based on their actual utilization to get a more
energy-efficient task distribution without depriving any tasks. When
above the tipping point, task placement is done the traditional way based
on load_avg, spreading the tasks across as many cpus as possible based
on priority-scaled load to preserve smp_nice. Below the tipping point we
want to use util_avg instead. We need to define a criterion for when we
make the switch.

The util_avg for each cpu converges towards 100% regardless of how many
additional tasks we may put on it. If we define over-utilized as:

sum_{cpus}(rq.cfs.avg.util_avg) + margin > sum_{cpus}(rq.capacity)

some individual cpus may be over-utilized running multiple tasks even
when the above condition is false. That should be okay as long as we try
to spread the tasks out to avoid per-cpu over-utilization as much as
possible and if all tasks have the _same_ priority. If the latter isn't
true, we have to consider priority to preserve smp_nice.

For example, we could have n_cpus nice=-10 util_avg=55% tasks and
n_cpus/2 nice=0 util_avg=60% tasks. Balancing based on util_avg, we are
likely to end up with nice=-10 tasks sharing cpus and nice=0 tasks
getting their own as we have 1.5*n_cpus tasks in total and 55%+55% is less
over-utilized than 55%+60% for those cpus that have to be shared. The
system utilization is only 85% of the system capacity, but we are
breaking smp_nice.

To be sure not to break smp_nice, we have defined over-utilization
conservatively as when any cpu in the system is fully utilized at its
highest frequency instead:

cpu_rq(any).cfs.avg.util_avg + margin > cpu_rq(any).capacity

IOW, as soon as one cpu is (nearly) 100% utilized, we switch to load_avg
to factor in priority to preserve smp_nice.

With this definition, we can skip periodic load-balance as no cpu has an
always-running task when the system is not over-utilized. All tasks will
be periodic and we can balance them at wake-up. This conservative
condition does, however, mean that some scenarios that could benefit from
energy-aware decisions even if one cpu is fully utilized would not get
those benefits.

For systems where some cpus might have reduced capacity (RT-pressure
and/or big.LITTLE), we want periodic load-balance checks as soon as just
a single cpu is fully utilized, as it might be one of those with reduced
capacity, and in that case we want to migrate it.
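
As a minimal sketch of what the per-CPU check below boils down to
(hedged: capacity_margin defaults to 1280 in kernel/sched/fair.c, i.e.
~20% headroom; the numbers are illustrative only):

	/* cpu_overutilized(): capacity * 1024 < util * capacity_margin */
	static bool overutilized(unsigned long util, unsigned long capacity)
	{
		return capacity * 1024 < util * 1280;
	}

	/*
	 * With capacity = 1024, a CPU is flagged over-utilized once its
	 * util exceeds ~819, i.e. roughly 80% of its capacity.
	 */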

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
[ Added a comment explaining why new tasks are not accounted during
  overutilization detection ]
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/fair.c  | 59 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h |  4 +++
 2 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 23381feae4ec..648482f35458 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5001,6 +5001,24 @@ static inline void hrtick_update(struct rq *rq)
 }
 #endif
 
+#ifdef CONFIG_SMP
+static inline unsigned long cpu_util(int cpu);
+static unsigned long capacity_of(int cpu);
+
+static inline bool cpu_overutilized(int cpu)
+{
+	return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
+}
+
+static inline void update_overutilized_status(struct rq *rq)
+{
+	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
+		WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
+}
+#else
+static inline void update_overutilized_status(struct rq *rq) { }
+#endif
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -5058,8 +5076,26 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		update_cfs_group(se);
 	}
 
-	if (!se)
+	if (!se) {
 		add_nr_running(rq, 1);
+		/*
+		 * Since new tasks are assigned an initial util_avg equal to
+		 * half of the spare capacity of their CPU, tiny tasks have the
+		 * ability to cross the overutilized threshold, which will
+		 * result in the load balancer ruining all the task placement
+		 * done by EAS. As a way to mitigate that effect, do not account
+		 * for the first enqueue operation of new tasks during the
+		 * overutilized flag detection.
+		 *
+		 * A better way of solving this problem would be to wait for
+		 * the PELT signals of tasks to converge before taking them
+		 * into account, but that is not straightforward to implement,
+		 * and the following generally works well enough in practice.
+		 */
+		if (flags & ENQUEUE_WAKEUP)
+			update_overutilized_status(rq);
+
+	}
 
 	hrtick_update(rq);
 }
@@ -7817,6 +7853,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (nr_running > 1)
 			*sg_status |= SG_OVERLOAD;
 
+		if (cpu_overutilized(i))
+			*sg_status |= SG_OVERUTILIZED;
+
 #ifdef CONFIG_NUMA_BALANCING
 		sgs->nr_numa_running += rq->nr_numa_running;
 		sgs->nr_preferred_running += rq->nr_preferred_running;
@@ -8047,8 +8086,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		env->fbq_type = fbq_classify_group(&sds->busiest_stat);
 
 	if (!env->sd->parent) {
+		struct root_domain *rd = env->dst_rq->rd;
+
 		/* update overload indicator if we are at root domain */
-		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
+		WRITE_ONCE(rd->overload, sg_status & SG_OVERLOAD);
+
+		/* Update over-utilization (tipping point, U >= 0) indicator */
+		WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
+	} else if (sg_status & SG_OVERUTILIZED) {
+		WRITE_ONCE(env->dst_rq->rd->overutilized, SG_OVERUTILIZED);
 	}
 }
 
@@ -8275,6 +8321,14 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
+
+	if (sched_feat(ENERGY_AWARE)) {
+		struct root_domain *rd = env->dst_rq->rd;
+
+		if (rcu_dereference(rd->pd) && !READ_ONCE(rd->overutilized))
+			goto out_balanced;
+	}
+
 	local = &sds.local_stat;
 	busiest = &sds.busiest_stat;
 
@@ -9666,6 +9720,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		task_tick_numa(rq, curr);
 
 	update_misfit_status(curr, rq);
+	update_overutilized_status(task_rq(curr));
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bee902d46d35..309c0287004c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -709,6 +709,7 @@ struct perf_domain {
 
 /* Scheduling group status flags */
 #define SG_OVERLOAD		0x1 /* More than one runnable task on a CPU. */
+#define SG_OVERUTILIZED		0x2 /* One or more CPUs are over-utilized. */
 
 /*
  * We add the notion of a root-domain which will be used to define per-domain
@@ -732,6 +733,9 @@ struct root_domain {
 	 */
 	int			overload;
 
+	/* Indicate one or more cpus over-utilized (tipping point) */
+	int			overutilized;
+
 	/*
 	 * The bit corresponding to a CPU gets set here if such CPU has more
 	 * than one runnable -deadline task (as it is below for RT tasks).
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (9 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 10/14] sched: Add over-utilization/tipping point indicator Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-04  8:34   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up Quentin Perret
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

In preparation for the definition of an energy-aware wakeup path,
introduce a helper function to estimate the consequence on system energy
when a specific task wakes up on a specific CPU. compute_energy()
estimates the capacity state to be reached by all performance domains
and estimates the consumption of each online CPU according to its Energy
Model and its percentage of busy time.
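
As a rough sketch of the estimate (hedged: the exact arithmetic lives in
em_pd_energy(), introduced by the Energy Model patch earlier in this
series), each performance domain contributes the cost of the capacity
state needed to serve its busiest CPU, scaled by the domain's total busy
time, and compute_energy() sums those contributions:

	energy = sum over each pd of (cs->cost * sum_util / scale_cpu)

where 'cs' is the lowest capacity state whose frequency covers the
request derived from max_util, 'sum_util' the summed utilization of the
CPUs spanned by the domain, and 'scale_cpu' their original capacity.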

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/fair.c | 77 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 648482f35458..7ee3e43cdaf2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6262,6 +6262,83 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 	return !task_fits_capacity(p, min_cap);
 }
 
+/*
+ * Predicts what cpu_util(@cpu) would return if @p was migrated (and enqueued)
+ * to @dst_cpu.
+ */
+static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
+{
+	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
+	unsigned long util_est, util = READ_ONCE(cfs_rq->avg.util_avg);
+
+	/*
+	 * If @p migrates from @cpu to another, remove its contribution. Or,
+	 * if @p migrates from another CPU to @cpu, add its contribution. In
+	 * the other cases, @cpu is not impacted by the migration, so the
+	 * util_avg should already be correct.
+	 */
+	if (task_cpu(p) == cpu && dst_cpu != cpu)
+		util = max_t(long, util - task_util(p), 0);
+	else if (task_cpu(p) != cpu && dst_cpu == cpu)
+		util += task_util(p);
+
+	if (sched_feat(UTIL_EST)) {
+		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
+
+		/*
+		 * During wake-up, the task isn't enqueued yet and doesn't
+		 * appear in the cfs_rq->avg.util_est.enqueued of any rq,
+		 * so just add it (if needed) to "simulate" what will be
+		 * cpu_util() after the task has been enqueued.
+		 */
+		if (dst_cpu == cpu)
+			util_est += _task_util_est(p);
+
+		util = max(util, util_est);
+	}
+
+	return min_t(unsigned long, util, capacity_orig_of(cpu));
+}
+
+/*
+ * compute_energy(): Estimates the energy that would be consumed if @p was
+ * migrated to @dst_cpu. compute_energy() predicts what will be the utilization
+ * landscape of the CPUs after the task migration, and uses the Energy Model
+ * to compute what would be the energy if we decided to actually migrate that
+ * task.
+ */
+static long compute_energy(struct task_struct *p, int dst_cpu,
+							struct perf_domain *pd)
+{
+	long util, max_util, sum_util, energy = 0;
+	int cpu;
+
+	while (pd) {
+		max_util = sum_util = 0;
+		/*
+		 * The capacity state of CPUs of the current rd can be driven by
+		 * CPUs of another rd if they belong to the same performance
+		 * domain. So, account for the utilization of these CPUs too
+		 * by masking pd with cpu_online_mask instead of the rd span.
+		 *
+		 * If an entire performance domain is outside of the current rd,
+		 * it will not appear in its pd list and will not be accounted
+		 * by compute_energy().
+		 */
+		for_each_cpu_and(cpu, perf_domain_span(pd), cpu_online_mask) {
+			util = cpu_util_next(cpu, p, dst_cpu);
+			util = schedutil_freq_util(cpu, util, ENERGY_UTIL);
+			max_util = max(util, max_util);
+			sum_util += util;
+		}
+
+		energy += em_pd_energy(pd->obj, max_util, sum_util);
+		pd = pd->next;
+	}
+
+	return energy;
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (10 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-04  9:44   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil Quentin Perret
  2018-09-12  9:13 ` [PATCH v7 14/14] OPTIONAL: cpufreq: dt: Register an Energy Model Quentin Perret
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

If an Energy Model (EM) is available and if the system isn't
overutilized, re-route waking tasks into an energy-aware placement
algorithm. The selection of an energy-efficient CPU for a task
is achieved by estimating the impact on system-level active energy
resulting from the placement of the task on the CPU with the highest
spare capacity in each performance domain. This strategy spreads tasks
in a performance domain and avoids overly aggressive task packing. The
best CPU energy-wise is then selected if it saves a large enough amount
of energy with respect to prev_cpu.

Although it has already shown significant benefits on some existing
targets, this approach cannot scale to platforms with numerous CPUs.
This is an attempt to do something useful, as writing a fast heuristic
that performs reasonably well on a broad spectrum of architectures isn't
an easy task. As such, the scope of usability of the energy-aware
wake-up path is restricted to systems with the SD_ASYM_CPUCAPACITY flag
set, and where the EM isn't too complex.
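
As an illustration of the final selection step (hedged: the energy
figures are made up, only the 'prev_energy >> 4' threshold comes from the
code below):

	(prev_energy - best_energy) > (prev_energy >> 4)
	(1600 - 1480) = 120 > 100  -> migrate to best_energy_cpu
	(1600 - 1550) =  50 <= 100 -> stick with prev_cpu

i.e. the alternative CPU is only picked when it is predicted to save more
than ~6% of the energy spent by keeping the task on prev_cpu.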

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 kernel/sched/fair.c | 139 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7ee3e43cdaf2..6e988a01011c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6339,6 +6339,114 @@ static long compute_energy(struct task_struct *p, int dst_cpu,
 	return energy;
 }
 
+/*
+ * find_energy_efficient_cpu(): Find most energy-efficient target CPU for the
+ * waking task. find_energy_efficient_cpu() looks for the CPU with maximum
+ * spare capacity in each performance domain and uses it as a potential
+ * candidate to execute the task. Then, it uses the Energy Model to figure
+ * out which of the CPU candidates is the most energy-efficient.
+ *
+ * The rationale for this heuristic is as follows. In a performance domain,
+ * all the most energy efficient CPU candidates (according to the Energy
+ * Model) are those for which we'll request a low frequency. When there are
+ * several CPUs for which the frequency request will be the same, we don't
+ * have enough data to break the tie between them, because the Energy Model
+ * only includes active power costs. With this model, if we assume that
+ * frequency requests follow utilization (e.g. using schedutil), the CPU with
+ * the maximum spare capacity in a performance domain is guaranteed to be among
+ * the best candidates of the performance domain.
+ *
+ * In practice, it could be preferable from an energy standpoint to pack
+ * small tasks on a CPU in order to let other CPUs go in deeper idle states,
+ * but that could also hurt our chances to go cluster idle, and we have no
+ * ways to tell with the current Energy Model if this is actually a good
+ * idea or not. So, find_energy_efficient_cpu() basically favors
+ * cluster-packing, and spreading inside a cluster. That should at least be
+ * a good thing for latency, and this is consistent with the idea that most
+ * of the energy savings of EAS come from the asymmetry of the system, and
+ * not so much from breaking the tie between identical CPUs. That's also the
+ * reason why EAS is enabled in the topology code only for systems where
+ * SD_ASYM_CPUCAPACITY is set.
+ */
+static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu,
+							struct perf_domain *pd)
+{
+	unsigned long prev_energy = ULONG_MAX, best_energy = ULONG_MAX;
+	int cpu, best_energy_cpu = prev_cpu;
+	struct perf_domain *head = pd;
+	unsigned long cpu_cap, util;
+	struct sched_domain *sd;
+
+	sync_entity_load_avg(&p->se);
+
+	if (!task_util_est(p))
+		return prev_cpu;
+
+	/*
+	 * Energy-aware wake-up happens on the lowest sched_domain starting
+	 * from sd_asym_cpucapacity spanning over this_cpu and prev_cpu.
+	 */
+	sd = rcu_dereference(*this_cpu_ptr(&sd_asym_cpucapacity));
+	while (sd && !cpumask_test_cpu(prev_cpu, sched_domain_span(sd)))
+		sd = sd->parent;
+	if (!sd)
+		return prev_cpu;
+
+	while (pd) {
+		unsigned long cur_energy, spare_cap, max_spare_cap = 0;
+		int max_spare_cap_cpu = -1;
+
+		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
+			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+				continue;
+
+			/* Skip CPUs that will be overutilized. */
+			util = cpu_util_next(cpu, p, cpu);
+			cpu_cap = capacity_of(cpu);
+			if (cpu_cap * 1024 < util * capacity_margin)
+				continue;
+
+			/* Always use prev_cpu as a candidate. */
+			if (cpu == prev_cpu) {
+				prev_energy = compute_energy(p, prev_cpu, head);
+				if (prev_energy < best_energy)
+					best_energy = prev_energy;
+				continue;
+			}
+
+			/*
+			 * Find the CPU with the maximum spare capacity in
+			 * the performance domain
+			 */
+			spare_cap = cpu_cap - util;
+			if (spare_cap > max_spare_cap) {
+				max_spare_cap = spare_cap;
+				max_spare_cap_cpu = cpu;
+			}
+		}
+
+		/* Evaluate the energy impact of using this CPU. */
+		if (max_spare_cap_cpu >= 0) {
+			cur_energy = compute_energy(p, max_spare_cap_cpu, head);
+			if (cur_energy < best_energy) {
+				best_energy = cur_energy;
+				best_energy_cpu = max_spare_cap_cpu;
+			}
+		}
+		pd = pd->next;
+	}
+
+	/*
+	 * Pick the best CPU if prev_cpu cannot be used, or if it saves at
+	 * least 6% of the energy used by prev_cpu.
+	 */
+	if (prev_energy == ULONG_MAX ||
+			(prev_energy - best_energy) > (prev_energy >> 4))
+		return best_energy_cpu;
+
+	return prev_cpu;
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
@@ -6360,13 +6468,37 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 	int want_affine = 0;
 	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
 
+	rcu_read_lock();
 	if (sd_flag & SD_BALANCE_WAKE) {
 		record_wakee(p);
-		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
-			      && cpumask_test_cpu(cpu, &p->cpus_allowed);
+
+		/*
+		 * Forkees are not accepted in the energy-aware wake-up path
+		 * because they don't have any useful utilization data yet and
+		 * it's not possible to forecast their impact on energy
+		 * consumption. Consequently, they will be placed by
+		 * find_idlest_cpu() on the least loaded CPU, which might turn
+		 * out to be energy-inefficient in some use-cases. The
+		 * alternative would be to bias new tasks towards specific types
+		 * of CPUs first, or to try to infer their util_avg from the
+		 * parent task, but those heuristics could hurt other use-cases
+		 * too. So, until someone finds a better way to solve this,
+		 * let's keep things simple by re-using the existing slow path.
+		 */
+		if (sched_feat(ENERGY_AWARE)) {
+			struct root_domain *rd = cpu_rq(cpu)->rd;
+			struct perf_domain *pd = rcu_dereference(rd->pd);
+
+			if (pd && !READ_ONCE(rd->overutilized)) {
+				new_cpu = find_energy_efficient_cpu(p, prev_cpu, pd);
+				goto unlock;
+			}
+		}
+
+		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
+			      cpumask_test_cpu(cpu, &p->cpus_allowed);
 	}
 
-	rcu_read_lock();
 	for_each_domain(cpu, tmp) {
 		if (!(tmp->flags & SD_LOAD_BALANCE))
 			break;
@@ -6401,6 +6533,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 		if (want_affine)
 			current->recent_used_cpu = cpu;
 	}
+unlock:
 	rcu_read_unlock();
 
 	return new_cpu;
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (11 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  2018-10-04 10:50   ` Peter Zijlstra
  2018-09-12  9:13 ` [PATCH v7 14/14] OPTIONAL: cpufreq: dt: Register an Energy Model Quentin Perret
  13 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

Energy Aware Scheduling (EAS) is designed with the assumption that
frequencies of CPUs follow their utilization value. When using a CPUFreq
governor other than schedutil, the chances of this assumption being true
are small, if any. When schedutil is being used, EAS' predictions are at
least consistent with the frequency requests. Although those requests
have no guarantees to be honored by the hardware, they should at least
guide DVFS in the right direction and provide some hope with regard to the
EAS model being accurate.

To make sure EAS is only used in a sane configuration, create a strong
dependency on schedutil being used. Since having sugov compiled-in does
not provide that guarantee, make CPUFreq call a scheduler function on
governor changes, hence letting it rebuild the scheduling domains,
check the governors of the online CPUs, and enable/disable EAS
accordingly.
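
As a sketch of the resulting call flow on a governor change (hedged: only
the hook and the worker below are added by this patch, the rest is
existing cpufreq/cpuset machinery):

	cpufreq_set_policy()
	  -> sched_cpufreq_governor_change()       /* new hook */
	    -> schedule_work(&rebuild_sd_work)     /* avoids nested cpu_hotplug_lock */
	      -> rebuild_sched_domains()
	        -> ... -> build_perf_domains()     /* re-checks schedutil_gov per CPU */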

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 drivers/cpufreq/cpufreq.c        |  2 ++
 include/linux/sched/cpufreq.h    |  9 ++++++++
 kernel/sched/cpufreq_schedutil.c | 37 ++++++++++++++++++++++++++++++--
 kernel/sched/sched.h             |  4 +---
 kernel/sched/topology.c          | 23 ++++++++++++++++----
 5 files changed, 66 insertions(+), 9 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index b0dfd3222013..c0d21eb900e3 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -25,6 +25,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <linux/sched/cpufreq.h>
 #include <linux/slab.h>
 #include <linux/suspend.h>
 #include <linux/syscore_ops.h>
@@ -2271,6 +2272,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
 		ret = cpufreq_start_governor(policy);
 		if (!ret) {
 			pr_debug("cpufreq: governor change\n");
+			sched_cpufreq_governor_change(policy, old_gov);
 			return 0;
 		}
 		cpufreq_exit_governor(policy);
diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index afa940cd50dc..a2ead52feb17 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_SCHED_CPUFREQ_H
 #define _LINUX_SCHED_CPUFREQ_H
 
+#include <linux/cpufreq.h>
 #include <linux/types.h>
 
 /*
@@ -28,4 +29,12 @@ static inline unsigned long map_util_freq(unsigned long util,
 }
 #endif /* CONFIG_CPU_FREQ */
 
+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
+void sched_cpufreq_governor_change(struct cpufreq_policy *policy,
+			struct cpufreq_governor *old_gov);
+#else
+static inline void sched_cpufreq_governor_change(struct cpufreq_policy *policy,
+			struct cpufreq_governor *old_gov) { }
+#endif
+
 #endif /* _LINUX_SCHED_CPUFREQ_H */
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 8356cb0072a6..d2236d1166f4 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -632,7 +632,7 @@ static struct kobj_type sugov_tunables_ktype = {
 
 /********************** cpufreq governor interface *********************/
 
-static struct cpufreq_governor schedutil_gov;
+struct cpufreq_governor schedutil_gov;
 
 static struct sugov_policy *sugov_policy_alloc(struct cpufreq_policy *policy)
 {
@@ -891,7 +891,7 @@ static void sugov_limits(struct cpufreq_policy *policy)
 	sg_policy->need_freq_update = true;
 }
 
-static struct cpufreq_governor schedutil_gov = {
+struct cpufreq_governor schedutil_gov = {
 	.name			= "schedutil",
 	.owner			= THIS_MODULE,
 	.dynamic_switching	= true,
@@ -914,3 +914,36 @@ static int __init sugov_register(void)
 	return cpufreq_register_governor(&schedutil_gov);
 }
 fs_initcall(sugov_register);
+
+#ifdef CONFIG_ENERGY_MODEL
+extern bool sched_energy_update;
+static DEFINE_MUTEX(rebuild_sd_mutex);
+
+static void rebuild_sd_workfn(struct work_struct *work)
+{
+	mutex_lock(&rebuild_sd_mutex);
+	sched_energy_update = true;
+	rebuild_sched_domains();
+	sched_energy_update = false;
+	mutex_unlock(&rebuild_sd_mutex);
+}
+static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
+
+/*
+ * EAS shouldn't be attempted without sugov, so rebuild the sched_domains
+ * on governor changes to make sure the scheduler knows about it.
+ */
+void sched_cpufreq_governor_change(struct cpufreq_policy *policy,
+				  struct cpufreq_governor *old_gov)
+{
+	if (old_gov == &schedutil_gov || policy->governor == &schedutil_gov) {
+		/*
+		 * When called from the cpufreq_register_driver() path, the
+		 * cpu_hotplug_lock is already held, so use a work item to
+		 * avoid nested locking in rebuild_sched_domains().
+		 */
+		schedule_work(&rebuild_sd_work);
+	}
+
+}
+#endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 309c0287004c..b16417beb92e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2264,10 +2264,8 @@ unsigned long scale_irq_capacity(unsigned long util, unsigned long irq, unsigned
 }
 #endif
 
-#ifdef CONFIG_SMP
-#ifdef CONFIG_ENERGY_MODEL
+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
 #define perf_domain_span(pd) (to_cpumask(((pd)->obj->cpus)))
 #else
 #define perf_domain_span(pd) NULL
 #endif
-#endif
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 0d18d69b719c..3fb32de1bb28 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -201,7 +201,9 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
 	return 1;
 }
 
-#ifdef CONFIG_ENERGY_MODEL
+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
+bool sched_energy_update;
+
 static void free_pd(struct perf_domain *pd)
 {
 	struct perf_domain *tmp;
@@ -275,6 +277,7 @@ static void destroy_perf_domain_rcu(struct rcu_head *rp)
  *    1. the ENERGY_AWARE sched_feat is enabled;
  *    2. the SD_ASYM_CPUCAPACITY flag is set in the sched_domain hierarchy.
  *    3. the EM complexity is low enough to keep scheduling overheads low;
+ *    4. schedutil is driving the frequency of all CPUs of the rd;
  *
  * The complexity of the Energy Model is defined as:
  *
@@ -294,12 +297,15 @@ static void destroy_perf_domain_rcu(struct rcu_head *rp)
  */
 #define EM_MAX_COMPLEXITY 2048
 
+extern struct cpufreq_governor schedutil_gov;
 static void build_perf_domains(const struct cpumask *cpu_map)
 {
 	int i, nr_pd = 0, nr_cs = 0, nr_cpus = cpumask_weight(cpu_map);
 	struct perf_domain *pd = NULL, *tmp;
 	int cpu = cpumask_first(cpu_map);
 	struct root_domain *rd = cpu_rq(cpu)->rd;
+	struct cpufreq_policy *policy;
+	struct cpufreq_governor *gov;
 
 	/* EAS is enabled for asymmetric CPU capacity topologies. */
 	if (!per_cpu(sd_asym_cpucapacity, cpu)) {
@@ -315,6 +321,15 @@ static void build_perf_domains(const struct cpumask *cpu_map)
 		if (find_pd(pd, i))
 			continue;
 
+		/* Do not attempt EAS if schedutil is not being used. */
+		policy = cpufreq_cpu_get(i);
+		if (!policy)
+			goto free;
+		gov = policy->governor;
+		cpufreq_cpu_put(policy);
+		if (gov != &schedutil_gov)
+			goto free;
+
 		/* Create the new pd and add it to the local list. */
 		tmp = pd_init(i);
 		if (!tmp)
@@ -357,7 +372,7 @@ static void build_perf_domains(const struct cpumask *cpu_map)
 }
 #else
 static void free_pd(struct perf_domain *pd) { }
-#endif /* CONFIG_ENERGY_MODEL */
+#endif /* CONFIG_ENERGY_MODEL && CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
 
 static void free_rootdomain(struct rcu_head *rcu)
 {
@@ -2158,10 +2173,10 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 		;
 	}
 
-#ifdef CONFIG_ENERGY_MODEL
+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
 	/* Build perf. domains: */
 	for (i = 0; i < ndoms_new; i++) {
-		for (j = 0; j < n; j++) {
+		for (j = 0; j < n && !sched_energy_update; j++) {
 			if (cpumask_equal(doms_new[i], doms_cur[j]) &&
 			    cpu_rq(cpumask_first(doms_cur[j]))->rd->pd)
 				goto match3;
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v7 14/14] OPTIONAL: cpufreq: dt: Register an Energy Model
  2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
                   ` (12 preceding siblings ...)
  2018-09-12  9:13 ` [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil Quentin Perret
@ 2018-09-12  9:13 ` Quentin Perret
  13 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-12  9:13 UTC (permalink / raw)
  To: peterz, rjw, linux-kernel, linux-pm
  Cc: gregkh, mingo, dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino, quentin.perret

*******************************************************************
* This patch illustrates the usage of the newly introduced Energy *
* Model framework and isn't supposed to be merged as-is.          *
*******************************************************************

The Energy Model framework provides an API to register the active power
of CPUs. Call this API from the cpufreq-dt driver with an estimation
of the power as P = C * V^2 * f with C, V, and f respectively the
capacitance of the CPU and the voltage and frequency of the OPP.

The CPU capacitance is read from the "dynamic-power-coefficient" DT
binding (originally introduced for thermal/IPA), and the voltage and
frequency values from PM_OPP.
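
As a worked example of the estimate (hedged: the OPP values are invented,
only the arithmetic mirrors of_est_power() in the diff): with a
dynamic-power-coefficient of 100, an OPP at 1100 mV and 1500 MHz gives

	P = 100 * 1100 * 1100 * 1500 / 10^9 = 181 mW

the division by 10^9 matching the do_div() used below to bring the
intermediate value back to milliwatts.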

Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
---
 drivers/cpufreq/cpufreq-dt.c | 48 +++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index 0a9ebf00be46..15ac9754afa2 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -16,6 +16,7 @@
 #include <linux/cpu_cooling.h>
 #include <linux/cpufreq.h>
 #include <linux/cpumask.h>
+#include <linux/energy_model.h>
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/of.h>
@@ -149,8 +150,50 @@ static int resources_available(void)
 	return 0;
 }
 
+static int __maybe_unused of_est_power(unsigned long *mW, unsigned long *KHz,
+				       int cpu)
+{
+	unsigned long mV, Hz, MHz;
+	struct device *cpu_dev;
+	struct dev_pm_opp *opp;
+	struct device_node *np;
+	u32 cap;
+	u64 tmp;
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev)
+		return -ENODEV;
+
+	np = of_node_get(cpu_dev->of_node);
+	if (!np)
+		return -EINVAL;
+
+	if (of_property_read_u32(np, "dynamic-power-coefficient", &cap))
+		return -EINVAL;
+
+	Hz = *KHz * 1000;
+	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &Hz);
+	if (IS_ERR(opp))
+		return -EINVAL;
+
+	mV = dev_pm_opp_get_voltage(opp) / 1000;
+	dev_pm_opp_put(opp);
+	if (!mV)
+		return -EINVAL;
+
+	MHz = Hz / 1000000;
+	tmp = (u64)cap * mV * mV * MHz;
+	do_div(tmp, 1000000000);
+
+	*mW = (unsigned long)tmp;
+	*KHz = Hz / 1000;
+
+	return 0;
+}
+
 static int cpufreq_init(struct cpufreq_policy *policy)
 {
+	struct em_data_callback em_cb = EM_DATA_CB(of_est_power);
 	struct cpufreq_frequency_table *freq_table;
 	struct opp_table *opp_table = NULL;
 	struct private_data *priv;
@@ -159,7 +202,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	unsigned int transition_latency;
 	bool fallback = false;
 	const char *name;
-	int ret;
+	int ret, nr_opp;
 
 	cpu_dev = get_cpu_device(policy->cpu);
 	if (!cpu_dev) {
@@ -226,6 +269,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 		ret = -EPROBE_DEFER;
 		goto out_free_opp;
 	}
+	nr_opp = ret;
 
 	if (fallback) {
 		cpumask_setall(policy->cpus);
@@ -278,6 +322,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = transition_latency;
 	policy->dvfs_possible_from_any_cpu = true;
 
+	em_register_perf_domain(policy->cpus, nr_opp, &em_cb);
+
 	return 0;
 
 out_free_cpufreq_table:
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-12  9:12 ` [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling Quentin Perret
@ 2018-09-12 14:56     ` Vincent Guittot
  2018-09-18 21:33     ` Rafael J. Wysocki
  1 sibling, 0 replies; 57+ messages in thread
From: Vincent Guittot @ 2018-09-12 14:56 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Peter Zijlstra, Rafael J. Wysocki, linux-kernel,
	open list:THERMAL, gregkh, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Thara Gopinath, viresh kumar, Todd Kjos,
	Joel Fernandes, Cc: Steve Muckle, adharmap, Saravana Kannan,
	pkondeti, Juri Lelli, Eduardo Valentin, Srinivas Pandruvada,
	currojerez, Javi Merino

Hi Quentin,

On Wed, 12 Sep 2018 at 11:13, Quentin Perret <quentin.perret@arm.com> wrote:
>
> Schedutil requests frequency by aggregating utilization signals from
> the scheduler (CFS, RT, DL, IRQ) and applying and 25% margin on top of

quite a minor thing but s/and 25%/a 25%/

> them. Since Energy Aware Scheduling (EAS) needs to be able to predict
> the frequency requests, it needs to forecast the decisions made by the
> governor.
>
> In order to prepare the introduction of EAS, introduce
> schedutil_freq_util() to centralize the aforementioned signal
> aggregation and make it available to both schedutil and EAS. Since
> frequency selection and energy estimation still need to deal with RT and
> DL signals slightly differently, schedutil_freq_util() is called with a
> different 'type' parameter in those two contexts, and returns an
> aggregated utilization signal accordingly. While at it, introduce the
> map_util_freq() function which is designed to make schedutil's 25%
> margin usable easily for both sugov and EAS.
>
> As EAS will be able to predict schedutil's frequency requests more
> accurately than any other governor by design, it'd be sensible to make
> sure EAS cannot be used without schedutil. This will be done later, once
> EAS has actually been introduced.
>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Quentin Perret <quentin.perret@arm.com>

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-12  9:12 ` [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling Quentin Perret
@ 2018-09-18 21:33     ` Rafael J. Wysocki
  2018-09-18 21:33     ` Rafael J. Wysocki
  1 sibling, 0 replies; 57+ messages in thread
From: Rafael J. Wysocki @ 2018-09-18 21:33 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Peter Zijlstra, Rafael J. Wysocki, Linux Kernel Mailing List,
	Linux PM, Greg Kroah-Hartman, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Vincent Guittot, Thara Gopinath,
	Viresh Kumar, Todd Kjos, Joel Fernandes, Steve Muckle, adharmap,
	Saravana Kannan, Pavan Kondeti, Juri Lelli, Eduardo Valentin,
	Srinivas Pandruvada, currojerez, Javi Merino

On Wed, Sep 12, 2018 at 11:14 AM Quentin Perret <quentin.perret@arm.com> wrote:
>
> Schedutil requests frequency by aggregating utilization signals from
> the scheduler (CFS, RT, DL, IRQ) and applying and 25% margin on top of
> them. Since Energy Aware Scheduling (EAS) needs to be able to predict
> the frequency requests, it needs to forecast the decisions made by the
> governor.
>
> In order to prepare the introduction of EAS, introduce
> schedutil_freq_util() to centralize the aforementioned signal
> aggregation and make it available to both schedutil and EAS. Since
> frequency selection and energy estimation still need to deal with RT and
> DL signals slightly differently, schedutil_freq_util() is called with a
> different 'type' parameter in those two contexts, and returns an
> aggregated utilization signal accordingly. While at it, introduce the
> map_util_freq() function which is designed to make schedutil's 25%
> margin usable easily for both sugov and EAS.
>
> As EAS will be able to predict schedutil's frequency requests more
> accurately than any other governor by design, it'd be sensible to make
> sure EAS cannot be used without schedutil. This will be done later, once
> EAS has actually been introduced.
>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Quentin Perret <quentin.perret@arm.com>

While the overall idea is fine by me, I would do a couple of things
differently (see below).

> ---
>  include/linux/sched/cpufreq.h    |  6 +++
>  kernel/sched/cpufreq_schedutil.c | 89 +++++++++++++++++++++-----------
>  kernel/sched/sched.h             | 14 +++++
>  3 files changed, 80 insertions(+), 29 deletions(-)
>
> diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
> index 59667444669f..afa940cd50dc 100644
> --- a/include/linux/sched/cpufreq.h
> +++ b/include/linux/sched/cpufreq.h
> @@ -20,6 +20,12 @@ void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
>                         void (*func)(struct update_util_data *data, u64 time,
>                                     unsigned int flags));
>  void cpufreq_remove_update_util_hook(int cpu);
> +
> +static inline unsigned long map_util_freq(unsigned long util,
> +                                       unsigned long freq, unsigned long cap)
> +{
> +       return (freq + (freq >> 2)) * util / cap;
> +}
>  #endif /* CONFIG_CPU_FREQ */
>
>  #endif /* _LINUX_SCHED_CPUFREQ_H */
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 3fffad3bc8a8..8356cb0072a6 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -13,6 +13,7 @@
>
>  #include "sched.h"
>
> +#include <linux/sched/cpufreq.h>
>  #include <trace/events/power.h>
>
>  struct sugov_tunables {
> @@ -167,7 +168,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>         unsigned int freq = arch_scale_freq_invariant() ?
>                                 policy->cpuinfo.max_freq : policy->cur;
>
> -       freq = (freq + (freq >> 2)) * util / max;
> +       freq = map_util_freq(util, freq, max);
>
>         if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
>                 return sg_policy->next_freq;
> @@ -197,15 +198,15 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>   * based on the task model parameters and gives the minimal utilization
>   * required to meet deadlines.
>   */
> -static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
> +unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> +                                 enum schedutil_type type)

The new "type" argument should be documented.

Also, IMO using the special enum for it is quite confusing, because you
only ever check one value from it directly.  What would be wrong with
using a plain "bool" instead?

>  {
> -       struct rq *rq = cpu_rq(sg_cpu->cpu);
> +       struct rq *rq = cpu_rq(cpu);
>         unsigned long util, irq, max;
>
> -       sg_cpu->max = max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
> -       sg_cpu->bw_dl = cpu_bw_dl(rq);
> +       max = arch_scale_cpu_capacity(NULL, cpu);
>
> -       if (rt_rq_is_runnable(&rq->rt))
> +       if (type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt))
>                 return max;
>
>         /*
> @@ -223,20 +224,33 @@ static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
>          * utilization (PELT windows are synchronized) we can directly add them
>          * to obtain the CPU's actual utilization.
>          */
> -       util = cpu_util_cfs(rq);
> +       util = util_cfs;
>         util += cpu_util_rt(rq);
>
> -       /*
> -        * We do not make cpu_util_dl() a permanent part of this sum because we
> -        * want to use cpu_bw_dl() later on, but we need to check if the
> -        * CFS+RT+DL sum is saturated (ie. no idle time) such that we select
> -        * f_max when there is no idle time.
> -        *
> -        * NOTE: numerical errors or stop class might cause us to not quite hit
> -        * saturation when we should -- something for later.
> -        */
> -       if ((util + cpu_util_dl(rq)) >= max)
> -               return max;
> +       if (type == FREQUENCY_UTIL) {
> +               /*
> +                * For frequency selection we do not make cpu_util_dl() a
> +                * permanent part of this sum because we want to use
> +                * cpu_bw_dl() later on, but we need to check if the
> +                * CFS+RT+DL sum is saturated (ie. no idle time) such
> +                * that we select f_max when there is no idle time.
> +                *
> +                * NOTE: numerical errors or stop class might cause us
> +                * to not quite hit saturation when we should --
> +                * something for later.
> +                */
> +
> +               if ((util + cpu_util_dl(rq)) >= max)
> +                       return max;
> +       } else {
> +               /*
> +                * OTOH, for energy computation we need the estimated
> +                * running time, so include util_dl and ignore dl_bw.
> +                */
> +               util += cpu_util_dl(rq);
> +               if (util >= max)
> +                       return max;
> +       }
>
>         /*
>          * There is still idle time; further improve the number by using the
> @@ -250,17 +264,34 @@ static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
>         util = scale_irq_capacity(util, irq, max);
>         util += irq;
>
> -       /*
> -        * Bandwidth required by DEADLINE must always be granted while, for
> -        * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> -        * to gracefully reduce the frequency when no tasks show up for longer
> -        * periods of time.
> -        *
> -        * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> -        * bw_dl as requested freq. However, cpufreq is not yet ready for such
> -        * an interface. So, we only do the latter for now.
> -        */
> -       return min(max, util + sg_cpu->bw_dl);
> +       if (type == FREQUENCY_UTIL) {
> +               /*
> +                * Bandwidth required by DEADLINE must always be granted
> +                * while, for FAIR and RT, we use blocked utilization of
> +                * IDLE CPUs as a mechanism to gracefully reduce the
> +                * frequency when no tasks show up for longer periods of
> +                * time.
> +                *
> +                * Ideally we would like to set bw_dl as min/guaranteed
> +                * freq and util + bw_dl as requested freq. However,
> +                * cpufreq is not yet ready for such an interface. So,
> +                * we only do the latter for now.
> +                */
> +               util += cpu_bw_dl(rq);
> +       }
> +
> +       return min(max, util);
> +}
> +
> +static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
> +{
> +       struct rq *rq = cpu_rq(sg_cpu->cpu);
> +       unsigned long util = cpu_util_cfs(rq);
> +
> +       sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
> +       sg_cpu->bw_dl = cpu_bw_dl(rq);
> +
> +       return schedutil_freq_util(sg_cpu->cpu, util, FREQUENCY_UTIL);

If you add a "max" argument to schedutil_freq_util(), you can avoid
the second (and arguably redundant) evaluation of
arch_scale_cpu_capacity() in there.
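
A rough sketch of the signature change, illustrative only and reusing the
helpers quoted above:

unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
				  unsigned long max, enum schedutil_type type);

static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
{
	struct rq *rq = cpu_rq(sg_cpu->cpu);

	sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
	sg_cpu->bw_dl = cpu_bw_dl(rq);

	/* Reuse sg_cpu->max instead of evaluating the capacity again. */
	return schedutil_freq_util(sg_cpu->cpu, cpu_util_cfs(rq),
				   sg_cpu->max, FREQUENCY_UTIL);
}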

>  }
>
>  /**
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 481e6759441b..d59effb34786 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2172,7 +2172,15 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
>  # define arch_scale_freq_invariant()   false
>  #endif
>
> +enum schedutil_type {
> +       FREQUENCY_UTIL,
> +       ENERGY_UTIL,
> +};

As I said above, I would just use "bool" instead of this new enum (it
has two values too) or the new type needs to be documented.

> +
>  #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
> +unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> +                                 enum schedutil_type type);
> +
>  static inline unsigned long cpu_bw_dl(struct rq *rq)
>  {
>         return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> @@ -2199,6 +2207,12 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
>  {
>         return READ_ONCE(rq->avg_rt.util_avg);
>  }
> +#else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
> +static inline unsigned long schedutil_freq_util(int cpu, unsigned long util,
> +                                 enum schedutil_type type)
> +{
> +       return util;
> +}
>  #endif

And I would add a wrapper around schedutil_freq_util(), called say
schedutil_energy_util(), that would pass a specific value as the
"type".

>
>  #ifdef HAVE_SCHED_AVG_IRQ
> --

Cheers,
Rafael

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-18 21:33     ` Rafael J. Wysocki
@ 2018-09-27 12:17       ` Quentin Perret
  -1 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-27 12:17 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Peter Zijlstra, Rafael J. Wysocki, Linux Kernel Mailing List,
	Linux PM, Greg Kroah-Hartman, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Vincent Guittot, Thara Gopinath,
	Viresh Kumar, Todd Kjos, Joel Fernandes, Steve Muckle, adharmap,
	Saravana Kannan, Pavan Kondeti, Juri Lelli, Eduardo Valentin,
	Srinivas Pandruvada, currojerez, Javi Merino

Hi Rafael,

Very sorry for the late reply ...

On Tuesday 18 Sep 2018 at 23:33:22 (+0200), Rafael J. Wysocki wrote:
[...]
> The new "type" argument should be documented.
> 
> Also, IMO using the special enum for it is quite confusing, because you
> only ever check one value from it directly.  What would be wrong with
> using a plain "bool" instead?

So, this part of the code was originally proposed by Peter. I basically
took it from the following message (hence the Suggested-by) which was
fine by me:

https://lore.kernel.org/lkml/20180709120138.GQ2494@hirez.programming.kicks-ass.net/

Also, one of the things that has been mentioned during reviews was that
other clients (such as cpuidle, IIRC) could potentially be interested
in a 'global' cpu util value. And since those clients might have
different needs than EAS or sugov, they might need a new entry in the
enum.

So that's probably the main argument for the enum: it is easy to extend.

[...]
> > +static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
> > +{
> > +       struct rq *rq = cpu_rq(sg_cpu->cpu);
> > +       unsigned long util = cpu_util_cfs(rq);
> > +
> > +       sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
> > +       sg_cpu->bw_dl = cpu_bw_dl(rq);
> > +
> > +       return schedutil_freq_util(sg_cpu->cpu, util, FREQUENCY_UTIL);
> 
> If you add a "max" argument to schedutil_freq_util(), you can avoid
> the second (and arguably redundant) evaluation of
> arch_scale_cpu_capacity() in there.

OK

[...]
> > +enum schedutil_type {
> > +       FREQUENCY_UTIL,
> > +       ENERGY_UTIL,
> > +};
> 
> As I said above, I would just use "bool" instead of this new enum (it
> has two values too) or the new type needs to be documented.

As I said above, the enum has the advantage of being easier to extend.
So, if we care about that, I guess I'd rather add a doc for the new
type.

> > +
> >  #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
> > +unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> > +                                 enum schedutil_type type);
> > +
> >  static inline unsigned long cpu_bw_dl(struct rq *rq)
> >  {
> >         return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > @@ -2199,6 +2207,12 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
> >  {
> >         return READ_ONCE(rq->avg_rt.util_avg);
> >  }
> > +#else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
> > +static inline unsigned long schedutil_freq_util(int cpu, unsigned long util,
> > +                                 enum schedutil_type type)
> > +{
> > +       return util;
> > +}
> >  #endif
> 
> And I would add a wrapper around schedutil_freq_util(), called say
> schedutil_energy_util(), that would pass a specific value as the
> "type".

OK, that's fine by me.

Other than that, do you think these changes could be done later ? Or do
you see that as mandatory before the patches can be picked up ?

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-27 12:17       ` Quentin Perret
@ 2018-09-28  8:23         ` Rafael J. Wysocki
  -1 siblings, 0 replies; 57+ messages in thread
From: Rafael J. Wysocki @ 2018-09-28  8:23 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Rafael J. Wysocki, Peter Zijlstra, Linux Kernel Mailing List,
	Linux PM, Greg Kroah-Hartman, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Vincent Guittot, Thara Gopinath,
	Viresh Kumar, Todd Kjos, Joel Fernandes, Steve Muckle, adharmap,
	Saravana Kannan, Pavan Kondeti, Juri Lelli, Eduardo Valentin,
	Srinivas Pandruvada, currojerez, Javi Merino

On Thursday, September 27, 2018 2:17:49 PM CEST Quentin Perret wrote:
> Hi Rafael,
> 
> Very sorry for the late reply ...
> 
> On Tuesday 18 Sep 2018 at 23:33:22 (+0200), Rafael J. Wysocki wrote:
> [...]
> > The new "type" argument should be documented.
> > 
> > Also, IMO using the special enum for it is quite confusing, because you
> > only ever check one value from it directly.  What would be wrong with
> > using a plain "bool" instead?
> 
> So, this part of the code was originally proposed by Peter. I basically
> took it from the following message (hence the Suggested-by) which was
> fine by me:
> 
> https://lore.kernel.org/lkml/20180709120138.GQ2494@hirez.programming.kicks-ass.net/
> 
> Also, one of the things that has been mentioned during reviews was that
> other clients (such as cpuidle, IIRC) could potentially be interested
> in a 'global' cpu util value. And since those clients might have
> different needs than EAS or sugov, they might need a new entry in the
> enum.
> 
> So that's probably the main argument for the enum: it is easy to extend.

OK, so please document the enum type.

> [...]
> > > +static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
> > > +{
> > > +       struct rq *rq = cpu_rq(sg_cpu->cpu);
> > > +       unsigned long util = cpu_util_cfs(rq);
> > > +
> > > +       sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
> > > +       sg_cpu->bw_dl = cpu_bw_dl(rq);
> > > +
> > > +       return schedutil_freq_util(sg_cpu->cpu, util, FREQUENCY_UTIL);
> > 
> > If you add a "max" argument to schedutil_freq_util(), you can avoid
> > the second (and arguably redundant) evaluation of
> > arch_scale_cpu_capacity() in there.
> 
> OK
> 
> [...]
> > > +enum schedutil_type {
> > > +       FREQUENCY_UTIL,
> > > +       ENERGY_UTIL,
> > > +};
> > 
> > As I said above, I would just use "bool" instead of this new enum (it
> > has two values too) or the new type needs to be documented.
> 
> As I said above, the enum has the advantage of being easier to extend.
> So, if we care about that, I guess I'd rather add a doc for the new
> type.

Right, so please add a kerneldoc description here.
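
Something along these lines would do (the exact wording is of course up to
you):

/**
 * enum schedutil_type - Type of utilization aggregation being requested
 * @FREQUENCY_UTIL:	Utilization used to select a frequency
 * @ENERGY_UTIL:	Utilization used during energy calculation
 *
 * New entries can be added if other clients (cpuidle, for example) end up
 * needing a different aggregation than EAS or sugov.
 */
enum schedutil_type {
	FREQUENCY_UTIL,
	ENERGY_UTIL,
};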

> > > +
> > >  #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
> > > +unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> > > +                                 enum schedutil_type type);
> > > +
> > >  static inline unsigned long cpu_bw_dl(struct rq *rq)
> > >  {
> > >         return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > > @@ -2199,6 +2207,12 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
> > >  {
> > >         return READ_ONCE(rq->avg_rt.util_avg);
> > >  }
> > > +#else /* CONFIG_CPU_FREQ_GOV_SCHEDUTIL */
> > > +static inline unsigned long schedutil_freq_util(int cpu, unsigned long util,
> > > +                                 enum schedutil_type type)
> > > +{
> > > +       return util;
> > > +}
> > >  #endif
> > 
> > And I would add a wrapper around schedutil_freq_util(), called say
> > schedutil_energy_util(), that would pass a specific value as the
> > "type".
> 
> OK, that's fine by me.
> 
> Other than that, do you think these changes could be done later ? Or do
> you see that as mandatory before the patches can be picked up ?

Documenting things properly is absolutely required.

The other changes suggested by me are rather straightforward, so why would it
be a problem to make them right away if you agree to make them?

Thanks,
Rafael


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-28  8:23         ` Rafael J. Wysocki
@ 2018-09-28  8:35           ` Quentin Perret
  -1 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-09-28  8:35 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Peter Zijlstra, Linux Kernel Mailing List,
	Linux PM, Greg Kroah-Hartman, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Vincent Guittot, Thara Gopinath,
	Viresh Kumar, Todd Kjos, Joel Fernandes, Steve Muckle, adharmap,
	Saravana Kannan, Pavan Kondeti, Juri Lelli, Eduardo Valentin,
	Srinivas Pandruvada, currojerez, Javi Merino

Hi Rafael,

On Friday 28 Sep 2018 at 10:23:42 (+0200), Rafael J. Wysocki wrote:
[...]
> OK, so please document the enum type.

Will do.

[...]
> Documenting things properly is absolutely required.

Understood.

> The other changes suggested by me are rather straightforward, so why would it
> be a problem to make them right away if you agree to make them?

Right, if I need to spin a v8 for doc purpose anyway, I guess I can do
the other small fixes at the same time.

I'll wait a bit before sending the update to see if others have some
feedback. It's probably a bit late for 4.20 anyway ... Or is it not ?

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling
  2018-09-28  8:35           ` Quentin Perret
@ 2018-09-28  8:35             ` Rafael J. Wysocki
  -1 siblings, 0 replies; 57+ messages in thread
From: Rafael J. Wysocki @ 2018-09-28  8:35 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Rafael J. Wysocki, Peter Zijlstra, Linux Kernel Mailing List,
	Linux PM, Greg Kroah-Hartman, Ingo Molnar, Dietmar Eggemann,
	Morten Rasmussen, Chris Redpath, Patrick Bellasi,
	Valentin Schneider, Vincent Guittot, Thara Gopinath,
	Viresh Kumar, Todd Kjos, Joel Fernandes, Steve Muckle, adharmap,
	Saravana Kannan, Pavan Kondeti, Juri Lelli, Eduardo Valentin,
	Srinivas Pandruvada, currojerez, Javi Merino

On Friday, September 28, 2018 10:35:05 AM CEST Quentin Perret wrote:
> Hi Rafael,
> 
> On Friday 28 Sep 2018 at 10:23:42 (+0200), Rafael J. Wysocki wrote:
> [...]
> > OK, so please document the enum type.
> 
> Will do.
> 
> [...]
> > Documenting things properly is absolutely required.
> 
> Understood.
> 
> > The other changes suggested by me are rather straightforward, so why would it
> > be a problem to make them right away if you agree to make them?
> 
> Right, if I need to spin a v8 for doc purpose anyway, I guess I can do
> the other small fixes at the same time.
> 
> I'll wait a bit before sending the update to see if others have some
> feedback. It's probably a bit late for 4.20 anyway ... Or is it not ?

I think this is scheduler material anyway, even though it affects cpufreq
somewhat, so that depends on what Peter thinks really.

Thanks,
Rafael


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-09-12  9:12 ` [PATCH v7 03/14] PM: Introduce an Energy Model management framework Quentin Perret
@ 2018-10-02 12:25   ` Peter Zijlstra
  2018-10-02 12:54     ` Quentin Perret
  2018-10-02 12:30   ` Peter Zijlstra
  1 sibling, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 12:25 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:12:58AM +0100, Quentin Perret wrote:
> +/**
> + * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
> + * @pd		: performance domain for which energy has to be estimated
> + * @max_util	: highest utilization among CPUs of the domain
> + * @sum_util	: sum of the utilization of all CPUs in the domain
> + *
> + * Return: the sum of the energy consumed by the CPUs of the domain assuming
> + * a capacity state satisfying the max utilization of the domain.
> + */
> +static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
> +				unsigned long max_util, unsigned long sum_util)
> +{
> +	unsigned long freq, scale_cpu;
> +	struct em_cap_state *cs;
> +	int i, cpu;
> +
> +	/*
> +	 * In order to predict the capacity state, map the utilization of the
> +	 * most utilized CPU of the performance domain to a requested frequency,
> +	 * like schedutil.
> +	 */
> +	cpu = cpumask_first(to_cpumask(pd->cpus));
> +	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
> +	cs = &pd->table[pd->nr_cap_states - 1];
> +	freq = map_util_freq(max_util, cs->frequency, scale_cpu);
> +
> +	/*
> +	 * Find the lowest capacity state of the Energy Model above the
> +	 * requested frequency.
> +	 */
> +	for (i = 0; i < pd->nr_cap_states; i++) {
> +		cs = &pd->table[i];
> +		if (cs->frequency >= freq)
> +			break;
> +	}
> +
> +	/*
> +	 * The capacity of a CPU in the domain at that capacity state (cs)
> +	 * can be computed as:
> +	 *
> +	 *             cs->freq * scale_cpu
> +	 *   cs->cap = --------------------                          (1)
> +	 *                 cpu_max_freq
> +	 *
> +	 * So, the energy consumed by this CPU at that capacity state is:
> +	 *
> +	 *             cs->power * cpu_util
> +	 *   cpu_nrg = --------------------                          (2)
> +	 *                   cs->cap
> +	 *
> +	 * since 'cpu_util / cs->cap' represents its percentage of busy time.
> +	 *
> +	 *   NOTE: Although the result of this computation actually is in
> +	 *         units of power, it can be manipulated as an energy value
> +	 *         over a scheduling period, since it is assumed to be
> +	 *         constant during that interval.
> +	 *
> +	 * By injecting (1) in (2), 'cpu_nrg' can be re-expressed as a product
> +	 * of two terms:
> +	 *
> +	 *             cs->power * cpu_max_freq   cpu_util
> +	 *   cpu_nrg = ------------------------ * ---------          (3)
> +	 *                    cs->freq            scale_cpu
> +	 *
> +	 * The first term is static, and is stored in the em_cap_state struct
> +	 * as 'cs->cost'.
> +	 *
> +	 * Since all CPUs of the domain have the same micro-architecture, they
> +	 * share the same 'cs->cost', and the same CPU capacity. Hence, the
> +	 * total energy of the domain (which is the simple sum of the energy of
> +	 * all of its CPUs) can be factorized as:
> +	 *
> +	 *            cs->cost * \Sum cpu_util
> +	 *   pd_nrg = ------------------------                       (4)
> +	 *                  scale_cpu
> +	 */
> +	return cs->cost * sum_util / scale_cpu;
> +}

Should we explicitly mention that this ignores idle costs?
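
(With made-up numbers: say cs->power = 200 mW at cs->freq = 1 GHz on a CPU
whose cpu_max_freq is 2 GHz, then per (3) cs->cost = 200 * 2000 / 1000 = 400.
For sum_util = 512 and scale_cpu = 1024, (4) yields 400 * 512 / 1024 = 200,
i.e. the estimate only covers the busy fraction of the period; whatever the
CPUs burn while idle never shows up in it.)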

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-09-12  9:12 ` [PATCH v7 03/14] PM: Introduce an Energy Model management framework Quentin Perret
  2018-10-02 12:25   ` Peter Zijlstra
@ 2018-10-02 12:30   ` Peter Zijlstra
  2018-10-02 12:51     ` Quentin Perret
  1 sibling, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 12:30 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:12:58AM +0100, Quentin Perret wrote:
> +/**
> + * em_register_perf_domain() - Register the Energy Model of a performance domain
> + * @span	: Mask of CPUs in the performance domain
> + * @nr_states	: Number of capacity states to register
> + * @cb		: Callback functions providing the data of the Energy Model
> + *
> + * Create Energy Model tables for a performance domain using the callbacks
> + * defined in cb.
> + *
> + * If multiple clients register the same performance domain, all but the first
> + * registration will be ignored.
> + *
> + * Return 0 on success
> + */
> +int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
> +						struct em_data_callback *cb)
> +{
> +	unsigned long cap, prev_cap = 0;
> +	struct em_perf_domain *pd;
> +	int cpu, ret = 0;
> +
> +	if (!span || !nr_states || !cb)
> +		return -EINVAL;
> +
> +	/*
> +	 * Use a mutex to serialize the registration of performance domains and
> +	 * let the driver-defined callback functions sleep.
> +	 */
> +	mutex_lock(&em_pd_mutex);
> +
> +	for_each_cpu(cpu, span) {
> +		/* Make sure we don't register again an existing domain. */
> +		if (READ_ONCE(per_cpu(em_data, cpu))) {
> +			ret = -EEXIST;
> +			goto unlock;
> +		}
> +
> +		/*
> +		 * All CPUs of a domain must have the same micro-architecture
> +		 * since they all share the same table.
> +		 */
> +		cap = arch_scale_cpu_capacity(NULL, cpu);
> +		if (prev_cap && prev_cap != cap) {
> +			pr_err("CPUs of %*pbl must have the same capacity\n",
> +							cpumask_pr_args(span));
> +			ret = -EINVAL;
> +			goto unlock;
> +		}
> +		prev_cap = cap;
> +	}
> +
> +	/* Create the performance domain and add it to the Energy Model. */
> +	pd = em_create_pd(span, nr_states, cb);
> +	if (!pd) {
> +		ret = -EINVAL;
> +		goto unlock;
> +	}
> +
> +	for_each_cpu(cpu, span)
> +		WRITE_ONCE(per_cpu(em_data, cpu), pd);

It's not immediately obvious to me why this doesn't need to be
smp_store_release(). The moment you publish that pointer, it can be
read, right?

Even if you never again change the pointer value, you want to ensure the
content of pd is stable before pd itself is observable, right?
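
IOW, I would have expected something like this (sketch only, same per-CPU
variable as above):

	for_each_cpu(cpu, span)
		smp_store_release(per_cpu_ptr(&em_data, cpu), pd);

presumably paired with an smp_load_acquire() (or at least a READ_ONCE())
of that pointer on the lock-free reader side.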

> +
> +	pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span));
> +unlock:
> +	mutex_unlock(&em_pd_mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(em_register_perf_domain);
> -- 
> 2.18.0
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling
  2018-09-12  9:13 ` [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling Quentin Perret
@ 2018-10-02 12:34   ` Peter Zijlstra
  2018-10-02 13:08     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 12:34 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:00AM +0100, Quentin Perret wrote:
> In order to make sure Energy Aware Scheduling (EAS) doesn't hurt
> systems not using it, add a new sched_feat, called ENERGY_AWARE,
> guarding the access to EAS code paths.
> 
> Signed-off-by: Quentin Perret <quentin.perret@arm.com>
> ---
>  kernel/sched/features.h | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index 85ae8488039c..26ff6752e492 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -90,3 +90,10 @@ SCHED_FEAT(WA_BIAS, true)
>   * UtilEstimation. Use estimated CPU utilization.
>   */
>  SCHED_FEAT(UTIL_EST, true)
> +
> +/*
> + * Energy-Aware Scheduling. Whether or not tasks will be placed into an
> + * energy-aware fashion depends on this feature being enabled and on the
> + * root domain having an Energy Model attached.
> + */
> +SCHED_FEAT(ENERGY_AWARE, false)

So these are debug knobs.. I would expect there to be a regular
static_key that is controlled by the presence of EM data.

Remember, if you compile with SCHED_DEBUG=n, this becomes a compile time
constant: false, and the compiler gets to do lots of DCE.
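
Roughly this kind of thing (names made up, just to illustrate):

DEFINE_STATIC_KEY_FALSE(sched_energy_present);

static inline bool sched_energy_enabled(void)
{
	return static_branch_unlikely(&sched_energy_present);
}

with the topology code calling static_branch_enable()/_disable() on
&sched_energy_present when EM data is attached to / removed from the root
domain, so the fast paths stay patched out on systems without an EM.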

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available
  2018-09-12  9:13 ` [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available Quentin Perret
@ 2018-10-02 12:36   ` Peter Zijlstra
  2018-10-02 13:16     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 12:36 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:01AM +0100, Quentin Perret wrote:
> +struct perf_domain {
> +	struct em_perf_domain *obj;
> +	struct perf_domain *next;
> +	struct rcu_head rcu;
> +};

Maybe s/obj/em_pd/ or something like that? @obj is so very opaque.
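
That is, taking the struct from the quoted hunk:

struct perf_domain {
	struct em_perf_domain *em_pd;	/* instead of 'obj' */
	struct perf_domain *next;
	struct rcu_head rcu;
};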

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 12:30   ` Peter Zijlstra
@ 2018-10-02 12:51     ` Quentin Perret
  2018-10-02 13:48       ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 12:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 14:30:31 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:12:58AM +0100, Quentin Perret wrote:
> > +/**
> > + * em_register_perf_domain() - Register the Energy Model of a performance domain
> > + * @span	: Mask of CPUs in the performance domain
> > + * @nr_states	: Number of capacity states to register
> > + * @cb		: Callback functions providing the data of the Energy Model
> > + *
> > + * Create Energy Model tables for a performance domain using the callbacks
> > + * defined in cb.
> > + *
> > + * If multiple clients register the same performance domain, all but the first
> > + * registration will be ignored.
> > + *
> > + * Return 0 on success
> > + */
> > +int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
> > +						struct em_data_callback *cb)
> > +{
> > +	unsigned long cap, prev_cap = 0;
> > +	struct em_perf_domain *pd;
> > +	int cpu, ret = 0;
> > +
> > +	if (!span || !nr_states || !cb)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Use a mutex to serialize the registration of performance domains and
> > +	 * let the driver-defined callback functions sleep.
> > +	 */
> > +	mutex_lock(&em_pd_mutex);
> > +
> > +	for_each_cpu(cpu, span) {
> > +		/* Make sure we don't register again an existing domain. */
> > +		if (READ_ONCE(per_cpu(em_data, cpu))) {
> > +			ret = -EEXIST;
> > +			goto unlock;
> > +		}
> > +
> > +		/*
> > +		 * All CPUs of a domain must have the same micro-architecture
> > +		 * since they all share the same table.
> > +		 */
> > +		cap = arch_scale_cpu_capacity(NULL, cpu);
> > +		if (prev_cap && prev_cap != cap) {
> > +			pr_err("CPUs of %*pbl must have the same capacity\n",
> > +							cpumask_pr_args(span));
> > +			ret = -EINVAL;
> > +			goto unlock;
> > +		}
> > +		prev_cap = cap;
> > +	}
> > +
> > +	/* Create the performance domain and add it to the Energy Model. */
> > +	pd = em_create_pd(span, nr_states, cb);
> > +	if (!pd) {
> > +		ret = -EINVAL;
> > +		goto unlock;
> > +	}
> > +
> > +	for_each_cpu(cpu, span)
> > +		WRITE_ONCE(per_cpu(em_data, cpu), pd);
> 
> It's not immediately obvious to me why this doesn't need to be
> smp_store_release(). The moment you publish that pointer, it can be
> read, right?
> 
> Even if you never again change the pointer value, you want to ensure the
> content of pd is stable before pd itself is observable, right?

So, I figured the mutex already gives me some of that. I mean, AFAIU it
should guarantee that concurrent callers to em_register_perf_domain are
serialized correctly.

For example, if I have two concurrent calls (let's name them A and B) to
em_register_perf_domain(), and say A takes the mutex first, then B
should be guaranteed to always see the totality of the update that A
made to the per_cpu table. Is that right ?

If the above is correct, then it's pretty much all I can do, I think ...
In the case of concurrent readers and writers to em_data, the
smp_store_release() call still doesn't give me the guarantee that the
per_cpu table is stable since em_cpu_get() is lock-free ...

If I want to be sure the per_cpu thing is stable from em_cpu_get() then
I can add a mutex_lock/unlock there too, but even then I won't need the
smp_store_release(), I think. Or maybe I got confused again ?

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 12:25   ` Peter Zijlstra
@ 2018-10-02 12:54     ` Quentin Perret
  2018-10-02 13:50       ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 12:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 14:25:35 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:12:58AM +0100, Quentin Perret wrote:
> > +/**
> > + * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain
> > + * @pd		: performance domain for which energy has to be estimated
> > + * @max_util	: highest utilization among CPUs of the domain
> > + * @sum_util	: sum of the utilization of all CPUs in the domain
> > + *
> > + * Return: the sum of the energy consumed by the CPUs of the domain assuming
> > + * a capacity state satisfying the max utilization of the domain.
> > + */
> > +static inline unsigned long em_pd_energy(struct em_perf_domain *pd,
> > +				unsigned long max_util, unsigned long sum_util)
> > +{
> > +	unsigned long freq, scale_cpu;
> > +	struct em_cap_state *cs;
> > +	int i, cpu;
> > +
> > +	/*
> > +	 * In order to predict the capacity state, map the utilization of the
> > +	 * most utilized CPU of the performance domain to a requested frequency,
> > +	 * like schedutil.
> > +	 */
> > +	cpu = cpumask_first(to_cpumask(pd->cpus));
> > +	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
> > +	cs = &pd->table[pd->nr_cap_states - 1];
> > +	freq = map_util_freq(max_util, cs->frequency, scale_cpu);
> > +
> > +	/*
> > +	 * Find the lowest capacity state of the Energy Model above the
> > +	 * requested frequency.
> > +	 */
> > +	for (i = 0; i < pd->nr_cap_states; i++) {
> > +		cs = &pd->table[i];
> > +		if (cs->frequency >= freq)
> > +			break;
> > +	}
> > +
> > +	/*
> > +	 * The capacity of a CPU in the domain at that capacity state (cs)
> > +	 * can be computed as:
> > +	 *
> > +	 *             cs->freq * scale_cpu
> > +	 *   cs->cap = --------------------                          (1)
> > +	 *                 cpu_max_freq
> > +	 *
> > +	 * So, the energy consumed by this CPU at that capacity state is:
> > +	 *
> > +	 *             cs->power * cpu_util
> > +	 *   cpu_nrg = --------------------                          (2)
> > +	 *                   cs->cap
> > +	 *
> > +	 * since 'cpu_util / cs->cap' represents its percentage of busy time.
> > +	 *
> > +	 *   NOTE: Although the result of this computation actually is in
> > +	 *         units of power, it can be manipulated as an energy value
> > +	 *         over a scheduling period, since it is assumed to be
> > +	 *         constant during that interval.
> > +	 *
> > +	 * By injecting (1) in (2), 'cpu_nrg' can be re-expressed as a product
> > +	 * of two terms:
> > +	 *
> > +	 *             cs->power * cpu_max_freq   cpu_util
> > +	 *   cpu_nrg = ------------------------ * ---------          (3)
> > +	 *                    cs->freq            scale_cpu
> > +	 *
> > +	 * The first term is static, and is stored in the em_cap_state struct
> > +	 * as 'cs->cost'.
> > +	 *
> > +	 * Since all CPUs of the domain have the same micro-architecture, they
> > +	 * share the same 'cs->cost', and the same CPU capacity. Hence, the
> > +	 * total energy of the domain (which is the simple sum of the energy of
> > +	 * all of its CPUs) can be factorized as:
> > +	 *
> > +	 *            cs->cost * \Sum cpu_util
> > +	 *   pd_nrg = ------------------------                       (4)
> > +	 *                  scale_cpu
> > +	 */
> > +	return cs->cost * sum_util / scale_cpu;
> > +}
> 
> Should we explicitly mention that this ignores idle costs?

More doc shouldn't hurt so I can add a little something if you feel it's
needed.
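
Something along these lines, maybe (just a sketch of one possible
wording, to make the assumption explicit in the doc block):

 *   NOTE: the energy consumed while the CPUs are idle (e.g. idle state
 *         power) is expressly ignored by this estimation.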

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling
  2018-10-02 12:34   ` Peter Zijlstra
@ 2018-10-02 13:08     ` Quentin Perret
  2018-10-03 16:24       ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 13:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 14:34:16 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:00AM +0100, Quentin Perret wrote:
> > In order to make sure Energy Aware Scheduling (EAS) doesn't hurt
> > systems not using it, add a new sched_feat, called ENERGY_AWARE,
> > guarding the access to EAS code paths.
> > 
> > Signed-off-by: Quentin Perret <quentin.perret@arm.com>
> > ---
> >  kernel/sched/features.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> > index 85ae8488039c..26ff6752e492 100644
> > --- a/kernel/sched/features.h
> > +++ b/kernel/sched/features.h
> > @@ -90,3 +90,10 @@ SCHED_FEAT(WA_BIAS, true)
> >   * UtilEstimation. Use estimated CPU utilization.
> >   */
> >  SCHED_FEAT(UTIL_EST, true)
> > +
> > +/*
> > + * Energy-Aware Scheduling. Whether or not tasks will be placed into an
> > + * energy-aware fashion depends on this feature being enabled and on the
> > + * root domain having an Energy Model attached.
> > + */
> > +SCHED_FEAT(ENERGY_AWARE, false)
> 
> So these are debug knobs.. I would expect there to be a regular
> static_key that is controlled by the presence of EM data.

Right, the issue I'm trying to solve with this is basically that _some_
users will want to have an EM for the thermal stuff, but they will want
EAS disabled. Some people (not very many that's true, but still) use
big.little and don't care much about energy ...

So, there is a need for an EAS knob on the scheduler side. I don't mind
it being something else than a sched_feat, but I couldn't see a better
option. Another Kconfig (CONFIG_SCHED_ENERGY) ? A sysctl ?

FWIW, the Android kernel has had an ENERGY_AWARE sched_feat since the
very beginning of EAS.

> Remember, if you compile with SCHED_DEBUG=n, this becomes a compile time
> constant: false, and the compiler gets to do lots of DCE.

Indeed, and what happens in the mobile market sometimes is that people
play with those options as needed and then they put patches in their
device kernels to change the defaults before compiling with
!SCHED_DEBUG.

I'd say it's not the worst kind of device-specific out-of-tree change
to Linux. That's debatable, however.

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available
  2018-10-02 12:36   ` Peter Zijlstra
@ 2018-10-02 13:16     ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 13:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 14:36:17 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:01AM +0100, Quentin Perret wrote:
> > +struct perf_domain {
> > +	struct em_perf_domain *obj;
> > +	struct perf_domain *next;
> > +	struct rcu_head rcu;
> > +};
> 
> Maybe s/obj/em_pd/ or something like that? @obj is so very opaque.

I wanted to avoid calling it just 'pd' since that could be confusing. So
anything different than that is fine by me. I don't have a better
suggestion than 'em_pd', so I'd say let's stick with that for now.
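
i.e. something like this (sketch):

struct perf_domain {
	struct em_perf_domain *em_pd;
	struct perf_domain *next;
	struct rcu_head rcu;
};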

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 12:51     ` Quentin Perret
@ 2018-10-02 13:48       ` Peter Zijlstra
  2018-10-02 14:05         ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 13:48 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tue, Oct 02, 2018 at 01:51:17PM +0100, Quentin Perret wrote:
> On Tuesday 02 Oct 2018 at 14:30:31 (+0200), Peter Zijlstra wrote:
> > On Wed, Sep 12, 2018 at 10:12:58AM +0100, Quentin Perret wrote:
> > > +/**
> > > + * em_register_perf_domain() - Register the Energy Model of a performance domain
> > > + * @span	: Mask of CPUs in the performance domain
> > > + * @nr_states	: Number of capacity states to register
> > > + * @cb		: Callback functions providing the data of the Energy Model
> > > + *
> > > + * Create Energy Model tables for a performance domain using the callbacks
> > > + * defined in cb.
> > > + *
> > > + * If multiple clients register the same performance domain, all but the first
> > > + * registration will be ignored.
> > > + *
> > > + * Return 0 on success
> > > + */
> > > +int em_register_perf_domain(cpumask_t *span, unsigned int nr_states,
> > > +						struct em_data_callback *cb)
> > > +{
> > > +	unsigned long cap, prev_cap = 0;
> > > +	struct em_perf_domain *pd;
> > > +	int cpu, ret = 0;
> > > +
> > > +	if (!span || !nr_states || !cb)
> > > +		return -EINVAL;
> > > +
> > > +	/*
> > > +	 * Use a mutex to serialize the registration of performance domains and
> > > +	 * let the driver-defined callback functions sleep.
> > > +	 */
> > > +	mutex_lock(&em_pd_mutex);
> > > +
> > > +	for_each_cpu(cpu, span) {
> > > +		/* Make sure we don't register again an existing domain. */
> > > +		if (READ_ONCE(per_cpu(em_data, cpu))) {
> > > +			ret = -EEXIST;
> > > +			goto unlock;
> > > +		}
> > > +
> > > +		/*
> > > +		 * All CPUs of a domain must have the same micro-architecture
> > > +		 * since they all share the same table.
> > > +		 */
> > > +		cap = arch_scale_cpu_capacity(NULL, cpu);
> > > +		if (prev_cap && prev_cap != cap) {
> > > +			pr_err("CPUs of %*pbl must have the same capacity\n",
> > > +							cpumask_pr_args(span));
> > > +			ret = -EINVAL;
> > > +			goto unlock;
> > > +		}
> > > +		prev_cap = cap;
> > > +	}
> > > +
> > > +	/* Create the performance domain and add it to the Energy Model. */
> > > +	pd = em_create_pd(span, nr_states, cb);
> > > +	if (!pd) {
> > > +		ret = -EINVAL;
> > > +		goto unlock;
> > > +	}
> > > +
> > > +	for_each_cpu(cpu, span)
> > > +		WRITE_ONCE(per_cpu(em_data, cpu), pd);
> > 
> > It's not immediately obvious to me why this doesn't need to be
> > smp_store_release(). The moment you publish that pointer, it can be
> > read, right?
> > 
> > Even if you never again change the pointer value, you want to ensure the
> > content of pd is stable before pd itself is observable, right?
> 
> So, I figured the mutex already gives me some of that. I mean, AFAIU it
> should guarantee that concurrent callers to em_register_perf_domain are
> serialized correctly.

+/**
+ * em_cpu_get() - Return the performance domain for a CPU
+ * @cpu : CPU to find the performance domain for
+ *
+ * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
+ * exist.
+ */
+struct em_perf_domain *em_cpu_get(int cpu)
+{
+       return READ_ONCE(per_cpu(em_data, cpu));
+}
+EXPORT_SYMBOL_GPL(em_cpu_get);

But your read side doesn't take, nor is required to take, em_pd_mutex.

At that point, the mutex_unlock() doesn't guarantee anything.

A CPU observing the em_data store, doesn't need to observe the store
that filled the data structure it points to.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 12:54     ` Quentin Perret
@ 2018-10-02 13:50       ` Peter Zijlstra
  0 siblings, 0 replies; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 13:50 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tue, Oct 02, 2018 at 01:54:17PM +0100, Quentin Perret wrote:
> On Tuesday 02 Oct 2018 at 14:25:35 (+0200), Peter Zijlstra wrote:

> > Should we explicitly mention that this ignores idle costs?
> 
> More doc shouldn't hurt so I can add a little something if you feel it's
> needed.

Yes please, every time I read that thing I wonder about idle energy.
Then I remember we talked about this and it is expressly ignored.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 13:48       ` Peter Zijlstra
@ 2018-10-02 14:05         ` Quentin Perret
  2018-10-02 14:29           ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 14:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 15:48:57 (+0200), Peter Zijlstra wrote:
> +/**
> + * em_cpu_get() - Return the performance domain for a CPU
> + * @cpu : CPU to find the performance domain for
> + *
> + * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
> + * exist.
> + */
> +struct em_perf_domain *em_cpu_get(int cpu)
> +{
> +       return READ_ONCE(per_cpu(em_data, cpu));
> +}
> +EXPORT_SYMBOL_GPL(em_cpu_get);
> 
> But your read side doesn't take, nor is required to take, em_pd_mutex.
> 
> At that point, the mutex_unlock() doesn't guarantee anything.
> 
> A CPU observing the em_data store, doesn't need to observe the store
> that filled the data structure it points to.

Right but even if I add the smp_store_release(), I can still have a
CPU observing em_data while another is in the process of updating it.
So, if smp_store_release() doesn't guarantee that readers will see a
complete update, do I actually get something interesting from it ?
(That's not a rhetorical question, I'm actually wondering :-)

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 14:05         ` Quentin Perret
@ 2018-10-02 14:29           ` Peter Zijlstra
  2018-10-02 14:40             ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-02 14:29 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tue, Oct 02, 2018 at 03:05:23PM +0100, Quentin Perret wrote:
> On Tuesday 02 Oct 2018 at 15:48:57 (+0200), Peter Zijlstra wrote:
> > +/**
> > + * em_cpu_get() - Return the performance domain for a CPU
> > + * @cpu : CPU to find the performance domain for
> > + *
> > + * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
> > + * exist.
> > + */
> > +struct em_perf_domain *em_cpu_get(int cpu)
> > +{
> > +       return READ_ONCE(per_cpu(em_data, cpu));
> > +}
> > +EXPORT_SYMBOL_GPL(em_cpu_get);
> > 
> > But your read side doesn't take, nor is required to take, em_pd_mutex.
> > 
> > At that point, the mutex_unlock() doesn't guarantee anything.
> > 
> > A CPU observing the em_data store, doesn't need to observe the store
> > that filled the data structure it points to.
> 
> Right but even if I add the smp_store_release(), I can still have a
> CPU observing em_data while another is in the process of updating it.
> So, if smp_store_release() doesn't guarantee that readers will see a
> complete update, do I actually get something interesting from it ?
> (That's not a rhetorical question, I'm actually wondering :-)

I thought the update would fail if em_data was already set.

That is, you can only set this thing up _once_ and then you'll have to
forever live with it.

Or did I read that wrong?

If you want to allow updates, you'll have to do the whole RCU thing, at
which point you'll need rcu_assign_pointer(), which again is exactly
smp_store_release() :-)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 14:29           ` Peter Zijlstra
@ 2018-10-02 14:40             ` Quentin Perret
  2018-10-02 19:12               ` Andrea Parri
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-02 14:40 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tuesday 02 Oct 2018 at 16:29:24 (+0200), Peter Zijlstra wrote:
> On Tue, Oct 02, 2018 at 03:05:23PM +0100, Quentin Perret wrote:
> > On Tuesday 02 Oct 2018 at 15:48:57 (+0200), Peter Zijlstra wrote:
> > > +/**
> > > + * em_cpu_get() - Return the performance domain for a CPU
> > > + * @cpu : CPU to find the performance domain for
> > > + *
> > > + * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
> > > + * exist.
> > > + */
> > > +struct em_perf_domain *em_cpu_get(int cpu)
> > > +{
> > > +       return READ_ONCE(per_cpu(em_data, cpu));
> > > +}
> > > +EXPORT_SYMBOL_GPL(em_cpu_get);
> > > 
> > > But your read side doesn't take, nor is required to take, em_pd_mutex.
> > > 
> > > At that point, the mutex_unlock() doesn't guarantee anything.
> > > 
> > > A CPU observing the em_data store, doesn't need to observe the store
> > > that filled the data structure it points to.
> > 
> > Right but even if I add the smp_store_release(), I can still have a
> > CPU observing em_data while another is in the process of updating it.
> > So, if smp_store_release() doesn't guarantee that readers will see a
> > complete update, do I actually get something interesting from it ?
> > (That's not a rhetorical question, I'm actually wondering :-)
> 
> I thought the update would fail if em_data was already set.
> 
> That is, you can only set this thing up _once_ and then you'll have to
> forever live with it.
> 
> Or did I read that wrong?

No no, that's correct. em_data is populated once and kept as-is
forever.

What I was trying to say is, when em_data is being populated for the
first time, nothing prevents a reader from using em_cpu_get()
concurrently. And in this case, it doesn't matter if you use
smp_store_release() or not, the reader might see the table half-updated.

So, basically, smp_store_release() doesn't guarantee that readers won't
see a half-baked em_data. That's the point I'm trying to make at least :-)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 14:40             ` Quentin Perret
@ 2018-10-02 19:12               ` Andrea Parri
  2018-10-03  7:48                 ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Andrea Parri @ 2018-10-02 19:12 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Peter Zijlstra, rjw, linux-kernel, linux-pm, gregkh, mingo,
	dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino

On Tue, Oct 02, 2018 at 03:40:28PM +0100, Quentin Perret wrote:
> On Tuesday 02 Oct 2018 at 16:29:24 (+0200), Peter Zijlstra wrote:
> > On Tue, Oct 02, 2018 at 03:05:23PM +0100, Quentin Perret wrote:
> > > On Tuesday 02 Oct 2018 at 15:48:57 (+0200), Peter Zijlstra wrote:
> > > > +/**
> > > > + * em_cpu_get() - Return the performance domain for a CPU
> > > > + * @cpu : CPU to find the performance domain for
> > > > + *
> > > > + * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't
> > > > + * exist.
> > > > + */
> > > > +struct em_perf_domain *em_cpu_get(int cpu)
> > > > +{
> > > > +       return READ_ONCE(per_cpu(em_data, cpu));
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(em_cpu_get);
> > > > 
> > > > But your read side doesn't take, nor is required to take, em_pd_mutex.
> > > > 
> > > > At that point, the mutex_unlock() doesn't guarantee anything.
> > > > 
> > > > A CPU observing the em_data store, doesn't need to observe the store
> > > > that filled the data structure it points to.
> > > 
> > > Right but even if I add the smp_store_release(), I can still have a
> > > CPU observing em_data while another is in the process of updating it.
> > > So, if smp_store_release() doesn't guarantee that readers will see a
> > > complete update, do I actually get something interesting from it ?
> > > (That's not a rhetorical question, I'm actually wondering :-)
> > 
> > I thought the update would fail if em_data was already set.
> > 
> > That is, you can only set this thing up _once_ and then you'll have to
> > forever live with it.
> > 
> > Or did I read that wrong?
> 
> No no, that's correct. em_data is populated once and kept as-is
> forever.
> 
> What I was trying to say is, when em_data is being populated for the
> first time, nothing prevents a reader from using em_cpu_get()
> concurrently. And in this case, it doesn't matter if you use
> smp_store_release() or not, the reader might see the table half-updated.
> 
> So, basically, smp_store_release() doesn't guarantee that readers won't
> see a half-baked em_data. That's the point I'm trying to make at least :-)

An example might help clarify this: here is a scenario I can _imagine_,
based on your description.

CPU0 (em_register_perf_domain())	CPU1 (reader)

[...]					my_pd = READ_ONCE(per_cpu(em_data, 1)); /* em_cpu_get() */
pd->table = table			if (my_pd)
WRITE_ONCE(per_cpu(em_data, 1), pd);		my_table = my_pd->table; /* process, dereference, ... my_table */

In this scenario, we'd like CPU1 to see CPU0's store to ->table (as well
as the stores to table[]) _if CPU1 sees CPU0's store to em_data (that is,
if my_pd != NULL).

This guarantee does not hold with the WRITE_ONCE(), because CPU0 could
propagate the store to ->table and the store to em_data out-of-order.
The smp_store_release(), together with the address dependency headed by
the READ_ONCE(), provides this guarantee (and more...).

(Enclosing the reader into an em_pd_mutex critical section would also
provide this guarantee, but I can imagine a few arguments for not using
a mutex... ;-) ).

The question, I guess, is whether you want such a guarantee.

  Andrea

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 03/14] PM: Introduce an Energy Model management framework
  2018-10-02 19:12               ` Andrea Parri
@ 2018-10-03  7:48                 ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-03  7:48 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Peter Zijlstra, rjw, linux-kernel, linux-pm, gregkh, mingo,
	dietmar.eggemann, morten.rasmussen, chris.redpath,
	patrick.bellasi, valentin.schneider, vincent.guittot,
	thara.gopinath, viresh.kumar, tkjos, joel, smuckle, adharmap,
	skannan, pkondeti, juri.lelli, edubezval, srinivas.pandruvada,
	currojerez, javi.merino

Hi Andrea,

On Tuesday 02 Oct 2018 at 21:12:17 (+0200), Andrea Parri wrote:
> An example might help clarify this: here is a scenario I can _imagine_,
> based on your description.
> 
> CPU0 (em_register_perf_domain())	CPU1 (reader)
> 
> [...]					my_pd = READ_ONCE(per_cpu(em_data, 1)); /* em_cpu_get() */
> pd->table = table			if (my_pd)
> WRITE_ONCE(per_cpu(em_data, 1), pd);		my_table = my_pd->table; /* process, dereference, ... my_table */
> 
> In this scenario, we'd like CPU1 to see CPU0's store to ->table (as well
> as the stores to table[]) _if CPU1 sees CPU0's store to em_data (that is,
> if my_pd != NULL).
> 
> This guarantee does not hold with the WRITE_ONCE(), because CPU0 could
> propagate the store to ->table and the store to em_data out-of-order.

Ah, this is a very good point ... It's fine if a reader sees em_data
halfway through being updated, but it is NOT fine at all if a reader
gets a pointer to a half-baked em_perf_domain.

So I agree, there is a problem here.

(I also realize now that I totally misunderstood Peter's messages before
which were basically pointing out this issue :/ ...)

> The smp_store_release(), together with the address dependency headed by
> the READ_ONCE(), provides this guarantee (and more...).
> 
> (Enclosing the reader into an em_pd_mutex critical section would also
> provide this guarantee, but I can imagine a few arguments for not using
> a mutex... ;-) ).

Right, using the mutex in em_cpu_get() could work too, although we
probably don't want to do that. We might very well end up using it in an
RCU critical section for example (although that's not the case with this
series, I'm just looking forward a bit).

So using smp_store_release() is the best choice here. I'll fix that up
for v8.
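
For reference, the fix should just be a matter of turning the publishing
store into a release (rough sketch, not the actual v8 patch):

	for_each_cpu(cpu, span)
		smp_store_release(&per_cpu(em_data, cpu), pd);

so that the stores initializing *pd (and its table) are ordered before
the pointer becomes visible to the lock-free readers in em_cpu_get().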

Thank you very much,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling
  2018-10-02 13:08     ` Quentin Perret
@ 2018-10-03 16:24       ` Peter Zijlstra
  2018-10-04  9:57         ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-03 16:24 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Tue, Oct 02, 2018 at 02:08:21PM +0100, Quentin Perret wrote:
> On Tuesday 02 Oct 2018 at 14:34:16 (+0200), Peter Zijlstra wrote:

> > So these are debug knobs.. I would expect there to be a regular
> > static_key that is controlled by the presence of EM data.
> 
> Right, the issue I'm trying to solve with this is basically that _some_
> users will want to have an EM for the thermal stuff, but they will want
> EAS disabled. Some people (not very many that's true, but still) use
> big.little and don't care much about energy ...
> 
> So, there is a need for an EAS knob on the scheduler side. I don't mind
> it being something else than a sched_feat, but I couldn't see a better
> option. Another Kconfig (CONFIG_SCHED_ENERGY) ? A sysctl ?

Yeah, sysctl, see for example: sysctl.kernel.numa_balancing and the
sched_numa_balancing static_key that goes with it.

I would default enable EAS if the EM is there and valid, but allow
people to disable it.
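
FWIW, a rough sketch of what I have in mind, modeled on the
numa_balancing knob (names and the exact rebuild hook are hypothetical):

unsigned int sysctl_sched_energy_aware = 1;
DEFINE_STATIC_KEY_FALSE(sched_energy_present);

int sched_energy_aware_handler(struct ctl_table *table, int write,
			       void __user *buffer, size_t *lenp, loff_t *ppos)
{
	int ret;

	if (write && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	if (!ret && write) {
		/*
		 * Re-evaluate the EM and update sched_energy_present,
		 * e.g. by rebuilding the sched domains (hypothetical hook).
		 */
		rebuild_sched_domains();
	}

	return ret;
}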


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms
  2018-09-12  9:13 ` [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms Quentin Perret
@ 2018-10-03 16:27   ` Peter Zijlstra
  2018-10-04  9:10     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-03 16:27 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:03AM +0100, Quentin Perret wrote:
> @@ -288,6 +321,21 @@ static void build_perf_domains(const struct cpumask *cpu_map)
>  			goto free;
>  		tmp->next = pd;
>  		pd = tmp;
> +
> +		/*
> +		 * Count performance domains and capacity states for the
> +		 * complexity check.
> +		 */
> +		nr_pd++;
> +		nr_cs += em_pd_nr_cap_states(pd->obj);
> +	}
> +
> +	/* Bail out if the Energy Model complexity is too high. */
> +	if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) {
> +		if (sched_debug())
> +			pr_info("rd %*pbl: EM complexity is too high\n ",
> +						cpumask_pr_args(cpu_map));
> +		goto free;
>  	}

I would make than an unconditional WARN, we do not really expect that to
trigger, but then it does, we really don't want to hide it.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function
  2018-09-12  9:13 ` [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function Quentin Perret
@ 2018-10-04  8:34   ` Peter Zijlstra
  2018-10-04  9:02     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04  8:34 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:06AM +0100, Quentin Perret wrote:
> +static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
> +{
> +	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
> +	unsigned long util_est, util = READ_ONCE(cfs_rq->avg.util_avg);
> +
> +	/*
> +	 * If @p migrates from @cpu to another, remove its contribution. Or,
> +	 * if @p migrates from another CPU to @cpu, add its contribution. In
> +	 * the other cases, @cpu is not impacted by the migration, so the
> +	 * util_avg should already be correct.
> +	 */
> +	if (task_cpu(p) == cpu && dst_cpu != cpu)
> +		util = max_t(long, util - task_util(p), 0);

That's not quite right; what you want to check for is underflow, but the
above also results in 0 if util - task_util() > LONG_MAX without an
underflow.

You could write: sub_positive(&util, task_util(p));

> +	else if (task_cpu(p) != cpu && dst_cpu == cpu)
> +		util += task_util(p);
> +
> +	if (sched_feat(UTIL_EST)) {
> +		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> +
> +		/*
> +		 * During wake-up, the task isn't enqueued yet and doesn't
> +		 * appear in the cfs_rq->avg.util_est.enqueued of any rq,
> +		 * so just add it (if needed) to "simulate" what will be
> +		 * cpu_util() after the task has been enqueued.
> +		 */
> +		if (dst_cpu == cpu)
> +			util_est += _task_util_est(p);
> +
> +		util = max(util, util_est);
> +	}
> +
> +	return min_t(unsigned long, util, capacity_orig_of(cpu));

AFAICT both @util and capacity_orig_of() have 'unsigned long' as type.

> +}
> +
> +/*
> + * compute_energy(): Estimates the energy that would be consumed if @p was
> + * migrated to @dst_cpu. compute_energy() predicts what will be the utilization
> > + * landscape of the CPUs after the task migration, and uses the Energy Model
> + * to compute what would be the energy if we decided to actually migrate that
> + * task.
> + */
> +static long compute_energy(struct task_struct *p, int dst_cpu,
> +							struct perf_domain *pd)

static long
compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)

> +{
> +	long util, max_util, sum_util, energy = 0;
> +	int cpu;
> +
> +	while (pd) {
> +		max_util = sum_util = 0;
> +		/*
> +		 * The capacity state of CPUs of the current rd can be driven by
> +		 * CPUs of another rd if they belong to the same performance
> +		 * domain. So, account for the utilization of these CPUs too
> +		 * by masking pd with cpu_online_mask instead of the rd span.
> +		 *
> +		 * If an entire performance domain is outside of the current rd,
> +		 * it will not appear in its pd list and will not be accounted
> +		 * by compute_energy().
> +		 */
> +		for_each_cpu_and(cpu, perf_domain_span(pd), cpu_online_mask) {
> +			util = cpu_util_next(cpu, p, dst_cpu);
> +			util = schedutil_freq_util(cpu, util, ENERGY_UTIL);
> +			max_util = max(util, max_util);
> +			sum_util += util;
> +		}
> +
> +		energy += em_pd_energy(pd->obj, max_util, sum_util);
> +		pd = pd->next;
> +	}

No real strong preference, but you could write that like:

	for (; pd; pd = pd->next) {
	}


> +
> +	return energy;
> +}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function
  2018-10-04  8:34   ` Peter Zijlstra
@ 2018-10-04  9:02     ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-04  9:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

Hi Peter,

On Thursday 04 Oct 2018 at 10:34:57 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:06AM +0100, Quentin Perret wrote:
> > +static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
> > +{
> > +	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
> > +	unsigned long util_est, util = READ_ONCE(cfs_rq->avg.util_avg);
> > +
> > +	/*
> > +	 * If @p migrates from @cpu to another, remove its contribution. Or,
> > +	 * if @p migrates from another CPU to @cpu, add its contribution. In
> > +	 * the other cases, @cpu is not impacted by the migration, so the
> > +	 * util_avg should already be correct.
> > +	 */
> > +	if (task_cpu(p) == cpu && dst_cpu != cpu)
> > +		util = max_t(long, util - task_util(p), 0);
> 
> That's not quite right; what you want to check for is underflow, but the
> above also results in 0 if util - task_util() > LONG_MAX without an
> underflow.

Hmm, I basically took that from capacity_spare_wake(). So I guess that
function wants fixing too ... :/

Now, is it actually possible to hit that case (util - task_util() > LONG_MAX)
given that util numbers are in the 1024 scale ? I guess that _could_
become a problem if we decided to increase the resolution one day.

> You could write: sub_positive(&util, task_util(p));

Ah right, I forgot about that macro. It seems to have slightly
stronger semantics than what I need here, though. I don't really care if
the intermediate value hits the memory and task_util() already has a
READ_ONCE. Not sure if that's a big deal.
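
(For reference, IIRC sub_positive() is roughly this:

#define sub_positive(_ptr, _val) do {				\
	typeof(_ptr) ptr = (_ptr);				\
	typeof(*ptr) val = (_val);				\
	typeof(*ptr) res, var = READ_ONCE(*ptr);		\
	res = var - val;					\
	if (res > var)						\
		res = 0;					\
	WRITE_ONCE(*ptr, res);					\
} while (0)

and the READ_ONCE/WRITE_ONCE pair is the part I don't strictly need.)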

> 
> > +	else if (task_cpu(p) != cpu && dst_cpu == cpu)
> > +		util += task_util(p);
> > +
> > +	if (sched_feat(UTIL_EST)) {
> > +		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> > +
> > +		/*
> > +		 * During wake-up, the task isn't enqueued yet and doesn't
> > +		 * appear in the cfs_rq->avg.util_est.enqueued of any rq,
> > +		 * so just add it (if needed) to "simulate" what will be
> > +		 * cpu_util() after the task has been enqueued.
> > +		 */
> > +		if (dst_cpu == cpu)
> > +			util_est += _task_util_est(p);
> > +
> > +		util = max(util, util_est);
> > +	}
> > +
> > +	return min_t(unsigned long, util, capacity_orig_of(cpu));
> 
> AFAICT both @util and capacity_orig_of() have 'unsigned long' as type.

Indeed, no need for min_t.
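
i.e. this can simply become:

	return min(util, capacity_orig_of(cpu));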

> > +}
> > +
> > +/*
> > + * compute_energy(): Estimates the energy that would be consumed if @p was
> > + * migrated to @dst_cpu. compute_energy() predicts what will be the utilization
> > > + * landscape of the CPUs after the task migration, and uses the Energy Model
> > + * to compute what would be the energy if we decided to actually migrate that
> > + * task.
> > + */
> > +static long compute_energy(struct task_struct *p, int dst_cpu,
> > +							struct perf_domain *pd)
> 
> static long
> compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)

OK.

> > +{
> > +	long util, max_util, sum_util, energy = 0;
> > +	int cpu;
> > +
> > +	while (pd) {
> > +		max_util = sum_util = 0;
> > +		/*
> > +		 * The capacity state of CPUs of the current rd can be driven by
> > +		 * CPUs of another rd if they belong to the same performance
> > +		 * domain. So, account for the utilization of these CPUs too
> > +		 * by masking pd with cpu_online_mask instead of the rd span.
> > +		 *
> > +		 * If an entire performance domain is outside of the current rd,
> > +		 * it will not appear in its pd list and will not be accounted
> > +		 * by compute_energy().
> > +		 */
> > +		for_each_cpu_and(cpu, perf_domain_span(pd), cpu_online_mask) {
> > +			util = cpu_util_next(cpu, p, dst_cpu);
> > +			util = schedutil_freq_util(cpu, util, ENERGY_UTIL);
> > +			max_util = max(util, max_util);
> > +			sum_util += util;
> > +		}
> > +
> > +		energy += em_pd_energy(pd->obj, max_util, sum_util);
> > +		pd = pd->next;
> > +	}
> 
> No real strong preference, but you could write that like:
> 
> 	for (; pd; pd = pd->next) {
> 	}

And OK for that one too. That saves one line. The next patch has the
same pattern, I'll change it too.

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms
  2018-10-03 16:27   ` Peter Zijlstra
@ 2018-10-04  9:10     ` Quentin Perret
  2018-10-04  9:38       ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-04  9:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wednesday 03 Oct 2018 at 18:27:19 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:03AM +0100, Quentin Perret wrote:
> > @@ -288,6 +321,21 @@ static void build_perf_domains(const struct cpumask *cpu_map)
> >  			goto free;
> >  		tmp->next = pd;
> >  		pd = tmp;
> > +
> > +		/*
> > +		 * Count performance domains and capacity states for the
> > +		 * complexity check.
> > +		 */
> > +		nr_pd++;
> > +		nr_cs += em_pd_nr_cap_states(pd->obj);
> > +	}
> > +
> > +	/* Bail out if the Energy Model complexity is too high. */
> > +	if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) {
> > +		if (sched_debug())
> > +			pr_info("rd %*pbl: EM complexity is too high\n ",
> > +						cpumask_pr_args(cpu_map));
> > +		goto free;
> >  	}
> 
> I would make than an unconditional WARN, we do not really expect that to
> trigger, but then it does, we really don't want to hide it.

OTOH that also means that some people with big asymmetric machines can
get a WARN message every time they boot, even if they don't want to
use EAS.

Now, that shouldn't happen any time soon, so it's maybe a good thing if
we get reports when/if people start to hit that one, so why not ...

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms
  2018-10-04  9:10     ` Quentin Perret
@ 2018-10-04  9:38       ` Peter Zijlstra
  2018-10-04  9:45         ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04  9:38 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thu, Oct 04, 2018 at 10:10:48AM +0100, Quentin Perret wrote:
> On Wednesday 03 Oct 2018 at 18:27:19 (+0200), Peter Zijlstra wrote:
> > On Wed, Sep 12, 2018 at 10:13:03AM +0100, Quentin Perret wrote:
> > > @@ -288,6 +321,21 @@ static void build_perf_domains(const struct cpumask *cpu_map)
> > >  			goto free;
> > >  		tmp->next = pd;
> > >  		pd = tmp;
> > > +
> > > +		/*
> > > +		 * Count performance domains and capacity states for the
> > > +		 * complexity check.
> > > +		 */
> > > +		nr_pd++;
> > > +		nr_cs += em_pd_nr_cap_states(pd->obj);
> > > +	}
> > > +
> > > +	/* Bail out if the Energy Model complexity is too high. */
> > > +	if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) {
> > > +		if (sched_debug())
> > > +			pr_info("rd %*pbl: EM complexity is too high\n ",
> > > +						cpumask_pr_args(cpu_map));
> > > +		goto free;
> > >  	}
> > 
> > I would make than an unconditional WARN, we do not really expect that to
> > trigger, but then it does, we really don't want to hide it.
> 
> OTOH that also means that some people with big asymmetric machines can
> get a WARN message every time they boot, and even if they don't want to
> use EAS.
> 
> Now, that shouldn't happen any time soon, so it's maybe a good thing if
> we get reports when/if people start to hit that one, so why not ...

Right, and if it becomes a real problem we can think of a solution (like
maybe a DT thingy that says to not use EAS, or a 'better' EAS
algorithm).

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-09-12  9:13 ` [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up Quentin Perret
@ 2018-10-04  9:44   ` Peter Zijlstra
  2018-10-04 10:27     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04  9:44 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:07AM +0100, Quentin Perret wrote:
> +	while (pd) {
> +		unsigned long cur_energy, spare_cap, max_spare_cap = 0;
> +		int max_spare_cap_cpu = -1;
> +
> +		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {

Which of the two masks do we expect to be the smallest?

> +			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
> +				continue;
> +
> +			/* Skip CPUs that will be overutilized. */
> +			util = cpu_util_next(cpu, p, cpu);
> +			cpu_cap = capacity_of(cpu);
> +			if (cpu_cap * 1024 < util * capacity_margin)
> +				continue;
> +
> +			/* Always use prev_cpu as a candidate. */
> +			if (cpu == prev_cpu) {
> +				prev_energy = compute_energy(p, prev_cpu, head);
> +				if (prev_energy < best_energy)
> +					best_energy = prev_energy;

				best_energy = min(best_energy, prev_energy);

That's both shorter and clearer.

> +				continue;
> +			}
> +
> +			/*
> +			 * Find the CPU with the maximum spare capacity in
> +			 * the performance domain
> +			 */
> +			spare_cap = cpu_cap - util;
> +			if (spare_cap > max_spare_cap) {
> +				max_spare_cap = spare_cap;
> +				max_spare_cap_cpu = cpu;
> +			}

Sometimes I wonder if something like:

#define min_filter(varp, val)		\
({					\
	typeof(varp) _varp = (varp);	\
	typeof(val)  _val  = (val);	\
	bool f = false;			\
					\
	if (_val < *_varp) {		\
		*_varp = _val;		\
		f = true;		\
	}				\
					\
	f;				\
})

and the corresponding max_filter() are worth the trouble; it would allow
writing:

	if (max_filter(&max_spare_cap, spare_cap))
		max_spare_cap_cpu = cpu;

and:

> +		}
> +
> +		/* Evaluate the energy impact of using this CPU. */
> +		if (max_spare_cap_cpu >= 0) {
> +			cur_energy = compute_energy(p, max_spare_cap_cpu, head);
> +			if (cur_energy < best_energy) {
> +				best_energy = cur_energy;
> +				best_energy_cpu = max_spare_cap_cpu;
> +			}

	if (min_filter(&best_energy, cur_energy))
		best_energy_cpu = max_spare_cap_cpu;

But then I figure, it is not... dunno. We do lots of this stuff.
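
For completeness, max_filter() would just be the mirror image, with the
comparison flipped:

#define max_filter(varp, val)		\
({					\
	typeof(varp) _varp = (varp);	\
	typeof(val)  _val  = (val);	\
	bool f = false;			\
					\
	if (_val > *_varp) {		\
		*_varp = _val;		\
		f = true;		\
	}				\
					\
	f;				\
})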

> +		}
> +		pd = pd->next;
> +	}
> +
> +	/*
> +	 * Pick the best CPU if prev_cpu cannot be used, or if it saves at
> +	 * least 6% of the energy used by prev_cpu.
> +	 */
> +	if (prev_energy == ULONG_MAX ||
> +			(prev_energy - best_energy) > (prev_energy >> 4))
> +		return best_energy_cpu;

Does that become more readable if we split that into two conditions?

	if (prev_energy == ULONG_MAX)
		return best_energy_cpu;

	if ((prev_energy - best_energy) > (prev_energy >> 4))
		return best_energy_cpu;

> +	return prev_cpu;
> +}
> +
>  /*
>   * select_task_rq_fair: Select target runqueue for the waking task in domains
>   * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> @@ -6360,13 +6468,37 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>  	int want_affine = 0;
>  	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
>  
> +	rcu_read_lock();
>  	if (sd_flag & SD_BALANCE_WAKE) {
>  		record_wakee(p);
> +
> +		/*
> +		 * Forkees are not accepted in the energy-aware wake-up path
> +		 * because they don't have any useful utilization data yet and
> +		 * it's not possible to forecast their impact on energy
> +		 * consumption. Consequently, they will be placed by
> +		 * find_idlest_cpu() on the least loaded CPU, which might turn
> +		 * out to be energy-inefficient in some use-cases. The
> +		 * alternative would be to bias new tasks towards specific types
> +		 * of CPUs first, or to try to infer their util_avg from the
> +		 * parent task, but those heuristics could hurt other use-cases
> +		 * too. So, until someone finds a better way to solve this,
> +		 * let's keep things simple by re-using the existing slow path.
> +		 */
> +		if (sched_feat(ENERGY_AWARE)) {
> +			struct root_domain *rd = cpu_rq(cpu)->rd;
> +			struct perf_domain *pd = rcu_dereference(rd->pd);
> +
> +			if (pd && !READ_ONCE(rd->overutilized)) {
> +				new_cpu = find_energy_efficient_cpu(p, prev_cpu, pd);
> +				goto unlock;
> +			}
> +		}
> +
> +		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
> +			      cpumask_test_cpu(cpu, &p->cpus_allowed);
>  	}

I would much prefer this to be something like:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8f601edd958..5475a885ec9f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6299,12 +6299,19 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 {
 	struct sched_domain *tmp, *sd = NULL;
 	int cpu = smp_processor_id();
-	int new_cpu = prev_cpu;
+	unsigned int new_cpu = prev_cpu;
 	int want_affine = 0;
 	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
 
 	if (sd_flag & SD_BALANCE_WAKE) {
 		record_wakee(p);
+
+		if (static_branch_unlikely(sched_eas_balance)) {
+			new_cpu = select_task_rq_eas(p, prev_cpu, sd_flags, wake_flags);
+			if (new_cpu < nr_cpu_ids)
+				return new_cpu;
+		}
+
 		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
 			      && cpumask_test_cpu(cpu, &p->cpus_allowed);
 	}


and then hide everything (including that giant comment) in
select_task_rq_eas().

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms
  2018-10-04  9:38       ` Peter Zijlstra
@ 2018-10-04  9:45         ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-04  9:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thursday 04 Oct 2018 at 11:38:48 (+0200), Peter Zijlstra wrote:
> On Thu, Oct 04, 2018 at 10:10:48AM +0100, Quentin Perret wrote:
> > On Wednesday 03 Oct 2018 at 18:27:19 (+0200), Peter Zijlstra wrote:
> > > On Wed, Sep 12, 2018 at 10:13:03AM +0100, Quentin Perret wrote:
> > > > @@ -288,6 +321,21 @@ static void build_perf_domains(const struct cpumask *cpu_map)
> > > >  			goto free;
> > > >  		tmp->next = pd;
> > > >  		pd = tmp;
> > > > +
> > > > +		/*
> > > > +		 * Count performance domains and capacity states for the
> > > > +		 * complexity check.
> > > > +		 */
> > > > +		nr_pd++;
> > > > +		nr_cs += em_pd_nr_cap_states(pd->obj);
> > > > +	}
> > > > +
> > > > +	/* Bail out if the Energy Model complexity is too high. */
> > > > +	if (nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY) {
> > > > +		if (sched_debug())
> > > > +			pr_info("rd %*pbl: EM complexity is too high\n ",
> > > > +						cpumask_pr_args(cpu_map));
> > > > +		goto free;
> > > >  	}
> > > 
> > > I would make than an unconditional WARN, we do not really expect that to
> > > trigger, but then it does, we really don't want to hide it.
> > 
> > OTOH that also means that some people with big asymmetric machines can
> > get a WARN message every time they boot, even if they don't want to
> > use EAS.
> > 
> > Now, that shouldn't happen any time soon, so it's maybe a good thing if
> > we get reports when/if people start to hit that one, so why not ...
> 
> Right, and if becomes a real problem we can think of a solution (like
> maybe a DT thingy that says to not use EAS, or a 'better' EAS
> algorithm).

That works for me. I'll switch to a plain WARN in v8.
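
i.e. roughly (sketch):

	/* Bail out if the Energy Model complexity is too high. */
	if (WARN(nr_pd * (nr_cs + nr_cpus) > EM_MAX_COMPLEXITY,
			"rd %*pbl: EM complexity is too high\n",
			cpumask_pr_args(cpu_map)))
		goto free;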

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling
  2018-10-03 16:24       ` Peter Zijlstra
@ 2018-10-04  9:57         ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-04  9:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wednesday 03 Oct 2018 at 18:24:35 (+0200), Peter Zijlstra wrote:
> Yeah, sysctl, see for example: sysctl.kernel.numa_balancing and the
> sched_numa_balancing static_key that goes with it.

OK, that works for me.

> I would default enable EAS if the EM is there and valid, but allow
> people to disable it.

Right, that'd be much nicer than the sched_feat off by default, I
agree. I'll see what I can do :-)

Thanks,
Quentin

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-10-04  9:44   ` Peter Zijlstra
@ 2018-10-04 10:27     ` Quentin Perret
  2018-10-04 10:41       ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-04 10:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thursday 04 Oct 2018 at 11:44:12 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:07AM +0100, Quentin Perret wrote:
> > +	while (pd) {
> > +		unsigned long cur_energy, spare_cap, max_spare_cap = 0;
> > +		int max_spare_cap_cpu = -1;
> > +
> > +		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
> 
> Which of the two masks do we expect to be the smallest?

Typically, perf_domain_span is smaller.

> > +			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
> > +				continue;
> > +
> > +			/* Skip CPUs that will be overutilized. */
> > +			util = cpu_util_next(cpu, p, cpu);
> > +			cpu_cap = capacity_of(cpu);
> > +			if (cpu_cap * 1024 < util * capacity_margin)
> > +				continue;
> > +
> > +			/* Always use prev_cpu as a candidate. */
> > +			if (cpu == prev_cpu) {
> > +				prev_energy = compute_energy(p, prev_cpu, head);
> > +				if (prev_energy < best_energy)
> > +					best_energy = prev_energy;
> 
> 				best_energy = min(best_energy, prev_energy);
> 
> That's both shorter and clearer.

OK.


> > +				continue;
> > +			}
> > +
> > +			/*
> > +			 * Find the CPU with the maximum spare capacity in
> > +			 * the performance domain
> > +			 */
> > +			spare_cap = cpu_cap - util;
> > +			if (spare_cap > max_spare_cap) {
> > +				max_spare_cap = spare_cap;
> > +				max_spare_cap_cpu = cpu;
> > +			}
> 
> Sometimes I wonder if something like:
> 
> #define min_filter(varp, val)		\
> ({					\
> 	typeof(varp) _varp = (varp);	\
> 	typeof(val)  _val  = (val);	\
> 	bool f = false;			\
> 					\
> 	if (_val < *_varp) {		\
> 		*_varp = _val;		\
> 		f = true;		\
> 	}				\
> 					\
> 	f;				\
> })
> 
> and the corresponding max_filter() are worth the trouble; it would allow
> writing:
> 
> 	if (max_filter(&max_spare_cap, spare_cap))
> 		max_spare_cap_cpu = cpu;
> 
> and:
> 
> > +		}
> > +
> > +		/* Evaluate the energy impact of using this CPU. */
> > +		if (max_spare_cap_cpu >= 0) {
> > +			cur_energy = compute_energy(p, max_spare_cap_cpu, head);
> > +			if (cur_energy < best_energy) {
> > +				best_energy = cur_energy;
> > +				best_energy_cpu = max_spare_cap_cpu;
> > +			}
> 
> 	if (min_filter(&best_energy, cur_energy))
> 		best_energy_cpu = max_spare_cap_cpu;
> 
> But then I figure, it is not... dunno. We do lots of this stuff.

If there are occurrences of this stuff all over the place, we could do
that in a separate clean-up patch that does just that, for the entire
file. Or maybe more ?

> > +		}
> > +		pd = pd->next;
> > +	}
> > +
> > +	/*
> > +	 * Pick the best CPU if prev_cpu cannot be used, or if it saves at
> > +	 * least 6% of the energy used by prev_cpu.
> > +	 */
> > +	if (prev_energy == ULONG_MAX ||
> > +			(prev_energy - best_energy) > (prev_energy >> 4))
> > +		return best_energy_cpu;
> 
> Does that become more readable if we split that into two conditions?
> 
> 	if (prev_energy == ULONG_MAX)
> 		return best_energy_cpu;
> 
> 	if ((prev_energy - best_energy) > (prev_energy >> 4))
> 		return best_energy_cpu;

Yeah, why not :-)
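
(For the record, prev_energy >> 4 is prev_energy / 16, so the actual
threshold is 6.25%; the "6%" in the comment is just a rounded figure.)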

> > +	return prev_cpu;
> > +}
> > +
> >  /*
> >   * select_task_rq_fair: Select target runqueue for the waking task in domains
> >   * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> > @@ -6360,13 +6468,37 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
> >  	int want_affine = 0;
> >  	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
> >  
> > +	rcu_read_lock();
> >  	if (sd_flag & SD_BALANCE_WAKE) {
> >  		record_wakee(p);
> > +
> > +		/*
> > +		 * Forkees are not accepted in the energy-aware wake-up path
> > +		 * because they don't have any useful utilization data yet and
> > +		 * it's not possible to forecast their impact on energy
> > +		 * consumption. Consequently, they will be placed by
> > +		 * find_idlest_cpu() on the least loaded CPU, which might turn
> > +		 * out to be energy-inefficient in some use-cases. The
> > +		 * alternative would be to bias new tasks towards specific types
> > +		 * of CPUs first, or to try to infer their util_avg from the
> > +		 * parent task, but those heuristics could hurt other use-cases
> > +		 * too. So, until someone finds a better way to solve this,
> > +		 * let's keep things simple by re-using the existing slow path.
> > +		 */
> > +		if (sched_feat(ENERGY_AWARE)) {
> > +			struct root_domain *rd = cpu_rq(cpu)->rd;
> > +			struct perf_domain *pd = rcu_dereference(rd->pd);
> > +
> > +			if (pd && !READ_ONCE(rd->overutilized)) {
> > +				new_cpu = find_energy_efficient_cpu(p, prev_cpu, pd);
> > +				goto unlock;
> > +			}
> > +		}
> > +
> > +		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
> > +			      cpumask_test_cpu(cpu, &p->cpus_allowed);
> >  	}
> 
> I would much prefer this to be something like:
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a8f601edd958..5475a885ec9f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6299,12 +6299,19 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>  {
>  	struct sched_domain *tmp, *sd = NULL;
>  	int cpu = smp_processor_id();
> -	int new_cpu = prev_cpu;
> +	unsigned int new_cpu = prev_cpu;
>  	int want_affine = 0;
>  	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
>  
>  	if (sd_flag & SD_BALANCE_WAKE) {
>  		record_wakee(p);
> +
> +		if (static_branch_unlikely(sched_eas_balance)) {
> +			new_cpu = select_task_rq_eas(p, prev_cpu, sd_flags, wake_flags);
> +			if (new_cpu < nr_cpu_ids)
> +				return new_cpu;
> +		}
> +
>  		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
>  			      && cpumask_test_cpu(cpu, &p->cpus_allowed);
>  	}
> and then hide everything (including that giant comment) in
> select_task_rq_eas().

So you think we should rename find_energy_efficient_cpu and put all the
checks in there ? Or should select_task_rq_eas do the checks and then
call find_energy_efficient_cpu ?

Not a huge deal, but that'll save some time if we agree on that one
upfront.

Thanks

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-10-04 10:27     ` Quentin Perret
@ 2018-10-04 10:41       ` Peter Zijlstra
  2018-10-04 10:55         ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04 10:41 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thu, Oct 04, 2018 at 11:27:22AM +0100, Quentin Perret wrote:
> > > +		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
> > 
> > Which of the two masks do we expect to be the smallest?
> 
> Typically, perf_domain_span is smaller.

OK, then the above expression is in the right order :-)

> > > +			if (spare_cap > max_spare_cap) {
> > > +				max_spare_cap = spare_cap;
> > > +				max_spare_cap_cpu = cpu;
> > > +			}
> > 
> > Sometimes I wonder if something like:
> > 
> > #define min_filter(varp, val)		\
> > ({					\
> > 	typeof(varp) _varp = (varp);	\
> > 	typeof(val)  _val  = (val);	\
> > 	bool f = false;			\
> > 					\
> > 	if (_val < *_varp) {		\
> > 		*_varp = _val;		\
> > 		f = true;		\
> > 	}				\
> > 					\
> > 	f;				\
> > })
> > 
> > and the corresponding max_filter() are worth the trouble; it would allow
> > writing:
> > 
> > 	if (max_filter(&max_spare_cap, spare_cap))
> > 		max_spare_cap_cpu = cpu;
> > 
> > and:
> > 
> > > +		}
> > > +
> > > +		/* Evaluate the energy impact of using this CPU. */
> > > +		if (max_spare_cap_cpu >= 0) {
> > > +			cur_energy = compute_energy(p, max_spare_cap_cpu, head);
> > > +			if (cur_energy < best_energy) {
> > > +				best_energy = cur_energy;
> > > +				best_energy_cpu = max_spare_cap_cpu;
> > > +			}
> > 
> > 	if (min_filter(&best_energy, cur_energy))
> > 		best_energy_cpu = max_spare_cap_cpu;
> > 
> > But then I figure, it is not... dunno. We do lots of this stuff.
> 
> If there are occurrences of this stuff all over the place, we could do
> that in a separate clean-up patch that does just that, for the entire
> file. Or maybe more?

Sure, not something that needs to be done now. I just always think of this
when I see this pattern repeated, but never seem to get around to doing
anything about it.

I figured I'd mention it ;-)
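
For completeness, the max_filter() counterpart would just be the mirror
image of min_filter() above (untested sketch, same style):

	#define max_filter(varp, val)		\
	({					\
		typeof(varp) _varp = (varp);	\
		typeof(val)  _val  = (val);	\
		bool f = false;			\
						\
		if (_val > *_varp) {		\
			*_varp = _val;		\
			f = true;		\
		}				\
						\
		f;				\
	})

with which the spare-capacity update would become:

	if (max_filter(&max_spare_cap, spare_cap))
		max_spare_cap_cpu = cpu;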

> > I would much prefer this to be something like:
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index a8f601edd958..5475a885ec9f 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6299,12 +6299,19 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
> >  {
> >  	struct sched_domain *tmp, *sd = NULL;
> >  	int cpu = smp_processor_id();
> > -	int new_cpu = prev_cpu;
> > +	unsigned int new_cpu = prev_cpu;
> >  	int want_affine = 0;
> >  	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
> >  
> >  	if (sd_flag & SD_BALANCE_WAKE) {
> >  		record_wakee(p);
> > +
> > +		if (static_branch_unlikely(&sched_eas_balance)) {
> > +			new_cpu = select_task_rq_eas(p, prev_cpu, sd_flag, wake_flags);
> > +			if (new_cpu < nr_cpu_ids)
> > +				return new_cpu;
> > +		}
> > +
> >  		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
> >  			      && cpumask_test_cpu(cpu, &p->cpus_allowed);
> >  	}
> > and then hide everything (including that giant comment) in
> > select_task_rq_eas().
> 
> So you think we should rename find_energy_efficient_cpu and put all the
> checks in there? Or should select_task_rq_eas do the checks and then
> call find_energy_efficient_cpu?
> 
> Not a huge deal, but that'll save some time if we agree on that one
> upfront.

Not sure, see what it looks like ;-) My main concern here was to get rid
of that giant blob in select_task_rq_fair().

* Re: [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil
  2018-09-12  9:13 ` [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil Quentin Perret
@ 2018-10-04 10:50   ` Peter Zijlstra
  2018-10-04 11:58     ` Quentin Perret
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04 10:50 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Wed, Sep 12, 2018 at 10:13:08AM +0100, Quentin Perret wrote:
> Energy Aware Scheduling (EAS) is designed with the assumption that
> frequencies of CPUs follow their utilization value. When using a CPUFreq
> governor other than schedutil, the chances of this assumption being true
> are small, if any. When schedutil is being used, EAS' predictions are at
> least consistent with the frequency requests. Although those requests
> are not guaranteed to be honored by the hardware, they should at least
> guide DVFS in the right direction and provide some hope with regard to
> the EAS model being accurate.
> 
> To make sure EAS is only used in a sane configuration, create a strong
> dependency on schedutil being used. Since having sugov compiled-in does
> not provide that guarantee, make CPUFreq call a scheduler function on
> governor changes, hence letting it rebuild the scheduling domains,
> check the governors of the online CPUs, and enable/disable EAS
> accordingly.

So this patch disables EAS when we change cpufreq gov. How about we
force select schedutil and disallow changing it when EAS is enabled?

Instead of silently disabling EAS when we frob the governor?

* Re: [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-10-04 10:41       ` Peter Zijlstra
@ 2018-10-04 10:55         ` Quentin Perret
  2018-10-04 12:18           ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Quentin Perret @ 2018-10-04 10:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thursday 04 Oct 2018 at 12:41:07 (+0200), Peter Zijlstra wrote:
> Not sure, see what it looks like ;-) My main concern here was to get rid
> of that giant blob in select_task_rq_fair().

OK, got it. I'll probably just move the checks into the function and
merge that large comment into the already massive comment above
find_energy_efficient_cpu() or something. We'll see :-)
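
Roughly along these lines, I imagine -- only a sketch of the shape; the
exact signature and the -1 "fall back to the existing path" convention
are assumptions at this point, not the final code:

	/* call site in select_task_rq_fair(), under SD_BALANCE_WAKE */
	new_cpu = find_energy_efficient_cpu(p, prev_cpu);
	if (new_cpu >= 0)
		return new_cpu;
	new_cpu = prev_cpu;

	/* and the checks at the top of find_energy_efficient_cpu() */
	struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
	struct perf_domain *pd;

	if (!sched_feat(ENERGY_AWARE))
		return -1;

	/* rcu_read_lock() must be held (or taken here) around this */
	pd = rcu_dereference(rd->pd);
	if (!pd || READ_ONCE(rd->overutilized))
		return -1;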

And do you actually care about the function name? We've been using
find_energy_efficient_cpu() for some time in Android now, so that's just
slightly easier for us if that can be re-used. Not a big deal either,
though.

Thanks,
Quentin

* Re: [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil
  2018-10-04 10:50   ` Peter Zijlstra
@ 2018-10-04 11:58     ` Quentin Perret
  0 siblings, 0 replies; 57+ messages in thread
From: Quentin Perret @ 2018-10-04 11:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thursday 04 Oct 2018 at 12:50:23 (+0200), Peter Zijlstra wrote:
> On Wed, Sep 12, 2018 at 10:13:08AM +0100, Quentin Perret wrote:
> > Energy Aware Scheduling (EAS) is designed with the assumption that
> > frequencies of CPUs follow their utilization value. When using a CPUFreq
> > governor other than schedutil, the chances of this assumption being true
> > are small, if any. When schedutil is being used, EAS' predictions are at
> > least consistent with the frequency requests. Although those requests
> > are not guaranteed to be honored by the hardware, they should at least
> > guide DVFS in the right direction and provide some hope with regard to
> > the EAS model being accurate.
> > 
> > To make sure EAS is only used in a sane configuration, create a strong
> > dependency on schedutil being used. Since having sugov compiled-in does
> > not provide that guarantee, make CPUFreq call a scheduler function on
> > governor changes, hence letting it rebuild the scheduling domains,
> > check the governors of the online CPUs, and enable/disable EAS
> > accordingly.
> 
> So this patch disables EAS when we change cpufreq gov. How about we
> force select schedutil and disallow changing it when EAS is enabled?
> 
> Instead of silently disabling EAS when we frob the governor?

By doing it this way I think we get better integration with things
like exclusive cpusets. In the end, EAS is enabled on a per-root_domain
basis. So, say you have two root domains, A and B. EAS is enabled on A
and disabled on B (because it doesn't have SD_ASYM_CPUCAPACITY or
something). Should you be forced to use sugov on the CPUs of B just to
run EAS on A? If there is a cpufreq policy covering only the CPUs of B,
I'd say probably not.
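
To make that a bit more concrete, the check I have in mind on the
domain-rebuild path is roughly the following (simplified sketch; error
handling and the surrounding rebuild logic are trimmed):

	/* For each CPU of the root domain being rebuilt: */
	for_each_cpu(cpu, cpu_map) {
		struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
		struct cpufreq_governor *gov;

		if (!policy)
			goto free;
		gov = policy->governor;
		cpufreq_cpu_put(policy);

		/* EAS stays disabled on this rd unless sugov drives the CPU */
		if (gov != &schedutil_gov)
			goto free;
	}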

Now I get the argument that this is happening silently as-is, so we can
do better. How about we print a message in dmesg when EAS gets disabled
because of a governor change?

Also, I'm currently working on a doc patch for EAS (to be posted soon),
so I can (and will) make it very clear there that you _must_ use sugov
to use EAS.

Would that work?

Thanks,
Quentin

* Re: [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up
  2018-10-04 10:55         ` Quentin Perret
@ 2018-10-04 12:18           ` Peter Zijlstra
  0 siblings, 0 replies; 57+ messages in thread
From: Peter Zijlstra @ 2018-10-04 12:18 UTC (permalink / raw)
  To: Quentin Perret
  Cc: rjw, linux-kernel, linux-pm, gregkh, mingo, dietmar.eggemann,
	morten.rasmussen, chris.redpath, patrick.bellasi,
	valentin.schneider, vincent.guittot, thara.gopinath,
	viresh.kumar, tkjos, joel, smuckle, adharmap, skannan, pkondeti,
	juri.lelli, edubezval, srinivas.pandruvada, currojerez,
	javi.merino

On Thu, Oct 04, 2018 at 11:55:09AM +0100, Quentin Perret wrote:
> On Thursday 04 Oct 2018 at 12:41:07 (+0200), Peter Zijlstra wrote:
> > Not sure, see what it looks like ;-) My main concern here was to get rid
> > of that giant blob in select_task_rq_fair().
> 
> OK, got it. I'll probably just move the checks into the function and
> merge that large comment into the already massive comment above
> find_energy_efficient_cpu() or something. We'll see :-)
> 
> And do you actually care about the function name? We've been using
> find_energy_efficient_cpu() for some time in Android now, so that's just
> slightly easier for us if that can be re-used. Not a big deal either,
> though.

Sure, that's fine I suppose.

Thread overview: 57+ messages
2018-09-12  9:12 [PATCH v7 00/14] Energy Aware Scheduling Quentin Perret
2018-09-12  9:12 ` [PATCH v7 01/14] sched: Relocate arch_scale_cpu_capacity Quentin Perret
2018-09-12  9:12 ` [PATCH v7 02/14] sched/cpufreq: Prepare schedutil for Energy Aware Scheduling Quentin Perret
2018-09-12 14:56   ` Vincent Guittot
2018-09-12 14:56     ` Vincent Guittot
2018-09-18 21:33   ` Rafael J. Wysocki
2018-09-18 21:33     ` Rafael J. Wysocki
2018-09-27 12:17     ` Quentin Perret
2018-09-27 12:17       ` Quentin Perret
2018-09-28  8:23       ` Rafael J. Wysocki
2018-09-28  8:23         ` Rafael J. Wysocki
2018-09-28  8:35         ` Quentin Perret
2018-09-28  8:35           ` Quentin Perret
2018-09-28  8:35           ` Rafael J. Wysocki
2018-09-28  8:35             ` Rafael J. Wysocki
2018-09-12  9:12 ` [PATCH v7 03/14] PM: Introduce an Energy Model management framework Quentin Perret
2018-10-02 12:25   ` Peter Zijlstra
2018-10-02 12:54     ` Quentin Perret
2018-10-02 13:50       ` Peter Zijlstra
2018-10-02 12:30   ` Peter Zijlstra
2018-10-02 12:51     ` Quentin Perret
2018-10-02 13:48       ` Peter Zijlstra
2018-10-02 14:05         ` Quentin Perret
2018-10-02 14:29           ` Peter Zijlstra
2018-10-02 14:40             ` Quentin Perret
2018-10-02 19:12               ` Andrea Parri
2018-10-03  7:48                 ` Quentin Perret
2018-09-12  9:12 ` [PATCH v7 04/14] PM / EM: Expose the Energy Model in sysfs Quentin Perret
2018-09-12  9:13 ` [PATCH v7 05/14] sched: Introduce a sched_feat for Energy Aware Scheduling Quentin Perret
2018-10-02 12:34   ` Peter Zijlstra
2018-10-02 13:08     ` Quentin Perret
2018-10-03 16:24       ` Peter Zijlstra
2018-10-04  9:57         ` Quentin Perret
2018-09-12  9:13 ` [PATCH v7 06/14] sched/topology: Reference the Energy Model of CPUs when available Quentin Perret
2018-10-02 12:36   ` Peter Zijlstra
2018-10-02 13:16     ` Quentin Perret
2018-09-12  9:13 ` [PATCH v7 07/14] sched/topology: Lowest CPU asymmetry sched_domain level pointer Quentin Perret
2018-09-12  9:13 ` [PATCH v7 08/14] sched/topology: Disable EAS on inappropriate platforms Quentin Perret
2018-10-03 16:27   ` Peter Zijlstra
2018-10-04  9:10     ` Quentin Perret
2018-10-04  9:38       ` Peter Zijlstra
2018-10-04  9:45         ` Quentin Perret
2018-09-12  9:13 ` [PATCH v7 09/14] sched/fair: Clean-up update_sg_lb_stats parameters Quentin Perret
2018-09-12  9:13 ` [PATCH v7 10/14] sched: Add over-utilization/tipping point indicator Quentin Perret
2018-09-12  9:13 ` [PATCH v7 11/14] sched/fair: Introduce an energy estimation helper function Quentin Perret
2018-10-04  8:34   ` Peter Zijlstra
2018-10-04  9:02     ` Quentin Perret
2018-09-12  9:13 ` [PATCH v7 12/14] sched/fair: Select an energy-efficient CPU on task wake-up Quentin Perret
2018-10-04  9:44   ` Peter Zijlstra
2018-10-04 10:27     ` Quentin Perret
2018-10-04 10:41       ` Peter Zijlstra
2018-10-04 10:55         ` Quentin Perret
2018-10-04 12:18           ` Peter Zijlstra
2018-09-12  9:13 ` [PATCH v7 13/14] sched/topology: Make Energy Aware Scheduling depend on schedutil Quentin Perret
2018-10-04 10:50   ` Peter Zijlstra
2018-10-04 11:58     ` Quentin Perret
2018-09-12  9:13 ` [PATCH v7 14/14] OPTIONAL: cpufreq: dt: Register an Energy Model Quentin Perret
