* [PATCH 0/2] Rework interface between scheduler and schedutil governor
@ 2023-10-13 15:14 Vincent Guittot
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-13 15:14 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, Vincent Guittot

Following the discussion with Qais [1] about how to handle uclamp
requirements, and after syncing with him, we agreed that I should move
forward on the patchset to rework the interface between the scheduler
and the schedutil governor so that more information is provided to the
latter. The scheduler (and EAS in particular) no longer needs to guess
which headroom the governor wants to apply; it directly asks for the
target frequency. The governor in turn gets the actual utilization and
new minimum and maximum boundaries to select this target frequency,
and no longer has to deal with scheduler internals such as uclamp when
including the iowait boost.

[1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
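
A rough sketch of the resulting flow inside the governor (only an
illustration, using the function names introduced by patch 1):

	unsigned long min, max, util;

	/* Actual CPU utilization plus min/max performance boundaries */
	util = effective_cpu_util(cpu, cpu_util_cfs_boost(cpu), &min, &max);

	/* schedutil applies its headroom and picks the final performance level */
	util = sugov_effective_cpu_perf(cpu, util, min, max);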

Vincent Guittot (2):
  sched/schedutil: rework performance estimation
  sched/schedutil: rework iowait boost

 include/linux/energy_model.h     |  1 -
 kernel/sched/core.c              | 85 ++++++++++++--------------------
 kernel/sched/cpufreq_schedutil.c | 72 +++++++++++++++++----------
 kernel/sched/fair.c              | 22 +++++++--
 kernel/sched/sched.h             | 84 +++----------------------------
 5 files changed, 105 insertions(+), 159 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-13 15:14 [PATCH 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
@ 2023-10-13 15:14 ` Vincent Guittot
  2023-10-13 18:20   ` Ingo Molnar
                     ` (2 more replies)
  2023-10-13 15:14 ` [PATCH 2/2] sched/schedutil: rework iowait boost Vincent Guittot
  2023-10-26 10:19 ` [PATCH 0/2] Rework interface between scheduler and schedutil governor Wyes Karny
  2 siblings, 3 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-13 15:14 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, Vincent Guittot

The current method of taking uclamp hints into account when estimating
the target frequency can end up in situations where the selected target
frequency is higher than the uclamp hints even though there is no real
need for it. Such cases mainly happen because we currently mix the
traditional scheduler utilization signal with the uclamp performance
hints. By adding these two metrics together, we lose important
information when it comes to selecting the target frequency and we have
to make assumptions which can't fit all cases.

Rework the interface between the scheduler and schedutil governor in order
to propagate all information down to the cpufreq governor.

effective_cpu_util() interface changes and now returns the actual
utilization of the CPU together with 2 optional output values:
- The minimum performance for this CPU; typically the capacity needed to
  handle the deadline tasks and the interrupt pressure, but also the
  uclamp_min request when available.
- The maximum targeted performance for this CPU, which reflects the
  maximum level that we would like not to exceed. By default it is the
  CPU capacity, but it can be reduced because of some performance hints
  set with uclamp. The value can be lower than the actual utilization
  and/or the min performance level.

A new sugov_effective_cpu_perf() interface is also available to compute
the final performance level that is targeted for the CPU after applying
some cpufreq headroom and taking into account all inputs.

With these 2 functions, schedutil is now able to decide when it must go
above the uclamp hints. It also now has a generic way to get the minimum
performance level.
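
As a purely illustrative example, assuming map_util_perf() adds the usual
~25% cpufreq headroom (util + util/4) on a CPU of capacity 1024:

	/*
	 * actual = 300, min = 200 (irq + DL bandwidth), max = 512 (uclamp_max)
	 *   map_util_perf(300) = 375 -> below max, so max becomes 375
	 *   target = max(200, 375)   = 375
	 *
	 * actual = 600, min = 200, max = 512
	 *   map_util_perf(600) = 750 -> clamped to max = 512
	 *   target = max(200, 512)   = 512, the uclamp_max request is honoured
	 */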

The dependency between the energy model and the cpufreq governor and its
headroom policy doesn't exist anymore.

eenv_pd_max_util asks schedutil for the targeted performance after
applying the impact of the waking task.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/energy_model.h     |  1 -
 kernel/sched/core.c              | 85 ++++++++++++--------------------
 kernel/sched/cpufreq_schedutil.c | 43 ++++++++++++----
 kernel/sched/fair.c              | 22 +++++++--
 kernel/sched/sched.h             | 24 +++------
 5 files changed, 91 insertions(+), 84 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index b9caa01dfac4..adec808b371a 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 	scale_cpu = arch_scale_cpu_capacity(cpu);
 	ps = &pd->table[pd->nr_perf_states - 1];
 
-	max_util = map_util_perf(max_util);
 	max_util = min(max_util, allowed_cpu_cap);
 	freq = map_util_freq(max_util, ps->frequency, scale_cpu);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a3f9cd52eec5..78228abd1219 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7381,18 +7381,13 @@ int sched_core_idle_cpu(int cpu)
  * required to meet deadlines.
  */
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 enum cpu_util_type type,
-				 struct task_struct *p)
+				 unsigned long *min,
+				 unsigned long *max)
 {
-	unsigned long dl_util, util, irq, max;
+	unsigned long util, irq, scale;
 	struct rq *rq = cpu_rq(cpu);
 
-	max = arch_scale_cpu_capacity(cpu);
-
-	if (!uclamp_is_used() &&
-	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
-		return max;
-	}
+	scale = arch_scale_cpu_capacity(cpu);
 
 	/*
 	 * Early check to see if IRQ/steal time saturates the CPU, can be
@@ -7400,45 +7395,36 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	 * update_irq_load_avg().
 	 */
 	irq = cpu_util_irq(rq);
-	if (unlikely(irq >= max))
-		return max;
+	if (unlikely(irq >= scale)) {
+		if (min)
+			*min = scale;
+		if (max)
+			*max = scale;
+		return scale;
+	}
+
+	/* The minimum utilization returns the highest level between:
+	 * - the computed DL bandwidth needed with the irq pressure which
+	 *   steals time to the deadline task.
+	 * - The minimum bandwidth requirement for CFS.
+	 */
+	if (min)
+		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
 
 	/*
 	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
 	 * CFS tasks and we use the same metric to track the effective
 	 * utilization (PELT windows are synchronized) we can directly add them
 	 * to obtain the CPU's actual utilization.
-	 *
-	 * CFS and RT utilization can be boosted or capped, depending on
-	 * utilization clamp constraints requested by currently RUNNABLE
-	 * tasks.
-	 * When there are no CFS RUNNABLE tasks, clamps are released and
-	 * frequency will be gracefully reduced with the utilization decay.
 	 */
 	util = util_cfs + cpu_util_rt(rq);
-	if (type == FREQUENCY_UTIL)
-		util = uclamp_rq_util_with(rq, util, p);
-
-	dl_util = cpu_util_dl(rq);
-
-	/*
-	 * For frequency selection we do not make cpu_util_dl() a permanent part
-	 * of this sum because we want to use cpu_bw_dl() later on, but we need
-	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
-	 * that we select f_max when there is no idle time.
-	 *
-	 * NOTE: numerical errors or stop class might cause us to not quite hit
-	 * saturation when we should -- something for later.
-	 */
-	if (util + dl_util >= max)
-		return max;
+	util += cpu_util_dl(rq);
 
-	/*
-	 * OTOH, for energy computation we need the estimated running time, so
-	 * include util_dl and ignore dl_bw.
-	 */
-	if (type == ENERGY_UTIL)
-		util += dl_util;
+	if (util >= scale) {
+		if (max)
+			*max = scale;
+		return scale;
+	}
 
 	/*
 	 * There is still idle time; further improve the number by using the
@@ -7449,28 +7435,21 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	 *   U' = irq + --------- * U
 	 *                 max
 	 */
-	util = scale_irq_capacity(util, irq, max);
+	util = scale_irq_capacity(util, irq, scale);
 	util += irq;
 
-	/*
-	 * Bandwidth required by DEADLINE must always be granted while, for
-	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
-	 * to gracefully reduce the frequency when no tasks show up for longer
-	 * periods of time.
-	 *
-	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
-	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
-	 * an interface. So, we only do the latter for now.
+	/* The maximum hint is a soft bandwidth requirement which can be lower
+	 * than the actual utilization because of max uclamp requirments
 	 */
-	if (type == FREQUENCY_UTIL)
-		util += cpu_bw_dl(rq);
+	if (max)
+		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
 
-	return min(max, util);
+	return min(scale, util);
 }
 
 unsigned long sched_cpu_util(int cpu)
 {
-	return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
+	return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
 }
 #endif /* CONFIG_SMP */
 
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 458d359f5991..8cb323522b90 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -47,7 +47,7 @@ struct sugov_cpu {
 	u64			last_update;
 
 	unsigned long		util;
-	unsigned long		bw_dl;
+	unsigned long		bw_min;
 
 	/* The field below is for single-CPU policies only: */
 #ifdef CONFIG_NO_HZ_COMMON
@@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
-	util = map_util_perf(util);
 	freq = map_util_freq(util, freq, max);
 
 	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
@@ -153,14 +152,38 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+				 unsigned long min,
+				 unsigned long max)
+{
+	unsigned long target;
+	struct rq *rq = cpu_rq(cpu);
+
+	if (rt_rq_is_runnable(&rq->rt))
+		return max;
+
+	/* Provide at least enough capacity for DL + irq */
+	target =  min;
+
+	actual = map_util_perf(actual);
+	/* Actually we don't need to target the max performance */
+	if (actual < max)
+		max = actual;
+
+	/*
+	 * Ensure at least minimum performance while providing more compute
+	 * capacity when possible.
+	 */
+	return max(target, max);
+}
+
 static void sugov_get_util(struct sugov_cpu *sg_cpu)
 {
-	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
-	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
 
-	sg_cpu->bw_dl = cpu_bw_dl(rq);
-	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
-					  FREQUENCY_UTIL, NULL);
+	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+	sg_cpu->bw_min = map_util_perf(min);
+	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
 }
 
 /**
@@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
  */
 static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
 {
-	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
 		sg_cpu->sg_policy->limits_changed = true;
 }
 
@@ -407,8 +430,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
 		sg_cpu->util = prev_util;
 
-	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
-				   map_util_perf(sg_cpu->util), max_cap);
+	cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
+				   sg_cpu->util, max_cap);
 
 	sg_cpu->sg_policy->last_freq_update_time = time;
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 922905194c0c..d4f7b2f49c44 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7628,7 +7628,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
 	for_each_cpu(cpu, pd_cpus) {
 		unsigned long util = cpu_util(cpu, p, -1, 0);
 
-		busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
+		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
 	}
 
 	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
@@ -7651,7 +7651,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
 	for_each_cpu(cpu, pd_cpus) {
 		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
 		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
-		unsigned long eff_util;
+		unsigned long eff_util, min, max;
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -7660,7 +7660,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
 		 * NOTE: in case RT tasks are running, by default the
 		 * FREQUENCY_UTIL's utilization can be max OPP.
 		 */
-		eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
+		eff_util = effective_cpu_util(cpu, util, &min, &max);
+
+		/* Task's uclamp can modify min and max value */
+		if (tsk && uclamp_is_used()) {
+			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
+
+			/*
+			 * If there is no active max uclamp constraint,
+			 * directly use task's one otherwise keep max
+			 */
+			if (uclamp_rq_is_idle(cpu_rq(cpu)))
+				max = uclamp_eff_value(p, UCLAMP_MAX);
+			else
+				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
+		}
+
+		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
 		max_util = max(max_util, eff_util);
 	}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 65cad0e5729e..3873b4de7cfa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2962,24 +2962,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 #endif
 
 #ifdef CONFIG_SMP
-/**
- * enum cpu_util_type - CPU utilization type
- * @FREQUENCY_UTIL:	Utilization used to select frequency
- * @ENERGY_UTIL:	Utilization used during energy calculation
- *
- * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
- * need to be aggregated differently depending on the usage made of them. This
- * enum is used within effective_cpu_util() to differentiate the types of
- * utilization expected by the callers, and adjust the aggregation accordingly.
- */
-enum cpu_util_type {
-	FREQUENCY_UTIL,
-	ENERGY_UTIL,
-};
-
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 enum cpu_util_type type,
-				 struct task_struct *p);
+				 unsigned long *min,
+				 unsigned long *max);
+
+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+				 unsigned long min,
+				 unsigned long max);
+
 
 /*
  * Verify the fitness of task @p to run on @cpu taking into account the
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 2/2] sched/schedutil: rework iowait boost
  2023-10-13 15:14 [PATCH 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
@ 2023-10-13 15:14 ` Vincent Guittot
  2023-10-18  7:39   ` Beata Michalska
  2023-10-26 10:19 ` [PATCH 0/2] Rework interface between scheduler and schedutil governor Wyes Karny
  2 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-10-13 15:14 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, Vincent Guittot

Use the max value that has already been computed inside sugov_get_util()
to cap the iowait boost and remove the dependency on uclamp_rq_util_with(),
which is no longer used.
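
A rough sketch of the resulting update path (see the diff below for the
actual code):

	boost = sugov_iowait_apply(sg_cpu, time, max_cap);
	util = effective_cpu_util(cpu, cpu_util_cfs_boost(cpu), &min, &max);
	/* The iowait boost raises the actual utilization ... */
	util = max(util, boost);
	/* ... but the result stays capped by the max performance level */
	sg_cpu->util = sugov_effective_cpu_perf(cpu, util, min, max);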

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/cpufreq_schedutil.c | 29 ++++++++-------
 kernel/sched/sched.h             | 60 --------------------------------
 2 files changed, 14 insertions(+), 75 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 8cb323522b90..820612867769 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -177,11 +177,12 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
 	return max(target, max);
 }
 
-static void sugov_get_util(struct sugov_cpu *sg_cpu)
+static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
 {
 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
 
 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+	util = max(util, boost);
 	sg_cpu->bw_min = map_util_perf(min);
 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
 }
@@ -274,18 +275,16 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
  * This mechanism is designed to boost high frequently IO waiting tasks, while
  * being more conservative on tasks which does sporadic IO operations.
  */
-static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
+static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 			       unsigned long max_cap)
 {
-	unsigned long boost;
-
 	/* No boost currently required */
 	if (!sg_cpu->iowait_boost)
-		return;
+		return 0;
 
 	/* Reset boost if the CPU appears to have been idle enough */
 	if (sugov_iowait_reset(sg_cpu, time, false))
-		return;
+		return 0;
 
 	if (!sg_cpu->iowait_boost_pending) {
 		/*
@@ -294,7 +293,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 		sg_cpu->iowait_boost >>= 1;
 		if (sg_cpu->iowait_boost < IOWAIT_BOOST_MIN) {
 			sg_cpu->iowait_boost = 0;
-			return;
+			return 0;
 		}
 	}
 
@@ -304,10 +303,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 	 * sg_cpu->util is already in capacity scale; convert iowait_boost
 	 * into the same scale so we can compare.
 	 */
-	boost = (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
-	boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
-	if (sg_cpu->util < boost)
-		sg_cpu->util = boost;
+	return (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -337,6 +333,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
 					      u64 time, unsigned long max_cap,
 					      unsigned int flags)
 {
+	unsigned long boost;
+
 	sugov_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
 
@@ -345,8 +343,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
 	if (!sugov_should_update_freq(sg_cpu->sg_policy, time))
 		return false;
 
-	sugov_get_util(sg_cpu);
-	sugov_iowait_apply(sg_cpu, time, max_cap);
+	boost = sugov_iowait_apply(sg_cpu, time, max_cap);
+	sugov_get_util(sg_cpu, boost);
 
 	return true;
 }
@@ -447,9 +445,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 
 	for_each_cpu(j, policy->cpus) {
 		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
+		unsigned long boost;
 
-		sugov_get_util(j_sg_cpu);
-		sugov_iowait_apply(j_sg_cpu, time, max_cap);
+		boost = sugov_iowait_apply(j_sg_cpu, time, max_cap);
+		sugov_get_util(j_sg_cpu, boost);
 
 		util = max(j_sg_cpu->util, util);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3873b4de7cfa..b181edaf4d41 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3026,59 +3026,6 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
 	return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
 }
 
-/**
- * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
- * @rq:		The rq to clamp against. Must not be NULL.
- * @util:	The util value to clamp.
- * @p:		The task to clamp against. Can be NULL if you want to clamp
- *		against @rq only.
- *
- * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
- *
- * If sched_uclamp_used static key is disabled, then just return the util
- * without any clamping since uclamp aggregation at the rq level in the fast
- * path is disabled, rendering this operation a NOP.
- *
- * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
- * will return the correct effective uclamp value of the task even if the
- * static key is disabled.
- */
-static __always_inline
-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-				  struct task_struct *p)
-{
-	unsigned long min_util = 0;
-	unsigned long max_util = 0;
-
-	if (!static_branch_likely(&sched_uclamp_used))
-		return util;
-
-	if (p) {
-		min_util = uclamp_eff_value(p, UCLAMP_MIN);
-		max_util = uclamp_eff_value(p, UCLAMP_MAX);
-
-		/*
-		 * Ignore last runnable task's max clamp, as this task will
-		 * reset it. Similarly, no need to read the rq's min clamp.
-		 */
-		if (uclamp_rq_is_idle(rq))
-			goto out;
-	}
-
-	min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
-	max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
-out:
-	/*
-	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
-	 * RUNNABLE tasks with _different_ clamps, we can end up with an
-	 * inversion. Fix it now when the clamps are applied.
-	 */
-	if (unlikely(min_util >= max_util))
-		return min_util;
-
-	return clamp(util, min_util, max_util);
-}
-
 /* Is the rq being capped/throttled by uclamp_max? */
 static inline bool uclamp_rq_is_capped(struct rq *rq)
 {
@@ -3116,13 +3063,6 @@ static inline unsigned long uclamp_eff_value(struct task_struct *p,
 	return SCHED_CAPACITY_SCALE;
 }
 
-static inline
-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-				  struct task_struct *p)
-{
-	return util;
-}
-
 static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
 
 static inline bool uclamp_is_used(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
@ 2023-10-13 18:20   ` Ingo Molnar
  2023-10-15  8:02     ` Vincent Guittot
  2023-10-18  7:04   ` Beata Michalska
  2023-10-20  9:48   ` Dietmar Eggemann
  2 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2023-10-13 18:20 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba


* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> +
> +	/* The minimum utilization returns the highest level between:
> +	 * - the computed DL bandwidth needed with the irq pressure which
> +	 *   steals time to the deadline task.
> +	 * - The minimum bandwidth requirement for CFS.
> +	 */

Nit: please use the standard multi-line Linux kernel comment style.
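
I.e. something like:

	/*
	 * The minimum utilization returns the highest level between:
	 * - the computed DL bandwidth needed with the irq pressure which
	 *   steals time to the deadline task.
	 * - The minimum bandwidth requirement for CFS.
	 */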

> +	/* The maximum hint is a soft bandwidth requirement which can be lower
> +	 * than the actual utilization because of max uclamp requirments
>  	 */

Ditto.

> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max)
> +{
> +	unsigned long target;
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if (rt_rq_is_runnable(&rq->rt))
> +		return max;
> +
> +	/* Provide at least enough capacity for DL + irq */
> +	target =  min;

s/  / /
s/irq/IRQ/

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-13 18:20   ` Ingo Molnar
@ 2023-10-15  8:02     ` Vincent Guittot
  0 siblings, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-15  8:02 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

On Fri, 13 Oct 2023 at 20:21, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> > +
> > +     /* The minimum utilization returns the highest level between:
> > +      * - the computed DL bandwidth needed with the irq pressure which
> > +      *   steals time to the deadline task.
> > +      * - The minimum bandwidth requirement for CFS.
> > +      */
>
> Nit: please use the standard multi-line Linux kernel comment style.

Yes, I don't know how I ended up with such a comment style. I will fix
it and the others below.

>
> > +     /* The maximum hint is a soft bandwidth requirement which can be lower
> > +      * than the actual utilization because of max uclamp requirments
> >        */
>
> Ditto.
>
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max)
> > +{
> > +     unsigned long target;
> > +     struct rq *rq = cpu_rq(cpu);
> > +
> > +     if (rt_rq_is_runnable(&rq->rt))
> > +             return max;
> > +
> > +     /* Provide at least enough capacity for DL + irq */
> > +     target =  min;
>
> s/  / /
> s/irq/IRQ/
>
> Thanks,
>
>         Ingo

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
  2023-10-13 18:20   ` Ingo Molnar
@ 2023-10-18  7:04   ` Beata Michalska
  2023-10-18 12:20     ` Lukasz Luba
  2023-10-18 13:25     ` Vincent Guittot
  2023-10-20  9:48   ` Dietmar Eggemann
  2 siblings, 2 replies; 15+ messages in thread
From: Beata Michalska @ 2023-10-18  7:04 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

Hi Vincent,

On Fri, Oct 13, 2023 at 05:14:49PM +0200, Vincent Guittot wrote:
> The current method to take into account uclamp hints when estimating the
> target frequency can end into situation where the selected target
> frequency is finally higher than uclamp hints whereas there are no real
> needs. Such cases mainly happen because we are currently mixing the
> traditional scheduler utilization signal with the uclamp performance
> hints. By adding these 2 metrics, we loose an important information when
> it comes to select the target frequency and we have to make some
> assumptions which can't fit all cases.
> 
> Rework the interface between the scheduler and schedutil governor in order
> to propagate all information down to the cpufreq governor.
/governor/driver ?
> 
> effective_cpu_util() interface changes and now returns the actual
> utilization of the CPU with 2 optional inputs:
> - The minimum performance for this CPU; typically the capacity to handle
>   the deadline task and the interrupt pressure. But also uclamp_min
>   request when available.
> - The maximum targeting performance for this CPU which reflects the
>   maximum level that we would like to not exceed. By default it will be
>   the CPU capacity but can be reduced because of some performance hints
>   set with uclamp. The value can be lower than actual utilization and/or
>   min performance level.
> 
> A new sugov_effective_cpu_perf() interface is also available to compute
> the final performance level that is targeted for the CPU after applying
> some cpufreq headroom and taking into account all inputs.
> 
> With these 2 functions, schedutil is now able to decide when it must go
> above uclamp hints. It now also have a generic way to get the min
> perfromance level.
> 
> The dependency between energy model and cpufreq governor and its headroom
> policy doesn't exist anymore.
> 
> eenv_pd_max_util asks schedutil for the targeted performance after
> applying the impact of the waking task.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  include/linux/energy_model.h     |  1 -
>  kernel/sched/core.c              | 85 ++++++++++++--------------------
>  kernel/sched/cpufreq_schedutil.c | 43 ++++++++++++----
>  kernel/sched/fair.c              | 22 +++++++--
>  kernel/sched/sched.h             | 24 +++------
>  5 files changed, 91 insertions(+), 84 deletions(-)
> 
> diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
> index b9caa01dfac4..adec808b371a 100644
> --- a/include/linux/energy_model.h
> +++ b/include/linux/energy_model.h
> @@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
>  	scale_cpu = arch_scale_cpu_capacity(cpu);
>  	ps = &pd->table[pd->nr_perf_states - 1];
>  
> -	max_util = map_util_perf(max_util);
Even though effective_cpu_util() no longer includes the headroom, it is
being applied by sugov further down the line (sugov_effective_cpu_perf()).
Won't that bring back the original problem where the frequency selection
within the EM is not aligned with the one performed by sugov?
>  	max_util = min(max_util, allowed_cpu_cap);
>  	freq = map_util_freq(max_util, ps->frequency, scale_cpu);
>  
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index a3f9cd52eec5..78228abd1219 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7381,18 +7381,13 @@ int sched_core_idle_cpu(int cpu)
>   * required to meet deadlines.
>   */
>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> -				 enum cpu_util_type type,
> -				 struct task_struct *p)
> +				 unsigned long *min,
> +				 unsigned long *max)
>  {
> -	unsigned long dl_util, util, irq, max;
> +	unsigned long util, irq, scale;
>  	struct rq *rq = cpu_rq(cpu);
>  
> -	max = arch_scale_cpu_capacity(cpu);
> -
> -	if (!uclamp_is_used() &&
> -	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
> -		return max;
> -	}
> +	scale = arch_scale_cpu_capacity(cpu);
>  
>  	/*
>  	 * Early check to see if IRQ/steal time saturates the CPU, can be
> @@ -7400,45 +7395,36 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 * update_irq_load_avg().
>  	 */
>  	irq = cpu_util_irq(rq);
> -	if (unlikely(irq >= max))
> -		return max;
> +	if (unlikely(irq >= scale)) {
> +		if (min)
> +			*min = scale;
> +		if (max)
> +			*max = scale;
> +		return scale;
> +	}
> +
> +	/* The minimum utilization returns the highest level between:
> +	 * - the computed DL bandwidth needed with the irq pressure which
> +	 *   steals time to the deadline task.
> +	 * - The minimum bandwidth requirement for CFS.
> +	 */
> +	if (min)
> +		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
>  
>  	/*
>  	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
>  	 * CFS tasks and we use the same metric to track the effective
>  	 * utilization (PELT windows are synchronized) we can directly add them
>  	 * to obtain the CPU's actual utilization.
> -	 *
> -	 * CFS and RT utilization can be boosted or capped, depending on
> -	 * utilization clamp constraints requested by currently RUNNABLE
> -	 * tasks.
> -	 * When there are no CFS RUNNABLE tasks, clamps are released and
> -	 * frequency will be gracefully reduced with the utilization decay.
>  	 */
>  	util = util_cfs + cpu_util_rt(rq);
> -	if (type == FREQUENCY_UTIL)
> -		util = uclamp_rq_util_with(rq, util, p);
> -
> -	dl_util = cpu_util_dl(rq);
> -
> -	/*
> -	 * For frequency selection we do not make cpu_util_dl() a permanent part
> -	 * of this sum because we want to use cpu_bw_dl() later on, but we need
> -	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> -	 * that we select f_max when there is no idle time.
> -	 *
> -	 * NOTE: numerical errors or stop class might cause us to not quite hit
> -	 * saturation when we should -- something for later.
> -	 */
> -	if (util + dl_util >= max)
> -		return max;
> +	util += cpu_util_dl(rq);
>  
> -	/*
> -	 * OTOH, for energy computation we need the estimated running time, so
> -	 * include util_dl and ignore dl_bw.
> -	 */
> -	if (type == ENERGY_UTIL)
> -		util += dl_util;
> +	if (util >= scale) {
> +		if (max)
> +			*max = scale;
> +		return scale;
> +	}
>  
>  	/*
>  	 * There is still idle time; further improve the number by using the
> @@ -7449,28 +7435,21 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 *   U' = irq + --------- * U
>  	 *                 max
>  	 */
> -	util = scale_irq_capacity(util, irq, max);
> +	util = scale_irq_capacity(util, irq, scale);
>  	util += irq;
>  
> -	/*
> -	 * Bandwidth required by DEADLINE must always be granted while, for
> -	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> -	 * to gracefully reduce the frequency when no tasks show up for longer
> -	 * periods of time.
> -	 *
> -	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> -	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
> -	 * an interface. So, we only do the latter for now.
> +	/* The maximum hint is a soft bandwidth requirement which can be lower
> +	 * than the actual utilization because of max uclamp requirments
>  	 */
> -	if (type == FREQUENCY_UTIL)
> -		util += cpu_bw_dl(rq);
> +	if (max)
> +		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
>  
> -	return min(max, util);
> +	return min(scale, util);
>  }
>  
>  unsigned long sched_cpu_util(int cpu)
>  {
> -	return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
> +	return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
>  }
>  #endif /* CONFIG_SMP */
>  
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 458d359f5991..8cb323522b90 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -47,7 +47,7 @@ struct sugov_cpu {
>  	u64			last_update;
>  
>  	unsigned long		util;
> -	unsigned long		bw_dl;
> +	unsigned long		bw_min;
>  
>  	/* The field below is for single-CPU policies only: */
>  #ifdef CONFIG_NO_HZ_COMMON
> @@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  	unsigned int freq = arch_scale_freq_invariant() ?
>  				policy->cpuinfo.max_freq : policy->cur;
>  
> -	util = map_util_perf(util);
>  	freq = map_util_freq(util, freq, max);
>  
>  	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
> @@ -153,14 +152,38 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  	return cpufreq_driver_resolve_freq(policy, freq);
>  }
>  
> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max)
> +{
> +	unsigned long target;
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if (rt_rq_is_runnable(&rq->rt))
> +		return max;
> +
> +	/* Provide at least enough capacity for DL + irq */
> +	target =  min;
Minor, but it might be worth mentioning UCLAMP_MIN here, otherwise the
return statement might get a bit confusing.
> +
> +	actual = map_util_perf(actual);
> +	/* Actually we don't need to target the max performance */
> +	if (actual < max)
> +		max = actual;
> +
> +	/*
> +	 * Ensure at least minimum performance while providing more compute
> +	 * capacity when possible.
> +	 */
> +	return max(target, max);
> +}
> +
>  static void sugov_get_util(struct sugov_cpu *sg_cpu)
>  {
> -	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> -	struct rq *rq = cpu_rq(sg_cpu->cpu);
> +	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
>  
> -	sg_cpu->bw_dl = cpu_bw_dl(rq);
> -	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> -					  FREQUENCY_UTIL, NULL);
> +	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> +	sg_cpu->bw_min = map_util_perf(min);
> +	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
>  }
>  
>  /**
> @@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
>   */
>  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
>  {
> -	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> +	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
>  		sg_cpu->sg_policy->limits_changed = true;
Because bw_min now includes the headroom, this might not catch potential
changes in the DL utilization; not sure though whether that could be an issue.
>  }
>  
> @@ -407,8 +430,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
>  	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
>  		sg_cpu->util = prev_util;
>  
> -	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> -				   map_util_perf(sg_cpu->util), max_cap);
> +	cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
> +				   sg_cpu->util, max_cap);
>  
>  	sg_cpu->sg_policy->last_freq_update_time = time;
>  }
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 922905194c0c..d4f7b2f49c44 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7628,7 +7628,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
>  	for_each_cpu(cpu, pd_cpus) {
>  		unsigned long util = cpu_util(cpu, p, -1, 0);
>  
> -		busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
> +		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
>  	}
>  
>  	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
> @@ -7651,7 +7651,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>  	for_each_cpu(cpu, pd_cpus) {
>  		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
>  		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
> -		unsigned long eff_util;
> +		unsigned long eff_util, min, max;
>  
>  		/*
>  		 * Performance domain frequency: utilization clamping
> @@ -7660,7 +7660,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>  		 * NOTE: in case RT tasks are running, by default the
>  		 * FREQUENCY_UTIL's utilization can be max OPP.
>  		 */
> -		eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
> +		eff_util = effective_cpu_util(cpu, util, &min, &max);
> +
> +		/* Task's uclamp can modify min and max value */
> +		if (tsk && uclamp_is_used()) {
> +			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
> +
> +			/*
> +			 * If there is no active max uclamp constraint,
> +			 * directly use task's one otherwise keep max
> +			 */
> +			if (uclamp_rq_is_idle(cpu_rq(cpu)))
> +				max = uclamp_eff_value(p, UCLAMP_MAX);
> +			else
> +				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
> +		}
> +
> +		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
This will include the headroom, so won't it inflate the util here?

---
BR
B.
>  		max_util = max(max_util, eff_util);
>  	}
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 65cad0e5729e..3873b4de7cfa 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2962,24 +2962,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
>  #endif
>  
>  #ifdef CONFIG_SMP
> -/**
> - * enum cpu_util_type - CPU utilization type
> - * @FREQUENCY_UTIL:	Utilization used to select frequency
> - * @ENERGY_UTIL:	Utilization used during energy calculation
> - *
> - * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
> - * need to be aggregated differently depending on the usage made of them. This
> - * enum is used within effective_cpu_util() to differentiate the types of
> - * utilization expected by the callers, and adjust the aggregation accordingly.
> - */
> -enum cpu_util_type {
> -	FREQUENCY_UTIL,
> -	ENERGY_UTIL,
> -};
> -
>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> -				 enum cpu_util_type type,
> -				 struct task_struct *p);
> +				 unsigned long *min,
> +				 unsigned long *max);
> +
> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max);
> +
>  
>  /*
>   * Verify the fitness of task @p to run on @cpu taking into account the
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 2/2] sched/schedutil: rework iowait boost
  2023-10-13 15:14 ` [PATCH 2/2] sched/schedutil: rework iowait boost Vincent Guittot
@ 2023-10-18  7:39   ` Beata Michalska
  2023-10-18 13:26     ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Beata Michalska @ 2023-10-18  7:39 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

On Fri, Oct 13, 2023 at 05:14:50PM +0200, Vincent Guittot wrote:
> Use the max value that has already been computed inside sugov_get_util()
> to cap the iowait boost and remove dependency with uclamp_rq_util_with()
> which is not used anymore.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/cpufreq_schedutil.c | 29 ++++++++-------
>  kernel/sched/sched.h             | 60 --------------------------------
>  2 files changed, 14 insertions(+), 75 deletions(-)
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 8cb323522b90..820612867769 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -177,11 +177,12 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
>  	return max(target, max);
>  }
>  
> -static void sugov_get_util(struct sugov_cpu *sg_cpu)
> +static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
>  {
>  	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
>  
>  	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> +	util = max(util, boost);
sugov_get_util() could actually call sugov_iowait_apply() here instead, to
get a centralized point for getting and applying the boost. Also, the
sugov_iowait_apply() naming no longer reflects the functionality, so maybe
rename it to something like sugov_iowait_get()?
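
Something along these lines (rough sketch only, names assumed):

	static void sugov_get_util(struct sugov_cpu *sg_cpu, u64 time,
				   unsigned long max_cap)
	{
		unsigned long boost = sugov_iowait_get(sg_cpu, time, max_cap);
		unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);

		util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
		util = max(util, boost);
		sg_cpu->bw_min = map_util_perf(min);
		sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
	}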

---
BR
B.
>  	sg_cpu->bw_min = map_util_perf(min);
>  	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
>  }
> @@ -274,18 +275,16 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>   * This mechanism is designed to boost high frequently IO waiting tasks, while
>   * being more conservative on tasks which does sporadic IO operations.
>   */
> -static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> +static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  			       unsigned long max_cap)
>  {
> -	unsigned long boost;
> -
>  	/* No boost currently required */
>  	if (!sg_cpu->iowait_boost)
> -		return;
> +		return 0;
>  
>  	/* Reset boost if the CPU appears to have been idle enough */
>  	if (sugov_iowait_reset(sg_cpu, time, false))
> -		return;
> +		return 0;
>  
>  	if (!sg_cpu->iowait_boost_pending) {
>  		/*
> @@ -294,7 +293,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  		sg_cpu->iowait_boost >>= 1;
>  		if (sg_cpu->iowait_boost < IOWAIT_BOOST_MIN) {
>  			sg_cpu->iowait_boost = 0;
> -			return;
> +			return 0;
>  		}
>  	}
>  
> @@ -304,10 +303,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  	 * sg_cpu->util is already in capacity scale; convert iowait_boost
>  	 * into the same scale so we can compare.
>  	 */
> -	boost = (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
> -	boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
> -	if (sg_cpu->util < boost)
> -		sg_cpu->util = boost;
> +	return (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
>  }
>  
>  #ifdef CONFIG_NO_HZ_COMMON
> @@ -337,6 +333,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
>  					      u64 time, unsigned long max_cap,
>  					      unsigned int flags)
>  {
> +	unsigned long boost;
> +
>  	sugov_iowait_boost(sg_cpu, time, flags);
>  	sg_cpu->last_update = time;
>  
> @@ -345,8 +343,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
>  	if (!sugov_should_update_freq(sg_cpu->sg_policy, time))
>  		return false;
>  
> -	sugov_get_util(sg_cpu);
> -	sugov_iowait_apply(sg_cpu, time, max_cap);
> +	boost = sugov_iowait_apply(sg_cpu, time, max_cap);
> +	sugov_get_util(sg_cpu, boost);
>  
>  	return true;
>  }
> @@ -447,9 +445,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
>  
>  	for_each_cpu(j, policy->cpus) {
>  		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
> +		unsigned long boost;
>  
> -		sugov_get_util(j_sg_cpu);
> -		sugov_iowait_apply(j_sg_cpu, time, max_cap);
> +		boost = sugov_iowait_apply(j_sg_cpu, time, max_cap);
> +		sugov_get_util(j_sg_cpu, boost);
>  
>  		util = max(j_sg_cpu->util, util);
>  	}
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3873b4de7cfa..b181edaf4d41 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3026,59 +3026,6 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
>  	return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
>  }
>  
> -/**
> - * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
> - * @rq:		The rq to clamp against. Must not be NULL.
> - * @util:	The util value to clamp.
> - * @p:		The task to clamp against. Can be NULL if you want to clamp
> - *		against @rq only.
> - *
> - * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
> - *
> - * If sched_uclamp_used static key is disabled, then just return the util
> - * without any clamping since uclamp aggregation at the rq level in the fast
> - * path is disabled, rendering this operation a NOP.
> - *
> - * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
> - * will return the correct effective uclamp value of the task even if the
> - * static key is disabled.
> - */
> -static __always_inline
> -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> -				  struct task_struct *p)
> -{
> -	unsigned long min_util = 0;
> -	unsigned long max_util = 0;
> -
> -	if (!static_branch_likely(&sched_uclamp_used))
> -		return util;
> -
> -	if (p) {
> -		min_util = uclamp_eff_value(p, UCLAMP_MIN);
> -		max_util = uclamp_eff_value(p, UCLAMP_MAX);
> -
> -		/*
> -		 * Ignore last runnable task's max clamp, as this task will
> -		 * reset it. Similarly, no need to read the rq's min clamp.
> -		 */
> -		if (uclamp_rq_is_idle(rq))
> -			goto out;
> -	}
> -
> -	min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
> -	max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
> -out:
> -	/*
> -	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
> -	 * RUNNABLE tasks with _different_ clamps, we can end up with an
> -	 * inversion. Fix it now when the clamps are applied.
> -	 */
> -	if (unlikely(min_util >= max_util))
> -		return min_util;
> -
> -	return clamp(util, min_util, max_util);
> -}
> -
>  /* Is the rq being capped/throttled by uclamp_max? */
>  static inline bool uclamp_rq_is_capped(struct rq *rq)
>  {
> @@ -3116,13 +3063,6 @@ static inline unsigned long uclamp_eff_value(struct task_struct *p,
>  	return SCHED_CAPACITY_SCALE;
>  }
>  
> -static inline
> -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> -				  struct task_struct *p)
> -{
> -	return util;
> -}
> -
>  static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
>  
>  static inline bool uclamp_is_used(void)
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-18  7:04   ` Beata Michalska
@ 2023-10-18 12:20     ` Lukasz Luba
  2023-10-18 13:25     ` Vincent Guittot
  1 sibling, 0 replies; 15+ messages in thread
From: Lukasz Luba @ 2023-10-18 12:20 UTC (permalink / raw)
  To: Beata Michalska, Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm



On 10/18/23 08:04, Beata Michalska wrote:
> Hi Vincent,
> 
> On Fri, Oct 13, 2023 at 05:14:49PM +0200, Vincent Guittot wrote:
>> The current method to take into account uclamp hints when estimating the
>> target frequency can end into situation where the selected target
>> frequency is finally higher than uclamp hints whereas there are no real
>> needs. Such cases mainly happen because we are currently mixing the
>> traditional scheduler utilization signal with the uclamp performance
>> hints. By adding these 2 metrics, we loose an important information when
>> it comes to select the target frequency and we have to make some
>> assumptions which can't fit all cases.

[snip]

>>
>> diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
>> index b9caa01dfac4..adec808b371a 100644
>> --- a/include/linux/energy_model.h
>> +++ b/include/linux/energy_model.h
>> @@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
>>   	scale_cpu = arch_scale_cpu_capacity(cpu);
>>   	ps = &pd->table[pd->nr_perf_states - 1];
>>   
>> -	max_util = map_util_perf(max_util);
> Even though the effective_cpu_util does no longer include the headroom, it is
> being applied by sugov further down the line (sugov_effective_cpu_perf).
> Won't that bring back the original problem when freq selection within EM is
> not align with the one performed by sugov ?

It should be OK here to remove the above line. The map_util_perf()
is done before this em_cpu_energy() call, in the new code in


[snip]

>>   }
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 922905194c0c..d4f7b2f49c44 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7628,7 +7628,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
>>   	for_each_cpu(cpu, pd_cpus) {
>>   		unsigned long util = cpu_util(cpu, p, -1, 0);
>>   
>> -		busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
>> +		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
>>   	}
>>   
>>   	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
>> @@ -7651,7 +7651,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>>   	for_each_cpu(cpu, pd_cpus) {
>>   		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
>>   		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
>> -		unsigned long eff_util;
>> +		unsigned long eff_util, min, max;
>>   
>>   		/*
>>   		 * Performance domain frequency: utilization clamping
>> @@ -7660,7 +7660,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>>   		 * NOTE: in case RT tasks are running, by default the
>>   		 * FREQUENCY_UTIL's utilization can be max OPP.
>>   		 */
>> -		eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
>> +		eff_util = effective_cpu_util(cpu, util, &min, &max);
>> +
>> +		/* Task's uclamp can modify min and max value */
>> +		if (tsk && uclamp_is_used()) {
>> +			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
>> +
>> +			/*
>> +			 * If there is no active max uclamp constraint,
>> +			 * directly use task's one otherwise keep max
>> +			 */
>> +			if (uclamp_rq_is_idle(cpu_rq(cpu)))
>> +				max = uclamp_eff_value(p, UCLAMP_MAX);
>> +			else
>> +				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
>> +		}
>> +
>> +		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
> This will include the headroom so won't it inflate the util here ?

Yes, that's the goal. It will inflate when needed. Currently, the
problem is that we always inflate (blindly) in em_cpu_energy().
We don't know whether the incoming util value comes from uclamp_max
and whether the frequency should not be higher because something wants
to clamp it.

The other question would be:
What if the PD has 4 CPUs, the max util found is 500 and comes from
uclamp_max, but there is another util of 490 on some other CPU?

That CPU is allowed and can have the +20% freq in the voting.
In the current design we don't punish the whole domain in such a
scenario.
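
With the reworked code that would roughly give (hypothetical numbers,
assuming map_util_perf() adds util/4 of headroom):

	CPU0: actual 500, uclamp_max 500 -> map_util_perf(500) = 625, capped to 500
	CPU1: actual 490, no uclamp_max  -> map_util_perf(490) = 612

	max_util for the PD = 612, so the unclamped CPU keeps its headroom.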

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-18  7:04   ` Beata Michalska
  2023-10-18 12:20     ` Lukasz Luba
@ 2023-10-18 13:25     ` Vincent Guittot
  1 sibling, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-18 13:25 UTC (permalink / raw)
  To: Beata Michalska
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

On Wed, 18 Oct 2023 at 09:05, Beata Michalska <beata.michalska@arm.com> wrote:
>
> Hi Vincent,
>
> On Fri, Oct 13, 2023 at 05:14:49PM +0200, Vincent Guittot wrote:
> > The current method to take into account uclamp hints when estimating the
> > target frequency can end into situation where the selected target
> > frequency is finally higher than uclamp hints whereas there are no real
> > needs. Such cases mainly happen because we are currently mixing the
> > traditional scheduler utilization signal with the uclamp performance
> > hints. By adding these 2 metrics, we loose an important information when
> > it comes to select the target frequency and we have to make some
> > assumptions which can't fit all cases.
> >
> > Rework the interface between the scheduler and schedutil governor in order
> > to propagate all information down to the cpufreq governor.
> /governor/driver ?

No, only the schedutil governor uses the interface.

> >
> > effective_cpu_util() interface changes and now returns the actual
> > utilization of the CPU with 2 optional inputs:
> > - The minimum performance for this CPU; typically the capacity to handle
> >   the deadline task and the interrupt pressure. But also uclamp_min
> >   request when available.
> > - The maximum targeting performance for this CPU which reflects the
> >   maximum level that we would like to not exceed. By default it will be
> >   the CPU capacity but can be reduced because of some performance hints
> >   set with uclamp. The value can be lower than actual utilization and/or
> >   min performance level.
> >
> > A new sugov_effective_cpu_perf() interface is also available to compute
> > the final performance level that is targeted for the CPU after applying
> > some cpufreq headroom and taking into account all inputs.
> >
> > With these 2 functions, schedutil is now able to decide when it must go
> > above uclamp hints. It now also have a generic way to get the min
> > perfromance level.
> >
> > The dependency between energy model and cpufreq governor and its headroom
> > policy doesn't exist anymore.
> >
> > eenv_pd_max_util asks schedutil for the targeted performance after
> > applying the impact of the waking task.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  include/linux/energy_model.h     |  1 -
> >  kernel/sched/core.c              | 85 ++++++++++++--------------------
> >  kernel/sched/cpufreq_schedutil.c | 43 ++++++++++++----
> >  kernel/sched/fair.c              | 22 +++++++--
> >  kernel/sched/sched.h             | 24 +++------
> >  5 files changed, 91 insertions(+), 84 deletions(-)
> >
> > diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
> > index b9caa01dfac4..adec808b371a 100644
> > --- a/include/linux/energy_model.h
> > +++ b/include/linux/energy_model.h
> > @@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
> >       scale_cpu = arch_scale_cpu_capacity(cpu);
> >       ps = &pd->table[pd->nr_perf_states - 1];
> >
> > -     max_util = map_util_perf(max_util);
> Even though the effective_cpu_util does no longer include the headroom, it is
> being applied by sugov further down the line (sugov_effective_cpu_perf).
> Won't that bring back the original problem when freq selection within EM is
> not align with the one performed by sugov ?

They are both aligned. Here, the goal is to move the headroom
computation and decision inside the schedutil function instead of having
to apply the same headroom policy everywhere in the scheduler.
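
I.e. the headroom is now applied in a single place before em_cpu_energy()
sees the value (rough call flow, not the exact code):

	max_util = eenv_pd_max_util(...);	/* includes the sugov_effective_cpu_perf() headroom */
	energy = em_cpu_energy(pd, max_util, busy_time, ...);	/* no longer re-applies map_util_perf() */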

> >       max_util = min(max_util, allowed_cpu_cap);
> >       freq = map_util_freq(max_util, ps->frequency, scale_cpu);
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index a3f9cd52eec5..78228abd1219 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7381,18 +7381,13 @@ int sched_core_idle_cpu(int cpu)
> >   * required to meet deadlines.
> >   */
> >  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> > -                              enum cpu_util_type type,
> > -                              struct task_struct *p)
> > +                              unsigned long *min,
> > +                              unsigned long *max)
> >  {
> > -     unsigned long dl_util, util, irq, max;
> > +     unsigned long util, irq, scale;
> >       struct rq *rq = cpu_rq(cpu);
> >
> > -     max = arch_scale_cpu_capacity(cpu);
> > -
> > -     if (!uclamp_is_used() &&
> > -         type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
> > -             return max;
> > -     }
> > +     scale = arch_scale_cpu_capacity(cpu);
> >
> >       /*
> >        * Early check to see if IRQ/steal time saturates the CPU, can be
> > @@ -7400,45 +7395,36 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        * update_irq_load_avg().
> >        */
> >       irq = cpu_util_irq(rq);
> > -     if (unlikely(irq >= max))
> > -             return max;
> > +     if (unlikely(irq >= scale)) {
> > +             if (min)
> > +                     *min = scale;
> > +             if (max)
> > +                     *max = scale;
> > +             return scale;
> > +     }
> > +
> > +     /* The minimum utilization returns the highest level between:
> > +      * - the computed DL bandwidth needed with the irq pressure which
> > +      *   steals time to the deadline task.
> > +      * - The minimum bandwidth requirement for CFS.
> > +      */
> > +     if (min)
> > +             *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> >
> >       /*
> >        * Because the time spend on RT/DL tasks is visible as 'lost' time to
> >        * CFS tasks and we use the same metric to track the effective
> >        * utilization (PELT windows are synchronized) we can directly add them
> >        * to obtain the CPU's actual utilization.
> > -      *
> > -      * CFS and RT utilization can be boosted or capped, depending on
> > -      * utilization clamp constraints requested by currently RUNNABLE
> > -      * tasks.
> > -      * When there are no CFS RUNNABLE tasks, clamps are released and
> > -      * frequency will be gracefully reduced with the utilization decay.
> >        */
> >       util = util_cfs + cpu_util_rt(rq);
> > -     if (type == FREQUENCY_UTIL)
> > -             util = uclamp_rq_util_with(rq, util, p);
> > -
> > -     dl_util = cpu_util_dl(rq);
> > -
> > -     /*
> > -      * For frequency selection we do not make cpu_util_dl() a permanent part
> > -      * of this sum because we want to use cpu_bw_dl() later on, but we need
> > -      * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> > -      * that we select f_max when there is no idle time.
> > -      *
> > -      * NOTE: numerical errors or stop class might cause us to not quite hit
> > -      * saturation when we should -- something for later.
> > -      */
> > -     if (util + dl_util >= max)
> > -             return max;
> > +     util += cpu_util_dl(rq);
> >
> > -     /*
> > -      * OTOH, for energy computation we need the estimated running time, so
> > -      * include util_dl and ignore dl_bw.
> > -      */
> > -     if (type == ENERGY_UTIL)
> > -             util += dl_util;
> > +     if (util >= scale) {
> > +             if (max)
> > +                     *max = scale;
> > +             return scale;
> > +     }
> >
> >       /*
> >        * There is still idle time; further improve the number by using the
> > @@ -7449,28 +7435,21 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        *   U' = irq + --------- * U
> >        *                 max
> >        */
> > -     util = scale_irq_capacity(util, irq, max);
> > +     util = scale_irq_capacity(util, irq, scale);
> >       util += irq;
> >
> > -     /*
> > -      * Bandwidth required by DEADLINE must always be granted while, for
> > -      * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> > -      * to gracefully reduce the frequency when no tasks show up for longer
> > -      * periods of time.
> > -      *
> > -      * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> > -      * bw_dl as requested freq. However, cpufreq is not yet ready for such
> > -      * an interface. So, we only do the latter for now.
> > +     /* The maximum hint is a soft bandwidth requirement which can be lower
> > +      * than the actual utilization because of max uclamp requirments
> >        */
> > -     if (type == FREQUENCY_UTIL)
> > -             util += cpu_bw_dl(rq);
> > +     if (max)
> > +             *max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
> >
> > -     return min(max, util);
> > +     return min(scale, util);
> >  }
> >
> >  unsigned long sched_cpu_util(int cpu)
> >  {
> > -     return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
> > +     return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
> >  }
> >  #endif /* CONFIG_SMP */
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 458d359f5991..8cb323522b90 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -47,7 +47,7 @@ struct sugov_cpu {
> >       u64                     last_update;
> >
> >       unsigned long           util;
> > -     unsigned long           bw_dl;
> > +     unsigned long           bw_min;
> >
> >       /* The field below is for single-CPU policies only: */
> >  #ifdef CONFIG_NO_HZ_COMMON
> > @@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> >       unsigned int freq = arch_scale_freq_invariant() ?
> >                               policy->cpuinfo.max_freq : policy->cur;
> >
> > -     util = map_util_perf(util);
> >       freq = map_util_freq(util, freq, max);
> >
> >       if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
> > @@ -153,14 +152,38 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> >       return cpufreq_driver_resolve_freq(policy, freq);
> >  }
> >
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max)
> > +{
> > +     unsigned long target;
> > +     struct rq *rq = cpu_rq(cpu);
> > +
> > +     if (rt_rq_is_runnable(&rq->rt))
> > +             return max;
> > +
> > +     /* Provide at least enough capacity for DL + irq */
> > +     target =  min;
> Minor, but it might be worth mentioning UCLAMP_MIN here, otherwise the return
> statement might get a bit confusing.

Yes, I can add it.

> > +
> > +     actual = map_util_perf(actual);
> > +     /* Actually we don't need to target the max performance */
> > +     if (actual < max)
> > +             max = actual;
> > +
> > +     /*
> > +      * Ensure at least minimum performance while providing more compute
> > +      * capacity when possible.
> > +      */
> > +     return max(target, max);
> > +}
> > +
> >  static void sugov_get_util(struct sugov_cpu *sg_cpu)
> >  {
> > -     unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> > -     struct rq *rq = cpu_rq(sg_cpu->cpu);
> > +     unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> >
> > -     sg_cpu->bw_dl = cpu_bw_dl(rq);
> > -     sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> > -                                       FREQUENCY_UTIL, NULL);
> > +     util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > +     sg_cpu->bw_min = map_util_perf(min);
> > +     sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
> >  }
> >
> >  /**
> > @@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
> >   */
> >  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
> >  {
> > -     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> > +     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
> >               sg_cpu->sg_policy->limits_changed = true;
> Because bw_min now includes the headroom, this might not catch potential
> changes in DL util; I'm not sure though whether that could be an issue.

This is OK, because the min is still higher than cpu_bw_dl().

> >  }
> >
> > @@ -407,8 +430,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
> >           sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
> >               sg_cpu->util = prev_util;
> >
> > -     cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> > -                                map_util_perf(sg_cpu->util), max_cap);
> > +     cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
> > +                                sg_cpu->util, max_cap);
> >
> >       sg_cpu->sg_policy->last_freq_update_time = time;
> >  }
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 922905194c0c..d4f7b2f49c44 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7628,7 +7628,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
> >       for_each_cpu(cpu, pd_cpus) {
> >               unsigned long util = cpu_util(cpu, p, -1, 0);
> >
> > -             busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
> > +             busy_time += effective_cpu_util(cpu, util, NULL, NULL);
> >       }
> >
> >       eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
> > @@ -7651,7 +7651,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
> >       for_each_cpu(cpu, pd_cpus) {
> >               struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
> >               unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
> > -             unsigned long eff_util;
> > +             unsigned long eff_util, min, max;
> >
> >               /*
> >                * Performance domain frequency: utilization clamping
> > @@ -7660,7 +7660,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
> >                * NOTE: in case RT tasks are running, by default the
> >                * FREQUENCY_UTIL's utilization can be max OPP.
> >                */
> > -             eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
> > +             eff_util = effective_cpu_util(cpu, util, &min, &max);
> > +
> > +             /* Task's uclamp can modify min and max value */
> > +             if (tsk && uclamp_is_used()) {
> > +                     min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
> > +
> > +                     /*
> > +                      * If there is no active max uclamp constraint,
> > +                      * directly use task's one otherwise keep max
> > +                      */
> > +                     if (uclamp_rq_is_idle(cpu_rq(cpu)))
> > +                             max = uclamp_eff_value(p, UCLAMP_MAX);
> > +                     else
> > +                             max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
> > +             }
> > +
> > +             eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
> This will include the headroom, so won't it inflate the util here?

If you look at sugov_effective_cpu_perf(), the headroom is no longer applied
to everything, and in particular not to the max.
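
As a worked example (made-up numbers, assuming map_util_perf() adds the
usual ~25% headroom, i.e. util + util/4):

	actual = 800, min = 0, max = 512 (uclamp_max)
	actual = map_util_perf(800) = 1000
	1000 < max ? no, so max stays 512
	return max(min, max) = max(0, 512) = 512

So the headroom only inflates the actual utilization, and the result is
still capped by the uclamp max hint.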


>
> ---
> BR
> B.
> >               max_util = max(max_util, eff_util);
> >       }
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 65cad0e5729e..3873b4de7cfa 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2962,24 +2962,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
> >  #endif
> >
> >  #ifdef CONFIG_SMP
> > -/**
> > - * enum cpu_util_type - CPU utilization type
> > - * @FREQUENCY_UTIL:  Utilization used to select frequency
> > - * @ENERGY_UTIL:     Utilization used during energy calculation
> > - *
> > - * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
> > - * need to be aggregated differently depending on the usage made of them. This
> > - * enum is used within effective_cpu_util() to differentiate the types of
> > - * utilization expected by the callers, and adjust the aggregation accordingly.
> > - */
> > -enum cpu_util_type {
> > -     FREQUENCY_UTIL,
> > -     ENERGY_UTIL,
> > -};
> > -
> >  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> > -                              enum cpu_util_type type,
> > -                              struct task_struct *p);
> > +                              unsigned long *min,
> > +                              unsigned long *max);
> > +
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max);
> > +
> >
> >  /*
> >   * Verify the fitness of task @p to run on @cpu taking into account the
> > --
> > 2.34.1
> >


* Re: [PATCH 2/2] sched/schedutil: rework iowait boost
  2023-10-18  7:39   ` Beata Michalska
@ 2023-10-18 13:26     ` Vincent Guittot
  0 siblings, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-18 13:26 UTC (permalink / raw)
  To: Beata Michalska
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

On Wed, 18 Oct 2023 at 09:40, Beata Michalska <beata.michalska@arm.com> wrote:
>
> On Fri, Oct 13, 2023 at 05:14:50PM +0200, Vincent Guittot wrote:
> > Use the max value that has already been computed inside sugov_get_util()
> > to cap the iowait boost and remove dependency with uclamp_rq_util_with()
> > which is not used anymore.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 29 ++++++++-------
> >  kernel/sched/sched.h             | 60 --------------------------------
> >  2 files changed, 14 insertions(+), 75 deletions(-)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 8cb323522b90..820612867769 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -177,11 +177,12 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> >       return max(target, max);
> >  }
> >
> > -static void sugov_get_util(struct sugov_cpu *sg_cpu)
> > +static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
> >  {
> >       unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> >
> >       util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > +     util = max(util, boost);
> sugov_get_util could actually call sugov_iowait_apply here instead, to get a
> centralized point of getting and applying the boost. Also sugov_iowait_apply

I didn't want to add the time parameter to sugov_get_util().

> naming no longer reflects the functionality, so maybe rename it to something
> like sugov_iowait_get?

I usually do not rename functions unless really necessary, to keep the review
easier, as people are used to the current function name. But I'm fine with
renaming it if other maintainers agree.


>
> ---
> BR
> B.
> >       sg_cpu->bw_min = map_util_perf(min);
> >       sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
> >  }
> > @@ -274,18 +275,16 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> >   * This mechanism is designed to boost high frequently IO waiting tasks, while
> >   * being more conservative on tasks which does sporadic IO operations.
> >   */
> > -static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> > +static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> >                              unsigned long max_cap)
> >  {
> > -     unsigned long boost;
> > -
> >       /* No boost currently required */
> >       if (!sg_cpu->iowait_boost)
> > -             return;
> > +             return 0;
> >
> >       /* Reset boost if the CPU appears to have been idle enough */
> >       if (sugov_iowait_reset(sg_cpu, time, false))
> > -             return;
> > +             return 0;
> >
> >       if (!sg_cpu->iowait_boost_pending) {
> >               /*
> > @@ -294,7 +293,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> >               sg_cpu->iowait_boost >>= 1;
> >               if (sg_cpu->iowait_boost < IOWAIT_BOOST_MIN) {
> >                       sg_cpu->iowait_boost = 0;
> > -                     return;
> > +                     return 0;
> >               }
> >       }
> >
> > @@ -304,10 +303,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> >        * sg_cpu->util is already in capacity scale; convert iowait_boost
> >        * into the same scale so we can compare.
> >        */
> > -     boost = (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
> > -     boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
> > -     if (sg_cpu->util < boost)
> > -             sg_cpu->util = boost;
> > +     return (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
> >  }
> >
> >  #ifdef CONFIG_NO_HZ_COMMON
> > @@ -337,6 +333,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
> >                                             u64 time, unsigned long max_cap,
> >                                             unsigned int flags)
> >  {
> > +     unsigned long boost;
> > +
> >       sugov_iowait_boost(sg_cpu, time, flags);
> >       sg_cpu->last_update = time;
> >
> > @@ -345,8 +343,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
> >       if (!sugov_should_update_freq(sg_cpu->sg_policy, time))
> >               return false;
> >
> > -     sugov_get_util(sg_cpu);
> > -     sugov_iowait_apply(sg_cpu, time, max_cap);
> > +     boost = sugov_iowait_apply(sg_cpu, time, max_cap);
> > +     sugov_get_util(sg_cpu, boost);
> >
> >       return true;
> >  }
> > @@ -447,9 +445,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
> >
> >       for_each_cpu(j, policy->cpus) {
> >               struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
> > +             unsigned long boost;
> >
> > -             sugov_get_util(j_sg_cpu);
> > -             sugov_iowait_apply(j_sg_cpu, time, max_cap);
> > +             boost = sugov_iowait_apply(j_sg_cpu, time, max_cap);
> > +             sugov_get_util(j_sg_cpu, boost);
> >
> >               util = max(j_sg_cpu->util, util);
> >       }
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 3873b4de7cfa..b181edaf4d41 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -3026,59 +3026,6 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
> >       return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
> >  }
> >
> > -/**
> > - * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
> > - * @rq:              The rq to clamp against. Must not be NULL.
> > - * @util:    The util value to clamp.
> > - * @p:               The task to clamp against. Can be NULL if you want to clamp
> > - *           against @rq only.
> > - *
> > - * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
> > - *
> > - * If sched_uclamp_used static key is disabled, then just return the util
> > - * without any clamping since uclamp aggregation at the rq level in the fast
> > - * path is disabled, rendering this operation a NOP.
> > - *
> > - * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
> > - * will return the correct effective uclamp value of the task even if the
> > - * static key is disabled.
> > - */
> > -static __always_inline
> > -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> > -                               struct task_struct *p)
> > -{
> > -     unsigned long min_util = 0;
> > -     unsigned long max_util = 0;
> > -
> > -     if (!static_branch_likely(&sched_uclamp_used))
> > -             return util;
> > -
> > -     if (p) {
> > -             min_util = uclamp_eff_value(p, UCLAMP_MIN);
> > -             max_util = uclamp_eff_value(p, UCLAMP_MAX);
> > -
> > -             /*
> > -              * Ignore last runnable task's max clamp, as this task will
> > -              * reset it. Similarly, no need to read the rq's min clamp.
> > -              */
> > -             if (uclamp_rq_is_idle(rq))
> > -                     goto out;
> > -     }
> > -
> > -     min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
> > -     max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
> > -out:
> > -     /*
> > -      * Since CPU's {min,max}_util clamps are MAX aggregated considering
> > -      * RUNNABLE tasks with _different_ clamps, we can end up with an
> > -      * inversion. Fix it now when the clamps are applied.
> > -      */
> > -     if (unlikely(min_util >= max_util))
> > -             return min_util;
> > -
> > -     return clamp(util, min_util, max_util);
> > -}
> > -
> >  /* Is the rq being capped/throttled by uclamp_max? */
> >  static inline bool uclamp_rq_is_capped(struct rq *rq)
> >  {
> > @@ -3116,13 +3063,6 @@ static inline unsigned long uclamp_eff_value(struct task_struct *p,
> >       return SCHED_CAPACITY_SCALE;
> >  }
> >
> > -static inline
> > -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> > -                               struct task_struct *p)
> > -{
> > -     return util;
> > -}
> > -
> >  static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
> >
> >  static inline bool uclamp_is_used(void)
> > --
> > 2.34.1
> >


* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
  2023-10-13 18:20   ` Ingo Molnar
  2023-10-18  7:04   ` Beata Michalska
@ 2023-10-20  9:48   ` Dietmar Eggemann
  2023-10-20 13:58     ` Vincent Guittot
  2 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2023-10-20  9:48 UTC (permalink / raw)
  To: Vincent Guittot, mingo, peterz, juri.lelli, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba

On 13/10/2023 17:14, Vincent Guittot wrote:
> The current method to take into account uclamp hints when estimating the
> target frequency can end into situation where the selected target
> frequency is finally higher than uclamp hints whereas there are no real
> needs. Such cases mainly happen because we are currently mixing the
> traditional scheduler utilization signal with the uclamp performance
> hints. By adding these 2 metrics, we loose an important information when
> it comes to select the target frequency and we have to make some
> assumptions which can't fit all cases.
> 
> Rework the interface between the scheduler and schedutil governor in order
> to propagate all information down to the cpufreq governor.

So we change from:

max(util -> uclamp, iowait_boost -> uclamp) -> head_room()

to:

util = max(util, iowait_boost) -> util =
                                  head_room(util)

_min = max(irq + cpu_bw_dl,
           uclamp_min)         ->                  -> max(_min, _max)

_max = min(scale, uclamp_max)  -> _max =
                                  min(util, _max)

> effective_cpu_util() interface changes and now returns the actual
> utilization of the CPU with 2 optional inputs:
> - The minimum performance for this CPU; typically the capacity to handle
>   the deadline task and the interrupt pressure. But also uclamp_min
>   request when available.
> - The maximum targeting performance for this CPU which reflects the
>   maximum level that we would like to not exceed. By default it will be
>   the CPU capacity but can be reduced because of some performance hints
>   set with uclamp. The value can be lower than actual utilization and/or
>   min performance level.
> 
> A new sugov_effective_cpu_perf() interface is also available to compute
> the final performance level that is targeted for the CPU after applying
> some cpufreq headroom and taking into account all inputs.
> 
> With these 2 functions, schedutil is now able to decide when it must go
> above uclamp hints. It now also have a generic way to get the min
> perfromance level.
> 
> The dependency between energy model and cpufreq governor and its headroom
> policy doesn't exist anymore.

But the dependency that both are doing the same thing still exists, right?

sugov_get_util() and eenv_pd_max_util() are calling the same functions:

  util = effective_cpu_util(cpu, util, &min, &max)

  /* ioboost, bw_min = head_room(min) resp. uclamp tsk handling */

  util = sugov_effective_cpu_perf(cpu, util, min, max)

[...]

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index a3f9cd52eec5..78228abd1219 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7381,18 +7381,13 @@ int sched_core_idle_cpu(int cpu)
>   * required to meet deadlines.
>   */
>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> -				 enum cpu_util_type type,
> -				 struct task_struct *p)
> +				 unsigned long *min,
> +				 unsigned long *max)

FREQUENCY_UTIL relates to *min != NULL and *max != NULL

ENERGY_UTIL relates to *min == NULL and *max == NULL

so both must be either NULL or !NULL.

Calling it with one equal to NULL and the other !NULL should be
undefined, right?

[...]

> @@ -7400,45 +7395,36 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 * update_irq_load_avg().
>  	 */
>  	irq = cpu_util_irq(rq);
> -	if (unlikely(irq >= max))
> -		return max;
> +	if (unlikely(irq >= scale)) {
> +		if (min)
> +			*min = scale;
> +		if (max)
> +			*max = scale;
> +		return scale;
> +	}
> +
> +	/* The minimum utilization returns the highest level between:
> +	 * - the computed DL bandwidth needed with the irq pressure which
> +	 *   steals time to the deadline task.
> +	 * - The minimum bandwidth requirement for CFS.

rq UCLAMP_MIN can also be driven by RT, not only CFS.

> +	 */
> +	if (min)
> +		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
>  
>  	/*
>  	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
>  	 * CFS tasks and we use the same metric to track the effective
>  	 * utilization (PELT windows are synchronized) we can directly add them
>  	 * to obtain the CPU's actual utilization.
> -	 *
> -	 * CFS and RT utilization can be boosted or capped, depending on
> -	 * utilization clamp constraints requested by currently RUNNABLE
> -	 * tasks.
> -	 * When there are no CFS RUNNABLE tasks, clamps are released and
> -	 * frequency will be gracefully reduced with the utilization decay.
>  	 */
>  	util = util_cfs + cpu_util_rt(rq);
> -	if (type == FREQUENCY_UTIL)
> -		util = uclamp_rq_util_with(rq, util, p);
> -
> -	dl_util = cpu_util_dl(rq);
> -
> -	/*
> -	 * For frequency selection we do not make cpu_util_dl() a permanent part
> -	 * of this sum because we want to use cpu_bw_dl() later on, but we need
> -	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> -	 * that we select f_max when there is no idle time.
> -	 *
> -	 * NOTE: numerical errors or stop class might cause us to not quite hit
> -	 * saturation when we should -- something for later.
> -	 */
> -	if (util + dl_util >= max)
> -		return max;
> +	util += cpu_util_dl(rq);
>  
> -	/*
> -	 * OTOH, for energy computation we need the estimated running time, so
> -	 * include util_dl and ignore dl_bw.
> -	 */
> -	if (type == ENERGY_UTIL)
> -		util += dl_util;
> +	if (util >= scale) {
> +		if (max)
> +			*max = scale;

But that means that uclamp_max cannot constrain a system in which
'util > uclamp_max'. I guess that's related to you saying uclamp_min is a
hard req and uclamp_max is a soft req. I don't think that's in sync with
the rest of the uclamp_max implementation.

> +		return scale;
> +	}
>  
>  	/*
>  	 * There is still idle time; further improve the number by using the
> @@ -7449,28 +7435,21 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 *   U' = irq + --------- * U
>  	 *                 max
>  	 */
> -	util = scale_irq_capacity(util, irq, max);
> +	util = scale_irq_capacity(util, irq, scale);
>  	util += irq;
>  
> -	/*
> -	 * Bandwidth required by DEADLINE must always be granted while, for
> -	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> -	 * to gracefully reduce the frequency when no tasks show up for longer
> -	 * periods of time.
> -	 *
> -	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> -	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
> -	 * an interface. So, we only do the latter for now.
> +	/* The maximum hint is a soft bandwidth requirement which can be lower
> +	 * than the actual utilization because of max uclamp requirments
>  	 */
> -	if (type == FREQUENCY_UTIL)
> -		util += cpu_bw_dl(rq);
> +	if (max)
> +		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
>  
> -	return min(max, util);
> +	return min(scale, util);
>  }

effective_cpu_util for FREQUENCY_UTIL (i.e. (*min != NULL && *max !=
NULL)) is slightly different.

  missing:

  if (!uclamp_is_used() && rt_rq_is_runnable(&rq->rt)
    return max

  probably moved into sugov_effective_cpu_perf() (which is only called
  for `FREQUENCY_UTIL`) ?


  old:

  irq_cap_scaling(util_cfs, util_rt) + irq + cpu_bw_dl()
                                             ^^^^^^^^^^^

  new:

  irq_cap_scaling(util_cfs + util_rt + util_dl) + irq
                                       ^^^^^^^

[...]

> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max)
> +{
> +	unsigned long target;
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if (rt_rq_is_runnable(&rq->rt))
> +		return max;
> +
> +	/* Provide at least enough capacity for DL + irq */
> +	target =  min;
> +
> +	actual = map_util_perf(actual);
> +	/* Actually we don't need to target the max performance */
> +	if (actual < max)
> +		max = actual;
> +
> +	/*
> +	 * Ensure at least minimum performance while providing more compute
> +	 * capacity when possible.
> +	 */
> +	return max(target, max);

Can you not just use:

       return max(min, max)

and skip target?

> +}
> +
>  static void sugov_get_util(struct sugov_cpu *sg_cpu)
>  {
> -	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> -	struct rq *rq = cpu_rq(sg_cpu->cpu);
> +	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
>  
> -	sg_cpu->bw_dl = cpu_bw_dl(rq);
> -	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> -					  FREQUENCY_UTIL, NULL);
> +	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> +	sg_cpu->bw_min = map_util_perf(min);
> +	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
>  }
>  
>  /**
> @@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
>   */
>  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
>  {
> -	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> +	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)

bw_min is more than DL right?

bw_min = head_room(max(irq + cpu_bw_dl, rq's UCLAMP_MIN)

[...]


* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-20  9:48   ` Dietmar Eggemann
@ 2023-10-20 13:58     ` Vincent Guittot
  2023-10-26  9:07       ` Dietmar Eggemann
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-10-20 13:58 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: mingo, peterz, juri.lelli, rostedt, bsegall, mgorman, bristot,
	vschneid, rafael, viresh.kumar, qyousef, linux-kernel, linux-pm,
	lukasz.luba

On Fri, 20 Oct 2023 at 11:48, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>
> On 13/10/2023 17:14, Vincent Guittot wrote:
> > The current method to take into account uclamp hints when estimating the
> > target frequency can end into situation where the selected target
> > frequency is finally higher than uclamp hints whereas there are no real
> > needs. Such cases mainly happen because we are currently mixing the
> > traditional scheduler utilization signal with the uclamp performance
> > hints. By adding these 2 metrics, we loose an important information when
> > it comes to select the target frequency and we have to make some
> > assumptions which can't fit all cases.
> >
> > Rework the interface between the scheduler and schedutil governor in order
> > to propagate all information down to the cpufreq governor.
>
> So we change from:
>
> max(util -> uclamp, iowait_boost -> uclamp) -> head_room()
>
> to:
>
> util = max(util, iowait_boost) -> util =
>                                   head_room(util)
>
> _min = max(irq + cpu_bw_dl,
>            uclamp_min)         ->                  -> max(_min, _max)
>
> _max = min(scale, uclamp_max)  -> _max =
>                                   min(util, _max)
>
> > effective_cpu_util() interface changes and now returns the actual
> > utilization of the CPU with 2 optional inputs:
> > - The minimum performance for this CPU; typically the capacity to handle
> >   the deadline task and the interrupt pressure. But also uclamp_min
> >   request when available.
> > - The maximum targeting performance for this CPU which reflects the
> >   maximum level that we would like to not exceed. By default it will be
> >   the CPU capacity but can be reduced because of some performance hints
> >   set with uclamp. The value can be lower than actual utilization and/or
> >   min performance level.
> >
> > A new sugov_effective_cpu_perf() interface is also available to compute
> > the final performance level that is targeted for the CPU after applying
> > some cpufreq headroom and taking into account all inputs.
> >
> > With these 2 functions, schedutil is now able to decide when it must go
> > above uclamp hints. It now also have a generic way to get the min
> > perfromance level.
> >
> > The dependency between energy model and cpufreq governor and its headroom
> > policy doesn't exist anymore.
>
> But the dependency that both are doing the same thing still exists, right?

For the energy model itself, the dependency is now fully removed; only EAS
still has to estimate which perf level will be selected by schedutil, but it
now uses a schedutil function without having to care about the headroom and
the cpufreq governor policy.

>
> sugov_get_util() and eenv_pd_max_util() are calling the same functions:
>
>   util = effective_cpu_util(cpu, util, &min, &max)
>
>   /* ioboost, bw_min = head_room(min) resp. uclamp tsk handling */
>
>   util = sugov_effective_cpu_perf(cpu, util, min, max)
>
> [...]
>
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index a3f9cd52eec5..78228abd1219 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7381,18 +7381,13 @@ int sched_core_idle_cpu(int cpu)
> >   * required to meet deadlines.
> >   */
> >  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> > -                              enum cpu_util_type type,
> > -                              struct task_struct *p)
> > +                              unsigned long *min,
> > +                              unsigned long *max)
>
> FREQUENCY_UTIL relates to *min != NULL and *max != NULL
>
> ENERGY_UTIL relates to *min == NULL and *max == NULL
>
> so both must be either NULL or !NULL.
>
> Calling it with one equal to NULL and the other !NULL should be
> undefined, right?

For now there is no such user, but one could consider asking only for min or
max. So I would not say undefined, rather unused.
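
For instance, a hypothetical caller interested only in the floor (no such
user exists today) could do:

	unsigned long min, util;

	util = effective_cpu_util(cpu, cpu_util_cfs(cpu), &min, NULL);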

>
> [...]
>
> > @@ -7400,45 +7395,36 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        * update_irq_load_avg().
> >        */
> >       irq = cpu_util_irq(rq);
> > -     if (unlikely(irq >= max))
> > -             return max;
> > +     if (unlikely(irq >= scale)) {
> > +             if (min)
> > +                     *min = scale;
> > +             if (max)
> > +                     *max = scale;
> > +             return scale;
> > +     }
> > +
> > +     /* The minimum utilization returns the highest level between:
> > +      * - the computed DL bandwidth needed with the irq pressure which
> > +      *   steals time to the deadline task.
> > +      * - The minimum bandwidth requirement for CFS.
>
> rq UCLAMP_MIN can also be driven by RT, not only CFS.

yes

>
> > +      */
> > +     if (min)
> > +             *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> >
> >       /*
> >        * Because the time spend on RT/DL tasks is visible as 'lost' time to
> >        * CFS tasks and we use the same metric to track the effective
> >        * utilization (PELT windows are synchronized) we can directly add them
> >        * to obtain the CPU's actual utilization.
> > -      *
> > -      * CFS and RT utilization can be boosted or capped, depending on
> > -      * utilization clamp constraints requested by currently RUNNABLE
> > -      * tasks.
> > -      * When there are no CFS RUNNABLE tasks, clamps are released and
> > -      * frequency will be gracefully reduced with the utilization decay.
> >        */
> >       util = util_cfs + cpu_util_rt(rq);
> > -     if (type == FREQUENCY_UTIL)
> > -             util = uclamp_rq_util_with(rq, util, p);
> > -
> > -     dl_util = cpu_util_dl(rq);
> > -
> > -     /*
> > -      * For frequency selection we do not make cpu_util_dl() a permanent part
> > -      * of this sum because we want to use cpu_bw_dl() later on, but we need
> > -      * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> > -      * that we select f_max when there is no idle time.
> > -      *
> > -      * NOTE: numerical errors or stop class might cause us to not quite hit
> > -      * saturation when we should -- something for later.
> > -      */
> > -     if (util + dl_util >= max)
> > -             return max;
> > +     util += cpu_util_dl(rq);
> >
> > -     /*
> > -      * OTOH, for energy computation we need the estimated running time, so
> > -      * include util_dl and ignore dl_bw.
> > -      */
> > -     if (type == ENERGY_UTIL)
> > -             util += dl_util;
> > +     if (util >= scale) {
> > +             if (max)
> > +                     *max = scale;
>
> But that means that uclamp_max cannot constrain a system in which
> 'util > uclamp_max'. I guess that's related to you saying uclamp_min is a
> hard req and uclamp_max is a soft req. I don't think that's in sync with
> the rest of the uclamp_max implementation.

That's a mistake; I took a shortcut here. I wanted to save the
scale_irq_capacity() step but forgot to update max first.

Will fix it
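
One possible shape of the fix, only as a sketch (the actual respin may
differ), is to set *max before the early return on saturation and drop the
later assignment:

	if (max)
		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));

	util = util_cfs + cpu_util_rt(rq);
	util += cpu_util_dl(rq);
	if (util >= scale)
		return scale;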


>
> > +             return scale;
> > +     }
> >
> >       /*
> >        * There is still idle time; further improve the number by using the
> > @@ -7449,28 +7435,21 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        *   U' = irq + --------- * U
> >        *                 max
> >        */
> > -     util = scale_irq_capacity(util, irq, max);
> > +     util = scale_irq_capacity(util, irq, scale);
> >       util += irq;
> >
> > -     /*
> > -      * Bandwidth required by DEADLINE must always be granted while, for
> > -      * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> > -      * to gracefully reduce the frequency when no tasks show up for longer
> > -      * periods of time.
> > -      *
> > -      * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> > -      * bw_dl as requested freq. However, cpufreq is not yet ready for such
> > -      * an interface. So, we only do the latter for now.
> > +     /* The maximum hint is a soft bandwidth requirement which can be lower
> > +      * than the actual utilization because of max uclamp requirments
> >        */
> > -     if (type == FREQUENCY_UTIL)
> > -             util += cpu_bw_dl(rq);
> > +     if (max)
> > +             *max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
> >
> > -     return min(max, util);
> > +     return min(scale, util);
> >  }
>
> effective_cpu_util for FREQUENCY_UTIL (i.e. (*min != NULL && *max !=
> NULL)) is slightly different.
>
>   missing:
>
>   if (!uclamp_is_used() && rt_rq_is_runnable(&rq->rt)
>     return max
>
>   probably moved into sugov_effective_cpu_perf() (which is only called
>   for `FREQUENCY_UTIL`) ?

yes, it's in sugov_effective_cpu_perf()

>
>
>   old:
>
>   irq_cap_scaling(util_cfs, util_rt) + irq + cpu_bw_dl()
>                                              ^^^^^^^^^^^
>
>   new:
>
>   irq_cap_scaling(util_cfs + util_rt + util_dl) + irq
>                                        ^^^^^^^

Yes, the cpu_bw_dl() contribution moved into the min value instead.
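
In other words, as a sketch:

	*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
	bw_min = map_util_perf(*min);	/* headroom applied by schedutil */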

>
> [...]
>
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max)
> > +{
> > +     unsigned long target;
> > +     struct rq *rq = cpu_rq(cpu);
> > +
> > +     if (rt_rq_is_runnable(&rq->rt))
> > +             return max;
> > +
> > +     /* Provide at least enough capacity for DL + irq */
> > +     target =  min;
> > +
> > +     actual = map_util_perf(actual);
> > +     /* Actually we don't need to target the max performance */
> > +     if (actual < max)
> > +             max = actual;
> > +
> > +     /*
> > +      * Ensure at least minimum performance while providing more compute
> > +      * capacity when possible.
> > +      */
> > +     return max(target, max);
>
> Can you not just use:
>
>        return max(min, max)
>
> and skip target?

Yes. I had an intermediate value that I removed in the final version
but forgot to remove the intermediate variable.
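
So the tail of sugov_effective_cpu_perf() simply becomes (sketch):

	actual = map_util_perf(actual);
	/* Actually we don't need to target the max performance */
	if (actual < max)
		max = actual;

	/*
	 * Ensure at least minimum performance while providing more compute
	 * capacity when possible.
	 */
	return max(min, max);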

>
> > +}
> > +
> >  static void sugov_get_util(struct sugov_cpu *sg_cpu)
> >  {
> > -     unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> > -     struct rq *rq = cpu_rq(sg_cpu->cpu);
> > +     unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> >
> > -     sg_cpu->bw_dl = cpu_bw_dl(rq);
> > -     sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> > -                                       FREQUENCY_UTIL, NULL);
> > +     util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > +     sg_cpu->bw_min = map_util_perf(min);
> > +     sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
> >  }
> >
> >  /**
> > @@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
> >   */
> >  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
> >  {
> > -     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> > +     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
>
> bw_min is more than DL right?

yes

Interrupts preempt DL, so we should include them. And now that we can take
uclamp_min into account, it is used when computing the min perf parameter of
cpufreq_driver_adjust_perf().


>
> bw_min = head_room(max(irq + cpu_bw_dl, rq's UCLAMP_MIN)
>
> [...]


* Re: [PATCH 1/2] sched/schedutil: rework performance estimation
  2023-10-20 13:58     ` Vincent Guittot
@ 2023-10-26  9:07       ` Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: Dietmar Eggemann @ 2023-10-26  9:07 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, rostedt, bsegall, mgorman, bristot,
	vschneid, rafael, viresh.kumar, qyousef, linux-kernel, linux-pm,
	lukasz.luba

On 20/10/2023 15:58, Vincent Guittot wrote:
> On Fri, 20 Oct 2023 at 11:48, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>>
>> On 13/10/2023 17:14, Vincent Guittot wrote:

[...]

>>> A new sugov_effective_cpu_perf() interface is also available to compute
>>> the final performance level that is targeted for the CPU after applying
>>> some cpufreq headroom and taking into account all inputs.
>>>
>>> With these 2 functions, schedutil is now able to decide when it must go
>>> above uclamp hints. It now also have a generic way to get the min
>>> perfromance level.
>>>
>>> The dependency between energy model and cpufreq governor and its headroom
>>> policy doesn't exist anymore.
>>
>> But the dependency that both are doing the same thing still exists, right?
> 
> For the energy model itself, it is now fully removed; only EAS still
> has to estimate which perf level will be selected by schedutil but it
> uses now a schedutil function without having to care about headroom
> and cpufreq governor policy

I see now. (1) replaces (2), so only schedutil applies the headroom and the
EAS/EM dependency is gone.

compute_energy()

  max_util = eenv_pd_max_util()

                 sugov_effective_cpu_perf()

                     actual = map_util_perf(actual)   (1)


  energy = em_cpu_energy(..., max_util, ...);

               max_util = map_util_perf(max_util)     (2)

[...]

>>>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>>> -                              enum cpu_util_type type,
>>> -                              struct task_struct *p)
>>> +                              unsigned long *min,
>>> +                              unsigned long *max)
>>
>> FREQUENCY_UTIL relates to *min != NULL and *max != NULL
>>
>> ENERGY_UTIL relates to *min == NULL and *max == NULL
>>
>> so both must be either NULL or !NULL.
>>
>> Calling it with one equal to NULL and the other !NULL should be
>> undefined, right?
> 
> At now there is no user but one could consider only asking for min or
> max. So I would not say undefined but unused

OK.

[...]

>>> -      * OTOH, for energy computation we need the estimated running time, so
>>> -      * include util_dl and ignore dl_bw.
>>> -      */
>>> -     if (type == ENERGY_UTIL)
>>> -             util += dl_util;
>>> +     if (util >= scale) {
>>> +             if (max)
>>> +                     *max = scale;
>>
>> But that means that uclamp_max cannot constrain a system in which
>> 'util > uclamp_max'. I guess that's related to you saying uclamp_min is a
>> hard req and uclamp_max is a soft req. I don't think that's in sync with
>> the rest of the uclamp_max implementation.
> 
> That's a mistake, I made a shortcut here. I wanted to save the
> scale_irq_capacity() step but forgot to update max 1st.
> 
> Will fix it

I see.

[...]

>> effective_cpu_util for FREQUENCY_UTIL (i.e. (*min != NULL && *max !=
>> NULL)) is slightly different.
>>
>>   missing:
>>
>>   if (!uclamp_is_used() && rt_rq_is_runnable(&rq->rt)
>>     return max
>>
>>   probably moved into sugov_effective_cpu_perf() (which is only called
>>   for `FREQUENCY_UTIL`) ?
> 
> yes, it's in sugov_effective_cpu_perf()

OK.

[...]

>>> @@ -306,7 +329,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
>>>   */
>>>  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
>>>  {
>>> -     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
>>> +     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
>>
>> bw_min is more than DL right?
> 
> yes
> 
> Interruptions are preempting DL so we should include them
> And now that we can take into account uclamp_min, use it when
> computing the min perf parameter of cpufreq_driver_adjust_perf()

OK.


* Re: [PATCH 0/2] Rework interface between scheduler and schedutil governor
  2023-10-13 15:14 [PATCH 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
  2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
  2023-10-13 15:14 ` [PATCH 2/2] sched/schedutil: rework iowait boost Vincent Guittot
@ 2023-10-26 10:19 ` Wyes Karny
  2023-10-26 15:11   ` Vincent Guittot
  2 siblings, 1 reply; 15+ messages in thread
From: Wyes Karny @ 2023-10-26 10:19 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Wyes Karny, mingo, peterz, juri.lelli, dietmar.eggemann, rostedt,
	bsegall, mgorman, bristot, vschneid, rafael, viresh.kumar,
	qyousef, linux-kernel, linux-pm, lukasz.luba

Hi Vincent,

On 13 Oct 17:14, Vincent Guittot wrote:
> Following the discussion with Qais [1] about how to handle uclamp
> requirements and after syncing with him, we agreed that I should move
> forward on the patchset to rework the interface between scheduler and
> schedutil governor to provide more information to the latter. Scheduler
> (and EAS in particular) doesn't need anymore to guess estimate which
> headroom the governor wants to apply and will directly ask for the target
> freq. Then the governor directly gets the actual utilization and new
> minimum and maximum boundaries to select this target frequency and
> doesn't have to deal anymore with scheduler internals like uclamp when
> including iowait boost.

I ran a duty_cycle test (pinned to CPU 1) which repeatedly does timed busy and idle phases based on user input.
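
(The test program itself was not posted; below is a minimal sketch of such a
duty-cycle loop. The file name, timings and CPU-pinning details are
illustrative assumptions, not the actual test.)

	/* duty_cycle.c: pin to CPU 1, then alternate busy and idle phases */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <time.h>
	#include <unistd.h>

	static long elapsed_ms(const struct timespec *a, const struct timespec *b)
	{
		return (b->tv_sec - a->tv_sec) * 1000 +
		       (b->tv_nsec - a->tv_nsec) / 1000000;
	}

	int main(void)
	{
		int busy_ms = 10, idle_ms = 90;	/* e.g. the 10% busy / 90% idle case */
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(1, &set);
		sched_setaffinity(0, sizeof(set), &set);

		for (;;) {
			struct timespec start, now;

			clock_gettime(CLOCK_MONOTONIC, &start);
			do {	/* busy phase: spin */
				clock_gettime(CLOCK_MONOTONIC, &now);
			} while (elapsed_ms(&start, &now) < busy_ms);

			usleep(idle_ms * 1000);	/* idle phase */
		}

		return 0;
	}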

I used the bpftrace program below to trace the effective utilization:
bpftrace -e 'kretprobe:effective_cpu_util / cpu == 1/ { @eff_util[cpu] = stats(retval); @eff_util_hist[cpu] = hist(retval);}'

Below are the results on AMD server system:

--------------------------------------------------------------------------------+ -------------------------------------------------------------------------------+
                           Without patches on 6.6-rc6                           |                             With patches on 6.6-rc6                            |
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 0%, idle: 100% :                                                          | busy: 0%, idle: 100% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 4923, average 22, total 110935                              | @eff_util[1]: count 5556, average 10, total 58857                              |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                    6 |                                                    | | [1]                   14 |                                                    ||
[2, 4)                10 |                                                    | | [2, 4)                16 |                                                    ||
[4, 8)               862 |@@@@@@@@@@@                                         | | [4, 8)              1628 |@@@@@@@@@@@@@@@@@@@@@                               ||
[8, 16)             3782 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [8, 16)             3896 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
[16, 32)              52 |                                                    | | [16, 32)               2 |                                                    ||
[32, 64)              44 |                                                    | |                                                                                |
[64, 128)             40 |                                                    | |                                                                                |
[128, 256)            38 |                                                    | |                                                                                |
[256, 512)            43 |                                                    | |                                                                                |
[512, 1K)             40 |                                                    | |                                                                                |
[1K, 2K)               6 |                                                    | |                                                                                |
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 100%, idle: 0% :                                                          | busy: 100%, idle: 0% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5544, average 974, total 5400203                            | @eff_util[1]: count 5588, average 972, total 5435602                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[0]                    9 |                                                    | | [0]                   17 |                                                    ||
[1]                    0 |                                                    | | [1]                    0 |                                                    ||
[2, 4)                 0 |                                                    | | [2, 4)                 0 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)              0 |                                                    | | [64, 128)              0 |                                                    ||
[128, 256)             1 |                                                    | | [128, 256)             0 |                                                    ||
[256, 512)             0 |                                                    | | [256, 512)             0 |                                                    ||
[512, 1K)           5532 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [512, 1K)           5571 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
[1K, 2K)               2 |                                                    | |                                                                                |
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 10%, idle: 90% :                                                          | busy: 10%, idle: 90% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5073, average 102, total 519454                             | @eff_util[1]: count 5555, average 101, total 566563                            |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                   10 |                                                    | | [1]                   21 |                                                    ||
[2, 4)                 6 |                                                    | | [2, 4)                10 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)           5057 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [64, 128)           5524 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 20%, idle: 80% :                                                          | busy: 20%, idle: 80% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5112, average 198, total 1017056                            | @eff_util[1]: count 5553, average 201, total 1118650                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                   13 |                                                    | | [2, 4)                22 |                                                    ||
[2, 4)                 6 |                                                    | | [4, 8)                10 |                                                    ||
[4, 8)                 0 |                                                    | | [8, 16)                0 |                                                    ||
[8, 16)                1 |                                                    | | [16, 32)               0 |                                                    ||
[16, 32)               0 |                                                    | | [32, 64)               0 |                                                    ||
[32, 64)               0 |                                                    | | [64, 128)              0 |                                                    ||
[64, 128)              0 |                                                    | | [128, 256)          5521 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
[128, 256)          5092 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| |                                                                                |
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 30%, idle: 70% :                                                          | busy: 30%, idle: 70% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5136, average 297, total 1528840                            | @eff_util[1]: count 5548, average 297, total 1650683                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                    7 |                                                    | | [0]                   17 |                                                    ||
[2, 4)                 8 |                                                    | | [1]                    0 |                                                    ||
[4, 8)                 0 |                                                    | | [2, 4)                 0 |                                                    ||
[8, 16)                1 |                                                    | | [4, 8)                 0 |                                                    ||
[16, 32)               0 |                                                    | | [8, 16)                0 |                                                    ||
[32, 64)               0 |                                                    | | [16, 32)               0 |                                                    ||
[64, 128)              0 |                                                    | | [32, 64)               0 |                                                    ||
[128, 256)             0 |                                                    | | [64, 128)              0 |                                                    ||
[256, 512)          5120 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [128, 256)             0 |                                                    ||
                                                                                | [256, 512)          5531 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 40%, idle: 60% :                                                          | busy: 40%, idle: 60% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5161, average 394, total 2036421                            | @eff_util[1]: count 5552, average 394, total 2189976                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[0]                    2 |                                                    | | [0]                   16 |                                                    ||
[1]                    9 |                                                    | | [1]                    0 |                                                    ||
[2, 4)                 2 |                                                    | | [2, 4)                 0 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)              0 |                                                    | | [64, 128)              0 |                                                    ||
[128, 256)             0 |                                                    | | [128, 256)             0 |                                                    ||
[256, 512)          5148 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [256, 512)          5536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 50%, idle: 50% :                                                          | busy: 50%, idle: 50% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5226, average 491, total 2567889                            | @eff_util[1]: count 5559, average 489, total 2722999                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[0]                   10 |                                                    | | [0]                    2 |                                                    ||
[1]                    0 |                                                    | | [1]                   20 |                                                    ||
[2, 4)                 0 |                                                    | | [2, 4)                 6 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)              0 |                                                    | | [64, 128)              0 |                                                    ||
[128, 256)             0 |                                                    | | [128, 256)             0 |                                                    ||
[256, 512)          5188 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [256, 512)          5526 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
[512, 1K)             28 |                                                    | | [512, 1K)              5 |                                                    ||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 60%, idle: 40% :                                                          | busy: 60%, idle: 40% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5303, average 587, total 3115494                            | @eff_util[1]: count 5549, average 588, total 3264071                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                    2 |                                                    | | [0]                   17 |                                                    ||
[2, 4)                 8 |                                                    | | [1]                    0 |                                                    ||
[4, 8)                 4 |                                                    | | [2, 4)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [4, 8)                 0 |                                                    ||
[16, 32)               0 |                                                    | | [8, 16)                0 |                                                    ||
[32, 64)               0 |                                                    | | [16, 32)               0 |                                                    ||
[64, 128)              0 |                                                    | | [32, 64)               0 |                                                    ||
[128, 256)             0 |                                                    | | [64, 128)              0 |                                                    ||
[256, 512)             0 |                                                    | | [128, 256)             0 |                                                    ||
[512, 1K)           5289 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [256, 512)             0 |                                                    ||
                                                                                | [512, 1K)           5532 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 70%, idle: 30% :                                                          | busy: 70%, idle: 30% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5325, average 685, total 3648392                            | @eff_util[1]: count 5542, average 685, total 3796277                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[0]                    9 |                                                    | | [0]                   15 |                                                    ||
[1]                    0 |                                                    | | [1]                    2 |                                                    ||
[2, 4)                 0 |                                                    | | [2, 4)                 0 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)              1 |                                                    | | [64, 128)              0 |                                                    ||
[128, 256)             0 |                                                    | | [128, 256)             0 |                                                    ||
[256, 512)             0 |                                                    | | [256, 512)             0 |                                                    ||
[512, 1K)           5315 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [512, 1K)           5525 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 80%, idle: 20% :                                                          | busy: 80%, idle: 20% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5327, average 780, total 4160266                            | @eff_util[1]: count 5541, average 780, total 4326164                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[1]                    8 |                                                    | | [0]                   17 |                                                    ||
[2, 4)                 6 |                                                    | | [1]                    0 |                                                    ||
[4, 8)                 0 |                                                    | | [2, 4)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [4, 8)                 0 |                                                    ||
[16, 32)               0 |                                                    | | [8, 16)                0 |                                                    ||
[32, 64)               0 |                                                    | | [16, 32)               0 |                                                    ||
[64, 128)              0 |                                                    | | [32, 64)               0 |                                                    ||
[128, 256)             0 |                                                    | | [64, 128)              0 |                                                    ||
[256, 512)             0 |                                                    | | [128, 256)             0 |                                                    ||
[512, 1K)           5313 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [256, 512)             0 |                                                    ||
                                                                                | [512, 1K)           5524 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
------------------------------------------------------------------------------- | -------------------------------------------------------------------------------|
busy: 90%, idle: 10% :                                                          | busy: 90%, idle: 10% :                                                         |
                                                                                |                                                                                |
@eff_util[1]: count 5424, average 877, total 4762032                            | @eff_util[1]: count 5548, average 877, total 4869975                           |
                                                                                |                                                                                |
@eff_util_hist[1]:                                                              | @eff_util_hist[1]:                                                             |
[0]                    9 |                                                    | | [0]                   17 |                                                    ||
[1]                    0 |                                                    | | [1]                    0 |                                                    ||
[2, 4)                 0 |                                                    | | [2, 4)                 0 |                                                    ||
[4, 8)                 0 |                                                    | | [4, 8)                 0 |                                                    ||
[8, 16)                0 |                                                    | | [8, 16)                0 |                                                    ||
[16, 32)               0 |                                                    | | [16, 32)               0 |                                                    ||
[32, 64)               0 |                                                    | | [32, 64)               0 |                                                    ||
[64, 128)              1 |                                                    | | [64, 128)              0 |                                                    ||
[128, 256)             0 |                                                    | | [128, 256)             0 |                                                    ||
[256, 512)             0 |                                                    | | [256, 512)             0 |                                                    ||
[512, 1K)           5412 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| | [512, 1K)           5531 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@||
[1K, 2K)               2 |                                                    | |                                                                                |
--------------------------------------------------------------------------------+ -------------------------------------------------------------------------------|

Thanks,
Wyes
> 
> [1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
> 
> Vincent Guittot (2):
>   sched/schedutil: rework performance estimation
>   sched/schedutil: rework iowait boost
> 
>  include/linux/energy_model.h     |  1 -
>  kernel/sched/core.c              | 85 ++++++++++++--------------------
>  kernel/sched/cpufreq_schedutil.c | 72 +++++++++++++++++----------
>  kernel/sched/fair.c              | 22 +++++++--
>  kernel/sched/sched.h             | 84 +++----------------------------
>  5 files changed, 105 insertions(+), 159 deletions(-)
> 
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 0/2] Rework interface between scheduler and schedutil governor
  2023-10-26 10:19 ` [PATCH 0/2] Rework interface between scheduler and schedutil governor Wyes Karny
@ 2023-10-26 15:11   ` Vincent Guittot
  0 siblings, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-10-26 15:11 UTC (permalink / raw)
  To: Wyes Karny
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba

Hi Wyes,

On Thu, 26 Oct 2023 at 12:19, Wyes Karny <wyes.karny@amd.com> wrote:
>
> Hi Vincent,
>
> On 13 Oct 17:14, Vincent Guittot wrote:
> > Following the discussion with Qais [1] about how to handle uclamp
> > requirements and after syncing with him, we agreed that I should move
> > forward on the patchset to rework the interface between scheduler and
> > schedutil governor to provide more information to the latter. Scheduler
> > (and EAS in particular) doesn't need anymore to guess estimate which
> > headroom the governor wants to apply and will directly ask for the target
> > freq. Then the governor directly gets the actual utilization and new
> > minimum and maximum boundaries to select this target frequency and
> > doesn't have to deal anymore with scheduler internals like uclamp when
> > including iowait boost.
>
> I ran a duty_cycle test (on cpu 1) which repeatedly alternates timed busy and idle phases based on user input.
>
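
(The duty-cycle pattern described above amounts to a timed busy/idle loop pinned to one CPU; a minimal sketch, not the actual test program, with an assumed 100 ms period:)

#define _GNU_SOURCE
#include <sched.h>
#include <time.h>
#include <unistd.h>

static void run_duty_cycle(int busy_pct)
{
	const long period_us = 100000;		/* 100 ms period (assumed) */
	long busy_us = period_us * busy_pct / 100;
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(1, &set);			/* pin to CPU 1 */
	sched_setaffinity(0, sizeof(set), &set);

	for (;;) {
		struct timespec start, now;

		clock_gettime(CLOCK_MONOTONIC, &start);
		do {				/* busy phase: spin */
			clock_gettime(CLOCK_MONOTONIC, &now);
		} while ((now.tv_sec - start.tv_sec) * 1000000L +
			 (now.tv_nsec - start.tv_nsec) / 1000L < busy_us);

		usleep(period_us - busy_us);	/* idle phase: sleep */
	}
}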

Thanks for the tests. IIUC your results below, you don't see any
significant difference in the figures with and without the patches,
which is exactly the goal in your case. The difference shows up when
you use uclamp_min/max, deadline tasks, EAS, or
cpufreq_driver_adjust_perf().
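
For example, the same duty-cycle task can be clamped through
sched_setattr() to exercise the uclamp_min/max path. A rough,
self-contained sketch (the 256/512 clamp values are arbitrary, and
struct sched_attr is redefined locally because libc may not provide it):

#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
#define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
#endif

/* Redefined locally; matches the sched_setattr(2) uclamp extension. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

/* Clamp the calling task, e.g. set_uclamp(256, 512) before the busy loop. */
int set_uclamp(uint32_t util_min, uint32_t util_max)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_OTHER;
	attr.sched_flags = SCHED_FLAG_UTIL_CLAMP_MIN |
			   SCHED_FLAG_UTIL_CLAMP_MAX;
	attr.sched_util_min = util_min;		/* arbitrary example: 256 */
	attr.sched_util_max = util_max;		/* arbitrary example: 512 */

	return syscall(SYS_sched_setattr, 0, &attr, 0);
}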

> I used the bpftrace program below to trace the effective utilization:
> bpftrace -e 'kretprobe:effective_cpu_util / cpu == 1/ { @eff_util[cpu] = stats(retval); @eff_util_hist[cpu] = hist(retval);}'

Minor point, but this should not make any difference in your case:
the new effective_cpu_util() replaces the legacy
effective_cpu_util(cpu, util_cfs, type == ENERGY_UTIL, p),
and the new sugov_effective_cpu_perf() replaces the legacy
effective_cpu_util(cpu, util_cfs, type == FREQUENCY_UTIL, p) plus the
DVFS headroom.
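
In other words, the split looks roughly like the standalone model below.
This is only an approximation of the behaviour described above, not the
kernel code, and the ~25% DVFS headroom is the usual schedutil margin,
assumed here for illustration:

/* Standalone model of the described split (approximation, not kernel code). */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL

static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

/*
 * Model of the new effective_cpu_util(): return the actual utilization,
 * and report the performance bounds separately (irq/deadline pressure and
 * uclamp_min for the floor, uclamp_max for the ceiling).
 */
static unsigned long cpu_actual_util(unsigned long util_cfs,
				     unsigned long irq_dl_pressure,
				     unsigned long uclamp_min,
				     unsigned long uclamp_max,
				     unsigned long *min, unsigned long *max)
{
	*min = max_ul(irq_dl_pressure, uclamp_min);
	*max = uclamp_max;

	return min_ul(util_cfs + irq_dl_pressure, SCHED_CAPACITY_SCALE);
}

/*
 * Model of sugov_effective_cpu_perf(): apply the DVFS headroom to the
 * actual utilization, then clamp the result into [min, max].
 */
static unsigned long sugov_perf(unsigned long actual,
				unsigned long min, unsigned long max)
{
	actual += actual >> 2;			/* ~25% headroom (assumed) */

	return min_ul(max_ul(actual, min), max);
}

int main(void)
{
	unsigned long min, max;
	unsigned long util = cpu_actual_util(300, 10, 0, SCHED_CAPACITY_SCALE,
					     &min, &max);

	printf("actual=%lu min=%lu max=%lu perf=%lu\n",
	       util, min, max, sugov_perf(util, min, max));
	return 0;
}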

Thanks,
Vincent

>
> Below are the results on AMD server system:
>
> [snip: duty-cycle histograms (without vs. with patches on 6.6-rc6), identical to the tables in the previous message]
>
> Thanks,
> Wyes
> >
> > [1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
> >
> > Vincent Guittot (2):
> >   sched/schedutil: rework performance estimation
> >   sched/schedutil: rework iowait boost
> >
> >  include/linux/energy_model.h     |  1 -
> >  kernel/sched/core.c              | 85 ++++++++++++--------------------
> >  kernel/sched/cpufreq_schedutil.c | 72 +++++++++++++++++----------
> >  kernel/sched/fair.c              | 22 +++++++--
> >  kernel/sched/sched.h             | 84 +++----------------------------
> >  5 files changed, 105 insertions(+), 159 deletions(-)
> >
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2023-10-26 15:11 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-13 15:14 [PATCH 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
2023-10-13 15:14 ` [PATCH 1/2] sched/schedutil: rework performance estimation Vincent Guittot
2023-10-13 18:20   ` Ingo Molnar
2023-10-15  8:02     ` Vincent Guittot
2023-10-18  7:04   ` Beata Michalska
2023-10-18 12:20     ` Lukasz Luba
2023-10-18 13:25     ` Vincent Guittot
2023-10-20  9:48   ` Dietmar Eggemann
2023-10-20 13:58     ` Vincent Guittot
2023-10-26  9:07       ` Dietmar Eggemann
2023-10-13 15:14 ` [PATCH 2/2] sched/schedutil: rework iowait boost Vincent Guittot
2023-10-18  7:39   ` Beata Michalska
2023-10-18 13:26     ` Vincent Guittot
2023-10-26 10:19 ` [PATCH 0/2] Rework interface between scheduler and schedutil governor Wyes Karny
2023-10-26 15:11   ` Vincent Guittot
