* [PATCH v3 0/2] Rework interface between scheduler and schedutil governor
@ 2023-11-03 13:18 Vincent Guittot
  2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-11-03 13:18 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, wyes.karny, beata.michalska, Vincent Guittot

Following the discussion with Qais [1] about how to handle uclamp
requirements, and after syncing with him, we agreed that I should move
forward on the patchset to rework the interface between the scheduler
and the schedutil governor to provide more information to the latter.
The scheduler (and EAS in particular) no longer needs to guess which
headroom the governor wants to apply and will directly ask for the
target frequency. The governor then directly gets the actual
utilization and the new minimum and maximum boundaries to select this
target frequency, and no longer has to deal with scheduler internals
like uclamp when including the iowait boost.

[1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
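
To illustrate, the split boils down to the two functions below
(signatures as introduced by the patches; the scheduler reports the
utilization and bounds, the governor applies its own headroom policy):

        /* scheduler: actual utilization, plus optional min/max bounds */
        unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
                                         unsigned long *min, unsigned long *max);

        /* governor: final performance target within those bounds */
        unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
                                               unsigned long min, unsigned long max);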

Changes since v2:
- remove useless target variable

Changes since v1:
- fix a bug (always set max even when returning early)
- fix typos
  
Vincent Guittot (2):
  sched/schedutil: Rework performance estimation
  sched/schedutil: Rework iowait boost

 include/linux/energy_model.h     |  1 -
 kernel/sched/core.c              | 82 ++++++++++++-------------------
 kernel/sched/cpufreq_schedutil.c | 69 ++++++++++++++++----------
 kernel/sched/fair.c              | 22 +++++++--
 kernel/sched/sched.h             | 84 +++-----------------------------
 5 files changed, 100 insertions(+), 158 deletions(-)

-- 
2.34.1



* [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-03 13:18 [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
@ 2023-11-03 13:18 ` Vincent Guittot
  2023-11-14 20:54   ` Qais Yousef
  2023-11-16 13:19   ` Lukasz Luba
  2023-11-03 13:18 ` [PATCH v3 2/2] sched/schedutil: Rework iowait boost Vincent Guittot
  2023-11-06 15:05 ` [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Rafael J. Wysocki
  2 siblings, 2 replies; 15+ messages in thread
From: Vincent Guittot @ 2023-11-03 13:18 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, wyes.karny, beata.michalska, Vincent Guittot

The current method of taking uclamp hints into account when estimating
the target frequency can end up in situations where the selected target
frequency is higher than the uclamp hints even though there is no real
need for it. Such cases mainly happen because we are currently mixing
the traditional scheduler utilization signal with the uclamp performance
hints. By adding these 2 metrics, we lose important information when it
comes to selecting the target frequency, and we have to make some
assumptions which can't fit all cases.

Rework the interface between the scheduler and schedutil governor in order
to propagate all information down to the cpufreq governor.

effective_cpu_util() changes and now returns the actual utilization of
the CPU with 2 optional outputs:
- The minimum performance for this CPU; typically the capacity to
  handle the deadline tasks and the interrupt pressure, but also the
  uclamp_min request when available.
- The maximum targeted performance for this CPU, which reflects the
  maximum level that we would prefer not to exceed. By default it will
  be the CPU capacity, but it can be reduced because of some
  performance hints set with uclamp. The value can be lower than the
  actual utilization and/or the min performance level.

A new sugov_effective_cpu_perf() interface is also available to compute
the final performance level that is targeted for the CPU after applying
some cpufreq headroom and taking into account all inputs.

With these 2 functions, schedutil is now able to decide when it must go
above the uclamp hints. It now also has a generic way to get the min
performance level.

The dependency between the energy model and the cpufreq governor and
its headroom policy doesn't exist anymore.

eenv_pd_max_util() asks schedutil for the targeted performance after
applying the impact of the waking task.
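
For reference, the governor-side usage then looks like this (simplified
from the sugov_get_util() change below):

        unsigned long min, max;
        unsigned long util = cpu_util_cfs_boost(cpu);

        /* actual utilization plus the min/max performance bounds */
        util = effective_cpu_util(cpu, util, &min, &max);
        /* final target after applying the cpufreq headroom */
        util = sugov_effective_cpu_perf(cpu, util, min, max);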

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/energy_model.h     |  1 -
 kernel/sched/core.c              | 82 ++++++++++++--------------------
 kernel/sched/cpufreq_schedutil.c | 40 ++++++++++++----
 kernel/sched/fair.c              | 22 +++++++--
 kernel/sched/sched.h             | 24 +++-------
 5 files changed, 86 insertions(+), 83 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index b9caa01dfac4..adec808b371a 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 	scale_cpu = arch_scale_cpu_capacity(cpu);
 	ps = &pd->table[pd->nr_perf_states - 1];
 
-	max_util = map_util_perf(max_util);
 	max_util = min(max_util, allowed_cpu_cap);
 	freq = map_util_freq(max_util, ps->frequency, scale_cpu);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7a0c16115b79..af5333327493 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7391,18 +7391,13 @@ int sched_core_idle_cpu(int cpu)
  * required to meet deadlines.
  */
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 enum cpu_util_type type,
-				 struct task_struct *p)
+				 unsigned long *min,
+				 unsigned long *max)
 {
-	unsigned long dl_util, util, irq, max;
+	unsigned long util, irq, scale;
 	struct rq *rq = cpu_rq(cpu);
 
-	max = arch_scale_cpu_capacity(cpu);
-
-	if (!uclamp_is_used() &&
-	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
-		return max;
-	}
+	scale = arch_scale_cpu_capacity(cpu);
 
 	/*
 	 * Early check to see if IRQ/steal time saturates the CPU, can be
@@ -7410,45 +7405,41 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	 * update_irq_load_avg().
 	 */
 	irq = cpu_util_irq(rq);
-	if (unlikely(irq >= max))
-		return max;
+	if (unlikely(irq >= scale)) {
+		if (min)
+			*min = scale;
+		if (max)
+			*max = scale;
+		return scale;
+	}
+
+	/*
+	 * The minimum utilization returns the highest level between:
+	 * - the computed DL bandwidth needed with the irq pressure which
+	 *   steals time to the deadline task.
+	 * - The minimum bandwidth requirement for CFS and/or RT.
+	 */
+	if (min)
+		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
 
 	/*
 	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
 	 * CFS tasks and we use the same metric to track the effective
 	 * utilization (PELT windows are synchronized) we can directly add them
 	 * to obtain the CPU's actual utilization.
-	 *
-	 * CFS and RT utilization can be boosted or capped, depending on
-	 * utilization clamp constraints requested by currently RUNNABLE
-	 * tasks.
-	 * When there are no CFS RUNNABLE tasks, clamps are released and
-	 * frequency will be gracefully reduced with the utilization decay.
 	 */
 	util = util_cfs + cpu_util_rt(rq);
-	if (type == FREQUENCY_UTIL)
-		util = uclamp_rq_util_with(rq, util, p);
-
-	dl_util = cpu_util_dl(rq);
+	util += cpu_util_dl(rq);
 
 	/*
-	 * For frequency selection we do not make cpu_util_dl() a permanent part
-	 * of this sum because we want to use cpu_bw_dl() later on, but we need
-	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
-	 * that we select f_max when there is no idle time.
-	 *
-	 * NOTE: numerical errors or stop class might cause us to not quite hit
-	 * saturation when we should -- something for later.
+	 * The maximum hint is a soft bandwidth requirement which can be lower
+	 * than the actual utilization because of uclamp_max requirements
 	 */
-	if (util + dl_util >= max)
-		return max;
+	if (max)
+		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
 
-	/*
-	 * OTOH, for energy computation we need the estimated running time, so
-	 * include util_dl and ignore dl_bw.
-	 */
-	if (type == ENERGY_UTIL)
-		util += dl_util;
+	if (util >= scale)
+		return scale;
 
 	/*
 	 * There is still idle time; further improve the number by using the
@@ -7459,28 +7450,15 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	 *   U' = irq + --------- * U
 	 *                 max
 	 */
-	util = scale_irq_capacity(util, irq, max);
+	util = scale_irq_capacity(util, irq, scale);
 	util += irq;
 
-	/*
-	 * Bandwidth required by DEADLINE must always be granted while, for
-	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
-	 * to gracefully reduce the frequency when no tasks show up for longer
-	 * periods of time.
-	 *
-	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
-	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
-	 * an interface. So, we only do the latter for now.
-	 */
-	if (type == FREQUENCY_UTIL)
-		util += cpu_bw_dl(rq);
-
-	return min(max, util);
+	return min(scale, util);
 }
 
 unsigned long sched_cpu_util(int cpu)
 {
-	return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
+	return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
 }
 #endif /* CONFIG_SMP */
 
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 458d359f5991..38accd8c854b 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -47,7 +47,7 @@ struct sugov_cpu {
 	u64			last_update;
 
 	unsigned long		util;
-	unsigned long		bw_dl;
+	unsigned long		bw_min;
 
 	/* The field below is for single-CPU policies only: */
 #ifdef CONFIG_NO_HZ_COMMON
@@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
-	util = map_util_perf(util);
 	freq = map_util_freq(util, freq, max);
 
 	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
@@ -153,14 +152,35 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+				 unsigned long min,
+				 unsigned long max)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	if (rt_rq_is_runnable(&rq->rt))
+		return max;
+
+	/* Add dvfs headroom to actual utilization */
+	actual = map_util_perf(actual);
+	/* Actually we don't need to target the max performance */
+	if (actual < max)
+		max = actual;
+
+	/*
+	 * Ensure at least minimum performance while providing more compute
+	 * capacity when possible.
+	 */
+	return max(min, max);
+}
+
 static void sugov_get_util(struct sugov_cpu *sg_cpu)
 {
-	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
-	struct rq *rq = cpu_rq(sg_cpu->cpu);
+	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
 
-	sg_cpu->bw_dl = cpu_bw_dl(rq);
-	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
-					  FREQUENCY_UTIL, NULL);
+	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+	sg_cpu->bw_min = map_util_perf(min);
+	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
 }
 
 /**
@@ -306,7 +326,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
  */
 static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
 {
-	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
 		sg_cpu->sg_policy->limits_changed = true;
 }
 
@@ -407,8 +427,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
 	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
 		sg_cpu->util = prev_util;
 
-	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
-				   map_util_perf(sg_cpu->util), max_cap);
+	cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
+				   sg_cpu->util, max_cap);
 
 	sg_cpu->sg_policy->last_freq_update_time = time;
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8767988242ee..ed64f2eaaa2a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7671,7 +7671,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
 	for_each_cpu(cpu, pd_cpus) {
 		unsigned long util = cpu_util(cpu, p, -1, 0);
 
-		busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
+		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
 	}
 
 	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
@@ -7694,7 +7694,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
 	for_each_cpu(cpu, pd_cpus) {
 		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
 		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
-		unsigned long eff_util;
+		unsigned long eff_util, min, max;
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -7703,7 +7703,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
 		 * NOTE: in case RT tasks are running, by default the
 		 * FREQUENCY_UTIL's utilization can be max OPP.
 		 */
-		eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
+		eff_util = effective_cpu_util(cpu, util, &min, &max);
+
+		/* Task's uclamp can modify min and max value */
+		if (tsk && uclamp_is_used()) {
+			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
+
+			/*
+			 * If there is no active max uclamp constraint,
+			 * directly use task's one otherwise keep max
+			 */
+			if (uclamp_rq_is_idle(cpu_rq(cpu)))
+				max = uclamp_eff_value(p, UCLAMP_MAX);
+			else
+				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
+		}
+
+		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
 		max_util = max(max_util, eff_util);
 	}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2e5a95486a42..302b451a3fd8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2961,24 +2961,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 #endif
 
 #ifdef CONFIG_SMP
-/**
- * enum cpu_util_type - CPU utilization type
- * @FREQUENCY_UTIL:	Utilization used to select frequency
- * @ENERGY_UTIL:	Utilization used during energy calculation
- *
- * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
- * need to be aggregated differently depending on the usage made of them. This
- * enum is used within effective_cpu_util() to differentiate the types of
- * utilization expected by the callers, and adjust the aggregation accordingly.
- */
-enum cpu_util_type {
-	FREQUENCY_UTIL,
-	ENERGY_UTIL,
-};
-
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 enum cpu_util_type type,
-				 struct task_struct *p);
+				 unsigned long *min,
+				 unsigned long *max);
+
+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+				 unsigned long min,
+				 unsigned long max);
+
 
 /*
  * Verify the fitness of task @p to run on @cpu taking into account the
-- 
2.34.1



* [PATCH v3 2/2] sched/schedutil: Rework iowait boost
  2023-11-03 13:18 [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
  2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
@ 2023-11-03 13:18 ` Vincent Guittot
  2023-11-14 20:59   ` Qais Yousef
  2023-11-06 15:05 ` [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Rafael J. Wysocki
  2 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-11-03 13:18 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm
  Cc: lukasz.luba, wyes.karny, beata.michalska, Vincent Guittot

Use the max value that has already been computed inside sugov_get_util()
to cap the iowait boost, and remove the dependency on
uclamp_rq_util_with(), which is no longer used.
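
The update paths then become (simplified from the diff below):

        boost = sugov_iowait_apply(sg_cpu, time, max_cap);
        sugov_get_util(sg_cpu, boost);

i.e. sugov_iowait_apply() now only computes and returns the boost, and
sugov_get_util() folds it into the utilization before applying the
min/max performance bounds.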

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/cpufreq_schedutil.c | 29 ++++++++-------
 kernel/sched/sched.h             | 60 --------------------------------
 2 files changed, 14 insertions(+), 75 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 38accd8c854b..068c895517fb 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -174,11 +174,12 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
 	return max(min, max);
 }
 
-static void sugov_get_util(struct sugov_cpu *sg_cpu)
+static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
 {
 	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
 
 	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
+	util = max(util, boost);
 	sg_cpu->bw_min = map_util_perf(min);
 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
 }
@@ -271,18 +272,16 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
  * This mechanism is designed to boost high frequently IO waiting tasks, while
  * being more conservative on tasks which does sporadic IO operations.
  */
-static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
+static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 			       unsigned long max_cap)
 {
-	unsigned long boost;
-
 	/* No boost currently required */
 	if (!sg_cpu->iowait_boost)
-		return;
+		return 0;
 
 	/* Reset boost if the CPU appears to have been idle enough */
 	if (sugov_iowait_reset(sg_cpu, time, false))
-		return;
+		return 0;
 
 	if (!sg_cpu->iowait_boost_pending) {
 		/*
@@ -291,7 +290,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 		sg_cpu->iowait_boost >>= 1;
 		if (sg_cpu->iowait_boost < IOWAIT_BOOST_MIN) {
 			sg_cpu->iowait_boost = 0;
-			return;
+			return 0;
 		}
 	}
 
@@ -301,10 +300,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
 	 * sg_cpu->util is already in capacity scale; convert iowait_boost
 	 * into the same scale so we can compare.
 	 */
-	boost = (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
-	boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
-	if (sg_cpu->util < boost)
-		sg_cpu->util = boost;
+	return (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -334,6 +330,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
 					      u64 time, unsigned long max_cap,
 					      unsigned int flags)
 {
+	unsigned long boost;
+
 	sugov_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
 
@@ -342,8 +340,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
 	if (!sugov_should_update_freq(sg_cpu->sg_policy, time))
 		return false;
 
-	sugov_get_util(sg_cpu);
-	sugov_iowait_apply(sg_cpu, time, max_cap);
+	boost = sugov_iowait_apply(sg_cpu, time, max_cap);
+	sugov_get_util(sg_cpu, boost);
 
 	return true;
 }
@@ -444,9 +442,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 
 	for_each_cpu(j, policy->cpus) {
 		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
+		unsigned long boost;
 
-		sugov_get_util(j_sg_cpu);
-		sugov_iowait_apply(j_sg_cpu, time, max_cap);
+		boost = sugov_iowait_apply(j_sg_cpu, time, max_cap);
+		sugov_get_util(j_sg_cpu, boost);
 
 		util = max(j_sg_cpu->util, util);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 302b451a3fd8..e3cb8e004bd1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3025,59 +3025,6 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
 	return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
 }
 
-/**
- * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
- * @rq:		The rq to clamp against. Must not be NULL.
- * @util:	The util value to clamp.
- * @p:		The task to clamp against. Can be NULL if you want to clamp
- *		against @rq only.
- *
- * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
- *
- * If sched_uclamp_used static key is disabled, then just return the util
- * without any clamping since uclamp aggregation at the rq level in the fast
- * path is disabled, rendering this operation a NOP.
- *
- * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
- * will return the correct effective uclamp value of the task even if the
- * static key is disabled.
- */
-static __always_inline
-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-				  struct task_struct *p)
-{
-	unsigned long min_util = 0;
-	unsigned long max_util = 0;
-
-	if (!static_branch_likely(&sched_uclamp_used))
-		return util;
-
-	if (p) {
-		min_util = uclamp_eff_value(p, UCLAMP_MIN);
-		max_util = uclamp_eff_value(p, UCLAMP_MAX);
-
-		/*
-		 * Ignore last runnable task's max clamp, as this task will
-		 * reset it. Similarly, no need to read the rq's min clamp.
-		 */
-		if (uclamp_rq_is_idle(rq))
-			goto out;
-	}
-
-	min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
-	max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
-out:
-	/*
-	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
-	 * RUNNABLE tasks with _different_ clamps, we can end up with an
-	 * inversion. Fix it now when the clamps are applied.
-	 */
-	if (unlikely(min_util >= max_util))
-		return min_util;
-
-	return clamp(util, min_util, max_util);
-}
-
 /* Is the rq being capped/throttled by uclamp_max? */
 static inline bool uclamp_rq_is_capped(struct rq *rq)
 {
@@ -3115,13 +3062,6 @@ static inline unsigned long uclamp_eff_value(struct task_struct *p,
 	return SCHED_CAPACITY_SCALE;
 }
 
-static inline
-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-				  struct task_struct *p)
-{
-	return util;
-}
-
 static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
 
 static inline bool uclamp_is_used(void)
-- 
2.34.1



* Re: [PATCH v3 0/2] Rework interface between scheduler and schedutil governor
  2023-11-03 13:18 [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
  2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
  2023-11-03 13:18 ` [PATCH v3 2/2] sched/schedutil: Rework iowait boost Vincent Guittot
@ 2023-11-06 15:05 ` Rafael J. Wysocki
  2023-11-16 14:34   ` Peter Zijlstra
  2 siblings, 1 reply; 15+ messages in thread
From: Rafael J. Wysocki @ 2023-11-06 15:05 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba, wyes.karny, beata.michalska

On Fri, Nov 3, 2023 at 2:18 PM Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> Following the discussion with Qais [1] about how to handle uclamp
> requirements, and after syncing with him, we agreed that I should move
> forward on the patchset to rework the interface between the scheduler
> and the schedutil governor to provide more information to the latter.
> The scheduler (and EAS in particular) no longer needs to guess which
> headroom the governor wants to apply and will directly ask for the
> target frequency. The governor then directly gets the actual
> utilization and the new minimum and maximum boundaries to select this
> target frequency, and no longer has to deal with scheduler internals
> like uclamp when including the iowait boost.
>
> [1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
>
> Changes since v2:
> - remove useless target variable
>
> Changes since v1:
> - fix a bug (always set max even when returning early)
> - fix typos
>
> Vincent Guittot (2):
>   sched/schedutil: Rework performance estimation
>   sched/schedutil: Rework iowait boost
>
>  include/linux/energy_model.h     |  1 -
>  kernel/sched/core.c              | 82 ++++++++++++-------------------
>  kernel/sched/cpufreq_schedutil.c | 69 ++++++++++++++++----------
>  kernel/sched/fair.c              | 22 +++++++--
>  kernel/sched/sched.h             | 84 +++-----------------------------
>  5 files changed, 100 insertions(+), 158 deletions(-)
>
> --

For the schedutil changes in the series:

Acked-by: Rafael J. Wysocki <rafael@kernel.org>

and I'm assuming this series to be targeted at sched.


* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
@ 2023-11-14 20:54   ` Qais Yousef
  2023-11-22  7:38     ` Vincent Guittot
  2023-11-16 13:19   ` Lukasz Luba
  1 sibling, 1 reply; 15+ messages in thread
From: Qais Yousef @ 2023-11-14 20:54 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On 11/03/23 14:18, Vincent Guittot wrote:
> The current method of taking uclamp hints into account when estimating
> the target frequency can end up in situations where the selected target
> frequency is higher than the uclamp hints even though there is no real
> need for it. Such cases mainly happen because we are currently mixing
> the traditional scheduler utilization signal with the uclamp performance
> hints. By adding these 2 metrics, we lose important information when it
> comes to selecting the target frequency, and we have to make some
> assumptions which can't fit all cases.
> 
> Rework the interface between the scheduler and schedutil governor in order
> to propagate all information down to the cpufreq governor.
> 
> effective_cpu_util() changes and now returns the actual utilization of
> the CPU with 2 optional outputs:
> - The minimum performance for this CPU; typically the capacity to
>   handle the deadline tasks and the interrupt pressure, but also the
>   uclamp_min request when available.
> - The maximum targeted performance for this CPU, which reflects the
>   maximum level that we would prefer not to exceed. By default it will
>   be the CPU capacity, but it can be reduced because of some
>   performance hints set with uclamp. The value can be lower than the
>   actual utilization and/or the min performance level.
> 
> A new sugov_effective_cpu_perf() interface is also available to compute
> the final performance level that is targeted for the CPU after applying
> some cpufreq headroom and taking into account all inputs.
> 
> With these 2 functions, schedutil is now able to decide when it must go
> above the uclamp hints. It now also has a generic way to get the min
> performance level.
> 
> The dependency between the energy model and the cpufreq governor and
> its headroom policy doesn't exist anymore.
> 
> eenv_pd_max_util() asks schedutil for the targeted performance after
> applying the impact of the waking task.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  include/linux/energy_model.h     |  1 -
>  kernel/sched/core.c              | 82 ++++++++++++--------------------
>  kernel/sched/cpufreq_schedutil.c | 40 ++++++++++++----
>  kernel/sched/fair.c              | 22 +++++++--
>  kernel/sched/sched.h             | 24 +++-------
>  5 files changed, 86 insertions(+), 83 deletions(-)
> 
> diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
> index b9caa01dfac4..adec808b371a 100644
> --- a/include/linux/energy_model.h
> +++ b/include/linux/energy_model.h
> @@ -243,7 +243,6 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
>  	scale_cpu = arch_scale_cpu_capacity(cpu);
>  	ps = &pd->table[pd->nr_perf_states - 1];
>  
> -	max_util = map_util_perf(max_util);
>  	max_util = min(max_util, allowed_cpu_cap);
>  	freq = map_util_freq(max_util, ps->frequency, scale_cpu);
>  
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7a0c16115b79..af5333327493 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7391,18 +7391,13 @@ int sched_core_idle_cpu(int cpu)
>   * required to meet deadlines.
>   */
>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> -				 enum cpu_util_type type,
> -				 struct task_struct *p)
> +				 unsigned long *min,
> +				 unsigned long *max)
>  {
> -	unsigned long dl_util, util, irq, max;
> +	unsigned long util, irq, scale;
>  	struct rq *rq = cpu_rq(cpu);
>  
> -	max = arch_scale_cpu_capacity(cpu);
> -
> -	if (!uclamp_is_used() &&
> -	    type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
> -		return max;
> -	}
> +	scale = arch_scale_cpu_capacity(cpu);
>  
>  	/*
>  	 * Early check to see if IRQ/steal time saturates the CPU, can be
> @@ -7410,45 +7405,41 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 * update_irq_load_avg().
>  	 */
>  	irq = cpu_util_irq(rq);
> -	if (unlikely(irq >= max))
> -		return max;
> +	if (unlikely(irq >= scale)) {
> +		if (min)
> +			*min = scale;
> +		if (max)
> +			*max = scale;
> +		return scale;
> +	}
> +
> +	/*
> +	 * The minimum utilization returns the highest level between:
> +	 * - the computed DL bandwidth needed with the irq pressure which
> +	 *   steals time to the deadline task.
> +	 * - The minimum bandwidth requirement for CFS and/or RT.

s/bandwidth for CFS/performance for CFS/?

I've seen a lot of confusion on what both mean. I think you've used
bandwidth to refer to performance, but most of us look at bandwidth as the
actual cpu.share/time actually consumed by the task. I think it's important
to keep the two concepts distinct and use common terminology, as this has
been a point of contention in various discussions on and off the list.

> +	 */
> +	if (min)
> +		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
>  
>  	/*
>  	 * Because the time spend on RT/DL tasks is visible as 'lost' time to
>  	 * CFS tasks and we use the same metric to track the effective
>  	 * utilization (PELT windows are synchronized) we can directly add them
>  	 * to obtain the CPU's actual utilization.
> -	 *
> -	 * CFS and RT utilization can be boosted or capped, depending on
> -	 * utilization clamp constraints requested by currently RUNNABLE
> -	 * tasks.
> -	 * When there are no CFS RUNNABLE tasks, clamps are released and
> -	 * frequency will be gracefully reduced with the utilization decay.
>  	 */
>  	util = util_cfs + cpu_util_rt(rq);
> -	if (type == FREQUENCY_UTIL)
> -		util = uclamp_rq_util_with(rq, util, p);
> -
> -	dl_util = cpu_util_dl(rq);
> +	util += cpu_util_dl(rq);
>  
>  	/*
> -	 * For frequency selection we do not make cpu_util_dl() a permanent part
> -	 * of this sum because we want to use cpu_bw_dl() later on, but we need
> -	 * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> -	 * that we select f_max when there is no idle time.
> -	 *
> -	 * NOTE: numerical errors or stop class might cause us to not quite hit
> -	 * saturation when we should -- something for later.
> +	 * The maximum hint is a soft bandwidth requirement which can be lower
> +	 * than the actual utilization because of uclamp_max requirements
>  	 */
> -	if (util + dl_util >= max)
> -		return max;
> +	if (max)
> +		*max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
>  
> -	/*
> -	 * OTOH, for energy computation we need the estimated running time, so
> -	 * include util_dl and ignore dl_bw.
> -	 */
> -	if (type == ENERGY_UTIL)
> -		util += dl_util;
> +	if (util >= scale)
> +		return scale;
>  
>  	/*
>  	 * There is still idle time; further improve the number by using the
> @@ -7459,28 +7450,15 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
>  	 *   U' = irq + --------- * U
>  	 *                 max
>  	 */
> -	util = scale_irq_capacity(util, irq, max);
> +	util = scale_irq_capacity(util, irq, scale);
>  	util += irq;
>  
> -	/*
> -	 * Bandwidth required by DEADLINE must always be granted while, for
> -	 * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> -	 * to gracefully reduce the frequency when no tasks show up for longer
> -	 * periods of time.
> -	 *
> -	 * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> -	 * bw_dl as requested freq. However, cpufreq is not yet ready for such
> -	 * an interface. So, we only do the latter for now.
> -	 */
> -	if (type == FREQUENCY_UTIL)
> -		util += cpu_bw_dl(rq);
> -
> -	return min(max, util);
> +	return min(scale, util);
>  }
>  
>  unsigned long sched_cpu_util(int cpu)
>  {
> -	return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
> +	return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
>  }
>  #endif /* CONFIG_SMP */
>  
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 458d359f5991..38accd8c854b 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -47,7 +47,7 @@ struct sugov_cpu {
>  	u64			last_update;
>  
>  	unsigned long		util;
> -	unsigned long		bw_dl;
> +	unsigned long		bw_min;
>  
>  	/* The field below is for single-CPU policies only: */
>  #ifdef CONFIG_NO_HZ_COMMON
> @@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  	unsigned int freq = arch_scale_freq_invariant() ?
>  				policy->cpuinfo.max_freq : policy->cur;
>  
> -	util = map_util_perf(util);
>  	freq = map_util_freq(util, freq, max);
>  
>  	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
> @@ -153,14 +152,35 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  	return cpufreq_driver_resolve_freq(policy, freq);
>  }
>  
> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max)
> +{
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if (rt_rq_is_runnable(&rq->rt))
> +		return max;

I think this breaks old behavior. When uclamp_is_used() the frequency of the RT
task is determined by uclamp_min; but you revert this to the old behavior where
we always return max, no? You should check for !uclamp_is_used(); otherwise let
the rest of the function exec as usual.
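
Something like this is what I mean (an untested sketch of the
suggestion):

        if (!uclamp_is_used() && rt_rq_is_runnable(&rq->rt))
                return max;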

> +
> +	/* Add dvfs headroom to actual utilization */
> +	actual = map_util_perf(actual);

Can we rename this function please? It is not mapping anything, but applying
a dvfs headroom (I suggest apply_dvfs_headroom()). Which would make the comment
also unnecessary ;-)

> +	/* Actually we don't need to target the max performance */
> +	if (actual < max)
> +		max = actual;
> +
> +	/*
> +	 * Ensure at least minimum performance while providing more compute
> +	 * capacity when possible.
> +	 */
> +	return max(min, max);
> +}
> +
>  static void sugov_get_util(struct sugov_cpu *sg_cpu)
>  {
> -	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> -	struct rq *rq = cpu_rq(sg_cpu->cpu);
> +	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
>  
> -	sg_cpu->bw_dl = cpu_bw_dl(rq);
> -	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> -					  FREQUENCY_UTIL, NULL);
> +	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> +	sg_cpu->bw_min = map_util_perf(min);

Hmm. I don't think we need to apply_dvfs_headroom() to min here. What's the
rationale for giving headroom to the min perf requirement? I think the headroom is
only required for actual util.

And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
be mixing things up here. If not, I think we need a comment on how bw_min now
should be looked at/treated.


Thanks!

--
Qais Yousef

> +	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
>  }
>  
>  /**
> @@ -306,7 +326,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
>   */
>  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
>  {
> -	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> +	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
>  		sg_cpu->sg_policy->limits_changed = true;
>  }
>  
> @@ -407,8 +427,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
>  	    sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
>  		sg_cpu->util = prev_util;
>  
> -	cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> -				   map_util_perf(sg_cpu->util), max_cap);
> +	cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
> +				   sg_cpu->util, max_cap);
>  
>  	sg_cpu->sg_policy->last_freq_update_time = time;
>  }
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8767988242ee..ed64f2eaaa2a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7671,7 +7671,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
>  	for_each_cpu(cpu, pd_cpus) {
>  		unsigned long util = cpu_util(cpu, p, -1, 0);
>  
> -		busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
> +		busy_time += effective_cpu_util(cpu, util, NULL, NULL);
>  	}
>  
>  	eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
> @@ -7694,7 +7694,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>  	for_each_cpu(cpu, pd_cpus) {
>  		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
>  		unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
> -		unsigned long eff_util;
> +		unsigned long eff_util, min, max;
>  
>  		/*
>  		 * Performance domain frequency: utilization clamping
> @@ -7703,7 +7703,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
>  		 * NOTE: in case RT tasks are running, by default the
>  		 * FREQUENCY_UTIL's utilization can be max OPP.
>  		 */
> -		eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
> +		eff_util = effective_cpu_util(cpu, util, &min, &max);
> +
> +		/* Task's uclamp can modify min and max value */
> +		if (tsk && uclamp_is_used()) {
> +			min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
> +
> +			/*
> +			 * If there is no active max uclamp constraint,
> +			 * directly use task's one otherwise keep max
> +			 */
> +			if (uclamp_rq_is_idle(cpu_rq(cpu)))
> +				max = uclamp_eff_value(p, UCLAMP_MAX);
> +			else
> +				max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
> +		}
> +
> +		eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
>  		max_util = max(max_util, eff_util);
>  	}
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2e5a95486a42..302b451a3fd8 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2961,24 +2961,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
>  #endif
>  
>  #ifdef CONFIG_SMP
> -/**
> - * enum cpu_util_type - CPU utilization type
> - * @FREQUENCY_UTIL:	Utilization used to select frequency
> - * @ENERGY_UTIL:	Utilization used during energy calculation
> - *
> - * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
> - * need to be aggregated differently depending on the usage made of them. This
> - * enum is used within effective_cpu_util() to differentiate the types of
> - * utilization expected by the callers, and adjust the aggregation accordingly.
> - */
> -enum cpu_util_type {
> -	FREQUENCY_UTIL,
> -	ENERGY_UTIL,
> -};
> -
>  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> -				 enum cpu_util_type type,
> -				 struct task_struct *p);
> +				 unsigned long *min,
> +				 unsigned long *max);
> +
> +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> +				 unsigned long min,
> +				 unsigned long max);
> +
>  
>  /*
>   * Verify the fitness of task @p to run on @cpu taking into account the
> -- 
> 2.34.1
> 


* Re: [PATCH v3 2/2] sched/schedutil: Rework iowait boost
  2023-11-03 13:18 ` [PATCH v3 2/2] sched/schedutil: Rework iowait boost Vincent Guittot
@ 2023-11-14 20:59   ` Qais Yousef
  0 siblings, 0 replies; 15+ messages in thread
From: Qais Yousef @ 2023-11-14 20:59 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On 11/03/23 14:18, Vincent Guittot wrote:
> Use the max value that has already been computed inside sugov_get_util()
> to cap the iowait boost, and remove the dependency on
> uclamp_rq_util_with(), which is no longer used.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---

Reviewed-by: Qais Yousef <qyousef@layalina.io>

I didn't get a chance to reproduce the original issue I've seen with iowait
boost to test this, but will do and confirm back.


Thanks!

--
Qais Yousef

>  kernel/sched/cpufreq_schedutil.c | 29 ++++++++-------
>  kernel/sched/sched.h             | 60 --------------------------------
>  2 files changed, 14 insertions(+), 75 deletions(-)
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 38accd8c854b..068c895517fb 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -174,11 +174,12 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
>  	return max(min, max);
>  }
>  
> -static void sugov_get_util(struct sugov_cpu *sg_cpu)
> +static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
>  {
>  	unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
>  
>  	util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> +	util = max(util, boost);
>  	sg_cpu->bw_min = map_util_perf(min);
>  	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
>  }
> @@ -271,18 +272,16 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>   * This mechanism is designed to boost high frequently IO waiting tasks, while
>   * being more conservative on tasks which does sporadic IO operations.
>   */
> -static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
> +static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  			       unsigned long max_cap)
>  {
> -	unsigned long boost;
> -
>  	/* No boost currently required */
>  	if (!sg_cpu->iowait_boost)
> -		return;
> +		return 0;
>  
>  	/* Reset boost if the CPU appears to have been idle enough */
>  	if (sugov_iowait_reset(sg_cpu, time, false))
> -		return;
> +		return 0;
>  
>  	if (!sg_cpu->iowait_boost_pending) {
>  		/*
> @@ -291,7 +290,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  		sg_cpu->iowait_boost >>= 1;
>  		if (sg_cpu->iowait_boost < IOWAIT_BOOST_MIN) {
>  			sg_cpu->iowait_boost = 0;
> -			return;
> +			return 0;
>  		}
>  	}
>  
> @@ -301,10 +300,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
>  	 * sg_cpu->util is already in capacity scale; convert iowait_boost
>  	 * into the same scale so we can compare.
>  	 */
> -	boost = (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
> -	boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
> -	if (sg_cpu->util < boost)
> -		sg_cpu->util = boost;
> +	return (sg_cpu->iowait_boost * max_cap) >> SCHED_CAPACITY_SHIFT;
>  }
>  
>  #ifdef CONFIG_NO_HZ_COMMON
> @@ -334,6 +330,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
>  					      u64 time, unsigned long max_cap,
>  					      unsigned int flags)
>  {
> +	unsigned long boost;
> +
>  	sugov_iowait_boost(sg_cpu, time, flags);
>  	sg_cpu->last_update = time;
>  
> @@ -342,8 +340,8 @@ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
>  	if (!sugov_should_update_freq(sg_cpu->sg_policy, time))
>  		return false;
>  
> -	sugov_get_util(sg_cpu);
> -	sugov_iowait_apply(sg_cpu, time, max_cap);
> +	boost = sugov_iowait_apply(sg_cpu, time, max_cap);
> +	sugov_get_util(sg_cpu, boost);
>  
>  	return true;
>  }
> @@ -444,9 +442,10 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
>  
>  	for_each_cpu(j, policy->cpus) {
>  		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
> +		unsigned long boost;
>  
> -		sugov_get_util(j_sg_cpu);
> -		sugov_iowait_apply(j_sg_cpu, time, max_cap);
> +		boost = sugov_iowait_apply(j_sg_cpu, time, max_cap);
> +		sugov_get_util(j_sg_cpu, boost);
>  
>  		util = max(j_sg_cpu->util, util);
>  	}
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 302b451a3fd8..e3cb8e004bd1 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3025,59 +3025,6 @@ static inline bool uclamp_rq_is_idle(struct rq *rq)
>  	return rq->uclamp_flags & UCLAMP_FLAG_IDLE;
>  }
>  
> -/**
> - * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
> - * @rq:		The rq to clamp against. Must not be NULL.
> - * @util:	The util value to clamp.
> - * @p:		The task to clamp against. Can be NULL if you want to clamp
> - *		against @rq only.
> - *
> - * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
> - *
> - * If sched_uclamp_used static key is disabled, then just return the util
> - * without any clamping since uclamp aggregation at the rq level in the fast
> - * path is disabled, rendering this operation a NOP.
> - *
> - * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
> - * will return the correct effective uclamp value of the task even if the
> - * static key is disabled.
> - */
> -static __always_inline
> -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> -				  struct task_struct *p)
> -{
> -	unsigned long min_util = 0;
> -	unsigned long max_util = 0;
> -
> -	if (!static_branch_likely(&sched_uclamp_used))
> -		return util;
> -
> -	if (p) {
> -		min_util = uclamp_eff_value(p, UCLAMP_MIN);
> -		max_util = uclamp_eff_value(p, UCLAMP_MAX);
> -
> -		/*
> -		 * Ignore last runnable task's max clamp, as this task will
> -		 * reset it. Similarly, no need to read the rq's min clamp.
> -		 */
> -		if (uclamp_rq_is_idle(rq))
> -			goto out;
> -	}
> -
> -	min_util = max_t(unsigned long, min_util, uclamp_rq_get(rq, UCLAMP_MIN));
> -	max_util = max_t(unsigned long, max_util, uclamp_rq_get(rq, UCLAMP_MAX));
> -out:
> -	/*
> -	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
> -	 * RUNNABLE tasks with _different_ clamps, we can end up with an
> -	 * inversion. Fix it now when the clamps are applied.
> -	 */
> -	if (unlikely(min_util >= max_util))
> -		return min_util;
> -
> -	return clamp(util, min_util, max_util);
> -}
> -
>  /* Is the rq being capped/throttled by uclamp_max? */
>  static inline bool uclamp_rq_is_capped(struct rq *rq)
>  {
> @@ -3115,13 +3062,6 @@ static inline unsigned long uclamp_eff_value(struct task_struct *p,
>  	return SCHED_CAPACITY_SCALE;
>  }
>  
> -static inline
> -unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
> -				  struct task_struct *p)
> -{
> -	return util;
> -}
> -
>  static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
>  
>  static inline bool uclamp_is_used(void)
> -- 
> 2.34.1
> 


* Re: [PATCH v3 0/2] Rework interface between scheduler and schedutil governor
  2023-11-16 14:34   ` Peter Zijlstra
@ 2023-11-14 21:13     ` Qais Yousef
  0 siblings, 0 replies; 15+ messages in thread
From: Qais Yousef @ 2023-11-14 21:13 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot
  Cc: Rafael J. Wysocki, Vincent Guittot, mingo, juri.lelli,
	dietmar.eggemann, rostedt, bsegall, mgorman, bristot, vschneid,
	viresh.kumar, linux-kernel, linux-pm, lukasz.luba, wyes.karny,
	beata.michalska

On 11/16/23 15:34, Peter Zijlstra wrote:
> On Mon, Nov 06, 2023 at 04:05:40PM +0100, Rafael J. Wysocki wrote:
> > On Fri, Nov 3, 2023 at 2:18 PM Vincent Guittot
> > <vincent.guittot@linaro.org> wrote:
> > >
> > > Following the discussion with Qais [1] about how to handle uclamp
> > > requirements, and after syncing with him, we agreed that I should move
> > > forward on the patchset to rework the interface between the scheduler
> > > and the schedutil governor to provide more information to the latter.
> > > The scheduler (and EAS in particular) no longer needs to guess which
> > > headroom the governor wants to apply and will directly ask for the
> > > target frequency. The governor then directly gets the actual
> > > utilization and the new minimum and maximum boundaries to select this
> > > target frequency, and no longer has to deal with scheduler internals
> > > like uclamp when including the iowait boost.

Thanks a lot for taking over, Vincent, and helping with this! And sorry for
the delayed review; I was out travelling between holiday and LPC so I haven't
caught up with the list properly yet.

Besides the comments on patch 1, it looks good to me. Do we want to
generalize the way the interface is called, though, so that the scheduler is
not tightly coupled to schedutil? Speaking with Intel folks at LPC, it seemed
they rely on firmware to make a lot of decisions; if we further generalize
how the interface is called (I think we need a new cpufreq wrapper akin to
cpufreq_update_util()), governors could hook into it and do their own thing.
This could allow them to use uclamp and these min/max perf hints.

But I haven't thought this fully through, so it is something to consider
separately anyway, to not hold this up unnecessarily. Maybe we do want to
keep schedutil tightly integrated and get people to switch to schedutil
instead..

> > >
> > > [1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
> > >
> > > Changes since v2:
> > > - remove useless target variable
> > >
> > > Changes since v1:
> > > - fix a bug (always set max even when returning early)
> > > - fix typos
> > >
> > > Vincent Guittot (2):
> > >   sched/schedutil: Rework performance estimation
> > >   sched/schedutil: Rework iowait boost
> > >
> > >  include/linux/energy_model.h     |  1 -
> > >  kernel/sched/core.c              | 82 ++++++++++++-------------------
> > >  kernel/sched/cpufreq_schedutil.c | 69 ++++++++++++++++----------
> > >  kernel/sched/fair.c              | 22 +++++++--
> > >  kernel/sched/sched.h             | 84 +++-----------------------------
> > >  5 files changed, 100 insertions(+), 158 deletions(-)
> > >
> > > --
> > 
> > For the schedutil changes in the series:
> > 
> > Acked-by: Rafael J. Wysocki <rafael@kernel.org>
> > 
> > and I'm assuming this series to be targeted at sched.
> 
> Sure, I'll go queue it. Thanks!

Sorry for being late. If this wasn't queued already, I think it's worth
waiting to iron out some comments on patch 1 first.


Thanks!

--
Qais Yousef


* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
  2023-11-14 20:54   ` Qais Yousef
@ 2023-11-16 13:19   ` Lukasz Luba
  1 sibling, 0 replies; 15+ messages in thread
From: Lukasz Luba @ 2023-11-16 13:19 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: wyes.karny, beata.michalska, juri.lelli, mingo, linux-pm,
	linux-kernel, viresh.kumar, rafael, vschneid, bristot, mgorman,
	bsegall, rostedt, dietmar.eggemann, peterz, qyousef

Hi Vincent,

On 11/3/23 13:18, Vincent Guittot wrote:
> The current method of taking uclamp hints into account when estimating
> the target frequency can end up in situations where the selected target
> frequency is higher than the uclamp hints even though there is no real
> need for it. Such cases mainly happen because we are currently mixing
> the traditional scheduler utilization signal with the uclamp performance
> hints. By adding these 2 metrics, we lose important information when it
> comes to selecting the target frequency, and we have to make some
> assumptions which can't fit all cases.
> 
> Rework the interface between the scheduler and schedutil governor in order
> to propagate all information down to the cpufreq governor.
> 
> effective_cpu_util() changes and now returns the actual utilization of
> the CPU with 2 optional outputs:
> - The minimum performance for this CPU; typically the capacity to
>   handle the deadline tasks and the interrupt pressure, but also the
>   uclamp_min request when available.
> - The maximum targeted performance for this CPU, which reflects the
>   maximum level that we would prefer not to exceed. By default it will
>   be the CPU capacity, but it can be reduced because of some
>   performance hints set with uclamp. The value can be lower than the
>   actual utilization and/or the min performance level.
> 
> A new sugov_effective_cpu_perf() interface is also available to compute
> the final performance level that is targeted for the CPU after applying
> some cpufreq headroom and taking into account all inputs.
> 
> With these 2 functions, schedutil is now able to decide when it must go
> above the uclamp hints. It now also has a generic way to get the min
> performance level.
> 
> The dependency between the energy model and the cpufreq governor and
> its headroom policy doesn't exist anymore.
> 
> eenv_pd_max_util() asks schedutil for the targeted performance after
> applying the impact of the waking task.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Just in case you are waiting for me: the patch should be OK, as
you explained to me in v2. I'm not able to test it right now
on my pixel6 downstream-tracking-mainline kernel, so please
go forward without waiting for me. (The pixel6 is the only
platform I have now which normally uses uclamp.)

Regards,
Lukasz


* Re: [PATCH v3 0/2] Rework interface between scheduler and schedutil governor
  2023-11-06 15:05 ` [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Rafael J. Wysocki
@ 2023-11-16 14:34   ` Peter Zijlstra
  2023-11-14 21:13     ` Qais Yousef
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2023-11-16 14:34 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Vincent Guittot, mingo, juri.lelli, dietmar.eggemann, rostedt,
	bsegall, mgorman, bristot, vschneid, viresh.kumar, qyousef,
	linux-kernel, linux-pm, lukasz.luba, wyes.karny, beata.michalska

On Mon, Nov 06, 2023 at 04:05:40PM +0100, Rafael J. Wysocki wrote:
> On Fri, Nov 3, 2023 at 2:18 PM Vincent Guittot
> <vincent.guittot@linaro.org> wrote:
> >
> > Following the discussion with Qais [1] about how to handle uclamp
> > requirements, and after syncing with him, we agreed that I should move
> > forward on the patchset to rework the interface between the scheduler
> > and the schedutil governor to provide more information to the latter.
> > The scheduler (and EAS in particular) no longer needs to guess which
> > headroom the governor wants to apply and will directly ask for the
> > target frequency. The governor then directly gets the actual
> > utilization and the new minimum and maximum boundaries to select this
> > target frequency, and no longer has to deal with scheduler internals
> > like uclamp when including the iowait boost.
> >
> > [1] https://lore.kernel.org/lkml/CAKfTPtA5JqNCauG-rP3wGfq+p8EEVx9Tvwj6ksM3SYCwRmfCTg@mail.gmail.com/
> >
> > Changes since v2:
> > - remove useless target variable
> >
> > Changes since v1:
> > - fix a bug (always set max even when returning early)
> > - fix typos
> >
> > Vincent Guittot (2):
> >   sched/schedutil: Rework performance estimation
> >   sched/schedutil: Rework iowait boost
> >
> >  include/linux/energy_model.h     |  1 -
> >  kernel/sched/core.c              | 82 ++++++++++++-------------------
> >  kernel/sched/cpufreq_schedutil.c | 69 ++++++++++++++++----------
> >  kernel/sched/fair.c              | 22 +++++++--
> >  kernel/sched/sched.h             | 84 +++-----------------------------
> >  5 files changed, 100 insertions(+), 158 deletions(-)
> >
> > --
> 
> For the schedutil changes in the series:
> 
> Acked-by: Rafael J. Wysocki <rafael@kernel.org>
> 
> and I'm assuming this series to be targeted at sched.

Sure, I'll go queue it. Thanks!


* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-22  7:38     ` Vincent Guittot
@ 2023-11-21 21:17       ` Qais Yousef
  2023-11-23  7:47         ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Qais Yousef @ 2023-11-21 21:17 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On 11/22/23 08:38, Vincent Guittot wrote:

> > > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > > +                              unsigned long min,
> > > +                              unsigned long max)
> > > +{
> > > +     struct rq *rq = cpu_rq(cpu);
> > > +
> > > +     if (rt_rq_is_runnable(&rq->rt))
> > > +             return max;
> >
> > I think this breaks the old behavior. When uclamp_is_used(), the frequency of
> > the RT task is determined by uclamp_min; but you revert this to the old behavior
> > where we always return max, no? You should check for !uclamp_is_used(); otherwise
> > let the rest of the function execute as usual.
> 
> Yes, I took a shortcut, assuming that max would be adjusted to the max
> allowed freq for an RT task, whereas it's the min freq that is adjusted
> by uclamp; it should also be adjusted when uclamp is not used. Let me
> fix that in effective_cpu_util() and remove this early return from
> sugov_effective_cpu_perf().

+1

> > Can we rename this function please? It is not mapping anything, but applying
> > a dvfs headroom (I suggest apply_dvfs_headroom()), which would also make the
> > comment unnecessary ;-)
> 
> I didn't want to add unnecessary renaming, which often confuses
> reviewers, so I kept the current function name. But this can then be
> renamed in a follow-up patch.

Okay.

> > >  static void sugov_get_util(struct sugov_cpu *sg_cpu)
> > >  {
> > > -     unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> > > -     struct rq *rq = cpu_rq(sg_cpu->cpu);
> > > +     unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> > >
> > > -     sg_cpu->bw_dl = cpu_bw_dl(rq);
> > > -     sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> > > -                                       FREQUENCY_UTIL, NULL);
> > > +     util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > > +     sg_cpu->bw_min = map_util_perf(min);
> >
> > Hmm. I don't think we need to apply_dvfs_headroom() to min here. What's the
> > rationale for giving headroom to the min perf requirement? I think the headroom
> > is only required for the actual util.
> 
> This headroom only applies to bw_min, which is used with
> cpufreq_driver_adjust_perf(). Currently it only takes cpu_bw_dl()

It is also used in ignore_dl_rate_limit() - which is the user that caught my
eye more.

I have to admit, I always get caught out by the new adjust_perf stuff. The
downside of working on older LTS kernels for a prolonged time :p

> which seems too low because IRQ can preempt DL. So I added the average
> irq utilization into bw_min, which is only an estimate and needs some
> headroom. That being said, I can probably stay with the current behavior
> for now and remove the headroom.

I think this is more logical IMHO. DL should never need any headroom. And irq
needing headroom is questionable every time I think about it. Does an irq storm
need a dvfs headroom? I don't think it's a clear-cut answer, but I tend towards
no.

> > And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
> 
> cpu_bw_dl() is not the actual utilization by the DL task but the computed
> bandwidth, which can be seen as a min performance level

Yep. That's why I am not in favour of a dvfs headroom for DL.

But what I meant here is that in effective_cpu_util(), where we populate min
and max we have

	if (min) {
	        /*
	         * The minimum utilization returns the highest level between:
	         * - the computed DL bandwidth needed with the irq pressure which
	         *   steals time to the deadline task.
	         * - The minimum performance requirement for CFS and/or RT.
	         */
	        *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));

So if there were an RT/CFS task requesting a UCLAMP_MIN of 1024, for example,
bw_min would end up being too high, no?
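
For instance (illustrative numbers only): with irq = 0, cpu_bw_dl(rq) = 100
and a runnable CFS task with UCLAMP_MIN = 1024, the code above gives

	*min = max(0 + 100, 1024) = 1024

so bw_min, and with it the min perf handed to cpufreq_driver_adjust_perf(),
is driven entirely by uclamp_min rather than by what DL actually needs.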

Should we add another arg to sugov_effective_cpu_perf() to populate bw_min too
for the single user who wants it?


Thanks!

--
Qais Yousef

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-23  7:47         ` Vincent Guittot
@ 2023-11-21 22:09           ` Qais Yousef
  2023-11-23 13:32             ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Qais Yousef @ 2023-11-21 22:09 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On 11/23/23 08:47, Vincent Guittot wrote:

> > > > And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
> > >
> > > cpu_bw_dl() is not the actual utilization by the DL task but the computed
> > > bandwidth, which can be seen as a min performance level
> >
> > Yep. That's why I am not in favour of a dvfs headroom for DL.
> >
> > But what I meant here is that in effective_cpu_util(), where we populate min
> > and max we have
> >
> >         if (min) {
> >                 /*
> >                  * The minimum utilization returns the highest level between:
> >                  * - the computed DL bandwidth needed with the irq pressure which
> >                  *   steals time to the deadline task.
> >                  * - The minimum performance requirement for CFS and/or RT.
> >                  */
> >                 *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> >
> > So if there were an RT/CFS task requesting a UCLAMP_MIN of 1024, for example,
> > bw_min would end up being too high, no?
> 
> But in the end, we want at least uclamp_min for CFS or RT, just like we
> want at least the DL bandwidth for DL tasks.

The issue I see is that we do

static void sugov_get_util()
{
..
	util = effective_cpu_util(.., &min, ..); // min = max(irq + cpu_bw_dl(), rq_uclamp_min)
	..
	sg_cpu->bw_min = min; // bw_min can pick the rq_uclamp_min. Shouldn't it be irq + cpu_bw_dl() only?
	..
}

If yes, why is the comparison in ignore_dl_rate_limit() still correct then?

	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)

And does cpufreq_driver_adjust_perf() actually still need the sg_cpu->bw_min
arg? sg_cpu->util is already calculated based on sugov_effective_cpu_perf(),
which takes all constraints (including bw_min) into account.


Thanks!

--
Qais Yousef

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-23 13:32             ` Vincent Guittot
@ 2023-11-21 22:31               ` Qais Yousef
  0 siblings, 0 replies; 15+ messages in thread
From: Qais Yousef @ 2023-11-21 22:31 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On 11/23/23 14:32, Vincent Guittot wrote:
> On Thu, 23 Nov 2023 at 14:15, Qais Yousef <qyousef@layalina.io> wrote:
> >
> > On 11/23/23 08:47, Vincent Guittot wrote:
> >
> > > > > > And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
> > > > >
> > > > > cpu_bw_dl() is not the actual utilization by the DL task but the computed
> > > > > bandwidth, which can be seen as a min performance level
> > > >
> > > > Yep. That's why I am not in favour of a dvfs headroom for DL.
> > > >
> > > > But what I meant here is that in effective_cpu_util(), where we populate min
> > > > and max we have
> > > >
> > > >         if (min) {
> > > >                 /*
> > > >                  * The minimum utilization returns the highest level between:
> > > >                  * - the computed DL bandwidth needed with the irq pressure which
> > > >                  *   steals time to the deadline task.
> > > >                  * - The minimum performance requirement for CFS and/or RT.
> > > >                  */
> > > >                 *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> > > >
> > > > So if there were an RT/CFS task requesting a UCLAMP_MIN of 1024, for example,
> > > > bw_min would end up being too high, no?
> > >
> > > But in the end, we want at least uclamp_min for CFS or RT, just like we
> > > want at least the DL bandwidth for DL tasks.
> >
> > The issue I see is that we do
> >
> > static void sugov_get_util()
> > {
> > ..
> >         util = effective_cpu_util(.., &min, ..); // min = max(irq + cpu_bw_dl(), rq_uclamp_min)
> >         ..
> >         sg_cpu->bw_min = min; // bw_min can pick the rq_uclamp_min. Shouldn't it be irq + cpu_bw_dl() only?
> >         ..
> > }
> >
> > If yes, why is the comparison in ignore_dl_rate_limit() still correct then?
> >
> >         if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
> 
> Yes, this is to ensure enough performance for the deadline task when
> the DL bandwidth increases, without waiting for the rate limit period,
> which would prevent the system from providing enough bandwidth to the
> deadline scheduler. This remains true because bw_min is still at least
> cpu_bw_dl().

Okay, I think I get it now. Renaming bw_min to perf_min, or something
along those lines, at the next opportunity would be a good thing.

> 
> >
> > And does cpufreq_driver_adjust_perf() actually still need the sg_cpu->bw_min
> > arg? sg_cpu->util is already calculated based on sugov_effective_cpu_perf(),
> > which takes all constraints (including bw_min) into account.
> 
> cpufreq_driver_adjust_perf() is used on systems where you can't
> actually set an operating frequency but only a min and a desired
> performance level, letting the hw move freely within this range.

I see. Thanks for the explanation.


Cheers

--
Qais Yousef

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-14 20:54   ` Qais Yousef
@ 2023-11-22  7:38     ` Vincent Guittot
  2023-11-21 21:17       ` Qais Yousef
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-11-22  7:38 UTC (permalink / raw)
  To: Qais Yousef
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On Tue, 21 Nov 2023 at 17:43, Qais Yousef <qyousef@layalina.io> wrote:
>
> On 11/03/23 14:18, Vincent Guittot wrote:
> > @@ -7410,45 +7405,41 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        * update_irq_load_avg().
> >        */
> >       irq = cpu_util_irq(rq);
> > -     if (unlikely(irq >= max))
> > -             return max;
> > +     if (unlikely(irq >= scale)) {
> > +             if (min)
> > +                     *min = scale;
> > +             if (max)
> > +                     *max = scale;
> > +             return scale;
> > +     }
> > +
> > +     /*
> > +      * The minimum utilization returns the highest level between:
> > +      * - the computed DL bandwidth needed with the irq pressure which
> > +      *   steals time to the deadline task.
> > +      * - The minimum bandwidth requirement for CFS and/or RT.
>
> s/bandwidth for CFS/performance for CFS/?
>
> I've seen a lot of confusion about what each means. I think you've used bandwidth
> to refer to performance, but most of us look at bandwidth as the actual
> cpu.share/time consumed by the task. I think it's important to keep
> the two concepts distinguished and use common terminology, as this has been a
> point of contention in various discussions on and off the list.
>
> > +      */
> > +     if (min)
> > +             *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> >
> >       /*
> >        * Because the time spend on RT/DL tasks is visible as 'lost' time to
> >        * CFS tasks and we use the same metric to track the effective
> >        * utilization (PELT windows are synchronized) we can directly add them
> >        * to obtain the CPU's actual utilization.
> > -      *
> > -      * CFS and RT utilization can be boosted or capped, depending on
> > -      * utilization clamp constraints requested by currently RUNNABLE
> > -      * tasks.
> > -      * When there are no CFS RUNNABLE tasks, clamps are released and
> > -      * frequency will be gracefully reduced with the utilization decay.
> >        */
> >       util = util_cfs + cpu_util_rt(rq);
> > -     if (type == FREQUENCY_UTIL)
> > -             util = uclamp_rq_util_with(rq, util, p);
> > -
> > -     dl_util = cpu_util_dl(rq);
> > +     util += cpu_util_dl(rq);
> >
> >       /*
> > -      * For frequency selection we do not make cpu_util_dl() a permanent part
> > -      * of this sum because we want to use cpu_bw_dl() later on, but we need
> > -      * to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
> > -      * that we select f_max when there is no idle time.
> > -      *
> > -      * NOTE: numerical errors or stop class might cause us to not quite hit
> > -      * saturation when we should -- something for later.
> > +      * The maximum hint is a soft bandwidth requirement which can be lower
> > +      * than the actual utilization because of uclamp_max requirements
> >        */
> > -     if (util + dl_util >= max)
> > -             return max;
> > +     if (max)
> > +             *max = min(scale, uclamp_rq_get(rq, UCLAMP_MAX));
> >
> > -     /*
> > -      * OTOH, for energy computation we need the estimated running time, so
> > -      * include util_dl and ignore dl_bw.
> > -      */
> > -     if (type == ENERGY_UTIL)
> > -             util += dl_util;
> > +     if (util >= scale)
> > +             return scale;
> >
> >       /*
> >        * There is still idle time; further improve the number by using the
> > @@ -7459,28 +7450,15 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> >        *   U' = irq + --------- * U
> >        *                 max
> >        */
> > -     util = scale_irq_capacity(util, irq, max);
> > +     util = scale_irq_capacity(util, irq, scale);
> >       util += irq;
> >
> > -     /*
> > -      * Bandwidth required by DEADLINE must always be granted while, for
> > -      * FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
> > -      * to gracefully reduce the frequency when no tasks show up for longer
> > -      * periods of time.
> > -      *
> > -      * Ideally we would like to set bw_dl as min/guaranteed freq and util +
> > -      * bw_dl as requested freq. However, cpufreq is not yet ready for such
> > -      * an interface. So, we only do the latter for now.
> > -      */
> > -     if (type == FREQUENCY_UTIL)
> > -             util += cpu_bw_dl(rq);
> > -
> > -     return min(max, util);
> > +     return min(scale, util);
> >  }
> >
> >  unsigned long sched_cpu_util(int cpu)
> >  {
> > -     return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
> > +     return effective_cpu_util(cpu, cpu_util_cfs(cpu), NULL, NULL);
> >  }
> >  #endif /* CONFIG_SMP */
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 458d359f5991..38accd8c854b 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -47,7 +47,7 @@ struct sugov_cpu {
> >       u64                     last_update;
> >
> >       unsigned long           util;
> > -     unsigned long           bw_dl;
> > +     unsigned long           bw_min;
> >
> >       /* The field below is for single-CPU policies only: */
> >  #ifdef CONFIG_NO_HZ_COMMON
> > @@ -143,7 +143,6 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> >       unsigned int freq = arch_scale_freq_invariant() ?
> >                               policy->cpuinfo.max_freq : policy->cur;
> >
> > -     util = map_util_perf(util);
> >       freq = map_util_freq(util, freq, max);
> >
> >       if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
> > @@ -153,14 +152,35 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
> >       return cpufreq_driver_resolve_freq(policy, freq);
> >  }
> >
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max)
> > +{
> > +     struct rq *rq = cpu_rq(cpu);
> > +
> > +     if (rt_rq_is_runnable(&rq->rt))
> > +             return max;
>
> I think this breaks the old behavior. When uclamp_is_used(), the frequency of
> the RT task is determined by uclamp_min; but you revert this to the old behavior
> where we always return max, no? You should check for !uclamp_is_used(); otherwise
> let the rest of the function execute as usual.

Yes, I took a shortcut, assuming that max would be adjusted to the max
allowed freq for an RT task, whereas it's the min freq that is adjusted
by uclamp; it should also be adjusted when uclamp is not used. Let me
fix that in effective_cpu_util() and remove this early return from
sugov_effective_cpu_perf().
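
Something like the below in effective_cpu_util() (a sketch of the idea, not
the final patch):

	if (min) {
		*min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));

		/*
		 * When an RT task is runnable and uclamp is not used,
		 * keep asking for max capacity, as before.
		 */
		if (!uclamp_is_used() && rt_rq_is_runnable(&rq->rt))
			*min = max(*min, scale);
	}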

>
> > +
> > +     /* Add dvfs headroom to actual utilization */
> > +     actual = map_util_perf(actual);
>
> Can we rename this function please? It is not mapping anything, but applying
> a dvfs headroom (I suggest apply_dvfs_headroom()), which would also make the
> comment unnecessary ;-)

I didn't want to add unnecessary renaming, which often confuses
reviewers, so I kept the current function name. But this can then be
renamed in a follow-up patch.

>
> > +     /* Actually we don't need to target the max performance */
> > +     if (actual < max)
> > +             max = actual;
> > +
> > +     /*
> > +      * Ensure at least minimum performance while providing more compute
> > +      * capacity when possible.
> > +      */
> > +     return max(min, max);
> > +}
> > +
> >  static void sugov_get_util(struct sugov_cpu *sg_cpu)
> >  {
> > -     unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> > -     struct rq *rq = cpu_rq(sg_cpu->cpu);
> > +     unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> >
> > -     sg_cpu->bw_dl = cpu_bw_dl(rq);
> > -     sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> > -                                       FREQUENCY_UTIL, NULL);
> > +     util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > +     sg_cpu->bw_min = map_util_perf(min);
>
> Hmm. I don't think we need to apply_dvfs_headroom() to min here. What's the
> rationale for giving headroom to the min perf requirement? I think the headroom
> is only required for the actual util.

This headroom only applies to bw_min, which is used with
cpufreq_driver_adjust_perf(). Currently it only takes cpu_bw_dl(),
which seems too low because IRQ can preempt DL. So I added the average
irq utilization into bw_min, which is only an estimate and needs some
headroom. That being said, I can probably stay with the current behavior
for now and remove the headroom.

>
> And is it right to mix irq and uclamp_min with bw_min which is for DL? We might

cpu_bw_dl() is not the actual utilization by the DL task but the computed
bandwidth, which can be seen as a min performance level
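
As a reminder (quoting kernel/sched/sched.h from memory, so double-check
the tree), cpu_bw_dl() just rescales the bandwidth admitted for the
runnable DL tasks:

	static inline unsigned long cpu_bw_dl(struct rq *rq)
	{
		return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
	}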

> be mixing things up here. If not, I think we need a comment on how bw_min now
> should be looked at/treated.
>
>
> Thanks!
>
> --
> Qais Yousef
>
> > +     sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
> >  }
> >
> >  /**
> > @@ -306,7 +326,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
> >   */
> >  static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
> >  {
> > -     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
> > +     if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
> >               sg_cpu->sg_policy->limits_changed = true;
> >  }
> >
> > @@ -407,8 +427,8 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
> >           sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
> >               sg_cpu->util = prev_util;
> >
> > -     cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
> > -                                map_util_perf(sg_cpu->util), max_cap);
> > +     cpufreq_driver_adjust_perf(sg_cpu->cpu, sg_cpu->bw_min,
> > +                                sg_cpu->util, max_cap);
> >
> >       sg_cpu->sg_policy->last_freq_update_time = time;
> >  }
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8767988242ee..ed64f2eaaa2a 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7671,7 +7671,7 @@ static inline void eenv_pd_busy_time(struct energy_env *eenv,
> >       for_each_cpu(cpu, pd_cpus) {
> >               unsigned long util = cpu_util(cpu, p, -1, 0);
> >
> > -             busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
> > +             busy_time += effective_cpu_util(cpu, util, NULL, NULL);
> >       }
> >
> >       eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
> > @@ -7694,7 +7694,7 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
> >       for_each_cpu(cpu, pd_cpus) {
> >               struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
> >               unsigned long util = cpu_util(cpu, p, dst_cpu, 1);
> > -             unsigned long eff_util;
> > +             unsigned long eff_util, min, max;
> >
> >               /*
> >                * Performance domain frequency: utilization clamping
> > @@ -7703,7 +7703,23 @@ eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
> >                * NOTE: in case RT tasks are running, by default the
> >                * FREQUENCY_UTIL's utilization can be max OPP.
> >                */
> > -             eff_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
> > +             eff_util = effective_cpu_util(cpu, util, &min, &max);
> > +
> > +             /* Task's uclamp can modify min and max value */
> > +             if (tsk && uclamp_is_used()) {
> > +                     min = max(min, uclamp_eff_value(p, UCLAMP_MIN));
> > +
> > +                     /*
> > +                      * If there is no active max uclamp constraint,
> > +                      * directly use task's one otherwise keep max
> > +                      */
> > +                     if (uclamp_rq_is_idle(cpu_rq(cpu)))
> > +                             max = uclamp_eff_value(p, UCLAMP_MAX);
> > +                     else
> > +                             max = max(max, uclamp_eff_value(p, UCLAMP_MAX));
> > +             }
> > +
> > +             eff_util = sugov_effective_cpu_perf(cpu, eff_util, min, max);
> >               max_util = max(max_util, eff_util);
> >       }
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 2e5a95486a42..302b451a3fd8 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2961,24 +2961,14 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
> >  #endif
> >
> >  #ifdef CONFIG_SMP
> > -/**
> > - * enum cpu_util_type - CPU utilization type
> > - * @FREQUENCY_UTIL:  Utilization used to select frequency
> > - * @ENERGY_UTIL:     Utilization used during energy calculation
> > - *
> > - * The utilization signals of all scheduling classes (CFS/RT/DL) and IRQ time
> > - * need to be aggregated differently depending on the usage made of them. This
> > - * enum is used within effective_cpu_util() to differentiate the types of
> > - * utilization expected by the callers, and adjust the aggregation accordingly.
> > - */
> > -enum cpu_util_type {
> > -     FREQUENCY_UTIL,
> > -     ENERGY_UTIL,
> > -};
> > -
> >  unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
> > -                              enum cpu_util_type type,
> > -                              struct task_struct *p);
> > +                              unsigned long *min,
> > +                              unsigned long *max);
> > +
> > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > +                              unsigned long min,
> > +                              unsigned long max);
> > +
> >
> >  /*
> >   * Verify the fitness of task @p to run on @cpu taking into account the
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-21 21:17       ` Qais Yousef
@ 2023-11-23  7:47         ` Vincent Guittot
  2023-11-21 22:09           ` Qais Yousef
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-11-23  7:47 UTC (permalink / raw)
  To: Qais Yousef
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On Wed, 22 Nov 2023 at 23:01, Qais Yousef <qyousef@layalina.io> wrote:
>
> On 11/22/23 08:38, Vincent Guittot wrote:
>
> > > > +unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
> > > > +                              unsigned long min,
> > > > +                              unsigned long max)
> > > > +{
> > > > +     struct rq *rq = cpu_rq(cpu);
> > > > +
> > > > +     if (rt_rq_is_runnable(&rq->rt))
> > > > +             return max;
> > >
> > > I think this breaks the old behavior. When uclamp_is_used(), the frequency of
> > > the RT task is determined by uclamp_min; but you revert this to the old behavior
> > > where we always return max, no? You should check for !uclamp_is_used(); otherwise
> > > let the rest of the function execute as usual.
> >
> > Yes, I took a shortcut, assuming that max would be adjusted to the max
> > allowed freq for an RT task, whereas it's the min freq that is adjusted
> > by uclamp; it should also be adjusted when uclamp is not used. Let me
> > fix that in effective_cpu_util() and remove this early return from
> > sugov_effective_cpu_perf().
>
> +1
>
> > > Can we rename this function please? It is not mapping anything, but applying
> > > a dvfs headroom (I suggest apply_dvfs_headroom()), which would also make the
> > > comment unnecessary ;-)
> >
> > I didn't want to add unnecessary renaming, which often confuses
> > reviewers, so I kept the current function name. But this can then be
> > renamed in a follow-up patch.
>
> Okay.
>
> > > >  static void sugov_get_util(struct sugov_cpu *sg_cpu)
> > > >  {
> > > > -     unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
> > > > -     struct rq *rq = cpu_rq(sg_cpu->cpu);
> > > > +     unsigned long min, max, util = cpu_util_cfs_boost(sg_cpu->cpu);
> > > >
> > > > -     sg_cpu->bw_dl = cpu_bw_dl(rq);
> > > > -     sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
> > > > -                                       FREQUENCY_UTIL, NULL);
> > > > +     util = effective_cpu_util(sg_cpu->cpu, util, &min, &max);
> > > > +     sg_cpu->bw_min = map_util_perf(min);
> > >
> > > Hmm. I don't think we need to apply_dvfs_headroom() to min here. What's the
> > > rationale for giving headroom to the min perf requirement? I think the headroom
> > > is only required for the actual util.
> >
> > This headroom only applies to bw_min, which is used with
> > cpufreq_driver_adjust_perf(). Currently it only takes cpu_bw_dl()
>
> It is also used in ignore_dl_rate_limit() - which is the user that caught my
> eye more.
>
> I have to admit, I always get caught out by the new adjust_perf stuff. The
> downside of working on older LTS kernels for a prolonged time :p
>
> > which seems too low because IRQ can preempt DL. So I added the average
> > irq utilization into bw_min, which is only an estimate and needs some
> > headroom. That being said, I can probably stay with the current behavior
> > for now and remove the headroom.
>
> I think this is more logical IMHO. DL should never need any headroom. And irq
> needing headroom is questionable every time I think about it. Does an irq storm
> need a dvfs headroom? I don't think it's a clear-cut answer, but I tend towards
> no.
>
> > > And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
> >
> > cpu_bw_dl() is not the actual utilization by the DL task but the computed
> > bandwidth, which can be seen as a min performance level
>
> Yep. That's why I am not in favour of a dvfs headroom for DL.
>
> But what I meant here is that in effective_cpu_util(), where we populate min
> and max we have
>
>         if (min) {
>                 /*
>                  * The minimum utilization returns the highest level between:
>                  * - the computed DL bandwidth needed with the irq pressure which
>                  *   steals time to the deadline task.
>                  * - The minimum performance requirement for CFS and/or RT.
>                  */
>                 *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
>
> So if there were an RT/CFS task requesting a UCLAMP_MIN of 1024, for example,
> bw_min would end up being too high, no?

But in the end, we want at least uclamp_min for CFS or RT, just like we
want at least the DL bandwidth for DL tasks.

>
> Should we add another arg to sugov_effective_cpu_perf() to populate bw_min too
> for the single user who wants it?
>
>
> Thanks!
>
> --
> Qais Yousef

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/2] sched/schedutil: Rework performance estimation
  2023-11-21 22:09           ` Qais Yousef
@ 2023-11-23 13:32             ` Vincent Guittot
  2023-11-21 22:31               ` Qais Yousef
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2023-11-23 13:32 UTC (permalink / raw)
  To: Qais Yousef
  Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, rafael, viresh.kumar, linux-kernel,
	linux-pm, lukasz.luba, wyes.karny, beata.michalska

On Thu, 23 Nov 2023 at 14:15, Qais Yousef <qyousef@layalina.io> wrote:
>
> On 11/23/23 08:47, Vincent Guittot wrote:
>
> > > > > And is it right to mix irq and uclamp_min with bw_min which is for DL? We might
> > > >
> > > > cpu_bw_dl() is not the actual utilization by the DL task but the computed
> > > > bandwidth, which can be seen as a min performance level
> > >
> > > Yep. That's why I am not in favour of a dvfs headroom for DL.
> > >
> > > But what I meant here is that in effective_cpu_util(), where we populate min
> > > and max we have
> > >
> > >         if (min) {
> > >                 /*
> > >                  * The minimum utilization returns the highest level between:
> > >                  * - the computed DL bandwidth needed with the irq pressure which
> > >                  *   steals time to the deadline task.
> > >                  * - The minimum performance requirement for CFS and/or RT.
> > >                  */
> > >                 *min = max(irq + cpu_bw_dl(rq), uclamp_rq_get(rq, UCLAMP_MIN));
> > >
> > > So if there were an RT/CFS task requesting a UCLAMP_MIN of 1024, for example,
> > > bw_min would end up being too high, no?
> >
> > But in the end, we want at least uclamp_min for CFS or RT, just like we
> > want at least the DL bandwidth for DL tasks.
>
> The issue I see is that we do
>
> static void sugov_get_util()
> {
> ..
>         util = effective_cpu_util(.., &min, ..); // min = max(irq + cpu_bw_dl(), rq_uclamp_min)
>         ..
>         sg_cpu->bw_min = min; // bw_min can pick the rq_uclamp_min. Shouldn't it be irq + cpu_bw_dl() only?
>         ..
> }
>
> If yes, why is the comparison in ignore_dl_rate_limit() still correct then?
>
>         if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)

Yes, this is to ensure enough performance for the deadline task when
the DL bandwidth increases, without waiting for the rate limit period,
which would prevent the system from providing enough bandwidth to the
deadline scheduler. This remains true because bw_min is still at least
cpu_bw_dl().
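
For instance (illustrative numbers only): if bw_min was 200 at the last
frequency update and a new DL reservation raises cpu_bw_dl() to 300, then
300 > 200, limits_changed is set, and the next update bypasses the rate
limit so the higher DL bandwidth takes effect immediately.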

>
> And does cpufreq_driver_adjust_perf() actually still need the sg_cpu->bw_min
> arg? sg_cpu->util is already calculated based on sugov_effective_cpu_perf(),
> which takes all constraints (including bw_min) into account.

cpufreq_driver_adjust_perf() is used on systems where you can't
actually set an operating frequency but only a min and a desired
performance level, letting the hw move freely within this range.
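
To illustrate the shape of that interface (a sketch only; foo_adjust_perf(),
hw_max_perf() and hw_set_perf_range() are made-up stand-ins for a real
driver's .adjust_perf callback and its register writes):

	static void foo_adjust_perf(unsigned int cpu, unsigned long min_perf,
				    unsigned long target_perf,
				    unsigned long capacity)
	{
		/* rescale the [0..capacity] hints to the hardware's own scale */
		unsigned long hw_min = hw_max_perf(cpu) * min_perf / capacity;
		unsigned long hw_des = hw_max_perf(cpu) * target_perf / capacity;

		/* the hw picks any performance level within this range */
		hw_set_perf_range(cpu, hw_min, hw_des);
	}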

>
>
> Thanks!
>
> --
> Qais Yousef

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2023-11-23 13:40 UTC | newest]

Thread overview: 15+ messages
2023-11-03 13:18 [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Vincent Guittot
2023-11-03 13:18 ` [PATCH v3 1/2] sched/schedutil: Rework performance estimation Vincent Guittot
2023-11-14 20:54   ` Qais Yousef
2023-11-22  7:38     ` Vincent Guittot
2023-11-21 21:17       ` Qais Yousef
2023-11-23  7:47         ` Vincent Guittot
2023-11-21 22:09           ` Qais Yousef
2023-11-23 13:32             ` Vincent Guittot
2023-11-21 22:31               ` Qais Yousef
2023-11-16 13:19   ` Lukasz Luba
2023-11-03 13:18 ` [PATCH v3 2/2] sched/schedutil: Rework iowait boost Vincent Guittot
2023-11-14 20:59   ` Qais Yousef
2023-11-06 15:05 ` [PATCH v3 0/2] Rework interface between scheduler and schedutil governor Rafael J. Wysocki
2023-11-16 14:34   ` Peter Zijlstra
2023-11-14 21:13     ` Qais Yousef
