From: Qais Yousef <qais.yousef@arm.com>
To: Ingo Molnar <mingo@kernel.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org, Xuewen Yan <xuewen.yan94@gmail.com>,
	Lukasz Luba <lukasz.luba@arm.com>, Wei Wang <wvw@google.com>,
	Jonathan JMChen <Jonathan.JMChen@mediatek.com>,
	Hank <han.lin@mediatek.com>, Qais Yousef <qais.yousef@arm.com>
Subject: [PATCH v2 8/9] sched/fair: Detect capacity inversion
Date: Thu,  4 Aug 2022 15:36:08 +0100	[thread overview]
Message-ID: <20220804143609.515789-9-qais.yousef@arm.com> (raw)
In-Reply-To: <20220804143609.515789-1-qais.yousef@arm.com>

Check each performance domain to see if thermal pressure is causing its
capacity to be lower than that of another performance domain.
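
For example (illustrative numbers only): if a big CPU's capacity_orig
is 1024 but thermal throttling caps its effective capacity at 750,
while a medium CPU's capacity_orig is 768, the big CPU's performance
domain is in capacity inversion and cpu_capacity_inverted is set to
750 for its CPUs.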

We assume that each performance domain has CPUs with the same
capacities, which is similar to an assumption made in energy_model.c

We also assume that thermal pressure impacts all CPUs in a performance
domain equally.

If there are multiple performance domains with the same capacity_orig,
the one under higher thermal pressure is flagged as being in capacity
inversion.
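
For example (again illustrative numbers): two medium domains both have
capacity_orig = 768. If thermal pressure drops one of them to
768 - 200 = 568 while the other stays at 768, the throttled domain is
flagged as inverted and cpu_capacity_inverted is set to 568.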

The new cpu_in_capacity_inversion() should help users know when the
information in capacity_orig is not reliable, so they can opt in to
using the inverted capacity as the 'actual' capacity_orig.
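
A rough sketch of the intended usage (the helper below is hypothetical
and for illustration only; the real user of this flag is wired up in
the next patch, util_fits_cpu()):

	/*
	 * Illustration only: fall back to capacity_orig_of() unless the
	 * CPU is flagged as being in capacity inversion.
	 */
	static inline unsigned long effective_capacity_orig(int cpu)
	{
		unsigned long inv_cap = cpu_in_capacity_inversion(cpu);

		return inv_cap ? inv_cap : capacity_orig_of(cpu);
	}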

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/fair.c  | 63 +++++++++++++++++++++++++++++++++++++++++---
 kernel/sched/sched.h | 19 +++++++++++++
 2 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 59ba7106ddc6..cb32dc9a057f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8659,16 +8659,73 @@ static unsigned long scale_rt_capacity(int cpu)
 
 static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 {
+	unsigned long capacity_orig = arch_scale_cpu_capacity(cpu);
 	unsigned long capacity = scale_rt_capacity(cpu);
 	struct sched_group *sdg = sd->groups;
+	struct rq *rq = cpu_rq(cpu);
 
-	cpu_rq(cpu)->cpu_capacity_orig = arch_scale_cpu_capacity(cpu);
+	rq->cpu_capacity_orig = capacity_orig;
 
 	if (!capacity)
 		capacity = 1;
 
-	cpu_rq(cpu)->cpu_capacity = capacity;
-	trace_sched_cpu_capacity_tp(cpu_rq(cpu));
+	rq->cpu_capacity = capacity;
+
+	/*
+	 * Detect if the performance domain is in capacity inversion state.
+	 *
+	 * Capacity inversion happens when another perf domain with equal or
+	 * lower capacity_orig_of() ends up having higher capacity than this
+	 * domain after subtracting thermal pressure.
+	 *
+	 * We only take into account thermal pressure in this detection as it's
+	 * the only metric that actually results in *real* reduction of
+	 * capacity due to performance points (OPPs) being dropped/becoming
+	 * unreachable due to thermal throttling.
+	 *
+	 * We assume:
+	 *   * That all cpus in a perf domain have the same capacity_orig
+	 *     (same uArch).
+	 *   * Thermal pressure will impact all cpus in this perf domain
+	 *     equally.
+	 */
+	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
+		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+
+		rq->cpu_capacity_inverted = 0;
+
+		for (; pd; pd = pd->next) {
+			struct cpumask *pd_span = perf_domain_span(pd);
+			unsigned long pd_cap_orig, pd_cap;
+
+			cpu = cpumask_any(pd_span);
+			pd_cap_orig = arch_scale_cpu_capacity(cpu);
+
+			if (capacity_orig < pd_cap_orig)
+				continue;
+
+			/*
+			 * Handle the case of multiple perf domains having the
+			 * same capacity_orig but one of them being under
+			 * higher thermal pressure. We record it as capacity
+			 * inversion.
+			 */
+			if (capacity_orig == pd_cap_orig) {
+				pd_cap = pd_cap_orig - thermal_load_avg(cpu_rq(cpu));
+
+				if (pd_cap > inv_cap) {
+					rq->cpu_capacity_inverted = inv_cap;
+					break;
+				}
+			} else if (pd_cap_orig > inv_cap) {
+				rq->cpu_capacity_inverted = inv_cap;
+				break;
+			}
+		}
+	}
+
+	trace_sched_cpu_capacity_tp(rq);
 
 	sdg->sgc->capacity = capacity;
 	sdg->sgc->min_capacity = capacity;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index caf017f7def6..541a70fa55b3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1033,6 +1033,7 @@ struct rq {
 
 	unsigned long		cpu_capacity;
 	unsigned long		cpu_capacity_orig;
+	unsigned long		cpu_capacity_inverted;
 
 	struct callback_head	*balance_callback;
 
@@ -2865,6 +2866,24 @@ static inline unsigned long capacity_orig_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity_orig;
 }
 
+/*
+ * Returns inverted capacity if the CPU is in capacity inversion state.
+ * 0 otherwise.
+ *
+ * Capacity inversion detection only considers thermal impact where actual
+ * performance points (OPPs) get dropped.
+ *
+ * Capacity inversion state happens when another performance domain that has
+ * equal or lower capacity_orig_of() becomes effectively larger than the perf
+ * domain this CPU belongs to due to thermal pressure throttling it hard.
+ *
+ * See comment in update_cpu_capacity().
+ */
+static inline unsigned long cpu_in_capacity_inversion(int cpu)
+{
+	return cpu_rq(cpu)->cpu_capacity_inverted;
+}
+
 /**
  * enum cpu_util_type - CPU utilization type
  * @FREQUENCY_UTIL:	Utilization used to select frequency
-- 
2.25.1


Thread overview: 33+ messages
2022-08-04 14:36 [PATCH v2 0/9] Fix relationship between uclamp and fits_capacity() Qais Yousef
2022-08-04 14:36 ` [PATCH v2 1/9] sched/uclamp: Fix relationship between uclamp and migration margin Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-11-04 17:35   ` [PATCH v2 1/9] " Valentin Schneider
2022-11-05 19:24     ` Qais Yousef
2022-11-07 18:58       ` Valentin Schneider
2022-11-08 11:33         ` Qais Yousef
2022-11-09 10:33   ` Dietmar Eggemann
2022-08-04 14:36 ` [PATCH v2 2/9] sched/uclamp: Make task_fits_capacity() use util_fits_cpu() Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` [PATCH v2 3/9] sched/uclamp: Fix fits_capacity() check in feec() Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` [PATCH v2 4/9] sched/uclamp: Make select_idle_capacity() use util_fits_cpu() Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` [PATCH v2 5/9] sched/uclamp: Make asym_fits_capacity() " Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` [PATCH v2 6/9] sched/uclamp: Make cpu_overutilized() " Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` [PATCH v2 7/9] sched/uclamp: Cater for uclamp in find_energy_efficient_cpu()'s early exit condition Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-08-04 14:36 ` Qais Yousef [this message]
2022-10-28  6:42   ` [tip: sched/core] sched/fair: Detect capacity inversion tip-bot2 for Qais Yousef
2022-11-09 10:42   ` [PATCH v2 8/9] " Dietmar Eggemann
2022-11-12 19:35     ` Qais Yousef
2022-11-16 17:45       ` Dietmar Eggemann
2022-11-20 21:30         ` Qais Yousef
2022-08-04 14:36 ` [PATCH v2 9/9] sched/fair: Consider capacity inversion in util_fits_cpu() Qais Yousef
2022-10-28  6:42   ` [tip: sched/core] " tip-bot2 for Qais Yousef
2022-11-04 17:35   ` [PATCH v2 9/9] " Valentin Schneider
2022-11-05 20:41     ` Qais Yousef
2022-11-07 18:58       ` Valentin Schneider
2022-11-08 11:51         ` Qais Yousef
2022-11-09 10:43       ` Dietmar Eggemann
