linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] Fixes for uclamp and capacity inversion detection
@ 2023-01-12 12:27 Qais Yousef
  2023-01-12 12:27 ` [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings Qais Yousef
  2023-01-12 12:27 ` [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection Qais Yousef
  0 siblings, 2 replies; 7+ messages in thread
From: Qais Yousef @ 2023-01-12 12:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Dietmar Eggemann
  Cc: Rafael J. Wysocki, Viresh Kumar, linux-kernel, linux-pm,
	Lukasz Luba, Wei Wang, Xuewen Yan, Hank, Jonathan JMChen,
	Qais Yousef

Patch 1 addresses the smatch warnings reported by Dan Carpenter.

Patch 2 fixes a couple of issues in the capacity inversion logic that were
reported by Dietmar.

Patch 3, which was an RFC, has been dropped. The discussion has settled on
the patch below, which is now handled separately from this series:

	https://lore.kernel.org/lkml/20221228165415.3436-1-vincent.guittot@linaro.org/


Changes in v3:

	* Fix the commit message in patch 2.
	* Drop patch 3.

Changes in v2:

	* Patch 1: Improve indentation as suggested by Dietmar.
	* Patch 2: Make sure to hold rcu_read_lock() where we need it, as
	  it's not held in all paths.


LINK v1:
	* https://lore.kernel.org/lkml/20221127141742.1644023-1-qyousef@layalina.io/

LINK v2:
	Sent in-reply-to v1

	* https://lore.kernel.org/lkml/20221208145108.452032-1-qyousef@layalina.io/
	* https://lore.kernel.org/lkml/20221208145409.453308-1-qyousef@layalina.io/

Qais Yousef (2):
  sched/uclamp: Fix uninitialized variable warnings
  sched/fair: Fixes for capacity inversion detection

 kernel/sched/fair.c | 48 +++++++++++++++++++++++++--------------------
 1 file changed, 27 insertions(+), 21 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings
  2023-01-12 12:27 [PATCH v3 0/2] Fixes for uclamp and capacity inversion detection Qais Yousef
@ 2023-01-12 12:27 ` Qais Yousef
  2023-01-13  8:13   ` Vincent Guittot
  2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
  2023-01-12 12:27 ` [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection Qais Yousef
  1 sibling, 2 replies; 7+ messages in thread
From: Qais Yousef @ 2023-01-12 12:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Dietmar Eggemann
  Cc: Rafael J. Wysocki, Viresh Kumar, linux-kernel, linux-pm,
	Lukasz Luba, Wei Wang, Xuewen Yan, Hank, Jonathan JMChen,
	Qais Yousef, kernel test robot, Dan Carpenter

Addresses the following warnings:

> config: riscv-randconfig-m031-20221111
> compiler: riscv64-linux-gcc (GCC) 12.1.0
>
> smatch warnings:
> kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_min'.
> kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_max'.
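
These fire because util_min and util_max were only assigned inside the
uclamp_is_used() branch, while util_fits_cpu() reads them on every
iteration. The fix is to initialize both from the task's own clamps at
declaration. As a rough standalone illustration (hypothetical user-space
C, not kernel code; only the names mirror the kernel's):

	#include <stdio.h>
	#include <stdbool.h>

	/* Stand-in for the kernel predicate; pretend uclamp is compiled out. */
	static bool uclamp_is_used(void) { return false; }

	int main(void)
	{
		unsigned long p_util_min = 0, p_util_max = 1024;
		/*
		 * The fix in a nutshell: start from the task's clamps so both
		 * values are defined even when the branch below is skipped.
		 */
		unsigned long util_min = p_util_min, util_max = p_util_max;

		if (uclamp_is_used()) {
			/* max-aggregate with the runqueue clamps here */
		}

		/* Before the fix, this read could see uninitialized values. */
		printf("util_min=%lu util_max=%lu\n", util_min, util_max);
		return 0;
	}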

Fixes: 244226035a1f ("sched/uclamp: Fix fits_capacity() check in feec()")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---
 kernel/sched/fair.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e9d906a9bba9..5a8e75d4a17b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7353,10 +7353,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	eenv_task_busy_time(&eenv, p, prev_cpu);
 
 	for (; pd; pd = pd->next) {
+		unsigned long util_min = p_util_min, util_max = p_util_max;
 		unsigned long cpu_cap, cpu_thermal_cap, util;
 		unsigned long cur_delta, max_spare_cap = 0;
 		unsigned long rq_util_min, rq_util_max;
-		unsigned long util_min, util_max;
 		unsigned long prev_spare_cap = 0;
 		int max_spare_cap_cpu = -1;
 		unsigned long base_energy;
@@ -7375,6 +7375,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		eenv.pd_cap = 0;
 
 		for_each_cpu(cpu, cpus) {
+			struct rq *rq = cpu_rq(cpu);
+
 			eenv.pd_cap += cpu_thermal_cap;
 
 			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
@@ -7393,24 +7395,19 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			 * much capacity we can get out of the CPU; this is
 			 * aligned with sched_cpu_util().
 			 */
-			if (uclamp_is_used()) {
-				if (uclamp_rq_is_idle(cpu_rq(cpu))) {
-					util_min = p_util_min;
-					util_max = p_util_max;
-				} else {
-					/*
-					 * Open code uclamp_rq_util_with() except for
-					 * the clamp() part. Ie: apply max aggregation
-					 * only. util_fits_cpu() logic requires to
-					 * operate on non clamped util but must use the
-					 * max-aggregated uclamp_{min, max}.
-					 */
-					rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
-					rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
-
-					util_min = max(rq_util_min, p_util_min);
-					util_max = max(rq_util_max, p_util_max);
-				}
+			if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
+				/*
+				 * Open code uclamp_rq_util_with() except for
+				 * the clamp() part. Ie: apply max aggregation
+				 * only. util_fits_cpu() logic requires to
+				 * operate on non clamped util but must use the
+				 * max-aggregated uclamp_{min, max}.
+				 */
+				rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
+				rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
+
+				util_min = max(rq_util_min, p_util_min);
+				util_max = max(rq_util_max, p_util_max);
 			}
 			if (!util_fits_cpu(util, util_min, util_max, cpu))
 				continue;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection
  2023-01-12 12:27 [PATCH v3 0/2] Fixes for uclamp and capacity inversion detection Qais Yousef
  2023-01-12 12:27 ` [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings Qais Yousef
@ 2023-01-12 12:27 ` Qais Yousef
  2023-01-13  8:14   ` Vincent Guittot
  2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
  1 sibling, 2 replies; 7+ messages in thread
From: Qais Yousef @ 2023-01-12 12:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Dietmar Eggemann
  Cc: Rafael J. Wysocki, Viresh Kumar, linux-kernel, linux-pm,
	Lukasz Luba, Wei Wang, Xuewen Yan, Hank, Jonathan JMChen,
	Qais Yousef

Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). Ensure the right protections are
applied.

Also skip capacity inversion detection for our own pd; checking against
it was an error.
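
Reduced to a sketch, the locking pattern this applies looks like the
below. This is not the exact kernel function: detect_inversion() and
pd_inverts_cpu() are hypothetical names standing in for the real code
and its per-pd capacity comparison.

	/*
	 * Sketch: walk the RCU-protected perf-domain list only when EAS is
	 * enabled, under rcu_read_lock(), skipping our own pd.
	 */
	static void detect_inversion(struct rq *rq)
	{
		struct perf_domain *pd;

		if (!sched_energy_enabled())
			return;

		rcu_read_lock();
		rq->cpu_capacity_inverted = 0;

		for (pd = rcu_dereference(rq->rd->pd); pd; pd = pd->next) {
			/* We can't be inverted against our own pd. */
			if (cpumask_test_cpu(cpu_of(rq), perf_domain_span(pd)))
				continue;

			if (pd_inverts_cpu(rq, pd)) {	/* hypothetical check */
				/* The real code records the inverted capacity. */
				rq->cpu_capacity_inverted = 1;
				break;
			}
		}
		rcu_read_unlock();
	}

sched_energy_enabled() (rather than the asym-cpucapacity static key) is
the condition under which the perf domains are expected to be set up, and
the read-side critical section is what makes rcu_dereference(rq->rd->pd)
and the list walk safe.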

Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5a8e75d4a17b..34239d3118f0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8992,16 +8992,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();
 
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;
 
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
@@ -9026,6 +9033,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}
 
 	trace_sched_cpu_capacity_tp(rq);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings
  2023-01-12 12:27 ` [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings Qais Yousef
@ 2023-01-13  8:13   ` Vincent Guittot
  2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
  1 sibling, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2023-01-13  8:13 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Ingo Molnar, Peter Zijlstra, Dietmar Eggemann, Rafael J. Wysocki,
	Viresh Kumar, linux-kernel, linux-pm, Lukasz Luba, Wei Wang,
	Xuewen Yan, Hank, Jonathan JMChen, kernel test robot,
	Dan Carpenter

On Thu, 12 Jan 2023 at 13:27, Qais Yousef <qyousef@layalina.io> wrote:
>
> Addresses the following warnings:
>
> > config: riscv-randconfig-m031-20221111
> > compiler: riscv64-linux-gcc (GCC) 12.1.0
> >
> > smatch warnings:
> > kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_min'.
> > kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_max'.
>
> Fixes: 244226035a1f ("sched/uclamp: Fix fits_capacity() check in feec()")
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <error27@gmail.com>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>  kernel/sched/fair.c | 35 ++++++++++++++++-------------------
>  1 file changed, 16 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e9d906a9bba9..5a8e75d4a17b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7353,10 +7353,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>         eenv_task_busy_time(&eenv, p, prev_cpu);
>
>         for (; pd; pd = pd->next) {
> +               unsigned long util_min = p_util_min, util_max = p_util_max;
>                 unsigned long cpu_cap, cpu_thermal_cap, util;
>                 unsigned long cur_delta, max_spare_cap = 0;
>                 unsigned long rq_util_min, rq_util_max;
> -               unsigned long util_min, util_max;
>                 unsigned long prev_spare_cap = 0;
>                 int max_spare_cap_cpu = -1;
>                 unsigned long base_energy;
> @@ -7375,6 +7375,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>                 eenv.pd_cap = 0;
>
>                 for_each_cpu(cpu, cpus) {
> +                       struct rq *rq = cpu_rq(cpu);
> +
>                         eenv.pd_cap += cpu_thermal_cap;
>
>                         if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
> @@ -7393,24 +7395,19 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>                          * much capacity we can get out of the CPU; this is
>                          * aligned with sched_cpu_util().
>                          */
> -                       if (uclamp_is_used()) {
> -                               if (uclamp_rq_is_idle(cpu_rq(cpu))) {
> -                                       util_min = p_util_min;
> -                                       util_max = p_util_max;
> -                               } else {
> -                                       /*
> -                                        * Open code uclamp_rq_util_with() except for
> -                                        * the clamp() part. Ie: apply max aggregation
> -                                        * only. util_fits_cpu() logic requires to
> -                                        * operate on non clamped util but must use the
> -                                        * max-aggregated uclamp_{min, max}.
> -                                        */
> -                                       rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
> -                                       rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
> -
> -                                       util_min = max(rq_util_min, p_util_min);
> -                                       util_max = max(rq_util_max, p_util_max);
> -                               }
> +                       if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
> +                               /*
> +                                * Open code uclamp_rq_util_with() except for
> +                                * the clamp() part. Ie: apply max aggregation
> +                                * only. util_fits_cpu() logic requires to
> +                                * operate on non clamped util but must use the
> +                                * max-aggregated uclamp_{min, max}.
> +                                */
> +                               rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
> +                               rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
> +
> +                               util_min = max(rq_util_min, p_util_min);
> +                               util_max = max(rq_util_max, p_util_max);
>                         }
>                         if (!util_fits_cpu(util, util_min, util_max, cpu))
>                                 continue;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection
  2023-01-12 12:27 ` [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection Qais Yousef
@ 2023-01-13  8:14   ` Vincent Guittot
  2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
  1 sibling, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2023-01-13  8:14 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Ingo Molnar, Peter Zijlstra, Dietmar Eggemann, Rafael J. Wysocki,
	Viresh Kumar, linux-kernel, linux-pm, Lukasz Luba, Wei Wang,
	Xuewen Yan, Hank, Jonathan JMChen

On Thu, 12 Jan 2023 at 13:27, Qais Yousef <qyousef@layalina.io> wrote:
>
> Traversing the Perf Domains requires rcu_read_lock() to be held and is
> conditional on sched_energy_enabled(). Ensure the right protections are
> applied.
>
> Also skip capacity inversion detection for our own pd; checking against
> it was an error.
>
> Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
> Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> ---
>  kernel/sched/fair.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5a8e75d4a17b..34239d3118f0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8992,16 +8992,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
>          *   * Thermal pressure will impact all cpus in this perf domain
>          *     equally.
>          */
> -       if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> +       if (sched_energy_enabled()) {
>                 unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
> -               struct perf_domain *pd = rcu_dereference(rq->rd->pd);
> +               struct perf_domain *pd;
> +
> +               rcu_read_lock();
>
> +               pd = rcu_dereference(rq->rd->pd);
>                 rq->cpu_capacity_inverted = 0;
>
>                 for (; pd; pd = pd->next) {
>                         struct cpumask *pd_span = perf_domain_span(pd);
>                         unsigned long pd_cap_orig, pd_cap;
>
> +                       /* We can't be inverted against our own pd */
> +                       if (cpumask_test_cpu(cpu_of(rq), pd_span))
> +                               continue;
> +
>                         cpu = cpumask_any(pd_span);
>                         pd_cap_orig = arch_scale_cpu_capacity(cpu);
>
> @@ -9026,6 +9033,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
>                                 break;
>                         }
>                 }
> +
> +               rcu_read_unlock();
>         }
>
>         trace_sched_cpu_capacity_tp(rq);
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [tip: sched/urgent] sched/uclamp: Fix uninitialized variable warnings
  2023-01-12 12:27 ` [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings Qais Yousef
  2023-01-13  8:13   ` Vincent Guittot
@ 2023-01-13 12:08   ` tip-bot2 for Qais Yousef
  1 sibling, 0 replies; 7+ messages in thread
From: tip-bot2 for Qais Yousef @ 2023-01-13 12:08 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: kernel test robot, Dan Carpenter, Qais Yousef (Google),
	Peter Zijlstra (Intel),
	Vincent Guittot, x86, linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     e26fd28db82899be71b4b949527373d0a6be1e65
Gitweb:        https://git.kernel.org/tip/e26fd28db82899be71b4b949527373d0a6be1e65
Author:        Qais Yousef <qyousef@layalina.io>
AuthorDate:    Thu, 12 Jan 2023 12:27:07 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 13 Jan 2023 11:40:21 +01:00

sched/uclamp: Fix uninitialized variable warnings

Addresses the following warnings:

> config: riscv-randconfig-m031-20221111
> compiler: riscv64-linux-gcc (GCC) 12.1.0
>
> smatch warnings:
> kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_min'.
> kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_max'.

Fixes: 244226035a1f ("sched/uclamp: Fix fits_capacity() check in feec()")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20230112122708.330667-2-qyousef@layalina.io
---
 kernel/sched/fair.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c36aa54..be43731 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7229,10 +7229,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	eenv_task_busy_time(&eenv, p, prev_cpu);
 
 	for (; pd; pd = pd->next) {
+		unsigned long util_min = p_util_min, util_max = p_util_max;
 		unsigned long cpu_cap, cpu_thermal_cap, util;
 		unsigned long cur_delta, max_spare_cap = 0;
 		unsigned long rq_util_min, rq_util_max;
-		unsigned long util_min, util_max;
 		unsigned long prev_spare_cap = 0;
 		int max_spare_cap_cpu = -1;
 		unsigned long base_energy;
@@ -7251,6 +7251,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		eenv.pd_cap = 0;
 
 		for_each_cpu(cpu, cpus) {
+			struct rq *rq = cpu_rq(cpu);
+
 			eenv.pd_cap += cpu_thermal_cap;
 
 			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
@@ -7269,24 +7271,19 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			 * much capacity we can get out of the CPU; this is
 			 * aligned with sched_cpu_util().
 			 */
-			if (uclamp_is_used()) {
-				if (uclamp_rq_is_idle(cpu_rq(cpu))) {
-					util_min = p_util_min;
-					util_max = p_util_max;
-				} else {
-					/*
-					 * Open code uclamp_rq_util_with() except for
-					 * the clamp() part. Ie: apply max aggregation
-					 * only. util_fits_cpu() logic requires to
-					 * operate on non clamped util but must use the
-					 * max-aggregated uclamp_{min, max}.
-					 */
-					rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
-					rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
-
-					util_min = max(rq_util_min, p_util_min);
-					util_max = max(rq_util_max, p_util_max);
-				}
+			if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
+				/*
+				 * Open code uclamp_rq_util_with() except for
+				 * the clamp() part. Ie: apply max aggregation
+				 * only. util_fits_cpu() logic requires to
+				 * operate on non clamped util but must use the
+				 * max-aggregated uclamp_{min, max}.
+				 */
+				rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
+				rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
+
+				util_min = max(rq_util_min, p_util_min);
+				util_max = max(rq_util_max, p_util_max);
 			}
 			if (!util_fits_cpu(util, util_min, util_max, cpu))
 				continue;

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [tip: sched/urgent] sched/fair: Fixes for capacity inversion detection
  2023-01-12 12:27 ` [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection Qais Yousef
  2023-01-13  8:14   ` Vincent Guittot
@ 2023-01-13 12:08   ` tip-bot2 for Qais Yousef
  1 sibling, 0 replies; 7+ messages in thread
From: tip-bot2 for Qais Yousef @ 2023-01-13 12:08 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Qais Yousef (Google), Peter Zijlstra (Intel),
	Vincent Guittot, x86, linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     da07d2f9c153e457e845d4dcfdd13568d71d18a4
Gitweb:        https://git.kernel.org/tip/da07d2f9c153e457e845d4dcfdd13568d71d18a4
Author:        Qais Yousef <qyousef@layalina.io>
AuthorDate:    Thu, 12 Jan 2023 12:27:08 
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 13 Jan 2023 11:40:21 +01:00

sched/fair: Fixes for capacity inversion detection

Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). Ensure the right protections are
applied.

Also skip capacity inversion detection for our own pd; checking against
it was an error.

Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20230112122708.330667-3-qyousef@layalina.io
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index be43731..0f87369 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8868,16 +8868,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();
 
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;
 
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
@@ -8902,6 +8909,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}
 
 	trace_sched_cpu_capacity_tp(rq);

^ permalink raw reply related	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2023-01-13 12:15 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2023-01-12 12:27 [PATCH v3 0/2] Fixes for uclamp and capacity inversion detection Qais Yousef
2023-01-12 12:27 ` [PATCH v3 1/2] sched/uclamp: Fix uninitialized variable warnings Qais Yousef
2023-01-13  8:13   ` Vincent Guittot
2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
2023-01-12 12:27 ` [PATCH v3 2/2] sched/fair: Fixes for capacity inversion detection Qais Yousef
2023-01-13  8:14   ` Vincent Guittot
2023-01-13 12:08   ` [tip: sched/urgent] " tip-bot2 for Qais Yousef
