* [PATCH RESEND v3 0/5]
@ 2017-10-05 11:45 Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function Brendan Jackman
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra

This patchset removes an unnecessary comparison and fixes some corner cases in
the find_idlest_group path of select_task_rq_fair.

Changes v2 -> v3:

 - 1/5 through 4/5 unchanged.

 - 5/5: Essentially undid [1]. Vincent pointed out that this isn't really a
   bugfix but more of a change in policy. Now find_idlest_group still returns
   NULL when the local group is idlest.

   I previously said some extra code could be removed in this case, but after
   more careful inspection I changed my mind.

Changes v1 -> v2:

 - Reworked task affinity checks to avoid repeating them, as per Vincent's
   suggestion. To avoid excessive indentation this required moving code into its
   own function, as per PeterZ's suggestion.

 - Split up the patches.

[1]
 - Altered the caller of find_idlest_group so that it now unconditionally uses
   find_idlest_group_cpu (formerly find_idlest_cpu). This means we more often
   use the maligned "perspective-switching" logic at the bottom of the
   while(sd) loop (annotated below), but it also means the task placement
   algorithm is consistent regardless of whether the idlest group is local or
   remote.

   As mentioned in 5/5 an alternative would be to just initialise @new_cpu to
   @cpu instead of @prev_cpu (which is what PeterZ suggested in v1 review). In
   that case, some extra code could be removed in & around
   find_idlest_group_cpu.
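
For reference, the "perspective-switching" logic mentioned above is the tail
of the while(sd) loop, copied from patch 1/5 with annotations added here (the
comments marked "<--" are not part of the patch):

	/* Now try balancing at a lower domain level of new_cpu */
	cpu = new_cpu;			/* <-- switch perspective to the chosen CPU */
	weight = sd->span_weight;	/* <-- width of the level just searched */
	sd = NULL;
	for_each_domain(cpu, tmp) {	/* <-- walk new_cpu's own hierarchy, bottom up */
		if (weight <= tmp->span_weight)
			break;		/* <-- stop at the level we came from */
		if (tmp->flags & sd_flag)
			sd = tmp;	/* <-- keep searching among new_cpu's domains */
	}
	/* while loop will break here if sd == NULL */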

Brendan Jackman (5):
  sched/fair: Move select_task_rq_fair slow-path into its own function
  sched/fair: Remove unnecessary comparison with -1
  sched/fair: Fix find_idlest_group when local group is not allowed
  sched/fair: Fix use of find_idlest_group when no groups are allowed
  sched/fair: Fix use of find_idlest_group when local group is idlest

 kernel/sched/fair.c | 93 ++++++++++++++++++++++++++++++++---------------------
 1 file changed, 56 insertions(+), 37 deletions(-)

-- 
2.14.1

* [PATCH RESEND v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function
  2017-10-05 11:45 [PATCH RESEND v3 0/5] Brendan Jackman
@ 2017-10-05 11:45 ` Brendan Jackman
  2017-10-10 10:59   ` [tip:sched/core] sched/fair: Move select_task_rq_fair() " tip-bot for Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 2/5] sched/fair: Remove unnecessary comparison with -1 Brendan Jackman
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Ingo Molnar,
	Morten Rasmussen

In preparation for changes that would otherwise require adding a new
level of indentation to the while(sd) loop, create a new function
find_idlest_cpu which contains this loop, and rename the existing
find_idlest_cpu to find_idlest_group_cpu.

Code inside the while(sd) loop is unchanged. @new_cpu is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 83 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 35 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 350dbec01523..7df72795b20b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5897,10 +5897,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 }
 
 /*
- * find_idlest_cpu - find the idlest cpu among the cpus in group.
+ * find_idlest_group_cpu - find the idlest cpu among the cpus in group.
  */
 static int
-find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
+find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
@@ -5949,6 +5949,50 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
 }
 
+static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
+				  int cpu, int prev_cpu, int sd_flag)
+{
+	int new_cpu = prev_cpu;
+
+	while (sd) {
+		struct sched_group *group;
+		struct sched_domain *tmp;
+		int weight;
+
+		if (!(sd->flags & sd_flag)) {
+			sd = sd->child;
+			continue;
+		}
+
+		group = find_idlest_group(sd, p, cpu, sd_flag);
+		if (!group) {
+			sd = sd->child;
+			continue;
+		}
+
+		new_cpu = find_idlest_group_cpu(group, p, cpu);
+		if (new_cpu == -1 || new_cpu == cpu) {
+			/* Now try balancing at a lower domain level of cpu */
+			sd = sd->child;
+			continue;
+		}
+
+		/* Now try balancing at a lower domain level of new_cpu */
+		cpu = new_cpu;
+		weight = sd->span_weight;
+		sd = NULL;
+		for_each_domain(cpu, tmp) {
+			if (weight <= tmp->span_weight)
+				break;
+			if (tmp->flags & sd_flag)
+				sd = tmp;
+		}
+		/* while loop will break here if sd == NULL */
+	}
+
+	return new_cpu;
+}
+
 #ifdef CONFIG_SCHED_SMT
 
 static inline void set_idle_cores(int cpu, int val)
@@ -6306,39 +6350,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
 
-	} else while (sd) {
-		struct sched_group *group;
-		int weight;
-
-		if (!(sd->flags & sd_flag)) {
-			sd = sd->child;
-			continue;
-		}
-
-		group = find_idlest_group(sd, p, cpu, sd_flag);
-		if (!group) {
-			sd = sd->child;
-			continue;
-		}
-
-		new_cpu = find_idlest_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
-			/* Now try balancing at a lower domain level of cpu */
-			sd = sd->child;
-			continue;
-		}
-
-		/* Now try balancing at a lower domain level of new_cpu */
-		cpu = new_cpu;
-		weight = sd->span_weight;
-		sd = NULL;
-		for_each_domain(cpu, tmp) {
-			if (weight <= tmp->span_weight)
-				break;
-			if (tmp->flags & sd_flag)
-				sd = tmp;
-		}
-		/* while loop will break here if sd == NULL */
+	} else {
+		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	}
 	rcu_read_unlock();
 
-- 
2.14.1

* [PATCH RESEND v3 2/5] sched/fair: Remove unnecessary comparison with -1
  2017-10-05 11:45 [PATCH RESEND v3 0/5] Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function Brendan Jackman
@ 2017-10-05 11:45 ` Brendan Jackman
  2017-10-10 10:59   ` [tip:sched/core] " tip-bot for Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 3/5] sched/fair: Fix find_idlest_group when local group is not allowed Brendan Jackman
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Ingo Molnar,
	Morten Rasmussen

Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu"), find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1, so the comparison with -1 in
find_idlest_cpu can be removed.
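
For reference, the tail of find_idlest_group_cpu, quoted from patch 1/5:

	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;

Since that commit, least_loaded_cpu always holds a real CPU (as far as I
recall it is initialised to this_cpu, which is not visible in the diffs in
this series), so the -1 check in the caller is dead code.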

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7df72795b20b..8b12c76a8b62 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5971,7 +5971,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 		}
 
 		new_cpu = find_idlest_group_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
+		if (new_cpu == cpu) {
 			/* Now try balancing at a lower domain level of cpu */
 			sd = sd->child;
 			continue;
-- 
2.14.1

* [PATCH RESEND v3 3/5] sched/fair: Fix find_idlest_group when local group is not allowed
  2017-10-05 11:45 [PATCH RESEND v3 0/5] Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 2/5] sched/fair: Remove unnecessary comparison with -1 Brendan Jackman
@ 2017-10-05 11:45 ` Brendan Jackman
  2017-10-10 11:00   ` [tip:sched/core] sched/fair: Fix find_idlest_group() " tip-bot for Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 5/5] sched/fair: Fix use of find_idlest_group when local group is idlest Brendan Jackman
  4 siblings, 1 reply; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Ingo Molnar,
	Morten Rasmussen

When the local group is not allowed we do not modify this_*_load from
their initial value of 0. That means that the load checks at the end
of find_idlest_group cause us to incorrectly return NULL. Fixing the
initial values to ULONG_MAX means we will instead return the idlest
remote group in that case.
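
To illustrate the effect of the sentinel, here is a toy userspace model. It
is NOT the kernel code (the real checks also involve imbalance margins,
avg_load and spare capacity); it only shows why a sentinel of 0 means no
remote group can ever win, whereas ULONG_MAX means any remote group wins:

	#include <stdio.h>
	#include <limits.h>

	/* 1 means "return NULL, i.e. keep the local group"; 0 means "go remote". */
	static int keep_local(unsigned long this_runnable_load,
			      unsigned long remote_min_load)
	{
		/* Simplified stand-in for the final checks in find_idlest_group(). */
		return remote_min_load >= this_runnable_load;
	}

	int main(void)
	{
		unsigned long remote_min_load = 1024;	/* any plausible remote load */

		/* Local group not allowed: this_*_load keeps its initial value. */
		printf("sentinel 0:         keep_local=%d\n",
		       keep_local(0, remote_min_load));
		printf("sentinel ULONG_MAX: keep_local=%d\n",
		       keep_local(ULONG_MAX, remote_min_load));
		return 0;
	}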

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8b12c76a8b62..933f5c9d95d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5775,8 +5775,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
 	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
-	unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
+	unsigned long min_runnable_load = ULONG_MAX;
+	unsigned long this_runnable_load = ULONG_MAX;
+	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
 	unsigned long most_spare = 0, this_spare = 0;
 	int load_idx = sd->forkexec_idx;
 	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
-- 
2.14.1

* [PATCH RESEND v3 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed
  2017-10-05 11:45 [PATCH RESEND v3 0/5] Brendan Jackman
                   ` (2 preceding siblings ...)
  2017-10-05 11:45 ` [PATCH RESEND v3 3/5] sched/fair: Fix find_idlest_group when local group is not allowed Brendan Jackman
@ 2017-10-05 11:45 ` Brendan Jackman
  2017-10-10 11:00   ` [tip:sched/core] sched/fair: Fix usage of find_idlest_group() " tip-bot for Brendan Jackman
  2017-10-05 11:45 ` [PATCH RESEND v3 5/5] sched/fair: Fix use of find_idlest_group when local group is idlest Brendan Jackman
  4 siblings, 1 reply; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Ingo Molnar,
	Morten Rasmussen

When p is allowed on none of the CPUs in the sched_domain, we
currently return NULL from find_idlest_group, and pointlessly
continue the search on lower sched_domain levels (where p is also not
allowed) before returning prev_cpu regardless (as we have not updated
new_cpu).

Add an explicit check for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 933f5c9d95d1..4a14ebca4d79 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5768,6 +5768,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
@@ -5955,6 +5957,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 {
 	int new_cpu = prev_cpu;
 
+	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
+		return prev_cpu;
+
 	while (sd) {
 		struct sched_group *group;
 		struct sched_domain *tmp;
-- 
2.14.1

* [PATCH RESEND v3 5/5] sched/fair: Fix use of find_idlest_group when local group is idlest
  2017-10-05 11:45 [PATCH RESEND v3 0/5] Brendan Jackman
                   ` (3 preceding siblings ...)
  2017-10-05 11:45 ` [PATCH RESEND v3 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed Brendan Jackman
@ 2017-10-05 11:45 ` Brendan Jackman
  2017-10-10 11:01   ` [tip:sched/core] sched/fair: Fix usage of find_idlest_group() when the " tip-bot for Brendan Jackman
  4 siblings, 1 reply; 11+ messages in thread
From: Brendan Jackman @ 2017-10-05 11:45 UTC (permalink / raw)
  To: linux-kernel, Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Ingo Molnar,
	Morten Rasmussen

find_idlest_group returns NULL when the local group is idlest. The
caller then continues the find_idlest_group search at a lower level
of the current CPU's sched_domain hierarchy. find_idlest_group_cpu is
not consulted and, crucially, @new_cpu is not updated. This means the
search is pointless and we return @prev_cpu from select_task_rq_fair.

This is fixed by initialising @new_cpu to @cpu instead of
@prev_cpu.
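
Spelling it out against the code from patch 1/5 (excerpted; the annotations
are added here for illustration):

	int new_cpu = prev_cpu;			/* before this patch */

	while (sd) {
		...
		group = find_idlest_group(sd, p, cpu, sd_flag);
		if (!group) {			/* local group is idlest */
			sd = sd->child;		/* descend; new_cpu is never written */
			continue;
		}
		...
	}

	return new_cpu;				/* still prev_cpu: the walk achieved nothing */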

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4a14ebca4d79..82a8e206657f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5955,7 +5955,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
 				  int cpu, int prev_cpu, int sd_flag)
 {
-	int new_cpu = prev_cpu;
+	int new_cpu = cpu;
 
 	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
 		return prev_cpu;
-- 
2.14.1

* [tip:sched/core] sched/fair: Move select_task_rq_fair() slow-path into its own function
  2017-10-05 11:45 ` [PATCH RESEND v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function Brendan Jackman
@ 2017-10-10 10:59   ` tip-bot for Brendan Jackman
  0 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Brendan Jackman @ 2017-10-10 10:59 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: vincent.guittot, jbacik, tglx, efault, hpa, dietmar.eggemann,
	mingo, linux-kernel, morten.rasmussen, torvalds, josef,
	brendan.jackman, peterz

Commit-ID:  18bd1b4bd53aba81d76d55e91a68310a227dc187
Gitweb:     https://git.kernel.org/tip/18bd1b4bd53aba81d76d55e91a68310a227dc187
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Thu, 5 Oct 2017 12:45:12 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 10 Oct 2017 11:45:33 +0200

sched/fair: Move select_task_rq_fair() slow-path into its own function

In preparation for changes that would otherwise require adding a new
level of indentation to the while(sd) loop, create a new function
find_idlest_cpu() which contains this loop, and rename the existing
find_idlest_cpu() to find_idlest_group_cpu().

Code inside the while(sd) loop is unchanged. @new_cpu is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair().

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171005114516.18617-2-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 83 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 35 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cf3e816..dd4f253 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5859,10 +5859,10 @@ skip_spare:
 }
 
 /*
- * find_idlest_cpu - find the idlest cpu among the cpus in group.
+ * find_idlest_group_cpu - find the idlest cpu among the cpus in group.
  */
 static int
-find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
+find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
@@ -5911,6 +5911,50 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
 }
 
+static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
+				  int cpu, int prev_cpu, int sd_flag)
+{
+	int new_cpu = prev_cpu;
+
+	while (sd) {
+		struct sched_group *group;
+		struct sched_domain *tmp;
+		int weight;
+
+		if (!(sd->flags & sd_flag)) {
+			sd = sd->child;
+			continue;
+		}
+
+		group = find_idlest_group(sd, p, cpu, sd_flag);
+		if (!group) {
+			sd = sd->child;
+			continue;
+		}
+
+		new_cpu = find_idlest_group_cpu(group, p, cpu);
+		if (new_cpu == -1 || new_cpu == cpu) {
+			/* Now try balancing at a lower domain level of cpu */
+			sd = sd->child;
+			continue;
+		}
+
+		/* Now try balancing at a lower domain level of new_cpu */
+		cpu = new_cpu;
+		weight = sd->span_weight;
+		sd = NULL;
+		for_each_domain(cpu, tmp) {
+			if (weight <= tmp->span_weight)
+				break;
+			if (tmp->flags & sd_flag)
+				sd = tmp;
+		}
+		/* while loop will break here if sd == NULL */
+	}
+
+	return new_cpu;
+}
+
 #ifdef CONFIG_SCHED_SMT
 
 static inline void set_idle_cores(int cpu, int val)
@@ -6277,39 +6321,8 @@ pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
 
-	} else while (sd) {
-		struct sched_group *group;
-		int weight;
-
-		if (!(sd->flags & sd_flag)) {
-			sd = sd->child;
-			continue;
-		}
-
-		group = find_idlest_group(sd, p, cpu, sd_flag);
-		if (!group) {
-			sd = sd->child;
-			continue;
-		}
-
-		new_cpu = find_idlest_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
-			/* Now try balancing at a lower domain level of cpu */
-			sd = sd->child;
-			continue;
-		}
-
-		/* Now try balancing at a lower domain level of new_cpu */
-		cpu = new_cpu;
-		weight = sd->span_weight;
-		sd = NULL;
-		for_each_domain(cpu, tmp) {
-			if (weight <= tmp->span_weight)
-				break;
-			if (tmp->flags & sd_flag)
-				sd = tmp;
-		}
-		/* while loop will break here if sd == NULL */
+	} else {
+		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	}
 	rcu_read_unlock();
 

* [tip:sched/core] sched/fair: Remove unnecessary comparison with -1
  2017-10-05 11:45 ` [PATCH RESEND v3 2/5] sched/fair: Remove unnecessary comparison with -1 Brendan Jackman
@ 2017-10-10 10:59   ` tip-bot for Brendan Jackman
  0 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Brendan Jackman @ 2017-10-10 10:59 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dietmar.eggemann, brendan.jackman, torvalds, morten.rasmussen,
	linux-kernel, josef, peterz, tglx, efault, vincent.guittot,
	mingo, jbacik, hpa

Commit-ID:  e90381eaecf6d59c60fe396838e0e99789531419
Gitweb:     https://git.kernel.org/tip/e90381eaecf6d59c60fe396838e0e99789531419
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Thu, 5 Oct 2017 12:45:13 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 10 Oct 2017 11:45:34 +0200

sched/fair: Remove unnecessary comparison with -1

Since commit:

  83a0a96a5f26 ("sched/fair: Leverage the idle state info when choosing the "idlest" cpu")

find_idlest_group_cpu() (formerly find_idlest_cpu) no longer returns -1,
so we can simplify the checking of the return value in find_idlest_cpu().

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171005114516.18617-3-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dd4f253..bdd28fb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5933,7 +5933,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 		}
 
 		new_cpu = find_idlest_group_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
+		if (new_cpu == cpu) {
 			/* Now try balancing at a lower domain level of cpu */
 			sd = sd->child;
 			continue;

* [tip:sched/core] sched/fair: Fix find_idlest_group() when local group is not allowed
  2017-10-05 11:45 ` [PATCH RESEND v3 3/5] sched/fair: Fix find_idlest_group when local group is not allowed Brendan Jackman
@ 2017-10-10 11:00   ` tip-bot for Brendan Jackman
  0 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Brendan Jackman @ 2017-10-10 11:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, josef, morten.rasmussen, jbacik, dietmar.eggemann, peterz,
	hpa, torvalds, vincent.guittot, efault, mingo, brendan.jackman,
	linux-kernel

Commit-ID:  0d10ab952e99f3e9f374898e93f45452b81e5711
Gitweb:     https://git.kernel.org/tip/0d10ab952e99f3e9f374898e93f45452b81e5711
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Thu, 5 Oct 2017 12:45:14 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 10 Oct 2017 11:45:34 +0200

sched/fair: Fix find_idlest_group() when local group is not allowed

When the local group is not allowed we do not modify this_*_load from
their initial value of 0. That means that the load checks at the end
of find_idlest_group cause us to incorrectly return NULL. Fixing the
initial values to ULONG_MAX means we will instead return the idlest
remote group in that case.

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171005114516.18617-4-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bdd28fb..cca1835 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5737,8 +5737,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
 	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
-	unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
+	unsigned long min_runnable_load = ULONG_MAX;
+	unsigned long this_runnable_load = ULONG_MAX;
+	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
 	unsigned long most_spare = 0, this_spare = 0;
 	int load_idx = sd->forkexec_idx;
 	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;

* [tip:sched/core] sched/fair: Fix usage of find_idlest_group() when no groups are allowed
  2017-10-05 11:45 ` [PATCH RESEND v3 4/5] sched/fair: Fix use of find_idlest_group when no groups are allowed Brendan Jackman
@ 2017-10-10 11:00   ` tip-bot for Brendan Jackman
  0 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Brendan Jackman @ 2017-10-10 11:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: morten.rasmussen, josef, hpa, brendan.jackman, torvalds,
	vincent.guittot, peterz, tglx, mingo, linux-kernel, efault,
	jbacik, dietmar.eggemann

Commit-ID:  6fee85ccbc76e8aeba43dc120c5fa3c5409a4e2c
Gitweb:     https://git.kernel.org/tip/6fee85ccbc76e8aeba43dc120c5fa3c5409a4e2c
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Thu, 5 Oct 2017 12:45:15 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 10 Oct 2017 11:45:35 +0200

sched/fair: Fix usage of find_idlest_group() when no groups are allowed

When 'p' is not allowed on any of the CPUs in the sched_domain, we
currently return NULL from find_idlest_group(), and pointlessly
continue the search on lower sched_domain levels (where 'p' is also not
allowed) before returning prev_cpu regardless (as we have not updated
new_cpu).

Add an explicit check for this case, and add a comment to
find_idlest_group(). Now when find_idlest_group() returns NULL, it always
means that the local group is allowed and idlest.

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171005114516.18617-5-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cca1835..ed80d6b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5730,6 +5730,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
@@ -5917,6 +5919,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 {
 	int new_cpu = prev_cpu;
 
+	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
+		return prev_cpu;
+
 	while (sd) {
 		struct sched_group *group;
 		struct sched_domain *tmp;

* [tip:sched/core] sched/fair: Fix usage of find_idlest_group() when the local group is idlest
  2017-10-05 11:45 ` [PATCH RESEND v3 5/5] sched/fair: Fix use of find_idlest_group when local group is idlest Brendan Jackman
@ 2017-10-10 11:01   ` tip-bot for Brendan Jackman
  0 siblings, 0 replies; 11+ messages in thread
From: tip-bot for Brendan Jackman @ 2017-10-10 11:01 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dietmar.eggemann, peterz, brendan.jackman, mingo, linux-kernel,
	efault, morten.rasmussen, tglx, josef, vincent.guittot, jbacik,
	torvalds, hpa

Commit-ID:  93f50f90247e3e926bbe9830df089c64a5cec236
Gitweb:     https://git.kernel.org/tip/93f50f90247e3e926bbe9830df089c64a5cec236
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Thu, 5 Oct 2017 12:45:16 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 10 Oct 2017 11:45:36 +0200

sched/fair: Fix usage of find_idlest_group() when the local group is idlest

find_idlest_group() returns NULL when the local group is idlest. The
caller then continues the find_idlest_group() search at a lower level
of the current CPU's sched_domain hierarchy. find_idlest_group_cpu() is
not consulted and, crucially, @new_cpu is not updated. This means the
search is pointless and we return @prev_cpu from select_task_rq_fair().

This is fixed by initialising @new_cpu to @cpu instead of @prev_cpu.

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171005114516.18617-6-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ed80d6b..56f343b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5917,7 +5917,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
 				  int cpu, int prev_cpu, int sd_flag)
 {
-	int new_cpu = prev_cpu;
+	int new_cpu = cpu;
 
 	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
 		return prev_cpu;
