* [PATCH v5 0/3] sched/fair: Introduce scaled capacity awareness in enqueue
@ 2017-10-07 23:48 Rohit Jain
  2017-10-07 23:48 ` [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path Rohit Jain
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Rohit Jain @ 2017-10-07 23:48 UTC (permalink / raw)
  To: linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, atish.patra, vincent.guittot,
	dietmar.eggemann, morten.rasmussen

Changelog:
---------------------------------------------------------------------------
v1->v2:
* Changed the dynamic threshold calculation so that global state can be
  avoided.

v2->v3:
* Split up the patch for find_idlest_cpu and select_idle_sibling code
  paths.

v3->v4:
* Rebased it to peterz's tree (apologies for wrong tree for v3)

v4->v5:
* Changed the threshold to 768 from 819 for easier shifts
* Changed the find_idlest_cpu code path to be simpler
* Changed the select_idle_core code path to search for
  idlest+full_capacity core 
* Added scaled capacity awareness to wake_affine_idle code path
---------------------------------------------------------------------------

During OLTP workload runs, threads can end up on CPUs with a lot of
softIRQ activity, thus delaying progress. For more reliable and
faster runs, if the system can spare it, these threads should be
scheduled on CPUs with lower IRQ/RT activity.

Currently, the scheduler takes into account the original capacity of
CPUs when providing 'hints' for select_idle_sibling code path to return
an idle CPU. However, the rest of the select_idle_* code paths remain
capacity agnostic. Further, these code paths are only aware of the
original capacity and not the capacity stolen by IRQ/RT activity.

This patch series introduces capacity awareness in the scheduler (CAS),
which avoids CPUs whose capacities have been reduced (due to IRQ/RT
activity) when trying to schedule threads (on the push side). This
awareness has been added to the fair scheduling class.

It does so using the following algorithm (a minimal standalone sketch of
the selection policy follows the list):

1) The scaled capacities are already calculated (as part of the rt_avg
accounting).

2) Any CPU running below ~75% of its original capacity (the 768/1024
threshold) is considered to be running low on capacity.

3) During the idle CPU search, if a CPU is found to be running low on
capacity, it is skipped when better CPUs are available.

4) If none of the CPUs are better in terms of idleness and capacity, then
the low-capacity CPU is considered to be the best available CPU.
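
For reference, below is a minimal, self-contained C sketch of the
selection policy described above. This is illustrative only and not the
actual patch: struct cpu_info, pick_idle_cpu() and the array in main()
are hypothetical stand-ins for the kernel's per-CPU state, and
full_capacity() mirrors the 768/1024 (~75%) test used in the patches.

#include <stdbool.h>
#include <stdio.h>

struct cpu_info {
	unsigned int cap;	/* capacity left after IRQ/RT scaling */
	unsigned int cap_orig;	/* original capacity */
	bool idle;
};

/* Mirrors the full_capacity() test in the patches: ~75% threshold. */
static bool full_capacity(const struct cpu_info *c)
{
	return c->cap >= (c->cap_orig * 768 >> 10);
}

/*
 * Prefer an idle CPU running at full capacity; otherwise remember the
 * idle CPU with the most remaining capacity as a backup; return -1 if
 * no CPU is idle at all.
 */
static int pick_idle_cpu(const struct cpu_info *cpus, int nr)
{
	int i, backup = -1;
	unsigned int backup_cap = 0;

	for (i = 0; i < nr; i++) {
		if (!cpus[i].idle)
			continue;
		if (full_capacity(&cpus[i]))
			return i;
		if (cpus[i].cap > backup_cap) {
			backup_cap = cpus[i].cap;
			backup = i;
		}
	}
	return backup;
}

int main(void)
{
	/* CPU 1 is idle but IRQ-loaded; CPU 2 is idle at full capacity. */
	struct cpu_info cpus[] = {
		{ .cap = 1024, .cap_orig = 1024, .idle = false },
		{ .cap =  600, .cap_orig = 1024, .idle = true  },
		{ .cap = 1024, .cap_orig = 1024, .idle = true  },
	};

	printf("picked CPU %d\n", pick_idle_cpu(cpus, 3));	/* prints 2 */
	return 0;
}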

The performance numbers:
---------------------------------------------------------------------------
CAS shows up to 1.5% improvement on x86 when running a 'SELECT' database
workload.

For the microbenchmark results, I used hackbench in process mode while
running ping on CPUs 0, 1 and 2 as:
'ping -l 10000 -q -s 10 -f hostX'

The results below should be read as:

* 'Baseline without ping' is how the workload would've behaved if there
  was no IRQ activity.

* Compare 'Baseline with ping' and 'Baseline without ping' to see the
  effect of ping.

* Compare 'Baseline with ping' and 'CAS with ping' to see the improvement
  CAS can give over the baseline.

Following are the runtimes (in seconds) with hackbench and the ping
activity described above (lower is better), on a 44-core, 2-socket x86
machine:

+---------------+------+--------+--------+
|Num.           |CAS   |Baseline|Baseline|
|Tasks          |with  |with    |without |
|(groups of 40) |ping  |ping    |ping    |
+---------------+------+--------+--------+
|               |Mean  |Mean    |Mean    |
+---------------+------+--------+--------+
|1              | 0.55 | 0.59   | 0.53   |
|2              | 0.66 | 0.81   | 0.51   |
|4              | 0.99 | 1.16   | 0.95   |
|8              | 1.92 | 1.93   | 1.88   |
|16             | 3.24 | 3.26   | 3.15   |
|32             | 5.93 | 5.98   | 5.68   |
|64             | 11.55| 11.94  | 10.89  |
+---------------+------+--------+--------+



Rohit Jain (3):
  sched/fair: Introduce scaled capacity awareness in find_idlest_cpu
    code path
  sched/fair: Introduce scaled capacity awareness in select_idle_sibling
    code path
  sched/fair: Introduce scaled capacity awareness in wake_affine_idle
    code path

 kernel/sched/fair.c | 66 ++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 53 insertions(+), 13 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path
  2017-10-07 23:48 [PATCH v5 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
@ 2017-10-07 23:48 ` Rohit Jain
  2017-10-12 17:03   ` Rohit Jain
  2017-10-07 23:48 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling " Rohit Jain
  2017-10-07 23:48 ` [PATCH 3/3] sched/fair: Introduce scaled capacity awareness in wake_affine_idle " Rohit Jain
  2 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-10-07 23:48 UTC (permalink / raw)
  To: linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, atish.patra, vincent.guittot,
	dietmar.eggemann, morten.rasmussen

While looking for idle CPUs for a waking task, we should also account
for the delays caused by the bandwidth consumed by RT/IRQ tasks.

This patch does that by trying to find a higher capacity CPU with
minimum wake up latency.

Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
---
 kernel/sched/fair.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0107280..eaede50 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5579,6 +5579,11 @@ static unsigned long capacity_orig_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity_orig;
 }
 
+static inline bool full_capacity(int cpu)
+{
+	return (capacity_of(cpu) >= (capacity_orig_of(cpu)*768 >> 10));
+}
+
 static unsigned long cpu_avg_load_per_task(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@@ -5865,8 +5870,10 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
 	u64 latest_idle_timestamp = 0;
+	unsigned int backup_cap = 0;
 	int least_loaded_cpu = this_cpu;
 	int shallowest_idle_cpu = -1;
+	int shallowest_idle_cpu_backup = -1;
 	int i;
 
 	/* Check if we have any choice: */
@@ -5876,6 +5883,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	/* Traverse only the allowed CPUs */
 	for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
 		if (idle_cpu(i)) {
+			int idle_candidate = -1;
 			struct rq *rq = cpu_rq(i);
 			struct cpuidle_state *idle = idle_get_state(rq);
 			if (idle && idle->exit_latency < min_exit_latency) {
@@ -5886,7 +5894,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 				 */
 				min_exit_latency = idle->exit_latency;
 				latest_idle_timestamp = rq->idle_stamp;
-				shallowest_idle_cpu = i;
+				idle_candidate = i;
 			} else if ((!idle || idle->exit_latency == min_exit_latency) &&
 				   rq->idle_stamp > latest_idle_timestamp) {
 				/*
@@ -5895,7 +5903,16 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 				 * a warmer cache.
 				 */
 				latest_idle_timestamp = rq->idle_stamp;
-				shallowest_idle_cpu = i;
+				idle_candidate = i;
+			}
+
+			if (idle_candidate != -1) {
+				if (full_capacity(idle_candidate)) {
+					shallowest_idle_cpu = idle_candidate;
+				} else if (capacity_of(idle_candidate) > backup_cap) {
+					shallowest_idle_cpu_backup = idle_candidate;
+					backup_cap = capacity_of(idle_candidate);
+				}
 			}
 		} else if (shallowest_idle_cpu == -1) {
 			load = weighted_cpuload(cpu_rq(i));
@@ -5906,7 +5923,11 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 		}
 	}
 
-	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
+	if (shallowest_idle_cpu != -1)
+		return shallowest_idle_cpu;
+
+	return (shallowest_idle_cpu_backup != -1 ?
+		shallowest_idle_cpu_backup : least_loaded_cpu);
 }
 
 #ifdef CONFIG_SCHED_SMT
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-10-07 23:48 [PATCH v5 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
  2017-10-07 23:48 ` [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path Rohit Jain
@ 2017-10-07 23:48 ` Rohit Jain
  2017-10-10 15:54   ` Atish Patra
  2017-10-07 23:48 ` [PATCH 3/3] sched/fair: Introduce scaled capacity awareness in wake_affine_idle " Rohit Jain
  2 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-10-07 23:48 UTC (permalink / raw)
  To: linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, atish.patra, vincent.guittot,
	dietmar.eggemann, morten.rasmussen

While looking for CPUs to place running tasks on, the scheduler
completely ignores the capacity stolen away by RT/IRQ tasks. This patch
changes that behavior to also take the scaled capacity into account.

Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
---
 kernel/sched/fair.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eaede50..5b1f7b9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6004,7 +6004,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 
 		for_each_cpu(cpu, cpu_smt_mask(core)) {
 			cpumask_clear_cpu(cpu, cpus);
-			if (!idle_cpu(cpu))
+			if (!idle_cpu(cpu) || !full_capacity(cpu))
 				idle = false;
 		}
 
@@ -6025,7 +6025,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
  */
 static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
-	int cpu;
+	int cpu, backup_cpu = -1;
+	unsigned int backup_cap = 0;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -6033,11 +6034,17 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
-		if (idle_cpu(cpu))
-			return cpu;
+		if (idle_cpu(cpu)) {
+			if (full_capacity(cpu))
+				return cpu;
+			if (capacity_of(cpu) > backup_cap) {
+				backup_cap = capacity_of(cpu);
+				backup_cpu = cpu;
+			}
+		}
 	}
 
-	return -1;
+	return backup_cpu;
 }
 
 #else /* CONFIG_SCHED_SMT */
@@ -6066,6 +6073,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	u64 time, cost;
 	s64 delta;
 	int cpu, nr = INT_MAX;
+	int backup_cpu = -1;
+	unsigned int backup_cap = 0;
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
 	if (!this_sd)
@@ -6096,10 +6105,19 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 			return -1;
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
-		if (idle_cpu(cpu))
-			break;
+		if (idle_cpu(cpu)) {
+			if (full_capacity(cpu)) {
+				backup_cpu = -1;
+				break;
+			} else if (capacity_of(cpu) > backup_cap) {
+				backup_cap = capacity_of(cpu);
+				backup_cpu = cpu;
+			}
+		}
 	}
 
+	if (backup_cpu >= 0)
+		cpu = backup_cpu;
 	time = local_clock() - time;
 	cost = this_sd->avg_scan_cost;
 	delta = (s64)(time - cost) / 8;
@@ -6116,13 +6134,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i;
 
-	if (idle_cpu(target))
+	if (idle_cpu(target) && full_capacity(target))
 		return target;
 
 	/*
 	 * If the previous cpu is cache affine and idle, don't be stupid.
 	 */
-	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
+	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev)
+	    && full_capacity(prev))
 		return prev;
 
 	sd = rcu_dereference(per_cpu(sd_llc, target));
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 3/3] sched/fair: Introduce scaled capacity awareness in wake_affine_idle code path
  2017-10-07 23:48 [PATCH v5 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
  2017-10-07 23:48 ` [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path Rohit Jain
  2017-10-07 23:48 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling " Rohit Jain
@ 2017-10-07 23:48 ` Rohit Jain
  2 siblings, 0 replies; 16+ messages in thread
From: Rohit Jain @ 2017-10-07 23:48 UTC (permalink / raw)
  To: linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, atish.patra, vincent.guittot,
	dietmar.eggemann, morten.rasmussen

wake_affine_idle() returns true if the CPU can run the task. Since it
currently ignores capacity, add a check so that it only returns true if
the CPU is at full capacity.

Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b1f7b9..f4761f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5660,7 +5660,7 @@ static bool
 wake_affine_idle(struct sched_domain *sd, struct task_struct *p,
 		 int this_cpu, int prev_cpu, int sync)
 {
-	if (idle_cpu(this_cpu))
+	if (idle_cpu(this_cpu) && full_capacity(this_cpu))
 		return true;
 
 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-10-07 23:48 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling " Rohit Jain
@ 2017-10-10 15:54   ` Atish Patra
  2017-10-10 18:02     ` Rohit Jain
  0 siblings, 1 reply; 16+ messages in thread
From: Atish Patra @ 2017-10-10 15:54 UTC (permalink / raw)
  To: Rohit Jain, linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, vincent.guittot, dietmar.eggemann,
	morten.rasmussen


Minor nit: version number missing

On 10/07/2017 06:48 PM, Rohit Jain wrote:
> While looking for CPUs to place running tasks on, the scheduler
> completely ignores the capacity stolen away by RT/IRQ tasks. This patch
> changes that behavior to also take the scaled capacity into account.
>
> Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
> ---
>   kernel/sched/fair.c | 37 ++++++++++++++++++++++++++++---------
>   1 file changed, 28 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index eaede50..5b1f7b9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6004,7 +6004,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>   
>   		for_each_cpu(cpu, cpu_smt_mask(core)) {
>   			cpumask_clear_cpu(cpu, cpus);
> -			if (!idle_cpu(cpu))
> +			if (!idle_cpu(cpu) || !full_capacity(cpu))
Do we need to skip the entire core just because the 1st cpu in the core
doesn't have full capacity?
Let's say that is the only idle core available. It will then go and try
select_idle_cpu() to find the idlest cpu.
Is it worth spending extra time searching for an idle cpu with full
capacity when there are idle cores available?

Regards,
Atish
>   				idle = false;
>   		}
>   
> @@ -6025,7 +6025,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>    */
>   static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>   {
> -	int cpu;
> +	int cpu, backup_cpu = -1;
> +	unsigned int backup_cap = 0;
>   
>   	if (!static_branch_likely(&sched_smt_present))
>   		return -1;
> @@ -6033,11 +6034,17 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>   	for_each_cpu(cpu, cpu_smt_mask(target)) {
>   		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>   			continue;
> -		if (idle_cpu(cpu))
> -			return cpu;
> +		if (idle_cpu(cpu)) {
> +			if (full_capacity(cpu))
> +				return cpu;
> +			if (capacity_of(cpu) > backup_cap) {
> +				backup_cap = capacity_of(cpu);
> +				backup_cpu = cpu;
> +			}
> +		}
>   	}
>   
> -	return -1;
> +	return backup_cpu;
>   }
>   
>   #else /* CONFIG_SCHED_SMT */
> @@ -6066,6 +6073,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>   	u64 time, cost;
>   	s64 delta;
>   	int cpu, nr = INT_MAX;
> +	int backup_cpu = -1;
> +	unsigned int backup_cap = 0;
>   
>   	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
>   	if (!this_sd)
> @@ -6096,10 +6105,19 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>   			return -1;
>   		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>   			continue;
> -		if (idle_cpu(cpu))
> -			break;
> +		if (idle_cpu(cpu)) {
> +			if (full_capacity(cpu)) {
> +				backup_cpu = -1;
> +				break;
> +			} else if (capacity_of(cpu) > backup_cap) {
> +				backup_cap = capacity_of(cpu);
> +				backup_cpu = cpu;
> +			}
> +		}
>   	}
>   
> +	if (backup_cpu >= 0)
> +		cpu = backup_cpu;
>   	time = local_clock() - time;
>   	cost = this_sd->avg_scan_cost;
>   	delta = (s64)(time - cost) / 8;
> @@ -6116,13 +6134,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>   	struct sched_domain *sd;
>   	int i;
>   
> -	if (idle_cpu(target))
> +	if (idle_cpu(target) && full_capacity(target))
>   		return target;
>   
>   	/*
>   	 * If the previous cpu is cache affine and idle, don't be stupid.
>   	 */
> -	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
> +	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev)
> +	    && full_capacity(prev))
>   		return prev;
>   
>   	sd = rcu_dereference(per_cpu(sd_llc, target));

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-10-10 15:54   ` Atish Patra
@ 2017-10-10 18:02     ` Rohit Jain
  0 siblings, 0 replies; 16+ messages in thread
From: Rohit Jain @ 2017-10-10 18:02 UTC (permalink / raw)
  To: Atish Patra, linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, vincent.guittot, dietmar.eggemann,
	morten.rasmussen

Hi Atish,

Thanks for the comments

On 10/10/2017 08:54 AM, Atish Patra wrote:
> <snip>
>>
>> Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
>> ---
>>   kernel/sched/fair.c | 37 ++++++++++++++++++++++++++++---------
>>   1 file changed, 28 insertions(+), 9 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index eaede50..5b1f7b9 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6004,7 +6004,7 @@ static int select_idle_core(struct task_struct 
>> *p, struct sched_domain *sd, int
>>             for_each_cpu(cpu, cpu_smt_mask(core)) {
>>               cpumask_clear_cpu(cpu, cpus);
>> -            if (!idle_cpu(cpu))
>> +            if (!idle_cpu(cpu) || !full_capacity(cpu))
> Do we need to skip the entire core just because 1st cpu in the core 
> doesn't have full capacity ?
> Let's say that is the only idle core available. It will go and try to 
> select_idle_cpu() to find the idlest cpu.
> Is it worth spending extra time to search an idle cpu with full 
> capacity when there are idle cores available ?

This has been previously discussed:
https://lkml.org/lkml/2017/10/3/1001

Returning the best CPU within the idle core did not result in a
statistically significant performance benefit, hence I went with Joel's
suggestion to keep the code simple.

Thanks,
Rohit

<snip>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path
  2017-10-07 23:48 ` [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path Rohit Jain
@ 2017-10-12 17:03   ` Rohit Jain
  2017-10-12 21:47     ` Joel Fernandes
  0 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-10-12 17:03 UTC (permalink / raw)
  To: peterz, joelaf, atish.patra
  Cc: linux-kernel, eas-dev, mingo, vincent.guittot, dietmar.eggemann,
	morten.rasmussen

Hi Joel, Atish,

Moving off-line discussions to LKML, just so everyone's on the same page,
I actually like this version now and it is outperforming my previous
code, so I am on board with this version. It makes the code simpler too.

Since we need a fast way of returning an idle CPU in the
select_idle_sibling path, I think that can remain as it is (or maybe we
can argue about the patch on that thread).

If what I said above makes sense to everyone, I will send out a v6.

As always, please let me know what you think.

Thanks,
Rohit

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 56f343b..a1f622c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5724,7 +5724,7 @@ static int cpu_util_wake(int cpu, struct task_struct *p);
 
 static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 {
-	return capacity_orig_of(cpu) - cpu_util_wake(cpu, p);
+	return capacity_of(cpu) - cpu_util_wake(cpu, p);
 }
 
 /*
@@ -5870,6 +5870,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
 	u64 latest_idle_timestamp = 0;
+	unsigned int idle_cpu_cap = 0;
 	int least_loaded_cpu = this_cpu;
 	int shallowest_idle_cpu = -1;
 	int i;
@@ -5881,6 +5882,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 	/* Traverse only the allowed CPUs */
 	for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
 		if (idle_cpu(i)) {
+			int idle_candidate = -1;
 			struct rq *rq = cpu_rq(i);
 			struct cpuidle_state *idle = idle_get_state(rq);
 			if (idle && idle->exit_latency < min_exit_latency) {
@@ -5891,7 +5893,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				 */
 				min_exit_latency = idle->exit_latency;
 				latest_idle_timestamp = rq->idle_stamp;
-				shallowest_idle_cpu = i;
+				idle_candidate = i;
 			} else if ((!idle || idle->exit_latency == min_exit_latency) &&
 				   rq->idle_stamp > latest_idle_timestamp) {
 				/*
@@ -5900,8 +5902,14 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				 * a warmer cache.
 				 */
 				latest_idle_timestamp = rq->idle_stamp;
-				shallowest_idle_cpu = i;
+				idle_candidate = i;
 			}
+
+			if (idle_candidate != -1 &&
+			    (capacity_of(idle_candidate) > idle_cpu_cap)) {
+				shallowest_idle_cpu = idle_candidate;
+				idle_cpu_cap = capacity_of(idle_candidate);
+			}
 		} else if (shallowest_idle_cpu == -1) {
 			load = weighted_cpuload(cpu_rq(i));
 			if (load < min_load || (load == min_load && i == this_cpu)) {
-- 
2.7.4


On 10/07/2017 04:48 PM, Rohit Jain wrote:
> While looking for idle CPUs for a waking task, we should also account
> for the delays caused due to the bandwidth reduction by RT/IRQ tasks.
>
> This patch does that by trying to find a higher capacity CPU with
> minimum wake up latency.
>
> Signed-off-by: Rohit Jain<rohit.k.jain@oracle.com>
> ---
>   kernel/sched/fair.c | 27 ++++++++++++++++++++++++---
>   1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0107280..eaede50 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5579,6 +5579,11 @@ static unsigned long capacity_orig_of(int cpu)
>   	return cpu_rq(cpu)->cpu_capacity_orig;
>   }
>   
> +static inline bool full_capacity(int cpu)
> +{
> +	return (capacity_of(cpu) >= (capacity_orig_of(cpu)*768 >> 10));
> +}
> +
>   static unsigned long cpu_avg_load_per_task(int cpu)
>   {
>   	struct rq *rq = cpu_rq(cpu);
> @@ -5865,8 +5870,10 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>   	unsigned long load, min_load = ULONG_MAX;
>   	unsigned int min_exit_latency = UINT_MAX;
>   	u64 latest_idle_timestamp = 0;
> +	unsigned int backup_cap = 0;
>   	int least_loaded_cpu = this_cpu;
>   	int shallowest_idle_cpu = -1;
> +	int shallowest_idle_cpu_backup = -1;
>   	int i;
>   
>   	/* Check if we have any choice: */
> @@ -5876,6 +5883,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>   	/* Traverse only the allowed CPUs */
>   	for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
>   		if (idle_cpu(i)) {
> +			int idle_candidate = -1;
>   			struct rq *rq = cpu_rq(i);
>   			struct cpuidle_state *idle = idle_get_state(rq);
>   			if (idle && idle->exit_latency < min_exit_latency) {
> @@ -5886,7 +5894,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>   				 */
>   				min_exit_latency = idle->exit_latency;
>   				latest_idle_timestamp = rq->idle_stamp;
> -				shallowest_idle_cpu = i;
> +				idle_candidate = i;
>   			} else if ((!idle || idle->exit_latency == min_exit_latency) &&
>   				   rq->idle_stamp > latest_idle_timestamp) {
>   				/*
> @@ -5895,7 +5903,16 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>   				 * a warmer cache.
>   				 */
>   				latest_idle_timestamp = rq->idle_stamp;
> -				shallowest_idle_cpu = i;
> +				idle_candidate = i;
> +			}
> +
> +			if (idle_candidate != -1) {
> +				if (full_capacity(idle_candidate)) {
> +					shallowest_idle_cpu = idle_candidate;
> +				} else if (capacity_of(idle_candidate) > backup_cap) {
> +					shallowest_idle_cpu_backup = idle_candidate;
> +					backup_cap = capacity_of(idle_candidate);
> +				}
>   			}
>   		} else if (shallowest_idle_cpu == -1) {
>   			load = weighted_cpuload(cpu_rq(i));
> @@ -5906,7 +5923,11 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>   		}
>   	}
>   
> -	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
> +	if (shallowest_idle_cpu != -1)
> +		return shallowest_idle_cpu;
> +
> +	return (shallowest_idle_cpu_backup != -1 ?
> +		shallowest_idle_cpu_backup : least_loaded_cpu);
>   }
>   
>   #ifdef CONFIG_SCHED_SMT

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path
  2017-10-12 17:03   ` Rohit Jain
@ 2017-10-12 21:47     ` Joel Fernandes
  2017-10-13  1:54       ` Rohit Jain
  0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2017-10-12 21:47 UTC (permalink / raw)
  To: Rohit Jain
  Cc: Peter Zijlstra, Atish Patra, LKML, eas-dev, Ingo Molnar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

On Thu, Oct 12, 2017 at 10:03 AM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
> Hi Joel, Atish,
>
> Moving off-line discussions to LKML, just so everyone's on the same page,
> I actually like this version now and it is outperforming my previous
> code, so I am on board with this version. It makes the code simpler too.

I think you should have explained what the version does differently.
Nobody can read your mind.

>
> Since we need a fast way of returning an idle cpu in select_idle_sibling
> path, I think that can remain as it is (or may be we can argue about the
> patch on that thread)

This is hardly an explanation of the diff below.

>
> If what I said abovemakes sense to everyone, I will send out a v6.
>
> As always, please let me know what you think.

More below:

> Thanks,
> Rohit
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 56f343b..a1f622c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5724,7 +5724,7 @@ static int cpu_util_wake(int cpu, struct task_struct
> *p);
>
>  static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>  {
> -    return capacity_orig_of(cpu) - cpu_util_wake(cpu, p);
> +    return capacity_of(cpu) - cpu_util_wake(cpu, p);
>  }
>
>  /*
> @@ -5870,6 +5870,7 @@ find_idlest_group_cpu(struct sched_group *group,
> struct task_struct *p, int this
>      unsigned long load, min_load = ULONG_MAX;
>      unsigned int min_exit_latency = UINT_MAX;
>      u64 latest_idle_timestamp = 0;
> +    unsigned int idle_cpu_cap = 0;
>      int least_loaded_cpu = this_cpu;
>      int shallowest_idle_cpu = -1;
>      int i;
> @@ -5881,6 +5882,7 @@ find_idlest_group_cpu(struct sched_group *group,
> struct task_struct *p, int this
>      /* Traverse only the allowed CPUs */
>      for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
>          if (idle_cpu(i)) {
> +            int idle_candidate = -1;
>              struct rq *rq = cpu_rq(i);
>              struct cpuidle_state *idle = idle_get_state(rq);
>              if (idle && idle->exit_latency < min_exit_latency) {
> @@ -5891,7 +5893,7 @@ find_idlest_group_cpu(struct sched_group *group,
> struct task_struct *p, int this
>                   */
>                  min_exit_latency = idle->exit_latency;
>                  latest_idle_timestamp = rq->idle_stamp;
> -                shallowest_idle_cpu = i;
> +                idle_candidate = i;
>              } else if ((!idle || idle->exit_latency == min_exit_latency) &&
>                     rq->idle_stamp > latest_idle_timestamp) {
>                  /*
> @@ -5900,8 +5902,14 @@ find_idlest_group_cpu(struct sched_group *group,
> struct task_struct *p, int this
>                   * a warmer cache.
>                   */
>                  latest_idle_timestamp = rq->idle_stamp;
> -                shallowest_idle_cpu = i;
> +                idle_candidate = i;
>              }
> +
> +            if (idle_candidate != -1 &&
> +                (capacity_of(idle_candidate) > idle_cpu_cap)) {
> +                shallowest_idle_cpu = idle_candidate;
> +                idle_cpu_cap = capacity_of(idle_candidate);
> +            }

This is broken: in case idle_candidate != -1 but idle_cpu_cap makes the
condition false, you're still setting min_exit_latency, which is wrong.

Also this means if you have 2 CPUs and 1 is in a shallower idle state
than the other, but lesser in capacity, then it would select the CPU
with less shallow idle state right? So 'shallowest_idle_cpu' loses its
meaning.

thanks,

- Joel

[..]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path
  2017-10-12 21:47     ` Joel Fernandes
@ 2017-10-13  1:54       ` Rohit Jain
  0 siblings, 0 replies; 16+ messages in thread
From: Rohit Jain @ 2017-10-13  1:54 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Peter Zijlstra, Atish Patra, LKML, eas-dev, Ingo Molnar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Joel,


On 10/12/2017 02:47 PM, Joel Fernandes wrote:
> On Thu, Oct 12, 2017 at 10:03 AM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
>> Hi Joel, Atish,
>>
>> Moving off-line discussions to LKML, just so everyone's on the same page,
>> I actually like this version now and it is outperforming my previous
>> code, so I am on board with this version. It makes the code simpler too.
> I think you should have explained what the version does differently.
> Nobody can read your mind.

I apologize for being terse (will do better next time)

This is based on your (offline) suggestion (and rightly so) that
find_idlest_group today bases its decision on capacity_spare_wake(),
which in turn only looks at the original capacity of the CPU. This diff
(version) changes that to look at the current capacity after it has been
scaled down (due to IRQ/RT/etc.); a toy numeric sketch of the effect
follows below.

Also, this diff changes find_idlest_group_cpu() so that it no longer
searches for CPUs based on the 'full_capacity()' check; instead it finds
the idlest CPU with the maximum available capacity. This way we can avoid
all the 'backup' handling present in the v5 version below it.

As you can see from how it works out, the code looks much simpler with
the new search. This is OK because we are doing a full CPU search over
the sched_group_span anyway.
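
To make the effect concrete, here is a toy, standalone C example. The
numbers are made up for illustration (not measured anywhere); they just
show how switching capacity_spare_wake() from capacity_orig_of() to
capacity_of() lets us tell apart a CPU whose capacity is being eaten by
IRQ/RT activity:

#include <stdio.h>

int main(void)
{
	unsigned int util     = 300;   /* assumed cpu_util_wake() result  */
	unsigned int cap_orig = 1024;  /* original capacity of both CPUs  */
	unsigned int cap_a    = 1024;  /* CPU A: no IRQ/RT pressure       */
	unsigned int cap_b    = 700;   /* CPU B: ~30% stolen by IRQ/RT    */

	/* Old spare capacity: both CPUs look identical. */
	printf("orig-based spare: A=%u B=%u\n",
	       cap_orig - util, cap_orig - util);

	/* New spare capacity: CPU A is clearly preferred. */
	printf("scaled spare:     A=%u B=%u\n",
	       cap_a - util, cap_b - util);

	return 0;
}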

[..]
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 56f343b..a1f622c 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5724,7 +5724,7 @@ static int cpu_util_wake(int cpu, struct task_struct
>> *p);
>>
>>   static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>>   {
>> -    return capacity_orig_of(cpu) - cpu_util_wake(cpu, p);
>> +    return capacity_of(cpu) - cpu_util_wake(cpu, p);
>>   }
>>
>>   /*
>> @@ -5870,6 +5870,7 @@ find_idlest_group_cpu(struct sched_group *group,
>> struct task_struct *p, int this
>>       unsigned long load, min_load = ULONG_MAX;
>>       unsigned int min_exit_latency = UINT_MAX;
>>       u64 latest_idle_timestamp = 0;
>> +    unsigned int idle_cpu_cap = 0;
>>       int least_loaded_cpu = this_cpu;
>>       int shallowest_idle_cpu = -1;
>>       int i;
>> @@ -5881,6 +5882,7 @@ find_idlest_group_cpu(struct sched_group *group,
>> struct task_struct *p, int this
>>       /* Traverse only the allowed CPUs */
>>       for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
>>           if (idle_cpu(i)) {
>> +            int idle_candidate = -1;
>>               struct rq *rq = cpu_rq(i);
>>               struct cpuidle_state *idle = idle_get_state(rq);
>>               if (idle && idle->exit_latency < min_exit_latency) {
>> @@ -5891,7 +5893,7 @@ find_idlest_group_cpu(struct sched_group *group,
>> struct task_struct *p, int this
>>                    */
>>                   min_exit_latency = idle->exit_latency;
>>                   latest_idle_timestamp = rq->idle_stamp;
>> -                shallowest_idle_cpu = i;
>> +                idle_candidate = i;
>>               } else if ((!idle || idle->exit_latency == min_exit_latency) &&
>>                      rq->idle_stamp > latest_idle_timestamp) {
>>                   /*
>> @@ -5900,8 +5902,14 @@ find_idlest_group_cpu(struct sched_group *group,
>> struct task_struct *p, int this
>>                    * a warmer cache.
>>                    */
>>                   latest_idle_timestamp = rq->idle_stamp;
>> -                shallowest_idle_cpu = i;
>> +                idle_candidate = i;
>>               }
>> +
>> +            if (idle_candidate != -1 &&
>> +                (capacity_of(idle_candidate) > idle_cpu_cap)) {
>> +                shallowest_idle_cpu = idle_candidate;
>> +                idle_cpu_cap = capacity_of(idle_candidate);
>> +            }
> This is broken, incase idle_candidate != -1 but idle_cpu_cap makes the
> condition false - you're still setting min_exit_latency which is
> wrong.

Yes, you're right. I will fix this.

>
> Also this means if you have 2 CPUs and 1 is in a shallower idle state
> than the other, but lesser in capacity, then it would select the CPU
> with less shallow idle state right? So 'shallowest_idle_cpu' loses its
> meaning.

OK, I will change the name

Thanks,
Rohit
> [..]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-10-03  4:52           ` Joel Fernandes
@ 2017-10-04  0:21             ` Rohit Jain
  0 siblings, 0 replies; 16+ messages in thread
From: Rohit Jain @ 2017-10-04  0:21 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Joel,


On 10/02/2017 09:52 PM, Joel Fernandes wrote:
> Hi Rohit,
>
> On Thu, Sep 28, 2017 at 8:09 AM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
> [..]
>>>> With this case, because we know from the past avg, one of the strands is
>>>> running low on capacity, I am trying to return a better strand for the
>>>> thread to start on.
>>>>
>>> I know what you're trying to do but they way you've retrofitted it into
>>> the
>>> core looks weird (to me) and makes the code unreadable and ugly IMO.
>>>
>>> Why not do something simpler like skip the core if any SMT thread has been
>>> running at lesser capacity? I'm not sure if this works great or if the
>>> maintainers
>>> will prefer your or my below approach, but I find the below diff much
>>> cleaner
>>> for the select_idle_core bit. It also makes more sense since resources are
>>> shared at SMT level so makes sense to me to skip the core altogether for
>>> this:
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 6ee7242dbe0a..f324a84e29f1 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -5738,14 +5738,17 @@ static int select_idle_core(struct task_struct *p,
>>> struct sched_domain *sd, int
>>>          for_each_cpu_wrap(core, cpus, target) {
>>>                  bool idle = true;
>>> +               bool full_cap = true;
>>>                  for_each_cpu(cpu, cpu_smt_mask(core)) {
>>>                          cpumask_clear_cpu(cpu, cpus);
>>>                          if (!idle_cpu(cpu))
>>>                                  idle = false;
>>> +                       if (!full_capacity(cpu))
>>> +                               full_cap = false;
>>>                  }
>>>    -             if (idle)
>>> +               if (idle && full_cap)
>>>                          return core;
>>>          }
>>>
>>
>>
>> Well, with your changes you will skip over fully idle cores which is not
>> an ideal thing either. I see that you were advocating for select
>> idle+lowest capacity core, whereas I was stopping at the first idlecore.
>>
>> Since the whole philosophy till now in this patch is "Don't spare an
>> idle CPU", I think the following diff might look better to you. Please
>> note this is only for discussion sakes, I haven't fully tested it yet.
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index ec15e5f..c2933eb 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6040,7 +6040,9 @@ void __update_idle_core(struct rq *rq)
>>   static int select_idle_core(struct task_struct *p, struct sched_domain *sd,
>> int target)
>>   {
>>       struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
>> -    int core, cpu;
>> +    int core, cpu, rcpu, backup_core;
>> +
>> +    rcpu = backup_core = -1;
>>
>>       if (!static_branch_likely(&sched_smt_present))
>>           return -1;
>> @@ -6052,15 +6054,34 @@ static int select_idle_core(struct task_struct *p,
>> struct sched_domain *sd, int
>>
>>       for_each_cpu_wrap(core, cpus, target) {
>>           bool idle = true;
>> +        bool full_cap = true;
>>
>>           for_each_cpu(cpu, cpu_smt_mask(core)) {
>>               cpumask_clear_cpu(cpu, cpus);
>>               if (!idle_cpu(cpu))
>>                   idle = false;
>> +
>> +            if (!full_capacity(cpu)) {
>> +                full_cap = false;
>> +            }
>>           }
>>
>> -        if (idle)
>> +        if (idle && full_cap)
>>               return core;
>> +        else if (idle && backup_core == -1)
>> +            backup_core = core;
>> +    }
>> +
>> +    if (backup_core != -1) {
>> +        for_each_cpu(cpu, cpu_smt_mask(backup_core)) {
>> +            if (full_capacity(cpu))
>> +                return cpu;
>> +            else if ((rcpu == -1) ||
>> +                 (capacity_of(cpu) > capacity_of(rcpu)))
>> +                rcpu = cpu;
>> +        }
>> +
>> +        return rcpu;
>>       }
>>
>>
>> Do let me know what you think.
> I think that if there isn't a benefit in your tests in doing the above
> vs the simpler approach, then I prefer the simpler approach especially
> since there's no point/benefit in complicating the code for
> select_idle_core.

Fair enough!

If there are no more concerns in this version, then I will go ahead and
try out all that is discussed in this version and send an updated
version. Please let me know if there are any other concerns/feedback.

Thanks,
Rohit

>
> thanks,
>
> - Joel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-28 15:09         ` Rohit Jain
@ 2017-10-03  4:52           ` Joel Fernandes
  2017-10-04  0:21             ` Rohit Jain
  0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2017-10-03  4:52 UTC (permalink / raw)
  To: Rohit Jain
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Rohit,

On Thu, Sep 28, 2017 at 8:09 AM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
[..]
>>>
>>> With this case, because we know from the past avg, one of the strands is
>>> running low on capacity, I am trying to return a better strand for the
>>> thread to start on.
>>>
>> I know what you're trying to do but they way you've retrofitted it into
>> the
>> core looks weird (to me) and makes the code unreadable and ugly IMO.
>>
>> Why not do something simpler like skip the core if any SMT thread has been
>> running at lesser capacity? I'm not sure if this works great or if the
>> maintainers
>> will prefer your or my below approach, but I find the below diff much
>> cleaner
>> for the select_idle_core bit. It also makes more sense since resources are
>> shared at SMT level so makes sense to me to skip the core altogether for
>> this:
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 6ee7242dbe0a..f324a84e29f1 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5738,14 +5738,17 @@ static int select_idle_core(struct task_struct *p,
>> struct sched_domain *sd, int
>>         for_each_cpu_wrap(core, cpus, target) {
>>                 bool idle = true;
>> +               bool full_cap = true;
>>                 for_each_cpu(cpu, cpu_smt_mask(core)) {
>>                         cpumask_clear_cpu(cpu, cpus);
>>                         if (!idle_cpu(cpu))
>>                                 idle = false;
>> +                       if (!full_capacity(cpu))
>> +                               full_cap = false;
>>                 }
>>   -             if (idle)
>> +               if (idle && full_cap)
>>                         return core;
>>         }
>>
>
>
>
> Well, with your changes you will skip over fully idle cores which is not
> an ideal thing either. I see that you were advocating for select
> idle+lowest capacity core, whereas I was stopping at the first idlecore.
>
> Since the whole philosophy till now in this patch is "Don't spare an
> idle CPU", I think the following diff might look better to you. Please
> note this is only for discussion sakes, I haven't fully tested it yet.
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ec15e5f..c2933eb 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6040,7 +6040,9 @@ void __update_idle_core(struct rq *rq)
>  static int select_idle_core(struct task_struct *p, struct sched_domain *sd,
> int target)
>  {
>      struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -    int core, cpu;
> +    int core, cpu, rcpu, backup_core;
> +
> +    rcpu = backup_core = -1;
>
>      if (!static_branch_likely(&sched_smt_present))
>          return -1;
> @@ -6052,15 +6054,34 @@ static int select_idle_core(struct task_struct *p,
> struct sched_domain *sd, int
>
>      for_each_cpu_wrap(core, cpus, target) {
>          bool idle = true;
> +        bool full_cap = true;
>
>          for_each_cpu(cpu, cpu_smt_mask(core)) {
>              cpumask_clear_cpu(cpu, cpus);
>              if (!idle_cpu(cpu))
>                  idle = false;
> +
> +            if (!full_capacity(cpu)) {
> +                full_cap = false;
> +            }
>          }
>
> -        if (idle)
> +        if (idle && full_cap)
>              return core;
> +        else if (idle && backup_core == -1)
> +            backup_core = core;
> +    }
> +
> +    if (backup_core != -1) {
> +        for_each_cpu(cpu, cpu_smt_mask(backup_core)) {
> +            if (full_capacity(cpu))
> +                return cpu;
> +            else if ((rcpu == -1) ||
> +                 (capacity_of(cpu) > capacity_of(rcpu)))
> +                rcpu = cpu;
> +        }
> +
> +        return rcpu;
>      }
>
>
> Do let me know what you think.

I think that if there isn't a benefit in your tests in doing the above
vs the simpler approach, then I prefer the simpler approach especially
since there's no point/benefit in complicating the code for
select_idle_core.

thanks,

- Joel

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-28 10:53       ` joelaf
@ 2017-09-28 15:09         ` Rohit Jain
  2017-10-03  4:52           ` Joel Fernandes
  0 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-09-28 15:09 UTC (permalink / raw)
  To: joelaf
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Joel,

On 09/28/2017 05:53 AM, joelaf wrote:
> Hi Rohit,
>
> On Tue, Sep 26, 2017 at 12:48 PM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
> [...]
>
<snip>
>>>>                   }
>>>>
>>>> -               if (idle)
>>>> -                       return core;
>>>> +               if (idle) {
>>>> +                       if (rcpu == -1)
>>>> +                               return (rcpu_backup != -1 ? rcpu_backup :
>>>> core);
>>>> +                       return rcpu;
>>>> +               }
>>>
>>> This didn't make much sense to me, here you are returning either an
>>> SMT thread or a core. That doesn't make much of a difference because
>>> SMT threads share the same capacity (SD_SHARE_CPUCAPACITY). I think
>>> what you want to do is find out the capacity of a 'core', not an SMT
>>> thread, and compare the capacity of different cores and consider the
>>> one which has least RT/IRQ interference.
>>
>> IIUC the capacities of each strand is scaled by IRQ and 'rt_avg' for that
>> 'rq'. Now if the strand is idle now and gets an interrupt in the future,
>> the 'core' would look like:
>>
>>     +----+----+
>>     | I  |    |
>>     | T  |    |
>>     +----+----+
>>
>> (I -> Interrupt, T-> Thread we are trying to schedule).
>>
>> whereas if the other strand on the core was taking interrupt the core
>> would look like:
>>
>>     +----+----+
>>     | I  | T  |
>>     |    |    |
>>     +----+----+
>>
>> With this case, because we know from the past avg, one of the strands is
>> running low on capacity, I am trying to return a better strand for the
>> thread to start on.
>>
> I know what you're trying to do but they way you've retrofitted it into the
> core looks weird (to me) and makes the code unreadable and ugly IMO.
>
> Why not do something simpler like skip the core if any SMT thread has been
> running at lesser capacity? I'm not sure if this works great or if the maintainers
> will prefer your or my below approach, but I find the below diff much cleaner
> for the select_idle_core bit. It also makes more sense since resources are
> shared at SMT level so makes sense to me to skip the core altogether for this:
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6ee7242dbe0a..f324a84e29f1 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5738,14 +5738,17 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>   
>   	for_each_cpu_wrap(core, cpus, target) {
>   		bool idle = true;
> +		bool full_cap = true;
>   
>   		for_each_cpu(cpu, cpu_smt_mask(core)) {
>   			cpumask_clear_cpu(cpu, cpus);
>   			if (!idle_cpu(cpu))
>   				idle = false;
> +			if (!full_capacity(cpu))
> +				full_cap = false;
>   		}
>   
> -		if (idle)
> +		if (idle && full_cap)
>   			return core;
>   	}
>   


Well, with your changes you will skip over fully idle cores, which is not
ideal either. I see that you were advocating for selecting an idle +
lowest capacity core, whereas I was stopping at the first idle core.

Since the whole philosophy till now in this patch is "Don't spare an
idle CPU", I think the following diff might look better to you. Please
note this is only for discussion's sake; I haven't fully tested it yet.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ec15e5f..c2933eb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6040,7 +6040,9 @@ void __update_idle_core(struct rq *rq)
 static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	int core, cpu;
+	int core, cpu, rcpu, backup_core;
+
+	rcpu = backup_core = -1;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -6052,15 +6054,34 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 
 	for_each_cpu_wrap(core, cpus, target) {
 		bool idle = true;
+		bool full_cap = true;
 
 		for_each_cpu(cpu, cpu_smt_mask(core)) {
 			cpumask_clear_cpu(cpu, cpus);
 			if (!idle_cpu(cpu))
 				idle = false;
+
+			if (!full_capacity(cpu)) {
+				full_cap = false;
+			}
 		}
 
-		if (idle)
+		if (idle && full_cap)
 			return core;
+		else if (idle && backup_core == -1)
+			backup_core = core;
+	}
+
+	if (backup_core != -1) {
+		for_each_cpu(cpu, cpu_smt_mask(backup_core)) {
+			if (full_capacity(cpu))
+				return cpu;
+			else if ((rcpu == -1) ||
+				 (capacity_of(cpu) > capacity_of(rcpu)))
+				rcpu = cpu;
+		}
+
+		return rcpu;
 	}


Do let me know what you think.

Thanks,
Rohit

>
> thanks,
>
> - Joel
>

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-26 19:48     ` Rohit Jain
@ 2017-09-28 10:53       ` joelaf
  2017-09-28 15:09         ` Rohit Jain
  0 siblings, 1 reply; 16+ messages in thread
From: joelaf @ 2017-09-28 10:53 UTC (permalink / raw)
  To: Rohit Jain
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Rohit,

On Tue, Sep 26, 2017 at 12:48 PM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
[...]
>>> +       unsigned int backup_cap = 0;
>>> +
>>> +       rcpu = rcpu_backup = -1;
>>>
>>>          if (!static_branch_likely(&sched_smt_present))
>>>                  return -1;
>>> @@ -6057,10 +6060,20 @@ static int select_idle_core(struct task_struct
>>> *p, struct sched_domain *sd, int
>>>                          cpumask_clear_cpu(cpu, cpus);
>>>                          if (!idle_cpu(cpu))
>>>                                  idle = false;
>>> +
>>> +                       if (full_capacity(cpu)) {
>>> +                               rcpu = cpu;
>>> +                       } else if ((rcpu == -1) && (capacity_of(cpu) >
>>> backup_cap)) {
>>> +                               backup_cap = capacity_of(cpu);
>>> +                               rcpu_backup = cpu;
>>> +                       }
>>
>> Here you comparing capacity of different SMT threads.
>>
>>>                  }
>>>
>>> -               if (idle)
>>> -                       return core;
>>> +               if (idle) {
>>> +                       if (rcpu == -1)
>>> +                               return (rcpu_backup != -1 ? rcpu_backup :
>>> core);
>>> +                       return rcpu;
>>> +               }
>>
>>
>> This didn't make much sense to me, here you are returning either an
>> SMT thread or a core. That doesn't make much of a difference because
>> SMT threads share the same capacity (SD_SHARE_CPUCAPACITY). I think
>> what you want to do is find out the capacity of a 'core', not an SMT
>> thread, and compare the capacity of different cores and consider the
>> one which has least RT/IRQ interference.
>
>
> IIUC the capacities of each strand is scaled by IRQ and 'rt_avg' for that
> 'rq'. Now if the strand is idle now and gets an interrupt in the future,
> the 'core' would look like:
>
>    +----+----+
>    | I  |    |
>    | T  |    |
>    +----+----+
>
> (I -> Interrupt, T-> Thread we are trying to schedule).
>
> whereas if the other strand on the core was taking interrupt the core
> would look like:
>
>    +----+----+
>    | I  | T  |
>    |    |    |
>    +----+----+
>
> With this case, because we know from the past avg, one of the strands is
> running low on capacity, I am trying to return a better strand for the
> thread to start on.
>

I know what you're trying to do but the way you've retrofitted it into the
core looks weird (to me) and makes the code unreadable and ugly IMO.

Why not do something simpler like skip the core if any SMT thread has been
running at lesser capacity? I'm not sure if this works great or if the maintainers
will prefer your or my below approach, but I find the below diff much cleaner
for the select_idle_core bit. It also makes more sense since resources are
shared at SMT level so makes sense to me to skip the core altogether for this:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6ee7242dbe0a..f324a84e29f1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5738,14 +5738,17 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 
 	for_each_cpu_wrap(core, cpus, target) {
 		bool idle = true;
+		bool full_cap = true;
 
 		for_each_cpu(cpu, cpu_smt_mask(core)) {
 			cpumask_clear_cpu(cpu, cpus);
 			if (!idle_cpu(cpu))
 				idle = false;
+			if (!full_capacity(cpu))
+				full_cap = false;
 		}
 
-		if (idle)
+		if (idle && full_cap)
 			return core;
 	}
 


thanks,

- Joel

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-26  6:53   ` Joel Fernandes
@ 2017-09-26 19:48     ` Rohit Jain
  2017-09-28 10:53       ` joelaf
  0 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-09-26 19:48 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

On 09/25/2017 11:53 PM, Joel Fernandes wrote:
> Hi Rohit,
>
> Just some comments:

Hi Joel,

Thanks for the comments.

> On Mon, Sep 25, 2017 at 5:02 PM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
>> While looking for CPUs to place running tasks on, the scheduler
>> completely ignores the capacity stolen away by RT/IRQ tasks.
>>
>> This patch fixes that.
>>
>> Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
>> ---
>>   kernel/sched/fair.c | 54 ++++++++++++++++++++++++++++++++++++++++++-----------
>>   1 file changed, 43 insertions(+), 11 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index afb701f..19ff2c3 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6040,7 +6040,10 @@ void __update_idle_core(struct rq *rq)
>>   static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
>>   {
>>          struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
>> -       int core, cpu;
>> +       int core, cpu, rcpu, rcpu_backup;
> I would call rcpu_backup as backup_cpu.

OK

>
>> +       unsigned int backup_cap = 0;
>> +
>> +       rcpu = rcpu_backup = -1;
>>
>>          if (!static_branch_likely(&sched_smt_present))
>>                  return -1;
>> @@ -6057,10 +6060,20 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>                          cpumask_clear_cpu(cpu, cpus);
>>                          if (!idle_cpu(cpu))
>>                                  idle = false;
>> +
>> +                       if (full_capacity(cpu)) {
>> +                               rcpu = cpu;
>> +                       } else if ((rcpu == -1) && (capacity_of(cpu) > backup_cap)) {
>> +                               backup_cap = capacity_of(cpu);
>> +                               rcpu_backup = cpu;
>> +                       }
> Here you comparing capacity of different SMT threads.
>
>>                  }
>>
>> -               if (idle)
>> -                       return core;
>> +               if (idle) {
>> +                       if (rcpu == -1)
>> +                               return (rcpu_backup != -1 ? rcpu_backup : core);
>> +                       return rcpu;
>> +               }
>
> This didn't make much sense to me: here you are returning either an
> SMT thread or a core. That doesn't make much of a difference, because
> SMT threads share the same capacity (SD_SHARE_CPUCAPACITY). I think
> what you want to do is find the capacity of a 'core', not of an SMT
> thread, and compare the capacities of different cores, picking the
> one with the least RT/IRQ interference.

IIUC, the capacity of each strand is scaled by the IRQ and 'rt_avg' load
on that strand's 'rq'. Now, if a strand is idle now but takes interrupts
in the future, the 'core' would look like:

    +----+----+
    | I  |    |
    | T  |    |
    +----+----+

(I -> Interrupt, T-> Thread we are trying to schedule).

whereas if the other strand on the core were taking the interrupts, the
core would look like:

    +----+----+
    | I  | T  |
    |    |    |
    +----+----+

In this case, because we know from the past averages that one of the
strands is running low on capacity, I am trying to return the better
strand for the thread to start on.

>
>>          }
>>
>>          /*
>> @@ -6076,7 +6089,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>    */
>>   static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>>   {
>> -       int cpu;
>> +       int cpu, backup_cpu = -1;
>> +       unsigned int backup_cap = 0;
>>
>>          if (!static_branch_likely(&sched_smt_present))
>>                  return -1;
>> @@ -6084,11 +6098,17 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>>          for_each_cpu(cpu, cpu_smt_mask(target)) {
>>                  if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>>                          continue;
>> -               if (idle_cpu(cpu))
>> -                       return cpu;
>> +               if (idle_cpu(cpu)) {
>> +                       if (full_capacity(cpu))
>> +                               return cpu;
>> +                       if (capacity_of(cpu) > backup_cap) {
>> +                               backup_cap = capacity_of(cpu);
>> +                               backup_cpu = cpu;
>> +                       }
>> +               }
> Same thing here: since SMT threads share the same underlying capacity,
> is there any point in comparing the capacities of each SMT thread?

See above

Thanks,
Rohit

>
> thanks,
>
> - Joel
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-26  0:02 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path Rohit Jain
@ 2017-09-26  6:53   ` Joel Fernandes
  2017-09-26 19:48     ` Rohit Jain
  0 siblings, 1 reply; 16+ messages in thread
From: Joel Fernandes @ 2017-09-26  6:53 UTC (permalink / raw)
  To: Rohit Jain
  Cc: LKML, eas-dev, Peter Zijlstra, Ingo Molnar, Atish Patra,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen

Hi Rohit,

Just some comments:

On Mon, Sep 25, 2017 at 5:02 PM, Rohit Jain <rohit.k.jain@oracle.com> wrote:
> While looking for CPUs to place running tasks on, the scheduler
> completely ignores the capacity stolen away by RT/IRQ tasks.
>
> This patch fixes that.
>
> Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
> ---
>  kernel/sched/fair.c | 54 ++++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 43 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index afb701f..19ff2c3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6040,7 +6040,10 @@ void __update_idle_core(struct rq *rq)
>  static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -       int core, cpu;
> +       int core, cpu, rcpu, rcpu_backup;

I would rename rcpu_backup to backup_cpu.

> +       unsigned int backup_cap = 0;
> +
> +       rcpu = rcpu_backup = -1;
>
>         if (!static_branch_likely(&sched_smt_present))
>                 return -1;
> @@ -6057,10 +6060,20 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>                         cpumask_clear_cpu(cpu, cpus);
>                         if (!idle_cpu(cpu))
>                                 idle = false;
> +
> +                       if (full_capacity(cpu)) {
> +                               rcpu = cpu;
> +                       } else if ((rcpu == -1) && (capacity_of(cpu) > backup_cap)) {
> +                               backup_cap = capacity_of(cpu);
> +                               rcpu_backup = cpu;
> +                       }

Here you are comparing the capacities of different SMT threads.

>                 }
>
> -               if (idle)
> -                       return core;
> +               if (idle) {
> +                       if (rcpu == -1)
> +                               return (rcpu_backup != -1 ? rcpu_backup : core);
> +                       return rcpu;
> +               }


This didn't make much sense to me: here you are returning either an
SMT thread or a core. That doesn't make much of a difference, because
SMT threads share the same capacity (SD_SHARE_CPUCAPACITY). I think
what you want to do is find the capacity of a 'core', not of an SMT
thread, and compare the capacities of different cores, picking the
one with the least RT/IRQ interference.

>         }
>
>         /*
> @@ -6076,7 +6089,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>   */
>  static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
> -       int cpu;
> +       int cpu, backup_cpu = -1;
> +       unsigned int backup_cap = 0;
>
>         if (!static_branch_likely(&sched_smt_present))
>                 return -1;
> @@ -6084,11 +6098,17 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>         for_each_cpu(cpu, cpu_smt_mask(target)) {
>                 if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>                         continue;
> -               if (idle_cpu(cpu))
> -                       return cpu;
> +               if (idle_cpu(cpu)) {
> +                       if (full_capacity(cpu))
> +                               return cpu;
> +                       if (capacity_of(cpu) > backup_cap) {
> +                               backup_cap = capacity_of(cpu);
> +                               backup_cpu = cpu;
> +                       }
> +               }

Same thing here: since SMT threads share the same underlying capacity,
is there any point in comparing the capacities of each SMT thread?

thanks,

- Joel

[...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path
  2017-09-26  0:02 [PATCH v4 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
@ 2017-09-26  0:02 ` Rohit Jain
  2017-09-26  6:53   ` Joel Fernandes
  0 siblings, 1 reply; 16+ messages in thread
From: Rohit Jain @ 2017-09-26  0:02 UTC (permalink / raw)
  To: linux-kernel, eas-dev
  Cc: peterz, mingo, joelaf, atish.patra, vincent.guittot,
	dietmar.eggemann, morten.rasmussen

While looking for CPUs to place running tasks on, the scheduler
completely ignores the capacity stolen away by RT/IRQ tasks.

This patch fixes that.

Signed-off-by: Rohit Jain <rohit.k.jain@oracle.com>
---
 kernel/sched/fair.c | 54 ++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index afb701f..19ff2c3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6040,7 +6040,10 @@ void __update_idle_core(struct rq *rq)
 static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	int core, cpu;
+	int core, cpu, rcpu, rcpu_backup;
+	unsigned int backup_cap = 0;
+
+	rcpu = rcpu_backup = -1;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -6057,10 +6060,20 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 			cpumask_clear_cpu(cpu, cpus);
 			if (!idle_cpu(cpu))
 				idle = false;
+
+			if (full_capacity(cpu)) {
+				rcpu = cpu;
+			} else if ((rcpu == -1) && (capacity_of(cpu) > backup_cap)) {
+				backup_cap = capacity_of(cpu);
+				rcpu_backup = cpu;
+			}
 		}
 
-		if (idle)
-			return core;
+		if (idle) {
+			if (rcpu == -1)
+				return (rcpu_backup != -1 ? rcpu_backup : core);
+			return rcpu;
+		}
 	}
 
 	/*
@@ -6076,7 +6089,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
  */
 static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
-	int cpu;
+	int cpu, backup_cpu = -1;
+	unsigned int backup_cap = 0;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -6084,11 +6098,17 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
-		if (idle_cpu(cpu))
-			return cpu;
+		if (idle_cpu(cpu)) {
+			if (full_capacity(cpu))
+				return cpu;
+			if (capacity_of(cpu) > backup_cap) {
+				backup_cap = capacity_of(cpu);
+				backup_cpu = cpu;
+			}
+		}
 	}
 
-	return -1;
+	return backup_cpu;
 }
 
 #else /* CONFIG_SCHED_SMT */
@@ -6117,6 +6137,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	u64 time, cost;
 	s64 delta;
 	int cpu, nr = INT_MAX;
+	int backup_cpu = -1;
+	unsigned int backup_cap = 0;
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
 	if (!this_sd)
@@ -6147,10 +6169,19 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 			return -1;
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
-		if (idle_cpu(cpu))
-			break;
+		if (idle_cpu(cpu)) {
+			if (full_capacity(cpu)) {
+				backup_cpu = -1;
+				break;
+			} else if (capacity_of(cpu) > backup_cap) {
+				backup_cap = capacity_of(cpu);
+				backup_cpu = cpu;
+			}
+		}
 	}
 
+	if (backup_cpu >= 0)
+		cpu = backup_cpu;
 	time = local_clock() - time;
 	cost = this_sd->avg_scan_cost;
 	delta = (s64)(time - cost) / 8;
@@ -6167,13 +6198,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i;
 
-	if (idle_cpu(target))
+	if (idle_cpu(target) && full_capacity(target))
 		return target;
 
 	/*
 	 * If the previous cpu is cache affine and idle, don't be stupid.
 	 */
-	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
+	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev)
+	    && full_capacity(prev))
 		return prev;
 
 	sd = rcu_dereference(per_cpu(sd_llc, target));
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2017-10-13  1:54 UTC | newest]

Thread overview: 16+ messages
2017-10-07 23:48 [PATCH v5 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
2017-10-07 23:48 ` [PATCH 1/3] sched/fair: Introduce scaled capacity awareness in find_idlest_cpu code path Rohit Jain
2017-10-12 17:03   ` Rohit Jain
2017-10-12 21:47     ` Joel Fernandes
2017-10-13  1:54       ` Rohit Jain
2017-10-07 23:48 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling " Rohit Jain
2017-10-10 15:54   ` Atish Patra
2017-10-10 18:02     ` Rohit Jain
2017-10-07 23:48 ` [PATCH 3/3] sched/fair: Introduce scaled capacity awareness in wake_affine_idle " Rohit Jain
  -- strict thread matches above, loose matches on Subject: below --
2017-09-26  0:02 [PATCH v4 0/3] sched/fair: Introduce scaled capacity awareness in enqueue Rohit Jain
2017-09-26  0:02 ` [PATCH 2/3] sched/fair: Introduce scaled capacity awareness in select_idle_sibling code path Rohit Jain
2017-09-26  6:53   ` Joel Fernandes
2017-09-26 19:48     ` Rohit Jain
2017-09-28 10:53       ` joelaf
2017-09-28 15:09         ` Rohit Jain
2017-10-03  4:52           ` Joel Fernandes
2017-10-04  0:21             ` Rohit Jain
