* [PATCH 1/5] sched/fair: Remove SIS_AVG_CPU
From: Mel Gorman @ 2021-01-15 10:08 UTC
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Li Aubrey, Qais Yousef, LKML, Mel Gorman
SIS_AVG_CPU was introduced as a means of avoiding a search when the
average search cost indicated that the search would likely fail. It was
a blunt instrument; it was disabled by commit 4c77b18cf8b7 ("sched/fair: Make
select_idle_cpu() more aggressive") and later replaced with a proportional
search depth by commit 1ad3aaf3fcd2 ("sched/core: Implement new approach
to scale select_idle_cpu()").
While there are corner cases where SIS_AVG_CPU is better, it has now been
disabled for almost three years. As the intent of SIS_PROP is to reduce
the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
on SIS_PROP as a throttling mechanism.
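To illustrate what remains, the SIS_PROP throttle in select_idle_cpu()
boils down to the following (a condensed sketch of the code in the diff
below, not a drop-in snippet):

    u64 avg_idle = this_rq()->avg_idle / 512;  /* large fuzz factor */
    u64 avg_cost = this_sd->avg_scan_cost + 1; /* +1 avoids divide-by-zero */
    u64 span_avg = sd->span_weight * avg_idle;
    int nr = 4;                                /* minimum scan depth */

    if (span_avg > 4 * avg_cost)
        nr = div_u64(span_avg, avg_cost);      /* depth proportional to idle/cost */

That is, the longer the CPU has been idle relative to the average scan
cost, the more CPUs are scanned, instead of SIS_AVG_CPU's all-or-nothing
cutoff.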
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 20 +++++++++-----------
kernel/sched/features.h | 1 -
2 files changed, 9 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce20da67..9f5682aeda2e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6145,7 +6145,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
struct sched_domain *this_sd;
- u64 avg_cost, avg_idle;
u64 time;
int this = smp_processor_id();
int cpu, nr = INT_MAX;
@@ -6154,18 +6153,17 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
if (!this_sd)
return -1;
- /*
- * Due to large variance we need a large fuzz factor; hackbench in
- * particularly is sensitive here.
- */
- avg_idle = this_rq()->avg_idle / 512;
- avg_cost = this_sd->avg_scan_cost + 1;
+ if (sched_feat(SIS_PROP)) {
+ u64 avg_cost, avg_idle, span_avg;
- if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
- return -1;
+ /*
+ * Due to large variance we need a large fuzz factor;
+ * hackbench in particularly is sensitive here.
+ */
+ avg_idle = this_rq()->avg_idle / 512;
+ avg_cost = this_sd->avg_scan_cost + 1;
- if (sched_feat(SIS_PROP)) {
- u64 span_avg = sd->span_weight * avg_idle;
+ span_avg = sd->span_weight * avg_idle;
if (span_avg > 4*avg_cost)
nr = div_u64(span_avg, avg_cost);
else
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 68d369cba9e4..e875eabb6600 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -54,7 +54,6 @@ SCHED_FEAT(TTWU_QUEUE, true)
/*
* When doing wakeups, attempt to limit superfluous scans of the LLC domain.
*/
-SCHED_FEAT(SIS_AVG_CPU, false)
SCHED_FEAT(SIS_PROP, true)
/*
--
2.26.2
* [PATCH 2/5] sched/fair: Move avg_scan_cost calculations under SIS_PROP
From: Mel Gorman @ 2021-01-15 10:08 UTC
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Li Aubrey, Qais Yousef, LKML, Mel Gorman
As noted by Vincent Guittot, avg_scan_cost is calculated for SIS_PROP
even when SIS_PROP is disabled. Move the time calculations under a SIS_PROP
check and, while we are at it, exclude the cost of initialising the CPU
mask from the average scan cost.
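The resulting accounting window looks roughly like this (condensed from
the diff below; update_avg() maintains a simple running average of the
sampled scan time):

    cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr); /* no longer timed */

    if (sched_feat(SIS_PROP)) {
        /* ... compute nr ... */
        time = cpu_clock(this);   /* clock starts after the mask setup */
    }

    for_each_cpu_wrap(cpu, cpus, target) {
        /* ... scan ... */
    }

    if (sched_feat(SIS_PROP)) {
        time = cpu_clock(this) - time;
        update_avg(&this_sd->avg_scan_cost, time);
    }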
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9f5682aeda2e..c8d8e185cf3b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6153,6 +6153,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
if (!this_sd)
return -1;
+ cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
if (sched_feat(SIS_PROP)) {
u64 avg_cost, avg_idle, span_avg;
@@ -6168,11 +6170,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
nr = div_u64(span_avg, avg_cost);
else
nr = 4;
- }
-
- time = cpu_clock(this);
- cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+ time = cpu_clock(this);
+ }
for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
@@ -6181,8 +6181,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
break;
}
- time = cpu_clock(this) - time;
- update_avg(&this_sd->avg_scan_cost, time);
+ if (sched_feat(SIS_PROP)) {
+ time = cpu_clock(this) - time;
+ update_avg(&this_sd->avg_scan_cost, time);
+ }
return cpu;
}
--
2.26.2
* [PATCH 3/5] sched/fair: Make select_idle_cpu() proportional to cores
From: Mel Gorman @ 2021-01-15 10:08 UTC
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Li Aubrey, Qais Yousef, LKML, Mel Gorman
From: Peter Zijlstra (Intel) <peterz@infradead.org>
Instead of calculating how many (logical) CPUs to scan, compute how
many cores to scan.
This changes behaviour for anything !SMT2.
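As a worked example of the new minimum scan depth: the floor used to be
nr = 4 logical CPUs regardless of topology. It now becomes
sis_min_cores * sched_smt_weight, i.e. 2 * 2 = 4 CPUs on SMT2 (unchanged)
but 2 * 4 = 8 CPUs on an SMT4 machine, two full cores in both cases.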
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/core.c | 18 +++++++++++++-----
kernel/sched/fair.c | 12 ++++++++++--
kernel/sched/sched.h | 2 ++
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 15d2562118d1..ada8faac2e4d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7444,11 +7444,19 @@ int sched_cpu_activate(unsigned int cpu)
balance_push_set(cpu, false);
#ifdef CONFIG_SCHED_SMT
- /*
- * When going up, increment the number of cores with SMT present.
- */
- if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
- static_branch_inc_cpuslocked(&sched_smt_present);
+ do {
+ int weight = cpumask_weight(cpu_smt_mask(cpu));
+
+ if (weight > sched_smt_weight)
+ sched_smt_weight = weight;
+
+ /*
+ * When going up, increment the number of cores with SMT present.
+ */
+ if (weight == 2)
+ static_branch_inc_cpuslocked(&sched_smt_present);
+
+ } while (0);
#endif
set_cpu_active(cpu, true);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c8d8e185cf3b..0811e2fe4f19 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6010,6 +6010,8 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
DEFINE_STATIC_KEY_FALSE(sched_smt_present);
EXPORT_SYMBOL_GPL(sched_smt_present);
+int sched_smt_weight __read_mostly = 1;
+
static inline void set_idle_cores(int cpu, int val)
{
struct sched_domain_shared *sds;
@@ -6124,6 +6126,8 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
#else /* CONFIG_SCHED_SMT */
+#define sched_smt_weight 1
+
static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
{
return -1;
@@ -6136,6 +6140,8 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
#endif /* CONFIG_SCHED_SMT */
+#define sis_min_cores 2
+
/*
* Scan the LLC domain for idle CPUs; this is dynamically regulated by
* comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6166,10 +6172,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
avg_cost = this_sd->avg_scan_cost + 1;
span_avg = sd->span_weight * avg_idle;
- if (span_avg > 4*avg_cost)
+ if (span_avg > sis_min_cores*avg_cost)
nr = div_u64(span_avg, avg_cost);
else
- nr = 4;
+ nr = sis_min_cores;
+
+ nr *= sched_smt_weight;
time = cpu_clock(this);
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 12ada79d40f3..29aabe98dd1d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1107,6 +1107,8 @@ static inline void update_idle_core(struct rq *rq)
__update_idle_core(rq);
}
+extern int sched_smt_weight;
+
#else
static inline void update_idle_core(struct rq *rq) { }
#endif
--
2.26.2
* Re: [PATCH 3/5] sched/fair: Make select_idle_cpu() proportional to cores
From: Li, Aubrey @ 2021-01-18 8:14 UTC
To: Mel Gorman, Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Qais Yousef, LKML
On 2021/1/15 18:08, Mel Gorman wrote:
> From: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Instead of calculating how many (logical) CPUs to scan, compute how
> many cores to scan.
>
> This changes behaviour for anything !SMT2.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> kernel/sched/core.c | 18 +++++++++++++-----
> kernel/sched/fair.c | 12 ++++++++++--
> kernel/sched/sched.h | 2 ++
> 3 files changed, 25 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 15d2562118d1..ada8faac2e4d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7444,11 +7444,19 @@ int sched_cpu_activate(unsigned int cpu)
> balance_push_set(cpu, false);
>
> #ifdef CONFIG_SCHED_SMT
> - /*
> - * When going up, increment the number of cores with SMT present.
> - */
> - if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> - static_branch_inc_cpuslocked(&sched_smt_present);
> + do {
> + int weight = cpumask_weight(cpu_smt_mask(cpu));
> +
> + if (weight > sched_smt_weight)
> + sched_smt_weight = weight;
> +
> + /*
> + * When going up, increment the number of cores with SMT present.
> + */
> + if (weight == 2)
> + static_branch_inc_cpuslocked(&sched_smt_present);
> +
> + } while (0);
> #endif
> set_cpu_active(cpu, true);
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c8d8e185cf3b..0811e2fe4f19 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6010,6 +6010,8 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> EXPORT_SYMBOL_GPL(sched_smt_present);
>
> +int sched_smt_weight __read_mostly = 1;
> +
> static inline void set_idle_cores(int cpu, int val)
> {
> struct sched_domain_shared *sds;
> @@ -6124,6 +6126,8 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>
> #else /* CONFIG_SCHED_SMT */
>
> +#define sched_smt_weight 1
> +
> static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
> {
> return -1;
> @@ -6136,6 +6140,8 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
>
> #endif /* CONFIG_SCHED_SMT */
>
> +#define sis_min_cores 2
> +
> /*
> * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6166,10 +6172,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> avg_cost = this_sd->avg_scan_cost + 1;
>
> span_avg = sd->span_weight * avg_idle;
> - if (span_avg > 4*avg_cost)
> + if (span_avg > sis_min_cores*avg_cost)
> nr = div_u64(span_avg, avg_cost);
> else
> - nr = 4;
> + nr = sis_min_cores;
> +
> + nr *= sched_smt_weight;
Would it be better to put this into an inline wrapper to hide sched_smt_weight when !CONFIG_SCHED_SMT?
Thanks,
-Aubrey
* Re: [PATCH 3/5] sched/fair: Make select_idle_cpu() proportional to cores
From: Mel Gorman @ 2021-01-18 9:27 UTC
To: Li, Aubrey
Cc: Peter Zijlstra, Ingo Molnar, Vincent Guittot, Qais Yousef, LKML
On Mon, Jan 18, 2021 at 04:14:36PM +0800, Li, Aubrey wrote:
> > <SNIP>
> > @@ -6124,6 +6126,8 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
> >
> > #else /* CONFIG_SCHED_SMT */
> >
> > +#define sched_smt_weight 1
> > +
> > static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
> > {
> > return -1;
> >
> > <SNIP>
> >
> > @@ -6166,10 +6172,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > avg_cost = this_sd->avg_scan_cost + 1;
> >
> > span_avg = sd->span_weight * avg_idle;
> > - if (span_avg > 4*avg_cost)
> > + if (span_avg > sis_min_cores*avg_cost)
> > nr = div_u64(span_avg, avg_cost);
> > else
> > - nr = 4;
> > + nr = sis_min_cores;
> > +
> > + nr *= sched_smt_weight;
>
> Would it be better to put this into an inline wrapper to hide sched_smt_weight when !CONFIG_SCHED_SMT?
>
There already is a #define sched_smt_weight for !CONFIG_SCHED_SMT and I
do not think an inline wrapper would make it more readable or maintainable.
--
Mel Gorman
SUSE Labs
* [PATCH 4/5] sched/fair: Remove select_idle_smt()
From: Mel Gorman @ 2021-01-15 10:08 UTC
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Li Aubrey, Qais Yousef, LKML, Mel Gorman
From: Peter Zijlstra (Intel) <peterz@infradead.org>
In order to make the next patch more readable, and to quantify the
actual effectiveness of this pass, start by removing it.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 30 ------------------------------
1 file changed, 30 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0811e2fe4f19..12e08da90024 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6103,27 +6103,6 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
return -1;
}
-/*
- * Scan the local SMT mask for idle CPUs.
- */
-static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
-{
- int cpu;
-
- if (!static_branch_likely(&sched_smt_present))
- return -1;
-
- for_each_cpu(cpu, cpu_smt_mask(target)) {
- if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
- !cpumask_test_cpu(cpu, sched_domain_span(sd)))
- continue;
- if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
- return cpu;
- }
-
- return -1;
-}
-
#else /* CONFIG_SCHED_SMT */
#define sched_smt_weight 1
@@ -6133,11 +6112,6 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
return -1;
}
-static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
-{
- return -1;
-}
-
#endif /* CONFIG_SCHED_SMT */
#define sis_min_cores 2
@@ -6331,10 +6305,6 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
if ((unsigned)i < nr_cpumask_bits)
return i;
- i = select_idle_smt(p, sd, target);
- if ((unsigned)i < nr_cpumask_bits)
- return i;
-
return target;
}
--
2.26.2
* [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()
From: Mel Gorman @ 2021-01-15 10:08 UTC
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Li Aubrey, Qais Yousef, LKML, Mel Gorman
From: Peter Zijlstra (Intel) <peterz@infradead.org>
Both select_idle_core() and select_idle_cpu() do a loop over the same
cpumask. Observe that by clearing the already visited CPUs, we can
fold the iteration and iterate a core at a time.
All we need to do is remember any idle CPU we encountered while
scanning for an idle core. This way we'll only iterate every CPU once.
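The shape of the merged scan ends up as below (a condensed sketch of the
diff that follows):

    for_each_cpu_wrap(cpu, cpus, target) {
        if (!--nr)
            return -1;
        if (smt) {
            /*
             * Scan cpu's whole core; clears the core from cpus and
             * remembers any idle CPU it saw via idle_cpu.
             */
            i = select_idle_core(p, cpu, cpus, &idle_cpu);
            if ((unsigned int)i < nr_cpumask_bits)
                return i;  /* found a fully idle core */
        } else {
            i = __select_idle_cpu(p, cpu, cpus);
            if ((unsigned int)i < nr_cpumask_bits) {
                idle_cpu = i;
                break;
            }
        }
    }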
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 97 +++++++++++++++++++++++++++------------------
1 file changed, 59 insertions(+), 38 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 12e08da90024..6c0f841e9e75 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6006,6 +6006,14 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
return new_cpu;
}
+static inline int __select_idle_cpu(struct task_struct *p, int core, struct cpumask *cpus)
+{
+ if (available_idle_cpu(core) || sched_idle_cpu(core))
+ return core;
+
+ return -1;
+}
+
#ifdef CONFIG_SCHED_SMT
DEFINE_STATIC_KEY_FALSE(sched_smt_present);
EXPORT_SYMBOL_GPL(sched_smt_present);
@@ -6066,40 +6074,34 @@ void __update_idle_core(struct rq *rq)
* there are no idle cores left in the system; tracked through
* sd_llc->shared->has_idle_cores and enabled through update_idle_core() above.
*/
-static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
{
- struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
- int core, cpu;
+ bool idle = true;
+ int cpu;
if (!static_branch_likely(&sched_smt_present))
- return -1;
-
- if (!test_idle_cores(target, false))
- return -1;
-
- cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+ return __select_idle_cpu(p, core, cpus);
- for_each_cpu_wrap(core, cpus, target) {
- bool idle = true;
-
- for_each_cpu(cpu, cpu_smt_mask(core)) {
- if (!available_idle_cpu(cpu)) {
- idle = false;
- break;
+ for_each_cpu(cpu, cpu_smt_mask(core)) {
+ if (!available_idle_cpu(cpu)) {
+ idle = false;
+ if (*idle_cpu == -1) {
+ if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, p->cpus_ptr)) {
+ *idle_cpu = cpu;
+ break;
+ }
+ continue;
}
+ break;
}
-
- if (idle)
- return core;
-
- cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
+ if (*idle_cpu == -1 && cpumask_test_cpu(cpu, p->cpus_ptr))
+ *idle_cpu = cpu;
}
- /*
- * Failed to find an idle core; stop looking for one.
- */
- set_idle_cores(target, 0);
+ if (idle)
+ return core;
+ cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
return -1;
}
@@ -6107,9 +6109,18 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
#define sched_smt_weight 1
-static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
+static inline void set_idle_cores(int cpu, int val)
{
- return -1;
+}
+
+static inline bool test_idle_cores(int cpu, bool def)
+{
+ return def;
+}
+
+static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
+{
+ return __select_idle_cpu(p, core, cpus);
}
#endif /* CONFIG_SCHED_SMT */
@@ -6124,10 +6135,11 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+ int i, cpu, idle_cpu = -1, nr = INT_MAX;
+ bool smt = test_idle_cores(target, false);
+ int this = smp_processor_id();
struct sched_domain *this_sd;
u64 time;
- int this = smp_processor_id();
- int cpu, nr = INT_MAX;
this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
if (!this_sd)
@@ -6135,7 +6147,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
- if (sched_feat(SIS_PROP)) {
+ if (sched_feat(SIS_PROP) && !smt) {
u64 avg_cost, avg_idle, span_avg;
/*
@@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
return -1;
- if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
- break;
+ if (smt) {
+ i = select_idle_core(p, cpu, cpus, &idle_cpu);
+ if ((unsigned int)i < nr_cpumask_bits)
+ return i;
+
+ } else {
+ i = __select_idle_cpu(p, cpu, cpus);
+ if ((unsigned int)i < nr_cpumask_bits) {
+ idle_cpu = i;
+ break;
+ }
+ }
}
- if (sched_feat(SIS_PROP)) {
+ if (smt)
+ set_idle_cores(this, false);
+
+ if (sched_feat(SIS_PROP) && !smt) {
time = cpu_clock(this) - time;
update_avg(&this_sd->avg_scan_cost, time);
}
- return cpu;
+ return idle_cpu;
}
/*
@@ -6297,10 +6322,6 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
if (!sd)
return target;
- i = select_idle_core(p, sd, target);
- if ((unsigned)i < nr_cpumask_bits)
- return i;
-
i = select_idle_cpu(p, sd, target);
if ((unsigned)i < nr_cpumask_bits)
return i;
--
2.26.2
* Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()
From: Li, Aubrey @ 2021-01-18 12:55 UTC
To: Mel Gorman, Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Qais Yousef, LKML
On 2021/1/15 18:08, Mel Gorman wrote:
> From: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Both select_idle_core() and select_idle_cpu() do a loop over the same
> cpumask. Observe that by clearing the already visited CPUs, we can
> fold the iteration and iterate a core at a time.
>
> All we need to do is remember any idle CPU we encountered while
> scanning for an idle core. This way we'll only iterate every CPU once.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> kernel/sched/fair.c | 97 +++++++++++++++++++++++++++------------------
> 1 file changed, 59 insertions(+), 38 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 12e08da90024..6c0f841e9e75 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6006,6 +6006,14 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> return new_cpu;
> }
>
> +static inline int __select_idle_cpu(struct task_struct *p, int core, struct cpumask *cpus)
Sorry if I missed anything, but why are p and cpus needed here?
> +{
> + if (available_idle_cpu(core) || sched_idle_cpu(core))
> + return core;
> +
> + return -1;
> +}
> +
> #ifdef CONFIG_SCHED_SMT
> DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> EXPORT_SYMBOL_GPL(sched_smt_present);
> @@ -6066,40 +6074,34 @@ void __update_idle_core(struct rq *rq)
> * there are no idle cores left in the system; tracked through
> * sd_llc->shared->has_idle_cores and enabled through update_idle_core() above.
> */
> -static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
> +static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
> {
> - struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> - int core, cpu;
> + bool idle = true;
> + int cpu;
>
> if (!static_branch_likely(&sched_smt_present))
> - return -1;
> -
> - if (!test_idle_cores(target, false))
> - return -1;
> -
> - cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> + return __select_idle_cpu(p, core, cpus);
>
> - for_each_cpu_wrap(core, cpus, target) {
> - bool idle = true;
> -
> - for_each_cpu(cpu, cpu_smt_mask(core)) {
> - if (!available_idle_cpu(cpu)) {
> - idle = false;
> - break;
> + for_each_cpu(cpu, cpu_smt_mask(core)) {
> + if (!available_idle_cpu(cpu)) {
> + idle = false;
> + if (*idle_cpu == -1) {
> + if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, p->cpus_ptr)) {
> + *idle_cpu = cpu;
> + break;
> + }
> + continue;
> }
> + break;
> }
> -
> - if (idle)
> - return core;
> -
> - cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
> + if (*idle_cpu == -1 && cpumask_test_cpu(cpu, p->cpus_ptr))
> + *idle_cpu = cpu;
> }
>
> - /*
> - * Failed to find an idle core; stop looking for one.
> - */
> - set_idle_cores(target, 0);
> + if (idle)
> + return core;
>
> + cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
> return -1;
> }
>
> @@ -6107,9 +6109,18 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>
> #define sched_smt_weight 1
>
> -static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
> +static inline void set_idle_cores(int cpu, int val)
> {
> - return -1;
> +}
> +
> +static inline bool test_idle_cores(int cpu, bool def)
> +{
> + return def;
> +}
> +
> +static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
> +{
> + return __select_idle_cpu(p, core, cpus);
> }
>
> #endif /* CONFIG_SCHED_SMT */
> @@ -6124,10 +6135,11 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
> static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
> {
> struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> + int i, cpu, idle_cpu = -1, nr = INT_MAX;
> + bool smt = test_idle_cores(target, false);
> + int this = smp_processor_id();
> struct sched_domain *this_sd;
> u64 time;
> - int this = smp_processor_id();
> - int cpu, nr = INT_MAX;
>
> this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
> if (!this_sd)
> @@ -6135,7 +6147,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
> cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
>
> - if (sched_feat(SIS_PROP)) {
> + if (sched_feat(SIS_PROP) && !smt) {
Is it possible that the system does have an idle core, but we still don't want to scan the entire LLC domain?
> u64 avg_cost, avg_idle, span_avg;
>
> /*
> @@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> for_each_cpu_wrap(cpu, cpus, target) {
> if (!--nr)
> return -1;
It looks like nr only makes sense when smt == false now; can it be moved into the else branch below?
> - if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> - break;
> + if (smt) {
> + i = select_idle_core(p, cpu, cpus, &idle_cpu);
> + if ((unsigned int)i < nr_cpumask_bits)
> + return i;
What if the last idle core is selected here? Should we set idle_cores to false before returning?
> +
> + } else {
> + i = __select_idle_cpu(p, cpu, cpus);
> + if ((unsigned int)i < nr_cpumask_bits) {
> + idle_cpu = i;
> + break;
> + }
> + }
> }
>
> - if (sched_feat(SIS_PROP)) {
> + if (smt)
> + set_idle_cores(this, false);
> +
> + if (sched_feat(SIS_PROP) && !smt) {
> time = cpu_clock(this) - time;
> update_avg(&this_sd->avg_scan_cost, time);
> }
>
> - return cpu;
> + return idle_cpu;
> }
>
> /*
> @@ -6297,10 +6322,6 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> if (!sd)
> return target;
>
> - i = select_idle_core(p, sd, target);
> - if ((unsigned)i < nr_cpumask_bits)
> - return i;
> -
> i = select_idle_cpu(p, sd, target);
> if ((unsigned)i < nr_cpumask_bits)
> return i;
>
* Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()
From: Mel Gorman @ 2021-01-18 14:41 UTC
To: Li, Aubrey
Cc: Peter Zijlstra, Ingo Molnar, Vincent Guittot, Qais Yousef, LKML
On Mon, Jan 18, 2021 at 08:55:03PM +0800, Li, Aubrey wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 12e08da90024..6c0f841e9e75 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6006,6 +6006,14 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> > return new_cpu;
> > }
> >
> > +static inline int __select_idle_cpu(struct task_struct *p, int core, struct cpumask *cpus)
>
> Sorry if I missed anything, but why are p and cpus needed here?
>
They are not needed. The original code was matching the calling pattern
of select_idle_core(), which needs p and cpus to check whether sibling
CPUs are allowed.
> > @@ -6135,7 +6147,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> >
> > cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> >
> > - if (sched_feat(SIS_PROP)) {
> > + if (sched_feat(SIS_PROP) && !smt) {
>
> Is it possible that the system does have an idle core, but we still don't want to scan the entire LLC domain?
>
This version matches historical behaviour. To limit the scan for cores,
select_idle_core() would need to obey SIS_PROP, and that patch was dropped
as it introduced regressions. It would only be reconsidered once SIS_PROP
had better metrics for limiting the depth of the search.
> > u64 avg_cost, avg_idle, span_avg;
> >
> > /*
> > @@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > for_each_cpu_wrap(cpu, cpus, target) {
> > if (!--nr)
> > return -1;
>
> It looks like nr only makes sense when smt == false now; can it be moved into the else branch below?
>
It can. I expect the saving to be marginal and it will need to move back
when/if select_idle_core() obeys SIS_PROP.
> > - if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > - break;
> > + if (smt) {
> > + i = select_idle_core(p, cpu, cpus, &idle_cpu);
> > + if ((unsigned int)i < nr_cpumask_bits)
> > + return i;
>
> What if the last idle core is selected here? Should we set idle_cores to false before returning?
>
We'd have to check what bits were still set in the cpus mask and
determine if they represent an idle core. I severely doubt it would be
worth the cost given that the availability of idle cores can change at
any instant.
--
Mel Gorman
SUSE Labs