* [PATCH v2 1/5] sched/fair: remove redundant check in select_idle_smt
@ 2022-09-01 13:11 Abel Wu
2022-09-01 13:11 ` [PATCH v2 2/5] sched/fair: avoid double search on same cpu Abel Wu
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Abel Wu @ 2022-09-01 13:11 UTC (permalink / raw)
To: Peter Zijlstra, Mel Gorman, Vincent Guittot
Cc: Josh Don, Chen Yu, Yicong Yang, linux-kernel, Abel Wu, Mel Gorman
If two cpus share the LLC, then the two cores they belong to
are also in the same LLC domain, which makes the extra
sched_domain_span() check in select_idle_smt() redundant.
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Reviewed-by: Josh Don <joshdon@google.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index efceb670e755..9657c7de5f57 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6350,14 +6350,11 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
/*
* Scan the local SMT mask for idle CPUs.
*/
-static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_smt(struct task_struct *p, int target)
{
int cpu;
- for_each_cpu(cpu, cpu_smt_mask(target)) {
- if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
- !cpumask_test_cpu(cpu, sched_domain_span(sd)))
- continue;
+ for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
return cpu;
}
@@ -6381,7 +6378,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
return __select_idle_cpu(core, p);
}
-static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+static inline int select_idle_smt(struct task_struct *p, int target)
{
return -1;
}
@@ -6615,7 +6612,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
has_idle_core = test_idle_cores(target, false);
if (!has_idle_core && cpus_share_cache(prev, target)) {
- i = select_idle_smt(p, sd, prev);
+ i = select_idle_smt(p, prev);
if ((unsigned int)i < nr_cpumask_bits)
return i;
}
--
2.31.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v2 2/5] sched/fair: avoid double search on same cpu
2022-09-01 13:11 [PATCH v2 1/5] sched/fair: remove redundant check in select_idle_smt Abel Wu
@ 2022-09-01 13:11 ` Abel Wu
2022-09-01 13:11 ` [PATCH v2 3/5] sched/fair: remove useless check in select_idle_core Abel Wu
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Abel Wu @ 2022-09-01 13:11 UTC (permalink / raw)
To: Peter Zijlstra, Mel Gorman, Vincent Guittot
Cc: Josh Don, Chen Yu, Yicong Yang, linux-kernel, Abel Wu, Mel Gorman
The prev cpu is checked at the beginning of SIS, and it is
unlikely to have become idle again by the time select_idle_smt()
checks it a second time. So we'd better focus on its SMT siblings.
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Reviewed-by: Josh Don <joshdon@google.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9657c7de5f57..1ad79aaaaf93 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6355,6 +6355,8 @@ static int select_idle_smt(struct task_struct *p, int target)
int cpu;
for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
+ if (cpu == target)
+ continue;
if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
return cpu;
}
--
2.31.1
* [PATCH v2 3/5] sched/fair: remove useless check in select_idle_core
2022-09-01 13:11 [PATCH v2 1/5] sched/fair: remove redundant check in select_idle_smt Abel Wu
2022-09-01 13:11 ` [PATCH v2 2/5] sched/fair: avoid double search on same cpu Abel Wu
@ 2022-09-01 13:11 ` Abel Wu
2022-09-01 13:11 ` [PATCH v2 4/5] sched/fair: default to false in test_idle_cores Abel Wu
2022-09-01 13:11 ` [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP Abel Wu
3 siblings, 0 replies; 9+ messages in thread
From: Abel Wu @ 2022-09-01 13:11 UTC (permalink / raw)
To: Peter Zijlstra, Mel Gorman, Vincent Guittot
Cc: Josh Don, Chen Yu, Yicong Yang, linux-kernel, Abel Wu, Mel Gorman
The function only gets called when sds->has_idle_cores is true,
which is only possible when sched_smt_present is enabled.
This change also aligns select_idle_core() with select_idle_smt():
the caller does the check if necessary.
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ad79aaaaf93..03ce65068333 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6321,9 +6321,6 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
bool idle = true;
int cpu;
- if (!static_branch_likely(&sched_smt_present))
- return __select_idle_cpu(core, p);
-
for_each_cpu(cpu, cpu_smt_mask(core)) {
if (!available_idle_cpu(cpu)) {
idle = false;
--
2.31.1
* [PATCH v2 4/5] sched/fair: default to false in test_idle_cores
2022-09-01 13:11 [PATCH v2 1/5] sched/fair: remove redundant check in select_idle_smt Abel Wu
2022-09-01 13:11 ` [PATCH v2 2/5] sched/fair: avoid double search on same cpu Abel Wu
2022-09-01 13:11 ` [PATCH v2 3/5] sched/fair: remove useless check in select_idle_core Abel Wu
@ 2022-09-01 13:11 ` Abel Wu
2022-09-01 13:11 ` [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP Abel Wu
3 siblings, 0 replies; 9+ messages in thread
From: Abel Wu @ 2022-09-01 13:11 UTC (permalink / raw)
To: Peter Zijlstra, Mel Gorman, Vincent Guittot
Cc: Josh Don, Chen Yu, Yicong Yang, linux-kernel, Abel Wu, Mel Gorman
It's uncertain whether idle cores exist if the shared sched-
domains are not ready, so returning "no idle cores" usually
makes sense.
__update_idle_core() is an exception: it checks the status
of this core and writes the result back to the shared sched-
domain if necessary. So the whole logic already depends on
the existence of sds, and can bail out early if !sds.
This is a little tricky, and as Josh pointed out, the !sds
case is transient while the domain isn't ready. So remove
the caller-supplied default value to make things clearer,
at the cost of negligible overhead in the idle path.
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Reviewed-by: Josh Don <joshdon@google.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
kernel/sched/fair.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 03ce65068333..23b020c3d3a0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1588,11 +1588,11 @@ numa_type numa_classify(unsigned int imbalance_pct,
#ifdef CONFIG_SCHED_SMT
/* Forward declarations of select_idle_sibling helpers */
-static inline bool test_idle_cores(int cpu, bool def);
+static inline bool test_idle_cores(int cpu);
static inline int numa_idle_core(int idle_core, int cpu)
{
if (!static_branch_likely(&sched_smt_present) ||
- idle_core >= 0 || !test_idle_cores(cpu, false))
+ idle_core >= 0 || !test_idle_cores(cpu))
return idle_core;
/*
@@ -6271,7 +6271,7 @@ static inline void set_idle_cores(int cpu, int val)
WRITE_ONCE(sds->has_idle_cores, val);
}
-static inline bool test_idle_cores(int cpu, bool def)
+static inline bool test_idle_cores(int cpu)
{
struct sched_domain_shared *sds;
@@ -6279,7 +6279,7 @@ static inline bool test_idle_cores(int cpu, bool def)
if (sds)
return READ_ONCE(sds->has_idle_cores);
- return def;
+ return false;
}
/*
@@ -6295,7 +6295,7 @@ void __update_idle_core(struct rq *rq)
int cpu;
rcu_read_lock();
- if (test_idle_cores(core, true))
+ if (test_idle_cores(core))
goto unlock;
for_each_cpu(cpu, cpu_smt_mask(core)) {
@@ -6367,9 +6367,9 @@ static inline void set_idle_cores(int cpu, int val)
{
}
-static inline bool test_idle_cores(int cpu, bool def)
+static inline bool test_idle_cores(int cpu)
{
- return def;
+ return false;
}
static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
@@ -6608,7 +6608,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
return target;
if (sched_smt_active()) {
- has_idle_core = test_idle_cores(target, false);
+ has_idle_core = test_idle_cores(target);
if (!has_idle_core && cpus_share_cache(prev, target)) {
i = select_idle_smt(p, prev);
--
2.31.1
* [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP
2022-09-01 13:11 [PATCH v2 1/5] sched/fair: remove redundant check in select_idle_smt Abel Wu
` (2 preceding siblings ...)
2022-09-01 13:11 ` [PATCH v2 4/5] sched/fair: default to false in test_idle_cores Abel Wu
@ 2022-09-01 13:11 ` Abel Wu
2022-09-01 14:03 ` Mel Gorman
3 siblings, 1 reply; 9+ messages in thread
From: Abel Wu @ 2022-09-01 13:11 UTC (permalink / raw)
To: Peter Zijlstra, Mel Gorman, Vincent Guittot
Cc: Josh Don, Chen Yu, Yicong Yang, linux-kernel, Abel Wu
The sched-domain of this cpu is only used when SIS_PROP is enabled,
and it should be irrelevant whether the local sd_llc is valid or
not, since all we care about is target sd_llc if !SIS_PROP.
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
---
kernel/sched/fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 23b020c3d3a0..3561b18bfe9f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6399,16 +6399,16 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
struct sched_domain *this_sd;
u64 time = 0;
- this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
- if (!this_sd)
- return -1;
-
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
if (sched_feat(SIS_PROP) && !has_idle_core) {
u64 avg_cost, avg_idle, span_avg;
unsigned long now = jiffies;
+ this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
+ if (!this_sd)
+ return -1;
+
/*
* If we're busy, the assumption that the last idle period
* predicts the future is flawed; age away the remaining
--
2.31.1
* Re: [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP
2022-09-01 13:11 ` [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP Abel Wu
@ 2022-09-01 14:03 ` Mel Gorman
2022-09-02 3:28 ` Abel Wu
2022-09-07 7:59 ` Abel Wu
0 siblings, 2 replies; 9+ messages in thread
From: Mel Gorman @ 2022-09-01 14:03 UTC (permalink / raw)
To: Abel Wu
Cc: Peter Zijlstra, Vincent Guittot, Josh Don, Chen Yu, Yicong Yang,
linux-kernel
On Thu, Sep 01, 2022 at 09:11:07PM +0800, Abel Wu wrote:
> The sched-domain of this cpu is only used when SIS_PROP is enabled,
> and it should be irrelevant whether the local sd_llc is valid or
> not, since all we care about is target sd_llc if !SIS_PROP.
>
> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
This could conceivably result in an uninitialised memory access if
SIS_PROP was enabled while select_idle_cpu is running. I'm not sure if
it can happen when jump labels are in use but I think it could happen
for !CONFIG_JUMP_LABEL updating the sysctl_sched_features bitmap updated
via sysctl.
The patch is still a good idea because it moves an unlikely rcu_deference
out of the default path for sched features but either this_sd needs to
be initialised to NULL and checked or the this_sd lookup needs to happen
twice at a slight additional cost to the default-disabled SIS_PROP path.
--
Mel Gorman
SUSE Labs
* Re: Re: [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP
2022-09-01 14:03 ` Mel Gorman
@ 2022-09-02 3:28 ` Abel Wu
2022-09-07 7:59 ` Abel Wu
1 sibling, 0 replies; 9+ messages in thread
From: Abel Wu @ 2022-09-02 3:28 UTC (permalink / raw)
To: Mel Gorman
Cc: Peter Zijlstra, Vincent Guittot, Josh Don, Chen Yu, Yicong Yang,
linux-kernel
On 9/1/22 10:03 PM, Mel Gorman Wrote:
> On Thu, Sep 01, 2022 at 09:11:07PM +0800, Abel Wu wrote:
>> The sched-domain of this cpu is only used when SIS_PROP is enabled,
>> and it should be irrelevant whether the local sd_llc is valid or
>> not, since all we care about is target sd_llc if !SIS_PROP.
>>
>> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
>
> This could conceivably result in an uninitialised memory access if
> SIS_PROP was enabled while select_idle_cpu is running. I'm not sure if
> it can happen when jump labels are in use but I think it could happen
> for !CONFIG_JUMP_LABEL updating the sysctl_sched_features bitmap updated
> via sysctl.
Nice catch!
>
> The patch is still a good idea because it moves an unlikely rcu_deference
> out of the default path for sched features but either this_sd needs to
> be initialised to NULL and checked or the this_sd lookup needs to happen
> twice at a slight additional cost to the default-disabled SIS_PROP path.
>
I'd prefer the former.
Thanks & BR,
Abel
* Re: [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP
2022-09-01 14:03 ` Mel Gorman
2022-09-02 3:28 ` Abel Wu
@ 2022-09-07 7:59 ` Abel Wu
2022-09-07 8:51 ` Mel Gorman
1 sibling, 1 reply; 9+ messages in thread
From: Abel Wu @ 2022-09-07 7:59 UTC (permalink / raw)
To: Mel Gorman
Cc: Peter Zijlstra, Vincent Guittot, Josh Don, Chen Yu, Yicong Yang,
linux-kernel
On 9/1/22 10:03 PM, Mel Gorman wrote:
> On Thu, Sep 01, 2022 at 09:11:07PM +0800, Abel Wu wrote:
>> The sched-domain of this cpu is only used when SIS_PROP is enabled,
>> and it should be irrelevant whether the local sd_llc is valid or
>> not, since all we care about is target sd_llc if !SIS_PROP.
>>
>> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
>
> This could conceivably result in an uninitialised memory access if
> SIS_PROP was enabled while select_idle_cpu is running. I'm not sure if
> it can happen when jump labels are in use but I think it could happen
> for !CONFIG_JUMP_LABEL updating the sysctl_sched_features bitmap updated
> via sysctl.
>
> The patch is still a good idea because it moves an unlikely rcu_deference
> out of the default path for sched features but either this_sd needs to
> be initialised to NULL and checked or the this_sd lookup needs to happen
> twice at a slight additional cost to the default-disabled SIS_PROP path.
>
Hi Mel, please check the following resent patch, Thanks!
https://lore.kernel.org/lkml/20220902033032.79846-5-wuyun.abel@bytedance.com/
* Re: [PATCH v2 5/5] sched/fair: cleanup for SIS_PROP
2022-09-07 7:59 ` Abel Wu
@ 2022-09-07 8:51 ` Mel Gorman
0 siblings, 0 replies; 9+ messages in thread
From: Mel Gorman @ 2022-09-07 8:51 UTC (permalink / raw)
To: Abel Wu
Cc: Peter Zijlstra, Vincent Guittot, Josh Don, Chen Yu, Yicong Yang,
linux-kernel
On Wed, Sep 07, 2022 at 03:59:33PM +0800, Abel Wu wrote:
> On 9/1/22 10:03 PM, Mel Gorman wrote:
> > On Thu, Sep 01, 2022 at 09:11:07PM +0800, Abel Wu wrote:
> > > The sched-domain of this cpu is only used when SIS_PROP is enabled,
> > > and it should be irrelevant whether the local sd_llc is valid or
> > > not, since all we care about is target sd_llc if !SIS_PROP.
> > >
> > > Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
> >
> > This could conceivably result in an uninitialised memory access if
> > SIS_PROP was enabled while select_idle_cpu is running. I'm not sure if
> > it can happen when jump labels are in use but I think it could happen
> > for !CONFIG_JUMP_LABEL updating the sysctl_sched_features bitmap updated
> > via sysctl.
> >
> > The patch is still a good idea because it moves an unlikely rcu_deference
> > out of the default path for sched features but either this_sd needs to
> > be initialised to NULL and checked or the this_sd lookup needs to happen
> > twice at a slight additional cost to the default-disabled SIS_PROP path.
> >
>
> Hi Mel, please check the following resent patch, Thanks!
>
> https://lore.kernel.org/lkml/20220902033032.79846-5-wuyun.abel@bytedance.com/
Weird, I don't remember seeing this patch even though I'm cc'd on it. It
looks fine, so even though it's the wrong thread:
Acked-by: Mel Gorman <mgorman@suse.de>
--
Mel Gorman
SUSE Labs