* [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
@ 2020-08-24 12:30 Xunlei Pang
  2020-08-24 13:38 ` Srikar Dronamraju
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Xunlei Pang @ 2020-08-24 12:30 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli, Wetp Zhang
  Cc: linux-kernel

We've occasionally seen tasks with a full cpumask (e.g. because they
were put into a cpuset or had their affinity set to all CPUs) get
migrated to our isolated cpus in our production environment.

After some analysis, we found that this is because the current
select_idle_smt() does not take the sched_domain mask into account.

Fix it by also checking the sched_domain span in select_idle_smt().

Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings()")
Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
---
 kernel/sched/fair.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..fa942c4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 /*
  * Scan the local SMT mask for idle CPUs.
  */
-static int select_idle_smt(struct task_struct *p, int target)
+static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	int cpu;
 
@@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
 		return -1;
 
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
-		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
+		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
 			continue;
 		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
 			return cpu;
@@ -6099,7 +6100,7 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
 	return -1;
 }
 
-static inline int select_idle_smt(struct task_struct *p, int target)
+static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	return -1;
 }
@@ -6274,7 +6275,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
-	i = select_idle_smt(p, target);
+	i = select_idle_smt(p, sd, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
-- 
1.8.3.1
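
For readability, here is how the scanning function would read with the
patch applied, reconstructed from the diff hunks above. The
sched_smt_present check at the top is an assumption taken from the
surrounding kernel code of this era, since it is not visible in the
hunks:

/*
 * Scan the local SMT mask for idle CPUs.
 */
static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
{
	int cpu;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	for_each_cpu(cpu, cpu_smt_mask(target)) {
		/*
		 * Skip CPUs the task is not allowed on, and CPUs outside
		 * the LLC domain span, e.g. domain-isolated SMT siblings.
		 */
		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
			continue;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}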



* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-24 12:30 [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain Xunlei Pang
@ 2020-08-24 13:38 ` Srikar Dronamraju
  2020-08-25  2:11   ` xunlei
  2020-08-25  6:37 ` Jiang Biao
  2020-08-28  2:53 ` Xunlei Pang
  2 siblings, 1 reply; 8+ messages in thread
From: Srikar Dronamraju @ 2020-08-24 13:38 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

* Xunlei Pang <xlpang@linux.alibaba.com> [2020-08-24 20:30:19]:

> We've met problems that occasionally tasks with full cpumask
> (e.g. by putting it into a cpuset or setting to full affinity)
> were migrated to our isolated cpus in production environment.
> 
> After some analysis, we found that it is due to the current
> select_idle_smt() not considering the sched_domain mask.
> 
> Fix it by checking the valid domain mask in select_idle_smt().
> 
> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> ---
>  kernel/sched/fair.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..fa942c4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>  /*
>   * Scan the local SMT mask for idle CPUs.
>   */
> -static int select_idle_smt(struct task_struct *p, int target)
> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	int cpu;
>  
> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>  		return -1;
>  
>  	for_each_cpu(cpu, cpu_smt_mask(target)) {
> -		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> +		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
> +		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
>  			continue;

I don't think this is the right thing to do. What if this task has set
a cpumask that doesn't cover all the cpus in sched_domain_span(sd)?

cpu_smt_mask(target) would already be limited to sched_domain_span(sd),
so I am not sure how this can help.


-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-24 13:38 ` Srikar Dronamraju
@ 2020-08-25  2:11   ` xunlei
  2020-08-25  2:59     ` Srikar Dronamraju
  0 siblings, 1 reply; 8+ messages in thread
From: xunlei @ 2020-08-25  2:11 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

On 2020/8/24 PM9:38, Srikar Dronamraju wrote:
> * Xunlei Pang <xlpang@linux.alibaba.com> [2020-08-24 20:30:19]:
> 
>> We've met problems that occasionally tasks with full cpumask
>> (e.g. by putting it into a cpuset or setting to full affinity)
>> were migrated to our isolated cpus in production environment.
>>
>> After some analysis, we found that it is due to the current
>> select_idle_smt() not considering the sched_domain mask.
>>
>> Fix it by checking the valid domain mask in select_idle_smt().
>>
>> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
>> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
>> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
>> ---
>>  kernel/sched/fair.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 1a68a05..fa942c4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>  /*
>>   * Scan the local SMT mask for idle CPUs.
>>   */
>> -static int select_idle_smt(struct task_struct *p, int target)
>> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>>  {
>>  	int cpu;
>>  
>> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>>  		return -1;
>>  
>>  	for_each_cpu(cpu, cpu_smt_mask(target)) {
>> -		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
>> +		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
>> +		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
>>  			continue;
> 
> Don't think this is right thing to do.  What if this task had set a cpumask
> that doesn't cover all the cpus in this sched_domain_span(sd)

It doesn't matter. Without this patch, it selects an idle cpu from:
"cpu_smt_mask(target) and p->cpus_ptr"

With this patch, it selects an idle cpu from:
"cpu_smt_mask(target) and p->cpus_ptr and sched_domain_span(sd)"

> 
> cpu_smt_mask(target) would already limit to the sched_domain_span(sd) so I
> am not sure how this can help?
> 
> 

Here is an example:
CPU0 and CPU16 are a hyper-thread pair, and CPU16 is domain isolated.
So the sd_llc of CPU0 doesn't contain CPU16, while cpu_smt_mask(0) is
{0, 16}.

Then, with @target being 0, select_idle_smt() may return the isolated
(and idle) CPU16 without this patch.
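
To make the effect concrete, below is a small standalone userspace C
sketch of this example. The masks are hand-coded assumptions purely for
illustration (32 CPUs, SMT pair {0, 16}, LLC domain span 0-15, task
allowed on all CPUs); it only mimics the two cpumask checks, not the
real scheduler code:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Test whether @cpu is set in a 32-bit toy cpumask. */
static bool test_cpu(uint32_t mask, int cpu)
{
	return mask & (1u << cpu);
}

int main(void)
{
	uint32_t smt_mask = (1u << 0) | (1u << 16); /* siblings of target CPU0 */
	uint32_t sd_span  = 0x0000ffff;             /* LLC domain: CPUs 0-15   */
	uint32_t cpus_ptr = 0xffffffff;             /* task allowed everywhere */

	for (int cpu = 0; cpu < 32; cpu++) {
		if (!test_cpu(smt_mask, cpu))
			continue;
		/* Old check: task affinity only; new check: affinity AND domain span. */
		bool old_ok = test_cpu(cpus_ptr, cpu);
		bool new_ok = old_ok && test_cpu(sd_span, cpu);
		printf("cpu %2d: old check %s, new check %s\n", cpu,
		       old_ok ? "accepts" : "skips",
		       new_ok ? "accepts" : "skips");
	}
	return 0;
}

Running it shows that CPU0 passes both checks, while the isolated CPU16
passes the old check but is skipped by the new one.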


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-25  2:11   ` xunlei
@ 2020-08-25  2:59     ` Srikar Dronamraju
  0 siblings, 0 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2020-08-25  2:59 UTC (permalink / raw)
  To: xunlei
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

* xunlei <xlpang@linux.alibaba.com> [2020-08-25 10:11:24]:

> On 2020/8/24 PM9:38, Srikar Dronamraju wrote:
> > * Xunlei Pang <xlpang@linux.alibaba.com> [2020-08-24 20:30:19]:
> > 
> >> We've met problems that occasionally tasks with full cpumask
> >> (e.g. by putting it into a cpuset or setting to full affinity)
> >> were migrated to our isolated cpus in production environment.
> >>
> >> After some analysis, we found that it is due to the current
> >> select_idle_smt() not considering the sched_domain mask.
> >>
> >> Fix it by checking the valid domain mask in select_idle_smt().
> >>
> >> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
> >> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
> >> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> >> ---
> >>  kernel/sched/fair.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 1a68a05..fa942c4 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
> >>  /*
> >>   * Scan the local SMT mask for idle CPUs.
> >>   */
> >> -static int select_idle_smt(struct task_struct *p, int target)
> >> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
> >>  {
> >>  	int cpu;
> >>  
> >> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
> >>  		return -1;
> >>  
> >>  	for_each_cpu(cpu, cpu_smt_mask(target)) {
> >> -		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> >> +		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
> >> +		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
> >>  			continue;
> > 
> > Don't think this is right thing to do.  What if this task had set a cpumask
> > that doesn't cover all the cpus in this sched_domain_span(sd)

Ah, right, I missed the 'or' part.
> 
> It doesn't matter, without this patch, it selects an idle cpu from:
> "cpu_smt_mask(target) and p->cpus_ptr"
> 
> with this patch, it selects an idle cpu from:
> "cpu_smt_mask(target) and p->cpus_ptr and sched_domain_span(sd)"
> 
> > 
> > cpu_smt_mask(target) would already limit to the sched_domain_span(sd) so I
> > am not sure how this can help?
> > 
> > 
> 
> Here is an example:
> CPU0 and CPU16 are hyper-thread pair, CPU16 is domain isolated. So its
> sd_llc doesn't contain CPU16, and cpu_smt_mask(0) is 0 and 16.
> 
> Then we have @target is 0, select_idle_smt() may return the isolated(and
> idle) CPU16 without this patch.

Okay.

-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-24 12:30 [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain Xunlei Pang
  2020-08-24 13:38 ` Srikar Dronamraju
@ 2020-08-25  6:37 ` Jiang Biao
  2020-08-25  9:27   ` xunlei
  2020-08-28  2:53 ` Xunlei Pang
  2 siblings, 1 reply; 8+ messages in thread
From: Jiang Biao @ 2020-08-25  6:37 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

On Mon, 24 Aug 2020 at 20:31, Xunlei Pang <xlpang@linux.alibaba.com> wrote:
>
> We've met problems that occasionally tasks with full cpumask
> (e.g. by putting it into a cpuset or setting to full affinity)
> were migrated to our isolated cpus in production environment.
>
> After some analysis, we found that it is due to the current
> select_idle_smt() not considering the sched_domain mask.
>
> Fix it by checking the valid domain mask in select_idle_smt().
>
> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> ---
>  kernel/sched/fair.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..fa942c4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>  /*
>   * Scan the local SMT mask for idle CPUs.
>   */
> -static int select_idle_smt(struct task_struct *p, int target)
> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>         int cpu;
>
> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>                 return -1;
>
>         for_each_cpu(cpu, cpu_smt_mask(target)) {
> -               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> +               if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
> +                   !cpumask_test_cpu(cpu, sched_domain_span(sd)))
Maybe the following change would be better :)
for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd))
It keeps a similar style to select_idle_core()/select_idle_cpu(), and
could reduce the number of loop iterations.

Just an option.
Reviewed-by: Jiang Biao <benbjiang@tencent.com>
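
As a rough sketch, the loop with for_each_cpu_and() could look as
follows. This is only an illustration of the suggestion, not code
posted in this thread; the surrounding function body is assumed from
the patch above:

static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
{
	int cpu;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	/* Only walk SMT siblings that also lie inside the LLC domain span. */
	for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd)) {
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			continue;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}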


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-25  6:37 ` Jiang Biao
@ 2020-08-25  9:27   ` xunlei
  2020-08-25 12:46     ` Jiang Biao
  0 siblings, 1 reply; 8+ messages in thread
From: xunlei @ 2020-08-25  9:27 UTC (permalink / raw)
  To: Jiang Biao
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

On 2020/8/25 PM2:37, Jiang Biao wrote:
> On Mon, 24 Aug 2020 at 20:31, Xunlei Pang <xlpang@linux.alibaba.com> wrote:
>>
>> We've met problems that occasionally tasks with full cpumask
>> (e.g. by putting it into a cpuset or setting to full affinity)
>> were migrated to our isolated cpus in production environment.
>>
>> After some analysis, we found that it is due to the current
>> select_idle_smt() not considering the sched_domain mask.
>>
>> Fix it by checking the valid domain mask in select_idle_smt().
>>
>> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
>> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
>> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
>> ---
>>  kernel/sched/fair.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 1a68a05..fa942c4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>  /*
>>   * Scan the local SMT mask for idle CPUs.
>>   */
>> -static int select_idle_smt(struct task_struct *p, int target)
>> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>>  {
>>         int cpu;
>>
>> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>>                 return -1;
>>
>>         for_each_cpu(cpu, cpu_smt_mask(target)) {
>> -               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
>> +               if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
>> +                   !cpumask_test_cpu(cpu, sched_domain_span(sd)))
> Maybe the following change could be better, :)
> for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd))
> keep a similar style with select_idle_core/cpu, and could reduce loops.
> 

I considered that, but given that the SMT mask is usually small, the
original code may run a bit faster?

> Just an option.
> Reviewed-by: Jiang Biao <benbjiang@tencent.com>
> 

Thanks :-)


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-25  9:27   ` xunlei
@ 2020-08-25 12:46     ` Jiang Biao
  0 siblings, 0 replies; 8+ messages in thread
From: Jiang Biao @ 2020-08-25 12:46 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli,
	Wetp Zhang, linux-kernel

On Tue, 25 Aug 2020 at 17:28, xunlei <xlpang@linux.alibaba.com> wrote:
>
On 2020/8/25 PM2:37, Jiang Biao wrote:
> > On Mon, 24 Aug 2020 at 20:31, Xunlei Pang <xlpang@linux.alibaba.com> wrote:
> >>
> >> We've met problems that occasionally tasks with full cpumask
> >> (e.g. by putting it into a cpuset or setting to full affinity)
> >> were migrated to our isolated cpus in production environment.
> >>
> >> After some analysis, we found that it is due to the current
> >> select_idle_smt() not considering the sched_domain mask.
> >>
> >> Fix it by checking the valid domain mask in select_idle_smt().
> >>
> >> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
> >> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
> >> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> >> ---
> >>  kernel/sched/fair.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 1a68a05..fa942c4 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
> >>  /*
> >>   * Scan the local SMT mask for idle CPUs.
> >>   */
> >> -static int select_idle_smt(struct task_struct *p, int target)
> >> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
> >>  {
> >>         int cpu;
> >>
> >> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
> >>                 return -1;
> >>
> >>         for_each_cpu(cpu, cpu_smt_mask(target)) {
> >> -               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> >> +               if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
> >> +                   !cpumask_test_cpu(cpu, sched_domain_span(sd)))
> > Maybe the following change could be better, :)
> > for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd))
> > keep a similar style with select_idle_core/cpu, and could reduce loops.
> >
>
> I thought that, but given that smt mask is usually small, the original
> code may run a bit faster?
Not sure. :)
It's OK for me.

Regards,
Jiang


* Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain
  2020-08-24 12:30 [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain Xunlei Pang
  2020-08-24 13:38 ` Srikar Dronamraju
  2020-08-25  6:37 ` Jiang Biao
@ 2020-08-28  2:53 ` Xunlei Pang
  2 siblings, 0 replies; 8+ messages in thread
From: Xunlei Pang @ 2020-08-28  2:53 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Juri Lelli, Wetp Zhang
  Cc: linux-kernel

On 2020/8/24 PM8:30, Xunlei Pang wrote:
> We've met problems that occasionally tasks with full cpumask
> (e.g. by putting it into a cpuset or setting to full affinity)
> were migrated to our isolated cpus in production environment.
> 
> After some analysis, we found that it is due to the current
> select_idle_smt() not considering the sched_domain mask.
> 
> Fix it by checking the valid domain mask in select_idle_smt().
> 
> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings())
> Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> ---
>  kernel/sched/fair.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..fa942c4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>  /*
>   * Scan the local SMT mask for idle CPUs.
>   */
> -static int select_idle_smt(struct task_struct *p, int target)
> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	int cpu;
>  
> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>  		return -1;
>  
>  	for_each_cpu(cpu, cpu_smt_mask(target)) {
> -		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> +		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
> +		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
>  			continue;
>  		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
>  			return cpu;
> @@ -6099,7 +6100,7 @@ static inline int select_idle_core(struct task_struct *p, struct sched_domain *s
>  	return -1;
>  }
>  
> -static inline int select_idle_smt(struct task_struct *p, int target)
> +static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	return -1;
>  }
> @@ -6274,7 +6275,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	if ((unsigned)i < nr_cpumask_bits)
>  		return i;
>  
> -	i = select_idle_smt(p, target);
> +	i = select_idle_smt(p, sd, target);
>  	if ((unsigned)i < nr_cpumask_bits)
>  		return i;
>  
> 

Hi Peter, any other comments?

