* [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
From: Juergen Gross @ 2020-03-03 12:27 UTC
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

When onlining a cpu, cpupool_cpu_add() checks whether all siblings of
the new cpu are free in order to decide whether to add it to cpupool0.
In case the added cpu is not the last sibling to be onlined, this test
is wrong, as it only checks that all currently online siblings are
free. The test needs to also check that the number of siblings has
reached the scheduling granularity of cpupool0.
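
For illustration, a minimal standalone model of the failing scenario
(plain C bitmasks standing in for Xen's cpumasks; not Xen code):

    #include <stdio.h>

    int main(void)
    {
        const unsigned int granularity = 2; /* core scheduling: 2 threads */
        unsigned int free_cpus = 0;         /* bitmask of free cpus */

        /* cpu 0 comes online; sibling cpu 1 is still offline, so the
           sibling mask covers only the online sibling: cpu 0 alone. */
        unsigned int cpus = 1u << 0;
        free_cpus |= cpus;

        int subset = (cpus & ~free_cpus) == 0;               /* old test */
        int full = __builtin_popcount(cpus) >= granularity;  /* new part */

        printf("old test: %d (passes too early)\n", subset);
        printf("new test: %d (waits for the full core)\n", subset && full);
        return 0;
    }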

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 9f70c7ec17..4a67df8584 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
     get_sched_res(cpu)->cpupool = NULL;
 
     cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
-    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
+    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
+         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
         ret = cpupool_assign_cpu_locked(cpupool0, cpu);
 
     rcu_read_unlock(&sched_res_rculock);
-- 
2.16.4

* Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
From: Jan Beulich @ 2020-03-03 13:31 UTC
  To: Juergen Gross; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 03.03.2020 13:27, Juergen Gross wrote:
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>      get_sched_res(cpu)->cpupool = NULL;
>  
>      cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )

Why >= , not == ? And is the other part of the condition needed?
Isn't this rather a condition that could be ASSERT()ed, as CPUs
shouldn't move out of the "free" set before reaching the
granularity?
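
Something along these lines, perhaps (an untested sketch only, reusing
the helpers visible in the patch context):

    cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
    if ( cpumask_weight(cpus) == cpupool_get_granularity(cpupool0) )
    {
        /* A free cpu shouldn't leave the free set below the granularity. */
        ASSERT(cpumask_subset(cpus, &cpupool_free_cpus));
        ret = cpupool_assign_cpu_locked(cpupool0, cpu);
    }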

Jan

>          ret = cpupool_assign_cpu_locked(cpupool0, cpu);
>  
>      rcu_read_unlock(&sched_res_rculock);

* Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
From: Jürgen Groß @ 2020-03-03 16:04 UTC
  To: Jan Beulich; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 03.03.20 14:31, Jan Beulich wrote:
> On 03.03.2020 13:27, Juergen Gross wrote:
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>       get_sched_res(cpu)->cpupool = NULL;
>>   
>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
> 
> Why >= , not == ? And is the other part of the condition needed?

I can switch to ==.

> Isn't this rather a condition that could be ASSERT()ed, as CPUs
> shouldn't move out of the "free" set before reaching the
> granularity?

Probably, yes. I'll give it some testing and make that change in the
(expected) case of success.


Juergen


* Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
From: Jürgen Groß @ 2020-03-10  8:16 UTC
  To: Jan Beulich; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 03.03.20 17:04, Jürgen Groß wrote:
> On 03.03.20 14:31, Jan Beulich wrote:
>> On 03.03.2020 13:27, Juergen Gross wrote:
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>>       get_sched_res(cpu)->cpupool = NULL;
>>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
>>
>> Why >= , not == ? And is the other part of the condition needed?
> 
> I can switch to ==.
> 
>> Isn't this rather a condition that could be ASSERT()ed, as CPUs
>> shouldn't move out of the "free" set before reaching the
>> granularity?
> 
> Probably, yes. I'll give it some testing and make that change in the
> (expected) case of success.

Thinking more about it, I'm inclined to keep testing both conditions.
If we ever support cpupools with different granularities, we'll need
to test for all cpus being free, in case the other sibling has already
been moved to a cpupool with gran=1.
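
Concretely (a hypothetical layout, assuming per-pool granularities were
supported): cpupool0 has gran=2, another pool has gran=1, and cpus 0
and 1 are siblings, with cpu 1 onlined earlier and already moved to the
gran=1 pool. When cpu 0 comes online:

    cpus = {0, 1}  ->  cpumask_weight(cpus) == 2 == granularity,
    but cpu 1 is no longer in cpupool_free_cpus, so only the
    cpumask_subset() check prevents wrongly adding cpu 0 to cpupool0.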


Juergen


* Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active
From: Jan Beulich @ 2020-03-10  9:43 UTC
  To: Jürgen Groß; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 10.03.2020 09:16, Jürgen Groß wrote:
> On 03.03.20 17:04, Jürgen Groß wrote:
>> On 03.03.20 14:31, Jan Beulich wrote:
>>> On 03.03.2020 13:27, Juergen Gross wrote:
>>>> --- a/xen/common/sched/cpupool.c
>>>> +++ b/xen/common/sched/cpupool.c
>>>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>>>       get_sched_res(cpu)->cpupool = NULL;
>>>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>>>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>>>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>>>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
>>>
>>> Why >= , not == ? And is the other part of the condition needed?
>>
>> I can switch to ==.
>>
>>> Isn't this rather a condition that could be ASSERT()ed, as CPUs
>>> shouldn't move out of the "free" set before reaching the
>>> granularity?
>>
>> Probably, yes. I'll give it some testing and make that change in the
>> (expected) case of success.
> 
> Thinking more about it, I'm inclined to keep testing both conditions.
> If we ever support cpupools with different granularities, we'll need
> to test for all cpus being free, in case the other sibling has already
> been moved to a cpupool with gran=1.

Ah, yes, makes sense.

Jan

