* [PATCH 0/3] xen: Fix some bugs in scheduling
@ 2020-04-30 15:15 Juergen Gross
  2020-04-30 15:15 ` [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling Juergen Gross
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Juergen Gross @ 2020-04-30 15:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Dario Faggioli,
	Jan Beulich, Roger Pau Monné

Some bugs I found while trying to track down a problem with cpu on-/offlining
in core scheduling mode.

Patches 1 and 3 fix observed problems, while patch 2 addresses a more
theoretical issue.

Juergen Gross (3):
  xen/sched: allow rcu work to happen when syncing cpus in core
    scheduling
  xen/sched: fix theoretical races accessing vcpu->dirty_cpu
  xen/cpupool: fix removing cpu from a cpupool

 xen/arch/x86/domain.c      | 14 ++++++++++----
 xen/common/sched/core.c    | 10 +++++++---
 xen/common/sched/cpupool.c |  3 +++
 xen/include/xen/sched.h    |  2 +-
 xen/include/xen/softirq.h  |  2 +-
 5 files changed, 22 insertions(+), 9 deletions(-)

-- 
2.16.4




* [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
  2020-04-30 15:15 [PATCH 0/3] xen: Fix some bugs in scheduling Juergen Gross
@ 2020-04-30 15:15 ` Juergen Gross
  2020-05-07 18:34   ` Dario Faggioli
  2020-04-30 15:15 ` [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu Juergen Gross
  2020-04-30 15:15 ` [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool Juergen Gross
  2 siblings, 1 reply; 10+ messages in thread
From: Juergen Gross @ 2020-04-30 15:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Dario Faggioli,
	Jan Beulich

With RCU barriers moved from tasklets to normal RCU processing, cpu
offlining in core scheduling mode might deadlock, as RCU processing and
core scheduling both require cpu synchronization and can now contend
with each other.

Fix that by bailing out of the core scheduling synchronization in case
of pending RCU work. Additionally, the RCU softirq now needs to have a
higher priority than the scheduling softirqs in order to do RCU
processing before entering the scheduler again, as bailing out of the
core scheduling synchronization requires raising another softirq,
SCHED_SLAVE, which would otherwise bypass RCU processing again.

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c   | 10 +++++++---
 xen/include/xen/softirq.h |  2 +-
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d94b95285f..a099e37b0f 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2457,13 +2457,17 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             v = unit2vcpu_cpu(prev, cpu);
         }
         /*
-         * Coming from idle might need to do tasklet work.
+         * Check for any work to be done which might need cpu synchronization.
+         * This is either pending RCU work, or tasklet work when coming from
+         * idle.
          * In order to avoid deadlocks we can't do that here, but have to
-         * continue the idle loop.
+         * schedule the previous vcpu again, which will lead to the desired
+         * processing to be done.
          * Undo the rendezvous_in_cnt decrement and schedule another call of
          * sched_slave().
          */
-        if ( is_idle_unit(prev) && sched_tasklet_check_cpu(cpu) )
+        if ( rcu_pending(cpu) ||
+             (is_idle_unit(prev) && sched_tasklet_check_cpu(cpu)) )
         {
             struct vcpu *vprev = current;
 
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..1f6c4783da 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -4,10 +4,10 @@
 /* Low-latency softirqs come first in the following list. */
 enum {
     TIMER_SOFTIRQ = 0,
+    RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
-    RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
-- 
2.16.4
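
For context on the enum reordering above: Xen services pending softirqs
starting from the lowest-numbered one, so a smaller enum value means a
higher priority. A simplified sketch of the dispatch loop, paraphrased
from xen/common/softirq.c (exact helper names and details may differ),
shows why RCU_SOFTIRQ now has to sit above the scheduling softirqs:

    /* Simplified softirq dispatch (sketch, not verbatim Xen code). */
    static void do_softirq_sketch(unsigned int cpu)
    {
        for ( ; ; )
        {
            unsigned long pending = softirq_pending(cpu);
            unsigned int i;

            if ( !pending )
                break;

            /*
             * The lowest-numbered pending softirq is serviced first, so
             * with RCU_SOFTIRQ placed above SCHED_SLAVE_SOFTIRQ and
             * SCHEDULE_SOFTIRQ, pending RCU work is processed before the
             * scheduler is entered again via the bail-out path.
             */
            i = find_first_set_bit(pending);
            clear_bit(i, &softirq_pending(cpu));
            (*softirq_handlers[i])();
        }
    }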




* [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu
  2020-04-30 15:15 [PATCH 0/3] xen: Fix some bugs in scheduling Juergen Gross
  2020-04-30 15:15 ` [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling Juergen Gross
@ 2020-04-30 15:15 ` Juergen Gross
  2020-04-30 15:19   ` Jürgen Groß
  2020-04-30 15:15 ` [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool Juergen Gross
  2 siblings, 1 reply; 10+ messages in thread
From: Juergen Gross @ 2020-04-30 15:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich,
	Roger Pau Monné

The dirty_cpu field of struct vcpu denotes which cpu still holds data
of a vcpu. All accesses to this field should be atomic, as the vcpu in
question could be running at the time and the field is accessed without
any lock held in most cases.

There are some instances where accesses are not done atomically, and,
even worse, where multiple reads are done when a single one would
suffice.

Correct that in order to avoid potential problems.

Add some assertions to verify dirty_cpu is handled properly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/domain.c   | 14 ++++++++++----
 xen/include/xen/sched.h |  2 +-
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4428190d5..f0579a56d1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1769,6 +1769,7 @@ static void __context_switch(void)
 
     if ( !is_idle_domain(pd) )
     {
+        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
         pd->arch.ctxt_switch->from(p);
@@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
     const struct domain *prevd = prev->domain, *nextd = next->domain;
-    unsigned int dirty_cpu = next->dirty_cpu;
+    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
 
     ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
@@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
+        ASSERT(read_atomic(&next->dirty_cpu) == VCPU_CPU_CLEAN);
     }
 
     _update_runstate_area(prev);
@@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
 
 void sync_vcpu_execstate(struct vcpu *v)
 {
-    if ( v->dirty_cpu == smp_processor_id() )
+    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
+
+    if ( dirty_cpu == smp_processor_id() )
         sync_local_execstate();
-    else if ( vcpu_cpu_dirty(v) )
+    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
-        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
+        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
+    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
+           dirty_cpu == VCPU_CPU_CLEAN);
 }
 
 static int relinquish_memory(
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 195e7ee583..008d3c8861 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
 
 static inline bool vcpu_cpu_dirty(const struct vcpu *v)
 {
-    return is_vcpu_dirty_cpu(v->dirty_cpu);
+    return is_vcpu_dirty_cpu(read_atomic(&v->dirty_cpu));
 }
 
 void vcpu_block(void);
-- 
2.16.4
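
Restating the gist of the sync_vcpu_execstate() change above as a
before/after sketch (the "before" part is the racy pattern being
removed, shown here only for contrast):

    /* Before: dirty_cpu is read several times and may change in between. */
    if ( v->dirty_cpu == smp_processor_id() )
        sync_local_execstate();
    else if ( vcpu_cpu_dirty(v) )                                /* 2nd read */
        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);  /* 3rd read */

    /* After: take one atomic snapshot and work only on the local copy. */
    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);

    if ( dirty_cpu == smp_processor_id() )
        sync_local_execstate();
    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);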




* [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
  2020-04-30 15:15 [PATCH 0/3] xen: Fix some bugs in scheduling Juergen Gross
  2020-04-30 15:15 ` [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling Juergen Gross
  2020-04-30 15:15 ` [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu Juergen Gross
@ 2020-04-30 15:15 ` Juergen Gross
  2020-05-07 18:36   ` Dario Faggioli
  2 siblings, 1 reply; 10+ messages in thread
From: Juergen Gross @ 2020-04-30 15:15 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
cpus to/from cpupools") introduced a regression when trying to remove
an offline cpu from a cpupool, as the system would crash in this
situation.

Fix that by testing whether the cpu is online.

Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving cpus to/from cpupools")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d40345b585..de9e25af84 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -520,6 +520,9 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
                       c->cpupool_id, cpu);
 
+    if ( !cpu_online(cpu) )
+        return -EINVAL;
+
     master_cpu = sched_get_resource_cpu(cpu);
     ret = cpupool_unassign_cpu_start(c, master_cpu);
     if ( ret )
-- 
2.16.4




* Re: [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu
  2020-04-30 15:15 ` [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu Juergen Gross
@ 2020-04-30 15:19   ` Jürgen Groß
  0 siblings, 0 replies; 10+ messages in thread
From: Jürgen Groß @ 2020-04-30 15:19 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Jan Beulich, Roger Pau Monné

On 30.04.20 17:15, Juergen Gross wrote:
> The dirty_cpu field of struct vcpu denotes which cpu still holds data
> of a vcpu. All accesses to this field should be atomic in case the
> vcpu could just be running, as it is accessed without any lock held
> in most cases.
> 
> There are some instances where accesses are not atomically done, and
> even worse where multiple accesses are done when a single one would
> be mandated.
> 
> Correct that in order to avoid potential problems.
> 
> Add some assertions to verify dirty_cpu is handled properly.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Please ignore this one, just realized it doesn't build for ARM.


Juergen



* Re: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
  2020-04-30 15:15 ` [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling Juergen Gross
@ 2020-05-07 18:34   ` Dario Faggioli
  2020-05-08  5:54     ` Jürgen Groß
  0 siblings, 1 reply; 10+ messages in thread
From: Dario Faggioli @ 2020-05-07 18:34 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Jan Beulich


On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
> With RCU barriers moved from tasklets to normal RCU processing cpu
> offlining in core scheduling might deadlock due to cpu
> synchronization
> required by RCU processing and core scheduling concurrently.
> 
> Fix that by bailing out from core scheduling synchronization in case
> of pending RCU work. Additionally the RCU softirq is now required to
> be of higher priority than the scheduling softirqs in order to do
> RCU processing before entering the scheduler again, as bailing out
> from
> the core scheduling synchronization requires to raise another softirq
> SCHED_SLAVE, which would bypass RCU processing again.
> 
> Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
In general, I'm fine with this patch and it can have my:

Acked-by: Dario Faggioli <dfaggioli@suse.com>

I'd ask for one thing, but that doesn't affect the ack, as it's not
"my" code. :-)

> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
> index b4724f5c8b..1f6c4783da 100644
> --- a/xen/include/xen/softirq.h
> +++ b/xen/include/xen/softirq.h
> @@ -4,10 +4,10 @@
>  /* Low-latency softirqs come first in the following list. */
>  enum {
>      TIMER_SOFTIRQ = 0,
> +    RCU_SOFTIRQ,
>      SCHED_SLAVE_SOFTIRQ,
>      SCHEDULE_SOFTIRQ,
>      NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
> -    RCU_SOFTIRQ,
>      TASKLET_SOFTIRQ,
>      NR_COMMON_SOFTIRQS
>  };
>
So, until now, it was kind of intuitive (at least, it was to me :-) )
that we want TIMER_SOFTIRQ first, and the SCHEDULE one right after it.
And the comment above the enum ("Low-latency softirqs come first in the
following list"), although brief, is effective.

With the introduction of SCHED_SLAVE, things became slightly more
complex, but it still is not too far a reach to figure out the fact
that we want it to be above SCHEDULE, and the reasons for that.

Now that we're moving RCU from (almost) the very bottom up to here, I
think we need some more info there in the code. Sure, all the bits and
pieces are there in the changelogs, but I think it would be rather
helpful to have them easily available to people trying to understand or
modify this code, e.g., with a comment.

I was also thinking that, even better than a comment, would be a
(build-time?) BUG_ON firing if RCU does not have a smaller value than
SCHED_SLAVE and SCHEDULE.
Not here, of course, but maybe close to some piece of code that relies
on this assumption. Something that, if I tomorrow put the SCHED* ones
on top again, would catch my attention and tell me that I either take
care of that code path too, or I can't do it.

However, I'm not sure whether, e.g., the other hunk of this patch would
be a suitable place for something like this. And I can't, off the top
of my head, think of a really good place to put it.
Therefore, I'm "only" asking for the comment... but if you (or others)
have ideas, that'd be cool. :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




* Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
  2020-04-30 15:15 ` [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool Juergen Gross
@ 2020-05-07 18:36   ` Dario Faggioli
  2020-05-08  8:19     ` Jan Beulich
  0 siblings, 1 reply; 10+ messages in thread
From: Dario Faggioli @ 2020-05-07 18:36 UTC (permalink / raw)
  To: Juergen Gross, xen-devel; +Cc: George Dunlap


On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
> cpus to/from cpupools") introduced a regression when trying to remove
> an offline cpu from a cpupool, as the system would crash in this
> situation.
> 
> Fix that by testing the cpu to be online.
> 
> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
> cpus to/from cpupools")
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




* Re: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
  2020-05-07 18:34   ` Dario Faggioli
@ 2020-05-08  5:54     ` Jürgen Groß
  0 siblings, 0 replies; 10+ messages in thread
From: Jürgen Groß @ 2020-05-08  5:54 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, Jan Beulich

On 07.05.20 20:34, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>> With RCU barriers moved from tasklets to normal RCU processing cpu
>> offlining in core scheduling might deadlock due to cpu
>> synchronization
>> required by RCU processing and core scheduling concurrently.
>>
>> Fix that by bailing out from core scheduling synchronization in case
>> of pending RCU work. Additionally the RCU softirq is now required to
>> be of higher priority than the scheduling softirqs in order to do
>> RCU processing before entering the scheduler again, as bailing out
>> from
>> the core scheduling synchronization requires to raise another softirq
>> SCHED_SLAVE, which would bypass RCU processing again.
>>
>> Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> In general, I'm fine with this patch and it can have my:
> 
> Acked-by: Dario Faggioli <dfaggioli@suse.com>
> 
> I'd ask for one thing, but that doesn't affect the ack, as it's not
> "my" code. :-)
> 
>> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
>> index b4724f5c8b..1f6c4783da 100644
>> --- a/xen/include/xen/softirq.h
>> +++ b/xen/include/xen/softirq.h
>> @@ -4,10 +4,10 @@
>>   /* Low-latency softirqs come first in the following list. */
>>   enum {
>>       TIMER_SOFTIRQ = 0,
>> +    RCU_SOFTIRQ,
>>       SCHED_SLAVE_SOFTIRQ,
>>       SCHEDULE_SOFTIRQ,
>>       NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
>> -    RCU_SOFTIRQ,
>>       TASKLET_SOFTIRQ,
>>       NR_COMMON_SOFTIRQS
>>   };
>>
> So, until now, it was kind of intuitive (at least, it was to me :-) )
> that the TIMER_SOFTIRQ, we want it first, and the SCHEDULE one right
> after it. And the comment above the enum ("Low-latency softirqs come
> first in the following list"), although brief, is effective.
> 
> With the introduction of SCHED_SLAVE, things became slightly more
> complex, but it still is not too far a reach to figure out the fact
> that we want it to be above SCHEDULE, and the reasons for that.
> 
> Now that we're moving RCU from (almost) the very bottom to up here, I
> think we need some more info, there in the code. Sure all the bits and
> pieces are there in the changelogs, but I think it would be rather
> helpful to have them easily available to people trying to understand or
> modifying this code, e.g., with a comment.

That's reasonable.

> 
> I was also thinking that, even better than a comment, would be a
> (build?) BUG_ON if RCU has no smaller value than SCHED_SLAVE and SLAVE.
> Not here, of course, but maybe close to some piece of code that relies
> on this assumption. Something that, if I tomorrow put the SCHED* ones
> on top again, would catch my attention and tell me that I either take
> care of that code path too, or I can't do it.
> 
> However, I'm not sure whether, e.g., the other hunk of this patch would
> be a suitable place for something like this. And I can't, out of the
> top of my head, think of a really good place for where to put it.
> Therefore, I'm "only" asking for the comment... but if you (or others)
> have ideas, that'd be cool. :-)

I think the other hunk is exactly where the BUILD_BUG_ON() should be.
And this is a perfect place for the comment, too, as its placement will
explain the context very well.
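
For illustration, a sketch of what such a comment plus build-time check
could look like next to the rcu_pending() test in
sched_wait_rendezvous_in() (an idea sketch only, not what was actually
committed):

    /*
     * The bail-out path below raises SCHED_SLAVE_SOFTIRQ, so RCU_SOFTIRQ
     * must have a higher priority (i.e. a smaller value) than the
     * scheduling softirqs, or pending RCU work would be bypassed again.
     */
    BUILD_BUG_ON(RCU_SOFTIRQ > SCHED_SLAVE_SOFTIRQ ||
                 RCU_SOFTIRQ > SCHEDULE_SOFTIRQ);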


Juergen



* Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
  2020-05-07 18:36   ` Dario Faggioli
@ 2020-05-08  8:19     ` Jan Beulich
  2020-05-08  8:29       ` Jürgen Groß
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2020-05-08  8:19 UTC (permalink / raw)
  To: Juergen Gross; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 07.05.2020 20:36, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
>> cpus to/from cpupools") introduced a regression when trying to remove
>> an offline cpu from a cpupool, as the system would crash in this
>> situation.
>>
>> Fix that by testing the cpu to be online.
>>
>> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
>> cpus to/from cpupools")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> Acked-by: Dario Faggioli <dfaggioli@suse.com>

Jürgen,

it looks like this is independent of the earlier two patches
and hence could go in, while I understand there'll be v2 for
the earlier ones. Please confirm.

Jan



* Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
  2020-05-08  8:19     ` Jan Beulich
@ 2020-05-08  8:29       ` Jürgen Groß
  0 siblings, 0 replies; 10+ messages in thread
From: Jürgen Groß @ 2020-05-08  8:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, George Dunlap, Dario Faggioli

On 08.05.20 10:19, Jan Beulich wrote:
> On 07.05.2020 20:36, Dario Faggioli wrote:
>> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>>> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
>>> cpus to/from cpupools") introduced a regression when trying to remove
>>> an offline cpu from a cpupool, as the system would crash in this
>>> situation.
>>>
>>> Fix that by testing the cpu to be online.
>>>
>>> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
>>> cpus to/from cpupools")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>> Acked-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Jürgen,
> 
> it looks like this is independent of the earlier two patches
> and hence could go in, while I understand there'll be v2 for
> the earlier ones. Please confirm.

Confirmed.


Juergen


