From: "Jürgen Groß" <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling
Date: Fri, 8 May 2020 07:54:05 +0200	[thread overview]
Message-ID: <dac6bdf3-17a1-aeea-f14b-7154222589b4@suse.com> (raw)
In-Reply-To: <7bdf9bd021ff4bd1131a8a41f42b37d6559f600f.camel@suse.com>

On 07.05.20 20:34, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>> With RCU barriers moved from tasklets to normal RCU processing, cpu
>> offlining in core scheduling might deadlock due to the cpu
>> synchronization required concurrently by RCU processing and core
>> scheduling.
>>
>> Fix that by bailing out of the core scheduling synchronization in
>> case of pending RCU work. Additionally, the RCU softirq now needs a
>> higher priority than the scheduling softirqs in order to do RCU
>> processing before entering the scheduler again, as bailing out of
>> the core scheduling synchronization requires raising another softirq,
>> SCHED_SLAVE, which would otherwise bypass RCU processing again.
>>
>> Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> In general, I'm fine with this patch and it can have my:
> 
> Acked-by: Dario Faggioli <dfaggioli@suse.com>
> 
> I'd ask for one thing, but that doesn't affect the ack, as it's not
> "my" code. :-)
> 
>> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
>> index b4724f5c8b..1f6c4783da 100644
>> --- a/xen/include/xen/softirq.h
>> +++ b/xen/include/xen/softirq.h
>> @@ -4,10 +4,10 @@
>>   /* Low-latency softirqs come first in the following list. */
>>   enum {
>>       TIMER_SOFTIRQ = 0,
>> +    RCU_SOFTIRQ,
>>       SCHED_SLAVE_SOFTIRQ,
>>       SCHEDULE_SOFTIRQ,
>>       NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
>> -    RCU_SOFTIRQ,
>>       TASKLET_SOFTIRQ,
>>       NR_COMMON_SOFTIRQS
>>   };
>>
> So, until now, it was kind of intuitive (at least, it was to me :-) )
> that we want TIMER_SOFTIRQ first, and the SCHEDULE one right after
> it. And the comment above the enum ("Low-latency softirqs come first
> in the following list"), although brief, is effective.
> 
> With the introduction of SCHED_SLAVE, things became slightly more
> complex, but it still is not too much of a stretch to figure out that
> we want it above SCHEDULE, and why.
> 
> Now that we're moving RCU from (almost) the very bottom to up here, I
> think we need some more info, there in the code. Sure all the bits and
> pieces are there in the changelogs, but I think it would be rather
> helpful to have them easily available to people trying to understand or
> modifying this code, e.g., with a comment.

That's reasonable.

> 
> I was also thinking that, even better than a comment, would be a
> (build-time?) BUG_ON if RCU does not have a smaller value than
> SCHED_SLAVE and SCHEDULE. Not here, of course, but maybe close to
> some piece of code that relies on this assumption. Something that,
> if I tomorrow put the SCHED* ones on top again, would catch my
> attention and tell me that I either take care of that code path too,
> or I can't do it.
> 
> However, I'm not sure whether, e.g., the other hunk of this patch
> would be a suitable place for something like this. And I can't, off
> the top of my head, think of a really good place for it. Therefore,
> I'm "only" asking for the comment... but if you (or others) have
> ideas, that'd be cool. :-)

I think the other hunk is exactly where the BUILD_BUG_ON() should be.
And this is a perfect place for the comment, too, as its placement will
explain the context very well.
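For concreteness, a sketch of what the comment plus compile-time check could look like (the comment wording is illustrative only; BUILD_BUG_ON is Xen's macro, approximated here with C11 _Static_assert so the snippet compiles standalone):

```c
/* Low-latency softirqs come first in the following list.
 *
 * RCU_SOFTIRQ must be of higher priority (lower value) than the
 * scheduling softirqs: bailing out of core scheduling synchronization
 * due to pending RCU work raises SCHED_SLAVE_SOFTIRQ, and RCU
 * processing has to happen before the scheduler is entered again. */
enum {
    TIMER_SOFTIRQ = 0,
    RCU_SOFTIRQ,
    SCHED_SLAVE_SOFTIRQ,
    SCHEDULE_SOFTIRQ,
    NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
    TASKLET_SOFTIRQ,
    NR_COMMON_SOFTIRQS
};

/* In Xen this would be a BUILD_BUG_ON() next to the code relying on
 * the ordering; _Static_assert keeps the sketch self-contained. */
_Static_assert(RCU_SOFTIRQ < SCHED_SLAVE_SOFTIRQ &&
               SCHED_SLAVE_SOFTIRQ < SCHEDULE_SOFTIRQ,
               "RCU softirq must be handled before the scheduling softirqs");
```

Anyone reordering the enum later would then get a build failure pointing straight at the code path that depends on this assumption.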


Juergen


Thread overview: 10+ messages
2020-04-30 15:15 [PATCH 0/3] xen: Fix some bugs in scheduling Juergen Gross
2020-04-30 15:15 ` [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus in core scheduling Juergen Gross
2020-05-07 18:34   ` Dario Faggioli
2020-05-08  5:54     ` Jürgen Groß [this message]
2020-04-30 15:15 ` [PATCH 2/3] xen/sched: fix theoretical races accessing vcpu->dirty_cpu Juergen Gross
2020-04-30 15:19   ` Jürgen Groß
2020-04-30 15:15 ` [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool Juergen Gross
2020-05-07 18:36   ` Dario Faggioli
2020-05-08  8:19     ` Jan Beulich
2020-05-08  8:29       ` Jürgen Groß
