xen-devel.lists.xenproject.org archive mirror
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@arm.com>
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	andrii.anisov@gmail.com,
	xen-devel <xen-devel@lists.xenproject.org>,
	"andrii_anisov@epam.com" <andrii_anisov@epam.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall
Date: Thu, 13 Jun 2019 07:40:00 -0600	[thread overview]
Message-ID: <5D0252300200007800237E93@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <f5b28793-5cc4-0f83-d571-af87d75e38c0@arm.com>

>>> On 13.06.19 at 15:14, <julien.grall@arm.com> wrote:
> On 13/06/2019 13:58, Jan Beulich wrote:
>>>>> On 13.06.19 at 14:48, <julien.grall@arm.com> wrote:
>>> On 13/06/2019 13:41, Jan Beulich wrote:
>>>>>>> On 13.06.19 at 14:32, <andrii.anisov@gmail.com> wrote:
>>>>> On 11.06.19 12:10, Jan Beulich wrote:
>>>>>>>> At the very least such loops want a cpu_relax() in their bodies.
>>>>>>>> But this being on a hypercall path - are there theoretical guarantees
>>>>>>>> that a guest can't abuse this to lock up a CPU?
>>>>>>> Hmmm, I suggested this but it looks like a guest may call the hypercall
>>>>>>> multiple times from different vCPUs. So this could be a way to delay work on the CPU.
>>>>>>>
>>>>>>> I wanted to make the context switch mostly lockless and therefore avoid
>>>>>>> introducing a spinlock.
>>>>>>
>>>>>> Well, constructs like the above are trying to mimic a spinlock
>>>>>> without actually using a spinlock. There are extremely rare
>>>>>> situations in which this may indeed be warranted, but here it
>>>>>> falls in the common "makes things worse overall" bucket, I
>>>>>> think. To not unduly penalize the actual update paths, I think
>>>>>> using a r/w lock would be appropriate here.
>>>>>
>>>>> So what is the conclusion here? Should we go with trylock and
>>>>> hypercall_create_continuation() in order to avoid locking but still not
>>>>> return failure to the guest?
>>>>
>>>> I'm not convinced a "trylock" approach is needed - that's
>>>> something Julien suggested.
>>>
>>> I think the trylock in the context switch is a must. Otherwise you would delay
>>> the context switch if the information gets updated.
>> 
>> Delay in what way? I.e. how would this be an issue other than for
>> the guest itself (which shouldn't be constantly updating the
>> address for the region)?
> 
> Why would it only be an issue for the guest itself? Any wait on a lock in Xen 
> implies that you can't schedule another vCPU, as we are not preemptible.

For one, I initially (wrongly) understood you wanted the trylock in the
hypercall handler. And then, for the context switch, wasting the time slice
of the target vCPU (i.e. the one being switched in) is not an issue here. Of
course if there's a chance that acquiring the lock could take more than a
full time slice, then yes, some try-lock-ery may be needed.

However, ...

>>> Regarding the need of the lock, I still can't see how you can make it safe
>>> without it, as you may have concurrent calls.
>>>
>>> Feel free to suggest a way.
>> 
>> Well, if none can be found, then fine. I don't have the time or interest
>> here to try and think about a lockless approach; it just doesn't _feel_
>> like this ought to strictly require use of a lock. This gut feeling of mine
>> may well be wrong.
> 
> I am not asking you to spend a lot of time on it. But if you have a gut feeling 
> this can be done, then a little help would be extremely useful...

... I thought I had already outlined a model: Allow cross-vCPU updates
only while the target vCPU is still offline. Once online, a vCPU can only
itself update its runstate area address. I think you can get away
without any locks in this case; there may be a corner case with a vCPU
being onlined right at that point in time, so a stricter condition may be
needed (like "only one online vCPU" instead of "the target vCPU
is offline").

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


Thread overview: 51+ messages
2019-05-24 18:12 [PATCH RFC 2] [DO NOT APPLY] introduce VCPUOP_register_runstate_phys_memory_area hypercall Andrii Anisov
2019-05-24 18:12 ` [Xen-devel] " Andrii Anisov
2019-05-24 18:12 ` [PATCH v3] Introduce runstate area registration with phys address Andrii Anisov
2019-05-24 18:12   ` [Xen-devel] " Andrii Anisov
2019-05-24 18:12 ` [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall Andrii Anisov
2019-05-24 18:12   ` [Xen-devel] " Andrii Anisov
2019-06-07 14:23   ` Jan Beulich
2019-06-10 11:44     ` Julien Grall
2019-06-11  9:10       ` Jan Beulich
2019-06-11 10:22         ` Andrii Anisov
2019-06-11 12:12           ` Julien Grall
2019-06-11 12:26             ` Andrii Anisov
2019-06-11 12:32               ` Julien Grall
2019-06-11 12:40                 ` Andrii Anisov
2019-06-13 12:21           ` Andrii Anisov
2019-06-13 12:39             ` Jan Beulich
2019-06-13 12:32         ` Andrii Anisov
2019-06-13 12:41           ` Jan Beulich
2019-06-13 12:48             ` Julien Grall
2019-06-13 12:58               ` Jan Beulich
2019-06-13 13:14                 ` Julien Grall
2019-06-13 13:40                   ` Jan Beulich [this message]
2019-06-13 14:41                     ` Julien Grall
2019-06-14 14:36                       ` Andrii Anisov
2019-06-14 14:39                         ` Julien Grall
2019-06-14 15:11                           ` Andrii Anisov
2019-06-14 15:24                             ` Julien Grall
2019-06-14 16:11                               ` Andrii Anisov
2019-06-14 16:20                                 ` Julien Grall
2019-06-14 16:25                                   ` Andrii Anisov
2019-06-17  6:27                                     ` Jan Beulich
2019-06-14 15:42                             ` Jan Beulich
2019-06-14 16:23                               ` Andrii Anisov
2019-06-17  6:28                                 ` Jan Beulich
2019-06-18 15:32                                   ` Andrii Anisov
2019-06-18 15:44                                     ` Jan Beulich
2019-06-11 16:09     ` Andrii Anisov
2019-06-12  7:27       ` Jan Beulich
2019-06-13 12:17         ` Andrii Anisov
2019-06-13 12:36           ` Jan Beulich
2019-06-11 16:13     ` Andrii Anisov
2019-05-24 18:12 ` [PATCH RFC 1] [DO NOT APPLY] " Andrii Anisov
2019-05-24 18:12   ` [Xen-devel] " Andrii Anisov
2019-05-28  8:59 ` [PATCH RFC 2] " Julien Grall
2019-05-28  8:59   ` [Xen-devel] " Julien Grall
2019-05-28  9:17   ` Andrii Anisov
2019-05-28  9:17     ` [Xen-devel] " Andrii Anisov
2019-05-28  9:23     ` Julien Grall
2019-05-28  9:23       ` [Xen-devel] " Julien Grall
2019-05-28  9:36       ` Andrii Anisov
2019-05-28  9:36         ` [Xen-devel] " Andrii Anisov
