From: Julien Grall <julien@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "Stefano Stabellini" <sstabellini@kernel.org>,
	"Wei Liu" <wl@xen.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Jackson" <ian.jackson@eu.citrix.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Jan Beulich" <jbeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [PATCH 1/2] xen/arm: Convert runstate address during hypcall
Date: Fri, 12 Jun 2020 20:56:02 +0100	[thread overview]
Message-ID: <68f7349e-c773-f0ef-bc77-cbf459796604@xen.org> (raw)
In-Reply-To: <131FB1AE-656D-45E7-9019-FC50F9D5CF0B@arm.com>

On 12/06/2020 15:13, Bertrand Marquis wrote:
> Hi Julien,

Hi,

>> On 12 Jun 2020, at 11:53, Julien Grall <julien@xen.org> wrote:
>> The code below suggests the hypercall will just fail if you call it from a different vCPU. Is that correct?
> 
> No, the hypercall will work, but in this case the area won’t be updated.
> When the hypercall is called on the current vCPU, the area will be updated when the context is restored before returning to the guest, so there is no need to do it an extra time in the hypercall itself.

I am afraid this is not correct: update_runstate_area() is only called 
when context switching between 2 vCPUs. So the only way this could be 
called on return from a hypercall is if the vCPU gets preempted.
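
For reference, the call site looks roughly like this (a trimmed sketch 
from memory of xen/arch/arm/domain.c, not the exact code):

    /* Trimmed sketch of the Arm context-switch tail. */
    static void schedule_tail(struct vcpu *prev)
    {
        ctxt_switch_from(prev);
        ctxt_switch_to(current);

        /*
         * The shared runstate area is only refreshed here, i.e. on a
         * context switch, not on every hypercall return.
         */
        update_runstate_area(current);
    }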

> When this is done for a different vCPU, the current code no longer updates the area.
> 
> I did think about doing it at first, but the complexity it added (mainly due to the dual-page case) was very high, and I could not really think of a use case or find one in Linux.

You may want to have an updated view without forcing the vCPU to exit to 
the hypervisor and do a context switch.

> But now that I think of it, I could, in that specific case, use a copy_to_guest.

I think you should be able to call update_runstate_area() directly.
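
Something along these lines at the end of the registration path (a 
hypothetical sketch; setup_runstate_area() is only a stand-in name for 
your mapping helper):

    /* setup_runstate_area() is a stand-in for the patch's mapping code. */
    rc = setup_runstate_area(v, gaddr);
    if ( !rc )
        /* Seed the area with the current values straight away. */
        update_runstate_area(v);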

> 
> Do you think this is something that would make sense to restore?

I would like to avoid diverging too much from the current definition of 
the hypercall. In this case, I don't think the restriction you add is 
necessary.

>>
>>> +    if ( !page )
>>> +    {
>>> +        spin_unlock(&v->arch.runstate_guest.lock);
>>> +        gprintk(XENLOG_DEBUG, "Runstate pointer is not mapped\n");
>>> +        return -EINVAL;
>>>       }
>>>
>>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>>> +    v->arch.runstate_guest.page = page;
>>> +    v->arch.runstate_guest.offset = offset;
>>> +
>>
>> In the current implementation, the runstate area was updated with the latest information during the hypercall. This doesn't seem to happen anymore. Is there any specific reason?
> 
> As explained before, this is still happening on the current vCPU as a result of the context switch back to the guest at the end of the hypercall.

See above; I don't believe this is correct. :)

>>> +    spin_unlock(&v->arch.runstate_guest.lock);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +
>>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
>>> +static void update_runstate_area(struct vcpu *v)
>>> +{
>>> +    struct vcpu_runstate_info *guest_runstate;
>>> +    void *p;
>>> +
>>> +    spin_lock(&v->arch.runstate_guest.lock);
>>>
>>> -    if ( guest_handle )
>>> +    if ( v->arch.runstate_guest.page )
>>>       {
>>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>> +        p = __map_domain_page(v->arch.runstate_guest.page);
>>> +        guest_runstate = p + v->arch.runstate_guest.offset;
>>> +
>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>> +        {
>>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>> +            guest_runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>> +            smp_wmb();
>>> +        }
>>> +
>>> +        memcpy(guest_runstate, &v->runstate, sizeof(v->runstate));
>>>           smp_wmb();
>>> -        __raw_copy_to_guest(guest_handle,
>>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>>> +
>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>> +        {
>>> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>> +            guest_runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>> +            smp_wmb();
>>
>> Why do you need this barrier?
> 
> As the bit is supposed to be used to wait for an update to be done, I felt it was better to push this out as soon as possible.

smp_wmb() only ensures that any write before the barrier will be seen 
before any write after the barrier. It doesn't guarantee the other end 
will see it any sooner...

> As there is already one after the memcpy, the cost should be fairly limited here.

... so this feels more like a micro-optimization (happy to be proved 
wrong). Memory barriers are already tricky enough to use and 
understand, so I would rather not use more of them than necessary.
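
For context, the consumer side usually looks something like this (a 
sketch of the common guest-side pattern, e.g. what Linux does; not 
code from this patch):

    /*
     * Hypothetical guest-side helper: 'runstate' is the shared
     * vcpu_runstate_info registered for this vCPU.
     */
    static void runstate_snapshot(const struct vcpu_runstate_info *runstate,
                                  struct vcpu_runstate_info *snap)
    {
        uint64_t entry_time;

        do {
            entry_time = runstate->state_entry_time;
            smp_rmb();
            *snap = *runstate;   /* may race with the hypervisor's memcpy */
            smp_rmb();
        } while ( (snap->state_entry_time & XEN_RUNSTATE_UPDATE) ||
                  entry_time != runstate->state_entry_time );
    }

The reader only orders its own loads and retries while the flag is 
set, so an extra barrier on the writer side cannot make the cleared 
flag visible any sooner.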

Cheers,

-- 
Julien Grall



Thread overview: 26+ messages
2020-06-11 11:58 [PATCH 0/2] xen/arm: Convert runstate address during hypcall Bertrand Marquis
2020-06-11 11:58 ` [PATCH 1/2] " Bertrand Marquis
2020-06-11 18:16   ` Stefano Stabellini
2020-06-11 18:24     ` Julien Grall
2020-06-11 18:50       ` Stefano Stabellini
2020-06-11 19:38         ` Julien Grall
2020-06-12  1:09           ` Stefano Stabellini
2020-06-12  8:13             ` Bertrand Marquis
2020-06-13  0:24               ` Stefano Stabellini
2020-06-15 14:09                 ` Bertrand Marquis
2020-06-15 20:30                   ` Stefano Stabellini
2020-06-15 20:44                     ` Julien Grall
2020-06-12  9:53             ` Julien Grall
2020-06-13  0:24               ` Stefano Stabellini
2020-06-12  8:07     ` Bertrand Marquis
2020-06-12 10:53   ` Julien Grall
2020-06-12 14:13     ` Bertrand Marquis
2020-06-12 19:56       ` Julien Grall [this message]
2020-06-12 16:51     ` Bertrand Marquis
2020-06-12 20:31       ` Julien Grall
2020-06-15 14:01         ` Bertrand Marquis
2020-06-11 11:58 ` [PATCH 2/2] xen/arm: Support runstate crossing pages Bertrand Marquis
2020-06-12  1:10   ` Stefano Stabellini
2020-06-12 11:37     ` Julien Grall
2020-06-12 12:14   ` Julien Grall
2020-06-12 16:13     ` Bertrand Marquis
