From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: wei.liu2@citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	ian.jackson@eu.citrix.com, xen-devel <xen-devel@lists.xen.org>,
	Paul Durrant <paul.durrant@citrix.com>,
	roger.pau@citrix.com
Subject: Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
Date: Tue, 22 Nov 2016 18:47:59 -0500
Message-ID: <54f35d82-a905-760d-a258-2943b9337a46@oracle.com>
In-Reply-To: <a4ac4c28-833b-df5f-ce34-1fa72f7c4cd2@oracle.com>



On 11/22/2016 11:25 AM, Boris Ostrovsky wrote:
>
>
> On 11/22/2016 11:01 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 16:43, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 10:07 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 15:37, <boris.ostrovsky@oracle.com> wrote:
>>>>
>>>>>
>>>>> On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>>>>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>>>>>>
>>>>>>>
>>>>>>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest
>>>>>>>>>> (via 'xl vcpu-set').
>>>>>>>>>>
>>>>>>>>>> The primary reason for adding this call is that for PVH guests
>>>>>>>>>> the hypervisor needs to send an SCI and set GPE registers. This
>>>>>>>>>> is unlike HVM guests, which have qemu to perform these tasks.
>>>>>>>>>
>>>>>>>>> And the tool stack can't do this?
>>>>>>>>
>>>>>>>> For the avoidance of further misunderstandings: Of course likely
>>>>>>>> not completely on its own, but by using a (to be introduced)
>>>>>>>> lower-level hypervisor interface (setting arbitrary GPE bits, with
>>>>>>>> SCI raised as needed, or the SCI raising being another hypercall).
>>>>>>>
>>>>>>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>>>>>>
>>>>>>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>>>>>>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>>>>>>> XEN_DOMCTL_send_virq(virq)
>>>>>>>
>>>>>>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>>>>>>
>>>>>> Well, I don't see what set_avail_vcpus would be good for
>>>>>> considering that during v2 review you've said that you need it
>>>>>> just for the GPE modification and SCI sending.
>>>>>
>>>>>
>>>>> Someone needs to provide the hypervisor with the new number of
>>>>> available (i.e. hot-plugged/unplugged) VCPUs, thus the name of the
>>>>> domctl. GPE/SCI manipulation is part of that update.
>>>>>
>>>>> (I didn't say it during v2 review and I should have)
>>>>
>>>> And I've just found that need while looking over patch 8. With
>>>> that I'm not sure the splitting would make sense, albeit we may
>>>> find it necessary to fiddle with other GPE bits down the road.
>>>
>>> Just to make sure we are talking about the same thing:
>>> XEN_DOMCTL_set_acpi_reg is sufficient for both GPE and CPU map (or any
>>> ACPI register should the need arise)
>>
>> Well, my point is that as long as we continue to need
>> set_avail_vcpus (which I hear you say we do need), I'm not
>> sure the splitting would be helpful (minus the "albeit" part
>> above).
>
>
> So the downside of having set_avail is that if we ever find the need to
> touch ACPI registers we will be left with a useless (or at least
> redundant) domctl.
>
> Let me try to have set_acpi_reg and see if it looks good enough. If
> people don't like it then I'll go back to set_avail_vcpus.

(apparently I replied to Jan only. Resending to everyone)

I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with
XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to
perform two (or more, if there are more than 32 VCPUs) hypercalls, and
the logic on the hypervisor side is almost the same as the ioreq handling
that this series adds in patch 8.
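
For illustration only, here is a rough sketch of what such an interface
and the toolstack-side flow could look like. The struct layout, field
names, register offsets and the do_acpi_access_domctl() wrapper are
placeholders invented for this sketch, not the actual interface from the
prototype:

/*
 * Illustration only: a hypothetical XEN_DOMCTL_acpi_access-style request
 * and the toolstack call sequence for making VCPUs available.  All names,
 * offsets and the do_acpi_access_domctl() stub are placeholders.
 */
#include <stdio.h>
#include <stdint.h>

#define ACPI_CPU_MAP_OFFSET   0x00       /* placeholder: online-VCPU bitmap   */
#define ACPI_GPE0_STS_OFFSET  0x20       /* placeholder: GPE0 status register */
#define GPE_CPU_HOTPLUG_BIT   (1u << 2)  /* placeholder CPU-hotplug GPE bit   */

struct acpi_access {
    uint8_t  rw;        /* 0 = read, 1 = write */
    uint8_t  width;     /* bytes per access, at most 4 */
    uint16_t offset;    /* offset into the guest's ACPI register block */
    uint32_t val;       /* value to write (or returned on read) */
};

/* Stub standing in for the real domctl hypercall. */
static int do_acpi_access_domctl(uint32_t domid, const struct acpi_access *a)
{
    printf("dom%u: %s %u byte(s) at offset %#x, val %#x\n",
           domid, a->rw ? "write" : "read", (unsigned)a->width,
           (unsigned)a->offset, (unsigned)a->val);
    return 0;
}

/*
 * Common case is two hypercalls: one CPU-map write plus one GPE write to
 * make the hypervisor raise an SCI.  Guests with more than 32 VCPUs need
 * extra map writes, since each access carries at most 32 bits.
 */
static int set_avail_vcpus(uint32_t domid, unsigned int max_vcpus,
                           const uint32_t *avail)
{
    for (unsigned int i = 0; i < (max_vcpus + 31) / 32; i++) {
        struct acpi_access a = {
            .rw = 1, .width = 4,
            .offset = ACPI_CPU_MAP_OFFSET + 4 * i,
            .val = avail[i],
        };
        if (do_acpi_access_domctl(domid, &a))
            return -1;
    }

    struct acpi_access gpe = {
        .rw = 1, .width = 1,
        .offset = ACPI_GPE0_STS_OFFSET,
        .val = GPE_CPU_HOTPLUG_BIT,
    };
    return do_acpi_access_domctl(domid, &gpe);
}

int main(void)
{
    /* 48 possible VCPUs, first 40 available: needs two bitmap writes. */
    uint32_t avail[2] = { 0xffffffff, 0x000000ff };
    return set_avail_vcpus(1, 48, avail);
}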

However, I have now realized that this interface will not be available
to PV guests (and it will only become available to HVM guests once we
move hotplug from qemu into the hypervisor). It is also x86-specific.

This means that PV guests will not know the number of available VCPUs,
and therefore we will not be able to enforce it. OTOH we don't know how
to do that anyway, since PV guests bring up all VCPUs and then offline
them.
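
For contrast, here is a minimal sketch of the existing xenstore-driven
path that PV VCPU hotplug relies on today; the cpu/<N>/availability node
naming follows the usual toolstack convention, but the helper below is
only an illustration and not part of this series:

/*
 * Illustration only, not part of this series: the existing xenstore-based
 * mechanism for PV VCPU hotplug.  The toolstack writes cpu/<N>/availability
 * nodes and the guest kernel reacts to the watch, onlining or offlining its
 * own VCPUs; the hypervisor never sees the count, so it cannot enforce it.
 */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include <xenstore.h>

static bool pv_set_avail_vcpus(struct xs_handle *xs, unsigned int domid,
                               unsigned int max_vcpus, unsigned int avail)
{
    char path[64];

    for (unsigned int cpu = 0; cpu < max_vcpus; cpu++) {
        const char *state = (cpu < avail) ? "online" : "offline";

        snprintf(path, sizeof(path),
                 "/local/domain/%u/cpu/%u/availability", domid, cpu);
        if (!xs_write(xs, XBT_NULL, path, state, strlen(state)))
            return false;
    }
    return true;
}

int main(void)
{
    struct xs_handle *xs = xs_open(0);   /* must run in dom0 */

    if (!xs) {
        perror("xs_open");
        return 1;
    }
    /* Example: leave domain 5 with 2 of its 4 possible VCPUs available. */
    bool ok = pv_set_avail_vcpus(xs, 5, 4, 2);
    xs_close(xs);
    return ok ? 0 : 1;
}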

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 51+ messages
2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
2016-11-22 10:31   ` Jan Beulich
2016-11-22 10:39     ` Jan Beulich
2016-11-22 12:34       ` Boris Ostrovsky
2016-11-22 13:59         ` Jan Beulich
2016-11-22 14:37           ` Boris Ostrovsky
2016-11-22 15:07             ` Jan Beulich
2016-11-22 15:43               ` Boris Ostrovsky
2016-11-22 16:01                 ` Jan Beulich
     [not found]                   ` <a4ac4c28-833b-df5f-ce34-1fa72f7c4cd2@oracle.com>
2016-11-22 23:47                     ` Boris Ostrovsky [this message]
2016-11-23  8:09                       ` Jan Beulich
2016-11-23 13:33                         ` Boris Ostrovsky
2016-11-23 13:58                           ` Jan Beulich
2016-11-23 14:16                             ` Boris Ostrovsky
2016-11-25 18:16                               ` Boris Ostrovsky
2016-11-28  7:59                                 ` Jan Beulich
2016-11-22 12:19     ` Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests Boris Ostrovsky
2016-11-22 10:37   ` Jan Beulich
2016-11-22 12:28     ` Boris Ostrovsky
2016-11-22 14:07       ` Jan Beulich
2016-11-22 14:53         ` Boris Ostrovsky
2016-11-22 15:13           ` Jan Beulich
2016-11-22 15:52             ` Boris Ostrovsky
2016-11-22 16:02               ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 03/11] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 04/11] acpi: Make pmtimer optional in FADT Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 05/11] acpi: Power and Sleep ACPI buttons are not emulated for PVH guests Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
2016-11-22  9:13   ` Jan Beulich
2016-11-22 20:20   ` Konrad Rzeszutek Wilk
2016-11-21 21:00 ` [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
2016-11-22 11:34   ` Jan Beulich
2016-11-22 12:38     ` Boris Ostrovsky
2016-11-22 14:08       ` Jan Beulich
2016-11-28 15:16         ` Boris Ostrovsky
2016-11-28 15:48           ` Roger Pau Monné
2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
2016-11-22 14:11   ` Paul Durrant
2016-11-22 15:01   ` Jan Beulich
2016-11-22 15:30     ` Boris Ostrovsky
2016-11-22 16:05       ` Jan Beulich
2016-11-22 16:33         ` Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 09/11] events/x86: Define SCI virtual interrupt Boris Ostrovsky
2016-11-22 15:25   ` Jan Beulich
2016-11-22 15:57     ` Boris Ostrovsky
2016-11-22 16:07       ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
2016-11-22 15:32   ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 11/11] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
