* [PATCH v3 00/11] PVH VCPU hotplug support
@ 2016-11-21 21:00 Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
                   ` (10 more replies)
  0 siblings, 11 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

This series adds support for ACPI-based VCPU hotplug for unprivileged
PVH guests.

A new domctl, XEN_DOMCTL_set_avail_vcpus, is introduced; it is called
during guest creation and in response to the 'xl vcpu-set' command.
The domctl updates GPE0's status and enable registers and sends an SCI
to the guest using the (newly added) VIRQ_SCI.
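The GPE0 interaction described above can be sketched as a small model
(illustrative C only, not Xen code; register widths and names are
simplified, and `vcpu_hotplug_event` is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

/* GPE0 bit assumed reserved for VCPU hotplug (bit 2 in this series). */
#define GPE0_CPUHP_BIT 2

struct gpe0 {
    uint8_t sts;  /* status register: event pending */
    uint8_t en;   /* enable register: guest armed the event */
};

/*
 * A hotplug event latches the status bit unconditionally; an SCI
 * (VIRQ_SCI) is delivered only if the guest has also set the
 * corresponding enable bit.
 */
static int vcpu_hotplug_event(struct gpe0 *g)
{
    g->sts |= 1 << GPE0_CPUHP_BIT;
    return (g->sts & g->en) != 0;  /* nonzero => send VIRQ_SCI */
}
```

The status/enable split is standard ACPI GPE behavior: the guest's AML
handler reads status, services the event, then clears the bit.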

I decided not to implement enforcement of avail_vcpus in this series
after I realized that HVM guests (just like PV) also start with all
max_vcpus online initially and then offline them
(firmware/hvmloader/smp.c:smp_initialise()). Together with the fact
that HVM hotplug only works in one direction (i.e. adding VCPUs but
not removing them), I think it's pretty clear that hotplug needs to be
fixed in general; adding avail_vcpus enforcement should be part of
that fix.

I also didn't extend getdomaininfo to report the number of available
VCPUs, mostly because I haven't needed it so far. For live migration
(where Andrew thought it might be needed) we rely on xenstore's
"cpu/available" value to keep the hypervisor up-to-date (see
libxl__update_avail_vcpus_xenstore()). (Note that I haven't tested
live migration, only save/restore, but I believe it's the same
codepath.)

Boris Ostrovsky (11):
  x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  acpi: Define ACPI IO registers for PVH guests
  pvh: Set online VCPU map to avail_vcpus
  acpi: Make pmtimer optional in FADT
  acpi: Power and Sleep ACPI buttons are not emulated for PVH guests
  acpi: PVH guests need _E02 method
  pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  pvh/acpi: Handle ACPI accesses for PVH guests
  events/x86: Define SCI virtual interrupt
  pvh: Send an SCI on VCPU hotplug event
  docs: Describe PVHv2's VCPU hotplug procedure

 docs/misc/hvmlite.markdown            |  12 ++++
 tools/firmware/hvmloader/util.c       |   4 +-
 tools/flask/policy/modules/dom0.te    |   2 +-
 tools/flask/policy/modules/xen.if     |   4 +-
 tools/libacpi/build.c                 |   7 +++
 tools/libacpi/libacpi.h               |   2 +
 tools/libacpi/mk_dsdt.c               |  17 +++---
 tools/libacpi/static_tables.c         |  20 ++-----
 tools/libxc/include/xenctrl.h         |   5 ++
 tools/libxc/xc_domain.c               |  12 ++++
 tools/libxl/libxl.c                   |   7 +++
 tools/libxl/libxl_dom.c               |   7 +++
 tools/libxl/libxl_x86_acpi.c          |   6 +-
 xen/arch/arm/domain.c                 |   5 ++
 xen/arch/x86/domain.c                 |  16 ++++++
 xen/arch/x86/hvm/ioreq.c              | 103 ++++++++++++++++++++++++++++++++++
 xen/common/domctl.c                   |  26 +++++++++
 xen/common/event_channel.c            |   7 ++-
 xen/include/asm-x86/domain.h          |   1 +
 xen/include/asm-x86/hvm/domain.h      |   6 ++
 xen/include/public/arch-x86/xen-mca.h |   2 -
 xen/include/public/arch-x86/xen.h     |   7 ++-
 xen/include/public/domctl.h           |   9 +++
 xen/include/public/hvm/ioreq.h        |  25 +++++++++
 xen/include/xen/domain.h              |   1 +
 xen/include/xen/event.h               |   8 +++
 xen/include/xen/sched.h               |   6 ++
 xen/xsm/flask/hooks.c                 |   3 +
 xen/xsm/flask/policy/access_vectors   |   2 +
 29 files changed, 299 insertions(+), 33 deletions(-)

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 10:31   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests Boris Ostrovsky
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	Daniel De Graaf, roger.pau

This domctl is called when a VCPU is hot-(un)plugged into a guest (via
'xl vcpu-set').

The primary reason for adding this call is that for PVH guests the
hypervisor needs to send an SCI and set GPE registers, unlike HVM
guests, where qemu performs these tasks.
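A rough model of the hypervisor-side logic this domctl adds (plain C
standing in for Xen's xzalloc_array/bitmap_fill helpers; the actual
implementation is in the xen/common/domctl.c hunk of this patch):

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>

#define BITS_PER_LONG    (sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/*
 * Validate 'num' against the domain's max_vcpus and build an
 * avail_vcpus bitmap with the first 'num' bits set, mirroring
 * the domctl's bitmap_fill(avail_vcpus, num) call.
 */
static int set_avail_vcpus(unsigned long *map, unsigned int max_vcpus,
                           unsigned int num)
{
    unsigned int i;

    if ( num == 0 || num > max_vcpus )
        return -EINVAL;

    for ( i = 0; i < BITS_TO_LONGS(max_vcpus); i++ )
        map[i] = 0;
    for ( i = 0; i < num; i++ )
        map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);

    return 0;
}
```

Note the hypervisor tracks a contiguous prefix of online VCPUs here;
the bitmap representation leaves room for sparse maps later.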

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
Changes in v3:
* Clarified commit message to include reason for having the domctl
* Moved domctl to common code
* Made avail_vcpus a bitmap

 tools/flask/policy/modules/dom0.te  |  2 +-
 tools/flask/policy/modules/xen.if   |  4 ++--
 tools/libxc/include/xenctrl.h       |  5 +++++
 tools/libxc/xc_domain.c             | 12 ++++++++++++
 tools/libxl/libxl.c                 |  7 +++++++
 tools/libxl/libxl_dom.c             |  7 +++++++
 xen/common/domctl.c                 | 25 +++++++++++++++++++++++++
 xen/include/public/domctl.h         |  9 +++++++++
 xen/include/xen/sched.h             |  6 ++++++
 xen/xsm/flask/hooks.c               |  3 +++
 xen/xsm/flask/policy/access_vectors |  2 ++
 11 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 2d982d9..fd60c39 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -38,7 +38,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
 	set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
-	get_vnumainfo psr_cmt_op psr_cat_op
+	get_vnumainfo psr_cmt_op psr_cat_op set_avail_vcpus
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index eb646f5..afbedc2 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_cat_op soft_reset };
+			psr_cmt_op psr_cat_op soft_reset set_avail_vcpus};
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -85,7 +85,7 @@ define(`manage_domain', `
 			getaddrsize pause unpause trigger shutdown destroy
 			setaffinity setdomainmaxmem getscheduler resume
 			setpodtarget getpodtarget };
-    allow $1 $2:domain2 set_vnumainfo;
+    allow $1 $2:domain2 { set_vnumainfo set_avail_vcpus };
 ')
 
 # migrate_domain_out(priv, target)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2c83544..49e9b9f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1256,6 +1256,11 @@ int xc_domain_getvnuma(xc_interface *xch,
 int xc_domain_soft_reset(xc_interface *xch,
                          uint32_t domid);
 
+int xc_domain_set_avail_vcpus(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int num_vcpus);
+
+
 #if defined(__i386__) || defined(__x86_64__)
 /*
  * PC BIOS standard E820 types and structure.
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 296b852..161a80e 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2520,6 +2520,18 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = (domid_t)domid;
     return do_domctl(xch, &domctl);
 }
+
+int xc_domain_set_avail_vcpus(xc_interface *xch,
+                              uint32_t domid,
+                              unsigned int num_vcpus)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_set_avail_vcpus;
+    domctl.domain = (domid_t)domid;
+    domctl.u.avail_vcpus.num = num_vcpus;
+    return do_domctl(xch, &domctl);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 33c5e4c..e936f38 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5164,6 +5164,13 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
     default:
         rc = ERROR_INVAL;
     }
+
+    if (!rc) {
+        rc = xc_domain_set_avail_vcpus(ctx->xch, domid, maxcpus);
+        if (rc)
+            LOG(ERROR, "Couldn't set available vcpu count");
+    }
+
 out:
     libxl_dominfo_dispose(&info);
     GC_FREE;
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d519c8d..9037166 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -309,6 +309,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
+    rc = xc_domain_set_avail_vcpus(ctx->xch, domid,
+                                   libxl_bitmap_count_set(&info->avail_vcpus));
+    if (rc) {
+        LOG(ERROR, "Couldn't set available vcpu count (error %d)", rc);
+        return ERROR_FAIL;
+    }
+
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index b0ee961..626f2cb 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1157,6 +1157,31 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_set_avail_vcpus:
+    {
+        unsigned int num = op->u.avail_vcpus.num;
+        unsigned long *avail_vcpus;
+
+        if ( num > d->max_vcpus || num == 0 )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        avail_vcpus = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
+        if ( !avail_vcpus )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        bitmap_fill(avail_vcpus, num);
+        xfree(d->avail_vcpus);
+        d->avail_vcpus = avail_vcpus;
+
+        break;
+    }
+    
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 177319d..e7c84a9 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1059,6 +1059,13 @@ struct xen_domctl_psr_cmt_op {
 typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
 
+/* XEN_DOMCTL_avail_vcpus */
+struct xen_domctl_avail_vcpus {
+    uint32_t num; /* available number of vcpus */
+};
+typedef struct xen_domctl_avail_vcpus xen_domctl_avail_vcpus_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_avail_vcpus_t);
+
 /*  XEN_DOMCTL_MONITOR_*
  *
  * Enable/disable monitoring various VM events.
@@ -1221,6 +1228,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_cat_op                    78
 #define XEN_DOMCTL_soft_reset                    79
+#define XEN_DOMCTL_set_avail_vcpus               80
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1283,6 +1291,7 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_cat_op        psr_cat_op;
+        struct xen_domctl_avail_vcpus       avail_vcpus;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 1fbda87..104ce61 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -313,6 +313,12 @@ struct domain
     domid_t          domain_id;
 
     unsigned int     max_vcpus;
+    /*
+     * Bitmap of VCPUs that were online during guest creation
+     * plus/minus any hot-(un)plugged VCPUs.
+     */
+    unsigned long   *avail_vcpus;
+
     struct vcpu    **vcpu;
 
     shared_info_t   *shared_info;     /* shared data area */
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 177c11f..377549a 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -748,6 +748,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_soft_reset:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SOFT_RESET);
 
+    case XEN_DOMCTL_set_avail_vcpus:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_AVAIL_VCPUS);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 49c9a9e..f8a5e6c 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -244,6 +244,8 @@ class domain2
     mem_sharing
 # XEN_DOMCTL_psr_cat_op
     psr_cat_op
+# XEN_DOMCTL_set_avail_vcpus
+    set_avail_vcpus
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.7.4




* [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 10:37   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 03/11] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, Paul Durrant, jbeulich,
	Boris Ostrovsky, roger.pau

ACPI hotplug-related IO accesses (to the GPE0 block) are handled by
qemu for HVM guests. Since PVH guests don't have qemu, these accesses
need to be processed by the hypervisor.

Because the ACPI event model expects the pm1a block to be present,
the hypervisor needs to emulate it as well.

Define the XEN_ACPI_CPU_MAP[_LEN] block that lists online VCPUs.

Define XEN_GPE0_CPUHP_BIT, which is set in GPE0 on a CPU hotplug
event.
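The layout constraints can be checked numerically. The values below
are assumptions about the tree at the time (HVM_MAX_VCPUS = 128, V1
GPE0 block at 0xafe0); the authoritative definitions live in the Xen
public headers, not here:

```c
#include <assert.h>

/* Assumed values, mirroring the public header definitions. */
#define HVM_MAX_VCPUS             128
#define XEN_ACPI_CPU_MAP          0xaf00
/* One bit per VCPU, rounded up to whole bytes. */
#define XEN_ACPI_CPU_MAP_LEN      ((HVM_MAX_VCPUS + 7) / 8)
#define ACPI_GPE0_BLK_ADDRESS_V1  0xafe0
```

The #error check in the patch enforces exactly this: the CPU map must
end before the GPE0 block begins.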

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Paul Durrant <paul.durrant@citrix.com>
---
Changes in v3:
* Dropped ARM changes (ARM is not using PRST)
* Restored bit offset macros
* Added XEN_ prefix to CPU_ACPI_MAP
* Moved struct hvm_domain.acpi_io definition to later patch
* Made XEN_ACPI_CPU_MAP* definitions conditional
* Added XEN_GPE0_CPUHP_BIT definition
* Simplified XEN_ACPI_CPU_MAP_LEN definition

 tools/libacpi/mk_dsdt.c        |  7 +++++--
 tools/libacpi/static_tables.c  | 20 ++++++--------------
 xen/include/public/hvm/ioreq.h | 25 +++++++++++++++++++++++++
 3 files changed, 36 insertions(+), 16 deletions(-)

diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 16320a9..6da24fa 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -19,6 +19,7 @@
 #include <stdbool.h>
 #if defined(__i386__) || defined(__x86_64__)
 #include <xen/hvm/hvm_info_table.h>
+#include <xen/hvm/ioreq.h>
 #elif defined(__aarch64__)
 #include <xen/arch-arm.h>
 #endif
@@ -244,7 +245,8 @@ int main(int argc, char **argv)
 #endif
 
     /* Operation Region 'PRST': bitmask of online CPUs. */
-    stmt("OperationRegion", "PRST, SystemIO, 0xaf00, 32");
+    stmt("OperationRegion", "PRST, SystemIO, %#x, %d",
+        XEN_ACPI_CPU_MAP, XEN_ACPI_CPU_MAP_LEN);
     push_block("Field", "PRST, ByteAcc, NoLock, Preserve");
     indent(); printf("PRS, %u\n", max_cpus);
     pop_block();
@@ -288,7 +290,8 @@ int main(int argc, char **argv)
     /* Define GPE control method. */
     push_block("Scope", "\\_GPE");
     push_block("Method",
-               dm_version == QEMU_XEN_TRADITIONAL ? "_L02" : "_E02");
+               dm_version == QEMU_XEN_TRADITIONAL ? "_L%02d" : "_E%02d",
+               XEN_GPE0_CPUHP_BIT);
     stmt("\\_SB.PRSC ()", NULL);
     pop_block();
     pop_block();
diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
index 617bf68..a966333 100644
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -30,14 +30,6 @@ struct acpi_20_facs Facs = {
 /*
  * Fixed ACPI Description Table (FADT).
  */
-
-#define ACPI_PM1A_EVT_BLK_BIT_WIDTH         0x20
-#define ACPI_PM1A_EVT_BLK_BIT_OFFSET        0x00
-#define ACPI_PM1A_CNT_BLK_BIT_WIDTH         0x10
-#define ACPI_PM1A_CNT_BLK_BIT_OFFSET        0x00
-#define ACPI_PM_TMR_BLK_BIT_WIDTH           0x20
-#define ACPI_PM_TMR_BLK_BIT_OFFSET          0x00
-
 struct acpi_20_fadt Fadt = {
     .header = {
         .signature    = ACPI_2_0_FADT_SIGNATURE,
@@ -56,9 +48,9 @@ struct acpi_20_fadt Fadt = {
     .pm1a_cnt_blk = ACPI_PM1A_CNT_BLK_ADDRESS_V1,
     .pm_tmr_blk = ACPI_PM_TMR_BLK_ADDRESS_V1,
     .gpe0_blk = ACPI_GPE0_BLK_ADDRESS_V1,
-    .pm1_evt_len = ACPI_PM1A_EVT_BLK_BIT_WIDTH / 8,
-    .pm1_cnt_len = ACPI_PM1A_CNT_BLK_BIT_WIDTH / 8,
-    .pm_tmr_len = ACPI_PM_TMR_BLK_BIT_WIDTH / 8,
+    .pm1_evt_len = ACPI_PM1A_EVT_BLK_LEN,
+    .pm1_cnt_len = ACPI_PM1A_CNT_BLK_LEN,
+    .pm_tmr_len = ACPI_PM_TMR_BLK_LEN,
     .gpe0_blk_len = ACPI_GPE0_BLK_LEN_V1,
 
     .p_lvl2_lat = 0x0fff, /* >100,  means we do not support C2 state */
@@ -79,21 +71,21 @@ struct acpi_20_fadt Fadt = {
 
     .x_pm1a_evt_blk = {
         .address_space_id    = ACPI_SYSTEM_IO,
-        .register_bit_width  = ACPI_PM1A_EVT_BLK_BIT_WIDTH,
+        .register_bit_width  = ACPI_PM1A_EVT_BLK_LEN * 8,
         .register_bit_offset = ACPI_PM1A_EVT_BLK_BIT_OFFSET,
         .address             = ACPI_PM1A_EVT_BLK_ADDRESS_V1,
     },
 
     .x_pm1a_cnt_blk = {
         .address_space_id    = ACPI_SYSTEM_IO,
-        .register_bit_width  = ACPI_PM1A_CNT_BLK_BIT_WIDTH,
+        .register_bit_width  = ACPI_PM1A_CNT_BLK_LEN * 8,
         .register_bit_offset = ACPI_PM1A_CNT_BLK_BIT_OFFSET,
         .address             = ACPI_PM1A_CNT_BLK_ADDRESS_V1,
     },
 
     .x_pm_tmr_blk = {
         .address_space_id    = ACPI_SYSTEM_IO,
-        .register_bit_width  = ACPI_PM_TMR_BLK_BIT_WIDTH,
+        .register_bit_width  = ACPI_PM_TMR_BLK_LEN * 8,
         .register_bit_offset = ACPI_PM_TMR_BLK_BIT_OFFSET,
         .address             = ACPI_PM_TMR_BLK_ADDRESS_V1,
     }
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index 2e5809b..c04b5e6 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -24,6 +24,9 @@
 #ifndef _IOREQ_H_
 #define _IOREQ_H_
 
+#include "../xen-compat.h"
+#include "hvm_info_table.h" /* HVM_MAX_VCPUS */
+
 #define IOREQ_READ      1
 #define IOREQ_WRITE     0
 
@@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
 
 /* Compatibility definitions for the default location (version 0). */
 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
+#define ACPI_PM1A_EVT_BLK_LEN        0x04
+#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
 #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
+#define ACPI_PM1A_CNT_BLK_LEN        0x02
+#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
 #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#define ACPI_PM_TMR_BLK_LEN          0x04
+#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
 #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
 #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
 
+#if __XEN_INTERFACE_VERSION__ >= 0x00040800
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+/* Location of online VCPU bitmap. */
+#define XEN_ACPI_CPU_MAP             0xaf00
+#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
+
+#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
+#error "XEN_ACPI_CPU_MAP is too big"
+#endif
+
+/* GPE0 bit set during CPU hotplug */
+#define XEN_GPE0_CPUHP_BIT           2
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
 
 #endif /* _IOREQ_H_ */
 
-- 
2.7.4




* [PATCH v3 03/11] pvh: Set online VCPU map to avail_vcpus
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 04/11] acpi: Make pmtimer optional in FADT Boris Ostrovsky
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

The ACPI builder marks VCPUs set in the vcpu_online map as enabled in
the MADT. With ACPI-based CPU hotplug we only want VCPUs that are
started by the guest to be marked as such; the remaining VCPUs will be
set to "enabled" by AML code during hotplug.
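The before/after behavior amounts to the following (an illustrative
sketch, not libxl code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Old behavior: every one of nr_vcpus was marked online in the MADT. */
static void set_online_all(uint8_t *online, unsigned int nr_vcpus)
{
    unsigned int i;

    for ( i = 0; i < nr_vcpus; i++ )
        online[i / 8] |= 1 << (i & 7);
}

/*
 * New behavior: the online map is copied from the avail_vcpus bitmap,
 * so only VCPUs available at boot appear enabled; the rest are
 * enabled later by the AML hotplug handler.
 */
static void set_online_avail(uint8_t *online, const uint8_t *avail,
                             size_t len)
{
    memcpy(online, avail, len);
}
```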

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxl/libxl_x86_acpi.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index ff0e2df..949f555 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -98,7 +98,7 @@ static int init_acpi_config(libxl__gc *gc,
     uint32_t domid = dom->guest_domid;
     xc_dominfo_t info;
     struct hvm_info_table *hvminfo;
-    int i, rc = 0;
+    int rc = 0;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
@@ -144,8 +144,8 @@ static int init_acpi_config(libxl__gc *gc,
         hvminfo->nr_vcpus = info.max_vcpu_id + 1;
     }
 
-    for (i = 0; i < hvminfo->nr_vcpus; i++)
-        hvminfo->vcpu_online[i / 8] |= 1 << (i & 7);
+    memcpy(hvminfo->vcpu_online, b_info->avail_vcpus.map,
+           b_info->avail_vcpus.size);
 
     config->hvminfo = hvminfo;
 
-- 
2.7.4




* [PATCH v3 04/11] acpi: Make pmtimer optional in FADT
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (2 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 03/11] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 05/11] acpi: Power and Sleep ACPI buttons are not emulated for PVH guests Boris Ostrovsky
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

The PM timer is not supported by PVH guests.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v3:
* Fadt.pm_tmr_len needs to be zero too. (I kept the acks)

 tools/firmware/hvmloader/util.c | 3 ++-
 tools/libacpi/build.c           | 5 +++++
 tools/libacpi/libacpi.h         | 1 +
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 6e0cfe7..1d78973 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -948,7 +948,8 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
     if ( !strncmp(xenstore_read("platform/acpi_s4", "1"), "1", 1)  )
         config->table_flags |= ACPI_HAS_SSDT_S4;
 
-    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC | ACPI_HAS_WAET);
+    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
+                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER);
 
     config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
 
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 47dae01..e1fd381 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -574,6 +574,11 @@ int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config)
 
     fadt = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_fadt), 16);
     if (!fadt) goto oom;
+    if ( !(config->table_flags & ACPI_HAS_PMTIMER) )
+    {
+        Fadt.pm_tmr_blk = Fadt.pm_tmr_len = 0;
+        memset(&Fadt.x_pm_tmr_blk, 0, sizeof(Fadt.x_pm_tmr_blk));
+    }
     memcpy(fadt, &Fadt, sizeof(struct acpi_20_fadt));
     fadt->dsdt   = ctxt->mem_ops.v2p(ctxt, dsdt);
     fadt->x_dsdt = ctxt->mem_ops.v2p(ctxt, dsdt);
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 1d388f9..bda692e 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -30,6 +30,7 @@
 #define ACPI_HAS_TCPA        (1<<7)
 #define ACPI_HAS_IOAPIC      (1<<8)
 #define ACPI_HAS_WAET        (1<<9)
+#define ACPI_HAS_PMTIMER     (1<<10)
 
 struct xen_vmemrange;
 struct acpi_numa {
-- 
2.7.4




* [PATCH v3 05/11] acpi: Power and Sleep ACPI buttons are not emulated for PVH guests
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (3 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 04/11] acpi: Make pmtimer optional in FADT Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 tools/firmware/hvmloader/util.c | 3 ++-
 tools/libacpi/build.c           | 2 ++
 tools/libacpi/libacpi.h         | 1 +
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 1d78973..a3f12fe 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -949,7 +949,8 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
         config->table_flags |= ACPI_HAS_SSDT_S4;
 
     config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
-                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER);
+                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
+                            ACPI_HAS_BUTTONS);
 
     config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
 
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index e1fd381..4a2e2a9 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -579,6 +579,8 @@ int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config)
         Fadt.pm_tmr_blk = Fadt.pm_tmr_len = 0;
         memset(&Fadt.x_pm_tmr_blk, 0, sizeof(Fadt.x_pm_tmr_blk));
     }
+    if ( !(config->table_flags & ACPI_HAS_BUTTONS) )
+        Fadt.flags |= (ACPI_PWR_BUTTON | ACPI_SLP_BUTTON);
     memcpy(fadt, &Fadt, sizeof(struct acpi_20_fadt));
     fadt->dsdt   = ctxt->mem_ops.v2p(ctxt, dsdt);
     fadt->x_dsdt = ctxt->mem_ops.v2p(ctxt, dsdt);
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index bda692e..dd6ef8b 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -31,6 +31,7 @@
 #define ACPI_HAS_IOAPIC      (1<<8)
 #define ACPI_HAS_WAET        (1<<9)
 #define ACPI_HAS_PMTIMER     (1<<10)
+#define ACPI_HAS_BUTTONS     (1<<11)
 
 struct xen_vmemrange;
 struct acpi_numa {
-- 
2.7.4




* [PATCH v3 06/11] acpi: PVH guests need _E02 method
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (4 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 05/11] acpi: Power and Sleep ACPI buttons are not emulated for PVH guests Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22  9:13   ` Jan Beulich
  2016-11-22 20:20   ` Konrad Rzeszutek Wilk
  2016-11-21 21:00 ` [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

This is the method that will get invoked on an SCI.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 tools/libacpi/mk_dsdt.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index 6da24fa..6197692 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -282,11 +282,6 @@ int main(int argc, char **argv)
 
     pop_block();
 
-    if (dm_version == QEMU_NONE) {
-        pop_block();
-        return 0;
-    }
-
     /* Define GPE control method. */
     push_block("Scope", "\\_GPE");
     push_block("Method",
@@ -295,6 +290,11 @@ int main(int argc, char **argv)
     stmt("\\_SB.PRSC ()", NULL);
     pop_block();
     pop_block();
+
+    if (dm_version == QEMU_NONE) {
+        pop_block();
+        return 0;
+    }
     /**** Processor end ****/
 
 
-- 
2.7.4




* [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (5 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 11:34   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, Paul Durrant, jbeulich,
	Boris Ostrovsky, roger.pau

PVH guests will have ACPI accesses emulated by the hypervisor, as
opposed to qemu (as is the case for HVM guests).

Emulation of ACPI fixed-function hardware by an external emulator is
indicated by the XEN_X86_EMU_ACPI_FF flag; when it is absent, the
hypervisor installs the IO handlers itself.

Logic for the handlers will be provided by a later patch.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Paul Durrant <paul.durrant@citrix.com>
---
Changes in v3:
* acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
* Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
  with corresponding has_*())

 xen/arch/x86/hvm/ioreq.c          | 18 ++++++++++++++++++
 xen/include/asm-x86/domain.h      |  1 +
 xen/include/public/arch-x86/xen.h |  4 +++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d2245e2..51bb399 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1380,6 +1380,12 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
+static int acpi_ioaccess(
+    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
+{
+    return X86EMUL_UNHANDLEABLE;
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
@@ -1387,6 +1393,18 @@ void hvm_ioreq_init(struct domain *d)
 
     if ( !is_pvh_domain(d) )
         register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+
+    if ( !has_acpi_ff(d) )
+    {
+        /* Online CPU map, see DSDT's PRST region. */
+        register_portio_handler(d, XEN_ACPI_CPU_MAP,
+                                XEN_ACPI_CPU_MAP_LEN, acpi_ioaccess);
+
+        register_portio_handler(d, ACPI_GPE0_BLK_ADDRESS_V1,
+                                ACPI_GPE0_BLK_LEN_V1, acpi_ioaccess);
+        register_portio_handler(d, ACPI_PM1A_EVT_BLK_ADDRESS_V1,
+                                ACPI_PM1A_EVT_BLK_LEN, acpi_ioaccess);
+    }
 }
 
 /*
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index f6a40eb..33f862f 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -425,6 +425,7 @@ struct arch_domain
 #define has_vvga(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_VGA))
 #define has_viommu(d)      (!!((d)->arch.emulation_flags & XEN_X86_EMU_IOMMU))
 #define has_vpit(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_PIT))
+#define has_acpi_ff(d)     (!!((d)->arch.emulation_flags & XEN_X86_EMU_ACPI_FF))
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
 
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index cdd93c1..04c78ec 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -283,12 +283,14 @@ struct xen_arch_domainconfig {
 #define XEN_X86_EMU_IOMMU           (1U<<_XEN_X86_EMU_IOMMU)
 #define _XEN_X86_EMU_PIT            8
 #define XEN_X86_EMU_PIT             (1U<<_XEN_X86_EMU_PIT)
+#define _XEN_X86_EMU_ACPI_FF        9
+#define XEN_X86_EMU_ACPI_FF         (1U<<_XEN_X86_EMU_ACPI_FF)
 
 #define XEN_X86_EMU_ALL             (XEN_X86_EMU_LAPIC | XEN_X86_EMU_HPET |  \
                                      XEN_X86_EMU_PM | XEN_X86_EMU_RTC |      \
                                      XEN_X86_EMU_IOAPIC | XEN_X86_EMU_PIC |  \
                                      XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU |   \
-                                     XEN_X86_EMU_PIT)
+                                     XEN_X86_EMU_PIT | XEN_X86_EMU_ACPI_FF)
     uint32_t emulation_flags;
 };
 #endif
-- 
2.7.4



* [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (6 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 14:11   ` Paul Durrant
  2016-11-22 15:01   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 09/11] events/x86: Define SCI virtual interrupt Boris Ostrovsky
                   ` (2 subsequent siblings)
  10 siblings, 2 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, Paul Durrant, jbeulich,
	Boris Ostrovsky, roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Paul Durrant <paul.durrant@citrix.com>
---
Changes in v3:
* Introduce a mask for pm1a and gpe0 that lists bits that a
  guest can operate on.
* Lots of small changes.

 xen/arch/x86/hvm/ioreq.c         | 87 +++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/hvm/domain.h |  6 +++
 2 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 51bb399..4ab0d0a 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -16,6 +16,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/acpi.h>
 #include <xen/config.h>
 #include <xen/ctype.h>
 #include <xen/init.h>
@@ -1383,7 +1384,91 @@ static int hvm_access_cf8(
 static int acpi_ioaccess(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
-    return X86EMUL_UNHANDLEABLE;
+    uint8_t *reg = NULL;
+    const uint8_t *mask = NULL;
+    bool is_cpu_map = false;
+    struct domain *currd = current->domain;
+    const static uint8_t pm1a_mask[4] = {ACPI_BITMASK_GLOBAL_LOCK_STATUS, 0,
+                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE, 0};
+    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
+                                         1U << XEN_GPE0_CPUHP_BIT, 0};
+
+    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
+                 (ACPI_GPE0_BLK_LEN_V1 != 4));
+
+    ASSERT(!has_acpi_ff(currd));
+
+    switch ( port )
+    {
+    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
+         ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1:
+        reg = currd->arch.hvm_domain.acpi_io.pm1a;
+        mask = pm1a_mask;
+        break;
+
+    case ACPI_GPE0_BLK_ADDRESS_V1 ...
+         ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1:
+        reg = currd->arch.hvm_domain.acpi_io.gpe;
+        mask = gpe0_mask;
+        break;
+
+    case XEN_ACPI_CPU_MAP ...
+         XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN - 1:
+        is_cpu_map = true;
+        break;
+
+    default:
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    if ( bytes == 0 )
+        return X86EMUL_OKAY;
+
+    if ( dir == IOREQ_READ )
+    {
+        if ( is_cpu_map )
+        {
+            unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
+
+            /*
+             * Clear bits that we are about to read to in case we
+             * copy fewer than @bytes.
+             */
+            *val &= (~((1ULL << (bytes * 8)) - 1)) & 0xffffffff;
+
+            if ( ((currd->max_vcpus + 7) / 8) > first_byte )
+            {
+                memcpy(val, (uint8_t *)currd->avail_vcpus + first_byte,
+                       min(bytes, ((currd->max_vcpus + 7) / 8) - first_byte));
+            }
+        }
+        else
+            memcpy(val, &reg[port & 3], bytes);
+    }
+    else
+    {
+        unsigned int idx = port & 3;
+        unsigned int i;
+        uint8_t *ptr;
+
+        if ( is_cpu_map )
+            /*
+             * CPU map is only read by DSDT's PRSC method and should never
+             * be written by a guest.
+             */
+            return X86EMUL_UNHANDLEABLE;
+
+        ptr = (uint8_t *)val;
+        for ( i = 0; i < bytes; i++, idx++ )
+        {
+            if ( idx < 2 ) /* status, write 1 to clear. */
+                reg[idx] &= ~(mask[i] & ptr[i]);
+            else           /* enable */
+                reg[idx] |= (mask[i] & ptr[i]);
+        }
+    }
+
+    return X86EMUL_OKAY;
 }
 
 void hvm_ioreq_init(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f34d784..f492a2b 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -87,6 +87,12 @@ struct hvm_domain {
     } ioreq_server;
     struct hvm_ioreq_server *default_ioreq_server;
 
+    /* PVH guests */
+    struct {
+        uint8_t pm1a[ACPI_PM1A_EVT_BLK_LEN];
+        uint8_t gpe[ACPI_GPE0_BLK_LEN_V1];
+    } acpi_io;
+
     /* Cached CF8 for guest PCI config cycles */
     uint32_t                pci_cf8;
 
-- 
2.7.4



* [PATCH v3 09/11] events/x86: Define SCI virtual interrupt
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (7 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 15:25   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
  2016-11-21 21:00 ` [PATCH v3 11/11] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
  10 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, wei.liu2, George Dunlap, andrew.cooper3,
	ian.jackson, Tim Deegan, jbeulich, Boris Ostrovsky, roger.pau

PVH guests do not have an IOAPIC, which typically generates an SCI. For
those guests the SCI will instead be delivered as a virtual interrupt.

We also move the VIRQ_MCA definition out of xen-mca.h to keep all
x86-specific VIRQ_ARCH_* definitions in one place.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Tim Deegan <tim@xen.org>
---
Changes in v3:
* VIRQ_SCI is global

 xen/include/public/arch-x86/xen-mca.h | 2 --
 xen/include/public/arch-x86/xen.h     | 3 +++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/include/public/arch-x86/xen-mca.h b/xen/include/public/arch-x86/xen-mca.h
index a97e821..b76c53c 100644
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -91,8 +91,6 @@
 
 #ifndef __ASSEMBLY__
 
-#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
-
 /*
  * Machine Check Architecure:
  * structs are read-only and used to report all kinds of
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 04c78ec..fcc1bf8 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -295,6 +295,9 @@ struct xen_arch_domainconfig {
 };
 #endif
 
+#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
+#define VIRQ_SCI VIRQ_ARCH_1 /* G. (PVH) ACPI interrupt */
+
 #endif /* !__ASSEMBLY__ */
 
 /*
-- 
2.7.4



* [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (8 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 09/11] events/x86: Define SCI virtual interrupt Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  2016-11-22 15:32   ` Jan Beulich
  2016-11-21 21:00 ` [PATCH v3 11/11] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
  10 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, wei.liu2, andrew.cooper3, ian.jackson,
	Julien Grall, jbeulich, Boris Ostrovsky, roger.pau

.. and update GPE0 registers.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
---
Changes in v3:
* Add per-arch arch_update_avail_vcpus() (nop for ARM)
* send_guest_global_virq() is updated to deal with
  offlined VCPU0, made non-static.

 xen/arch/arm/domain.c      |  5 +++++
 xen/arch/x86/domain.c      | 16 ++++++++++++++++
 xen/common/domctl.c        |  1 +
 xen/common/event_channel.c |  7 +++++--
 xen/include/xen/domain.h   |  1 +
 xen/include/xen/event.h    |  8 ++++++++
 6 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 7e43691..19af326 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -549,6 +549,11 @@ void vcpu_destroy(struct vcpu *v)
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
 }
 
+int arch_update_avail_vcpus(struct domain *d)
+{
+    return 0;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1bd5eb6..c0c0d4f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -509,6 +509,22 @@ void vcpu_destroy(struct vcpu *v)
         xfree(v->arch.pv_vcpu.trap_ctxt);
 }
 
+int arch_update_avail_vcpus(struct domain *d)
+{
+    /*
+     * For PVH guests we need to send an SCI and set enable/status
+     * bits in GPE block.
+     */
+    if ( is_hvm_domain(d) && !has_acpi_ff(d) )
+    {
+        d->arch.hvm_domain.acpi_io.gpe[2] =
+            d->arch.hvm_domain.acpi_io.gpe[0] = 1 << XEN_GPE0_CPUHP_BIT;
+        send_guest_global_virq(d, VIRQ_SCI);
+    }
+
+    return 0;
+}
+
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config)
 {
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 626f2cb..2ae6a91 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1179,6 +1179,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         xfree(d->avail_vcpus);
         d->avail_vcpus = avail_vcpus;
 
+        ret = arch_update_avail_vcpus(d);
         break;
     }
     
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 638dc5e..1d77373 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -727,7 +727,7 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
     spin_unlock_irqrestore(&v->virq_lock, flags);
 }
 
-static void send_guest_global_virq(struct domain *d, uint32_t virq)
+void send_guest_global_virq(struct domain *d, uint32_t virq)
 {
     unsigned long flags;
     int port;
@@ -739,7 +739,10 @@ static void send_guest_global_virq(struct domain *d, uint32_t virq)
     if ( unlikely(d == NULL) || unlikely(d->vcpu == NULL) )
         return;
 
-    v = d->vcpu[0];
+    /* Send to first available VCPU */
+    for_each_vcpu(d, v)
+        if ( is_vcpu_online(v) )
+            break;
     if ( unlikely(v == NULL) )
         return;
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index bce0ea1..b386038 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -52,6 +52,7 @@ void vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_update_avail_vcpus(struct domain *d);
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config);
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 5008c80..74bd605 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -23,6 +23,14 @@
 void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq);
 
 /*
+ * send_guest_global_virq: Notify guest via a global VIRQ.
+ *  @d:        domain to which virtual IRQ should be sent. First
+ *             online VCPU will be selected.
+ *  @virq:     Virtual IRQ number (VIRQ_*)
+ */
+void send_guest_global_virq(struct domain *d, uint32_t virq);
+
+/*
  * send_global_virq: Notify the domain handling a global VIRQ.
  *  @virq:     Virtual IRQ number (VIRQ_*)
  */
-- 
2.7.4



* [PATCH v3 11/11] docs: Describe PVHv2's VCPU hotplug procedure
  2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
                   ` (9 preceding siblings ...)
  2016-11-21 21:00 ` [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
@ 2016-11-21 21:00 ` Boris Ostrovsky
  10 siblings, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-21 21:00 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, wei.liu2, George Dunlap, andrew.cooper3,
	ian.jackson, Tim Deegan, jbeulich, Boris Ostrovsky, roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Tim Deegan <tim@xen.org>
---
 docs/misc/hvmlite.markdown | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/docs/misc/hvmlite.markdown b/docs/misc/hvmlite.markdown
index 898b8ee..0045d22 100644
--- a/docs/misc/hvmlite.markdown
+++ b/docs/misc/hvmlite.markdown
@@ -75,3 +75,15 @@ info structure that's passed at boot time (field rsdp_paddr).
 
 Description of paravirtualized devices will come from XenStore, just as it's
 done for HVM guests.
+
+## VCPU hotplug ##
+
+VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
+follows ACPI model where change in domain's number of VCPUS (stored in
+arch_domain.avail_vcpus) results in an SCI being sent to the guest. The
+guest then executes DSDT's PRSC method, updating MADT enable status for
+the affected VCPU.
+
+This is achieved by having the toolstack call XEN_DOMCTL_set_avail_vcpus
+which sets appropriate bits in ACPI GPE0 enable and status registers followed
+by sending VIRQ_SCI to the guest.
-- 
2.7.4



* Re: [PATCH v3 06/11] acpi: PVH guests need _E02 method
  2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
@ 2016-11-22  9:13   ` Jan Beulich
  2016-11-22 20:20   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-22  9:13 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: andrew.cooper3, xen-devel, wei.liu2, ian.jackson, roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> This is the method that will get invoked on an SCI.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Looks like you've lost Konrad's R-b here?

Jan



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
@ 2016-11-22 10:31   ` Jan Beulich
  2016-11-22 10:39     ` Jan Beulich
  2016-11-22 12:19     ` Boris Ostrovsky
  0 siblings, 2 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 10:31 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
> 'xl vcpu-set').
> 
> The primary reason for adding this call is because for PVH guests
> the hypervisor needs to send an SCI and set GPE registers. This is
> unlike HVM guests that have qemu to perform these tasks.

And the tool stack can't do this?

> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -1157,6 +1157,31 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              copyback = 1;
>          break;
>  
> +    case XEN_DOMCTL_set_avail_vcpus:
> +    {
> +        unsigned int num = op->u.avail_vcpus.num;
> +        unsigned long *avail_vcpus;
> +
> +        if ( num > d->max_vcpus || num == 0 )
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
> +
> +        avail_vcpus = xzalloc_array(unsigned long, BITS_TO_LONGS(d->max_vcpus));
> +        if ( !avail_vcpus )
> +        {
> +            ret = -ENOMEM;
> +            break;
> +        }
> +
> +        bitmap_fill(avail_vcpus, num);

What is the bitmap good for if all that gets ever set are the initial so
many bits? I think the domctl should _take_ a bitmap.

Also d->max_vcpus shouldn't change over the lifetime of the guest,
so repeated allocating and freeing the bitmap seems pointless.

Jan



* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-21 21:00 ` [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests Boris Ostrovsky
@ 2016-11-22 10:37   ` Jan Beulich
  2016-11-22 12:28     ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 10:37 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>  
>  /* Compatibility definitions for the default location (version 0). */
>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
> +#define ACPI_PM_TMR_BLK_LEN          0x04
> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>  
> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +/* Location of online VCPU bitmap. */
> +#define XEN_ACPI_CPU_MAP             0xaf00
> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
> +
> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
> +#error "XEN_ACPI_CPU_MAP is too big"
> +#endif
> +
> +/* GPE0 bit set during CPU hotplug */
> +#define XEN_GPE0_CPUHP_BIT           2
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>  
>  #endif /* _IOREQ_H_ */

I'm afraid there's been some misunderstanding here during the v2
discussion: New hypervisor/tools only definitions don't need an
additional interface version guard. It's instead the pre-existing
ones which should be removed from the namespace by adding
such a guard.

And of course _everything_ being added here anew needs to be
XEN_ prefixed and guarded.

Jan



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 10:31   ` Jan Beulich
@ 2016-11-22 10:39     ` Jan Beulich
  2016-11-22 12:34       ` Boris Ostrovsky
  2016-11-22 12:19     ` Boris Ostrovsky
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 10:39 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau

>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>> 'xl vcpu-set').
>> 
>> The primary reason for adding this call is because for PVH guests
>> the hypervisor needs to send an SCI and set GPE registers. This is
>> unlike HVM guests that have qemu to perform these tasks.
> 
> And the tool stack can't do this?

For the avoidance of further misunderstandings: Of course likely
not completely on its own, but by using a (to be introduced) more
low level hypervisor interface (setting arbitrary GPE bits, with SCI
raised as needed, or the SCI raising being another hypercall).

Jan



* Re: [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-21 21:00 ` [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
@ 2016-11-22 11:34   ` Jan Beulich
  2016-11-22 12:38     ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 11:34 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> PVH guests will have ACPI accesses emulated by the hypervisor
> as opposed to QEMU (as is the case for HVM guests)
> 
> Support for IOREQ server emulation of CPU hotplug is indicated
> by XEN_X86_EMU_IOREQ_CPUHP flag.
> 
> Logic for the handler will be provided by a later patch.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
> CC: Paul Durrant <paul.durrant@citrix.com>
> ---
> Changes in v3:
> * acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
> * Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
>   with corresponding has_*())

Except in the description above.

Also, while I'm fine with the flag rename, has_acpi_ff() looks wrong
(or at least misleading) to me: Both HVM and PVHv2 have fixed
function hardware emulated, they only differ in who the emulator
is. Reduced hardware, if we would emulate such down the road,
otoh might then indeed come without. So how about one of
has_xen_acpi_ff() or has_dm_acpi_ff()?

Jan



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 10:31   ` Jan Beulich
  2016-11-22 10:39     ` Jan Beulich
@ 2016-11-22 12:19     ` Boris Ostrovsky
  1 sibling, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 12:19 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau



On 11/22/2016 05:31 AM, Jan Beulich wrote:

>
> What is the bitmap good for if all that gets ever set are the initial so
> many bits? I think the domctl should _take_ a bitmap.

Yes, I was actually thinking about that after I sent the series.
The toolstack already has this bitmap, so it will be even more convenient.

-boris



* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 10:37   ` Jan Beulich
@ 2016-11-22 12:28     ` Boris Ostrovsky
  2016-11-22 14:07       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 12:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>
>>  /* Compatibility definitions for the default location (version 0). */
>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>
>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>> +
>> +/* Location of online VCPU bitmap. */
>> +#define XEN_ACPI_CPU_MAP             0xaf00
>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>> +
>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>> +#error "XEN_ACPI_CPU_MAP is too big"
>> +#endif
>> +
>> +/* GPE0 bit set during CPU hotplug */
>> +#define XEN_GPE0_CPUHP_BIT           2
>> +
>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>
>>  #endif /* _IOREQ_H_ */
>
> I'm afraid there's been some misunderstanding here during the v2
> discussion: New hypervisor/tools only definitions don't need an
> additional interface version guard. It's instead the pre-existing
> ones which should be removed from the namespace by adding
> such a guard.

We want to keep all of them now. Shouldn't the interface check then be
added after we move to Andrew's suggestion of dynamic IO ranges?

>
> And of course _everything_ being added here anew needs to be
> XEN_ prefixed and guarded.


The ones that I didn't add the prefix to are the standard ACPI names. If
I did, this would look like:

#define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
+#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00

And as for being guarded, don't you think it will make things difficult
to read? E.g.:

#define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
+#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
+#endif

Repeated 3 times.

-boris


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 10:39     ` Jan Beulich
@ 2016-11-22 12:34       ` Boris Ostrovsky
  2016-11-22 13:59         ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 12:34 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau



On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>> 'xl vcpu-set').
>>>
>>> The primary reason for adding this call is because for PVH guests
>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>> unlike HVM guests that have qemu to perform these tasks.
>>
>> And the tool stack can't do this?
>
> For the avoidance of further misunderstandings: Of course likely
> not completely on its own, but by using a (to be introduced) more
> low level hypervisor interface (setting arbitrary GPE bits, with SCI
> raised as needed, or the SCI raising being another hypercall).

So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into

XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
XEN_DOMCTL_send_virq(virq)

(with perhaps set_avail_vcpus folded into set_acpi_reg) ?


-boris


* Re: [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-22 11:34   ` Jan Beulich
@ 2016-11-22 12:38     ` Boris Ostrovsky
  2016-11-22 14:08       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 12:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 06:34 AM, Jan Beulich wrote:
>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>> PVH guests will have ACPI accesses emulated by the hypervisor
>> as opposed to QEMU (as is the case for HVM guests)
>>
>> Support for IOREQ server emulation of CPU hotplug is indicated
>> by XEN_X86_EMU_IOREQ_CPUHP flag.
>>
>> Logic for the handler will be provided by a later patch.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> ---
>> CC: Paul Durrant <paul.durrant@citrix.com>
>> ---
>> Changes in v3:
>> * acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
>> * Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
>>   with corresponding has_*())
>
> Except in the description above.
>
> Also, while I'm fine with the flag rename, has_acpi_ff() looks wrong
> (or at least misleading) to me: Both HVM and PVHv2 have fixed
> function hardware emulated, they only differ in who the emulator
> is. Reduced hardware, if we would emulate such down the road,
> otoh might then indeed come without. So how about one of
> has_xen_acpi_ff() or has_dm_acpi_ff()?

I think the latter is better. But then to keep flag names in sync with 
has_*() macros, how about XEN_X86_EMU_DM_ACPI_FF?


-boris


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 12:34       ` Boris Ostrovsky
@ 2016-11-22 13:59         ` Jan Beulich
  2016-11-22 14:37           ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 13:59 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau

>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>> 'xl vcpu-set').
>>>>
>>>> The primary reason for adding this call is because for PVH guests
>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>> unlike HVM guests that have qemu to perform these tasks.
>>>
>>> And the tool stack can't do this?
>>
>> For the avoidance of further misunderstandings: Of course likely
>> not completely on its own, but by using a (to be introduced) more
>> low level hypervisor interface (setting arbitrary GPE bits, with SCI
>> raised as needed, or the SCI raising being another hypercall).
> 
> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
> 
> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
> XEN_DOMCTL_send_virq(virq)
> 
> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?

Well, I don't see what set_avail_vcpus would be good for
considering that during v2 review you've said that you need it
just for the GPE modification and SCI sending.

And whether to split the other two, or simply send SCI whenever
GPE bits have been modified depends on the specific requirements
the ACPI spec puts on us. Or maybe it could always be folded,
with (if necessary) SCI sending being controlled by a flag.

Jan



* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 12:28     ` Boris Ostrovsky
@ 2016-11-22 14:07       ` Jan Beulich
  2016-11-22 14:53         ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 14:07 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 22.11.16 at 13:28, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>
>>>  /* Compatibility definitions for the default location (version 0). */
>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>
>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>> +
>>> +/* Location of online VCPU bitmap. */
>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>> +
>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>> +#endif
>>> +
>>> +/* GPE0 bit set during CPU hotplug */
>>> +#define XEN_GPE0_CPUHP_BIT           2
>>> +
>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>
>>>  #endif /* _IOREQ_H_ */
>>
>> I'm afraid there's been some misunderstanding here during the v2
>> discussion: New hypervisor/tools only definitions don't need an
>> additional interface version guard. It's instead the pre-existing
>> ones which should be removed from the namespace by adding
>> such a guard.
> 
> We want to keep all of them now. Shouldn't then the interface check be 
> added after we move to Andrew's suggestion of dynamic IO ranges?

No, we want them gone from the public interface for new
consumers. The only valid consumer has always been the tool
stack, just that this hadn't been properly done in the header.
Hence the need to add __XEN__ / __XEN_TOOLS__ around new
additions, and additionally interface version guards around
existing ones.

>> And of course _everything_ being added here anew needs to be
>> XEN_ prefixed and guarded.
> 
> 
> The ones that I didn't add the prefix to are the standard ACPI names. If 
> I did this would look like
> 
> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00

There's nothing standard here. Implementations are fine to specify
a larger length iirc, at least for ACPI v2 and newer. If these values
were all fixed, there wouldn't have been a need to specify them in
FADT in the first place.

> And as for being guarded, don't you think it will make things difficult 
> to read? E.g.:
> 
> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
> +#endif
> 
> Repeated 3 times.

Then don't repeat it three times and put the lengths and bit offsets
after all the addresses.

Jan
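
Concretely, Jan's grouping suggestion would turn the interleaved layout into something like the following (a sketch only; the XEN_-prefixed names follow the naming discussed earlier in the thread and are not final):

```c
/* Compatibility definitions for the default location (version 0). */
#define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
#define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
#define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0

/* Lengths and bit offsets grouped together, under a single guard. */
#if defined(__XEN__) || defined(__XEN_TOOLS__)
#define XEN_ACPI_PM1A_EVT_BLK_LEN        0x04
#define XEN_ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
#define XEN_ACPI_PM1A_CNT_BLK_LEN        0x02
#define XEN_ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
#endif
```

One `#if`/`#endif` pair instead of three keeps the header readable while still hiding the new additions from non-tools consumers.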



* Re: [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-22 12:38     ` Boris Ostrovsky
@ 2016-11-22 14:08       ` Jan Beulich
  2016-11-28 15:16         ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 14:08 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 22.11.16 at 13:38, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 06:34 AM, Jan Beulich wrote:
>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>> PVH guests will have ACPI accesses emulated by the hypervisor
>>> as opposed to QEMU (as is the case for HVM guests)
>>>
>>> Support for IOREQ server emulation of CPU hotplug is indicated
>>> by XEN_X86_EMU_IOREQ_CPUHP flag.
>>>
>>> Logic for the handler will be provided by a later patch.
>>>
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> ---
>>> CC: Paul Durrant <paul.durrant@citrix.com>
>>> ---
>>> Changes in v3:
>>> * acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
>>> * Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
>>>   with corresponding has_*())
>>
>> Except in the description above.
>>
>> Also, while I'm fine with the flag rename, has_acpi_ff() looks wrong
>> (or at least misleading) to me: Both HVM and PVHv2 have fixed
>> function hardware emulated, they only differ in who the emulator
>> is. Reduced hardware, if we would emulate such down the road,
>> otoh might then indeed come without. So how about one of
>> has_xen_acpi_ff() or has_dm_acpi_ff()?
> 
> I think the latter is better. But then to keep flag names in sync with 
> has_*() macros, how about XEN_X86_EMU_DM_ACPI_FF?

Not sure - the flag name, as said, seemed fine to me before, and I
don't overly care about the two names fully matching up. Maybe
others here have an opinion?

Jan



* Re: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
@ 2016-11-22 14:11   ` Paul Durrant
  2016-11-22 15:01   ` Jan Beulich
  1 sibling, 0 replies; 51+ messages in thread
From: Paul Durrant @ 2016-11-22 14:11 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, jbeulich, Ian Jackson

> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: 21 November 2016 21:01
> To: xen-devel@lists.xen.org
> Cc: jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> Wei Liu <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger
> Pau Monne <roger.pau@citrix.com>; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
> CC: Paul Durrant <paul.durrant@citrix.com>
> ---
> Changes in v3:
> * Introduce a mask for pm1a and gpe0 that lists bits that a
>   guest can operate on.
> * Lots of small changes.
> 
>  xen/arch/x86/hvm/ioreq.c         | 87
> +++++++++++++++++++++++++++++++++++++++-
>  xen/include/asm-x86/hvm/domain.h |  6 +++
>  2 files changed, 92 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 51bb399..4ab0d0a 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -16,6 +16,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
> 
> +#include <xen/acpi.h>
>  #include <xen/config.h>
>  #include <xen/ctype.h>
>  #include <xen/init.h>
> @@ -1383,7 +1384,91 @@ static int hvm_access_cf8(
>  static int acpi_ioaccess(
>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> -    return X86EMUL_UNHANDLEABLE;
> +    uint8_t *reg = NULL;
> +    const uint8_t *mask = NULL;
> +    bool is_cpu_map = false;
> +    struct domain *currd = current->domain;
> +    const static uint8_t pm1a_mask[4] =
> {ACPI_BITMASK_GLOBAL_LOCK_STATUS, 0,
> +                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE, 0};
> +    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
> +                                         1U << XEN_GPE0_CPUHP_BIT, 0};
> +
> +    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> +                 (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +    ASSERT(!has_acpi_ff(currd));
> +
> +    switch ( port )
> +    {
> +    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +         ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1:
> +        reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +        mask = pm1a_mask;
> +        break;
> +
> +    case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +         ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1:
> +        reg = currd->arch.hvm_domain.acpi_io.gpe;
> +        mask = gpe0_mask;
> +        break;
> +
> +    case XEN_ACPI_CPU_MAP ...
> +         XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN - 1:
> +        is_cpu_map = true;
> +        break;
> +
> +    default:
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
> +    if ( bytes == 0 )
> +        return X86EMUL_OKAY;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        if ( is_cpu_map )
> +        {
> +            unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
> +
> +            /*
> +             * Clear bits that we are about to read to in case we
> +             * copy fewer than @bytes.
> +             */
> +            *val &= (~((1ULL << (bytes * 8)) - 1)) & 0xffffffff;
> +
> +            if ( ((currd->max_vcpus + 7) / 8) > first_byte )
> +            {
> +                memcpy(val, (uint8_t *)currd->avail_vcpus + first_byte,
> +                       min(bytes, ((currd->max_vcpus + 7) / 8) - first_byte));
> +            }
> +        }
> +        else
> +            memcpy(val, &reg[port & 3], bytes);
> +    }
> +    else
> +    {
> +        unsigned int idx = port & 3;
> +        unsigned int i;
> +        uint8_t *ptr;
> +
> +        if ( is_cpu_map )
> +            /*
> +             * CPU map is only read by DSDT's PRSC method and should never
> +             * be written by a guest.
> +             */
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        ptr = (uint8_t *)val;
> +        for ( i = 0; i < bytes; i++, idx++ )
> +        {
> +            if ( idx < 2 ) /* status, write 1 to clear. */
> +                reg[idx] &= ~(mask[i] & ptr[i]);
> +            else           /* enable */
> +                reg[idx] |= (mask[i] & ptr[i]);
> +        }
> +    }
> +
> +    return X86EMUL_OKAY;
>  }
> 
>  void hvm_ioreq_init(struct domain *d)
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-
> x86/hvm/domain.h
> index f34d784..f492a2b 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -87,6 +87,12 @@ struct hvm_domain {
>      } ioreq_server;
>      struct hvm_ioreq_server *default_ioreq_server;
> 
> +    /* PVH guests */
> +    struct {
> +        uint8_t pm1a[ACPI_PM1A_EVT_BLK_LEN];
> +        uint8_t gpe[ACPI_GPE0_BLK_LEN_V1];
> +    } acpi_io;
> +
>      /* Cached CF8 for guest PCI config cycles */
>      uint32_t                pci_cf8;
> 
> --
> 2.7.4



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 13:59         ` Jan Beulich
@ 2016-11-22 14:37           ` Boris Ostrovsky
  2016-11-22 15:07             ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 14:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau



On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>
>>
>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>>> 'xl vcpu-set').
>>>>>
>>>>> The primary reason for adding this call is because for PVH guests
>>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>>> unlike HVM guests that have qemu to perform these tasks.
>>>>
>>>> And the tool stack can't do this?
>>>
>>> For the avoidance of further misunderstandings: Of course likely
>>> not completely on its own, but by using a (to be introduced) more
>>> low level hypervisor interface (setting arbitrary GPE bits, with SCI
>>> raised as needed, or the SCI raising being another hypercall).
>>
>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>
>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>> XEN_DOMCTL_send_virq(virq)
>>
>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>
> Well, I don't see what set_avail_vcpus would be good for
> considering that during v2 review you've said that you need it
> just for the GPE modification and SCI sending.


Someone needs to provide the hypervisor with the new number of available 
(i.e. hot-plugged/unplugged) VCPUs, thus the name of the domctl. GPE/SCI 
manipulation is part of that update.

(I didn't say it during v2 review and I should have)


>
> And whether to split the other two, or simply send SCI whenever
> GPE bits have been modified depends on the specific requirements
> the ACPI spec puts on us. Or maybe it could always be folded,
> with (if necessary) SCI sending being controlled by a flag.

True. The spec says that all ACPI events generate an SCI.

-boris
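
The folded variant Jan describes, with SCI generation gated on the enable bits, could look roughly like this (a sketch, not actual Xen code):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Set GPE0 status bits and report whether an SCI should be raised:
 * an SCI is due when a status bit is set while its matching enable
 * bit is also set.
 */
static bool gpe0_set_status(uint8_t *sts, uint8_t en, uint8_t bits)
{
    *sts |= bits;
    return (*sts & en) != 0;
}
```

A caller-supplied flag could still suppress the SCI in the cases where the spec does not require one.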


* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 14:07       ` Jan Beulich
@ 2016-11-22 14:53         ` Boris Ostrovsky
  2016-11-22 15:13           ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 14:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 09:07 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 13:28, <boris.ostrovsky@oracle.com> wrote:
>
>>
>> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>
>>>>  /* Compatibility definitions for the default location (version 0). */
>>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>
>>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>> +
>>>> +/* Location of online VCPU bitmap. */
>>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>>> +
>>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>>> +#endif
>>>> +
>>>> +/* GPE0 bit set during CPU hotplug */
>>>> +#define XEN_GPE0_CPUHP_BIT           2
>>>> +
>>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>>
>>>>  #endif /* _IOREQ_H_ */
>>>
>>> I'm afraid there's been some misunderstanding here during the v2
>>> discussion: New hypervisor/tools only definitions don't need an
>>> additional interface version guard. It's instead the pre-existing
>>> ones which should be removed from the namespace by adding
>>> such a guard.
>>
>> We want to keep all of them now. Shouldn't then the interface check be
>> added after we move to Andrew's suggestion of dynamic IO ranges?
>
> No, we want them gone from the public interface for new
> consumers. The only valid consumer has always been the tool
> stack, just that this hadn't been properly done in the header.
> Hence the need to add __XEN__ / __XEN_TOOLS__ around new
> additions, and additionally interface version guards around
> existing ones.
>

pmtimer.c uses some of those macros, so how will wrapping an interface 
version check around the existing definitions continue to work once the 
interface version is updated, unless dynamic IO ranges are also added by 
that time?


>>> And of course _everything_ being added here anew needs to be
>>> XEN_ prefixed and guarded.
>>
>>
>> The ones that I didn't add the prefix to are the standard ACPI names. If
>> I did this would look like
>>
>> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
>> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>
> There's nothing standard here. Implementations are fine to specify
> a larger length iirc, at least for ACPI v2 and newer. If these values
> were all fixed, there wouldn't have been a need to specify them in
> FADT in the first place.

I meant that, unlike XEN_ACPI_CPU_MAP, which I added, these blocks are 
ACPI-standard names; I wasn't claiming the macros' values are standard.

Are you asking to rename just the newly introduced definitions 
(lengths/offsets) or existing address macros as well? Doing the latter 
will also require changes to at least pmtimer.c

-boris

>
>> And as for being guarded, don't you think it will make things difficult
>> to read? E.g.:
>>
>> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
>> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>> +#endif
>>
>> Repeated 3 times.
>
> Then don't repeat it three times and put the lengths and bit offsets
> after all the addresses.
>
> Jan
>


* Re: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
  2016-11-22 14:11   ` Paul Durrant
@ 2016-11-22 15:01   ` Jan Beulich
  2016-11-22 15:30     ` Boris Ostrovsky
  1 sibling, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 15:01 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -16,6 +16,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <xen/acpi.h>
>  #include <xen/config.h>
>  #include <xen/ctype.h>
>  #include <xen/init.h>

Please take the opportunity and remove the pointless xen/config.h
inclusion at once.

> @@ -1383,7 +1384,91 @@ static int hvm_access_cf8(
>  static int acpi_ioaccess(
>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
> -    return X86EMUL_UNHANDLEABLE;
> +    uint8_t *reg = NULL;
> +    const uint8_t *mask = NULL;
> +    bool is_cpu_map = false;
> +    struct domain *currd = current->domain;

const?

> +    const static uint8_t pm1a_mask[4] = {ACPI_BITMASK_GLOBAL_LOCK_STATUS, 0,
> +                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE, 0};
> +    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
> +                                         1U << XEN_GPE0_CPUHP_BIT, 0};

Hmm, funny, in someone else's patch I've recently seen the same.
Can we please stick to the more standard "storage type first"
ordering of declaration elements. After all const modifies the type,
and hence better stays together with it.

And then I'd like to have an explanation (in the commit message)
about the choice of the values for pm1a_mask. Plus you using
uint8_t here is at least odd, considering that this is about registers
consisting of two 16-bit halves. I'm not even certain the spec
permits these to be accessed with other than the specified
granularity.

Or wait - the literal 4-s here look bad too. Perhaps the two should
be combined into a variable of type
typeof(currd->arch.hvm_domain.acpi_io), so values and masks
really match up. Which would still seem to make it desirable for the
parts to be of type uint16_t, if permitted by the spec.

> +    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> +                 (ACPI_GPE0_BLK_LEN_V1 != 4));

Please split these into two, so that one of them triggering uniquely
identifies the offender. There's no code being generated for them,
so it doesn't matter how many there are. Perhaps it might even be
worth moving each into its respective case block below.

> +    ASSERT(!has_acpi_ff(currd));
> +
> +    switch ( port )
> +    {
> +    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +         ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1:
> +        reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +        mask = pm1a_mask;
> +        break;
> +
> +    case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +         ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1:
> +        reg = currd->arch.hvm_domain.acpi_io.gpe;
> +        mask = gpe0_mask;
> +        break;
> +
> +    case XEN_ACPI_CPU_MAP ...
> +         XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN - 1:
> +        is_cpu_map = true;

In order to make more obvious in the code below that reg and mask
can't be NULL, wouldn't it make sense to ditch this variable and
instead use checks of reg against NULL in the code further down?

> +        break;
> +
> +    default:
> +        return X86EMUL_UNHANDLEABLE;
> +    }
> +
> +    if ( bytes == 0 )
> +        return X86EMUL_OKAY;

Did you find a check like this in any other I/O port handler? It doesn't
seem to make sense to me.

> +    if ( dir == IOREQ_READ )
> +    {
> +        if ( is_cpu_map )
> +        {
> +            unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
> +
> +            /*
> +             * Clear bits that we are about to read to in case we
> +             * copy fewer than @bytes.
> +             */
> +            *val &= (~((1ULL << (bytes * 8)) - 1)) & 0xffffffff;

*val being of type uint32_t I understand neither the ULL suffix nor
the and-ing. How about

            if ( bytes < 4 )
                *val &= ~0U << (bytes * 8);

?
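
For bytes in 1..3 the two expressions agree; at bytes == 4 the original clears all of *val (which the subsequent memcpy overwrites anyway), while a shift by 32 in the simplified form would be undefined behaviour, hence the bytes < 4 guard. A quick check (assuming a 32-bit unsigned int):

```c
#include <stdint.h>

/* The original expression from the patch. */
static uint32_t mask_orig(unsigned int bytes)
{
    return (~((1ULL << (bytes * 8)) - 1)) & 0xffffffff;
}

/* The simplified form, valid only for bytes < 4. */
static uint32_t mask_simple(unsigned int bytes)
{
    return ~0U << (bytes * 8);
}
```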

> +            if ( ((currd->max_vcpus + 7) / 8) > first_byte )
> +            {
> +                memcpy(val, (uint8_t *)currd->avail_vcpus + first_byte,
> +                       min(bytes, ((currd->max_vcpus + 7) / 8) - first_byte));
> +            }

Stray braces.

> +        }
> +        else
> +            memcpy(val, &reg[port & 3], bytes);
> +    }
> +    else
> +    {
> +        unsigned int idx = port & 3;
> +        unsigned int i;
> +        uint8_t *ptr;

const

> +        if ( is_cpu_map )
> +            /*
> +             * CPU map is only read by DSDT's PRSC method and should never
> +             * be written by a guest.
> +             */
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        ptr = (uint8_t *)val;
> +        for ( i = 0; i < bytes; i++, idx++ )
> +        {
> +            if ( idx < 2 ) /* status, write 1 to clear. */
> +                reg[idx] &= ~(mask[i] & ptr[i]);
> +            else           /* enable */
> +                reg[idx] |= (mask[i] & ptr[i]);

Don't you mean mask[idx] in both cases?

Jan
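
The intended write semantics, with the mask indexed by register byte as suggested above, would be roughly (a sketch of the write path, not the final patch):

```c
#include <stdint.h>

/*
 * Guest write to a 4-byte ACPI event block: bytes 0..1 are status
 * (write 1 to clear), bytes 2..3 are enable (write 1 to set).
 * mask[] lists the bits a guest may touch in each register byte,
 * so it must be indexed by the register byte (idx), not by the
 * position within the guest's write (i).
 */
static void acpi_block_write(uint8_t reg[4], const uint8_t mask[4],
                             unsigned int idx, const uint8_t *src,
                             unsigned int bytes)
{
    unsigned int i;

    for ( i = 0; i < bytes; i++, idx++ )
    {
        if ( idx < 2 )             /* status: write-1-to-clear */
            reg[idx] &= ~(mask[idx] & src[i]);
        else                       /* enable: write-1-to-set */
            reg[idx] |= (mask[idx] & src[i]);
    }
}
```

With mask[i], an unaligned single-byte write to the enable half would be filtered through the status byte's mask, which is the bug being pointed out.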


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 14:37           ` Boris Ostrovsky
@ 2016-11-22 15:07             ` Jan Beulich
  2016-11-22 15:43               ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 15:07 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau

>>> On 22.11.16 at 15:37, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>>>> 'xl vcpu-set').
>>>>>>
>>>>>> The primary reason for adding this call is because for PVH guests
>>>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>>>> unlike HVM guests that have qemu to perform these tasks.
>>>>>
>>>>> And the tool stack can't do this?
>>>>
>>>> For the avoidance of further misunderstandings: Of course likely
>>>> not completely on its own, but by using a (to be introduced) more
>>>> low level hypervisor interface (setting arbitrary GPE bits, with SCI
>>>> raised as needed, or the SCI raising being another hypercall).
>>>
>>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>>
>>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>>> XEN_DOMCTL_send_virq(virq)
>>>
>>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>>
>> Well, I don't see what set_avail_vcpus would be good for
>> considering that during v2 review you've said that you need it
>> just for the GPE modification and SCI sending.
> 
> 
> Someone needs to provide the hypervisor with the new number of available 
> (i.e. hot-plugged/unplugged) VCPUs, thus the name of the domctl. GPE/SCI 
> manipulation are part of that update.
> 
> (I didn't say it during v2 review and I should have)

And I've just found that need while looking over patch 8. With
that I'm not sure the splitting would make sense, albeit we may
find it necessary to fiddle with other GPE bits down the road.

Jan



* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 14:53         ` Boris Ostrovsky
@ 2016-11-22 15:13           ` Jan Beulich
  2016-11-22 15:52             ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 15:13 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 22.11.16 at 15:53, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 09:07 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 13:28, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>>
>>>>>  /* Compatibility definitions for the default location (version 0). */
>>>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>>
>>>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>> +
>>>>> +/* Location of online VCPU bitmap. */
>>>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>>>> +
>>>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>>>> +#endif
>>>>> +
>>>>> +/* GPE0 bit set during CPU hotplug */
>>>>> +#define XEN_GPE0_CPUHP_BIT           2
>>>>> +
>>>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>>>
>>>>>  #endif /* _IOREQ_H_ */
>>>>
>>>> I'm afraid there's been some misunderstanding here during the v2
>>>> discussion: New hypervisor/tools only definitions don't need an
>>>> additional interface version guard. It's instead the pre-existing
>>>> ones which should be removed from the namespace by adding
>>>> such a guard.
>>>
>>> We want to keep all of them now. Shouldn't then the interface check be
>>> added after we move to Andrew's suggestion of dynamic IO ranges?
>>
>> No, we want them gone from the public interface for new
>> consumers. The only valid consumer has always been the tool
>> stack, just that this hadn't been properly done in the header.
>> Hence the need to add __XEN__ / __XEN_TOOLS__ around new
>> additions, and additionally interface version guards around
>> existing ones.
> 
> pmtimer.c uses some of those macros, so how will wrapping an interface 
> version check around existing definitions continue to work when the 
> interface version is updated, unless dynamic IO ranges are also added by 
> that time?

The question makes me suspect you still don't understand how things
should look. I'll try to sketch this out in a simplified manner:

#if defined(__XEN__) || defined(__XEN_TOOLS__) || interface-version < 4.9
existing #define-s
#endif

#if defined(__XEN__) || defined(__XEN_TOOLS__)
new #define-s
#endif

>>>> And of course _everything_ being added here anew needs to be
>>>> XEN_ prefixed and guarded.
>>>
>>>
>>> The ones that I didn't add the prefix to are the standard ACPI names. If
>>> I did this would look like
>>>
>>> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
>>> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>
>> There's nothing standard here. Implementations are fine to specify
>> a larger length iirc, at least for ACPI v2 and newer. If these values
>> were all fixed, there wouldn't have been a need to specify them in
>> FADT in the first place.
> 
> I meant that, unlike XEN_ACPI_CPU_MAP that I added, these blocks are 
> ACPI-standard, not macros' values.
> 
> Are you asking to rename just the newly introduced definitions 
> (lengths/offsets) or existing address macros as well? Doing the latter 
> will also require changes to at least pmtimer.c

Just the new ones - for backwards compatibility we can't rename
the existing ones. But then, considering they go inside XEN/tools
#ifdef-s, perhaps we don't even need to XEN_-prefix them. I hadn't
considered this aspect before, sorry.

Jan


* Re: [PATCH v3 09/11] events/x86: Define SCI virtual interrupt
  2016-11-21 21:00 ` [PATCH v3 09/11] events/x86: Define SCI virtual interrupt Boris Ostrovsky
@ 2016-11-22 15:25   ` Jan Beulich
  2016-11-22 15:57     ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 15:25 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> PVH guests do not have IOAPIC which typically generates an SCI. For
> those guests SCI will be provided as a virtual interrupt.
> 
> We also move VIRQ_MCA definition out of xen-mca.h to
> keep all x86-specific VIRQ_ARCH_* in one place.

While I appreciate the idea, you can't: xen-mca.h doesn't include
xen.h, and hence parties consuming xen-mca.h alone may then see
their build broken. Considering that duplicate identical #define-s are
fine, I think there's no way around ...

> --- a/xen/include/public/arch-x86/xen-mca.h
> +++ b/xen/include/public/arch-x86/xen-mca.h
> @@ -91,8 +91,6 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */

.. keeping this, while perhaps still ...

> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -295,6 +295,9 @@ struct xen_arch_domainconfig {
>  };
>  #endif
>  
> +#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
> +#define VIRQ_SCI VIRQ_ARCH_1 /* G. (PVH) ACPI interrupt */

... adding both here.

Jan



* Re: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-22 15:01   ` Jan Beulich
@ 2016-11-22 15:30     ` Boris Ostrovsky
  2016-11-22 16:05       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 15:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 10:01 AM, Jan Beulich wrote:

>
>> +    const static uint8_t pm1a_mask[4] = {ACPI_BITMASK_GLOBAL_LOCK_STATUS, 0,
>> +                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE, 0};
>> +    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
>> +                                         1U << XEN_GPE0_CPUHP_BIT, 0};
>
> Hmm, funny, in someone else's patch I've recently seen the same.
> Can we please stick to the more standard "storage type first"
> ordering of declaration elements. After all const modifies the type,
> and hence better stays together with it.
>
> And then I'd like to have an explanation (in the commit message)
> about the choice of the values for pm1a_mask.

Sure (Lock status/enable is required)


> Plus you using
> uint8_t here is at least odd, considering that this is about registers
> consisting of two 16-bit halves. I'm not even certain the spec
> permits these to be accessed with other than the specified
> granularity.


GPE registers can be 1 byte long and, in fact, that's how ACPICA 
accesses them.

PM1 is indeed 2 bytes long. I can make a check in the switch statement 
but I think I should leave the IOREQ_WRITE handling (at the bottom of 
this message) as it is for simplicity.


>
> Or wait - the literal 4-s here look bad too. Perhaps the two should
> be combined into a variable of type
> typeof(currd->arch.hvm_domain.acpi_io), so values and masks
> really match up. Which would still seem to make it desirable for the
> parts to be of type uint16_t, if permitted by the spec.

But I then assign these masks to uint8_t mask. Wouldn't it be better to 
explicitly keep those as byte-size values? Especially given how they are 
used in the IOREQ_WRITE case (below).


>> +    else
>> +    {
>> +        unsigned int idx = port & 3;
>> +        unsigned int i;
>> +        uint8_t *ptr;
>
> const
>
>> +        if ( is_cpu_map )
>> +            /*
>> +             * CPU map is only read by DSDT's PRSC method and should never
>> +             * be written by a guest.
>> +             */
>> +            return X86EMUL_UNHANDLEABLE;
>> +
>> +        ptr = (uint8_t *)val;
>> +        for ( i = 0; i < bytes; i++, idx++ )
>> +        {
>> +            if ( idx < 2 ) /* status, write 1 to clear. */
>> +                reg[idx] &= ~(mask[i] & ptr[i]);
>> +            else           /* enable */
>> +                reg[idx] |= (mask[i] & ptr[i]);
>
> Don't you mean mask[idx] in both cases?

Oh, right, of course.

-boris


* Re: [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event
  2016-11-21 21:00 ` [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
@ 2016-11-22 15:32   ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 15:32 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, wei.liu2, andrew.cooper3, ian.jackson,
	xen-devel, Julien Grall, roger.pau

>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -509,6 +509,22 @@ void vcpu_destroy(struct vcpu *v)
>          xfree(v->arch.pv_vcpu.trap_ctxt);
>  }
>  
> +int arch_update_avail_vcpus(struct domain *d)

I don't see a way for failure here, so perhaps the function could
return void for now?

> +{
> +    /*
> +     * For PVH guests we need to send an SCI and set enable/status
> +     * bits in GPE block.
> +     */
> +    if ( is_hvm_domain(d) && !has_acpi_ff(d) )
> +    {
> +        d->arch.hvm_domain.acpi_io.gpe[2] =
> +            d->arch.hvm_domain.acpi_io.gpe[0] = 1 << XEN_GPE0_CPUHP_BIT;

Literal array indexes. I think you want them to be calculated from
> XEN_GPE0_CPUHP_BIT (which btw then also applies to the static
mask variable in the other patch).

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -23,6 +23,14 @@
>  void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq);
>  
>  /*
> + * send_guest_global_virq: Notify guest via a global VIRQ.
> + *  @d:        domain to which virtual IRQ should be sent. First
> + *             online VCPU will be selected.
> + *  @virq:     Virtual IRQ number (VIRQ_*)
> + */
> +void send_guest_global_virq(struct domain *d, uint32_t virq);

Please take the opportunity and switch away from the pointless
use of a fixed-width type here - unsigned int will be just fine.

Jan



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 15:07             ` Jan Beulich
@ 2016-11-22 15:43               ` Boris Ostrovsky
  2016-11-22 16:01                 ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 15:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau



On 11/22/2016 10:07 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 15:37, <boris.ostrovsky@oracle.com> wrote:
>
>>
>> On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>>>
>>>>
>>>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>>>>> 'xl vcpu-set').
>>>>>>>
>>>>>>> The primary reason for adding this call is because for PVH guests
>>>>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>>>>> unlike HVM guests that have qemu to perform these tasks.
>>>>>>
>>>>>> And the tool stack can't do this?
>>>>>
>>>>> For the avoidance of further misunderstandings: Of course likely
>>>>> not completely on its own, but by using a (to be introduced) more
>>>>> low level hypervisor interface (setting arbitrary GPE bits, with SCI
>>>>> raised as needed, or the SCI raising being another hypercall).
>>>>
>>>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>>>
>>>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>>>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>>>> XEN_DOMCTL_send_virq(virq)
>>>>
>>>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>>>
>>> Well, I don't see what set_avail_vcpus would be good for
>>> considering that during v2 review you've said that you need it
>>> just for the GPE modification and SCI sending.
>>
>>
>> Someone needs to provide the hypervisor with the new number of available
>> (i.e. hot-plugged/unplugged) VCPUs, thus the name of the domctl. GPE/SCI
>> manipulation are part of that update.
>>
>> (I didn't say it during v2 review and I should have)
>
> And I've just found that need while looking over patch 8. With
> that I'm not sure the splitting would make sense, albeit we may
> find it necessary to fiddle with other GPE bits down the road.

Just to make sure we are talking about the same thing: 
XEN_DOMCTL_set_acpi_reg is sufficient for both GPE and the CPU map (or 
any other ACPI register, should the need arise).

-boris


* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 15:13           ` Jan Beulich
@ 2016-11-22 15:52             ` Boris Ostrovsky
  2016-11-22 16:02               ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 15:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 10:13 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 15:53, <boris.ostrovsky@oracle.com> wrote:
>
>>
>> On 11/22/2016 09:07 AM, Jan Beulich wrote:
>>>>>> On 22.11.16 at 13:28, <boris.ostrovsky@oracle.com> wrote:
>>>
>>>>
>>>> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>>>
>>>>>>  /* Compatibility definitions for the default location (version 0). */
>>>>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>>>
>>>>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>>> +
>>>>>> +/* Location of online VCPU bitmap. */
>>>>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>>>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>>>>> +
>>>>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>>>>> +#endif
>>>>>> +
>>>>>> +/* GPE0 bit set during CPU hotplug */
>>>>>> +#define XEN_GPE0_CPUHP_BIT           2
>>>>>> +
>>>>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>>>>
>>>>>>  #endif /* _IOREQ_H_ */
>>>>>
>>>>> I'm afraid there's been some misunderstanding here during the v2
>>>>> discussion: New hypervisor/tools only definitions don't need an
>>>>> additional interface version guard. It's instead the pre-existing
>>>>> ones which should be removed from the namespace by adding
>>>>> such a guard.
>>>>
>>>> We want to keep all of them now. Shouldn't then the interface check be
>>>> added after we move to Andrew's suggestion of dynamic IO ranges?
>>>
>>> No, we want them gone from the public interface for new
>>> consumers. The only valid consumer has always been the tool
>>> stack, just that this hadn't been properly done in the header.
>>> Hence the need to add __XEN__ / __XEN_TOOLS__ around new
>>> additions, and additionally interface version guards around
>>> existing ones.
>>
>> pmtimer.c uses some of those macros so how will wrapping interface
>> version check around existing definitions continue to work when
>> interface version is updated, unless dynamic IO ranges are also added by
>> that time?
>
> The question makes me suspect you still don't understand how things
> should look like. I'll try to sketch this out in a simplified manner:
>
> #if defined(__XEN__) || defined(__XEN_TOOLS__) || interface-version < 4.9
> existing #define-s
> #endif
>
> #if defined(__XEN__) || defined(__XEN_TOOLS__)
> new #define-s
> #endif

OK, now I understand what you want.

Unfortunately, I still don't understand why, until IO range reservation 
is implemented. Sorry for being obtuse.


>
>>>>> And of course _everything_ being added here anew needs to be
>>>>> XEN_ prefixed and guarded.
>>>>
>>>>
>>>> The ones that I didn't add the prefix to are the standard ACPI names. If
>>>> I did this would look like
>>>>
>>>> #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>> +#define XEN_ACPI_PM_TMR_BLK_LEN          0x04
>>>> +#define XEN_ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>
>>> There's nothing standard here. Implementations are fine to specify
>>> a larger length iirc, at least for ACPI v2 and newer. If these values
>>> were all fixed, there wouldn't have been a need to specify them in
>>> FADT in the first place.
>>
>> I meant that, unlike XEN_ACPI_CPU_MAP that I added, these blocks are
>> ACPI-standard, not macros' values.
>>
>> Are you asking to rename just the newly introduced definitions
>> (lengths/offsets) or existing address macros as well? Doing the latter
>> will also require changes to at least pmtimer.c
>
> Just the new ones - for backwards compatibility we can't rename
> the existing ones. But then, considering they go inside XEN/tools
> #ifdef-s, perhaps we don't even need to XEN_-prefix them. I hadn't
> considered this aspect before, sorry.

Then I'd rather keep the names the way they are in this series.

-boris


* Re: [PATCH v3 09/11] events/x86: Define SCI virtual interrupt
  2016-11-22 15:25   ` Jan Beulich
@ 2016-11-22 15:57     ` Boris Ostrovsky
  2016-11-22 16:07       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 15:57 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau



On 11/22/2016 10:25 AM, Jan Beulich wrote:
>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>> PVH guests do not have IOAPIC which typically generates an SCI. For
>> those guests SCI will be provided as a virtual interrupt.
>>
>> We also move VIRQ_MCA definition out of xen-mca.h to
>> keep all x86-specific VIRQ_ARCH_* in one place.
>
> While I appreciate the idea, you can't: xen-mca.h doesn't include
> xen.h, and hence parties consuming xen-mca.h alone may then see
> their build broken.

Right.

Any reason we can't include "xen.h" here?

-boris

> Considering that duplicate identical #define-s are
> fine, I think there's no way around ...
>
>> --- a/xen/include/public/arch-x86/xen-mca.h
>> +++ b/xen/include/public/arch-x86/xen-mca.h
>> @@ -91,8 +91,6 @@
>>
>>  #ifndef __ASSEMBLY__
>>
>> -#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
>
> .. keeping this, while perhaps still ...
>
>> --- a/xen/include/public/arch-x86/xen.h
>> +++ b/xen/include/public/arch-x86/xen.h
>> @@ -295,6 +295,9 @@ struct xen_arch_domainconfig {
>>  };
>>  #endif
>>
>> +#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
>> +#define VIRQ_SCI VIRQ_ARCH_1 /* G. (PVH) ACPI interrupt */
>
> ... adding both here.
>
> Jan
>


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 15:43               ` Boris Ostrovsky
@ 2016-11-22 16:01                 ` Jan Beulich
       [not found]                   ` <a4ac4c28-833b-df5f-ce34-1fa72f7c4cd2@oracle.com>
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 16:01 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	Daniel De Graaf, roger.pau

>>> On 22.11.16 at 16:43, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 10:07 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 15:37, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>>>>
>>>>>
>>>>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>>>>>> 'xl vcpu-set').
>>>>>>>>
>>>>>>>> The primary reason for adding this call is because for PVH guests
>>>>>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>>>>>> unlike HVM guests that have qemu to perform these tasks.
>>>>>>>
>>>>>>> And the tool stack can't do this?
>>>>>>
>>>>>> For the avoidance of further misunderstandings: Of course likely
>>>>>> not completely on its own, but by using a (to be introduced) more
>>>>>> low level hypervisor interface (setting arbitrary GPE bits, with SCI
>>>>>> raised as needed, or the SCI raising being another hypercall).
>>>>>
>>>>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>>>>
>>>>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>>>>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>>>>> XEN_DOMCTL_send_virq(virq)
>>>>>
>>>>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>>>>
>>>> Well, I don't see what set_avail_vcpus would be good for
>>>> considering that during v2 review you've said that you need it
>>>> just for the GPE modification and SCI sending.
>>>
>>>
>>> Someone needs to provide the hypervisor with the new number of available
>>> (i.e. hot-plugged/unplugged) VCPUs, thus the name of the domctl. GPE/SCI
>>> manipulation are part of that update.
>>>
>>> (I didn't say it during v2 review and I should have)
>>
>> And I've just found that need while looking over patch 8. With
>> that I'm not sure the splitting would make sense, albeit we may
>> find it necessary to fiddle with other GPE bits down the road.
> 
> Just to make sure we are talking about the same thing: 
> XEN_DOMCTL_set_acpi_reg is sufficient for both GPE and CPU map (or any 
> ACPI register should the need arise)

Well, my point is that as long as we continue to need
set_avail_vcpus (which I hear you say we do need), I'm not
sure the splitting would be helpful (minus the "albeit" part
above).

Jan



* Re: [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests
  2016-11-22 15:52             ` Boris Ostrovsky
@ 2016-11-22 16:02               ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 16:02 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 22.11.16 at 16:52, <boris.ostrovsky@oracle.com> wrote:

> 
> On 11/22/2016 10:13 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 15:53, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 09:07 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 13:28, <boris.ostrovsky@oracle.com> wrote:
>>>>
>>>>>
>>>>> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>>>>
>>>>>>>  /* Compatibility definitions for the default location (version 0). */
>>>>>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>>>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>>>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>>>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>>>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>>>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>>>>
>>>>>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>>>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>>>> +
>>>>>>> +/* Location of online VCPU bitmap. */
>>>>>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>>>>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>>>>>> +
>>>>>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>>>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>>>>>> +#endif
>>>>>>> +
>>>>>>> +/* GPE0 bit set during CPU hotplug */
>>>>>>> +#define XEN_GPE0_CPUHP_BIT           2
>>>>>>> +
>>>>>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>>>>>
>>>>>>>  #endif /* _IOREQ_H_ */
>>>>>>
>>>>>> I'm afraid there's been some misunderstanding here during the v2
>>>>>> discussion: New hypervisor/tools only definitions don't need an
>>>>>> additional interface version guard. It's instead the pre-existing
>>>>>> ones which should be removed from the namespace by adding
>>>>>> such a guard.
>>>>>
>>>>> We want to keep all of them now. Shouldn't then the interface check be
>>>>> added after we move to Andrew's suggestion of dynamic IO ranges?
>>>>
>>>> No, we want them gone from the public interface for new
>>>> consumers. The only valid consumer has always been the tool
>>>> stack, just that this hadn't been properly done in the header.
>>>> Hence the need to add __XEN__ / __XEN_TOOLS__ around new
>>>> additions, and additionally interface version guards around
>>>> existing ones.
>>>
>>> pmtimer.c uses some of those macros so how will wrapping interface
>>> version check around existing definitions continue to work when
>>> interface version is updated, unless dynamic IO ranges are also added by
>>> that time?
>>
>> The question makes me suspect you still don't understand how things
>> should look like. I'll try to sketch this out in a simplified manner:
>>
>> #if defined(__XEN__) || defined(__XEN_TOOLS__) || interface-version < 4.9
>> existing #define-s
>> #endif
>>
>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>> new #define-s
>> #endif
> 
> OK, now I understand what you want.
> 
> Unfortunately, I still don't understand why, until IO range reservation 
> is implemented. Sorry for being obtuse.

The "why" is the same as for everything else we want to hide from
the general public but make available to tools: being able to modify
it without going through versioning hoops.

Jan



* Re: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-22 15:30     ` Boris Ostrovsky
@ 2016-11-22 16:05       ` Jan Beulich
  2016-11-22 16:33         ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 16:05 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

>>> On 22.11.16 at 16:30, <boris.ostrovsky@oracle.com> wrote:
> On 11/22/2016 10:01 AM, Jan Beulich wrote:
>>
>>> +    const static uint8_t pm1a_mask[4] = {ACPI_BITMASK_GLOBAL_LOCK_STATUS, 0,
>>> +                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE, 0};
>>> +    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
>>> +                                         1U << XEN_GPE0_CPUHP_BIT, 0};
>>
>> Hmm, funny, in someone else's patch I've recently seen the same.
>> Can we please stick to the more standard "storage type first"
>> ordering of declaration elements. After all const modifies the type,
>> and hence better stays together with it.
>>
>> And then I'd like to have an explanation (in the commit message)
>> about the choice of the values for pm1a_mask.
> 
> Sure (Lock status/enable is required)

And nothing else is? And there's no other implementation
required for the lock bit?

>> Plus you using
>> uint8_t here is at least odd, considering that this is about registers
>> consisting of two 16-bit halves. I'm not even certain the spec
>> permits these to be accessed with other than the specified
>> granularity.
> 
> 
> GPE registers can be 1-byte long. And, in fact, that's how ACPICA 
> accesses it.
> 
> PM1 is indeed 2-byte long. I can make a check in the switch statement 
> but I think I should leave the IOREQ_WRITE handling (at the bottom of 
> this message) as it is for simplicity.
> 
> 
>> Or wait - the literal 4-s here look bad too. Perhaps the two should
>> be combined into a variable of type
>> typeof(currd->arch.hvm_domain.acpi_io), so values and masks
>> really match up. Which would still seem to make it desirable for the
>> parts to be of type uint16_t, if permitted by the spec.
> 
> But I then assign these masks to uint8_t mask. Wouldn't it be better to 
> explicitly keep those as byte-size values? Especially given how they are 
> used in IOREQ_WRITE case (below).

Well, maybe, namely considering that the GPE and PM1a parts
would otherwise end up different, further complicating the code.

Jan



* Re: [PATCH v3 09/11] events/x86: Define SCI virtual interrupt
  2016-11-22 15:57     ` Boris Ostrovsky
@ 2016-11-22 16:07       ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-22 16:07 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau

>>> On 22.11.16 at 16:57, <boris.ostrovsky@oracle.com> wrote:
> On 11/22/2016 10:25 AM, Jan Beulich wrote:
>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>> PVH guests do not have IOAPIC which typically generates an SCI. For
>>> those guests SCI will be provided as a virtual interrupt.
>>>
>>> We also move VIRQ_MCA definition out of xen-mca.h to
>>> keep all x86-specific VIRQ_ARCH_* in one place.
>>
>> While I appreciate the idea, you can't: xen-mca.h doesn't include
>> xen.h, and hence parties consuming xen-mca.h alone may then see
>> their build broken.
> 
> Right.
> 
> Any reason we can't include "xen.h" here?

I'm personally of the opinion that we shouldn't force unnecessary
header dependencies onto every consumer. But maybe everyone
else disagrees ...

Jan



* Re: [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests
  2016-11-22 16:05       ` Jan Beulich
@ 2016-11-22 16:33         ` Boris Ostrovsky
  0 siblings, 0 replies; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 16:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau



On 11/22/2016 11:05 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 16:30, <boris.ostrovsky@oracle.com> wrote:
>> On 11/22/2016 10:01 AM, Jan Beulich wrote:
>>>
>>>> +    const static uint8_t pm1a_mask[4] = {ACPI_BITMASK_GLOBAL_LOCK_STATUS,
>> 0,
>>>> +                                         ACPI_BITMASK_GLOBAL_LOCK_ENABLE,
>> 0};
>>>> +    const static uint8_t gpe0_mask[4] = {1U << XEN_GPE0_CPUHP_BIT, 0,
>>>> +                                         1U << XEN_GPE0_CPUHP_BIT, 0};
>>>
>>> Hmm, funny, in someone else's patch I've recently seen the same.
>>> Can we please stick to the more standard "storage type first"
>>> ordering of declaration elements. After all const modifies the type,
>>> and hence better stays together with it.
>>>
>>> And then I'd like to have an explanation (in the commit message)
>>> about the choice of the values for pm1a_mask.
>>
>> Sure (Lock status/enable is required)
>
> And nothing else is? And there's no other implementation
> required for the lock bit?

The other part is the global lock itself, which is part of the FACS that
we allocate in build.c.

-boris


* Re: [PATCH v3 06/11] acpi: PVH guests need _E02 method
  2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
  2016-11-22  9:13   ` Jan Beulich
@ 2016-11-22 20:20   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 51+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-11-22 20:20 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, jbeulich, roger.pau

On Mon, Nov 21, 2016 at 04:00:42PM -0500, Boris Ostrovsky wrote:
> This is the method that will get invoked on an SCI.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  tools/libacpi/mk_dsdt.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
> index 6da24fa..6197692 100644
> --- a/tools/libacpi/mk_dsdt.c
> +++ b/tools/libacpi/mk_dsdt.c
> @@ -282,11 +282,6 @@ int main(int argc, char **argv)
>  
>      pop_block();
>  
> -    if (dm_version == QEMU_NONE) {
> -        pop_block();
> -        return 0;
> -    }
> -
>      /* Define GPE control method. */
>      push_block("Scope", "\\_GPE");
>      push_block("Method",
> @@ -295,6 +290,11 @@ int main(int argc, char **argv)
>      stmt("\\_SB.PRSC ()", NULL);
>      pop_block();
>      pop_block();
> +
> +    if (dm_version == QEMU_NONE) {
> +        pop_block();
> +        return 0;
> +    }
>      /**** Processor end ****/
>  
>  
> -- 
> 2.7.4
> 
> 

* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
       [not found]                   ` <a4ac4c28-833b-df5f-ce34-1fa72f7c4cd2@oracle.com>
@ 2016-11-22 23:47                     ` Boris Ostrovsky
  2016-11-23  8:09                       ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-22 23:47 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau



On 11/22/2016 11:25 AM, Boris Ostrovsky wrote:
>
>
> On 11/22/2016 11:01 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 16:43, <boris.ostrovsky@oracle.com> wrote:
>>
>>>
>>> On 11/22/2016 10:07 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 15:37, <boris.ostrovsky@oracle.com> wrote:
>>>>
>>>>>
>>>>> On 11/22/2016 08:59 AM, Jan Beulich wrote:
>>>>>>>>> On 22.11.16 at 13:34, <boris.ostrovsky@oracle.com> wrote:
>>>>>>
>>>>>>>
>>>>>>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>>>>>>>>>> On 22.11.16 at 11:31, <JBeulich@suse.com> wrote:
>>>>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>>>>>>>> This domctl is called when a VCPU is hot-(un)plugged to a
>>>>>>>>>> guest (via
>>>>>>>>>> 'xl vcpu-set').
>>>>>>>>>>
>>>>>>>>>> The primary reason for adding this call is because for PVH guests
>>>>>>>>>> the hypervisor needs to send an SCI and set GPE registers.
>>>>>>>>>> This is
>>>>>>>>>> unlike HVM guests that have qemu to perform these tasks.
>>>>>>>>>
>>>>>>>>> And the tool stack can't do this?
>>>>>>>>
>>>>>>>> For the avoidance of further misunderstandings: Of course likely
>>>>>>>> not completely on its own, but by using a (to be introduced) more
>>>>>>>> low level hypervisor interface (setting arbitrary GPE bits, with
>>>>>>>> SCI
>>>>>>>> raised as needed, or the SCI raising being another hypercall).
>>>>>>>
>>>>>>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>>>>>>
>>>>>>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>>>>>>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>>>>>>> XEN_DOMCTL_send_virq(virq)
>>>>>>>
>>>>>>> (with perhaps set_avail_vcpus folded into set_acpi_reg) ?
>>>>>>
>>>>>> Well, I don't see what set_avail_vcpus would be good for
>>>>>> considering that during v2 review you've said that you need it
>>>>>> just for the GPE modification and SCI sending.
>>>>>
>>>>>
>>>>> Someone needs to provide the hypervisor with the new number of
>>>>> available
>>>>> (i.e. hot-plugged/unplugged) VCPUs, thus the name of the domctl.
>>>>> GPE/SCI
>>>>> manipulation are part of that update.
>>>>>
>>>>> (I didn't say it during v2 review and I should have)
>>>>
>>>> And I've just found that need while looking over patch 8. With
>>>> that I'm not sure the splitting would make sense, albeit we may
>>>> find it necessary to fiddle with other GPE bits down the road.
>>>
>>> Just to make sure we are talking about the same thing:
>>> XEN_DOMCTL_set_acpi_reg is sufficient for both GPE and CPU map (or any
>>> ACPI register should the need arise)
>>
>> Well, my point is that as long as we continue to need
>> set_avail_vcpus (which I hear you say we do need), I'm not
>> sure the splitting would be helpful (minus the "albeit" part
>> above).
>
>
> So the downside of having set_avail is that if we ever find this need to
> touch ACPI registers we will be left with a useless (or at least
> redundant) domctl.
>
> Let me try to have set_acpi_reg and see if it looks good enough. If
> people don't like it then I'll go back to set_avail_vcpus.

(apparently I replied to Jan only. Resending to everyone)

I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
hypervisor side is almost the same as the ioreq handling that this 
series added in patch 8.

However, I now realized that this interface will not be available to PV 
guests (and it will only become available to HVM guests when we move 
hotplug from qemu to hypervisor). And it's x86-specific.

This means that PV guests will not know what the number of available 
VCPUs is and therefore we will not be able to enforce it. OTOH we don't 
know how to do that anyway since PV guests bring up all VCPUs and then 
offline them.

-boris


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-22 23:47                     ` Boris Ostrovsky
@ 2016-11-23  8:09                       ` Jan Beulich
  2016-11-23 13:33                         ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-23  8:09 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau

>>> On 23.11.16 at 00:47, <boris.ostrovsky@oracle.com> wrote:
> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
> perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
> hypervisor side is almost the same as the ioreq handling that this 
> series added in patch 8.

Why would there be multiple hypercalls needed? (I guess I may need
to see the prototype to understand.)

> However, I now realized that this interface will not be available to PV 
> guests (and it will only become available to HVM guests when we move 
> hotplug from qemu to hypervisor). And it's x86-specific.

As you make clear below, the PV aspect is likely a non-issue. But
why is this x86-specific? It's generic ACPI, isn't it?

Jan

> This means that PV guests will not know what the number of available 
> VCPUs is and therefore we will not be able to enforce it. OTOH we don't 
> know how to do that anyway since PV guests bring up all VCPUs and then 
> offline them.
> 
> -boris





* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-23  8:09                       ` Jan Beulich
@ 2016-11-23 13:33                         ` Boris Ostrovsky
  2016-11-23 13:58                           ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-23 13:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau

On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>> On 23.11.16 at 00:47, <boris.ostrovsky@oracle.com> wrote:
>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
>> hypervisor side is almost the same as the ioreq handling that this 
>> series added in patch 8.
> Why would there be multiple hypercalls needed? (I guess I may need
> to see the prototype to understand.)

The interface is

#define XEN_DOMCTL_acpi_access 81
struct xen_domctl_acpi_access {
    uint8_t rw;
    uint8_t bytes;
    uint16_t port;
    uint32_t val;
};

And so as an example, to add VCPU1 to already existing VCPU0:

/* Update the VCPU map*/
val = 3;
xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1/*bytes*/, &val);

/* Set event status in GPE block */
val= 1 << 2;
xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1/*bytes*/, &val);


If we want to support ACPI registers in memory space then we need to add
'uint8_t space' and extend val to uint64_t. We also may want to make val
a uint64_t to have fewer hypercalls for VCPU map updates. (We could, in
fact, pass a pointer to the map but I think a scalar is cleaner.)


>
>> However, I now realized that this interface will not be available to PV 
>> guests (and it will only become available to HVM guests when we move 
>> hotplug from qemu to hypervisor). And it's x86-specific.
> As you make clear below, the PV aspect is likely a non-issue. But
> why is this x86-specific? It's generic ACPI, isn't it?

Mostly because I don't know how ARM handles hotplug. I was told that ARM
does not use PRST; it uses PSCI, which I am not familiar with.

The interface is generic enough to be used by any architecture.

-boris

>
>
>> This means that PV guests will not know what the number of available 
>> VCPUs is and therefore we will not be able to enforce it. OTOH we don't 
>> know how to do that anyway since PV guests bring up all VCPUs and then 
>> offline them.
>>
>> -boris
>
>




* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-23 13:33                         ` Boris Ostrovsky
@ 2016-11-23 13:58                           ` Jan Beulich
  2016-11-23 14:16                             ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Beulich @ 2016-11-23 13:58 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau

>>> On 23.11.16 at 14:33, <boris.ostrovsky@oracle.com> wrote:
> On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>>> On 23.11.16 at 00:47, <boris.ostrovsky@oracle.com> wrote:
>>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
>>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
>>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
>>> hypervisor side is almost the same as the ioreq handling that this 
>>> series added in patch 8.
>> Why would there be multiple hypercalls needed? (I guess I may need
>> to see the prototype to understand.)
> 
> The interface is
> 
> #define XEN_DOMCTL_acpi_access 81
> struct xen_domctl_acpi_access {
>     uint8_t rw;
>     uint8_t bytes;
>     uint16_t port;
>     uint32_t val;
> };
> 
> And so as an example, to add VCPU1 to already existing VCPU0:
> 
> /* Update the VCPU map*/
> val = 3;
> xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1/*bytes*/, &val);
> 
> /* Set event status in GPE block */
> val= 1 << 2;
> xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1/*bytes*/, &val);
> 
> 
> If we want to support ACPI registers in memory space then we need to add
> 'uint8_t space' and extend val to uint64_t.

It's a domctl, so we can change it down the road, but of course it
would be nice if this was supporting a proper GAS (without access
width, I guess) from the beginning.

> We also may want to make val
> a uint64_t to have fewer hypercalls for VCPU map updates. (We could, in
> fact, pass a pointer to the map but I think a scalar is cleaner.)

Well, mostly re-using GAS layout, we'd allow for up to 255 bits
per operation. That might be fine for now, but the need for the
caller to break things up seems awkward to me, so I'd in fact
lean towards passing a handle.

Jan



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-23 13:58                           ` Jan Beulich
@ 2016-11-23 14:16                             ` Boris Ostrovsky
  2016-11-25 18:16                               ` Boris Ostrovsky
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-23 14:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau

On 11/23/2016 08:58 AM, Jan Beulich wrote:
>>>> On 23.11.16 at 14:33, <boris.ostrovsky@oracle.com> wrote:
>> On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>>>> On 23.11.16 at 00:47, <boris.ostrovsky@oracle.com> wrote:
>>>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with 
>>>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to 
>>>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the 
>>>> hypervisor side is almost the same as the ioreq handling that this 
>>>> series added in patch 8.
>>> Why would there be multiple hypercalls needed? (I guess I may need
>>> to see the prototype to understand.)
>> The interface is
>>
>> #define XEN_DOMCTL_acpi_access 81
>> struct xen_domctl_acpi_access {
>>     uint8_t rw;
>>     uint8_t bytes;
>>     uint16_t port;
>>     uint32_t val;
>> };
>>
>> And so as an example, to add VCPU1 to already existing VCPU0:
>>
>> /* Update the VCPU map*/
>> val = 3;
>> xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1/*bytes*/, &val);
>>
>> /* Set event status in GPE block */
>> val= 1 << 2;
>> xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1/*bytes*/, &val);
>>
>>
>> If we want to support ACPI registers in memory space then we need to add
>> 'uint8_t space' and extend val to uint64_t.

This obviously was meant to be 'port', not 'val'.

> It's a domctl, so we can change it down the road, but of course if
> would be nice if this was supporting a proper GAS (without access
> width I guess) from the beginning.
>
>> We also may want to make val
>> a uint64_t to have fewer hypercalls for VCPU map updates. (We could, in
>> fact, pass a pointer to the map but I think a scalar is cleaner.)
> Well, mostly re-using GAS layout, we'd allow for up to 255 bits
> per operation. That might be fine for now, but the need for the
> caller to break things up seems awkward to me, so I'd in fact
> lean towards passing a handle.

I haven't thought about representing this as a proper GAS. Then it looks
like we can have up to 256 bits worth of data and then pass the handle.
We may still need to make more than one call to read or write the full
VCPU map, but that should be an exceedingly rare event, e.g. when we add
more than 256 VCPUs at once. And since we can only have 128 VCPUs now we
don't need to support this in the toolstack right away.
Of course, this still requires two hypercalls --- one for map update and
one for GPE.

-boris



* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-23 14:16                             ` Boris Ostrovsky
@ 2016-11-25 18:16                               ` Boris Ostrovsky
  2016-11-28  7:59                                 ` Jan Beulich
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-25 18:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau



On 11/23/2016 09:16 AM, Boris Ostrovsky wrote:
> On 11/23/2016 08:58 AM, Jan Beulich wrote:
>>>>> On 23.11.16 at 14:33, <boris.ostrovsky@oracle.com> wrote:
>>> On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>>>>> On 23.11.16 at 00:47, <boris.ostrovsky@oracle.com> wrote:
>>>>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with
>>>>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to
>>>>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the
>>>>> hypervisor side is almost the same as the ioreq handling that this
>>>>> series added in patch 8.
>>>> Why would there be multiple hypercalls needed? (I guess I may need
>>>> to see the prototype to understand.)
>>> The interface is
>>>
>>> #define XEN_DOMCTL_acpi_access 81
>>> struct xen_domctl_acpi_access {
>>>     uint8_t rw;
>>>     uint8_t bytes;
>>>     uint16_t port;
>>>     uint32_t val;
>>> };
>>>
>>> And so as an example, to add VCPU1 to already existing VCPU0:
>>>
>>> /* Update the VCPU map*/
>>> val = 3;
>>> xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1/*bytes*/, &val);
>>>
>>> /* Set event status in GPE block */
>>> val= 1 << 2;
>>> xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1/*bytes*/, &val);
>>>
>>>
>>> If we want to support ACPI registers in memory space then we need to add
>>> 'uint8_t space' and extend val to uint64_t.
>
> This obviously was meant to be 'port', not 'val'.
>
>> It's a domctl, so we can change it down the road, but of course it
>> would be nice if this was supporting a proper GAS (without access
>> width, I guess) from the beginning.
>>
>>> We also may want to make val
>>> a uint64_t to have fewer hypercalls for VCPU map updates. (We could, in
>>> fact, pass a pointer to the map but I think a scalar is cleaner.)
>> Well, mostly re-using GAS layout, we'd allow for up to 255 bits
>> per operation. That might be fine for now, but the need for the
>> caller to break things up seems awkward to me, so I'd in fact
>> lean towards passing a handle.
>
> I haven't thought about representing this as proper GAS. Then it looks
> like we can have up to 256 bits worth of data and then pass the handle.
> We may still need to make more than one call to read or write the full
> VCPU map, but that should be an exceedingly rare event, e.g. when we add
> more than 256 VCPUs at once. And since we can only have 128 VCPUs now we
> don't need to support this in the toolstack right away.
> Of course, this still requires two hypercalls --- one for map update and
> one for GPE.


BTW, another thing I am afraid I will have to do is extract 
hvm_hw_timer_pm from PMTState and move it up to hvm_domain.

It already has pm1a_{sts|en} and this series adds its own version of 
that register, which doesn't make sense. Instead, both will refer to
hvm_domain.acpi (which I will add, including tmr_val).

-boris


* Re: [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
  2016-11-25 18:16                               ` Boris Ostrovsky
@ 2016-11-28  7:59                                 ` Jan Beulich
  0 siblings, 0 replies; 51+ messages in thread
From: Jan Beulich @ 2016-11-28  7:59 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, Andrew Cooper, ian.jackson, xen-devel, Paul Durrant, roger.pau

>>> On 25.11.16 at 19:16, <boris.ostrovsky@oracle.com> wrote:
> BTW, another thing I am afraid I will have to do is extract 
> hvm_hw_timer_pm from PMTState and move it up to hvm_domain.
> 
> It already has pm1a_{sts|en} and this series adds its own version of 
> that register, which doesn't make sense. Instead, both will refer to
> hvm_domain.acpi (which I will add, including tmr_val).

Indeed, these need to be folded for consistent behavior.

Jan



* Re: [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-22 14:08       ` Jan Beulich
@ 2016-11-28 15:16         ` Boris Ostrovsky
  2016-11-28 15:48           ` Roger Pau Monné
  0 siblings, 1 reply; 51+ messages in thread
From: Boris Ostrovsky @ 2016-11-28 15:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	roger.pau

On 11/22/2016 09:08 AM, Jan Beulich wrote:
>>>> On 22.11.16 at 13:38, <boris.ostrovsky@oracle.com> wrote:
>> On 11/22/2016 06:34 AM, Jan Beulich wrote:
>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
>>>> PVH guests will have ACPI accesses emulated by the hypervisor
>>>> as opposed to QEMU (as is the case for HVM guests)
>>>>
>>>> Support for IOREQ server emulation of CPU hotplug is indicated
>>>> by XEN_X86_EMU_IOREQ_CPUHP flag.
>>>>
>>>> Logic for the handler will be provided by a later patch.
>>>>
>>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>> ---
>>>> CC: Paul Durrant <paul.durrant@citrix.com>
>>>> ---
>>>> Changes in v3:
>>>> * acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
>>>> * Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
>>>>   with corresponding has_*())
>>> Except in the description above.
>>>
>>> Also, while I'm fine with the flag rename, has_acpi_ff() looks wrong
>>> (or at least misleading) to me: Both HVM and PVHv2 have fixed
>>> function hardware emulated, they only differ in who the emulator
>>> is. Reduced hardware, if we would emulate such down the road,
>>> otoh might then indeed come without. So how about one of
>>> has_xen_acpi_ff() or has_dm_acpi_ff()?
>> I think the latter is better. But then to keep flag names in sync with 
>> has_*() macros, how about XEN_X86_EMU_DM_ACPI_FF?
> Not sure - the flag name, as said, seemed fine to me before, and I
> don't overly care about the two names fully matching up. Maybe
> others here have an opinion?

Any preferences? Roger?

-boris



* Re: [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
  2016-11-28 15:16         ` Boris Ostrovsky
@ 2016-11-28 15:48           ` Roger Pau Monné
  0 siblings, 0 replies; 51+ messages in thread
From: Roger Pau Monné @ 2016-11-28 15:48 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, Paul Durrant,
	Jan Beulich

On Mon, Nov 28, 2016 at 10:16:33AM -0500, Boris Ostrovsky wrote:
> On 11/22/2016 09:08 AM, Jan Beulich wrote:
> >>>> On 22.11.16 at 13:38, <boris.ostrovsky@oracle.com> wrote:
> >> On 11/22/2016 06:34 AM, Jan Beulich wrote:
> >>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@oracle.com> wrote:
> >>>> PVH guests will have ACPI accesses emulated by the hypervisor
> >>>> as opposed to QEMU (as is the case for HVM guests)
> >>>>
> >>>> Support for IOREQ server emulation of CPU hotplug is indicated
> >>>> by XEN_X86_EMU_IOREQ_CPUHP flag.
> >>>>
> >>>> Logic for the handler will be provided by a later patch.
> >>>>
> >>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >>>> ---
> >>>> CC: Paul Durrant <paul.durrant@citrix.com>
> >>>> ---
> >>>> Changes in v3:
> >>>> * acpi_ioaccess() returns X86EMUL_UNHANDLEABLE
> >>>> * Renamed XEN_X86_EMU_IOREQ_CPUHP to XEN_X86_EMU_ACPI_FF (together
> >>>>   with corresponding has_*())
> >>> Except in the description above.
> >>>
> >>> Also, while I'm fine with the flag rename, has_acpi_ff() looks wrong
> >>> (or at least misleading) to me: Both HVM and PVHv2 have fixed
> >>> function hardware emulated, they only differ in who the emulator
> >>> is. Reduced hardware, if we would emulate such down the road,
> >>> otoh might then indeed come without. So how about one of
> >>> has_xen_acpi_ff() or has_dm_acpi_ff()?
> >> I think the latter is better. But then to keep flag names in sync with 
> >> has_*() macros, how about XEN_X86_EMU_DM_ACPI_FF?
> > Not sure - the flag name, as said, seemed fine to me before, and I
> > don't overly care about the two names fully matching up. Maybe
> > others here have an opinion?
> 
> Any preferences? Roger?

I don't really have a strong opinion; so far we have named all the macros after
s/XEN_X86_EMU//, so if the macro is named has_dm_acpi_ff the mask should be
XEN_X86_EMU_DM_ACPI_FF (but again this is more for coherence than anything
else).

Roger.


end of thread, other threads:[~2016-11-28 15:48 UTC | newest]

Thread overview: 51+ messages
2016-11-21 21:00 [PATCH v3 00/11] PVH VCPU hotplug support Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus Boris Ostrovsky
2016-11-22 10:31   ` Jan Beulich
2016-11-22 10:39     ` Jan Beulich
2016-11-22 12:34       ` Boris Ostrovsky
2016-11-22 13:59         ` Jan Beulich
2016-11-22 14:37           ` Boris Ostrovsky
2016-11-22 15:07             ` Jan Beulich
2016-11-22 15:43               ` Boris Ostrovsky
2016-11-22 16:01                 ` Jan Beulich
     [not found]                   ` <a4ac4c28-833b-df5f-ce34-1fa72f7c4cd2@oracle.com>
2016-11-22 23:47                     ` Boris Ostrovsky
2016-11-23  8:09                       ` Jan Beulich
2016-11-23 13:33                         ` Boris Ostrovsky
2016-11-23 13:58                           ` Jan Beulich
2016-11-23 14:16                             ` Boris Ostrovsky
2016-11-25 18:16                               ` Boris Ostrovsky
2016-11-28  7:59                                 ` Jan Beulich
2016-11-22 12:19     ` Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests Boris Ostrovsky
2016-11-22 10:37   ` Jan Beulich
2016-11-22 12:28     ` Boris Ostrovsky
2016-11-22 14:07       ` Jan Beulich
2016-11-22 14:53         ` Boris Ostrovsky
2016-11-22 15:13           ` Jan Beulich
2016-11-22 15:52             ` Boris Ostrovsky
2016-11-22 16:02               ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 03/11] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 04/11] acpi: Make pmtimer optional in FADT Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 05/11] acpi: Power and Sleep ACPI buttons are not emulated for PVH guests Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 06/11] acpi: PVH guests need _E02 method Boris Ostrovsky
2016-11-22  9:13   ` Jan Beulich
2016-11-22 20:20   ` Konrad Rzeszutek Wilk
2016-11-21 21:00 ` [PATCH v3 07/11] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
2016-11-22 11:34   ` Jan Beulich
2016-11-22 12:38     ` Boris Ostrovsky
2016-11-22 14:08       ` Jan Beulich
2016-11-28 15:16         ` Boris Ostrovsky
2016-11-28 15:48           ` Roger Pau Monné
2016-11-21 21:00 ` [PATCH v3 08/11] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
2016-11-22 14:11   ` Paul Durrant
2016-11-22 15:01   ` Jan Beulich
2016-11-22 15:30     ` Boris Ostrovsky
2016-11-22 16:05       ` Jan Beulich
2016-11-22 16:33         ` Boris Ostrovsky
2016-11-21 21:00 ` [PATCH v3 09/11] events/x86: Define SCI virtual interrupt Boris Ostrovsky
2016-11-22 15:25   ` Jan Beulich
2016-11-22 15:57     ` Boris Ostrovsky
2016-11-22 16:07       ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 10/11] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
2016-11-22 15:32   ` Jan Beulich
2016-11-21 21:00 ` [PATCH v3 11/11] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
