* [PATCH v6 00/12] PVH VCPU hotplug support
@ 2017-01-03 14:04 Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
                   ` (11 more replies)
  0 siblings, 12 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

This series adds support for ACPI-based VCPU hotplug for unprivileged
PVH guests.

Main changes in v6:
* Generate SCI on VCPU map update by domctl
* Simplify public domctl structure
* Make ACPI registers accessible only by guest (and not by domctl)
* Update VCPU map under lock
* Fix pointer update in xc_acpi_access()

Boris Ostrovsky (12):
  domctl: Add XEN_DOMCTL_acpi_access
  x86/save: public/arch-x86/hvm/save.h is available to hypervisor and
    tools only
  pvh/acpi: Install handlers for ACPI-related PVH IO accesses
  pvh/acpi: Handle ACPI accesses for PVH guests
  x86/domctl: Handle ACPI access from domctl
  events/x86: Define SCI virtual interrupt
  pvh: Send an SCI on VCPU hotplug event
  libxl: Update xenstore on VCPU hotplug for all guest types
  tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug
  pvh: Set online VCPU map to avail_vcpus
  pvh/acpi: Save ACPI registers for PVH guests
  docs: Describe PVHv2's VCPU hotplug procedure

 docs/misc/hvmlite.markdown             |  13 ++
 tools/flask/policy/modules/dom0.te     |   2 +-
 tools/flask/policy/modules/xen.if      |   4 +-
 tools/libxc/include/xenctrl.h          |  20 +++
 tools/libxc/xc_domain.c                |  41 ++++++
 tools/libxl/libxl.c                    |  10 +-
 tools/libxl/libxl_arch.h               |   4 +
 tools/libxl/libxl_arm.c                |   6 +
 tools/libxl/libxl_dom.c                |  10 ++
 tools/libxl/libxl_x86.c                |  11 ++
 tools/libxl/libxl_x86_acpi.c           |   6 +-
 xen/arch/x86/domctl.c                  |   7 +
 xen/arch/x86/hvm/Makefile              |   1 +
 xen/arch/x86/hvm/acpi.c                | 226 +++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c                 |   2 +
 xen/arch/x86/hvm/pmtimer.c             |   9 ++
 xen/common/domain.c                    |   1 +
 xen/common/domctl.c                    |   5 +
 xen/common/event_channel.c             |   7 +-
 xen/include/asm-x86/domain.h           |   2 +
 xen/include/asm-x86/hvm/domain.h       |   5 +
 xen/include/public/arch-x86/hvm/save.h |  25 +++-
 xen/include/public/arch-x86/xen.h      |   7 +-
 xen/include/public/domctl.h            |  17 +++
 xen/include/xen/domain.h               |   1 +
 xen/include/xen/event.h                |   8 ++
 xen/include/xen/sched.h                |   3 +
 xen/xsm/flask/hooks.c                  |   3 +
 xen/xsm/flask/policy/access_vectors    |   2 +
 29 files changed, 445 insertions(+), 13 deletions(-)
 create mode 100644 xen/arch/x86/hvm/acpi.c

-- 
2.7.4



* [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 18:21   ` Daniel De Graaf
  2017-01-03 20:51   ` Konrad Rzeszutek Wilk
  2017-01-03 14:04 ` [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only Boris Ostrovsky
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	Daniel De Graaf, roger.pau

This domctl will allow the toolstack to read and write certain
ACPI registers. It will be available to both x86 and ARM but is
initially implemented only for x86.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
Changes in v6:
* Fold xen_acpi_access into xen_domctl_acpi_access
* Some new error return values
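
For reference, once this domctl is wired into libxc (patch 9 of this
series) a minimal caller-side sketch of reading a single byte from the
guest's ACPI I/O space looks roughly like the below. This is
illustrative only: it assumes the libxc-internal helpers used
throughout xc_domain.c (DECLARE_DOMCTL, do_domctl and the hypercall
bounce macros) and omits most error handling.

    #include "xc_private.h"   /* libxc-internal helpers */

    static int acpi_read_byte(xc_interface *xch, domid_t domid,
                              unsigned long address, uint8_t *byte)
    {
        DECLARE_DOMCTL;
        DECLARE_HYPERCALL_BOUNCE(byte, 1, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
        int ret;

        if ( xc_hypercall_bounce_pre(xch, byte) )
            return -1;

        memset(&domctl, 0, sizeof(domctl));
        domctl.cmd = XEN_DOMCTL_acpi_access;
        domctl.domain = domid;
        domctl.u.acpi_access.rw = XEN_DOMCTL_ACPI_READ;
        domctl.u.acpi_access.space_id = XEN_ACPI_SYSTEM_IO;
        domctl.u.acpi_access.width = 1;
        domctl.u.acpi_access.address = address;
        set_xen_guest_handle(domctl.u.acpi_access.val, byte);

        ret = do_domctl(xch, &domctl);

        xc_hypercall_bounce_post(xch, byte);
        return ret;
    }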


 tools/flask/policy/modules/dom0.te  |  2 +-
 tools/flask/policy/modules/xen.if   |  4 ++--
 xen/arch/x86/domctl.c               |  7 +++++++
 xen/arch/x86/hvm/Makefile           |  1 +
 xen/arch/x86/hvm/acpi.c             | 24 ++++++++++++++++++++++++
 xen/include/asm-x86/hvm/domain.h    |  3 +++
 xen/include/public/domctl.h         | 17 +++++++++++++++++
 xen/xsm/flask/hooks.c               |  3 +++
 xen/xsm/flask/policy/access_vectors |  2 ++
 9 files changed, 60 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/x86/hvm/acpi.c

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index d0a4d91..475d446 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -39,7 +39,7 @@ allow dom0_t dom0_t:domain {
 };
 allow dom0_t dom0_t:domain2 {
 	set_cpuid gettsc settsc setscheduler set_max_evtchn set_vnumainfo
-	get_vnumainfo psr_cmt_op psr_cat_op
+	get_vnumainfo psr_cmt_op psr_cat_op acpi_access
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 1aca75d..42a8cc2 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,7 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_cat_op soft_reset };
+			psr_cmt_op psr_cat_op soft_reset acpi_access };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -85,7 +85,7 @@ define(`manage_domain', `
 			getaddrsize pause unpause trigger shutdown destroy
 			setaffinity setdomainmaxmem getscheduler resume
 			setpodtarget getpodtarget };
-    allow $1 $2:domain2 set_vnumainfo;
+    allow $1 $2:domain2 { set_vnumainfo acpi_access };
 ')
 
 # migrate_domain_out(priv, target)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ab9ad39..2904e49 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1425,6 +1425,13 @@ long arch_do_domctl(
         }
         break;
 
+    case XEN_DOMCTL_acpi_access:
+        if ( !is_hvm_domain(d) )
+            ret = -ENODEV;
+        else
+            ret = hvm_acpi_domctl_access(d, &domctl->u.acpi_access);
+        break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index f750d13..bae3244 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,6 +1,7 @@
 subdir-y += svm
 subdir-y += vmx
 
+obj-y += acpi.o
 obj-y += asid.o
 obj-y += emulate.o
 obj-y += hpet.o
diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
new file mode 100644
index 0000000..04901c1
--- /dev/null
+++ b/xen/arch/x86/hvm/acpi.c
@@ -0,0 +1,24 @@
+/* acpi.c: ACPI access handling
+ *
+ * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
+ */
+#include <xen/errno.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+
+
+int hvm_acpi_domctl_access(struct domain *d,
+                           const struct xen_domctl_acpi_access *access)
+{
+    return -ENOSYS;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index d55d180..52f934a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -166,6 +166,9 @@ struct hvm_domain {
 
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
 
+int hvm_acpi_domctl_access(struct domain *d,
+                           const struct xen_domctl_acpi_access *access);
+
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 85cbb7c..5978664 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1145,6 +1145,21 @@ struct xen_domctl_psr_cat_op {
 typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
 
+struct xen_domctl_acpi_access {
+#define XEN_DOMCTL_ACPI_READ   0
+#define XEN_DOMCTL_ACPI_WRITE  1
+    uint8_t            rw;                 /* IN: Read or write */
+#define XEN_ACPI_SYSTEM_MEMORY 0
+#define XEN_ACPI_SYSTEM_IO     1
+    uint8_t            space_id;           /* IN: Address space */
+    uint8_t            width;              /* IN: Access size (bytes) */
+    uint8_t            pad[5];
+    uint64_aligned_t   address;            /* IN: 64-bit address of register */
+    XEN_GUEST_HANDLE_64(void) val;         /* IN/OUT: data */
+};
+typedef struct xen_domctl_acpi_access xen_domctl_acpi_access_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_acpi_access_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1222,6 +1237,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_cat_op                    78
 #define XEN_DOMCTL_soft_reset                    79
+#define XEN_DOMCTL_acpi_access                   80
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1284,6 +1300,7 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_cat_op        psr_cat_op;
+        struct xen_domctl_acpi_access       acpi_access;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 040a251..c1ba42e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -748,6 +748,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_soft_reset:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SOFT_RESET);
 
+    case XEN_DOMCTL_acpi_access:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__ACPI_ACCESS);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 92e6da9..e40258e 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -246,6 +246,8 @@ class domain2
     mem_sharing
 # XEN_DOMCTL_psr_cat_op
     psr_cat_op
+# XEN_DOMCTL_acpi_access
+    acpi_access
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.7.4



* [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 16:55   ` Jan Beulich
  2017-01-03 14:04 ` [PATCH v6 03/12] pvh/acpi: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

No one else needs to include it since it is only useful to code
that can make domctl calls, and the public domctl.h can only be
included by the toolstack or the hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
New in v6 (not required by the series).

Q: Should include/public/hvm/save.h have the same guards?

 xen/include/public/arch-x86/hvm/save.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 8d73b51..ee0a3f7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -26,6 +26,8 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_X86_H__
 #define __XEN_PUBLIC_HVM_SAVE_X86_H__
 
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
 /* 
  * Save/restore header: general info about the save file. 
  */
@@ -632,6 +634,8 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+#endif /* __XEN__ || __XEN_TOOLS__ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
-- 
2.7.4



* [PATCH v6 03/12] pvh/acpi: Install handlers for ACPI-related PVH IO accesses
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 04/12] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

PVH guests will have their ACPI accesses emulated by the hypervisor
as opposed to QEMU (as is the case for HVM guests). This patch installs
handlers for accesses to PM1A, GPE0 (which is added to struct
hvm_hw_acpi) and the VCPU map. The logic for these handlers will be
provided by a later patch.

Whether or not the handlers need to be installed is decided based on
the XEN_X86_EMU_ACPI_DM_FF flag, which indicates whether ACPI
emulation is implemented in QEMU.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* Drop ifdef guards in include/public/arch-x86/hvm/save.h
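
As a point of reference, the new has_acpi_dm_ff() test introduced below
simply keys off this flag; a hypothetical helper expressing the decision
(not part of the patch, names other than the flag are made up) would be:

    /* Xen installs its own PM1a/GPE0/VCPU-map handlers only when no
     * device model emulates the ACPI fixed-function hardware. */
    static bool xen_emulates_acpi(uint32_t emulation_flags)
    {
        return !(emulation_flags & XEN_X86_EMU_ACPI_DM_FF);
    }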

 xen/arch/x86/hvm/acpi.c                | 31 +++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c                 |  2 ++
 xen/include/asm-x86/domain.h           |  2 ++
 xen/include/asm-x86/hvm/domain.h       |  1 +
 xen/include/public/arch-x86/hvm/save.h | 21 +++++++++++++++++++--
 xen/include/public/arch-x86/xen.h      |  4 +++-
 6 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
index 04901c1..15a9a0e 100644
--- a/xen/arch/x86/hvm/acpi.c
+++ b/xen/arch/x86/hvm/acpi.c
@@ -6,6 +6,7 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <public/arch-x86/xen.h>
 
 int hvm_acpi_domctl_access(struct domain *d,
                            const struct xen_domctl_acpi_access *access)
@@ -13,6 +14,36 @@ int hvm_acpi_domctl_access(struct domain *d,
     return -ENOSYS;
 }
 
+static int acpi_cpumap_guest_access(int dir, unsigned int port,
+                                    unsigned int bytes, uint32_t *val)
+{
+    return X86EMUL_UNHANDLEABLE;
+}
+
+static int acpi_guest_access(int dir, unsigned int port,
+                             unsigned int bytes, uint32_t *val)
+{
+    return X86EMUL_UNHANDLEABLE;
+}
+
+void hvm_acpi_init(struct domain *d)
+{
+    if ( has_acpi_dm_ff(d) )
+        return;
+
+    register_portio_handler(d, XEN_ACPI_CPU_MAP,
+                            XEN_ACPI_CPU_MAP_LEN, acpi_cpumap_guest_access);
+
+    register_portio_handler(d, ACPI_GPE0_BLK_ADDRESS_V1,
+                            sizeof(d->arch.hvm_domain.acpi.gpe0_sts) +
+                            sizeof(d->arch.hvm_domain.acpi.gpe0_en),
+                            acpi_guest_access);
+    register_portio_handler(d, ACPI_PM1A_EVT_BLK_ADDRESS_V1,
+                            sizeof(d->arch.hvm_domain.acpi.pm1a_sts) +
+                            sizeof(d->arch.hvm_domain.acpi.pm1a_en),
+                            acpi_guest_access);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 708f474..d0dd9fc 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -667,6 +667,8 @@ int hvm_domain_initialise(struct domain *d)
 
     hvm_ioreq_init(d);
 
+    hvm_acpi_init(d);
+
     if ( is_pvh_domain(d) )
     {
         register_portio_handler(d, 0, 0x10003, handle_pvh_io);
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 95762cf..233233a 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -426,6 +426,8 @@ struct arch_domain
 #define has_vvga(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_VGA))
 #define has_viommu(d)      (!!((d)->arch.emulation_flags & XEN_X86_EMU_IOMMU))
 #define has_vpit(d)        (!!((d)->arch.emulation_flags & XEN_X86_EMU_PIT))
+#define has_acpi_dm_ff(d)  (!!((d)->arch.emulation_flags & \
+                               XEN_X86_EMU_ACPI_DM_FF))
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 52f934a..07815b6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -166,6 +166,7 @@ struct hvm_domain {
 
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
 
+void hvm_acpi_init(struct domain *d);
 int hvm_acpi_domctl_access(struct domain *d,
                            const struct xen_domctl_acpi_access *access);
 
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index ee0a3f7..f47fd21 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -529,14 +529,31 @@ DECLARE_HVM_SAVE_TYPE(HPET, 12, struct hvm_hw_hpet);
 /*
  * PM timer
  */
-
 struct hvm_hw_pmtimer {
     uint32_t tmr_val;   /* PM_TMR_BLK.TMR_VAL: 32bit free-running counter */
     uint16_t pm1a_sts;  /* PM1a_EVT_BLK.PM1a_STS: status register */
     uint16_t pm1a_en;   /* PM1a_EVT_BLK.PM1a_EN: enable register */
+    uint16_t gpe0_sts;
+    uint16_t gpe0_en;
+};
+
+struct hvm_hw_pmtimer_compat {
+    uint32_t tmr_val;
+    uint16_t pm1a_sts;
+    uint16_t pm1a_en;
 };
 
-DECLARE_HVM_SAVE_TYPE(PMTIMER, 13, struct hvm_hw_pmtimer);
+static inline int _hvm_hw_fix_pmtimer(void *h, uint32_t size)
+{
+    struct hvm_hw_pmtimer *acpi = (struct hvm_hw_pmtimer *)h;
+
+    if ( size == sizeof(struct hvm_hw_pmtimer_compat) )
+        acpi->gpe0_sts = acpi->gpe0_en = 0;
+    return 0;
+}
+
+DECLARE_HVM_SAVE_TYPE_COMPAT(PMTIMER, 13, struct hvm_hw_pmtimer,        \
+                             struct hvm_hw_pmtimer_compat, _hvm_hw_fix_pmtimer);
 
 /*
  * MTRR MSRs
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 12f719d..2565acd 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -283,12 +283,14 @@ struct xen_arch_domainconfig {
 #define XEN_X86_EMU_IOMMU           (1U<<_XEN_X86_EMU_IOMMU)
 #define _XEN_X86_EMU_PIT            8
 #define XEN_X86_EMU_PIT             (1U<<_XEN_X86_EMU_PIT)
+#define _XEN_X86_EMU_ACPI_DM_FF     9
+#define XEN_X86_EMU_ACPI_DM_FF      (1U<<_XEN_X86_EMU_ACPI_DM_FF)
 
 #define XEN_X86_EMU_ALL             (XEN_X86_EMU_LAPIC | XEN_X86_EMU_HPET |  \
                                      XEN_X86_EMU_PM | XEN_X86_EMU_RTC |      \
                                      XEN_X86_EMU_IOAPIC | XEN_X86_EMU_PIC |  \
                                      XEN_X86_EMU_VGA | XEN_X86_EMU_IOMMU |   \
-                                     XEN_X86_EMU_PIT)
+                                     XEN_X86_EMU_PIT | XEN_X86_EMU_ACPI_DM_FF)
     uint32_t emulation_flags;
 };
 
-- 
2.7.4



* [PATCH v6 04/12] pvh/acpi: Handle ACPI accesses for PVH guests
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (2 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 03/12] pvh/acpi: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl Boris Ostrovsky
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Subsequent domctl accesses to the VCPU map will use the same code. We
create the acpi_cpumap_access_common() routine in anticipation of these
changes.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* ACPI registers are only accessed by guest code (not by domctl), thus
  acpi_access_common() is no longer needed
* Adjusted access direction (RW) to be a boolean.
* Dropped unnecessary masking of status register
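
To make the register semantics concrete, below is a rough guest-side
sketch of the accesses these handlers serve (standalone C, illustrative
only; the actual port numbers XEN_ACPI_CPU_MAP and
ACPI_GPE0_BLK_ADDRESS_V1 come from the Xen public headers and are
passed in as parameters here):

    #include <stdint.h>

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        asm volatile ( "inb %1, %0" : "=a" (v) : "Nd" (port) );
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v)
    {
        asm volatile ( "outb %0, %1" : : "a" (v), "Nd" (port) );
    }

    /* Read the availability bits for VCPUs 0-7 from the VCPU map. */
    static uint8_t read_first_cpu_map_byte(uint16_t cpu_map_port)
    {
        return inb(cpu_map_port);
    }

    /* Acknowledge the CPU-hotplug GPE: the status register is
     * write-1-to-clear, so writing the bit back clears it. */
    static void ack_cpuhp_gpe(uint16_t gpe0_sts_port, unsigned int bit)
    {
        outb(gpe0_sts_port, 1u << bit);
    }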

 xen/arch/x86/hvm/acpi.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++-
 xen/common/domain.c     |   1 +
 xen/common/domctl.c     |   5 +++
 xen/include/xen/sched.h |   3 ++
 4 files changed, 117 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
index 15a9a0e..f0a84f9 100644
--- a/xen/arch/x86/hvm/acpi.c
+++ b/xen/arch/x86/hvm/acpi.c
@@ -2,12 +2,43 @@
  *
  * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
  */
+#include <xen/acpi.h>
 #include <xen/errno.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 
 #include <public/arch-x86/xen.h>
 
+static int acpi_cpumap_access_common(struct domain *d, bool is_write,
+                                     unsigned int port,
+                                     unsigned int bytes, uint32_t *val)
+{
+    unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
+
+    BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
+                 > ACPI_GPE0_BLK_ADDRESS_V1);
+
+    if ( !is_write )
+    {
+        uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
+
+        /*
+         * Clear bits that we are about to read to in case we
+         * copy fewer than @bytes.
+         */
+        *val &= mask;
+
+        if ( ((d->max_vcpus + 7) / 8) > first_byte )
+            memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
+                   min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
+    }
+    else
+        /* Guests do not write CPU map */
+        return X86EMUL_UNHANDLEABLE;
+
+    return X86EMUL_OKAY;
+}
+
 int hvm_acpi_domctl_access(struct domain *d,
                            const struct xen_domctl_acpi_access *access)
 {
@@ -17,13 +48,88 @@ int hvm_acpi_domctl_access(struct domain *d,
 static int acpi_cpumap_guest_access(int dir, unsigned int port,
                                     unsigned int bytes, uint32_t *val)
 {
-    return X86EMUL_UNHANDLEABLE;
+    return  acpi_cpumap_access_common(current->domain,
+                                      (dir == IOREQ_WRITE) ? true : false,
+                                      port, bytes, val);
 }
 
 static int acpi_guest_access(int dir, unsigned int port,
                              unsigned int bytes, uint32_t *val)
 {
-    return X86EMUL_UNHANDLEABLE;
+    struct domain *d = current->domain;
+    uint16_t *sts = NULL, *en = NULL;
+    const uint16_t *mask_en = NULL;
+    static const uint16_t pm1a_en_mask = ACPI_BITMASK_GLOBAL_LOCK_ENABLE;
+    static const uint16_t gpe0_en_mask = 1U << XEN_ACPI_GPE0_CPUHP_BIT;
+
+    ASSERT(!has_acpi_dm_ff(d));
+
+    switch ( port )
+    {
+    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
+        ACPI_PM1A_EVT_BLK_ADDRESS_V1 +
+        sizeof(d->arch.hvm_domain.acpi.pm1a_sts) +
+        sizeof(d->arch.hvm_domain.acpi.pm1a_en):
+
+        sts = &d->arch.hvm_domain.acpi.pm1a_sts;
+        en = &d->arch.hvm_domain.acpi.pm1a_en;
+        mask_en = &pm1a_en_mask;
+        break;
+
+    case ACPI_GPE0_BLK_ADDRESS_V1 ...
+        ACPI_GPE0_BLK_ADDRESS_V1 +
+        sizeof(d->arch.hvm_domain.acpi.gpe0_sts) +
+        sizeof(d->arch.hvm_domain.acpi.gpe0_en):
+
+        sts = &d->arch.hvm_domain.acpi.gpe0_sts;
+        en = &d->arch.hvm_domain.acpi.gpe0_en;
+        mask_en = &gpe0_en_mask;
+        break;
+
+    default:
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    if ( dir == IOREQ_READ )
+    {
+        uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
+        uint32_t data = (((uint32_t)*en) << 16) | *sts;
+
+        data >>= 8 * (port & 3);
+        *val = (*val & mask) | (data & ~mask);
+    }
+    else
+    {
+        uint32_t v = *val;
+
+        /* Status register is write-1-to-clear */
+        switch ( port & 3 )
+        {
+        case 0:
+            *sts &= ~(v & 0xff);
+            if ( !--bytes )
+                break;
+            v >>= 8;
+            /* fallthrough */
+        case 1:
+            *sts &= ~((v & 0xff) << 8);
+            if ( !--bytes )
+                break;
+            v >>= 8;
+            /* fallthrough */
+        case 2:
+            *en = ((*en & 0xff00) | (v & 0xff)) & *mask_en;
+            if ( !--bytes )
+                break;
+            v >>= 8;
+            /* fallthrough */
+        case 3:
+            *en = (((v & 0xff) << 8) | (*en & 0xff)) & *mask_en;
+            break;
+        }
+    }
+
+    return X86EMUL_OKAY;
 }
 
 void hvm_acpi_init(struct domain *d)
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 05130e2..ca1f0ed 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -847,6 +847,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     xsm_free_security_domain(d);
     free_cpumask_var(d->domain_dirty_cpumask);
     xfree(d->vcpu);
+    xfree(d->avail_vcpus);
     free_domain_struct(d);
 
     send_global_virq(VIRQ_DOM_EXC);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index b0ee961..0a08b83 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -651,6 +651,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                 goto maxvcpu_out;
         }
 
+        d->avail_vcpus = xzalloc_array(unsigned long,
+                                       BITS_TO_LONGS(d->max_vcpus));
+        if ( !d->avail_vcpus )
+            goto maxvcpu_out;
+
         ret = 0;
 
     maxvcpu_out:
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 063efe6..bee190f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -315,6 +315,9 @@ struct domain
     unsigned int     max_vcpus;
     struct vcpu    **vcpu;
 
+    /* Bitmap of available VCPUs. */
+    unsigned long   *avail_vcpus;
+
     shared_info_t   *shared_info;     /* shared data area */
 
     spinlock_t       domain_lock;
-- 
2.7.4



* [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (3 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 04/12] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-07-31 14:14   ` Ross Lagerwall
  2017-01-03 14:04 ` [PATCH v6 06/12] events/x86: Define SCI virtual interrupt Boris Ostrovsky
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* Adjustments to patch 4 changes
* Added a spinlock for VCPU map access
* Return an error when the guest tries to write the VCPU map

 xen/arch/x86/hvm/acpi.c          | 57 +++++++++++++++++++++++++++++++++++-----
 xen/include/asm-x86/hvm/domain.h |  1 +
 2 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
index f0a84f9..9f0578e 100644
--- a/xen/arch/x86/hvm/acpi.c
+++ b/xen/arch/x86/hvm/acpi.c
@@ -7,17 +7,22 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <asm/guest_access.h>
+
 #include <public/arch-x86/xen.h>
 
-static int acpi_cpumap_access_common(struct domain *d, bool is_write,
-                                     unsigned int port,
+static int acpi_cpumap_access_common(struct domain *d, bool is_guest_access,
+                                     bool is_write, unsigned int port,
                                      unsigned int bytes, uint32_t *val)
 {
     unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
+    int rc = X86EMUL_OKAY;
 
     BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
                  > ACPI_GPE0_BLK_ADDRESS_V1);
 
+    spin_lock(&d->arch.hvm_domain.acpi_lock);
+
     if ( !is_write )
     {
         uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
@@ -32,23 +37,61 @@ static int acpi_cpumap_access_common(struct domain *d, bool is_write,
             memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
                    min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
     }
+    else if ( !is_guest_access )
+        memcpy((uint8_t *)d->avail_vcpus + first_byte, val,
+               min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
     else
         /* Guests do not write CPU map */
-        return X86EMUL_UNHANDLEABLE;
+        rc = X86EMUL_UNHANDLEABLE;
 
-    return X86EMUL_OKAY;
+    spin_unlock(&d->arch.hvm_domain.acpi_lock);
+
+    return rc;
 }
 
 int hvm_acpi_domctl_access(struct domain *d,
                            const struct xen_domctl_acpi_access *access)
 {
-    return -ENOSYS;
+    unsigned int bytes, i;
+    uint32_t val = 0;
+    uint8_t *ptr = (uint8_t *)&val;
+    int rc;
+    bool is_write = (access->rw == XEN_DOMCTL_ACPI_WRITE) ? true : false;
+
+    if ( has_acpi_dm_ff(d) )
+        return -EOPNOTSUPP;
+
+    if ( access->space_id != XEN_ACPI_SYSTEM_IO )
+        return -EINVAL;
+
+    if ( !((access->address >= XEN_ACPI_CPU_MAP) &&
+           (access->address < XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN)) )
+        return -ENODEV;
+
+    for ( i = 0; i < access->width; i += sizeof(val) )
+    {
+        bytes = (access->width - i > sizeof(val)) ?
+            sizeof(val) : access->width - i;
+
+        if ( is_write && copy_from_guest_offset(ptr, access->val, i, bytes) )
+            return -EFAULT;
+
+        rc = acpi_cpumap_access_common(d, false, is_write,
+                                       access->address, bytes, &val);
+        if ( rc )
+            return rc;
+
+        if ( !is_write && copy_to_guest_offset(access->val, i, ptr, bytes) )
+            return -EFAULT;
+    }
+
+    return 0;
 }
 
 static int acpi_cpumap_guest_access(int dir, unsigned int port,
                                     unsigned int bytes, uint32_t *val)
 {
-    return  acpi_cpumap_access_common(current->domain,
+    return  acpi_cpumap_access_common(current->domain, true,
                                       (dir == IOREQ_WRITE) ? true : false,
                                       port, bytes, val);
 }
@@ -148,6 +191,8 @@ void hvm_acpi_init(struct domain *d)
                             sizeof(d->arch.hvm_domain.acpi.pm1a_sts) +
                             sizeof(d->arch.hvm_domain.acpi.pm1a_en),
                             acpi_guest_access);
+
+    spin_lock_init(&d->arch.hvm_domain.acpi_lock);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 07815b6..438ea12 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -111,6 +111,7 @@ struct hvm_domain {
      */
 #define hvm_hw_acpi hvm_hw_pmtimer
     struct hvm_hw_acpi     acpi;
+    spinlock_t             acpi_lock;
 
     /* VCPU which is current target for 8259 interrupts. */
     struct vcpu           *i8259_target;
-- 
2.7.4



* [PATCH v6 06/12] events/x86: Define SCI virtual interrupt
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (4 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 07/12] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

PVH guests do not have an IOAPIC, which is what typically generates
an SCI. For those guests the SCI will be delivered as a virtual
interrupt instead.

Copy the VIRQ_MCA definition from xen-mca.h to xen.h to keep all
x86-specific VIRQ_ARCH_* definitions in one place. (However, because
we don't want to require inclusion of xen.h in xen-mca.h, we preserve
the original definition of VIRQ_MCA as well.)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/include/public/arch-x86/xen.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 2565acd..b1290a3 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -302,6 +302,9 @@ struct xen_arch_domainconfig {
 #define XEN_ACPI_GPE0_CPUHP_BIT      2
 #endif
 
+#define VIRQ_MCA VIRQ_ARCH_0 /* G. (DOM0) Machine Check Architecture */
+#define VIRQ_SCI VIRQ_ARCH_1 /* G. (PVH) ACPI interrupt */
+
 #endif /* !__ASSEMBLY__ */
 
 /*
-- 
2.7.4



* [PATCH v6 07/12] pvh: Send an SCI on VCPU hotplug event
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (5 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 06/12] events/x86: Define SCI virtual interrupt Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Send an SCI when the VCPU map is updated by domctl or when the guest
sets the GPE0 enable bit while the corresponding status bit is already
set.

Also update send_guest_global_virq() to handle the case where VCPU0
is offline.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* Change conditions causing the SCI to be generated:
  - domctl write to VCPU map
  - Enabling a pending GPE0 event
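
For illustration, a PVH guest would consume this the same way as any
other global VIRQ; a rough sketch using the public event-channel
interface (HYPERVISOR_event_channel_op() stands in for whatever
hypercall wrapper the guest environment provides):

    #include <xen/xen.h>
    #include <xen/event_channel.h>
    #include <xen/arch-x86/xen.h>       /* VIRQ_SCI, added by patch 6 */

    static int bind_sci_virq(unsigned int vcpu)
    {
        struct evtchn_bind_virq bind = {
            .virq = VIRQ_SCI,           /* == VIRQ_ARCH_1 */
            .vcpu = vcpu,
        };
        int rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind);

        /* On success, bind.port is the local port to listen on. */
        return rc ?: (int)bind.port;
    }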


 xen/arch/x86/hvm/acpi.c    | 20 ++++++++++++++++++++
 xen/common/event_channel.c |  7 +++++--
 xen/include/xen/domain.h   |  1 +
 xen/include/xen/event.h    |  8 ++++++++
 4 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
index 9f0578e..946640e 100644
--- a/xen/arch/x86/hvm/acpi.c
+++ b/xen/arch/x86/hvm/acpi.c
@@ -4,6 +4,7 @@
  */
 #include <xen/acpi.h>
 #include <xen/errno.h>
+#include <xen/event.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 
@@ -85,6 +86,17 @@ int hvm_acpi_domctl_access(struct domain *d,
             return -EFAULT;
     }
 
+    /*
+     * For simplicity don't verify whether CPU map changed and
+     * always send an SCI on a write (provided it's enabled).
+     */
+    if ( is_write )
+    {
+        d->arch.hvm_domain.acpi.gpe0_sts |= 1U << XEN_ACPI_GPE0_CPUHP_BIT;
+        if ( d->arch.hvm_domain.acpi.gpe0_en & (1U << XEN_ACPI_GPE0_CPUHP_BIT) )
+            send_guest_global_virq(d, VIRQ_SCI);
+    }
+
     return 0;
 }
 
@@ -144,6 +156,7 @@ static int acpi_guest_access(int dir, unsigned int port,
     else
     {
         uint32_t v = *val;
+        uint16_t en_orig = *en;
 
         /* Status register is write-1-to-clear */
         switch ( port & 3 )
@@ -170,6 +183,13 @@ static int acpi_guest_access(int dir, unsigned int port,
             *en = (((v & 0xff) << 8) | (*en & 0xff)) & *mask_en;
             break;
         }
+
+        /*
+         * If an event became enabled and corresponding status bit is set
+         * then send an SCI to the guest.
+         */
+        if ( (*en & ~en_orig) & *sts )
+            send_guest_global_virq(d, VIRQ_SCI);
     }
 
     return X86EMUL_OKAY;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 638dc5e..1d77373 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -727,7 +727,7 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
     spin_unlock_irqrestore(&v->virq_lock, flags);
 }
 
-static void send_guest_global_virq(struct domain *d, uint32_t virq)
+void send_guest_global_virq(struct domain *d, uint32_t virq)
 {
     unsigned long flags;
     int port;
@@ -739,7 +739,10 @@ static void send_guest_global_virq(struct domain *d, uint32_t virq)
     if ( unlikely(d == NULL) || unlikely(d->vcpu == NULL) )
         return;
 
-    v = d->vcpu[0];
+    /* Send to first available VCPU */
+    for_each_vcpu(d, v)
+        if ( is_vcpu_online(v) )
+            break;
     if ( unlikely(v == NULL) )
         return;
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index bce0ea1..b386038 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -52,6 +52,7 @@ void vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_update_avail_vcpus(struct domain *d);
 int arch_domain_create(struct domain *d, unsigned int domcr_flags,
                        struct xen_arch_domainconfig *config);
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 5008c80..74bd605 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -23,6 +23,14 @@
 void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq);
 
 /*
+ * send_guest_global_virq: Notify guest via a global VIRQ.
+ *  @d:        domain to which virtual IRQ should be sent. First
+ *             online VCPU will be selected.
+ *  @virq:     Virtual IRQ number (VIRQ_*)
+ */
+void send_guest_global_virq(struct domain *d, uint32_t virq);
+
+/*
  * send_global_virq: Notify the domain handling a global VIRQ.
  *  @virq:     Virtual IRQ number (VIRQ_*)
  */
-- 
2.7.4



* [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (6 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 07/12] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-04 10:36   ` Wei Liu
  2017-01-03 14:04 ` [PATCH v6 09/12] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug Boris Ostrovsky
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Currently HVM guests that use upstream QEMU do not update xenstore's
VCPU availability entries. While this is not strictly necessary for
hotplug to work, xenstore ends up not reflecting the actual status of
the VCPUs. We should fix this.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
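The entries in question live under the usual libxl layout,
/local/domain/<domid>/cpu/<vcpu>/availability. A small standalone check
with libxenstore might look like the following (domid and VCPU number
are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *xs = xs_open(0);
        unsigned int len;
        char *v;

        if ( !xs )
            return 1;

        v = xs_read(xs, XBT_NULL, "/local/domain/5/cpu/1/availability", &len);
        printf("vcpu1: %s\n", v ? v : "(absent)");

        free(v);
        xs_close(xs);
        return 0;
    }
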
 tools/libxl/libxl.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 6fd4fe1..bbbb3de 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5148,7 +5148,6 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
         switch (libxl__device_model_version_running(gc, domid)) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
         case LIBXL_DEVICE_MODEL_VERSION_NONE:
-            rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
             break;
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
             rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap, &info);
@@ -5158,11 +5157,14 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
         }
         break;
     case LIBXL_DOMAIN_TYPE_PV:
-        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
         break;
     default:
         rc = ERROR_INVAL;
     }
+
+    if (!rc)
+        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
+
 out:
     libxl_dominfo_dispose(&info);
     GC_FREE;
-- 
2.7.4



* [PATCH v6 09/12] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (7 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 10/12] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Provide a libxc interface for accessing ACPI registers via
XEN_DOMCTL_acpi_access.

When a VCPU is hot-(un)plugged to/from a PVH guest, the toolstack
updates the VCPU map by writing to ACPI's XEN_ACPI_CPU_MAP register;
the hypervisor then takes care of raising the GPE0 event.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v6:
* Fix xc_acpi_access() by updating the val pointer passed to the hypercall
  and take some domctl initializers out of the loop
* Don't update GPE0 status on VCPU map update as it is no longer necessary
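
A short usage sketch for the new helper (domain id and bitmap contents
are made up for illustration):

    #include <xenctrl.h>
    #include <xen/arch-x86/xen.h>   /* XEN_ACPI_CPU_MAP */

    /* Mark VCPUs 0-2 of the domain as available. */
    static int set_three_vcpus_online(xc_interface *xch, domid_t domid)
    {
        uint8_t cpumap[1] = { 0x07 };

        return xc_acpi_iowrite(xch, domid, XEN_ACPI_CPU_MAP,
                               sizeof(cpumap), cpumap);
    }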


 tools/libxc/include/xenctrl.h | 20 ++++++++++++++++++++
 tools/libxc/xc_domain.c       | 41 +++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl.c           |  4 ++++
 tools/libxl/libxl_arch.h      |  4 ++++
 tools/libxl/libxl_arm.c       |  6 ++++++
 tools/libxl/libxl_dom.c       | 10 ++++++++++
 tools/libxl/libxl_x86.c       | 11 +++++++++++
 7 files changed, 96 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 4ab0f57..3d771bc 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2710,6 +2710,26 @@ int xc_livepatch_revert(xc_interface *xch, char *name, uint32_t timeout);
 int xc_livepatch_unload(xc_interface *xch, char *name, uint32_t timeout);
 int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout);
 
+int xc_acpi_access(xc_interface *xch, domid_t domid,
+                   uint8_t rw, uint8_t space_id, unsigned long addr,
+                   unsigned int bytes, void *val);
+
+static inline int xc_acpi_ioread(xc_interface *xch, domid_t domid,
+                                 unsigned long port,
+                                 unsigned int bytes, void *val)
+{
+    return xc_acpi_access(xch, domid, XEN_DOMCTL_ACPI_READ, XEN_ACPI_SYSTEM_IO,
+                          port, bytes, val);
+}
+
+static inline int xc_acpi_iowrite(xc_interface *xch, domid_t domid,
+                                  unsigned long port,
+                                  unsigned int bytes, void *val)
+{
+    return xc_acpi_access(xch, domid, XEN_DOMCTL_ACPI_WRITE, XEN_ACPI_SYSTEM_IO,
+                          port, bytes, val);
+}
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 296b852..ed1dddb 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2520,6 +2520,47 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = (domid_t)domid;
     return do_domctl(xch, &domctl);
 }
+
+int
+xc_acpi_access(xc_interface *xch, domid_t domid,
+               uint8_t rw, uint8_t space_id,
+               unsigned long address, unsigned int bytes, void *val)
+{
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(val, bytes, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    struct xen_domctl_acpi_access *access = &domctl.u.acpi_access;
+    unsigned int max_bytes = (1U << (sizeof(access->width) * 8)) - 1;
+    int ret;
+
+    memset(&domctl, 0, sizeof(domctl));
+    domctl.domain = domid;
+    domctl.cmd = XEN_DOMCTL_acpi_access;
+    access->space_id = space_id;
+    access->rw = rw;
+    access->address = address;
+
+    if ( (ret = xc_hypercall_bounce_pre(xch, val)) )
+        return ret;
+
+    while ( bytes != 0 )
+    {
+        access->width = bytes < max_bytes ? bytes : max_bytes;
+        set_xen_guest_handle_offset(domctl.u.acpi_access.val,
+                                    val, access->address - address);
+
+        if ( (ret = do_domctl(xch, &domctl)) )
+             goto out;
+
+        bytes -= access->width;
+        access->address += access->width;
+    }
+
+ out:
+    xc_hypercall_bounce_post(xch, val);
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index bbbb3de..d8306ff 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5147,7 +5147,11 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
     case LIBXL_DOMAIN_TYPE_HVM:
         switch (libxl__device_model_version_running(gc, domid)) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+            break;
         case LIBXL_DEVICE_MODEL_VERSION_NONE:
+            rc = libxl__arch_set_vcpuonline(gc, domid, cpumap);
+            if (rc < 0)
+                LOGE(ERROR, "Can't change vcpu online map (%d)", rc);
             break;
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
             rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap, &info);
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 5e1fc60..9649c21 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -71,6 +71,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+int libxl__arch_set_vcpuonline(libxl__gc *gc, uint32_t domid,
+                               libxl_bitmap *cpumap);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index d842d88..93dc81e 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -126,6 +126,12 @@ out:
     return rc;
 }
 
+int libxl__arch_set_vcpuonline(libxl__gc *gc, uint32_t domid,
+                               libxl_bitmap *cpumap)
+{
+    return ERROR_FAIL;
+}
+
 static struct arch_info {
     const char *guest_type;
     const char *timer_compat;
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d519c8d..ca8f7a2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -309,6 +309,16 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
+    if ((info->type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (libxl__device_model_version_running(gc, domid) ==
+         LIBXL_DEVICE_MODEL_VERSION_NONE)) {
+        rc = libxl__arch_set_vcpuonline(gc, domid, &info->avail_vcpus);
+        if (rc) {
+            LOG(ERROR, "Couldn't set available vcpu count (error %d)", rc);
+            return ERROR_FAIL;
+        }
+    }
+
     /*
      * Check if the domain has any CPU or node affinity already. If not, try
      * to build up the latter via automatic NUMA placement. In fact, in case
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 5da7504..00c3891 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -3,6 +3,9 @@
 
 #include <xc_dom.h>
 
+#include <xen/arch-x86/xen.h>
+#include <xen/hvm/ioreq.h>
+
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
                                       xc_domain_configuration_t *xc_config)
@@ -368,6 +371,14 @@ int libxl__arch_extra_memory(libxl__gc *gc,
     return 0;
 }
 
+int libxl__arch_set_vcpuonline(libxl__gc *gc, uint32_t domid,
+			       libxl_bitmap *cpumap)
+{
+    /* Update VCPU map. */
+    return xc_acpi_iowrite(CTX->xch, domid, XEN_ACPI_CPU_MAP,
+                           cpumap->size, cpumap->map);
+}
+
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
                                            libxl_domain_build_info *info,
                                            libxl__domain_build_state *state,
-- 
2.7.4



* [PATCH v6 10/12] pvh: Set online VCPU map to avail_vcpus
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (8 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 09/12] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 11/12] pvh/acpi: Save ACPI registers for PVH guests Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

The ACPI builder marks VCPUs set in the vcpu_online map as enabled in
the MADT. With ACPI-based CPU hotplug we only want VCPUs that are
started by the guest to be marked as such; the remaining VCPUs will be
marked enabled by AML code during hotplug.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
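For reference, the MADT entry whose flag is being controlled here is the
standard ACPI Processor Local APIC structure (schematic layout per the
ACPI spec, not code from this series):

    struct madt_lapic_entry {
        uint8_t  type;                /* 0 = Processor Local APIC */
        uint8_t  length;              /* 8 */
        uint8_t  acpi_processor_uid;
        uint8_t  apic_id;
        uint32_t flags;               /* bit 0: Enabled */
    };
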
 tools/libxl/libxl_x86_acpi.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index c0a6e32..08178d6 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -98,7 +98,7 @@ static int init_acpi_config(libxl__gc *gc,
     uint32_t domid = dom->guest_domid;
     xc_dominfo_t info;
     struct hvm_info_table *hvminfo;
-    int i, r, rc;
+    int r, rc;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
@@ -147,8 +147,8 @@ static int init_acpi_config(libxl__gc *gc,
         hvminfo->nr_vcpus = info.max_vcpu_id + 1;
     }
 
-    for (i = 0; i < hvminfo->nr_vcpus; i++)
-        hvminfo->vcpu_online[i / 8] |= 1 << (i & 7);
+    memcpy(hvminfo->vcpu_online, b_info->avail_vcpus.map,
+           b_info->avail_vcpus.size);
 
     config->hvminfo = hvminfo;
 
-- 
2.7.4



* [PATCH v6 11/12] pvh/acpi: Save ACPI registers for PVH guests
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (9 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 10/12] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
  11 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, jbeulich, Boris Ostrovsky,
	roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
We can't generate an SCI over the event channel during restore if an
event is enabled and pending:
* v->virq_to_evtchn is not initialized yet (it is set up by the guest)
* The SCI is sent immediately after an event is made pending, so it is
  not possible to miss it (at least for the only event that we have)


 xen/arch/x86/hvm/pmtimer.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index b70c299..e96805e 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/support.h>
 #include <asm/acpi.h> /* for hvm_acpi_power_button prototype */
 #include <public/hvm/params.h>
+#include <xen/event.h>
 
 /* Slightly more readable port I/O addresses for the registers we intercept */
 #define PM1a_STS_ADDR_V0 (ACPI_PM1A_EVT_BLK_ADDRESS_V0)
@@ -257,7 +258,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
     int rc;
 
     if ( !has_vpm(d) )
+    {
+        if ( !has_acpi_dm_ff(d) )
+            return hvm_save_entry(PMTIMER, 0, h, acpi);
         return 0;
+    }
 
     spin_lock(&s->lock);
 
@@ -286,7 +291,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
 
     if ( !has_vpm(d) )
+    {
+        if ( !has_acpi_dm_ff(d) )
+            return hvm_load_entry(PMTIMER, h, acpi);
         return -ENODEV;
+    }
 
     spin_lock(&s->lock);
 
-- 
2.7.4



* [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
                   ` (10 preceding siblings ...)
  2017-01-03 14:04 ` [PATCH v6 11/12] pvh/acpi: Save ACPI registers for PVH guests Boris Ostrovsky
@ 2017-01-03 14:04 ` Boris Ostrovsky
  2017-01-03 16:58   ` Jan Beulich
  2017-01-03 18:19   ` Stefano Stabellini
  11 siblings, 2 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 14:04 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, wei.liu2, George Dunlap, andrew.cooper3,
	ian.jackson, Tim Deegan, jbeulich, Boris Ostrovsky, roger.pau

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Tim Deegan <tim@xen.org>
---
Changes in v6:
* No GPE0 update is needed anymore.

 docs/misc/hvmlite.markdown | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/docs/misc/hvmlite.markdown b/docs/misc/hvmlite.markdown
index 898b8ee..472edee 100644
--- a/docs/misc/hvmlite.markdown
+++ b/docs/misc/hvmlite.markdown
@@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
 
 Description of paravirtualized devices will come from XenStore, just as it's
 done for HVM guests.
+
+## VCPU hotplug ##
+
+VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
+follows the ACPI model, where a change in the domain's number of VCPUs
+(stored in domain.avail_vcpus) results in an SCI being sent to the guest.
+The guest then executes the DSDT's PRSC method, updating the MADT enable
+status for the affected VCPU.
+
+Updating the VCPU count is achieved by having the toolstack issue a write
+to ACPI's XEN_ACPI_CPU_MAP register.
-- 
2.7.4



* Re: [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only
  2017-01-03 14:04 ` [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only Boris Ostrovsky
@ 2017-01-03 16:55   ` Jan Beulich
  0 siblings, 0 replies; 24+ messages in thread
From: Jan Beulich @ 2017-01-03 16:55 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: andrew.cooper3, xen-devel, wei.liu2, ian.jackson, roger.pau

>>> On 03.01.17 at 15:04, <boris.ostrovsky@oracle.com> wrote:
> No one else needs to include it since it is only useful to code
> that can make domctl calls. And public domctl.h can only be included
> by the toolstack or the hypervisor.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
> New in v6 (not required by the series).
> 
> Q: Should include/public/hvm/save.h have the same guards?

Yes. In fact the architecture specific header should never be included
directly, so putting the guard just there (and ...

> --- a/xen/include/public/arch-x86/hvm/save.h
> +++ b/xen/include/public/arch-x86/hvm/save.h
> @@ -26,6 +26,8 @@
>  #ifndef __XEN_PUBLIC_HVM_SAVE_X86_H__
>  #define __XEN_PUBLIC_HVM_SAVE_X86_H__
>  
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)

... an #error here and in the ARM counterpart) would seem best.
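
Purely as a sketch of that idea (not the actual hunk), the guard in the
arch-specific header could become:

    #if !defined(__XEN__) && !defined(__XEN_TOOLS__)
    #error "public/arch-x86/hvm/save.h is for the hypervisor and tools only"
    #endif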

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
@ 2017-01-03 16:58   ` Jan Beulich
  2017-01-03 19:33     ` Boris Ostrovsky
  2017-01-03 18:19   ` Stefano Stabellini
  1 sibling, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2017-01-03 16:58 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau

>>> On 03.01.17 at 15:04, <boris.ostrovsky@oracle.com> wrote:
> --- a/docs/misc/hvmlite.markdown
> +++ b/docs/misc/hvmlite.markdown
> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>  
>  Description of paravirtualized devices will come from XenStore, just as it's
>  done for HVM guests.
> +
> +## VCPU hotplug ##
> +
> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
> +follows ACPI model where change in domain's number of VCPUS (stored in
> +domain.avail_vcpus) results in an SCI being sent to the guest. The guest
> +then executes DSDT's PRSC method, updating MADT enable status for the
> +affected VCPU.
> +
> +Updating VCPU number is achieved by having the toolstack issue a write to
> +ACPI's XEN_ACPI_CPU_MAP.

Is any of this valid anymore in the context of the recent discussion?
Perhaps even wider - how much of this series is applicable if pCPU
hotplug is to use the normal ACPI code path? I hope the plan is not
to have different vCPU hotplug for DomU and Dom0?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
  2017-01-03 16:58   ` Jan Beulich
@ 2017-01-03 18:19   ` Stefano Stabellini
  2017-01-03 20:31     ` Boris Ostrovsky
  1 sibling, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-01-03 18:19 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, jbeulich, roger.pau

On Tue, 3 Jan 2017, Boris Ostrovsky wrote:
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Tim Deegan <tim@xen.org>
> ---
> Changes in v6:
> * No GPE0 update is needed anymore.
> 
>  docs/misc/hvmlite.markdown | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/docs/misc/hvmlite.markdown b/docs/misc/hvmlite.markdown
> index 898b8ee..472edee 100644
> --- a/docs/misc/hvmlite.markdown
> +++ b/docs/misc/hvmlite.markdown
> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>  
>  Description of paravirtualized devices will come from XenStore, just as it's
>  done for HVM guests.
> +
> +## VCPU hotplug ##
> +
> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
> +follows ACPI model where change in domain's number of VCPUS (stored in
> +domain.avail_vcpus) results in an SCI being sent to the guest. The guest
> +then executes DSDT's PRSC method, updating MADT enable status for the
> +affected VCPU.
> +
> +Updating VCPU number is achieved by having the toolstack issue a write to
> +ACPI's XEN_ACPI_CPU_MAP.

Looking at 1483452256-2879-10-git-send-email-boris.ostrovsky@oracle.com,
this is done via domctl. I think it is worth documenting that.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access
  2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
@ 2017-01-03 18:21   ` Daniel De Graaf
  2017-01-03 20:51   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 24+ messages in thread
From: Daniel De Graaf @ 2017-01-03 18:21 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: andrew.cooper3, wei.liu2, ian.jackson, jbeulich, roger.pau

On 01/03/2017 09:04 AM, Boris Ostrovsky wrote:
> This domctl will allow the toolstack to read and write some
> ACPI registers. It will be available to both x86 and ARM
> but will be implemented first only for x86.
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>



-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 16:58   ` Jan Beulich
@ 2017-01-03 19:33     ` Boris Ostrovsky
  2017-01-04  9:26       ` Jan Beulich
  0 siblings, 1 reply; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 19:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau

On 01/03/2017 11:58 AM, Jan Beulich wrote:
>>>> On 03.01.17 at 15:04, <boris.ostrovsky@oracle.com> wrote:
>> --- a/docs/misc/hvmlite.markdown
>> +++ b/docs/misc/hvmlite.markdown
>> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>>  
>>  Description of paravirtualized devices will come from XenStore, just as it's
>>  done for HVM guests.
>> +
>> +## VCPU hotplug ##
>> +
>> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
>> +follows ACPI model where change in domain's number of VCPUS (stored in
>> +domain.avail_vcpus) results in an SCI being sent to the guest. The guest
>> +then executes DSDT's PRSC method, updating MADT enable status for the
>> +affected VCPU.
>> +
>> +Updating VCPU number is achieved by having the toolstack issue a write to
>> +ACPI's XEN_ACPI_CPU_MAP.
> Is any of this valid anymore in the context of the recent discussion?
> Perhaps even wider - how much of this series is applicable if pCPU
> hotplug is to use the normal ACPI code path? 

pCPU hotplug is not going to use this path because it would not be
executing the PRSC method that we (the Xen toolstack) provide.


> I hope the plan is not
> to have different vCPU hotplug for DomU and Dom0?

That was not the plan. But I haven't thought of dom0 not being able to
execute PRSC.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 18:19   ` Stefano Stabellini
@ 2017-01-03 20:31     ` Boris Ostrovsky
  0 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-01-03 20:31 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Tim Deegan, wei.liu2, George Dunlap, andrew.cooper3, ian.jackson,
	xen-devel, jbeulich, roger.pau

On 01/03/2017 01:19 PM, Stefano Stabellini wrote:
> On Tue, 3 Jan 2017, Boris Ostrovsky wrote:
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> ---
>> CC: George Dunlap <George.Dunlap@eu.citrix.com>
>> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Tim Deegan <tim@xen.org>
>> ---
>> Changes in v6:
>> * No GPE0 update is needed anymore.
>>
>>  docs/misc/hvmlite.markdown | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/docs/misc/hvmlite.markdown b/docs/misc/hvmlite.markdown
>> index 898b8ee..472edee 100644
>> --- a/docs/misc/hvmlite.markdown
>> +++ b/docs/misc/hvmlite.markdown
>> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>>  
>>  Description of paravirtualized devices will come from XenStore, just as it's
>>  done for HVM guests.
>> +
>> +## VCPU hotplug ##
>> +
>> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
>> +follows ACPI model where change in domain's number of VCPUS (stored in
>> +domain.avail_vcpus) results in an SCI being sent to the guest. The guest
>> +then executes DSDT's PRSC method, updating MADT enable status for the
>> +affected VCPU.
>> +
>> +Updating VCPU number is achieved by having the toolstack issue a write to
>> +ACPI's XEN_ACPI_CPU_MAP.
> Looking at 1483452256-2879-10-git-send-email-boris.ostrovsky@oracle.com,
> this is done via domctl. I think it is worth documenting that.

Will do.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access
  2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
  2017-01-03 18:21   ` Daniel De Graaf
@ 2017-01-03 20:51   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 24+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-01-03 20:51 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, jbeulich,
	Daniel De Graaf, roger.pau

> diff --git a/xen/arch/x86/hvm/acpi.c b/xen/arch/x86/hvm/acpi.c
> new file mode 100644
> index 0000000..04901c1
> --- /dev/null
> +++ b/xen/arch/x86/hvm/acpi.c
> @@ -0,0 +1,24 @@
> +/* acpi.c: ACPI access handling
> + *
> + * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.

2017.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure
  2017-01-03 19:33     ` Boris Ostrovsky
@ 2017-01-04  9:26       ` Jan Beulich
  0 siblings, 0 replies; 24+ messages in thread
From: Jan Beulich @ 2017-01-04  9:26 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, Stefano Stabellini, wei.liu2, George Dunlap,
	andrew.cooper3, ian.jackson, xen-devel, roger.pau

>>> On 03.01.17 at 20:33, <boris.ostrovsky@oracle.com> wrote:
> On 01/03/2017 11:58 AM, Jan Beulich wrote:
>>>>> On 03.01.17 at 15:04, <boris.ostrovsky@oracle.com> wrote:
>>> --- a/docs/misc/hvmlite.markdown
>>> +++ b/docs/misc/hvmlite.markdown
>>> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>>>  
>>>  Description of paravirtualized devices will come from XenStore, just as it's
>>>  done for HVM guests.
>>> +
>>> +## VCPU hotplug ##
>>> +
>>> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
>>> +follows ACPI model where change in domain's number of VCPUS (stored in
>>> +domain.avail_vcpus) results in an SCI being sent to the guest. The guest
>>> +then executes DSDT's PRSC method, updating MADT enable status for the
>>> +affected VCPU.
>>> +
>>> +Updating VCPU number is achieved by having the toolstack issue a write to
>>> +ACPI's XEN_ACPI_CPU_MAP.
>> Is any of this valid anymore in the context of the recent discussion?
>> Perhaps even wider - how much of this series is applicable if pCPU
>> hotplug is to use the normal ACPI code path? 
> 
> pCPU hotplug is not going to use this path because it would not be
> executing PRSC method that we (Xen toolstack) provide.
> 
> 
>> I hope the plan is not
>> to have different vCPU hotplug for DomU and Dom0?
> 
> That was not the plan. But I haven't thought of dom0 not being able to
> execute PRSC.

Well - bottom line to me then is: This series needs to be deferred
until there is a plan for acceptable Dom0 behavior. In particular it
may well be that PVH needs to go the PV vCPU hotplug route instead,
in which case we'd need to evaluate which of the already committed
patches make no sense anymore.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types
  2017-01-03 14:04 ` [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
@ 2017-01-04 10:36   ` Wei Liu
  0 siblings, 0 replies; 24+ messages in thread
From: Wei Liu @ 2017-01-04 10:36 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, jbeulich, roger.pau

On Tue, Jan 03, 2017 at 09:04:12AM -0500, Boris Ostrovsky wrote:
> Currently HVM guests that use upstream qemu do not update xenstore's
> availability entry for VCPUs. While it is not strictly necessary for
> hotplug to work, xenstore ends up not reflecting the actual status of
> VCPUs. We should fix this.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>

> ---
>  tools/libxl/libxl.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 6fd4fe1..bbbb3de 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -5148,7 +5148,6 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
>          switch (libxl__device_model_version_running(gc, domid)) {
>          case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>          case LIBXL_DEVICE_MODEL_VERSION_NONE:
> -            rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
>              break;
>          case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
>              rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap, &info);
> @@ -5158,11 +5157,14 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
>          }
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
> -        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
>          break;
>      default:
>          rc = ERROR_INVAL;
>      }
> +
> +    if (!rc)
> +        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap, &info);
> +
>  out:
>      libxl_dominfo_dispose(&info);
>      GC_FREE;
> -- 
> 2.7.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl
  2017-01-03 14:04 ` [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl Boris Ostrovsky
@ 2017-07-31 14:14   ` Ross Lagerwall
  2017-07-31 14:59     ` Boris Ostrovsky
  0 siblings, 1 reply; 24+ messages in thread
From: Ross Lagerwall @ 2017-07-31 14:14 UTC (permalink / raw)
  To: Boris Ostrovsky, xen-devel
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, jbeulich, Ian Jackson

On 01/03/2017 02:04 PM, Boris Ostrovsky wrote:
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
> Changes in v6:
> * Adjustments to patch 4 changes.
> * Added a spinlock for VCPU map access
> * Return an error on guest trying to write VCPU map
> 
snip
> -static int acpi_cpumap_access_common(struct domain *d, bool is_write,
> -                                     unsigned int port,
> +static int acpi_cpumap_access_common(struct domain *d, bool is_guest_access,
> +                                     bool is_write, unsigned int port,
>                                        unsigned int bytes, uint32_t *val)
>   {
>       unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
> +    int rc = X86EMUL_OKAY;
> 
>       BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
>                    > ACPI_GPE0_BLK_ADDRESS_V1);
> 
> +    spin_lock(&d->arch.hvm_domain.acpi_lock);
> +
>       if ( !is_write )
>       {
>           uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
> @@ -32,23 +37,61 @@ static int acpi_cpumap_access_common(struct domain *d, bool is_write,
>               memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
>                      min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
>       }
> +    else if ( !is_guest_access )
> +        memcpy((uint8_t *)d->avail_vcpus + first_byte, val,
> +               min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
>       else
>           /* Guests do not write CPU map */
> -        return X86EMUL_UNHANDLEABLE;
> +        rc = X86EMUL_UNHANDLEABLE;
> 
> -    return X86EMUL_OKAY;
> +    spin_unlock(&d->arch.hvm_domain.acpi_lock);
> +
> +    return rc;
>   }
> 
>   int hvm_acpi_domctl_access(struct domain *d,
>                              const struct xen_domctl_acpi_access *access)
>   {
> -    return -ENOSYS;
> +    unsigned int bytes, i;
> +    uint32_t val = 0;
> +    uint8_t *ptr = (uint8_t *)&val;
> +    int rc;
> +    bool is_write = (access->rw == XEN_DOMCTL_ACPI_WRITE) ? true : false;
> +
> +    if ( has_acpi_dm_ff(d) )
> +        return -EOPNOTSUPP;
> +
> +    if ( access->space_id != XEN_ACPI_SYSTEM_IO )
> +        return -EINVAL;
> +
> +    if ( !((access->address >= XEN_ACPI_CPU_MAP) &&
> +           (access->address < XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN)) )
> +        return -ENODEV;
> +
> +    for ( i = 0; i < access->width; i += sizeof(val) )
> +    {
> +        bytes = (access->width - i > sizeof(val)) ?
> +            sizeof(val) : access->width - i;
> +
> +        if ( is_write && copy_from_guest_offset(ptr, access->val, i, bytes) )
> +            return -EFAULT;
> +
> +        rc = acpi_cpumap_access_common(d, false, is_write,
> +                                       access->address, bytes, &val);

While I'm looking at this code...
This doesn't work if access->width > sizeof(val) (4 bytes). The same
value (access->address) is always passed into acpi_cpumap_access_common
for 'port', and that is used to derive the offset into the avail_vcpus
array. So the offset never advances and only the first 4 bytes of
avail_vcpus ever get changed.
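
One way to address that (an untested sketch, not a patch from this
thread) would be to advance the port by the loop offset so that
first_byte moves through the map as well:

        rc = acpi_cpumap_access_common(d, false, is_write,
                                       access->address + i, bytes, &val);

together with a range check that access->address + access->width does
not run past XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN.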

-- 
Ross Lagerwall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl
  2017-07-31 14:14   ` Ross Lagerwall
@ 2017-07-31 14:59     ` Boris Ostrovsky
  0 siblings, 0 replies; 24+ messages in thread
From: Boris Ostrovsky @ 2017-07-31 14:59 UTC (permalink / raw)
  To: Ross Lagerwall, xen-devel
  Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, jbeulich, Ian Jackson

On 07/31/2017 10:14 AM, Ross Lagerwall wrote:
> On 01/03/2017 02:04 PM, Boris Ostrovsky wrote:
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> ---
>> Changes in v6:
>> * Adjustments to patch 4 changes.
>> * Added a spinlock for VCPU map access
>> * Return an error on guest trying to write VCPU map
>>
> snip
>> -static int acpi_cpumap_access_common(struct domain *d, bool is_write,
>> -                                     unsigned int port,
>> +static int acpi_cpumap_access_common(struct domain *d, bool
>> is_guest_access,
>> +                                     bool is_write, unsigned int port,
>>                                        unsigned int bytes, uint32_t
>> *val)
>>   {
>>       unsigned int first_byte = port - XEN_ACPI_CPU_MAP;
>> +    int rc = X86EMUL_OKAY;
>>
>>       BUILD_BUG_ON(XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN
>>                    > ACPI_GPE0_BLK_ADDRESS_V1);
>>
>> +    spin_lock(&d->arch.hvm_domain.acpi_lock);
>> +
>>       if ( !is_write )
>>       {
>>           uint32_t mask = (bytes < 4) ? ~0U << (bytes * 8) : 0;
>> @@ -32,23 +37,61 @@ static int acpi_cpumap_access_common(struct
>> domain *d, bool is_write,
>>               memcpy(val, (uint8_t *)d->avail_vcpus + first_byte,
>>                      min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
>>       }
>> +    else if ( !is_guest_access )
>> +        memcpy((uint8_t *)d->avail_vcpus + first_byte, val,
>> +               min(bytes, ((d->max_vcpus + 7) / 8) - first_byte));
>>       else
>>           /* Guests do not write CPU map */
>> -        return X86EMUL_UNHANDLEABLE;
>> +        rc = X86EMUL_UNHANDLEABLE;
>>
>> -    return X86EMUL_OKAY;
>> +    spin_unlock(&d->arch.hvm_domain.acpi_lock);
>> +
>> +    return rc;
>>   }
>>
>>   int hvm_acpi_domctl_access(struct domain *d,
>>                              const struct xen_domctl_acpi_access
>> *access)
>>   {
>> -    return -ENOSYS;
>> +    unsigned int bytes, i;
>> +    uint32_t val = 0;
>> +    uint8_t *ptr = (uint8_t *)&val;
>> +    int rc;
>> +    bool is_write = (access->rw == XEN_DOMCTL_ACPI_WRITE) ? true :
>> false;
>> +
>> +    if ( has_acpi_dm_ff(d) )
>> +        return -EOPNOTSUPP;
>> +
>> +    if ( access->space_id != XEN_ACPI_SYSTEM_IO )
>> +        return -EINVAL;
>> +
>> +    if ( !((access->address >= XEN_ACPI_CPU_MAP) &&
>> +           (access->address < XEN_ACPI_CPU_MAP +
>> XEN_ACPI_CPU_MAP_LEN)) )
>> +        return -ENODEV;
>> +
>> +    for ( i = 0; i < access->width; i += sizeof(val) )
>> +    {
>> +        bytes = (access->width - i > sizeof(val)) ?
>> +            sizeof(val) : access->width - i;
>> +
>> +        if ( is_write && copy_from_guest_offset(ptr, access->val, i,
>> bytes) )
>> +            return -EFAULT;
>> +
>> +        rc = acpi_cpumap_access_common(d, false, is_write,
>> +                                       access->address, bytes, &val);
>
> While I'm looking at this code...
> This doesn't work if access->width > sizeof(val) (4 bytes). The same
> value (access->address) is always passed into
> acpi_cpumap_access_common for 'port' and this is used as an offset
> into the avail_cpus array. So the offset is unchanged and only the
> first 4 bytes of avail_cpus ever gets changed.

I'd have to go back to the series (I haven't looked at it since it was
posted back in January), but I think I enforce somewhere that the size of
the access fits into 4 bytes. If not, then you are right.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2017-07-31 14:59 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-03 14:04 [PATCH v6 00/12] PVH VCPU hotplug support Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 01/12] domctl: Add XEN_DOMCTL_acpi_access Boris Ostrovsky
2017-01-03 18:21   ` Daniel De Graaf
2017-01-03 20:51   ` Konrad Rzeszutek Wilk
2017-01-03 14:04 ` [PATCH v6 02/12] x86/save: public/arch-x86/hvm/save.h is available to hypervisor and tools only Boris Ostrovsky
2017-01-03 16:55   ` Jan Beulich
2017-01-03 14:04 ` [PATCH v6 03/12] pvh/acpi: Install handlers for ACPI-related PVH IO accesses Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 04/12] pvh/acpi: Handle ACPI accesses for PVH guests Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 05/12] x86/domctl: Handle ACPI access from domctl Boris Ostrovsky
2017-07-31 14:14   ` Ross Lagerwall
2017-07-31 14:59     ` Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 06/12] events/x86: Define SCI virtual interrupt Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 07/12] pvh: Send an SCI on VCPU hotplug event Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 08/12] libxl: Update xenstore on VCPU hotplug for all guest types Boris Ostrovsky
2017-01-04 10:36   ` Wei Liu
2017-01-03 14:04 ` [PATCH v6 09/12] tools: Call XEN_DOMCTL_acpi_access on PVH VCPU hotplug Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 10/12] pvh: Set online VCPU map to avail_vcpus Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 11/12] pvh/acpi: Save ACPI registers for PVH guests Boris Ostrovsky
2017-01-03 14:04 ` [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure Boris Ostrovsky
2017-01-03 16:58   ` Jan Beulich
2017-01-03 19:33     ` Boris Ostrovsky
2017-01-04  9:26       ` Jan Beulich
2017-01-03 18:19   ` Stefano Stabellini
2017-01-03 20:31     ` Boris Ostrovsky
