* [RFC PATCH 0/8] Add guest CPU topology support
@ 2018-01-08  4:01 Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercalls to set and get cpu topology Chao Gao
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
	Ian Jackson, Tim Deegan, Jan Beulich, Daniel De Graaf, Chao Gao

This series of patches adds guest CPU topology support for HVM guests.

CPU topology here means the number of sockets, the number of cores in
each socket, and the number of threads in each core. Currently, guest CPU
topology cannot be specified, except for the number of sockets via
virtual NUMA. On Intel CPUs, topology is exposed to software via the
extended topology leaf (CPUID.0xb), but this leaf isn't emulated. Without
it, the x2APIC ID cannot be exposed to the guest, which in turn limits
the number of vCPUs. And although guest virtual NUMA is configurable, its
emulation is incomplete because it isn't consistent with the NUMA
information exposed by guest CPUID. For more information about how
software should derive CPU topology, please refer to the SDM section
"PROGRAMMING CONSIDERATIONS FOR HARDWARE MULTI-THREADING CAPABLE
PROCESSORS".

This series of patches makes the following contributions:
1. complete guest virtual NUMA emulation
2. make guest CPU topology configurable
3. add guest CPUID.0xb emulation, which also removes the limitation on
the number of vcpus.

With this series, guest CPU topology can be specified in the guest
configuration. When creating a guest, libxl assigns an APIC ID to each
vCPU and passes these APIC IDs down to Xen via hypercall before the vCPUs
are created, so that each vCPU can be initialized with the correct APIC
ID. Then, when the guest CPUID policy is set up, the CPU topology is
obtained from Xen via hypercall and the relevant parts of the CPUID.0xb
and CPUID.4 return values are emulated accordingly. In hvmloader, the
CPU topology is retrieved from Xen in order to build the ACPI tables and
boot the APs.
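
For example (hypothetical values; the key names come from the xl.cfg
additions in patch 3), an 8-vCPU guest with two vNUMA nodes, 4 cores per
socket and 2 threads per core could use:

    vcpus = 8
    cpu_topology = "cores=4,threads=2,real_threads=2"

The socket (PACKAGE_ID) of each vCPU is derived from its vNUMA node, as
worked out in patch 4.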

Chao Gao (8):
  x86/domctl: introduce a pair of hypercalls to set and get cpu topology
  x86/vlapic: use apic_id array to set initial (x2)APIC ID
  xl/parse: introduce cpu_topology to guest config
  libxl: calculate and set vcpu topology
  xen/mem: handle XENMEM_get_cpu_topology in compat_memory_op
  hvmloader: get CPU topology information from hypervisor
  libacpi: build madt/srat according to cpu topology
  x86/cpuid: emulate extended topology enumeration leaf

 docs/man/xl.cfg.pod.5.in             | 21 +++++++++
 tools/firmware/hvmloader/Makefile    |  2 +-
 tools/firmware/hvmloader/hvmloader.c |  8 ++++
 tools/firmware/hvmloader/smp.c       |  3 +-
 tools/firmware/hvmloader/topology.c  | 57 ++++++++++++++++++++++++
 tools/firmware/hvmloader/topology.h  | 36 +++++++++++++++
 tools/firmware/hvmloader/util.c      |  8 +++-
 tools/libacpi/build.c                |  4 +-
 tools/libacpi/libacpi.h              |  5 ++-
 tools/libxc/include/xenctrl.h        | 15 +++++++
 tools/libxc/xc_cpuid_x86.c           | 86 +++++++++++++++++++++++++++++++++---
 tools/libxc/xc_domain.c              | 71 +++++++++++++++++++++++++++++
 tools/libxl/libxl_arch.h             |  4 ++
 tools/libxl/libxl_arm.c              |  6 +++
 tools/libxl/libxl_dom.c              |  4 ++
 tools/libxl/libxl_types.idl          |  8 ++++
 tools/libxl/libxl_x86.c              | 70 +++++++++++++++++++++++++++++
 tools/libxl/libxl_x86_acpi.c         |  6 ++-
 tools/xl/xl_parse.c                  | 19 ++++++++
 xen/arch/x86/cpuid.c                 | 32 +++++++++++++-
 xen/arch/x86/domctl.c                | 35 +++++++++++++++
 xen/arch/x86/hvm/hvm.c               |  7 +++
 xen/arch/x86/hvm/vlapic.c            | 10 ++---
 xen/arch/x86/mm.c                    | 45 +++++++++++++++++++
 xen/common/compat/memory.c           | 21 +++++++++
 xen/include/asm-x86/cpuid.h          | 12 +++++
 xen/include/asm-x86/hvm/domain.h     | 18 ++++++++
 xen/include/public/domctl.h          | 22 +++++++++
 xen/include/public/memory.h          | 27 ++++++++++-
 xen/include/xlat.lst                 |  1 +
 xen/include/xsm/dummy.h              |  6 +++
 xen/xsm/dummy.c                      |  1 +
 xen/xsm/flask/hooks.c                | 10 +++++
 xen/xsm/flask/policy/access_vectors  |  4 ++
 34 files changed, 661 insertions(+), 23 deletions(-)
 create mode 100644 tools/firmware/hvmloader/topology.c
 create mode 100644 tools/firmware/hvmloader/topology.h

-- 
1.8.3.1



* [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercalls to set and get cpu topology
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08 18:14   ` Daniel De Graaf
                     ` (2 more replies)
  2018-01-08  4:01 ` [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID Chao Gao
                   ` (6 subsequent siblings)
  7 siblings, 3 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, Jan Beulich, Andrew Cooper, Daniel De Graaf,
	Chao Gao

Define the interface, structures and hypercalls for the toolstack to
set the CPU topology and for the guest to retrieve it [1]. Two subop
hypercalls are introduced by this patch:
XEN_DOMCTL_set_cpu_topology to set the CPU topology information of a
domain, and XENMEM_get_cpu_topology to retrieve it.

[1]: during guest creation, this information helps hvmloader to build
the ACPI tables.
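
A caller that only needs the scalar counts can pass size = 0 and no tid
handle, which the XENMEM_get_cpu_topology handler explicitly permits (a
minimal guest-side sketch, assuming hvmloader's hypercall_memory_op()
wrapper is available):

    struct xen_cpu_topology_info info = {
        .domid = DOMID_SELF,
        .size = 0,           /* scalar fields only; skip the tid array */
    };

    if ( hypercall_memory_op(XENMEM_get_cpu_topology, &info) == 0 )
        printf("cores/socket: %u, threads/core: %u\n",
               info.core_per_socket, info.thread_per_core);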

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 xen/arch/x86/domctl.c               | 27 ++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              |  7 ++++++
 xen/arch/x86/mm.c                   | 45 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/domain.h    | 15 +++++++++++++
 xen/include/public/domctl.h         | 22 ++++++++++++++++++
 xen/include/public/memory.h         | 27 +++++++++++++++++++++-
 xen/include/xsm/dummy.h             |  6 +++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               | 10 +++++++++
 xen/xsm/flask/policy/access_vectors |  4 ++++
 10 files changed, 163 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 36ab235..4e1bbd5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -347,6 +347,29 @@ void arch_get_domain_info(const struct domain *d,
         info->flags |= XEN_DOMINF_hap;
 }
 
+static int arch_set_cpu_topology(struct domain *d,
+                                 struct xen_domctl_cpu_topology *topology)
+{
+    if ( !is_hvm_domain(d) ||
+         !topology->size || topology->size > HVM_MAX_VCPUS )
+        return -EINVAL;
+
+    if ( !d->arch.hvm_domain.apic_id )
+        d->arch.hvm_domain.apic_id = xmalloc_array(uint32_t, topology->size);
+
+    if ( !d->arch.hvm_domain.apic_id )
+        return -ENOMEM;
+
+    if ( copy_from_guest(d->arch.hvm_domain.apic_id, topology->tid,
+                         topology->size) )
+        return -EFAULT;
+
+    d->arch.hvm_domain.apic_id_size = topology->size;
+    d->arch.hvm_domain.core_per_socket = topology->core_per_socket;
+    d->arch.hvm_domain.thread_per_core = topology->thread_per_core;
+    return 0;
+}
+
 #define MAX_IOPORTS 0x10000
 
 long arch_do_domctl(
@@ -1555,6 +1578,10 @@ long arch_do_domctl(
         recalculate_cpuid_policy(d);
         break;
 
+    case XEN_DOMCTL_set_cpu_topology:
+        ret = arch_set_cpu_topology(d, &domctl->u.cpu_topology);
+        break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 71fddfd..b3b3224 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1509,6 +1509,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
     int rc;
     struct domain *d = v->domain;
 
+    if ( v->vcpu_id >= d->arch.hvm_domain.apic_id_size )
+    {
+        printk(XENLOG_ERR "d%dv%d's apic id isn't set.\n",
+               d->domain_id, v->vcpu_id);
+        return -ENOENT;
+    }
+
     hvm_asid_flush_vcpu(v);
 
     spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a56f875..b90e663 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4413,6 +4413,51 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return rc;
     }
 
+    case XENMEM_get_cpu_topology:
+    {
+        struct domain *d;
+        struct xen_cpu_topology_info topology;
+
+        if ( copy_from_guest(&topology, arg, 1) )
+            return -EFAULT;
+
+        if ( topology.pad || topology.pad2 )
+            return -EINVAL;
+
+        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
+            return -ESRCH;
+
+        rc = xsm_get_cpu_topology(XSM_TARGET, d);
+        if ( rc )
+            goto get_cpu_topology_failed;
+
+        rc = -EOPNOTSUPP;
+        if ( !is_hvm_domain(d) || !d->arch.hvm_domain.apic_id )
+            goto get_cpu_topology_failed;
+
+        /* allow the size to be zero for callers who don't care about apic_id */
+        if ( topology.size )
+        {
+            rc = -E2BIG;
+            if ( topology.size != d->arch.hvm_domain.apic_id_size )
+                goto get_cpu_topology_failed;
+
+            rc = -EFAULT;
+            if ( copy_to_guest(topology.tid.h, d->arch.hvm_domain.apic_id,
+                               topology.size) )
+                goto get_cpu_topology_failed;
+        }
+
+        topology.core_per_socket = d->arch.hvm_domain.core_per_socket;
+        topology.thread_per_core = d->arch.hvm_domain.thread_per_core;
+
+        rc = __copy_to_guest(arg, &topology, 1) ? -EFAULT : 0;
+
+ get_cpu_topology_failed:
+        rcu_unlock_domain(d);
+        return rc;
+    }
+
     default:
         return subarch_memory_op(cmd, arg);
     }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 7f128c0..501ed99 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -196,6 +196,21 @@ struct hvm_domain {
         struct vmx_domain vmx;
         struct svm_domain svm;
     };
+
+    /*
+     * An array of APIC IDs; each entry is unique and can be used to
+     * extract the socket ID, core ID and thread ID.
+     */
+    uint32_t *apic_id;
+    uint32_t apic_id_size;
+
+    /*
+     * The number of cores per socket and threads per core; these
+     * determine the right-shift values used to extract the core and
+     * thread IDs from an apic_id (defined above).
+     */
+    uint8_t core_per_socket;
+    uint8_t thread_per_core;
 };
 
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 9ae72959..99392b7 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1109,6 +1109,26 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/* XEN_DOMCTL_set_cpu_topology */
+struct xen_domctl_cpu_topology {
+    /* IN - size of 'topology' array */
+    uint32_t size;
+    /* IN - the number of cores per socket */
+    uint8_t core_per_socket;
+    /* IN - the number of threads per core */
+    uint8_t thread_per_core;
+    /* IN - should be 0 */
+    uint8_t pad[2];
+    /*
+     * IN - an array of topology IDs (tid), used to compute a given
+     * vcpu's core ID and thread ID. For x86, the topology ID is the
+     * APIC ID, which is unique system-wide.
+     */
+    XEN_GUEST_HANDLE_64(uint32) tid;
+};
+typedef struct xen_domctl_cpu_topology xen_domctl_cpu_topology;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cpu_topology);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1188,6 +1208,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_soft_reset                    79
 #define XEN_DOMCTL_set_gnttab_limits             80
 #define XEN_DOMCTL_vuart_op                      81
+#define XEN_DOMCTL_set_cpu_topology              82
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1252,6 +1273,7 @@ struct xen_domctl {
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_set_gnttab_limits set_gnttab_limits;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_cpu_topology      cpu_topology;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 29386df..a6bcc64 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -650,7 +650,32 @@ struct xen_vnuma_topology_info {
 typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
 DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
 
-/* Next available subop number is 28 */
+/*
+ * XENMEM_get_cpu_topology is used by a guest to acquire its vCPU
+ * topology from the hypervisor.
+ */
+#define XENMEM_get_cpu_topology     28
+
+struct xen_cpu_topology_info {
+    /* IN */
+    domid_t domid;
+    uint16_t pad;
+
+    /* IN/OUT */
+    uint32_t size;
+
+    /* OUT */
+    uint8_t core_per_socket;
+    uint8_t thread_per_core;
+    uint16_t pad2;
+
+    union {
+        XEN_GUEST_HANDLE(uint32) h;
+        uint64_t pad;
+    } tid;
+};
+typedef struct xen_cpu_topology_info xen_cpu_topology_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_cpu_topology_info_t);
 
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index d6ddadc..3ac59c7 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -330,6 +330,12 @@ static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+static XSM_INLINE int xsm_get_cpu_topology(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_TARGET);
+    return xsm_default_action(action, current->domain, d);
+}
+
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
 static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 479b103..98eb86f 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -88,6 +88,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, iomem_mapping);
     set_to_dummy_if_null(ops, pci_config_permission);
     set_to_dummy_if_null(ops, get_vnumainfo);
+    set_to_dummy_if_null(ops, get_cpu_topology);
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
     set_to_dummy_if_null(ops, get_device_group);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 19ceacf..29ee1e1 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -427,6 +427,11 @@ static int flask_get_vnumainfo(struct domain *d)
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_VNUMAINFO);
 }
 
+static int flask_get_cpu_topology(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_TOPOLOGY);
+}
+
 static int flask_console_io(struct domain *d, int cmd)
 {
     u32 perm;
@@ -752,6 +757,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_gnttab_limits:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_GNTTAB_LIMITS);
 
+    case XEN_DOMCTL_set_cpu_topology:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPU_TOPOLOGY);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
@@ -1800,6 +1808,8 @@ static struct xsm_operations flask_ops = {
     .do_xsm_op = do_flask_op,
     .get_vnumainfo = flask_get_vnumainfo,
 
+    .get_cpu_topology = flask_get_cpu_topology,
+
     .vm_event_control = flask_vm_event_control,
 
 #ifdef CONFIG_HAS_MEM_ACCESS
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index d0a1ec5..e531ec0 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -250,6 +250,10 @@ class domain2
     psr_alloc
 # XEN_DOMCTL_set_gnttab_limits
     set_gnttab_limits
+# XEN_DOMCTL_set_cpu_topology
+    set_cpu_topology
+# XENMEM_get_cpu_topology
+    get_cpu_topology
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.8.3.1



* [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercalls to set and get cpu topology Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-04-23 16:04   ` Jan Beulich
  2018-01-08  4:01 ` [RFC PATCH 3/8] xl/parse: introduce cpu_topology to guest config Chao Gao
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Chao Gao

This removes the fixed mapping between vcpu_id and apic_id (previously
apic_id = vcpu_id * 2); the (x2)APIC ID is now taken from the per-domain
apic_id array.
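
For reference, set_x2apic_id() below follows the architectural x2APIC
logical ID format: the cluster ID lives in LDR bits 31:16 and a one-hot
logical ID within the cluster in bits 15:0. A worked (hypothetical)
example:

    /* x2APIC ID 0x21: cluster = 0x21 >> 4 = 2, intra-cluster bit = 1 */
    ldr = ((0x21 & ~0xf) << 12) | (1 << (0x21 & 0xf));  /* = 0x00020002 */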

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 xen/arch/x86/cpuid.c             |  7 +++++--
 xen/arch/x86/hvm/vlapic.c        | 10 +++++-----
 xen/include/asm-x86/hvm/domain.h |  3 +++
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 5ee82d3..b47dc86 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -696,7 +696,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         /* TODO: Rework topology logic. */
         res->b &= 0x00ffffffu;
         if ( is_hvm_domain(d) )
-            res->b |= (v->vcpu_id * 2) << 24;
+            res->b |= hvm_vcpu_apic_id(v) << 24;
 
         /* TODO: Rework vPMU control in terms of toolstack choices. */
         if ( vpmu_available(v) &&
@@ -875,7 +875,10 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
             *(uint8_t *)&res->c = subleaf;
 
             /* Fix the x2APIC identifier. */
-            res->d = v->vcpu_id * 2;
+            if ( is_hvm_domain(d) )
+                res->d = hvm_vcpu_x2apic_id(v);
+            else
+                res->d = v->vcpu_id * 2;
         }
         break;
 
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 50f53bd..01848b0 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1053,10 +1053,10 @@ static const struct hvm_mmio_ops vlapic_mmio_ops = {
 
 static void set_x2apic_id(struct vlapic *vlapic)
 {
-    u32 id = vlapic_vcpu(vlapic)->vcpu_id;
-    u32 ldr = ((id & ~0xf) << 12) | (1 << (id & 0xf));
+    u32 x2apic_id = hvm_vcpu_x2apic_id(vlapic_vcpu(vlapic));
+    u32 ldr = ((x2apic_id & ~0xf) << 12) | (1 << (x2apic_id & 0xf));
 
-    vlapic_set_reg(vlapic, APIC_ID, id * 2);
+    vlapic_set_reg(vlapic, APIC_ID, x2apic_id);
     vlapic_set_reg(vlapic, APIC_LDR, ldr);
 }
 
@@ -1365,7 +1365,7 @@ void vlapic_reset(struct vlapic *vlapic)
     if ( v->vcpu_id == 0 )
         vlapic->hw.apic_base_msr |= MSR_IA32_APICBASE_BSP;
 
-    vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24);
+    vlapic_set_reg(vlapic, APIC_ID, hvm_vcpu_apic_id(v) << 24);
     vlapic_do_init(vlapic);
 }
 
@@ -1456,7 +1456,7 @@ static void lapic_load_fixup(struct vlapic *vlapic)
          * here, but can be dropped as soon as it is found to conflict with
          * other (future) changes.
          */
-        if ( GET_xAPIC_ID(id) != vlapic_vcpu(vlapic)->vcpu_id * 2 ||
+        if ( GET_xAPIC_ID(id) != hvm_vcpu_apic_id(vlapic_vcpu(vlapic)) ||
              id != SET_xAPIC_ID(GET_xAPIC_ID(id)) )
             printk(XENLOG_G_WARNING "%pv: bogus APIC ID %#x loaded\n",
                    vlapic_vcpu(vlapic), id);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 501ed99..f3da7ed 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -213,6 +213,9 @@ struct hvm_domain {
     uint8_t thread_per_core;
 };
 
+#define hvm_vcpu_x2apic_id(v) ((v)->domain->arch.hvm_domain.apic_id[(v)->vcpu_id])
+#define hvm_vcpu_apic_id(v) (hvm_vcpu_x2apic_id(v) & 0xff)
+
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
-- 
1.8.3.1



* [RFC PATCH 3/8] xl/parse: introduce cpu_topology to guest config
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercalls to set and get cpu topology Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 4/8] libxl: calculate and set vcpu topology Chao Gao
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Wei Liu, Chao Gao

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 docs/man/xl.cfg.pod.5.in    | 21 +++++++++++++++++++++
 tools/libxl/libxl_types.idl |  7 +++++++
 tools/xl/xl_parse.c         | 19 +++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index b7b91d8..8ff401e 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -501,6 +501,27 @@ Boot the guest using the OVMF UEFI firmware.
 
 Load the specified file as firmware for the guest.
 
+=item B<cpu_topology="CPU_TOPOLOGY_SPEC">
+
+Each B<CPU_TOPOLOGY_SPEC> is a comma-separated list of C<KEY=VALUE>
+settings from the following list:
+
+=over 4
+
+=item B<cores=NUMBER>
+specify the number of cores in each socket.
+
+=item B<threads=NUMBER>
+specify the number of threads in each core.
+
+=item B<real_threads=NUMBER>
+specify the real number of threads in each core. This field can be used
+to expose only some of the threads in a core. For example, the CPUID
+instruction may indicate that each core has two threads while only one
+thread per core is actually exposed to the guest.
+
+=back
+
 =back
 
 =head4 PVH guest options
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index a239324..8c80e67 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -457,6 +457,12 @@ libxl_altp2m_mode = Enumeration("altp2m_mode", [
     (3, "limited"),
     ], init_val = "LIBXL_ALTP2M_MODE_DISABLED")
 
+libxl_cpu_topology = Struct("cpu_topology", [
+    ("cores",           uint8),
+    ("threads",         uint8),
+    ("real_threads",    uint8),
+    ])
+
 libxl_domain_build_info = Struct("domain_build_info",[
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
@@ -581,6 +587,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
                                        ("mca_caps",         uint64),
+                                       ("cpu_topology",     libxl_cpu_topology),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 9a692d5..9b76920 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1373,6 +1373,25 @@ void parse_config_data(const char *config_source,
              */
         }
 
+        if (!xlu_cfg_get_string(config, "cpu_topology", &buf, 0))
+        {
+            char *buf2, *p, *oparg, *strtok_ptr;
+
+            buf2 = strdup(buf);
+
+            for (p = strtok_r(buf2, ",", &strtok_ptr); p;
+                 p = strtok_r(NULL, ",", &strtok_ptr)) {
+                if (MATCH_OPTION("cores", p, oparg)) {
+                    b_info->u.hvm.cpu_topology.cores = parse_ulong(oparg);
+                } else if (MATCH_OPTION("threads", p, oparg)) {
+                    b_info->u.hvm.cpu_topology.threads = parse_ulong(oparg);
+                } else if (MATCH_OPTION("real_threads", p, oparg)) {
+                    b_info->u.hvm.cpu_topology.real_threads =
+                        parse_ulong(oparg);
+                }
+            }
+        }
+
         break;
     case LIBXL_DOMAIN_TYPE_PVH:
     case LIBXL_DOMAIN_TYPE_PV:
-- 
1.8.3.1



* [RFC PATCH 4/8] libxl: calculate and set vcpu topology
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
                   ` (2 preceding siblings ...)
  2018-01-08  4:01 ` [RFC PATCH 3/8] xl/parse: introduce cpu_topology to guest config Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 5/8] xen/mem: handle XENMEM_get_cpu_topology in compat_memory_op Chao Gao
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Wei Liu, Chao Gao

Before creating vCPUs, assign a unique APIC_ID to each vCPU according to
the number of vCPUs, the number of cores/threads per socket/core, and
the guest's NUMA configuration. Refer to the SDM section "PROGRAMMING
CONSIDERATIONS FOR HARDWARE MULTI-THREADING CAPABLE PROCESSORS" for more
information about how software should extract topology from an APIC_ID.
The unique APIC_ID consists of three subfields: PACKAGE_ID, CORE_ID and
SMT_ID. PACKAGE_ID is the virtual NUMA node ID (if there is no NUMA
information, PACKAGE_ID is always 0); CORE_ID and SMT_ID are assigned
incrementally from 0.
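
For example (hypothetical values), with threads=2, real_threads=2,
cores=4 and two vNUMA nodes of four vCPUs each, the shifts computed in
libxl__arch_cpu_topology_init() come out as:

    core_shift   = fls(threads - 1) = fls(1) = 1
    socket_shift = core_shift + fls(cores - 1) = 1 + fls(3) = 3

    node 0 (vCPUs 0-3): APIC IDs 0, 1, 2, 3
    node 1 (vCPUs 4-7): APIC IDs 8, 9, 10, 11

i.e. bit 0 of an APIC ID is the SMT_ID, bits 2:1 the CORE_ID and the
remaining upper bits the PACKAGE_ID.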

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 tools/libxc/include/xenctrl.h |  7 +++++
 tools/libxc/xc_domain.c       | 36 ++++++++++++++++++++++
 tools/libxl/libxl_arch.h      |  4 +++
 tools/libxl/libxl_arm.c       |  6 ++++
 tools/libxl/libxl_dom.c       |  4 +++
 tools/libxl/libxl_types.idl   |  1 +
 tools/libxl/libxl_x86.c       | 70 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 128 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 09e1363..e897e5d 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1341,6 +1341,13 @@ int xc_domain_set_memory_map(xc_interface *xch,
 int xc_get_machine_memory_map(xc_interface *xch,
                               struct e820entry entries[],
                               uint32_t max_entries);
+
+int xc_set_cpu_topology(xc_interface *xch,
+                        uint32_t domid,
+                        uint32_t *tid,
+                        uint32_t size,
+                        uint8_t thread_per_core,
+                        uint8_t core_per_socket);
 #endif
 
 int xc_reserved_device_memory_map(xc_interface *xch,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 3ccd27f..f8bb1eb 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2435,6 +2435,42 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = domid;
     return do_domctl(xch, &domctl);
 }
+
+int xc_set_cpu_topology(xc_interface *xch,
+                        uint32_t domid,
+                        uint32_t *tid,
+                        uint32_t size,
+                        uint8_t thread_per_core,
+                        uint8_t core_per_socket)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(tid, sizeof(*tid) * size,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+    domctl.cmd = XEN_DOMCTL_set_cpu_topology;
+    domctl.domain = domid;
+
+    if ( xc_hypercall_bounce_pre(xch, tid) )
+    {
+        rc = -1;
+        errno = ENOMEM;
+        goto failed;
+    }
+
+    set_xen_guest_handle(domctl.u.cpu_topology.tid, tid);
+    domctl.u.cpu_topology.size = size;
+    domctl.u.cpu_topology.core_per_socket = core_per_socket;
+    domctl.u.cpu_topology.thread_per_core = thread_per_core;
+    memset(domctl.u.cpu_topology.pad, 0, sizeof(domctl.u.cpu_topology.pad));
+
+    rc = do_domctl(xch, &domctl);
+
+ failed:
+    xc_hypercall_bounce_post(xch, tid);
+
+    return rc;
+}
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 784ec7f..61d9492 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -78,6 +78,10 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+int libxl__arch_cpu_topology_init(libxl__gc *gc, uint32_t domid,
+                                  libxl_domain_config *d_config);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index de1840b..70c328e 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1154,6 +1154,12 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, false);
 }
 
+int libxl__arch_cpu_topology_init(libxl__gc *gc, uint32_t domid,
+                                  libxl_domain_config *d_config)
+{
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ef834e6..13e27d3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -353,6 +353,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     int rc;
     uint64_t size;
 
+    if (libxl__arch_cpu_topology_init(gc, domid, d_config)) {
+        return ERROR_FAIL;
+    }
+
     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
         LOG(ERROR, "Couldn't set max vcpu count");
         return ERROR_FAIL;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 8c80e67..6e0d96a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -461,6 +461,7 @@ libxl_cpu_topology = Struct("cpu_topology", [
     ("cores",           uint8),
     ("threads",         uint8),
     ("real_threads",    uint8),
+    ("tid",             Array(uint32, "tid_size")),
     ])
 
 libxl_domain_build_info = Struct("domain_build_info",[
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 5f91fe4..f28af46 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -596,6 +596,76 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, true);
 }
 
+static inline int fls(unsigned int x)
+{
+    int r;
+
+    asm ( "bsr %1,%0\n\t"
+          "jnz 1f\n\t"
+          "mov $-1,%0\n"
+          "1:" : "=r" (r) : "rm" (x));
+    return r + 1;
+}
+
+int libxl__arch_cpu_topology_init(libxl__gc *gc, uint32_t domid,
+                                  libxl_domain_config *d_config)
+{
+    int i, rc = 0;
+    uint8_t core_shift, socket_shift, real_threads;
+    unsigned int *tid;
+    libxl_domain_build_info *const info = &d_config->b_info;
+
+    if (!info->u.hvm.cpu_topology.cores)
+        info->u.hvm.cpu_topology.cores = 128;
+    if (!info->u.hvm.cpu_topology.threads)
+        info->u.hvm.cpu_topology.threads = 2;
+    if (!info->u.hvm.cpu_topology.real_threads)
+        info->u.hvm.cpu_topology.real_threads = 1;
+
+    if (info->u.hvm.cpu_topology.threads <
+         info->u.hvm.cpu_topology.real_threads)
+    {
+        LOGE(ERROR, "threads cannot be smaller than real threads");
+        return ERROR_FAIL;
+    }
+
+    real_threads = info->u.hvm.cpu_topology.real_threads;
+    tid = libxl__calloc(gc, info->max_vcpus, sizeof(unsigned int));
+    core_shift = fls(info->u.hvm.cpu_topology.threads - 1);
+    socket_shift = core_shift + fls(info->u.hvm.cpu_topology.cores - 1);
+    if (info->num_vnuma_nodes == 0) {
+        for (i = 0; i < info->max_vcpus; i++) {
+            tid[i] = ((i / real_threads) << core_shift) + i % real_threads;
+        }
+    } else {
+        int socket_id;
+
+        for (socket_id = 0; socket_id < info->num_vnuma_nodes; socket_id++) {
+            int j = 0;
+
+            libxl_for_each_set_bit(i, info->vnuma_nodes[socket_id].vcpus) {
+                tid[i] = (socket_id << socket_shift) +
+                         ((j / real_threads) << core_shift) +
+                         (j % real_threads);
+                j++;
+            }
+        }
+    }
+
+    info->u.hvm.cpu_topology.tid = tid;
+    info->u.hvm.cpu_topology.tid_size = info->max_vcpus;
+
+    rc = xc_set_cpu_topology(libxl__gc_owner(gc)->xch, domid, tid,
+                             info->max_vcpus, info->u.hvm.cpu_topology.threads,
+                             info->u.hvm.cpu_topology.cores);
+    if (rc < 0) {
+        LOGE(ERROR, "xc_set_cpu_topology failed");
+        rc = ERROR_FAIL;
+    }
+
+    return rc;
+
+}
 /*
  * Local variables:
  * mode: C
-- 
1.8.3.1



* [RFC PATCH 5/8] xen/mem: handle XENMEM_get_cpu_topology in compat_memory_op
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
                   ` (3 preceding siblings ...)
  2018-01-08  4:01 ` [RFC PATCH 4/8] libxl: calculate and set vcpu topology Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 6/8] hvmloader: get CPU topology information from hypervisor Chao Gao
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, Jan Beulich, Andrew Cooper, Chao Gao

hvmloader needs to get the CPU topology in order to build the ACPI
tables and boot the APs. Handle the new XENMEM_get_cpu_topology subop in
compat_memory_op() so that 32-bit callers such as hvmloader work too.
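
The translation is needed because a 32-bit guest's embedded handle is
only 4 bytes wide, so the compat layout of the structure differs from
the native one (an illustrative sketch, not the generated definitions):

    union {                          /* native (64-bit) layout */
        XEN_GUEST_HANDLE(uint32) h;  /* padded to 8 bytes */
        uint64_t pad;
    } tid;

    union {                          /* compat (32-bit guest) layout */
        uint32_t h;                  /* handle is only 4 bytes wide */
    } tid;

guest_from_compat_handle() widens the handle before the common handler
runs.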

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 xen/common/compat/memory.c | 21 +++++++++++++++++++++
 xen/include/xlat.lst       |  1 +
 2 files changed, 22 insertions(+)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 35bb259..b41f210 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -71,6 +71,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
             struct xen_remove_from_physmap *xrfp;
             struct xen_vnuma_topology_info *vnuma;
             struct xen_mem_access_op *mao;
+            struct xen_cpu_topology_info *cti;
         } nat;
         union {
             struct compat_memory_reservation rsrv;
@@ -79,6 +80,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
             struct compat_add_to_physmap_batch atpb;
             struct compat_vnuma_topology_info vnuma;
             struct compat_mem_access_op mao;
+            struct compat_cpu_topology_info cti;
         } cmp;
 
         set_xen_guest_handle(nat.hnd, COMPAT_ARG_XLAT_VIRT_BASE);
@@ -395,6 +397,21 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
         }
 #endif
 
+        case XENMEM_get_cpu_topology:
+        {
+            enum XLAT_cpu_topology_info_tid tid = XLAT_cpu_topology_info_tid_h;
+
+            if ( copy_from_guest(&cmp.cti, compat, 1) )
+                return -EFAULT;
+#define XLAT_cpu_topology_info_HNDL_tid_h(_d_, _s_)      \
+            guest_from_compat_handle((_d_)->tid.h, (_s_)->tid.h)
+
+            XLAT_cpu_topology_info(nat.cti, &cmp.cti);
+
+#undef XLAT_cpu_topology_info_HNDL_tid_h
+            break;
+        }
+
         default:
             return compat_arch_memory_op(cmd, compat);
         }
@@ -527,6 +544,10 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
         case XENMEM_access_op:
             break;
 
+        case XENMEM_get_cpu_topology:
+            printk("finish getting cpu topology\n");
+            break;
+
         case XENMEM_get_vnumainfo:
             cmp.vnuma.nr_vnodes = nat.vnuma->nr_vnodes;
             cmp.vnuma.nr_vcpus = nat.vnuma->nr_vcpus;
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 4346cbe..8380bb7 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -89,6 +89,7 @@
 !	reserved_device_memory_map	memory.h
 ?	vmemrange			memory.h
 !	vnuma_topology_info		memory.h
+!	cpu_topology_info 		memory.h
 ?	physdev_eoi			physdev.h
 ?	physdev_get_free_pirq		physdev.h
 ?	physdev_irq			physdev.h
-- 
1.8.3.1



* [RFC PATCH 6/8] hvmloader: get CPU topology information from hypervisor
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
                   ` (4 preceding siblings ...)
  2018-01-08  4:01 ` [RFC PATCH 5/8] xen/mem: handle XENMEM_get_cpu_topology in compat_memory_op Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 7/8] libacpi: build madt/srat according to cpu topology Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 8/8] x86/cpuid: emulate extended topology enumeration leaf Chao Gao
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, Jan Beulich, Andrew Cooper, Chao Gao

The previous fixed relationship between APIC_ID and vcpu_id no longer
holds. The APIC_ID array obtained from the hypervisor is used to boot
the APs.

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 tools/firmware/hvmloader/Makefile    |  2 +-
 tools/firmware/hvmloader/hvmloader.c |  8 +++++
 tools/firmware/hvmloader/smp.c       |  3 +-
 tools/firmware/hvmloader/topology.c  | 57 ++++++++++++++++++++++++++++++++++++
 tools/firmware/hvmloader/topology.h  | 36 +++++++++++++++++++++++
 5 files changed, 104 insertions(+), 2 deletions(-)
 create mode 100644 tools/firmware/hvmloader/topology.c
 create mode 100644 tools/firmware/hvmloader/topology.h

diff --git a/tools/firmware/hvmloader/Makefile b/tools/firmware/hvmloader/Makefile
index a5b4c32..9124208 100644
--- a/tools/firmware/hvmloader/Makefile
+++ b/tools/firmware/hvmloader/Makefile
@@ -32,7 +32,7 @@ CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__
 
 OBJS  = hvmloader.o mp_tables.o util.o smbios.o 
-OBJS += smp.o cacheattr.o xenbus.o vnuma.o
+OBJS += smp.o cacheattr.o xenbus.o vnuma.o topology.o
 OBJS += e820.o pci.o pir.o ctype.o
 OBJS += hvm_param.o
 OBJS += ovmf.o seabios.o
diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloader/hvmloader.c
index f603f68..c952a93 100644
--- a/tools/firmware/hvmloader/hvmloader.c
+++ b/tools/firmware/hvmloader/hvmloader.c
@@ -25,6 +25,7 @@
 #include "pci_regs.h"
 #include "apic_regs.h"
 #include "vnuma.h"
+#include "topology.h"
 #include <acpi2_0.h>
 #include <xen/version.h>
 #include <xen/hvm/params.h>
@@ -344,6 +345,13 @@ int main(void)
     apic_setup();
     pci_setup();
 
+    /* smp_initialise() needs the mapping between vcpu_id and apic_id */
+    if ( init_cpu_topology_info() )
+    {
+        printf("Failed to get CPU topology\n");
+        return -1;
+    }
+
     smp_initialise();
 
     perform_tests();
diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c
index 082b17f..5f3b7ef 100644
--- a/tools/firmware/hvmloader/smp.c
+++ b/tools/firmware/hvmloader/smp.c
@@ -22,6 +22,7 @@
 #include "util.h"
 #include "config.h"
 #include "apic_regs.h"
+#include "topology.h"
 
 #define AP_BOOT_EIP 0x1000
 extern char ap_boot_start[], ap_boot_end[];
@@ -86,7 +87,7 @@ static void lapic_wait_ready(void)
 
 static void boot_cpu(unsigned int cpu)
 {
-    unsigned int icr2 = SET_APIC_DEST_FIELD(LAPIC_ID(cpu));
+    unsigned int icr2 = SET_APIC_DEST_FIELD(topology_id[cpu]);
 
     /* Initialise shared variables. */
     ap_cpuid = cpu;
diff --git a/tools/firmware/hvmloader/topology.c b/tools/firmware/hvmloader/topology.c
new file mode 100644
index 0000000..144a4d2
--- /dev/null
+++ b/tools/firmware/hvmloader/topology.c
@@ -0,0 +1,57 @@
+/*
+ * topology.c: obtain vCPU topology from hypervisor
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include "util.h"
+#include "hypercall.h"
+#include "topology.h"
+#include <xen/memory.h>
+
+unsigned int *topology_id;
+unsigned int topology_id_size;
+
+int init_cpu_topology_info(void)
+{
+    int rc;
+    struct xen_cpu_topology_info cpu_topology =
+        { .domid = DOMID_SELF, .size = hvm_info->nr_vcpus };
+
+    topology_id = scratch_alloc(sizeof(*topology_id) * hvm_info->nr_vcpus, 0);
+    set_xen_guest_handle(cpu_topology.tid.h, topology_id);
+    rc = hypercall_memory_op(XENMEM_get_cpu_topology, &cpu_topology);
+    if ( rc < 0 )
+        printf("Failed to retrieve cpu topology, rc = %d\n", rc);
+    topology_id_size = hvm_info->nr_vcpus;
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/firmware/hvmloader/topology.h b/tools/firmware/hvmloader/topology.h
new file mode 100644
index 0000000..bc71c3b
--- /dev/null
+++ b/tools/firmware/hvmloader/topology.h
@@ -0,0 +1,36 @@
+/******************************************************************************
+ * topology.h
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+#ifndef __HVMLOADER_TOPOLOGY_H_
+#define __HVMLOADER_TOPOLOGY_H_
+
+extern unsigned int *topology_id;
+extern unsigned int topology_id_size;
+
+int init_cpu_topology_info(void);
+
+#endif /* __HVMLOADER_TOPOLOGY_H_ */
-- 
1.8.3.1



* [RFC PATCH 7/8] libacpi: build madt/srat according to cpu topology
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
                   ` (5 preceding siblings ...)
  2018-01-08  4:01 ` [RFC PATCH 6/8] hvmloader: get CPU topology information from hypervisor Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  2018-01-08  4:01 ` [RFC PATCH 8/8] x86/cpuid: emulate extended topology enumeration leaf Chao Gao
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Andrew Cooper, Wei Liu, Jan Beulich, Chao Gao

Callers pass an APIC_ID array to libacpi, which uses it to fill the
APIC ID fields of the MADT and SRAT.
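
For instance, with the hypothetical two-node, eight-vCPU assignment from
patch 4, the MADT local APIC entries would pair ACPI processor IDs with
the assigned APIC IDs instead of vcpu_id * 2:

    acpi_processor_id:  0  1  2  3  4  5   6   7
    apic_id:            0  1  2  3  8  9  10  11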

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 tools/firmware/hvmloader/util.c | 8 ++++++--
 tools/libacpi/build.c           | 4 ++--
 tools/libacpi/libacpi.h         | 5 ++++-
 tools/libxl/libxl_x86_acpi.c    | 6 ++++--
 4 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 0c3f2d2..e4c7eb3 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -22,6 +22,7 @@
 #include "hypercall.h"
 #include "ctype.h"
 #include "vnuma.h"
+#include "topology.h"
 #include <acpi2_0.h>
 #include <libacpi.h>
 #include <stdint.h>
@@ -883,9 +884,9 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt,
     /* ACPI builder currently doesn't free memory so this is just a stub */
 }
 
-static uint32_t acpi_lapic_id(unsigned cpu)
+static uint32_t acpi_lapic_id(unsigned cpu, const struct acpi_config *config)
 {
-    return LAPIC_ID(cpu);
+    return config->topology_id[cpu];
 }
 
 void hvmloader_acpi_build_tables(struct acpi_config *config,
@@ -981,6 +982,9 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
     config->numa.vdistance = vdistance;
     config->numa.vmemrange = vmemrange;
 
+    config->topology_id = topology_id;
+    config->topology_id_size = topology_id_size;
+
     config->hvminfo = hvm_info;
 
     config->rsdp = physical;
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index f9881c9..3cd5eb9 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -156,7 +156,7 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt,
         lapic->length  = sizeof(*lapic);
         /* Processor ID must match processor-object IDs in the DSDT. */
         lapic->acpi_processor_id = i;
-        lapic->apic_id = config->lapic_id(i);
+        lapic->apic_id = config->lapic_id(i, config);
         lapic->flags = (test_bit(i, hvminfo->vcpu_online)
                         ? ACPI_LOCAL_APIC_ENABLED : 0);
         lapic++;
@@ -244,7 +244,7 @@ static struct acpi_20_srat *construct_srat(struct acpi_ctxt *ctxt,
         processor->type     = ACPI_PROCESSOR_AFFINITY;
         processor->length   = sizeof(*processor);
         processor->domain   = config->numa.vcpu_to_vnode[i];
-        processor->apic_id  = config->lapic_id(i);
+        processor->apic_id  = config->lapic_id(i, config);
         processor->flags    = ACPI_LOCAL_APIC_AFFIN_ENABLED;
         processor++;
     }
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index a2efd23..981ad10 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -91,11 +91,14 @@ struct acpi_config {
     unsigned long rsdp;
 
     /* x86-specific parameters */
-    uint32_t (*lapic_id)(unsigned cpu);
+    uint32_t (*lapic_id)(unsigned cpu, const struct acpi_config *config);
     uint32_t lapic_base_address;
     uint32_t ioapic_base_address;
     uint16_t pci_isa_irq_mask;
     uint8_t ioapic_id;
+    /* CPU topology info */
+    uint32_t *topology_id;
+    uint32_t topology_id_size;
 };
 
 int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config);
diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index 9a7c904..1a227f6 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -84,9 +84,9 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt,
 {
 }
 
-static uint32_t acpi_lapic_id(unsigned cpu)
+static uint32_t acpi_lapic_id(unsigned cpu, const struct acpi_config *config)
 {
-    return cpu * 2;
+    return config->topology_id[cpu];
 }
 
 static int init_acpi_config(libxl__gc *gc, 
@@ -155,6 +155,8 @@ static int init_acpi_config(libxl__gc *gc,
     config->lapic_base_address = LAPIC_BASE_ADDRESS;
     config->lapic_id = acpi_lapic_id;
     config->acpi_revision = 5;
+    config->topology_id = b_info->u.hvm.cpu_topology.tid;
+    config->topology_id_size = b_info->u.hvm.cpu_topology.tid_size;
 
     rc = 0;
 out:
-- 
1.8.3.1



* [RFC PATCH 8/8] x86/cpuid: emulate extended topology enumeration leaf
  2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
                   ` (6 preceding siblings ...)
  2018-01-08  4:01 ` [RFC PATCH 7/8] libacpi: build madt/srat according to cpu topology Chao Gao
@ 2018-01-08  4:01 ` Chao Gao
  7 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-08  4:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Ian Jackson, Jan Beulich, Chao Gao

This patch sets the policy for the guest's CPUID.0xb. The EAX return
value can be inferred from the number of cores/threads per socket/core.
EBX cannot be zero, as that would mean CPUID.0xb isn't supported. EDX is
the x2APIC ID of each vCPU, which is adjusted by Xen according to the
context. ECX[7:0] should echo the ECX input value, and ECX[15:8] should
be the level type.

This patch also sets the policy for the guest's CPUID.4:EAX[25:14].
Rather than passing the host value to the guest, we set this field to
the number of threads per core, to be consistent with the CPU topology.
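
As a concrete (hypothetical) example, for thread_per_core = 2 and
core_per_socket = 4 the policy below yields:

    subleaf 0 (SMT level):  EAX[4:0] = fls(1) = 1, EBX = 1,
                            ECX[15:8] = 1, ECX[7:0] echoing the input,
                            EDX = the vCPU's x2APIC ID (filled by Xen)
    subleaf 1 (core level): EAX[4:0] = fls(1) + fls(3) = 3, EBX = 1,
                            ECX[15:8] = 2, ECX[7:0] echoing the input,
                            EDX = the vCPU's x2APIC ID (filled by Xen)
    subleaf >= 2:           all registers zero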

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 tools/libxc/include/xenctrl.h |  8 ++++
 tools/libxc/xc_cpuid_x86.c    | 86 ++++++++++++++++++++++++++++++++++++++++---
 tools/libxc/xc_domain.c       | 35 ++++++++++++++++++
 xen/arch/x86/cpuid.c          | 25 +++++++++++++
 xen/arch/x86/domctl.c         |  8 ++++
 xen/include/asm-x86/cpuid.h   | 12 ++++++
 6 files changed, 168 insertions(+), 6 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index e897e5d..2eab621 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1348,6 +1348,14 @@ int xc_set_cpu_topology(xc_interface *xch,
                         uint32_t size,
                         uint8_t thread_per_core,
                         uint8_t core_per_socket);
+
+int xc_get_cpu_topology(xc_interface *xch,
+                        uint32_t domid,
+                        uint32_t size,
+                        uint32_t *tid,
+                        uint8_t *thread_per_core,
+                        uint8_t *core_per_socket);
+
 #endif
 
 int xc_reserved_device_memory_map(xc_interface *xch,
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index 25b922e..0891ace 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -178,6 +178,10 @@ struct cpuid_domain_info
     /* HVM-only information. */
     bool pae;
     bool nestedhvm;
+    /* CPU topology information. */
+    bool topology_supported;
+    uint8_t core_per_socket;
+    uint8_t thread_per_core;
 };
 
 static void cpuid(const unsigned int *input, unsigned int *regs)
@@ -280,6 +284,14 @@ static int get_cpuid_domain_info(xc_interface *xch, uint32_t domid,
             if ( rc )
                 return rc;
         }
+
+        /* retrieve CPU topology information */
+        rc = xc_get_cpu_topology(xch, domid, 0, NULL,
+                                 &info->thread_per_core,
+                                 &info->core_per_socket);
+        info->topology_supported = !rc;
+        if ( rc )
+            return rc;
     }
     else
     {
@@ -365,6 +377,17 @@ static void amd_xc_cpuid_policy(xc_interface *xch,
     }
 }
 
+static inline int fls(unsigned int x)
+{
+    int r;
+
+    asm ( "bsr %1,%0\n\t"
+          "jnz 1f\n\t"
+          "mov $-1,%0\n"
+          "1:" : "=r" (r) : "rm" (x));
+    return r + 1;
+}
+
 static void intel_xc_cpuid_policy(xc_interface *xch,
                                   const struct cpuid_domain_info *info,
                                   const unsigned int *input, unsigned int *regs)
@@ -379,6 +402,38 @@ static void intel_xc_cpuid_policy(xc_interface *xch,
         regs[0] = (((regs[0] & 0x7c000000u) << 1) | 0x04000000u |
                    (regs[0] & 0x3ffu));
         regs[3] &= 0x3ffu;
+
+        if ( !info->topology_supported )
+            break;
+        /* only emulate cache topology when host supports this level */
+        if ( (input[1] == 2 || input[1] == 3) && regs[0] )
+            regs[0] = (regs[0] & 0x3fffu) | (info->core_per_socket << 26) |
+                      ((info->thread_per_core - 1) << 14);
+        break;
+
+    case 0x0000000b:
+        if ( !info->topology_supported )
+            break;
+
+        switch ( input[1] )
+        {
+        case 0:
+            regs[0] = fls(info->thread_per_core - 1);
+            regs[1] = 1;
+            regs[2] = 1 << 8;
+            break;
+
+        case 1:
+            regs[0] = fls(info->thread_per_core - 1) +
+                      fls(info->core_per_socket - 1);
+            regs[1] = 1;
+            regs[2] = 2 << 8;
+            break;
+
+        default:
+            regs[0] = regs[1] = regs[2] = regs[3] = 0;
+            break;
+        }
         break;
 
     case 0x80000000:
@@ -409,16 +464,27 @@ static void xc_cpuid_hvm_policy(xc_interface *xch,
         break;
 
     case 0x00000001:
-        /*
-         * EBX[23:16] is Maximum Logical Processors Per Package.
-         * Update to reflect vLAPIC_ID = vCPU_ID * 2.
-         */
-        regs[1] = (regs[1] & 0x0000ffffu) | ((regs[1] & 0x007f0000u) << 1);
+    {
+        if ( info->topology_supported )
+        {
+            int bit = fls(info->thread_per_core - 1) +
+                      fls(info->core_per_socket - 1);
+
+            regs[1] = (regs[1] & 0x0000ffffu) |
+                      (((bit < 8) ? (1 << bit) : 0xff) << 16);
+        }
+        else
+            /*
+             * EBX[23:16] is Maximum Logical Processors Per Package.
+             * Update to reflect vLAPIC_ID = vCPU_ID * 2.
+             */
+            regs[1] = (regs[1] & 0x0000ffffu) | ((regs[1] & 0x007f0000u) << 1);
 
         regs[2] = info->featureset[featureword_of(X86_FEATURE_SSE3)];
         regs[3] = (info->featureset[featureword_of(X86_FEATURE_FPU)] |
                    bitmaskof(X86_FEATURE_HTT));
         break;
+    }
 
     case 0x00000007: /* Intel-defined CPU features */
         if ( input[1] == 0 )
@@ -470,6 +536,7 @@ static void xc_cpuid_hvm_policy(xc_interface *xch,
 
     case 0x00000002: /* Intel cache info (dumped by AMD policy) */
     case 0x00000004: /* Intel cache info (dumped by AMD policy) */
+    case 0x0000000b: /* Intel Extended Topology Enumeration Leaf */
     case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x80000002: /* Processor name string */
     case 0x80000003: /* ... continued         */
@@ -757,12 +824,19 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid,
                 continue;
         }
 
+        if ( input[0] == 0xb )
+        {
+            input[1]++;
+            if ( regs[0] || regs[1] || regs[2] || regs[3] )
+                continue;
+        }
+
         input[0]++;
         if ( !(input[0] & 0x80000000u) && (input[0] > base_max ) )
             input[0] = 0x80000000u;
 
         input[1] = XEN_CPUID_INPUT_UNUSED;
-        if ( (input[0] == 4) || (input[0] == 7) )
+        if ( (input[0] == 4) || (input[0] == 7) || (input[0] == 0xb) )
             input[1] = 0;
         else if ( input[0] == 0xd )
             input[1] = 1; /* Xen automatically calculates almost everything. */
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index f8bb1eb..2ececbe 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2471,6 +2471,41 @@ int xc_set_cpu_topology(xc_interface *xch,
 
     return rc;
 }
+
+int xc_get_cpu_topology(xc_interface *xch,
+                        uint32_t domid,
+                        uint32_t size,
+                        uint32_t *tid,
+                        uint8_t *thread_per_core,
+                        uint8_t *core_per_socket)
+{
+    int rc;
+    DECLARE_HYPERCALL_BOUNCE(tid, sizeof(*tid) * size,
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    struct xen_cpu_topology_info cpu_topology =
+        { .domid = domid, .size = size };
+
+    if ( xc_hypercall_bounce_pre(xch, tid) )
+    {
+        rc = -1;
+        errno = ENOMEM;
+        goto failed;
+    }
+
+    set_xen_guest_handle(cpu_topology.tid.h, tid);
+    rc = do_memory_op(xch, XENMEM_get_cpu_topology, &cpu_topology,
+                      sizeof(cpu_topology));
+    if ( !rc )
+    {
+        *thread_per_core = cpu_topology.thread_per_core;
+        *core_per_socket = cpu_topology.core_per_socket;
+    }
+
+ failed:
+    xc_hypercall_buffer_free(xch, tid);
+
+    return rc;
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index b47dc86..4d8b20e 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -563,6 +563,23 @@ void recalculate_cpuid_policy(struct domain *d)
         }
     }
 
+    for ( i = 0; i < ARRAY_SIZE(p->ext_topo.raw); ++i )
+    {
+        if ( p->ext_topo.subleaf[i].level_type == 1 ||
+             p->ext_topo.subleaf[i].level_type == 2 )
+        {
+            /* Subleaf has a valid level type. Zero reserved fields. */
+            p->ext_topo.raw[i].a &= 0x0000001f;
+            p->ext_topo.raw[i].b &= 0x0000ffff;
+            p->ext_topo.raw[i].c &= 0x0000ffff;
+        }
+        else
+        {
+            zero_leaves(p->ext_topo.raw, i, ARRAY_SIZE(p->ext_topo.raw) - 1);
+            break;
+        }
+    }
+
     if ( !p->extd.svm )
         p->extd.raw[0xa] = EMPTY_LEAF;
 
@@ -634,6 +652,13 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
             *res = p->feat.raw[subleaf];
             break;
 
+        case 0xb:
+            if ( subleaf >= ARRAY_SIZE(p->ext_topo.raw) )
+                return;
+
+            *res = p->ext_topo.raw[subleaf];
+            break;
+
         case XSTATE_CPUID:
             if ( !p->basic.xsave || subleaf >= ARRAY_SIZE(p->xstate.raw) )
                 return;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 4e1bbd5..b7e4f44 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -70,6 +70,10 @@ static int update_domain_cpuid_info(struct domain *d,
              ctl->input[1] >= ARRAY_SIZE(p->feat.raw) )
             return 0;
 
+        if ( ctl->input[0] == 0xb &&
+             ctl->input[1] >= ARRAY_SIZE(p->ext_topo.raw) )
+            return 0;
+
         BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) < 2);
         if ( ctl->input[0] == XSTATE_CPUID &&
              ctl->input[1] != 1 ) /* Everything else automatically calculated. */
@@ -100,6 +104,10 @@ static int update_domain_cpuid_info(struct domain *d,
             p->feat.raw[ctl->input[1]] = leaf;
             break;
 
+        case 0xb:
+            p->ext_topo.raw[ctl->input[1]] = leaf;
+            break;
+
         case XSTATE_CPUID:
             p->xstate.raw[ctl->input[1]] = leaf;
             break;
diff --git a/xen/include/asm-x86/cpuid.h b/xen/include/asm-x86/cpuid.h
index d2dd841..54e36bd 100644
--- a/xen/include/asm-x86/cpuid.h
+++ b/xen/include/asm-x86/cpuid.h
@@ -64,6 +64,7 @@ DECLARE_PER_CPU(bool, cpuid_faulting_enabled);
 #define CPUID_GUEST_NR_BASIC      (0xdu + 1)
 #define CPUID_GUEST_NR_FEAT       (0u + 1)
 #define CPUID_GUEST_NR_CACHE      (5u + 1)
+#define CPUID_GUEST_NR_EXT_TOPO   (2u + 1)
 #define CPUID_GUEST_NR_XSTATE     (62u + 1)
 #define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
 #define CPUID_GUEST_NR_EXTD_AMD   (0x1cu + 1)
@@ -145,6 +146,17 @@ struct cpuid_policy
         };
     } feat;
 
+    /* Structured Extended Topology Enumeration Leaf */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_EXT_TOPO];
+        struct cpuid_ext_topo_leaf {
+            uint32_t shift:5, :27;
+            uint32_t proc_num:16, :16;
+            uint32_t level_num:8, level_type:8, :16;
+            uint32_t x2apic_id;
+        } subleaf[CPUID_GUEST_NR_EXT_TOPO];
+    } ext_topo;
+
     /* Xstate feature leaf: 0x0000000D[xx] */
     union {
         struct cpuid_leaf raw[CPUID_GUEST_NR_XSTATE];
-- 
1.8.3.1



* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology Chao Gao
@ 2018-01-08 18:14   ` Daniel De Graaf
  2018-01-09  9:06     ` Chao Gao
  2018-01-09 23:47   ` Andrew Cooper
  2018-04-23 16:00   ` Jan Beulich
  2 siblings, 1 reply; 20+ messages in thread
From: Daniel De Graaf @ 2018-01-08 18:14 UTC (permalink / raw)
  To: Chao Gao, xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, Jan Beulich, Andrew Cooper

On 01/07/2018 11:01 PM, Chao Gao wrote:
> Define the interface, structures and hypercalls for the toolstack to build
> the cpu topology and for the guest to retrieve it [1].
> Two subop hypercalls are introduced by this patch:
> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
> and XENMEM_get_cpu_topology to retrieve cpu topology information.
> 
> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>

When adding new XSM controls for use by device models, you also
need to add the permissions to the device_model macro defined in
tools/flask/policy/modules/xen.if.  If domains need to call this
function on themselves (is this only true for get?), you will also
need to add it to declare_domain_common.
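
For reference, the kind of rule being asked for would sit in
tools/flask/policy/modules/xen.if along these lines (a sketch only --
surrounding rules elided, and the exact placement depends on the final
shape of the hooks):

    define(`device_model', `
        # ... existing device model rules ...
        allow $1 $2_target:domain2 get_cpu_topology;
    ')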

> ---
>   xen/arch/x86/domctl.c               | 27 ++++++++++++++++++++++
>   xen/arch/x86/hvm/hvm.c              |  7 ++++++
>   xen/arch/x86/mm.c                   | 45 +++++++++++++++++++++++++++++++++++++
>   xen/include/asm-x86/hvm/domain.h    | 15 +++++++++++++
>   xen/include/public/domctl.h         | 22 ++++++++++++++++++
>   xen/include/public/memory.h         | 27 +++++++++++++++++++++-
>   xen/include/xsm/dummy.h             |  6 +++++
>   xen/xsm/dummy.c                     |  1 +
>   xen/xsm/flask/hooks.c               | 10 +++++++++
>   xen/xsm/flask/policy/access_vectors |  4 ++++
>   10 files changed, 163 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 36ab235..4e1bbd5 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -347,6 +347,29 @@ void arch_get_domain_info(const struct domain *d,
>           info->flags |= XEN_DOMINF_hap;
>   }
>   
> +static int arch_set_cpu_topology(struct domain *d,
> +                                 struct xen_domctl_cpu_topology *topology)
> +{
> +    if ( !is_hvm_domain(d) ||
> +         !topology->size || topology->size > HVM_MAX_VCPUS )
> +        return -EINVAL;
> +
> +    if ( !d->arch.hvm_domain.apic_id )
> +        d->arch.hvm_domain.apic_id = xmalloc_array(uint32_t, topology->size);
> +
> +    if ( !d->arch.hvm_domain.apic_id )
> +        return -ENOMEM;
> +
> +    if ( copy_from_guest(d->arch.hvm_domain.apic_id, topology->tid,
> +                         topology->size) )
> +        return -EFAULT;
> +
> +    d->arch.hvm_domain.apic_id_size = topology->size;
> +    d->arch.hvm_domain.core_per_socket = topology->core_per_socket;
> +    d->arch.hvm_domain.thread_per_core = topology->thread_per_core;
> +    return 0;
> +}
> +
>   #define MAX_IOPORTS 0x10000
>   
>   long arch_do_domctl(
> @@ -1555,6 +1578,10 @@ long arch_do_domctl(
>           recalculate_cpuid_policy(d);
>           break;
>   
> +    case XEN_DOMCTL_set_cpu_topology:
> +        ret = arch_set_cpu_topology(d, &domctl->u.cpu_topology);
> +        break;
> +
>       default:
>           ret = iommu_do_domctl(domctl, d, u_domctl);
>           break;
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 71fddfd..b3b3224 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1509,6 +1509,13 @@ int hvm_vcpu_initialise(struct vcpu *v)
>       int rc;
>       struct domain *d = v->domain;
>   
> +    if ( v->vcpu_id >= d->arch.hvm_domain.apic_id_size )
> +    {
> +        printk(XENLOG_ERR "d%dv%d's apic id isn't set.\n",
> +               d->domain_id, v->vcpu_id);
> +        return -ENOENT;
> +    }
> +
>       hvm_asid_flush_vcpu(v);
>   
>       spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index a56f875..b90e663 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4413,6 +4413,51 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>           return rc;
>       }
>   
> +    case XENMEM_get_cpu_topology:
> +    {
> +        struct domain *d;
> +        struct xen_cpu_topology_info topology;
> +
> +        if ( copy_from_guest(&topology, arg, 1) )
> +            return -EFAULT;
> +
> +        if ( topology.pad || topology.pad2 )
> +            return -EINVAL;
> +
> +        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
> +            return -ESRCH;
> +
> +        rc = xsm_get_cpu_topology(XSM_TARGET, d);
> +        if ( rc )
> +            goto get_cpu_topology_failed;
> +
> +        rc = -EOPNOTSUPP;
> +        if ( !is_hvm_domain(d) || !d->arch.hvm_domain.apic_id )
> +            goto get_cpu_topology_failed;
> +
> +        /* allow the size to be zero for users who don't care about apic_id */
> +        if ( topology.size )
> +        {
> +            rc = -E2BIG;
> +            if ( topology.size != d->arch.hvm_domain.apic_id_size )
> +                goto get_cpu_topology_failed;
> +
> +            rc = -EFAULT;
> +            if ( copy_to_guest(topology.tid.h, d->arch.hvm_domain.apic_id,
> +                               topology.size) )
> +                goto get_cpu_topology_failed;
> +        }
> +
> +        topology.core_per_socket = d->arch.hvm_domain.core_per_socket;
> +        topology.thread_per_core = d->arch.hvm_domain.thread_per_core;
> +
> +        rc = __copy_to_guest(arg, &topology, 1) ? -EFAULT : 0;
> +
> + get_cpu_topology_failed:
> +        rcu_unlock_domain(d);
> +        return rc;
> +    }
> +
>       default:
>           return subarch_memory_op(cmd, arg);
>       }
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 7f128c0..501ed99 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -196,6 +196,21 @@ struct hvm_domain {
>           struct vmx_domain vmx;
>           struct svm_domain svm;
>       };
> +
> +    /*
> +     * an array of APIC IDs, each unique, from which the socket ID,
> +     * core ID and thread ID can be extracted
> +     */
> +    uint32_t *apic_id;
> +    uint32_t apic_id_size;
> +
> +    /*
> +     * the number of cores/threads in a socket/core, which determines the
> +     * right-shift value needed to extract the {core/thread} ID from the
> +     * apic_id values defined above.
> +     */
> +    uint8_t core_per_socket;
> +    uint8_t thread_per_core;
>   };
>   
>   #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 9ae72959..99392b7 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1109,6 +1109,26 @@ struct xen_domctl_vuart_op {
>                                    */
>   };
>   
> +/* XEN_DOMCTL_set_cpu_topology */
> +struct xen_domctl_cpu_topology {
> +    /* IN - size of the 'tid' array */
> +    uint32_t size;
> +    /* IN - the number of cores in the same socket */
> +    uint8_t core_per_socket;
> +    /* IN - the number of threads in the same core */
> +    uint8_t thread_per_core;
> +    /* IN - should be 0 */
> +    uint8_t pad[2];
> +    /*
> +     * IN - an array of topology IDs (tid), used to compute a given
> +     * vcpu's core ID and thread ID. For x86, the topology ID is the
> +     * APIC ID, which is unique system-wide.
> +     */
> +    XEN_GUEST_HANDLE_64(uint32) tid;
> +};
> +typedef struct xen_domctl_cpu_topology xen_domctl_cpu_topology;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cpu_topology);
> +
>   struct xen_domctl {
>       uint32_t cmd;
>   #define XEN_DOMCTL_createdomain                   1
> @@ -1188,6 +1208,7 @@ struct xen_domctl {
>   #define XEN_DOMCTL_soft_reset                    79
>   #define XEN_DOMCTL_set_gnttab_limits             80
>   #define XEN_DOMCTL_vuart_op                      81
> +#define XEN_DOMCTL_set_cpu_topology              82
>   #define XEN_DOMCTL_gdbsx_guestmemio            1000
>   #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>   #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1252,6 +1273,7 @@ struct xen_domctl {
>           struct xen_domctl_psr_alloc         psr_alloc;
>           struct xen_domctl_set_gnttab_limits set_gnttab_limits;
>           struct xen_domctl_vuart_op          vuart_op;
> +        struct xen_domctl_cpu_topology      cpu_topology;
>           uint8_t                             pad[128];
>       } u;
>   };
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index 29386df..a6bcc64 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -650,7 +650,32 @@ struct xen_vnuma_topology_info {
>   typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
>   DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
>   
> -/* Next available subop number is 28 */
> +/*
> + * XENMEM_get_cpu_topology is used by a guest to acquire its vcpu topology
> + * from the hypervisor.
> + */
> +#define XENMEM_get_cpu_topology     28
> +
> +struct xen_cpu_topology_info {
> +    /* IN */
> +    domid_t domid;
> +    uint16_t pad;
> +
> +    /* IN/OUT */
> +    uint32_t size;
> +
> +    /* OUT */
> +    uint8_t core_per_socket;
> +    uint8_t thread_per_core;
> +    uint16_t pad2;
> +
> +    union {
> +        XEN_GUEST_HANDLE(uint32) h;
> +        uint64_t pad;
> +    } tid;
> +};
> +typedef struct xen_cpu_topology_info xen_cpu_topology_info_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_cpu_topology_info_t);
>   
>   #endif /* __XEN_PUBLIC_MEMORY_H__ */
>   
> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> index d6ddadc..3ac59c7 100644
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -330,6 +330,12 @@ static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
>       return xsm_default_action(action, current->domain, d);
>   }
>   
> +static XSM_INLINE int xsm_get_cpu_topology(XSM_DEFAULT_ARG struct domain *d)
> +{
> +    XSM_ASSERT_ACTION(XSM_TARGET);
> +    return xsm_default_action(action, current->domain, d);
> +}
> +
>   #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
>   static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
>   {
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> index 479b103..98eb86f 100644
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -88,6 +88,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
>       set_to_dummy_if_null(ops, iomem_mapping);
>       set_to_dummy_if_null(ops, pci_config_permission);
>       set_to_dummy_if_null(ops, get_vnumainfo);
> +    set_to_dummy_if_null(ops, get_cpu_topology);
>   
>   #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
>       set_to_dummy_if_null(ops, get_device_group);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 19ceacf..29ee1e1 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -427,6 +427,11 @@ static int flask_get_vnumainfo(struct domain *d)
>       return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_VNUMAINFO);
>   }
>   
> +static int flask_get_cpu_topology(struct domain *d)
> +{
> +    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_TOPOLOGY);
> +}
> +
>   static int flask_console_io(struct domain *d, int cmd)
>   {
>       u32 perm;
> @@ -752,6 +757,9 @@ static int flask_domctl(struct domain *d, int cmd)
>       case XEN_DOMCTL_set_gnttab_limits:
>           return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_GNTTAB_LIMITS);
>   
> +    case XEN_DOMCTL_set_cpu_topology:
> +        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPU_TOPOLOGY);
> +
>       default:
>           return avc_unknown_permission("domctl", cmd);
>       }
> @@ -1800,6 +1808,8 @@ static struct xsm_operations flask_ops = {
>       .do_xsm_op = do_flask_op,
>       .get_vnumainfo = flask_get_vnumainfo,
>   
> +    .get_cpu_topology = flask_get_cpu_topology,
> +
>       .vm_event_control = flask_vm_event_control,
>   
>   #ifdef CONFIG_HAS_MEM_ACCESS
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index d0a1ec5..e531ec0 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -250,6 +250,10 @@ class domain2
>       psr_alloc
>   # XEN_DOMCTL_set_gnttab_limits
>       set_gnttab_limits
> +# XEN_DOMCTL_set_cpu_topology
> +    set_cpu_topology
> +# XENMEM_get_cpu_topology
> +    get_cpu_topology
>   }
>   
>   # Similar to class domain, but primarily contains domctls related to HVM domains
> 



* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-08 18:14   ` Daniel De Graaf
@ 2018-01-09  9:06     ` Chao Gao
  2018-01-09 17:18       ` Daniel De Graaf
  0 siblings, 1 reply; 20+ messages in thread
From: Chao Gao @ 2018-01-09  9:06 UTC (permalink / raw)
  To: Daniel De Graaf
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, xen-devel, Jan Beulich, Andrew Cooper

On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:
>On 01/07/2018 11:01 PM, Chao Gao wrote:
>> Define the interface, structures and hypercalls for the toolstack to build
>> the cpu topology and for the guest to retrieve it [1].
>> Two subop hypercalls are introduced by this patch:
>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>> 
>> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>> 
>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>
>When adding new XSM controls for use by device models, you also
>need to add the permissions to the device_model macro defined in
>tools/flask/policy/modules/xen.if.  If domains need to call this
>function on themselves (is this only true for get?), you will also
>need to add it to declare_domain_common.
>

Hi, Daniel.

Yes. XENMEM_get_cpu_topology will be called by the domain itself.
And both get and set will be called by dom0 when creating a domain.
So I need:
1. add *set* and *get* to create_domain_common.
2. add *set* to declare_domain_common.

Is it right?

Thanks
Chao



* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-09  9:06     ` Chao Gao
@ 2018-01-09 17:18       ` Daniel De Graaf
  2018-01-09 19:51         ` Chao Gao
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel De Graaf @ 2018-01-09 17:18 UTC (permalink / raw)
  To: Chao Gao, xen-devel, Wei Liu, Tim Deegan, Stefano Stabellini,
	Konrad Rzeszutek Wilk, Ian Jackson, George Dunlap, Andrew Cooper,
	Jan Beulich

On 01/09/2018 04:06 AM, Chao Gao wrote:
> On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:
>> On 01/07/2018 11:01 PM, Chao Gao wrote:
>>> Define the interface, structures and hypercalls for the toolstack to build
>>> the cpu topology and for the guest to retrieve it [1].
>>> Two subop hypercalls are introduced by this patch:
>>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>>
>>> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>>>
>>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>>
>> When adding new XSM controls for use by device models, you also
>> need to add the permissions to the device_model macro defined in
>> tools/flask/policy/modules/xen.if.  If domains need to call this
>> function on themselves (is this only true for get?), you will also
>> need to add it to declare_domain_common.
>>
> 
> Hi, Daniel.
> 
> Yes. XENMEM_get_cpu_topology will be called by the domain itself.
> And both get and set will be called by dom0 when creating a domain.
> So I need:
> 1. add *set* and *get* to create_domain_common.
> 2. add *set* to declare_domain_common.
> 
> Is it right?
> 
> Thanks
> Chao

It sounds like you need to add get to declare_domain_common (not set)
because the domain only needs to invoke this on itself.  If the device
model doesn't need to use these hypercalls (would guest cpu hotplug or
similar things need them?), then that's all you need to add.
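
A sketch of the resulting policy additions, using the access vectors this
patch defines (surrounding rules elided):

    # tools/flask/policy/modules/xen.if
    define(`create_domain_common', `
        # ...
        allow $1 $2:domain2 { set_cpu_topology get_cpu_topology };
    ')

    define(`declare_domain_common', `
        # ...
        allow $1 $2:domain2 get_cpu_topology;
    ')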


* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-09 17:18       ` Daniel De Graaf
@ 2018-01-09 19:51         ` Chao Gao
  0 siblings, 0 replies; 20+ messages in thread
From: Chao Gao @ 2018-01-09 19:51 UTC (permalink / raw)
  To: Daniel De Graaf
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, xen-devel, Jan Beulich, Andrew Cooper

On Tue, Jan 09, 2018 at 12:18:13PM -0500, Daniel De Graaf wrote:
>On 01/09/2018 04:06 AM, Chao Gao wrote:
>> On Mon, Jan 08, 2018 at 01:14:44PM -0500, Daniel De Graaf wrote:
>> > On 01/07/2018 11:01 PM, Chao Gao wrote:
>> > > Define the interface, structures and hypercalls for the toolstack to build
>> > > the cpu topology and for the guest to retrieve it [1].
>> > > Two subop hypercalls are introduced by this patch:
>> > > XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> > > and XENMEM_get_cpu_topology to retrieve cpu topology information.
>> > > 
>> > > [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>> > > 
>> > > Signed-off-by: Chao Gao <chao.gao@intel.com>
>> > 
>> > When adding new XSM controls for use by device models, you also
>> > need to add the permissions to the device_model macro defined in
>> > tools/flask/policy/modules/xen.if.  If domains need to call this
>> > function on themselves (is this only true for get?), you will also
>> > need to add it to declare_domain_common.
>> > 
>> 
>> Hi, Daniel.
>> 
>> Yes. XENMEM_get_cpu_topology will be called by the domain itself.
>> And both get and set will be called by dom0 when creating a domain.
>> So I need:
>> 1. add *set* and *get* to create_domain_common.
>> 2. add *set* to declare_domain_common.
>> 
>> Is it right?
>> 
>> Thanks
>> Chao
>
>It sounds like you need to add get to declare_domain_common (not set)
>because the domain only needs to invoke this on itself.  If the device
>model doesn't need to use these hypercalls (would guest cpu hotplug or
>similar things need them?), then that's all you need to add.

Got it. I will first work out whether the device model needs these
hypercalls. If yes, I will change the device_model macro accordingly.

Thanks
chao


* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-09 23:47   ` Andrew Cooper
@ 2018-01-09 20:47     ` Chao Gao
  2018-01-10  9:09       ` Andrew Cooper
  0 siblings, 1 reply; 20+ messages in thread
From: Chao Gao @ 2018-01-09 20:47 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, xen-devel, Jan Beulich, Daniel De Graaf

On Tue, Jan 09, 2018 at 11:47:54PM +0000, Andrew Cooper wrote:
>On 08/01/18 04:01, Chao Gao wrote:
>> Define the interface, structures and hypercalls for the toolstack to build
>> the cpu topology and for the guest to retrieve it [1].
>> Two subop hypercalls are introduced by this patch:
>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>
>> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>>
>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>
>I'm sorry, but this is going in the wrong direction.  Details like this
>should be contained and communicated exclusively in the CPUID policy.
>
>Before the spectre/meltdown fire started, I had a prototype series
>introducing a toolstack interface for getting and setting a full CPUID
>policy at once, rather than piecewise.  I will be continuing with this

Is the new interface able to set the CPUID policy for each vCPU rather
than, as currently, per domain? Otherwise I can't see how to set the
APIC_ID for each vCPU except by introducing a new interface.

>work once the dust settles.
>
>In particular, we should not have multiple ways of conveying the same
>information, or duplication of the same data inside the hypervisor.
>
>If you rearrange your series to put the struct cpuid_policy changes
>first, then patch 2 will become far more simple.  HVMLoader should
>derive its topology information from the CPUID instruction, just as is
>expected on native hardware.

Good point. It seems that in HVMLoader the BSP should boot the APs in a
broadcast fashion, collect the information via CPUID, and then build the
MADT/SRAT.
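
A rough sketch of what that collection might look like on the BSP;
cpuid_count() and record_level() are hypothetical helpers here, not
existing hvmloader functions:

    uint32_t eax, ebx, ecx, edx, subleaf;

    for ( subleaf = 0; ; subleaf++ )
    {
        cpuid_count(0xb, subleaf, &eax, &ebx, &ecx, &edx);
        if ( !((ecx >> 8) & 0xff) )     /* level type 0: no further levels */
            break;
        /* ECX[15:8] is the level type, EAX[4:0] the x2APIC ID shift. */
        record_level((ecx >> 8) & 0xff, eax & 0x1f);
    }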

Thanks
Chao


* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology Chao Gao
  2018-01-08 18:14   ` Daniel De Graaf
@ 2018-01-09 23:47   ` Andrew Cooper
  2018-01-09 20:47     ` Chao Gao
  2018-04-23 16:00   ` Jan Beulich
  2 siblings, 1 reply; 20+ messages in thread
From: Andrew Cooper @ 2018-01-09 23:47 UTC (permalink / raw)
  To: Chao Gao, xen-devel
  Cc: Stefano Stabellini, Wei Liu, George Dunlap, Ian Jackson,
	Tim Deegan, Jan Beulich, Daniel De Graaf

On 08/01/18 04:01, Chao Gao wrote:
> Define the interface, structures and hypercalls for the toolstack to build
> the cpu topology and for the guest to retrieve it [1].
> Two subop hypercalls are introduced by this patch:
> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>
> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>
> Signed-off-by: Chao Gao <chao.gao@intel.com>

I'm sorry, but this is going in the wrong direction.  Details like this
should be contained and communicated exclusively in the CPUID policy.

Before the spectre/meltdown fire started, I had a prototype series
introducing a toolstack interface for getting and setting a full CPUID
policy at once, rather than piecewise.  I will be continuing with this
work once the dust settles.

In particular, we should not have multiple ways of conveying the same
information, or duplication of the same data inside the hypervisor.

If you rearrange your series to put the struct cpuid_policy changes
first, then patch 2 will become far more simple.  HVMLoader should
derive its topology information from the CPUID instruction, just as is
expected on native hardware.

~Andrew


* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-09 20:47     ` Chao Gao
@ 2018-01-10  9:09       ` Andrew Cooper
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-01-10  9:09 UTC (permalink / raw)
  To: Chao Gao, xen-devel, Daniel De Graaf, Wei Liu, Tim Deegan,
	Stefano Stabellini, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Jan Beulich

On 09/01/2018 20:47, Chao Gao wrote:
> On Tue, Jan 09, 2018 at 11:47:54PM +0000, Andrew Cooper wrote:
>> On 08/01/18 04:01, Chao Gao wrote:
>>> Define the interface, structures and hypercalls for the toolstack to build
>>> the cpu topology and for the guest to retrieve it [1].
>>> Two subop hypercalls are introduced by this patch:
>>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>>
>>> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
>>>
>>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>> I'm sorry, but this is going in the wrong direction.  Details like this
>> should be contained and communicated exclusively in the CPUID policy.
>>
>> Before the spectre/meltdown fire started, I had a prototype series
>> introducing a toolstack interface for getting and setting a full CPUID
>> policy at once, rather than piecewise.  I will be continuing with this
> Is the new interface able to set the CPUID policy for each vCPU rather
> than, as currently, per domain? Otherwise I can't see how to set the
> APIC_ID for each vCPU except by introducing a new interface.

AFAIR, all APIC IDs can be uniquely and correctly derived from the
shift/count information in leaf 0xb
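
For illustration, a minimal sketch of that derivation, assuming a dense
vcpu_id -> (socket, core, thread) mapping and reusing this series' fls()
helper (variable names hypothetical):

    unsigned int smt_shift  = fls(threads_per_core - 1);
    unsigned int core_shift = smt_shift + fls(cores_per_socket - 1);
    unsigned int thread = vcpu_id % threads_per_core;
    unsigned int core   = (vcpu_id / threads_per_core) % cores_per_socket;
    unsigned int socket = vcpu_id / (threads_per_core * cores_per_socket);
    /* Leaf 0xb reports smt_shift (subleaf 0 EAX) and core_shift
       (subleaf 1 EAX); EDX of any subleaf is the x2APIC ID itself. */
    uint32_t x2apic_id = (socket << core_shift) | (core << smt_shift) | thread;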

We've already got some CPUID information that is derived either entirely
dynamically (e.g. OSXSAVE), or when first set.  This isn't any different.

>
>> work once the dust settles.
>>
>> In particular, we should not have multiple ways of conveying the same
>> information, or duplication of the same data inside the hypervisor.
>>
>> If you rearrange your series to put the struct cpuid_policy changes
>> first, then patch 2 will become far more simple.  HVMLoader should
>> derive its topology information from the CPUID instruction, just as is
>> expected on native hardware.
> Good point. It seems that in HVMLoader the BSP should boot the APs in a
> broadcast fashion, collect the information via CPUID, and then build the
> MADT/SRAT.

We already do that for MTRR setup, but as before, I'm fairly sure the
BSP can do this alone.

~Andrew


* Re: [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
  2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology Chao Gao
  2018-01-08 18:14   ` Daniel De Graaf
  2018-01-09 23:47   ` Andrew Cooper
@ 2018-04-23 16:00   ` Jan Beulich
  2 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-04-23 16:00 UTC (permalink / raw)
  To: Chao Gao
  Cc: Tim Deegan, Stefano Stabellini, Wei Liu, George Dunlap,
	Andrew Cooper, Ian Jackson, xen-devel, Daniel de Graaf

>>> On 08.01.18 at 05:01, <chao.gao@intel.com> wrote:
> Define the interface, structures and hypercalls for the toolstack to build
> the cpu topology and for the guest to retrieve it [1].
> Two subop hypercalls are introduced by this patch:
> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
> and XENMEM_get_cpu_topology to retrieve cpu topology information.
> 
> [1]: during guest creation, this information helps hvmloader to build the ACPI tables.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>

This will want revisiting once Andrew's domain creation rework series has
landed. In no case do I view XENMEM_get_cpu_topology as something
sensible - XENMEM, as its name says, is about memory.

Jan




* Re: [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID
  2018-01-08  4:01 ` [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID Chao Gao
@ 2018-04-23 16:04   ` Jan Beulich
  2018-04-24  7:53     ` Chao Gao
  0 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2018-04-23 16:04 UTC (permalink / raw)
  To: Chao Gao; +Cc: Andrew Cooper, xen-devel

>>> On 08.01.18 at 05:01, <chao.gao@intel.com> wrote:
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -213,6 +213,9 @@ struct hvm_domain {
>      uint8_t thread_per_core;
>  };
>  
> +#define hvm_vcpu_x2apic_id(v) (v->domain->arch.hvm_domain.apic_id[v->vcpu_id])

I can't seem to find where you set up this array.

> +#define hvm_vcpu_apic_id(v) (hvm_vcpu_x2apic_id(v) % 255)

I don't think the % 255 is appropriate here - the macro simply shouldn't be
invoked in such a case.

On the whole I'm not convinced using an array is appropriate - calculating
the APIC ID shouldn't be very involved, and would need much less than the
possibly multiple kb of storage.

Jan




* Re: [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID
  2018-04-23 16:04   ` Jan Beulich
@ 2018-04-24  7:53     ` Chao Gao
  2018-04-24  8:26       ` Jan Beulich
  0 siblings, 1 reply; 20+ messages in thread
From: Chao Gao @ 2018-04-24  7:53 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel

On Mon, Apr 23, 2018 at 10:04:56AM -0600, Jan Beulich wrote:
>>>> On 08.01.18 at 05:01, <chao.gao@intel.com> wrote:
>> --- a/xen/include/asm-x86/hvm/domain.h
>> +++ b/xen/include/asm-x86/hvm/domain.h
>> @@ -213,6 +213,9 @@ struct hvm_domain {
>>      uint8_t thread_per_core;
>>  };
>>  
>> +#define hvm_vcpu_x2apic_id(v) (v->domain->arch.hvm_domain.apic_id[v->vcpu_id])
>
>I can't seem to find where you set up this array.
>
>> +#define hvm_vcpu_apic_id(v) (hvm_vcpu_x2apic_id(v) % 255)
>
>I don't think the % 255 is appropriate here - the macro simply shouldn't be
>invoked in such a case.
>
>On the whole I'm not convinced using an array is appropriate - calculating
>the APIC ID shouldn't be very involved, and would need much less than the
>possibly multiple kb of storage.

The APIC ID can be inferred from a 3-tuple (socket ID, core ID and thread
ID). If we want to give the admin the ability to set the mapping between
vcpu_id and this 3-tuple to anything he wants (such as vcpu0 -> socket ID 1,
core ID 0, thread ID 3 and vcpu1 -> socket ID 0, core ID 1, thread ID 0...),
IMO we have no way to avoid storing some related information (an APIC ID
array or a 3-tuple array) short of limiting the flexibility of the guest
CPU topology. At least, I want to emulate a CPU and cache topology similar
to KNM's, namely each core has 4 logical threads and two cores share the
same L2 cache.
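
As a sketch of the flexibility in question (the shift names follow this
series, the mapping itself is made up), such an arbitrary assignment has
to be stored per vCPU because it can no longer be computed from vcpu_id
alone:

    /* vcpu0 -> socket 1, core 0, thread 3; vcpu1 -> socket 0, core 1, thread 0 */
    d->arch.hvm_domain.apic_id[0] = (1 << core_shift) | (0 << smt_shift) | 3;
    d->arch.hvm_domain.apic_id[1] = (0 << core_shift) | (1 << smt_shift) | 0;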

Thanks
Chao


* Re: [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID
  2018-04-24  7:53     ` Chao Gao
@ 2018-04-24  8:26       ` Jan Beulich
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-04-24  8:26 UTC (permalink / raw)
  To: Chao Gao; +Cc: Andrew Cooper, xen-devel

>>> On 24.04.18 at 09:53, <chao.gao@intel.com> wrote:
> On Mon, Apr 23, 2018 at 10:04:56AM -0600, Jan Beulich wrote:
>>>>> On 08.01.18 at 05:01, <chao.gao@intel.com> wrote:
>>> --- a/xen/include/asm-x86/hvm/domain.h
>>> +++ b/xen/include/asm-x86/hvm/domain.h
>>> @@ -213,6 +213,9 @@ struct hvm_domain {
>>>      uint8_t thread_per_core;
>>>  };
>>>  
>>> +#define hvm_vcpu_x2apic_id(v) (v->domain->arch.hvm_domain.apic_id[v->vcpu_id])
>>
>>I can't seem to find where you set up this array.
>>
>>> +#define hvm_vcpu_apic_id(v) (hvm_vcpu_x2apic_id(v) % 255)
>>
>>I don't think the % 255 is appropriate here - the macro simply shouldn't be
>>invoked in such a case.
>>
>>On the whole I'm not convinced using an array is appropriate - calculating
>>the APIC ID shouldn't be very involved, and would need much less than the
>>possibly multiple kb of storage.
> 
> The APIC ID can be inferred from a 3-tuple (socket ID, core ID and thread
> ID). If we want to give the admin the ability to set the mapping between
> vcpu_id and this 3-tuple to anything he wants (such as vcpu0 -> socket ID 1,
> core ID 0, thread ID 3 and vcpu1 -> socket ID 0, core ID 1, thread ID 0...),
> IMO we have no way to avoid storing some related information (an APIC ID
> array or a 3-tuple array) short of limiting the flexibility of the guest
> CPU topology. At least, I want to emulate a CPU and cache topology similar
> to KNM's, namely each core has 4 logical threads and two cores share the
> same L2 cache.

I think it was pointed out to you already that the CPUID handling here
needs overhaul. We cannot allow the admin to specify things which are
impossible on real hardware. IOW the input here ought to be number of
sockets, number of cores per socket, and number of threads per core.
Everything else will need to be calculated from these.
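
As a sketch, that input model also pins down the validation (names
hypothetical):

    if ( !nr_sockets || !cores_per_socket || !threads_per_core ||
         nr_sockets * cores_per_socket * threads_per_core > HVM_MAX_VCPUS )
        return -EINVAL;
    /* Shifts, APIC IDs, leaf 0xb contents etc. are then all derived. */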

Jan




Thread overview: 20+ messages
2018-01-08  4:01 [RFC PATCH 0/8] Add guest CPU topology support Chao Gao
2018-01-08  4:01 ` [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology Chao Gao
2018-01-08 18:14   ` Daniel De Graaf
2018-01-09  9:06     ` Chao Gao
2018-01-09 17:18       ` Daniel De Graaf
2018-01-09 19:51         ` Chao Gao
2018-01-09 23:47   ` Andrew Cooper
2018-01-09 20:47     ` Chao Gao
2018-01-10  9:09       ` Andrew Cooper
2018-04-23 16:00   ` Jan Beulich
2018-01-08  4:01 ` [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID Chao Gao
2018-04-23 16:04   ` Jan Beulich
2018-04-24  7:53     ` Chao Gao
2018-04-24  8:26       ` Jan Beulich
2018-01-08  4:01 ` [RFC PATCH 3/8] xl/parse: introduce cpu_topology to guest config Chao Gao
2018-01-08  4:01 ` [RFC PATCH 4/8] libxl: calculate and set vcpu topology Chao Gao
2018-01-08  4:01 ` [RFC PATCH 5/8] xen/mem: handle XENMEM_get_cpu_topology in compat_memory_op Chao Gao
2018-01-08  4:01 ` [RFC PATCH 6/8] hvmloader: get CPU topology information from hypervisor Chao Gao
2018-01-08  4:01 ` [RFC PATCH 7/8] libacpi: build madt/srat according cpu topology Chao Gao
2018-01-08  4:01 ` [RFC PATCH 8/8] x86/cpuid: emulate extended topology enumeration leaf Chao Gao
