All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v3 0/9] viridian: implement more enlightenments
@ 2019-01-31 10:47 Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 1/9] viridian: add init hooks Paul Durrant
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This series adds three new enlightenments:

- Synthetic timers, which depends on the...
- Synthetic interrupt controller (or SynIC)
- Synthetic cluster IPI

All these enlightenments are implemented in current versions of QEMU/KVM
so this series closes the gap.

Paul Durrant (9):
  viridian: add init hooks
  viridian: separately allocate domain and vcpu structures
  viridian: extend init/deinit hooks into synic and time modules
  viridian: add missing context save helpers into synic and time modules
  viridian: use viridian_map/unmap_guest_page() for reference tsc page
  viridian: add implementation of synthetic interrupt MSRs
  viridian: stop directly calling
    viridian_time_ref_count_freeze/thaw()...
  viridian: add implementation of synthetic timers
  viridian: add implementation of the HvSendSyntheticClusterIpi
    hypercall

 docs/man/xl.cfg.5.pod.in               |  18 +-
 tools/libxl/libxl.h                    |  18 +
 tools/libxl/libxl_dom.c                |  10 +
 tools/libxl/libxl_types.idl            |   3 +
 xen/arch/x86/domain.c                  |  12 +-
 xen/arch/x86/hvm/hvm.c                 |  18 +-
 xen/arch/x86/hvm/viridian/private.h    |  30 +-
 xen/arch/x86/hvm/viridian/synic.c      | 333 ++++++++++++++++--
 xen/arch/x86/hvm/viridian/time.c       | 468 ++++++++++++++++++++++---
 xen/arch/x86/hvm/viridian/viridian.c   | 222 ++++++++++--
 xen/arch/x86/hvm/vlapic.c              |  16 +-
 xen/include/asm-x86/hvm/domain.h       |   2 +-
 xen/include/asm-x86/hvm/hvm.h          |   7 +
 xen/include/asm-x86/hvm/vcpu.h         |   2 +-
 xen/include/asm-x86/hvm/viridian.h     |  63 +++-
 xen/include/public/arch-x86/hvm/save.h |   4 +
 xen/include/public/hvm/params.h        |  17 +-
 17 files changed, 1125 insertions(+), 118 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>

v3:
 - Add the synthetic cluster IPI patch (#9)

-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v3 1/9] viridian: add init hooks
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 2/9] viridian: separately allocate domain and vcpu structures Paul Durrant
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch adds domain and vcpu init hooks for viridian features. The init
hooks do not yet do anything; the functionality will be added to by
subsequent patches.

NOTE: This patch also removes the call from the domain deinit function to
      the vcpu deinit function, as this is not necessary.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Re-instate call from domain deinit to vcpu deinit
 - Move deinit calls to avoid introducing new labels

v2:
 - Remove call from domain deinit to vcpu deinit
---
 xen/arch/x86/hvm/hvm.c               | 18 ++++++++++++++----
 xen/arch/x86/hvm/viridian/viridian.c | 10 ++++++++++
 xen/include/asm-x86/hvm/viridian.h   |  3 +++
 3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 21944e9306..090f1ff03e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -665,6 +665,10 @@ int hvm_domain_initialise(struct domain *d)
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
+    rc = viridian_domain_init(d);
+    if ( rc )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -686,6 +690,7 @@ int hvm_domain_initialise(struct domain *d)
     hvm_destroy_cacheattr_region_list(d);
     destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
  fail:
+    viridian_domain_deinit(d);
     return rc;
 }
 
@@ -694,8 +699,6 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    viridian_domain_deinit(d);
-
     hvm_destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
@@ -735,6 +738,8 @@ void hvm_domain_destroy(struct domain *d)
     }
 
     destroy_vpci_mmcfg(d);
+
+    viridian_domain_deinit(d);
 }
 
 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
@@ -1525,6 +1530,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    rc = viridian_vcpu_init(v);
+    if ( rc )
+        goto fail5;
+
     rc = hvm_all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1552,13 +1561,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
  fail2:
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
+    viridian_vcpu_deinit(v);
     return rc;
 }
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
-    viridian_vcpu_deinit(v);
-
     hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
 
     if ( hvm_altp2m_supported() )
@@ -1574,6 +1582,8 @@ void hvm_vcpu_destroy(struct vcpu *v)
     vlapic_destroy(v);
 
     hvm_vcpu_cacheattr_destroy(v);
+
+    viridian_vcpu_deinit(v);
 }
 
 void hvm_vcpu_down(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index c78b2918d9..ad110ee6f3 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -417,6 +417,16 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_domain_init(struct domain *d)
+{
+    return 0;
+}
+
 void viridian_vcpu_deinit(struct vcpu *v)
 {
     viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index ec5ef8d3f9..f072838955 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -80,6 +80,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
+int viridian_vcpu_init(struct vcpu *v);
+int viridian_domain_init(struct domain *d);
+
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 2/9] viridian: separately allocate domain and vcpu structures
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 1/9] viridian: add init hooks Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the viridian_domain and viridian_vcpu structures are inline in
the hvm_domain and hvm_vcpu structures respectively. Subsequent patches
will need to add sizable extra fields to the viridian structures which
will cause the PAGE_SIZE limit of the overall vcpu structure to be
exceeded. This patch, therefore, uses the new init hooks to separately
allocate the structures and converts the 'viridian' fields in hvm_domain
and hvm_cpu to be pointers to these allocations.

Ideally, now that they are no longer inline, the allocations of the
viridian structures could be made conditional on whether the toolstack
is going to configure the viridian enlightenments. However the toolstack
is currently unable to convey this information to the domain creation code
so such an enhancement is deferred until that becomes possible.

NOTE: The patch also introduced the 'is_viridian_vcpu' macro to avoid
      introducing a second evaluation of 'is_viridian_domain' with an
      open-coded 'v->domain' argument. This macro will also be further
      used in a subsequent patch.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - use XFREE()
 - expand commit comment to point out why allocations are unconditional
---
 xen/arch/x86/hvm/viridian/synic.c    | 40 +++++++--------
 xen/arch/x86/hvm/viridian/time.c     | 32 ++++++------
 xen/arch/x86/hvm/viridian/viridian.c | 73 ++++++++++++++++++----------
 xen/include/asm-x86/hvm/domain.h     |  2 +-
 xen/include/asm-x86/hvm/hvm.h        |  4 ++
 xen/include/asm-x86/hvm/vcpu.h       |  2 +-
 6 files changed, 88 insertions(+), 65 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index a6ebbbc9f5..20731c2379 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -30,7 +30,7 @@ typedef union _HV_VP_ASSIST_PAGE
 
 void viridian_apic_assist_set(struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,25 +40,25 @@ void viridian_apic_assist_set(struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian.apic_assist_pending )
+    if ( v->arch.hvm.viridian->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian.apic_assist_pending = true;
+    v->arch.hvm.viridian->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
 bool viridian_apic_assist_completed(struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian.apic_assist_pending &&
+    if ( v->arch.hvm.viridian->apic_assist_pending &&
          !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian.apic_assist_pending = false;
+        v->arch.hvm.viridian->apic_assist_pending = false;
         return true;
     }
 
@@ -67,13 +67,13 @@ bool viridian_apic_assist_completed(struct vcpu *v)
 
 void viridian_apic_assist_clear(struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian.apic_assist_pending = false;
+    v->arch.hvm.viridian->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
@@ -95,12 +95,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian.vp_assist);
-        v->arch.hvm.viridian.vp_assist.msr.raw = val;
+        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+        v->arch.hvm.viridian->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian.vp_assist);
-        if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+                                 &v->arch.hvm.viridian->vp_assist);
+        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
         break;
 
     default:
@@ -132,7 +132,7 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian.vp_assist.msr.raw;
+        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
         break;
 
     default:
@@ -146,18 +146,18 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian.apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw;
+    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
+    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian.vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
 
-    v->arch.hvm.viridian.apic_assist_pending = ctxt->apic_assist_pending;
+    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 840a82b457..42367f6460 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -27,7 +27,7 @@ typedef struct _HV_REFERENCE_TSC_PAGE
 
 static void dump_reference_tsc(const struct domain *d)
 {
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian.reference_tsc;
+    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
     if ( !rt->fields.enabled )
         return;
@@ -38,7 +38,7 @@ static void dump_reference_tsc(const struct domain *d)
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -121,9 +121,8 @@ static int64_t raw_trc_val(struct domain *d)
 
 void viridian_time_ref_count_freeze(struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( test_and_clear_bit(_TRC_running, &trc->flags) )
         trc->val = raw_trc_val(d) + trc->off;
@@ -131,9 +130,8 @@ void viridian_time_ref_count_freeze(struct domain *d)
 
 void viridian_time_ref_count_thaw(struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( !d->is_shutting_down &&
          !test_and_set_bit(_TRC_running, &trc->flags) )
@@ -150,9 +148,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian.reference_tsc.raw = val;
+        d->arch.hvm.viridian->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -189,13 +187,13 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian.reference_tsc.raw;
+        *val = d->arch.hvm.viridian->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
         struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian.time_ref_count;
+            &d->arch.hvm.viridian->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -219,17 +217,17 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian.time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian.reference_tsc.raw;
+    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
+    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian.time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian.reference_tsc.raw = ctxt->reference_tsc;
+    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
+    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index ad110ee6f3..1e777611ba 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -146,7 +146,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
+        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +191,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
+        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
+             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -224,7 +224,7 @@ static void dump_guest_os_id(const struct domain *d)
 {
     const union viridian_guest_os_id_msr *goi;
 
-    goi = &d->arch.hvm.viridian.guest_os_id;
+    goi = &d->arch.hvm.viridian->guest_os_id;
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
@@ -238,7 +238,7 @@ static void dump_hypercall(const struct domain *d)
 {
     const union viridian_page_msr *hg;
 
-    hg = &d->arch.hvm.viridian.hypercall_gpa;
+    hg = &d->arch.hvm.viridian->hypercall_gpa;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -247,7 +247,7 @@ static void dump_hypercall(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -288,14 +288,14 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian.guest_os_id.raw = val;
+        d->arch.hvm.viridian->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian.hypercall_gpa.raw = val;
+        d->arch.hvm.viridian->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
+        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +317,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian.crash_param[idx] = val;
+        v->arch.hvm.viridian->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +337,11 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian.crash_param[0],
-                v->arch.hvm.viridian.crash_param[1],
-                v->arch.hvm.viridian.crash_param[2],
-                v->arch.hvm.viridian.crash_param[3],
-                v->arch.hvm.viridian.crash_param[4]);
+                v->arch.hvm.viridian->crash_param[0],
+                v->arch.hvm.viridian->crash_param[1],
+                v->arch.hvm.viridian->crash_param[2],
+                v->arch.hvm.viridian->crash_param[3],
+                v->arch.hvm.viridian->crash_param[4]);
         break;
     }
 
@@ -364,11 +364,11 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian.guest_os_id.raw;
+        *val = d->arch.hvm.viridian->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
+        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +393,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian.crash_param[idx];
+        *val = v->arch.hvm.viridian->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -419,17 +419,33 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    ASSERT(!v->arch.hvm.viridian);
+    v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
+    if ( !v->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    ASSERT(!d->arch.hvm.viridian);
+    d->arch.hvm.viridian = xzalloc(struct viridian_domain);
+    if ( !d->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
 {
-    viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    if ( !v->arch.hvm.viridian )
+        return;
+
+    if ( is_viridian_vcpu(v) )
+        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+
+    XFREE(v->arch.hvm.viridian);
 }
 
 void viridian_domain_deinit(struct domain *d)
@@ -438,6 +454,11 @@ void viridian_domain_deinit(struct domain *d)
 
     for_each_vcpu ( d, v )
         viridian_vcpu_deinit(v);
+
+    if ( !d->arch.hvm.viridian )
+        return;
+
+    XFREE(d->arch.hvm.viridian);
 }
 
 static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);
@@ -662,8 +683,8 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 {
     const struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa  = d->arch.hvm.viridian.hypercall_gpa.raw,
-        .guest_os_id    = d->arch.hvm.viridian.guest_os_id.raw,
+        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
+        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -682,8 +703,8 @@ static int viridian_load_domain_ctxt(struct domain *d,
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian.guest_os_id.raw    = ctxt.guest_os_id;
+    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
 
@@ -697,7 +718,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {};
 
-    if ( !is_viridian_domain(v->domain) )
+    if ( !is_viridian_vcpu(v) )
         return 0;
 
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3e7331817f..6c7c4f5aa6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -154,7 +154,7 @@ struct hvm_domain {
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
 
-    struct viridian_domain viridian;
+    struct viridian_domain *viridian;
 
     bool_t                 hap_enabled;
     bool_t                 mem_sharing_enabled;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0a10b51554..d8df6f4352 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -461,6 +461,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define is_viridian_domain(d) \
     (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
 
+#define is_viridian_vcpu(v) \
+    is_viridian_domain((v)->domain)
+
 #define has_viridian_time_ref_count(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_time_ref_count))
 
@@ -760,6 +763,7 @@ static inline bool hvm_has_set_descriptor_access_exiting(void)
 }
 
 #define is_viridian_domain(d) ((void)(d), false)
+#define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
 #define hvm_long_mode_active(v) ((void)(v), false)
 #define hvm_get_guest_time(v) ((void)(v), 0)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index c8a40f6d55..be9fa5b5a4 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -200,7 +200,7 @@ struct hvm_vcpu {
     /* Pending hw/sw interrupt (.vector = -1 means nothing pending). */
     struct x86_event     inject_event;
 
-    struct viridian_vcpu viridian;
+    struct viridian_vcpu *viridian;
 };
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 1/9] viridian: add init hooks Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 2/9] viridian: separately allocate domain and vcpu structures Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 14:45   ` Wei Liu
  2019-01-31 10:47 ` [PATCH v3 4/9] viridian: add missing context save helpers " Paul Durrant
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch simply adds domain and vcpu init/deinit hooks into the synic
and time modules and wires them into viridian_[domain|vcpu]_[init|deinit]().
Only one of the hooks is currently needed (to unmap the 'VP Assist' page)
but subsequent patches will make use of the others.

NOTE: To perform the unmap of the VP Assist page,
      viridian_unmap_guest_page() is now directly called in the new
      viridian_synic_vcpu_deinit() function (which is safe even if
      is_viridian_vcpu() evaluates to false). This replaces the slightly
      hacky mechanism of faking a zero write to the
      HV_X64_MSR_VP_ASSIST_PAGE MSR in viridian_cpu_deinit().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Pay attention to sync and time init hook return values
---
 xen/arch/x86/hvm/viridian/private.h  | 12 +++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 19 ++++++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 18 ++++++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 37 ++++++++++++++++++++++++++--
 4 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 398b22f12d..040c3c991d 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -74,6 +74,12 @@
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+int viridian_synic_vcpu_init(struct vcpu *v);
+int viridian_synic_domain_init(struct domain *d);
+
+void viridian_synic_vcpu_deinit(struct vcpu *v);
+void viridian_synic_domain_deinit(struct domain *d);
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt);
 void viridian_synic_load_vcpu_ctxt(
@@ -82,6 +88,12 @@ void viridian_synic_load_vcpu_ctxt(
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+int viridian_time_vcpu_init(struct vcpu *v);
+int viridian_time_domain_init(struct domain *d);
+
+void viridian_time_vcpu_deinit(struct vcpu *v);
+void viridian_time_domain_deinit(struct domain *d);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 20731c2379..9892bf279d 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -143,6 +143,25 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_synic_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_synic_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void viridian_synic_vcpu_deinit(struct vcpu *v)
+{
+    viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+}
+
+void viridian_synic_domain_deinit(struct domain *d)
+{
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 42367f6460..b1d67035e4 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -214,6 +214,24 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_time_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_time_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void viridian_time_vcpu_deinit(struct vcpu *v)
+{
+}
+
+void viridian_time_domain_deinit(struct domain *d)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 1e777611ba..157d9f48b2 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -419,22 +419,52 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    int rc;
+
     ASSERT(!v->arch.hvm.viridian);
     v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
     if ( !v->arch.hvm.viridian )
         return -ENOMEM;
 
+    rc = viridian_synic_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_vcpu_deinit(v);
+
+    return rc;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    int rc;
+
     ASSERT(!d->arch.hvm.viridian);
     d->arch.hvm.viridian = xzalloc(struct viridian_domain);
     if ( !d->arch.hvm.viridian )
         return -ENOMEM;
 
+    rc = viridian_synic_domain_init(d);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_domain_init(d);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_domain_deinit(d);
+
+    return rc;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
@@ -442,8 +472,8 @@ void viridian_vcpu_deinit(struct vcpu *v)
     if ( !v->arch.hvm.viridian )
         return;
 
-    if ( is_viridian_vcpu(v) )
-        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    viridian_time_vcpu_deinit(v);
+    viridian_synic_vcpu_deinit(v);
 
     XFREE(v->arch.hvm.viridian);
 }
@@ -458,6 +488,9 @@ void viridian_domain_deinit(struct domain *d)
     if ( !d->arch.hvm.viridian )
         return;
 
+    viridian_time_domain_deinit(d);
+    viridian_synic_domain_deinit(d);
+
     XFREE(d->arch.hvm.viridian);
 }
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 4/9] viridian: add missing context save helpers into synic and time modules
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (2 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 5/9] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the time module lacks vcpu context save helpers and the synic
module lacks domain context save helpers. These helpers are not yet
required but subsequent patches will require at least some of them so this
patch completes the set to avoid introducing them in an ad-hoc way.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Add missing callers so that they are not added in an ad-hoc way
---
 xen/arch/x86/hvm/viridian/private.h  | 10 ++++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 10 ++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c |  4 ++++
 4 files changed, 34 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 040c3c991d..1864bab3d5 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -85,6 +85,11 @@ void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);
 
+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt);
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt);
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
@@ -94,6 +99,11 @@ int viridian_time_domain_init(struct domain *d);
 void viridian_time_vcpu_deinit(struct vcpu *v);
 void viridian_time_domain_deinit(struct domain *d);
 
+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt);
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 9892bf279d..d8c35b4785 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -179,6 +179,16 @@ void viridian_synic_load_vcpu_ctxt(
     v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt)
+{
+}
+
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index b1d67035e4..9225bdf9ab 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -232,6 +232,16 @@ void viridian_time_domain_deinit(struct domain *d)
 {
 }
 
+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 157d9f48b2..82d059b18d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -724,6 +724,7 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
         return 0;
 
     viridian_time_save_domain_ctxt(d, &ctxt);
+    viridian_synic_save_domain_ctxt(d, &ctxt);
 
     return (hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0);
 }
@@ -739,6 +740,7 @@ static int viridian_load_domain_ctxt(struct domain *d,
     d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
     d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
 
+    viridian_synic_load_domain_ctxt(d, &ctxt);
     viridian_time_load_domain_ctxt(d, &ctxt);
 
     return 0;
@@ -754,6 +756,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
     if ( !is_viridian_vcpu(v) )
         return 0;
 
+    viridian_time_save_vcpu_ctxt(v, &ctxt);
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
 
     return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
@@ -780,6 +783,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d,
         return -EINVAL;
 
     viridian_synic_load_vcpu_ctxt(v, &ctxt);
+    viridian_time_load_vcpu_ctxt(v, &ctxt);
 
     return 0;
 }
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 5/9] viridian: use viridian_map/unmap_guest_page() for reference tsc page
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (3 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 4/9] viridian: add missing context save helpers " Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Whilst the reference tsc page does not currently need to be kept mapped
after it is initially set up (or updated after migrate), the code can
be simplified by using the common guest page map/unmap and dump functions.
New functionality added by a subsequent patch will also require the page to
kept mapped for the lifetime of the domain.

NOTE: Because the reference tsc page is per-domain rather than per-vcpu
      this patch also changes viridian_map_guest_page() to take a domain
      pointer rather than a vcpu pointer.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    |  8 +++-
 xen/arch/x86/hvm/viridian/time.c     | 57 ++++++++++------------------
 xen/arch/x86/hvm/viridian/viridian.c |  3 +-
 xen/include/asm-x86/hvm/viridian.h   |  2 +-
 5 files changed, 28 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 1864bab3d5..0bf34eefe0 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt(
 
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index d8c35b4785..ea73ca1f62 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -78,6 +78,8 @@ void viridian_apic_assist_clear(struct vcpu *v)
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct domain *d = v->domain;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -100,7 +102,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         viridian_dump_guest_page(v, "VP_ASSIST",
                                  &v->arch.hvm.viridian->vp_assist);
         if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+            viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
         break;
 
     default:
@@ -172,9 +174,11 @@ void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
+    struct domain *d = v->domain;
+
     v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+        viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
 
     v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 9225bdf9ab..e8924adfa1 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE
     uint64_t Reserved2[509];
 } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
 
-static void dump_reference_tsc(const struct domain *d)
-{
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
-
-    if ( !rt->fields.enabled )
-        return;
-
-    printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->fields.pfn);
-}
-
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    HV_REFERENCE_TSC_PAGE *p;
-
-    if ( !page || !get_page_type(page, PGT_writable_page) )
-    {
-        if ( page )
-            put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
-        return;
-    }
-
-    p = __map_domain_page(page);
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
 
     if ( initialize )
         clear_page(p);
@@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
 
         printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n",
                d->domain_id);
-        goto out;
+        return;
     }
 
     /*
@@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     if ( p->TscSequence == 0xFFFFFFFF ||
          p->TscSequence == 0 ) /* Avoid both 'invalid' values */
         p->TscSequence = 1;
-
- out:
-    unmap_domain_page(p);
-
-    put_page_and_type(page);
 }
 
 static int64_t raw_trc_val(struct domain *d)
@@ -148,10 +120,15 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian->reference_tsc.raw = val;
-        dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+        viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
+        d->arch.hvm.viridian->reference_tsc.msr.raw = val;
+        viridian_dump_guest_page(v, "REFERENCE_TSC",
+                                 &d->arch.hvm.viridian->reference_tsc);
+        if ( d->arch.hvm.viridian->reference_tsc.msr.fields.enabled )
+        {
+            viridian_map_guest_page(d, &d->arch.hvm.viridian->reference_tsc);
             update_reference_tsc(d, true);
+        }
         break;
 
     default:
@@ -187,7 +164,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian->reference_tsc.raw;
+        *val = d->arch.hvm.viridian->reference_tsc.msr.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
@@ -230,6 +207,7 @@ void viridian_time_vcpu_deinit(struct vcpu *v)
 
 void viridian_time_domain_deinit(struct domain *d)
 {
+    viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
 }
 
 void viridian_time_save_vcpu_ctxt(
@@ -246,17 +224,20 @@ void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
     ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
+    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.msr.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
     d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
+    d->arch.hvm.viridian->reference_tsc.msr.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian->reference_tsc.msr.fields.enabled )
+    {
+        viridian_map_guest_page(d, &d->arch.hvm.viridian->reference_tsc);
         update_reference_tsc(d, false);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 82d059b18d..b2b9963486 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -662,9 +662,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.fields.pfn);
 }
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.fields.pfn;
     struct page_info *page;
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index f072838955..1d281d825e 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -65,7 +65,7 @@ struct viridian_domain
     union viridian_guest_os_id_msr guest_os_id;
     union viridian_page_msr hypercall_gpa;
     struct viridian_time_ref_count time_ref_count;
-    union viridian_page_msr reference_tsc;
+    struct viridian_page reference_tsc;
 };
 
 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (4 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 5/9] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 14:46   ` Wei Liu
  2019-02-25 14:12   ` Jan Beulich
  2019-01-31 10:47 ` [PATCH v3 7/9] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the SCONTROL, SVERSION, SIEFP,
SIMP, EOM and SINT0-15 SynIC MSRs. No message source is added and, as such,
nothing will yet generate a synthetic interrupt. A subsequent patch will
add an implementation of synthetic timers which will need the infrastructure
added by this patch to deliver expiry messages to the guest.

NOTE: A 'synic' option is added to the toolstack viridian enlightenments
      enumeration but is deliberately not documented as enabling these
      SynIC registers without a message source is only useful for
      debugging.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Add the 'SintPollingModeAvailable' bit in CPUID leaf 3
---
 tools/libxl/libxl.h                    |   6 +
 tools/libxl/libxl_dom.c                |   3 +
 tools/libxl/libxl_types.idl            |   1 +
 xen/arch/x86/hvm/viridian/synic.c      | 214 +++++++++++++++++++++++++
 xen/arch/x86/hvm/viridian/viridian.c   |  19 +++
 xen/arch/x86/hvm/vlapic.c              |  16 +-
 xen/include/asm-x86/hvm/hvm.h          |   3 +
 xen/include/asm-x86/hvm/viridian.h     |  24 +++
 xen/include/public/arch-x86/hvm/save.h |   2 +
 xen/include/public/hvm/params.h        |   7 +-
 10 files changed, 293 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a38e5cdba2..a923a380d3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -318,6 +318,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_CRASH_CTL 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_SYNIC indicates that the 'synic' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_SYNIC 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 6160991af3..fb758d2ac3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -317,6 +317,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL))
         mask |= HVMPV_crash_ctl;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
+        mask |= HVMPV_synic;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index b685ac47ac..9860bcaf5f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -235,6 +235,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (4, "hcall_remote_tlb_flush"),
     (5, "apic_assist"),
     (6, "crash_ctl"),
+    (7, "synic"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index ea73ca1f62..a754fa6336 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -13,6 +13,7 @@
 
 #include <asm/apic.h>
 #include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
 
 #include "private.h"
 
@@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;
 
+typedef enum HV_MESSAGE_TYPE {
+    HvMessageTypeNone,
+    HvMessageTimerExpired = 0x80000010,
+} HV_MESSAGE_TYPE;
+
+typedef struct HV_MESSAGE_FLAGS {
+    uint8_t MessagePending:1;
+    uint8_t Reserved:7;
+} HV_MESSAGE_FLAGS;
+
+typedef struct HV_MESSAGE_HEADER {
+    HV_MESSAGE_TYPE MessageType;
+    uint16_t Reserved1;
+    HV_MESSAGE_FLAGS MessageFlags;
+    uint8_t PayloadSize;
+    uint64_t Reserved2;
+} HV_MESSAGE_HEADER;
+
+#define HV_MESSAGE_SIZE 256
+#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
+
+typedef struct HV_MESSAGE {
+    HV_MESSAGE_HEADER Header;
+    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
+} HV_MESSAGE;
+
 void viridian_apic_assist_set(struct vcpu *v)
 {
     HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
@@ -105,6 +132,73 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
             viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
         break;
 
+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        v->arch.hvm.viridian->scontrol = val;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        return X86EMUL_EXCEPTION;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        v->arch.hvm.viridian->siefp = val;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        viridian_unmap_guest_page(&v->arch.hvm.viridian->simp);
+        v->arch.hvm.viridian->simp.msr.raw = val;
+        viridian_dump_guest_page(v, "SIMP", &v->arch.hvm.viridian->simp);
+        if ( v->arch.hvm.viridian->simp.msr.fields.enabled )
+            viridian_map_guest_page(d, &v->arch.hvm.viridian->simp);
+        break;
+
+    case HV_X64_MSR_EOM:
+    {
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        v->arch.hvm.viridian->msg_pending = 0;
+        break;
+    }
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = idx - HV_X64_MSR_SINT0;
+        uint8_t vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * Invalidate any previous mapping by setting an out-of-range
+         * index.
+         */
+        v->arch.hvm.viridian->vector_to_sintx[vector] =
+            ARRAY_SIZE(v->arch.hvm.viridian->sint);
+
+        v->arch.hvm.viridian->sint[sintx].raw = val;
+
+        /* Vectors must be in the range 16-255 inclusive */
+        vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
+        if ( vector < 16 )
+            return X86EMUL_EXCEPTION;
+
+        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
+               vector);
+        v->arch.hvm.viridian->vector_to_sintx[vector] = sintx;
+
+        if ( v->arch.hvm.viridian->sint[sintx].fields.polling )
+            clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
+
+        break;
+    }
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
                  __func__, idx, val);
@@ -116,6 +210,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
+    struct domain *d = v->domain;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -137,6 +233,58 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         *val = v->arch.hvm.viridian->vp_assist.msr.raw;
         break;
 
+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->scontrol;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * The specification says that the version number is 0x00000001
+         * and should be in the lower 32-bits of the MSR, while the
+         * upper 32-bits are reserved... but it doesn't say what they
+         * should be set to. Assume everything but the bottom bit
+         * should be zero.
+         */
+        *val = 1ul;
+        break;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->siefp;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->simp.msr.raw;
+        break;
+
+    case HV_X64_MSR_EOM:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = 0;
+        break;
+
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = idx - HV_X64_MSR_SINT0;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->sint[sintx].raw;
+        break;
+    }
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
         return X86EMUL_EXCEPTION;
@@ -147,6 +295,20 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_synic_vcpu_init(struct vcpu *v)
 {
+    unsigned int i;
+
+    /*
+     * The specification says that all synthetic interrupts must be
+     * initally masked.
+     */
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->sint); i++ )
+        v->arch.hvm.viridian->sint[i].fields.mask = 1;
+
+    /* Initialize the mapping array with invalid values */
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->vector_to_sintx); i++ )
+        v->arch.hvm.viridian->vector_to_sintx[i] =
+            ARRAY_SIZE(v->arch.hvm.viridian->sint);
+
     return 0;
 }
 
@@ -158,15 +320,49 @@ int viridian_synic_domain_init(struct domain *d)
 void viridian_synic_vcpu_deinit(struct vcpu *v)
 {
     viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+    viridian_unmap_guest_page(&v->arch.hvm.viridian->simp);
 }
 
 void viridian_synic_domain_deinit(struct domain *d)
 {
 }
 
+void viridian_synic_poll_messages(struct vcpu *v)
+{
+    /* There are currently no message sources */
+}
+
+bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector)
+{
+    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
+
+    if ( sintx >= ARRAY_SIZE(v->arch.hvm.viridian->sint) )
+        return false;
+
+    return v->arch.hvm.viridian->sint[sintx].fields.auto_eoi;
+}
+
+void viridian_synic_ack_sint(struct vcpu *v, uint8_t vector)
+{
+    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
+
+    if ( sintx < ARRAY_SIZE(v->arch.hvm.viridian->sint) )
+        clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
+    unsigned int i;
+
+    BUILD_BUG_ON(ARRAY_SIZE(v->arch.hvm.viridian->sint) !=
+                 ARRAY_SIZE(ctxt->sint_msr));
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->sint); i++ )
+        ctxt->sint_msr[i] = v->arch.hvm.viridian->sint[i].raw;
+
+    ctxt->simp_msr = v->arch.hvm.viridian->simp.msr.raw;
+
     ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
     ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
 }
@@ -175,12 +371,30 @@ void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
     struct domain *d = v->domain;
+    unsigned int i;
 
     v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
         viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
 
     v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
+
+    v->arch.hvm.viridian->simp.msr.raw = ctxt->simp_msr;
+    if ( v->arch.hvm.viridian->simp.msr.fields.enabled )
+        viridian_map_guest_page(d, &v->arch.hvm.viridian->simp);
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->sint); i++ )
+    {
+        uint8_t vector;
+
+        v->arch.hvm.viridian->sint[i].raw = ctxt->sint_msr[i];
+
+        vector = v->arch.hvm.viridian->sint[i].fields.vector;
+        if ( vector < 16 )
+            continue;
+
+        v->arch.hvm.viridian->vector_to_sintx[vector] = i;
+    }
 }
 
 void viridian_synic_save_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index b2b9963486..8d8cf3a37d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -89,6 +89,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 
 /* Viridian CPUID leaf 3, Hypervisor Feature Indication */
 #define CPUID3D_CRASH_MSRS (1 << 10)
+#define CPUID3D_SINT_POLLING (1 << 17)
 
 /* Viridian CPUID leaf 4: Implementation Recommendations. */
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
@@ -177,6 +178,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             mask.AccessPartitionReferenceCounter = 1;
         if ( viridian_feature_mask(d) & HVMPV_reference_tsc )
             mask.AccessPartitionReferenceTsc = 1;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            mask.AccessSynicRegs = 1;
 
         u.mask = mask;
 
@@ -185,6 +188,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
         if ( viridian_feature_mask(d) & HVMPV_crash_ctl )
             res->d = CPUID3D_CRASH_MSRS;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            res->d = CPUID3D_SINT_POLLING;
 
         break;
     }
@@ -306,8 +311,16 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_wrmsr(v, idx, val);
 
+    case HV_X64_MSR_TSC_FREQUENCY:
+    case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
         return viridian_time_wrmsr(v, idx, val);
 
@@ -379,6 +392,12 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_rdmsr(v, idx, val);
 
     case HV_X64_MSR_TSC_FREQUENCY:
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a1a43cd792..45d6ef91da 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -461,11 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
 
 void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
 {
+    struct vcpu *v = vlapic_vcpu(vlapic);
     struct domain *d = vlapic_domain(vlapic);
 
     if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(d, vector);
 
+    if ( has_viridian_synic(v->domain) )
+        viridian_synic_ack_sint(v, vector);
+
     hvm_dpci_msi_eoi(d, vector);
 }
 
@@ -1301,6 +1305,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
     if ( !vlapic_enabled(vlapic) )
         return -1;
 
+    /*
+     * Poll the viridian message queues before checking the IRR since
+     * a sythetic interrupt may be asserted during the poll.
+     */
+    if ( has_viridian_synic(v->domain) )
+        viridian_synic_poll_messages(v);
+
     irr = vlapic_find_highest_irr(vlapic);
     if ( irr == -1 )
         return -1;
@@ -1360,7 +1371,10 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack)
     }
 
  done:
-    vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
+    if ( !has_viridian_synic(v->domain) ||
+         !viridian_synic_is_auto_eoi_sint(v, vector) )
+        vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
+
     vlapic_clear_irr(vector, vlapic);
     return 1;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index d8df6f4352..7892f98c7b 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -470,6 +470,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define has_viridian_apic_assist(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_apic_assist))
 
+#define has_viridian_synic(d) \
+    (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_synic))
+
 static inline void hvm_inject_exception(
     unsigned int vector, unsigned int type,
     unsigned int insn_len, int error_code)
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 1d281d825e..6d40d391e1 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -26,10 +26,30 @@ struct viridian_page
     void *ptr;
 };
 
+union viridian_sint_msr
+{
+    uint64_t raw;
+    struct
+    {
+        uint64_t vector:8;
+        uint64_t reserved_preserved1:8;
+        uint64_t mask:1;
+        uint64_t auto_eoi:1;
+        uint64_t polling:1;
+        uint64_t reserved_preserved2:45;
+    } fields;
+};
+
 struct viridian_vcpu
 {
     struct viridian_page vp_assist;
     bool apic_assist_pending;
+    uint64_t scontrol;
+    uint64_t siefp;
+    struct viridian_page simp;
+    union viridian_sint_msr sint[16];
+    uint8_t vector_to_sintx[256];
+    unsigned long msg_pending;
     uint64_t crash_param[5];
 };
 
@@ -90,6 +110,10 @@ void viridian_apic_assist_set(struct vcpu *v);
 bool viridian_apic_assist_completed(struct vcpu *v);
 void viridian_apic_assist_clear(struct vcpu *v);
 
+bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector);
+void viridian_synic_poll_messages(struct vcpu *v);
+void viridian_synic_ack_sint(struct vcpu *v, uint8_t vector);
+
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 40be84ecda..ec3e4df12c 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -602,6 +602,8 @@ struct hvm_viridian_vcpu_context {
     uint64_t vp_assist_msr;
     uint8_t  apic_assist_pending;
     uint8_t  _pad[7];
+    uint64_t simp_msr;
+    uint64_t sint_msr[16];
 };
 
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 72f633ef2d..e7e3c7c892 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -146,6 +146,10 @@
 #define _HVMPV_crash_ctl 6
 #define HVMPV_crash_ctl (1 << _HVMPV_crash_ctl)
 
+/* Enable SYNIC MSRs */
+#define _HVMPV_synic 7
+#define HVMPV_synic (1 << _HVMPV_synic)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -153,7 +157,8 @@
          HVMPV_reference_tsc | \
          HVMPV_hcall_remote_tlb_flush | \
          HVMPV_apic_assist | \
-         HVMPV_crash_ctl)
+         HVMPV_crash_ctl | \
+         HVMPV_synic)
 
 #endif
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 7/9] viridian: stop directly calling viridian_time_ref_count_freeze/thaw()...
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (5 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 8/9] viridian: add implementation of synthetic timers Paul Durrant
  2019-01-31 10:47 ` [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
  8 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...from arch_domain_shutdown/pause/unpause().

A subsequent patch will introduce an implementaion of synthetic timers
which will also need freeze/thaw hooks, so make the exported hooks more
generic and call through to (re-named and static) time_ref_count_freeze/thaw
functions.

NOTE: This patch also introduces a new time_ref_count() helper to return
      the current counter value. This is currently only used by the MSR
      read handler but the synthetic timer code will also need to use it.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/domain.c              | 12 ++++++------
 xen/arch/x86/hvm/viridian/time.c   | 24 +++++++++++++++++++++---
 xen/include/asm-x86/hvm/viridian.h |  4 ++--
 3 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 32dc4253ff..ed2a57a8a6 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -658,20 +658,20 @@ void arch_domain_destroy(struct domain *d)
 
 void arch_domain_shutdown(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }
 
 void arch_domain_pause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }
 
 void arch_domain_unpause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_thaw(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_thaw(d);
 }
 
 int arch_domain_soft_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index e8924adfa1..cb7162c2d4 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -91,7 +91,7 @@ static int64_t raw_trc_val(struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }
 
-void viridian_time_ref_count_freeze(struct domain *d)
+static void time_ref_count_freeze(struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -100,7 +100,7 @@ void viridian_time_ref_count_freeze(struct domain *d)
         trc->val = raw_trc_val(d) + trc->off;
 }
 
-void viridian_time_ref_count_thaw(struct domain *d)
+static void time_ref_count_thaw(struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -110,6 +110,24 @@ void viridian_time_ref_count_thaw(struct domain *d)
         trc->off = (int64_t)trc->val - raw_trc_val(d);
 }
 
+static int64_t time_ref_count(struct domain *d)
+{
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
+
+    return raw_trc_val(d) + trc->off;
+}
+
+void viridian_time_domain_freeze(struct domain *d)
+{
+    time_ref_count_freeze(d);
+}
+
+void viridian_time_domain_thaw(struct domain *d)
+{
+    time_ref_count_thaw(d);
+}
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
@@ -179,7 +197,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
             printk(XENLOG_G_INFO "d%d: VIRIDIAN MSR_TIME_REF_COUNT: accessed\n",
                    d->domain_id);
 
-        *val = raw_trc_val(d) + trc->off;
+        *val = time_ref_count(d);
         break;
     }
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 6d40d391e1..9a493cf048 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -97,8 +97,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 int
 viridian_hypercall(struct cpu_user_regs *regs);
 
-void viridian_time_ref_count_freeze(struct domain *d);
-void viridian_time_ref_count_thaw(struct domain *d);
+void viridian_time_domain_freeze(struct domain *d);
+void viridian_time_domain_thaw(struct domain *d);
 
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (6 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 7/9] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 14:46   ` Wei Liu
  2019-02-25 14:54   ` Jan Beulich
  2019-01-31 10:47 ` [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
  8 siblings, 2 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the STIMER0-15_CONFIG/COUNT MSRs
and hence a the first SynIC message source.

The new (and documented) 'stimer' viridian enlightenment group may be
specified to enable this feature.

NOTE: It is necessary for correct operation that timer expiration and
      message delivery time-stamping use the same time source as the guest.
      The specification is ambiguous but testing with a Windows 10 1803
      guest has shown that using the partition reference counter as a
      source whilst the guest is using RDTSC and the reference tsc page
      does not work correctly. Therefore the time_now() function is used.
      This implements the algorithm for acquiring partition reference time
      that is documented in the specifiction. This requires use of 128-bit
      arithmetic and hence __int128_t values are used in the calculation,
      although the result is a signed 64-bit value.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Re-worked missed ticks calculation
---
 docs/man/xl.cfg.5.pod.in               |  12 +-
 tools/libxl/libxl.h                    |   6 +
 tools/libxl/libxl_dom.c                |   4 +
 tools/libxl/libxl_types.idl            |   1 +
 xen/arch/x86/hvm/viridian/private.h    |   6 +
 xen/arch/x86/hvm/viridian/synic.c      |  48 +++-
 xen/arch/x86/hvm/viridian/time.c       | 343 +++++++++++++++++++++++++
 xen/arch/x86/hvm/viridian/viridian.c   |  19 ++
 xen/include/asm-x86/hvm/viridian.h     |  30 +++
 xen/include/public/arch-x86/hvm/save.h |   2 +
 xen/include/public/hvm/params.h        |   7 +-
 11 files changed, 474 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index ad81af1ed8..355c654693 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2167,11 +2167,19 @@ This group incorporates the crash control MSRs. These enlightenments
 allow Windows to write crash information such that it can be logged
 by Xen.
 
+=item B<stimer>
+
+This set incorporates the SynIC and synthetic timer MSRs. Windows will
+use synthetic timers in preference to emulated HPET for a source of
+ticks and hence enabling this group will ensure that ticks will be
+consistent with use of an enlightened time source (B<time_ref_count> or
+B<reference_tsc>).
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
-is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>
-and B<crash_ctl> groups.
+is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>,
+B<crash_ctl> and B<stimer> groups.
 
 =item B<all>
 
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a923a380d3..c8f219b0d3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -324,6 +324,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_SYNIC 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_STIMER indicates that the 'stimer' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_STIMER 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index fb758d2ac3..2ee0f82ee7 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -269,6 +269,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_TIME_REF_COUNT);
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST);
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL);
+        libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER);
     }
 
     libxl_for_each_set_bit(v, info->u.hvm.viridian_enable) {
@@ -320,6 +321,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
         mask |= HVMPV_synic;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
+        mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9860bcaf5f..1cce249de4 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -236,6 +236,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (5, "apic_assist"),
     (6, "crash_ctl"),
     (7, "synic"),
+    (8, "stimer"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 0bf34eefe0..6f81f62b60 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -74,6 +74,10 @@
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
+                                      unsigned int index,
+                                      int64_t expiration, int64_t delivery);
+
 int viridian_synic_vcpu_init(struct vcpu *v);
 int viridian_synic_domain_init(struct domain *d);
 
@@ -93,6 +97,8 @@ void viridian_synic_load_domain_ctxt(
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+void viridian_time_poll_timers(struct vcpu *v);
+
 int viridian_time_vcpu_init(struct vcpu *v);
 int viridian_time_domain_init(struct domain *d);
 
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index a754fa6336..edb5aa38ba 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -329,7 +329,53 @@ void viridian_synic_domain_deinit(struct domain *d)
 
 void viridian_synic_poll_messages(struct vcpu *v)
 {
-    /* There are currently no message sources */
+    viridian_time_poll_timers(v);
+}
+
+bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
+                                      unsigned int index,
+                                      int64_t expiration, int64_t delivery)
+{
+    const union viridian_sint_msr *vs = &v->arch.hvm.viridian->sint[sintx];
+    HV_MESSAGE *msg = v->arch.hvm.viridian->simp.ptr;
+    struct {
+        uint32_t TimerIndex;
+        uint32_t Reserved;
+        uint64_t ExpirationTime;
+        uint64_t DeliveryTime;
+    } payload = {
+        .TimerIndex = index,
+        .ExpirationTime = expiration,
+        .DeliveryTime = delivery,
+    };
+
+    if ( test_bit(sintx, &v->arch.hvm.viridian->msg_pending) )
+        return false;
+
+    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
+    msg += sintx;
+
+    /*
+     * To avoid using an atomic test-and-set this function must be called
+     * in context of the vcpu receiving the message.
+     */
+    ASSERT(v == current);
+    if ( msg->Header.MessageType != HvMessageTypeNone )
+    {
+        msg->Header.MessageFlags.MessagePending = 1;
+        set_bit(sintx, &v->arch.hvm.viridian->msg_pending);
+        return false;
+    }
+
+    msg->Header.MessageType = HvMessageTimerExpired;
+    msg->Header.MessageFlags.MessagePending = 0;
+    msg->Header.PayloadSize = sizeof(payload);
+    memcpy(msg->Payload, &payload, sizeof(payload));
+
+    if ( !vs->fields.mask )
+        vlapic_set_irq(vcpu_vlapic(v), vs->fields.vector, 0);
+
+    return true;
 }
 
 bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector)
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index cb7162c2d4..b2fe152038 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -12,6 +12,7 @@
 #include <xen/version.h>
 
 #include <asm/apic.h>
+#include <asm/event.h>
 #include <asm/hvm/support.h>
 
 #include "private.h"
@@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
     return raw_trc_val(d) + trc->off;
 }
 
+static int64_t time_now(struct domain *d)
+{
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
+    uint32_t start, end;
+    __int128_t tsc;
+    __int128_t scale;
+    int64_t offset;
+
+    /*
+     * If the reference TSC page is not enabled, or has been invalidated
+     * fall back to the partition reference counter.
+     */
+    if ( !p || !p->TscSequence )
+        return time_ref_count(d);
+
+    /*
+     * The following sampling algorithm for tsc, scale and offset is
+     * documented in the specifiction.
+     */
+    start = p->TscSequence;
+
+    do {
+        tsc = rdtsc();
+        scale = p->TscScale;
+        offset = p->TscOffset;
+
+        smp_mb();
+        end = p->TscSequence;
+    } while (end != start);
+
+    /*
+     * The specification says: "The partition reference time is computed
+     * by the following formula:
+     *
+     * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
+     *
+     * The multiplication is a 64 bit multiplication, which results in a
+     * 128 bit number which is then shifted 64 times to the right to obtain
+     * the high 64 bits."
+     */
+    return ((tsc * scale) >> 64) + offset;
+}
+
+static void stop_stimer(struct viridian_stimer *vs)
+{
+    struct vcpu *v = vs->v;
+    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
+
+    if ( !vs->started )
+        return;
+
+    stop_timer(&vs->timer);
+    clear_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
+    vs->started = false;
+}
+
+static void stimer_expire(void *data)
+{
+    struct viridian_stimer *vs = data;
+    struct vcpu *v = vs->v;
+    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
+
+    if ( !vs->config.fields.enabled )
+        return;
+
+    set_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
+    vcpu_kick(v);
+}
+
+static void start_stimer(struct viridian_stimer *vs)
+{
+    struct vcpu *v = vs->v;
+    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
+    int64_t now = time_now(v->domain);
+    s_time_t timeout;
+
+    if ( !test_and_set_bit(stimerx, &v->arch.hvm.viridian->stimer_enabled) )
+        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
+               stimerx);
+
+    if ( vs->config.fields.periodic )
+    {
+        unsigned int missed = 0;
+        int64_t next;
+
+        /*
+         * The specification says that if the timer is lazy then we
+         * skip over any missed expirations so we can treat this case
+         * as the same as if the timer is currently stopped, i.e. we
+         * just schedule expiration to be 'count' ticks from now.
+         */
+        if ( !vs->started || vs->config.fields.lazy )
+        {
+            next = now + vs->count;
+        }
+        else
+        {
+            /*
+             * The timer is already started, so we're re-scheduling.
+             * Hence advance the timer expiration by one tick.
+             */
+            next = vs->expiration + vs->count;
+
+            /* Now check to see if any expirations have been missed */
+            if ( now - next > 0 )
+                missed = (now - next) / vs->count;
+
+            /*
+             * The specification says that if the timer is not lazy then
+             * a non-zero missed count should be used to reduce the period
+             * of the timer until it catches up, unless the count has
+             * reached a 'significant number', in which case the timer
+             * should be treated as lazy. Unfortunately the specification
+             * does not state what that number is so the choice of number
+             * here is a pure guess.
+             */
+            if ( missed > 3 )
+            {
+                missed = 0;
+                next = now + vs->count;
+            }
+        }
+
+        vs->expiration = next;
+        timeout = ((next - now) * 100ull) / (missed + 1);
+    }
+    else
+    {
+        vs->expiration = vs->count;
+        if ( vs->count - now <= 0 )
+        {
+            set_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
+            return;
+        }
+
+        timeout = (vs->expiration - now) * 100ull;
+    }
+
+    vs->started = true;
+    migrate_timer(&vs->timer, smp_processor_id());
+    set_timer(&vs->timer, timeout + NOW());
+}
+
+static void poll_stimer(struct vcpu *v, unsigned int stimerx)
+{
+    struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
+
+    if ( !test_bit(stimerx, &v->arch.hvm.viridian->stimer_pending) )
+        return;
+
+    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
+                                           stimerx, vs->expiration,
+                                           time_now(v->domain)) )
+        return;
+
+    clear_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
+
+    if ( vs->config.fields.periodic )
+        start_stimer(vs);
+    else
+        vs->config.fields.enabled = 0;
+}
+
+void viridian_time_poll_timers(struct vcpu *v)
+{
+    unsigned int i;
+
+    if ( !v->arch.hvm.viridian->stimer_pending )
+       return;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+        poll_stimer(v, i);
+}
+
+void viridian_time_vcpu_freeze(struct vcpu *v)
+{
+    unsigned int i;
+
+    if ( !is_viridian_vcpu(v) )
+        return;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        if ( vs->started )
+            stop_timer(&vs->timer);
+    }
+}
+
+void viridian_time_vcpu_thaw(struct vcpu *v)
+{
+    unsigned int i;
+
+    if ( !is_viridian_vcpu(v) )
+        return;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+    }
+}
+
 void viridian_time_domain_freeze(struct domain *d)
 {
+    struct vcpu *v;
+
+    if ( !is_viridian_domain(d) )
+        return;
+
+    for_each_vcpu ( d, v )
+        viridian_time_vcpu_freeze(v);
+
     time_ref_count_freeze(d);
 }
 
 void viridian_time_domain_thaw(struct domain *d)
 {
+    struct vcpu *v;
+
+    if ( !is_viridian_domain(d) )
+        return;
+
     time_ref_count_thaw(d);
+
+    for_each_vcpu ( d, v )
+        viridian_time_vcpu_thaw(v);
 }
 
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
@@ -149,6 +373,57 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         }
         break;
 
+    case HV_X64_MSR_TIME_REF_COUNT:
+        return X86EMUL_EXCEPTION;
+
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    {
+        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        stop_stimer(vs);
+
+        vs->config.raw = val;
+
+        if ( !vs->config.fields.sintx )
+            vs->config.fields.enabled = 0;
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+
+        break;
+    }
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_COUNT:
+    {
+        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        stop_stimer(vs);
+
+        vs->count = val;
+
+        if ( !vs->count  )
+            vs->config.fields.enabled = 0;
+        else if ( vs->config.fields.auto_enable )
+            vs->config.fields.enabled = 1;
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+
+        break;
+    }
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
                  __func__, idx, val);
@@ -201,6 +476,32 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
     }
 
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    {
+        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->stimer[stimerx].config.raw;
+        break;
+    }
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_COUNT:
+    {
+        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        *val = v->arch.hvm.viridian->stimer[stimerx].count;
+        break;
+    }
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
         return X86EMUL_EXCEPTION;
@@ -211,6 +512,16 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_time_vcpu_init(struct vcpu *v)
 {
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        vs->v = v;
+        init_timer(&vs->timer, stimer_expire, vs, v->processor);
+    }
+
     return 0;
 }
 
@@ -221,6 +532,15 @@ int viridian_time_domain_init(struct domain *d)
 
 void viridian_time_vcpu_deinit(struct vcpu *v)
 {
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        kill_timer(&vs->timer);
+        vs->v = NULL;
+    }
 }
 
 void viridian_time_domain_deinit(struct domain *d)
@@ -231,11 +551,34 @@ void viridian_time_domain_deinit(struct domain *d)
 void viridian_time_save_vcpu_ctxt(
     const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
 {
+    unsigned int i;
+
+    BUILD_BUG_ON(ARRAY_SIZE(v->arch.hvm.viridian->stimer) !=
+                 ARRAY_SIZE(ctxt->stimer_config_msr));
+    BUILD_BUG_ON(ARRAY_SIZE(v->arch.hvm.viridian->stimer) !=
+                 ARRAY_SIZE(ctxt->stimer_count_msr));
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        ctxt->stimer_config_msr[i] = vs->config.raw;
+        ctxt->stimer_count_msr[i] = vs->count;
+    }
 }
 
 void viridian_time_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.viridian->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[i];
+
+        vs->config.raw = ctxt->stimer_config_msr[i];
+        vs->count = ctxt->stimer_count_msr[i];
+    }
 }
 
 void viridian_time_save_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 8d8cf3a37d..bca544aced 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -180,6 +180,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             mask.AccessPartitionReferenceTsc = 1;
         if ( viridian_feature_mask(d) & HVMPV_synic )
             mask.AccessSynicRegs = 1;
+        if ( viridian_feature_mask(d) & HVMPV_stimer )
+            mask.AccessSyntheticTimerRegs = 1;
 
         u.mask = mask;
 
@@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_TSC_FREQUENCY:
     case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
+    case HV_X64_MSR_TIME_REF_COUNT:
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    case HV_X64_MSR_STIMER3_COUNT:
         return viridian_time_wrmsr(v, idx, val);
 
     case HV_X64_MSR_CRASH_P0:
@@ -404,6 +415,14 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
     case HV_X64_MSR_TIME_REF_COUNT:
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    case HV_X64_MSR_STIMER3_COUNT:
         return viridian_time_rdmsr(v, idx, val);
 
     case HV_X64_MSR_CRASH_P0:
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 9a493cf048..32da65a064 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -40,6 +40,33 @@ union viridian_sint_msr
     } fields;
 };
 
+union viridian_stimer_config_msr
+{
+    uint64_t raw;
+    struct
+    {
+        uint64_t enabled:1;
+        uint64_t periodic:1;
+        uint64_t lazy:1;
+        uint64_t auto_enable:1;
+        uint64_t vector:8;
+        uint64_t direct_mode:1;
+        uint64_t reserved_zero1:3;
+        uint64_t sintx:4;
+        uint64_t reserved_zero2:44;
+    } fields;
+};
+
+struct viridian_stimer {
+    struct vcpu *v;
+    struct timer timer;
+    union viridian_stimer_config_msr config;
+    uint64_t count;
+    int64_t expiration;
+    s_time_t timeout;
+    bool started;
+};
+
 struct viridian_vcpu
 {
     struct viridian_page vp_assist;
@@ -50,6 +77,9 @@ struct viridian_vcpu
     union viridian_sint_msr sint[16];
     uint8_t vector_to_sintx[256];
     unsigned long msg_pending;
+    struct viridian_stimer stimer[4];
+    unsigned long stimer_enabled;
+    unsigned long stimer_pending;
     uint64_t crash_param[5];
 };
 
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index ec3e4df12c..8344aa471f 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -604,6 +604,8 @@ struct hvm_viridian_vcpu_context {
     uint8_t  _pad[7];
     uint64_t simp_msr;
     uint64_t sint_msr[16];
+    uint64_t stimer_config_msr[4];
+    uint64_t stimer_count_msr[4];
 };
 
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index e7e3c7c892..e06b0942d0 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -150,6 +150,10 @@
 #define _HVMPV_synic 7
 #define HVMPV_synic (1 << _HVMPV_synic)
 
+/* Enable STIMER MSRs */
+#define _HVMPV_stimer 8
+#define HVMPV_stimer (1 << _HVMPV_stimer)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -158,7 +162,8 @@
          HVMPV_hcall_remote_tlb_flush | \
          HVMPV_apic_assist | \
          HVMPV_crash_ctl | \
-         HVMPV_synic)
+         HVMPV_synic | \
+         HVMPV_stimer)
 
 #endif
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
  2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
                   ` (7 preceding siblings ...)
  2019-01-31 10:47 ` [PATCH v3 8/9] viridian: add implementation of synthetic timers Paul Durrant
@ 2019-01-31 10:47 ` Paul Durrant
  2019-01-31 14:46   ` Wei Liu
  2019-02-25 15:00   ` Jan Beulich
  8 siblings, 2 replies; 27+ messages in thread
From: Paul Durrant @ 2019-01-31 10:47 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch adds an implementation of the hypercall as documented in the
specification [1], section 10.5.2. This enlightenment, as with others, is
advertised by CPUID leaf 0x40000004 and is under control of a new
'hcall_ipi' option in libxl.

If used, this enlightenment should mean the guest only takes a single VMEXIT
to issue IPIs to multiple vCPUs rather than the multiple VMEXITs that would
result from using the emulated local APIC.

[1] https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 docs/man/xl.cfg.5.pod.in             |  6 +++
 tools/libxl/libxl.h                  |  6 +++
 tools/libxl/libxl_dom.c              |  3 ++
 tools/libxl/libxl_types.idl          |  1 +
 xen/arch/x86/hvm/viridian/viridian.c | 61 ++++++++++++++++++++++++++++
 xen/include/public/hvm/params.h      |  7 +++-
 6 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 355c654693..c7d70e618b 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2175,6 +2175,12 @@ ticks and hence enabling this group will ensure that ticks will be
 consistent with use of an enlightened time source (B<time_ref_count> or
 B<reference_tsc>).
 
+=item B<hcall_ipi>
+
+This set incorporates use of a hypercall for interprocessor interrupts.
+This enlightenment may improve performance of Windows guests with multiple
+virtual CPUs.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c8f219b0d3..482499a6c0 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -330,6 +330,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_STIMER 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_HCALL_IPI indicates that the 'hcall_ipi' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_HCALL_IPI 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2ee0f82ee7..879c806139 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -324,6 +324,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
         mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
+        mask |= HVMPV_hcall_ipi;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 1cce249de4..cb4702fd7a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -237,6 +237,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (6, "crash_ctl"),
     (7, "synic"),
     (8, "stimer"),
+    (9, "hcall_ipi"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index bca544aced..010999678d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -28,6 +28,7 @@
 #define HvFlushVirtualAddressSpace 0x0002
 #define HvFlushVirtualAddressList  0x0003
 #define HvNotifyLongSpinWait       0x0008
+#define HvSendSyntheticClusterIpi  0x000b
 #define HvGetPartitionId           0x0046
 #define HvExtCallQueryCapabilities 0x8001
 
@@ -95,6 +96,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
+#define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -206,6 +208,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
         if ( !cpu_has_vmx_apic_reg_virt )
             res->a |= CPUID4A_MSR_BASED_APIC;
+        if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
+            res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
 
         /*
          * This value is the recommended number of attempts to try to
@@ -659,7 +663,64 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         status = HV_STATUS_SUCCESS;
         break;
     }
+    case HvSendSyntheticClusterIpi:
+    {
+        struct vcpu *v;
+        uint32_t vector;
+        uint64_t vcpu_mask;
+
+        status = HV_STATUS_INVALID_PARAMETER;
+
+        /* Get input parameters. */
+        if ( input.fast )
+        {
+            if ( input_params_gpa >> 32 != 0 )
+                break;
+
+            vector = input_params_gpa;
+            vcpu_mask = output_params_gpa;
+        }
+        else
+        {
+            struct {
+                uint32_t vector;
+                uint8_t target_vtl;
+                uint8_t reserved_zero[3];
+                uint64_t vcpu_mask;
+            } input_params;
+
+            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                          sizeof(input_params)) !=
+                 HVMTRANS_okay )
+                break;
+
+            if ( input_params.target_vtl ||
+                 input_params.reserved_zero[0] ||
+                 input_params.reserved_zero[1] ||
+                 input_params.reserved_zero[2] )
+                break;
 
+            vector = input_params.vector;
+            vcpu_mask = input_params.vcpu_mask;
+        }
+
+        if ( vector < 0x10 || vector > 0xff )
+            break;
+
+        for_each_vcpu ( currd, v )
+        {
+            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+                break;
+
+            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+                continue;
+
+            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+        }
+
+        status = HV_STATUS_SUCCESS;
+        break;
+    }
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index e06b0942d0..36832e4b94 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -154,6 +154,10 @@
 #define _HVMPV_stimer 8
 #define HVMPV_stimer (1 << _HVMPV_stimer)
 
+/* Use Synthetic Cluster IPI Hypercall */
+#define _HVMPV_hcall_ipi 9
+#define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -163,7 +167,8 @@
          HVMPV_apic_assist | \
          HVMPV_crash_ctl | \
          HVMPV_synic | \
-         HVMPV_stimer)
+         HVMPV_stimer | \
+         HVMPV_hcall_ipi)
 
 #endif
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules
  2019-01-31 10:47 ` [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
@ 2019-01-31 14:45   ` Wei Liu
  0 siblings, 0 replies; 27+ messages in thread
From: Wei Liu @ 2019-01-31 14:45 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel, Roger Pau Monné, Wei Liu, Jan Beulich, Andrew Cooper

On Thu, Jan 31, 2019 at 10:47:24AM +0000, Paul Durrant wrote:
> This patch simply adds domain and vcpu init/deinit hooks into the synic
> and time modules and wires them into viridian_[domain|vcpu]_[init|deinit]().
> Only one of the hooks is currently needed (to unmap the 'VP Assist' page)
> but subsequent patches will make use of the others.
> 
> NOTE: To perform the unmap of the VP Assist page,
>       viridian_unmap_guest_page() is now directly called in the new
>       viridian_synic_vcpu_deinit() function (which is safe even if
>       is_viridian_vcpu() evaluates to false). This replaces the slightly
>       hacky mechanism of faking a zero write to the
>       HV_X64_MSR_VP_ASSIST_PAGE MSR in viridian_cpu_deinit().
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
  2019-01-31 10:47 ` [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
@ 2019-01-31 14:46   ` Wei Liu
  2019-02-25 14:12   ` Jan Beulich
  1 sibling, 0 replies; 27+ messages in thread
From: Wei Liu @ 2019-01-31 14:46 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jan Beulich, xen-devel, Roger Pau Monné

On Thu, Jan 31, 2019 at 10:47:27AM +0000, Paul Durrant wrote:
> This patch introduces an implementation of the SCONTROL, SVERSION, SIEFP,
> SIMP, EOM and SINT0-15 SynIC MSRs. No message source is added and, as such,
> nothing will yet generate a synthetic interrupt. A subsequent patch will
> add an implementation of synthetic timers which will need the infrastructure
> added by this patch to deliver expiry messages to the guest.
> 
> NOTE: A 'synic' option is added to the toolstack viridian enlightenments
>       enumeration but is deliberately not documented as enabling these
>       SynIC registers without a message source is only useful for
>       debugging.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v3:
>  - Add the 'SintPollingModeAvailable' bit in CPUID leaf 3
> ---
>  tools/libxl/libxl.h                    |   6 +
>  tools/libxl/libxl_dom.c                |   3 +
>  tools/libxl/libxl_types.idl            |   1 +

For the toolstack part:

Acked-by: Wei Liu <wei.liu2@citrix.com>

Sorry I don't have time to go through the viridian spec, but the rest
looks sensible to me.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-01-31 10:47 ` [PATCH v3 8/9] viridian: add implementation of synthetic timers Paul Durrant
@ 2019-01-31 14:46   ` Wei Liu
  2019-02-25 14:54   ` Jan Beulich
  1 sibling, 0 replies; 27+ messages in thread
From: Wei Liu @ 2019-01-31 14:46 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jan Beulich, xen-devel, Roger Pau Monné

On Thu, Jan 31, 2019 at 10:47:29AM +0000, Paul Durrant wrote:
> This patch introduces an implementation of the STIMER0-15_CONFIG/COUNT MSRs
> and hence a the first SynIC message source.
> 
> The new (and documented) 'stimer' viridian enlightenment group may be
> specified to enable this feature.
> 
> NOTE: It is necessary for correct operation that timer expiration and
>       message delivery time-stamping use the same time source as the guest.
>       The specification is ambiguous but testing with a Windows 10 1803
>       guest has shown that using the partition reference counter as a
>       source whilst the guest is using RDTSC and the reference tsc page
>       does not work correctly. Therefore the time_now() function is used.
>       This implements the algorithm for acquiring partition reference time
>       that is documented in the specifiction. This requires use of 128-bit
>       arithmetic and hence __int128_t values are used in the calculation,
>       although the result is a signed 64-bit value.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v3:
>  - Re-worked missed ticks calculation
> ---
>  docs/man/xl.cfg.5.pod.in               |  12 +-
>  tools/libxl/libxl.h                    |   6 +
>  tools/libxl/libxl_dom.c                |   4 +
>  tools/libxl/libxl_types.idl            |   1 +

Acked-by: Wei Liu <wei.liu2@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
  2019-01-31 10:47 ` [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
@ 2019-01-31 14:46   ` Wei Liu
  2019-02-25 15:00   ` Jan Beulich
  1 sibling, 0 replies; 27+ messages in thread
From: Wei Liu @ 2019-01-31 14:46 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Jan Beulich, xen-devel, Roger Pau Monné

On Thu, Jan 31, 2019 at 10:47:30AM +0000, Paul Durrant wrote:
> This patch adds an implementation of the hypercall as documented in the
> specification [1], section 10.5.2. This enlightenment, as with others, is
> advertised by CPUID leaf 0x40000004 and is under control of a new
> 'hcall_ipi' option in libxl.
> 
> If used, this enlightenment should mean the guest only takes a single VMEXIT
> to issue IPIs to multiple vCPUs rather than the multiple VMEXITs that would
> result from using the emulated local APIC.
> 
> [1] https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> ---
>  docs/man/xl.cfg.5.pod.in             |  6 +++
>  tools/libxl/libxl.h                  |  6 +++
>  tools/libxl/libxl_dom.c              |  3 ++
>  tools/libxl/libxl_types.idl          |  1 +

Acked-by: Wei Liu <wei.liu2@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
  2019-01-31 10:47 ` [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
  2019-01-31 14:46   ` Wei Liu
@ 2019-02-25 14:12   ` Jan Beulich
  2019-03-04 17:01     ` Paul Durrant
  1 sibling, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2019-02-25 14:12 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> @@ -105,6 +132,73 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>              viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
>          break;
>  
> +    case HV_X64_MSR_SCONTROL:
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        v->arch.hvm.viridian->scontrol = val;
> +        break;
> +
> +    case HV_X64_MSR_SVERSION:
> +        return X86EMUL_EXCEPTION;
> +
> +    case HV_X64_MSR_SIEFP:
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        v->arch.hvm.viridian->siefp = val;
> +        break;
> +
> +    case HV_X64_MSR_SIMP:
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        viridian_unmap_guest_page(&v->arch.hvm.viridian->simp);
> +        v->arch.hvm.viridian->simp.msr.raw = val;
> +        viridian_dump_guest_page(v, "SIMP", &v->arch.hvm.viridian->simp);
> +        if ( v->arch.hvm.viridian->simp.msr.fields.enabled )
> +            viridian_map_guest_page(d, &v->arch.hvm.viridian->simp);
> +        break;
> +
> +    case HV_X64_MSR_EOM:
> +    {
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        v->arch.hvm.viridian->msg_pending = 0;
> +        break;
> +    }
> +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:

Stray braces for the previous case, and a missing blank line between
both.

> +    {
> +        unsigned int sintx = idx - HV_X64_MSR_SINT0;
> +        uint8_t vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        /*
> +         * Invalidate any previous mapping by setting an out-of-range
> +         * index.
> +         */
> +        v->arch.hvm.viridian->vector_to_sintx[vector] =
> +            ARRAY_SIZE(v->arch.hvm.viridian->sint);
> +
> +        v->arch.hvm.viridian->sint[sintx].raw = val;
> +
> +        /* Vectors must be in the range 16-255 inclusive */
> +        vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> +        if ( vector < 16 )
> +            return X86EMUL_EXCEPTION;

The Viridian spec may surely specify architecturally inconsistent
behavior, but I'd like to double check that raising an exception
after having updated some state already is really intended here.

> +        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
> +               vector);
> +        v->arch.hvm.viridian->vector_to_sintx[vector] = sintx;
> +
> +        if ( v->arch.hvm.viridian->sint[sintx].fields.polling )
> +            clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> +
> +        break;
> +    }
>      default:

Missing blank line above here again (and one more in the rdmsr code
below).

> @@ -116,6 +210,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>  
>  int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>  {
> +    struct domain *d = v->domain;

const?

> +void viridian_synic_poll_messages(struct vcpu *v)

const ?

> +bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector)

const

> +{
> +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];

I realize the array read from has uint8_t elements, but can this and ...

> +    if ( sintx >= ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> +        return false;
> +
> +    return v->arch.hvm.viridian->sint[sintx].fields.auto_eoi;
> +}
> +
> +void viridian_synic_ack_sint(struct vcpu *v, uint8_t vector)
> +{
> +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];

... this please still be unsigned int?

Also for both functions I wonder whether their first parameters
wouldn't better be struct viridian_vcpu *. This would certainly help
readability here.

> +    if ( sintx < ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> +        clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);

You also may want to use array_index_nospec() here and
array_access_nospec() above, despite there not being very big a
range to run past array bounds .

> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -461,11 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>  
>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>  {
> +    struct vcpu *v = vlapic_vcpu(vlapic);
>      struct domain *d = vlapic_domain(vlapic);
>  
>      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>          vioapic_update_EOI(d, vector);
>  
> +    if ( has_viridian_synic(v->domain) )

Please use d here. And could this be "else if()"?

> @@ -1301,6 +1305,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
>      if ( !vlapic_enabled(vlapic) )
>          return -1;
>  
> +    /*
> +     * Poll the viridian message queues before checking the IRR since
> +     * a sythetic interrupt may be asserted during the poll.
> +     */
> +    if ( has_viridian_synic(v->domain) )
> +        viridian_synic_poll_messages(v);
> +
>      irr = vlapic_find_highest_irr(vlapic);
>      if ( irr == -1 )
>          return -1;

While architecturally IRR can indeed become set at any time, is it
acceptable to all of our other code for two successive invocations
to the function to potentially produce different results? I'm in
particular worried about {svm,vmx}_intr_assist(), and in their
context I also wonder whether you don't need a new
hvm_intsrc_* - it looks as if you use enough of the LAPIC to get
away without, but I'm not entirely certain.

> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -26,10 +26,30 @@ struct viridian_page
>      void *ptr;
>  };
>  
> +union viridian_sint_msr
> +{
> +    uint64_t raw;
> +    struct
> +    {
> +        uint64_t vector:8;
> +        uint64_t reserved_preserved1:8;
> +        uint64_t mask:1;
> +        uint64_t auto_eoi:1;
> +        uint64_t polling:1;
> +        uint64_t reserved_preserved2:45;
> +    } fields;

This being an internal header, does the inner struct really need
a field name?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-01-31 10:47 ` [PATCH v3 8/9] viridian: add implementation of synthetic timers Paul Durrant
  2019-01-31 14:46   ` Wei Liu
@ 2019-02-25 14:54   ` Jan Beulich
  2019-03-06 11:23     ` Paul Durrant
  1 sibling, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2019-02-25 14:54 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/viridian/synic.c
> +++ b/xen/arch/x86/hvm/viridian/synic.c
> @@ -329,7 +329,53 @@ void viridian_synic_domain_deinit(struct domain *d)
>  
>  void viridian_synic_poll_messages(struct vcpu *v)
>  {
> -    /* There are currently no message sources */
> +    viridian_time_poll_timers(v);
> +}
> +
> +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> +                                      unsigned int index,
> +                                      int64_t expiration, int64_t delivery)
> +{
> +    const union viridian_sint_msr *vs = &v->arch.hvm.viridian->sint[sintx];
> +    HV_MESSAGE *msg = v->arch.hvm.viridian->simp.ptr;
> +    struct {
> +        uint32_t TimerIndex;
> +        uint32_t Reserved;
> +        uint64_t ExpirationTime;
> +        uint64_t DeliveryTime;
> +    } payload = {
> +        .TimerIndex = index,
> +        .ExpirationTime = expiration,
> +        .DeliveryTime = delivery,
> +    };
> +
> +    if ( test_bit(sintx, &v->arch.hvm.viridian->msg_pending) )
> +        return false;
> +
> +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> +    msg += sintx;
> +
> +    /*
> +     * To avoid using an atomic test-and-set this function must be called
> +     * in context of the vcpu receiving the message.
> +     */
> +    ASSERT(v == current);
> +    if ( msg->Header.MessageType != HvMessageTypeNone )
> +    {
> +        msg->Header.MessageFlags.MessagePending = 1;
> +        set_bit(sintx, &v->arch.hvm.viridian->msg_pending);

As per the comment above this is always in context of the subject
vCPU. It looks to me as if this was also the case for the two
clear_bit() on the field in the prior patch. If so, all three could be
the non-atomic variants instead.

> +        return false;
> +    }
> +
> +    msg->Header.MessageType = HvMessageTimerExpired;
> +    msg->Header.MessageFlags.MessagePending = 0;
> +    msg->Header.PayloadSize = sizeof(payload);
> +    memcpy(msg->Payload, &payload, sizeof(payload));
> +
> +    if ( !vs->fields.mask )
> +        vlapic_set_irq(vcpu_vlapic(v), vs->fields.vector, 0);

If this wasn't with v == current, I think you'd also need a barrier
here. Could you extend the comment above to also mention this
aspect?

> @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
>      return raw_trc_val(d) + trc->off;
>  }
>  
> +static int64_t time_now(struct domain *d)

Why would this return a signed value? And can't the function
parameter be const?

> +{
> +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> +    uint32_t start, end;
> +    __int128_t tsc;
> +    __int128_t scale;

I don't think you need both of them be 128 bits wide. I also don't
see why either would want to be of a signed type.

> +    int64_t offset;
> +
> +    /*
> +     * If the reference TSC page is not enabled, or has been invalidated
> +     * fall back to the partition reference counter.
> +     */
> +    if ( !p || !p->TscSequence )
> +        return time_ref_count(d);
> +
> +    /*
> +     * The following sampling algorithm for tsc, scale and offset is
> +     * documented in the specifiction.
> +     */
> +    start = p->TscSequence;
> +
> +    do {
> +        tsc = rdtsc();
> +        scale = p->TscScale;
> +        offset = p->TscOffset;
> +
> +        smp_mb();
> +        end = p->TscSequence;

Why is this a full barrier, rather than just a read one? And don't you need
to add a counterpart in update_reference_tsc()?

> +    } while (end != start);

update_reference_tsc() increments TscSequence. If end doesn't match
start at this point, you're entering a near infinite loop here as long as
you don't update start inside the loop. I also think that there's a
second read barrier needed between this initial reading of the sequence
number and the reading of the actual values.

> +    /*
> +     * The specification says: "The partition reference time is computed
> +     * by the following formula:
> +     *
> +     * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> +     *
> +     * The multiplication is a 64 bit multiplication, which results in a
> +     * 128 bit number which is then shifted 64 times to the right to obtain
> +     * the high 64 bits."
> +     */
> +    return ((tsc * scale) >> 64) + offset;
> +}
> +
> +static void stop_stimer(struct viridian_stimer *vs)
> +{
> +    struct vcpu *v = vs->v;

const?

> +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> +
> +    if ( !vs->started )
> +        return;
> +
> +    stop_timer(&vs->timer);
> +    clear_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> +    vs->started = false;
> +}
> +
> +static void stimer_expire(void *data)
> +{
> +    struct viridian_stimer *vs = data;

const?

> +    struct vcpu *v = vs->v;
> +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> +
> +    if ( !vs->config.fields.enabled )
> +        return;
> +
> +    set_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> +    vcpu_kick(v);
> +}
> +
> +static void start_stimer(struct viridian_stimer *vs)
> +{
> +    struct vcpu *v = vs->v;
> +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> +    int64_t now = time_now(v->domain);
> +    s_time_t timeout;
> +
> +    if ( !test_and_set_bit(stimerx, &v->arch.hvm.viridian->stimer_enabled) )
> +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> +               stimerx);
> +
> +    if ( vs->config.fields.periodic )
> +    {
> +        unsigned int missed = 0;
> +        int64_t next;
> +
> +        /*
> +         * The specification says that if the timer is lazy then we
> +         * skip over any missed expirations so we can treat this case
> +         * as the same as if the timer is currently stopped, i.e. we
> +         * just schedule expiration to be 'count' ticks from now.
> +         */
> +        if ( !vs->started || vs->config.fields.lazy )
> +        {
> +            next = now + vs->count;
> +        }

Unnecessary braces.

> @@ -149,6 +373,57 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>          }
>          break;
>  
> +    case HV_X64_MSR_TIME_REF_COUNT:
> +        return X86EMUL_EXCEPTION;

Isn't this an unrelated change?

> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        stop_stimer(vs);
> +
> +        vs->config.raw = val;
> +
> +        if ( !vs->config.fields.sintx )
> +            vs->config.fields.enabled = 0;
> +
> +        if ( vs->config.fields.enabled )
> +            start_stimer(vs);
> +
> +        break;
> +    }
> +    case HV_X64_MSR_STIMER0_COUNT:

Missing blank line again (and also further down here as well as in the
rdmsr code).

> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_COUNT:
> +    {
> +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        stop_stimer(vs);
> +
> +        vs->count = val;
> +
> +        if ( !vs->count  )

Any reason you don't use val here (which the compiler likely will do
anyway)?

> @@ -201,6 +476,32 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>          break;
>      }
>  
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = v->arch.hvm.viridian->stimer[stimerx].config.raw;

While more noticeable here and ...

> +        break;
> +    }
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_COUNT:
> +    {
> +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = v->arch.hvm.viridian->stimer[stimerx].count;

... here, array_access_nospec() are probably needed not just here,
but also in the wrmsr logic.

> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -40,6 +40,33 @@ union viridian_sint_msr
>      } fields;
>  };
>  
> +union viridian_stimer_config_msr
> +{
> +    uint64_t raw;
> +    struct
> +    {
> +        uint64_t enabled:1;
> +        uint64_t periodic:1;
> +        uint64_t lazy:1;
> +        uint64_t auto_enable:1;
> +        uint64_t vector:8;
> +        uint64_t direct_mode:1;
> +        uint64_t reserved_zero1:3;
> +        uint64_t sintx:4;
> +        uint64_t reserved_zero2:44;
> +    } fields;
> +};
> +
> +struct viridian_stimer {
> +    struct vcpu *v;

Isn't a full 8-byte pointer a little too much overhead here? You could
instead store the timer index ...

> +    struct timer timer;
> +    union viridian_stimer_config_msr config;
> +    uint64_t count;
> +    int64_t expiration;
> +    s_time_t timeout;
> +    bool started;

... in a field using the 7-byte padding here, and use container_of()
to get at the outer structure.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
  2019-01-31 10:47 ` [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
  2019-01-31 14:46   ` Wei Liu
@ 2019-02-25 15:00   ` Jan Beulich
  1 sibling, 0 replies; 27+ messages in thread
From: Jan Beulich @ 2019-02-25 15:00 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> @@ -659,7 +663,64 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          status = HV_STATUS_SUCCESS;
>          break;
>      }
> +    case HvSendSyntheticClusterIpi:

Missing blank line again, and one more at the end of this case block.

> +    {
> +        struct vcpu *v;
> +        uint32_t vector;
> +        uint64_t vcpu_mask;
> +
> +        status = HV_STATUS_INVALID_PARAMETER;
> +
> +        /* Get input parameters. */
> +        if ( input.fast )
> +        {
> +            if ( input_params_gpa >> 32 != 0 )

Please parenthesize the shift expression (or omit the != 0).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
  2019-02-25 14:12   ` Jan Beulich
@ 2019-03-04 17:01     ` Paul Durrant
  0 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-03-04 17:01 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Jan Beulich
> Sent: 25 February 2019 14:12
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim (Xen.org) <tim@xen.org>; Julien
> Grall <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Roger Pau Monne
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
> 
> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> > @@ -105,6 +132,73 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >              viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
> >          break;
> >
> > +    case HV_X64_MSR_SCONTROL:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->scontrol = val;
> > +        break;
> > +
> > +    case HV_X64_MSR_SVERSION:
> > +        return X86EMUL_EXCEPTION;
> > +
> > +    case HV_X64_MSR_SIEFP:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->siefp = val;
> > +        break;
> > +
> > +    case HV_X64_MSR_SIMP:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        viridian_unmap_guest_page(&v->arch.hvm.viridian->simp);
> > +        v->arch.hvm.viridian->simp.msr.raw = val;
> > +        viridian_dump_guest_page(v, "SIMP", &v->arch.hvm.viridian->simp);
> > +        if ( v->arch.hvm.viridian->simp.msr.fields.enabled )
> > +            viridian_map_guest_page(d, &v->arch.hvm.viridian->simp);
> > +        break;
> > +
> > +    case HV_X64_MSR_EOM:
> > +    {
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->msg_pending = 0;
> > +        break;
> > +    }
> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> 
> Stray braces for the previous case, and a missing blank line between
> both.
>

Ok.
 
> > +    {
> > +        unsigned int sintx = idx - HV_X64_MSR_SINT0;
> > +        uint8_t vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        /*
> > +         * Invalidate any previous mapping by setting an out-of-range
> > +         * index.
> > +         */
> > +        v->arch.hvm.viridian->vector_to_sintx[vector] =
> > +            ARRAY_SIZE(v->arch.hvm.viridian->sint);
> > +
> > +        v->arch.hvm.viridian->sint[sintx].raw = val;
> > +
> > +        /* Vectors must be in the range 16-255 inclusive */
> > +        vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> > +        if ( vector < 16 )
> > +            return X86EMUL_EXCEPTION;
> 
> The Viridian spec may surely specify architecturally inconsistent
> behavior, but I'd like to double check that raising an exception
> after having updated some state already is really intended here.
> 

The spec is ambiguous as to whether a subsequent read should see the invalid value, but I agree it would be better to avoid changing state.

> > +        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
> > +               vector);
> > +        v->arch.hvm.viridian->vector_to_sintx[vector] = sintx;
> > +
> > +        if ( v->arch.hvm.viridian->sint[sintx].fields.polling )
> > +            clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> > +
> > +        break;
> > +    }
> >      default:
> 
> Missing blank line above here again (and one more in the rdmsr code
> below).

Ok.

> 
> > @@ -116,6 +210,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >
> >  int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >  {
> > +    struct domain *d = v->domain;
> 
> const?

Ok.

> 
> > +void viridian_synic_poll_messages(struct vcpu *v)
> 
> const ?

At the moment, since the function does nothing, yes.

> 
> > +bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector)
> 
> const

Ok.

> 
> > +{
> > +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
> 
> I realize the array read from has uint8_t elements, but can this and ...
> 
> > +    if ( sintx >= ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> > +        return false;
> > +
> > +    return v->arch.hvm.viridian->sint[sintx].fields.auto_eoi;
> > +}
> > +
> > +void viridian_synic_ack_sint(struct vcpu *v, uint8_t vector)
> > +{
> > +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
> 
> ... this please still be unsigned int?

Ok.

> 
> Also for both functions I wonder whether their first parameters
> wouldn't better be struct viridian_vcpu *. This would certainly help
> readability here.

I'd prefer outside callers to just pass struct vcpu. I think I'll put in a pre-cursor patch to use a stack variable to point at viridian_vcpu or viridian_domain in places where it makes sense, for the same of readability.

> 
> > +    if ( sintx < ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> > +        clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> 
> You also may want to use array_index_nospec() here and
> array_access_nospec() above, despite there not being very big a
> range to run past array bounds .

Ok, I can do that.

> 
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -461,11 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >
> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >  {
> > +    struct vcpu *v = vlapic_vcpu(vlapic);
> >      struct domain *d = vlapic_domain(vlapic);
> >
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >          vioapic_update_EOI(d, vector);
> >
> > +    if ( has_viridian_synic(v->domain) )
> 
> Please use d here. And could this be "else if()"?
> 

Ok, else if would work.

> > @@ -1301,6 +1305,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
> >      if ( !vlapic_enabled(vlapic) )
> >          return -1;
> >
> > +    /*
> > +     * Poll the viridian message queues before checking the IRR since
> > +     * a sythetic interrupt may be asserted during the poll.
> > +     */
> > +    if ( has_viridian_synic(v->domain) )
> > +        viridian_synic_poll_messages(v);
> > +
> >      irr = vlapic_find_highest_irr(vlapic);
> >      if ( irr == -1 )
> >          return -1;
> 
> While architecturally IRR can indeed become set at any time, is it
> acceptable to all of our other code for two successive invocations
> to the function to potentially produce different results? I'm in
> particular worried about {svm,vmx}_intr_assist(), and in their
> context I also wonder whether you don't need a new
> hvm_intsrc_* - it looks as if you use enough of the LAPIC to get
> away without, but I'm not entirely certain.

Well according to the spec the synic is supposed to be a superset of the local APIC so I'd rather stay combined. I guess I can add a latch so that the poll is not retried until after vlapic_ack_pending_irq() has been called.
 
> 
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -26,10 +26,30 @@ struct viridian_page
> >      void *ptr;
> >  };
> >
> > +union viridian_sint_msr
> > +{
> > +    uint64_t raw;
> > +    struct
> > +    {
> > +        uint64_t vector:8;
> > +        uint64_t reserved_preserved1:8;
> > +        uint64_t mask:1;
> > +        uint64_t auto_eoi:1;
> > +        uint64_t polling:1;
> > +        uint64_t reserved_preserved2:45;
> > +    } fields;
> 
> This being an internal header, does the inner struct really need
> a field name?

For consistency, yes. Again I'll see about adding a pre-cursor patch to blow the 'fields' name away in other cases, then I won't need it here.

  Paul

> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-02-25 14:54   ` Jan Beulich
@ 2019-03-06 11:23     ` Paul Durrant
  2019-03-06 11:47       ` Paul Durrant
  2019-03-06 12:56       ` Jan Beulich
  0 siblings, 2 replies; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 11:23 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 February 2019 14:54
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org)
> <tim@xen.org>
> Subject: Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/viridian/synic.c
> > +++ b/xen/arch/x86/hvm/viridian/synic.c
> > @@ -329,7 +329,53 @@ void viridian_synic_domain_deinit(struct domain *d)
> >
> >  void viridian_synic_poll_messages(struct vcpu *v)
> >  {
> > -    /* There are currently no message sources */
> > +    viridian_time_poll_timers(v);
> > +}
> > +
> > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> > +                                      unsigned int index,
> > +                                      int64_t expiration, int64_t delivery)
> > +{
> > +    const union viridian_sint_msr *vs = &v->arch.hvm.viridian->sint[sintx];
> > +    HV_MESSAGE *msg = v->arch.hvm.viridian->simp.ptr;
> > +    struct {
> > +        uint32_t TimerIndex;
> > +        uint32_t Reserved;
> > +        uint64_t ExpirationTime;
> > +        uint64_t DeliveryTime;
> > +    } payload = {
> > +        .TimerIndex = index,
> > +        .ExpirationTime = expiration,
> > +        .DeliveryTime = delivery,
> > +    };
> > +
> > +    if ( test_bit(sintx, &v->arch.hvm.viridian->msg_pending) )
> > +        return false;
> > +
> > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> > +    msg += sintx;
> > +
> > +    /*
> > +     * To avoid using an atomic test-and-set this function must be called
> > +     * in context of the vcpu receiving the message.
> > +     */
> > +    ASSERT(v == current);
> > +    if ( msg->Header.MessageType != HvMessageTypeNone )
> > +    {
> > +        msg->Header.MessageFlags.MessagePending = 1;
> > +        set_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> 
> As per the comment above this is always in context of the subject
> vCPU. It looks to me as if this was also the case for the two
> clear_bit() on the field in the prior patch. If so, all three could be
> the non-atomic variants instead.

The only slight subtlety I think is the one in the wrmsr function, which can be called in context of a domain restore. I think it's still ok for it to be non-atomic in this case but I'll assert (v = current || !v->running), which I think should cover it.

> 
> > +        return false;
> > +    }
> > +
> > +    msg->Header.MessageType = HvMessageTimerExpired;
> > +    msg->Header.MessageFlags.MessagePending = 0;
> > +    msg->Header.PayloadSize = sizeof(payload);
> > +    memcpy(msg->Payload, &payload, sizeof(payload));
> > +
> > +    if ( !vs->fields.mask )
> > +        vlapic_set_irq(vcpu_vlapic(v), vs->fields.vector, 0);
> 
> If this wasn't with v == current, I think you'd also need a barrier
> here. Could you extend the comment above to also mention this
> aspect?

Ok.

> 
> > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
> >      return raw_trc_val(d) + trc->off;
> >  }
> >
> > +static int64_t time_now(struct domain *d)
> 
> Why would this return a signed value? And can't the function
> parameter be const?

The function parameter can be const, but I think the result needs to be signed for the missed ticks logic in start_timer() to work correctly.

> 
> > +{
> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> > +    uint32_t start, end;
> > +    __int128_t tsc;
> > +    __int128_t scale;
> 
> I don't think you need both of them be 128 bits wide. I also don't
> see why either would want to be of a signed type.

The spec says (as in the comment below):

"The partition reference time is computed by the following formula:

ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset

The multiplication is a 64 bit multiplication, which results in a 128 bit number which is then shifted 64 times to the right to obtain the high 64 bits.TscScale"

Again, I'm using signed arithmetic as I think it's necessary for the missed ticks logic to work correctly in the event of an overflow.

> 
> > +    int64_t offset;
> > +
> > +    /*
> > +     * If the reference TSC page is not enabled, or has been invalidated
> > +     * fall back to the partition reference counter.
> > +     */
> > +    if ( !p || !p->TscSequence )
> > +        return time_ref_count(d);
> > +
> > +    /*
> > +     * The following sampling algorithm for tsc, scale and offset is
> > +     * documented in the specifiction.
> > +     */
> > +    start = p->TscSequence;
> > +
> > +    do {
> > +        tsc = rdtsc();
> > +        scale = p->TscScale;
> > +        offset = p->TscOffset;
> > +
> > +        smp_mb();
> > +        end = p->TscSequence;
> 
> Why is this a full barrier, rather than just a read one? And don't you need
> to add a counterpart in update_reference_tsc()?

Yes, a read barrier is enough with the counterpart write barrier added.

> 
> > +    } while (end != start);
> 
> update_reference_tsc() increments TscSequence. If end doesn't match
> start at this point, you're entering a near infinite loop here as long as
> you don't update start inside the loop. I also think that there's a
> second read barrier needed between this initial reading of the sequence
> number and the reading of the actual values.

Yes, the start value should be inside the loop of course.

> 
> > +    /*
> > +     * The specification says: "The partition reference time is computed
> > +     * by the following formula:
> > +     *
> > +     * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> > +     *
> > +     * The multiplication is a 64 bit multiplication, which results in a
> > +     * 128 bit number which is then shifted 64 times to the right to obtain
> > +     * the high 64 bits."
> > +     */
> > +    return ((tsc * scale) >> 64) + offset;
> > +}
> > +
> > +static void stop_stimer(struct viridian_stimer *vs)
> > +{
> > +    struct vcpu *v = vs->v;
> 
> const?

Ok.

> 
> > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > +
> > +    if ( !vs->started )
> > +        return;
> > +
> > +    stop_timer(&vs->timer);
> > +    clear_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> > +    vs->started = false;
> > +}
> > +
> > +static void stimer_expire(void *data)
> > +{
> > +    struct viridian_stimer *vs = data;
> 
> const?

Ok.

> 
> > +    struct vcpu *v = vs->v;
> > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > +
> > +    if ( !vs->config.fields.enabled )
> > +        return;
> > +
> > +    set_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> > +    vcpu_kick(v);
> > +}
> > +
> > +static void start_stimer(struct viridian_stimer *vs)
> > +{
> > +    struct vcpu *v = vs->v;
> > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > +    int64_t now = time_now(v->domain);
> > +    s_time_t timeout;
> > +
> > +    if ( !test_and_set_bit(stimerx, &v->arch.hvm.viridian->stimer_enabled) )
> > +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> > +               stimerx);
> > +
> > +    if ( vs->config.fields.periodic )
> > +    {
> > +        unsigned int missed = 0;
> > +        int64_t next;
> > +
> > +        /*
> > +         * The specification says that if the timer is lazy then we
> > +         * skip over any missed expirations so we can treat this case
> > +         * as the same as if the timer is currently stopped, i.e. we
> > +         * just schedule expiration to be 'count' ticks from now.
> > +         */
> > +        if ( !vs->started || vs->config.fields.lazy )
> > +        {
> > +            next = now + vs->count;
> > +        }
> 
> Unnecessary braces.

Yes.

> 
> > @@ -149,6 +373,57 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >          }
> >          break;
> >
> > +    case HV_X64_MSR_TIME_REF_COUNT:
> > +        return X86EMUL_EXCEPTION;
> 
> Isn't this an unrelated change?

It is. I'll call it out in the comment comment.

> 
> > +    case HV_X64_MSR_STIMER0_CONFIG:
> > +    case HV_X64_MSR_STIMER1_CONFIG:
> > +    case HV_X64_MSR_STIMER2_CONFIG:
> > +    case HV_X64_MSR_STIMER3_CONFIG:
> > +    {
> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> > +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        stop_stimer(vs);
> > +
> > +        vs->config.raw = val;
> > +
> > +        if ( !vs->config.fields.sintx )
> > +            vs->config.fields.enabled = 0;
> > +
> > +        if ( vs->config.fields.enabled )
> > +            start_stimer(vs);
> > +
> > +        break;
> > +    }
> > +    case HV_X64_MSR_STIMER0_COUNT:
> 
> Missing blank line again (and also further down here as well as in the
> rdmsr code).
> 

Ok. TBH I've always thought the normal style was to omit the blank line if the case statement has braces.

> > +    case HV_X64_MSR_STIMER1_COUNT:
> > +    case HV_X64_MSR_STIMER2_COUNT:
> > +    case HV_X64_MSR_STIMER3_COUNT:
> > +    {
> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> > +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        stop_stimer(vs);
> > +
> > +        vs->count = val;
> > +
> > +        if ( !vs->count  )
> 
> Any reason you don't use val here (which the compiler likely will do
> anyway)?

Not particularly, I just think it reads better and is more consistent with other code.

> 
> > @@ -201,6 +476,32 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >          break;
> >      }
> >
> > +    case HV_X64_MSR_STIMER0_CONFIG:
> > +    case HV_X64_MSR_STIMER1_CONFIG:
> > +    case HV_X64_MSR_STIMER2_CONFIG:
> > +    case HV_X64_MSR_STIMER3_CONFIG:
> > +    {
> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        *val = v->arch.hvm.viridian->stimer[stimerx].config.raw;
> 
> While more noticeable here and ...
> 
> > +        break;
> > +    }
> > +    case HV_X64_MSR_STIMER0_COUNT:
> > +    case HV_X64_MSR_STIMER1_COUNT:
> > +    case HV_X64_MSR_STIMER2_COUNT:
> > +    case HV_X64_MSR_STIMER3_COUNT:
> > +    {
> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        *val = v->arch.hvm.viridian->stimer[stimerx].count;
> 
> ... here, array_access_nospec() are probably needed not just here,
> but also in the wrmsr logic.

Really? stimerx is calculated based on hitting the case statement in the first place.

> 
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -40,6 +40,33 @@ union viridian_sint_msr
> >      } fields;
> >  };
> >
> > +union viridian_stimer_config_msr
> > +{
> > +    uint64_t raw;
> > +    struct
> > +    {
> > +        uint64_t enabled:1;
> > +        uint64_t periodic:1;
> > +        uint64_t lazy:1;
> > +        uint64_t auto_enable:1;
> > +        uint64_t vector:8;
> > +        uint64_t direct_mode:1;
> > +        uint64_t reserved_zero1:3;
> > +        uint64_t sintx:4;
> > +        uint64_t reserved_zero2:44;
> > +    } fields;
> > +};
> > +
> > +struct viridian_stimer {
> > +    struct vcpu *v;
> 
> Isn't a full 8-byte pointer a little too much overhead here? You could
> instead store the timer index ...

I think I need it in stimer_expire() which can be called in any vcpu context IIUC.

> 
> > +    struct timer timer;
> > +    union viridian_stimer_config_msr config;
> > +    uint64_t count;
> > +    int64_t expiration;
> > +    s_time_t timeout;
> > +    bool started;
> 
> ... in a field using the 7-byte padding here, and use container_of()
> to get at the outer structure.

That would get me as far as viridian_vcpu, but there's no pointer to struct vcpu in there, and I need one to call vcpu_kick().

  Paul

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 11:23     ` Paul Durrant
@ 2019-03-06 11:47       ` Paul Durrant
  2019-03-06 12:59         ` Jan Beulich
  2019-03-06 12:56       ` Jan Beulich
  1 sibling, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 11:47 UTC (permalink / raw)
  To: Paul Durrant, 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Ian Jackson, xen-devel,
	Roger Pau Monne

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Paul Durrant
> Sent: 06 March 2019 11:24
> To: 'Jan Beulich' <JBeulich@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim (Xen.org) <tim@xen.org>;
> George Dunlap <George.Dunlap@citrix.com>; Julien Grall <julien.grall@arm.com>; xen-devel <xen-
> devel@lists.xenproject.org>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: 25 February 2019 14:54
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne
> > <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> > Jackson <Ian.Jackson@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> > devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org)
> > <tim@xen.org>
> > Subject: Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> >
> > >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> > > --- a/xen/arch/x86/hvm/viridian/synic.c
> > > +++ b/xen/arch/x86/hvm/viridian/synic.c
> > > @@ -329,7 +329,53 @@ void viridian_synic_domain_deinit(struct domain *d)
> > >
> > >  void viridian_synic_poll_messages(struct vcpu *v)
> > >  {
> > > -    /* There are currently no message sources */
> > > +    viridian_time_poll_timers(v);
> > > +}
> > > +
> > > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> > > +                                      unsigned int index,
> > > +                                      int64_t expiration, int64_t delivery)
> > > +{
> > > +    const union viridian_sint_msr *vs = &v->arch.hvm.viridian->sint[sintx];
> > > +    HV_MESSAGE *msg = v->arch.hvm.viridian->simp.ptr;
> > > +    struct {
> > > +        uint32_t TimerIndex;
> > > +        uint32_t Reserved;
> > > +        uint64_t ExpirationTime;
> > > +        uint64_t DeliveryTime;
> > > +    } payload = {
> > > +        .TimerIndex = index,
> > > +        .ExpirationTime = expiration,
> > > +        .DeliveryTime = delivery,
> > > +    };
> > > +
> > > +    if ( test_bit(sintx, &v->arch.hvm.viridian->msg_pending) )
> > > +        return false;
> > > +
> > > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> > > +    msg += sintx;
> > > +
> > > +    /*
> > > +     * To avoid using an atomic test-and-set this function must be called
> > > +     * in context of the vcpu receiving the message.
> > > +     */
> > > +    ASSERT(v == current);
> > > +    if ( msg->Header.MessageType != HvMessageTypeNone )
> > > +    {
> > > +        msg->Header.MessageFlags.MessagePending = 1;
> > > +        set_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> >
> > As per the comment above this is always in context of the subject
> > vCPU. It looks to me as if this was also the case for the two
> > clear_bit() on the field in the prior patch. If so, all three could be
> > the non-atomic variants instead.
> 
> The only slight subtlety I think is the one in the wrmsr function, which can be called in context of a
> domain restore. I think it's still ok for it to be non-atomic in this case but I'll assert (v =
> current || !v->running), which I think should cover it.
> 
> >
> > > +        return false;
> > > +    }
> > > +
> > > +    msg->Header.MessageType = HvMessageTimerExpired;
> > > +    msg->Header.MessageFlags.MessagePending = 0;
> > > +    msg->Header.PayloadSize = sizeof(payload);
> > > +    memcpy(msg->Payload, &payload, sizeof(payload));
> > > +
> > > +    if ( !vs->fields.mask )
> > > +        vlapic_set_irq(vcpu_vlapic(v), vs->fields.vector, 0);
> >
> > If this wasn't with v == current, I think you'd also need a barrier
> > here. Could you extend the comment above to also mention this
> > aspect?
> 
> Ok.
> 
> >
> > > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
> > >      return raw_trc_val(d) + trc->off;
> > >  }
> > >
> > > +static int64_t time_now(struct domain *d)
> >
> > Why would this return a signed value? And can't the function
> > parameter be const?
> 
> The function parameter can be const, but I think the result needs to be signed for the missed ticks
> logic in start_timer() to work correctly.
> 
> >
> > > +{
> > > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> > > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> > > +    uint32_t start, end;
> > > +    __int128_t tsc;
> > > +    __int128_t scale;
> >
> > I don't think you need both of them be 128 bits wide. I also don't
> > see why either would want to be of a signed type.
> 
> The spec says (as in the comment below):
> 
> "The partition reference time is computed by the following formula:
> 
> ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> 
> The multiplication is a 64 bit multiplication, which results in a 128 bit number which is then shifted
> 64 times to the right to obtain the high 64 bits.TscScale"
> 
> Again, I'm using signed arithmetic as I think it's necessary for the missed ticks logic to work
> correctly in the event of an overflow.

FAOD the code that I am concerned about is:

            /*
             * The timer is already started, so we're re-scheduling.
             * Hence advance the timer expiration by one tick.
             */
            next = vs->expiration + vs->count;

            /* Now check to see if any expirations have been missed */
            if ( now - next > 0 )
                missed = (now - next) / vs->count;

If now and next were unsigned then next may overflow such that (now - next) ends up being very large, rather than negative, so I'd end up calculating a completely bogus value for missed.

  Paul

> 
> >
> > > +    int64_t offset;
> > > +
> > > +    /*
> > > +     * If the reference TSC page is not enabled, or has been invalidated
> > > +     * fall back to the partition reference counter.
> > > +     */
> > > +    if ( !p || !p->TscSequence )
> > > +        return time_ref_count(d);
> > > +
> > > +    /*
> > > +     * The following sampling algorithm for tsc, scale and offset is
> > > +     * documented in the specifiction.
> > > +     */
> > > +    start = p->TscSequence;
> > > +
> > > +    do {
> > > +        tsc = rdtsc();
> > > +        scale = p->TscScale;
> > > +        offset = p->TscOffset;
> > > +
> > > +        smp_mb();
> > > +        end = p->TscSequence;
> >
> > Why is this a full barrier, rather than just a read one? And don't you need
> > to add a counterpart in update_reference_tsc()?
> 
> Yes, a read barrier is enough with the counterpart write barrier added.
> 
> >
> > > +    } while (end != start);
> >
> > update_reference_tsc() increments TscSequence. If end doesn't match
> > start at this point, you're entering a near infinite loop here as long as
> > you don't update start inside the loop. I also think that there's a
> > second read barrier needed between this initial reading of the sequence
> > number and the reading of the actual values.
> 
> Yes, the start value should be inside the loop of course.
> 
> >
> > > +    /*
> > > +     * The specification says: "The partition reference time is computed
> > > +     * by the following formula:
> > > +     *
> > > +     * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> > > +     *
> > > +     * The multiplication is a 64 bit multiplication, which results in a
> > > +     * 128 bit number which is then shifted 64 times to the right to obtain
> > > +     * the high 64 bits."
> > > +     */
> > > +    return ((tsc * scale) >> 64) + offset;
> > > +}
> > > +
> > > +static void stop_stimer(struct viridian_stimer *vs)
> > > +{
> > > +    struct vcpu *v = vs->v;
> >
> > const?
> 
> Ok.
> 
> >
> > > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > > +
> > > +    if ( !vs->started )
> > > +        return;
> > > +
> > > +    stop_timer(&vs->timer);
> > > +    clear_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> > > +    vs->started = false;
> > > +}
> > > +
> > > +static void stimer_expire(void *data)
> > > +{
> > > +    struct viridian_stimer *vs = data;
> >
> > const?
> 
> Ok.
> 
> >
> > > +    struct vcpu *v = vs->v;
> > > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > > +
> > > +    if ( !vs->config.fields.enabled )
> > > +        return;
> > > +
> > > +    set_bit(stimerx, &v->arch.hvm.viridian->stimer_pending);
> > > +    vcpu_kick(v);
> > > +}
> > > +
> > > +static void start_stimer(struct viridian_stimer *vs)
> > > +{
> > > +    struct vcpu *v = vs->v;
> > > +    unsigned int stimerx = vs - &v->arch.hvm.viridian->stimer[0];
> > > +    int64_t now = time_now(v->domain);
> > > +    s_time_t timeout;
> > > +
> > > +    if ( !test_and_set_bit(stimerx, &v->arch.hvm.viridian->stimer_enabled) )
> > > +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> > > +               stimerx);
> > > +
> > > +    if ( vs->config.fields.periodic )
> > > +    {
> > > +        unsigned int missed = 0;
> > > +        int64_t next;
> > > +
> > > +        /*
> > > +         * The specification says that if the timer is lazy then we
> > > +         * skip over any missed expirations so we can treat this case
> > > +         * as the same as if the timer is currently stopped, i.e. we
> > > +         * just schedule expiration to be 'count' ticks from now.
> > > +         */
> > > +        if ( !vs->started || vs->config.fields.lazy )
> > > +        {
> > > +            next = now + vs->count;
> > > +        }
> >
> > Unnecessary braces.
> 
> Yes.
> 
> >
> > > @@ -149,6 +373,57 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> > >          }
> > >          break;
> > >
> > > +    case HV_X64_MSR_TIME_REF_COUNT:
> > > +        return X86EMUL_EXCEPTION;
> >
> > Isn't this an unrelated change?
> 
> It is. I'll call it out in the comment comment.
> 
> >
> > > +    case HV_X64_MSR_STIMER0_CONFIG:
> > > +    case HV_X64_MSR_STIMER1_CONFIG:
> > > +    case HV_X64_MSR_STIMER2_CONFIG:
> > > +    case HV_X64_MSR_STIMER3_CONFIG:
> > > +    {
> > > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> > > +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> > > +
> > > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > > +            return X86EMUL_EXCEPTION;
> > > +
> > > +        stop_stimer(vs);
> > > +
> > > +        vs->config.raw = val;
> > > +
> > > +        if ( !vs->config.fields.sintx )
> > > +            vs->config.fields.enabled = 0;
> > > +
> > > +        if ( vs->config.fields.enabled )
> > > +            start_stimer(vs);
> > > +
> > > +        break;
> > > +    }
> > > +    case HV_X64_MSR_STIMER0_COUNT:
> >
> > Missing blank line again (and also further down here as well as in the
> > rdmsr code).
> >
> 
> Ok. TBH I've always thought the normal style was to omit the blank line if the case statement has
> braces.
> 
> > > +    case HV_X64_MSR_STIMER1_COUNT:
> > > +    case HV_X64_MSR_STIMER2_COUNT:
> > > +    case HV_X64_MSR_STIMER3_COUNT:
> > > +    {
> > > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> > > +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
> > > +
> > > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > > +            return X86EMUL_EXCEPTION;
> > > +
> > > +        stop_stimer(vs);
> > > +
> > > +        vs->count = val;
> > > +
> > > +        if ( !vs->count  )
> >
> > Any reason you don't use val here (which the compiler likely will do
> > anyway)?
> 
> Not particularly, I just think it reads better and is more consistent with other code.
> 
> >
> > > @@ -201,6 +476,32 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> > >          break;
> > >      }
> > >
> > > +    case HV_X64_MSR_STIMER0_CONFIG:
> > > +    case HV_X64_MSR_STIMER1_CONFIG:
> > > +    case HV_X64_MSR_STIMER2_CONFIG:
> > > +    case HV_X64_MSR_STIMER3_CONFIG:
> > > +    {
> > > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
> > > +
> > > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > > +            return X86EMUL_EXCEPTION;
> > > +
> > > +        *val = v->arch.hvm.viridian->stimer[stimerx].config.raw;
> >
> > While more noticeable here and ...
> >
> > > +        break;
> > > +    }
> > > +    case HV_X64_MSR_STIMER0_COUNT:
> > > +    case HV_X64_MSR_STIMER1_COUNT:
> > > +    case HV_X64_MSR_STIMER2_COUNT:
> > > +    case HV_X64_MSR_STIMER3_COUNT:
> > > +    {
> > > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
> > > +
> > > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > > +            return X86EMUL_EXCEPTION;
> > > +
> > > +        *val = v->arch.hvm.viridian->stimer[stimerx].count;
> >
> > ... here, array_access_nospec() are probably needed not just here,
> > but also in the wrmsr logic.
> 
> Really? stimerx is calculated based on hitting the case statement in the first place.
> 
> >
> > > --- a/xen/include/asm-x86/hvm/viridian.h
> > > +++ b/xen/include/asm-x86/hvm/viridian.h
> > > @@ -40,6 +40,33 @@ union viridian_sint_msr
> > >      } fields;
> > >  };
> > >
> > > +union viridian_stimer_config_msr
> > > +{
> > > +    uint64_t raw;
> > > +    struct
> > > +    {
> > > +        uint64_t enabled:1;
> > > +        uint64_t periodic:1;
> > > +        uint64_t lazy:1;
> > > +        uint64_t auto_enable:1;
> > > +        uint64_t vector:8;
> > > +        uint64_t direct_mode:1;
> > > +        uint64_t reserved_zero1:3;
> > > +        uint64_t sintx:4;
> > > +        uint64_t reserved_zero2:44;
> > > +    } fields;
> > > +};
> > > +
> > > +struct viridian_stimer {
> > > +    struct vcpu *v;
> >
> > Isn't a full 8-byte pointer a little too much overhead here? You could
> > instead store the timer index ...
> 
> I think I need it in stimer_expire() which can be called in any vcpu context IIUC.
> 
> >
> > > +    struct timer timer;
> > > +    union viridian_stimer_config_msr config;
> > > +    uint64_t count;
> > > +    int64_t expiration;
> > > +    s_time_t timeout;
> > > +    bool started;
> >
> > ... in a field using the 7-byte padding here, and use container_of()
> > to get at the outer structure.
> 
> That would get me as far as viridian_vcpu, but there's no pointer to struct vcpu in there, and I need
> one to call vcpu_kick().
> 
>   Paul
> 
> >
> > Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 11:23     ` Paul Durrant
  2019-03-06 11:47       ` Paul Durrant
@ 2019-03-06 12:56       ` Jan Beulich
  2019-03-06 13:05         ` Paul Durrant
  1 sibling, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2019-03-06 12:56 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson, Roger Pau Monne

>>> On 06.03.19 at 12:23, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 25 February 2019 14:54
>> 
>> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
>> > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
>> >      return raw_trc_val(d) + trc->off;
>> >  }
>> >
>> > +static int64_t time_now(struct domain *d)
>> 
>> Why would this return a signed value? And can't the function
>> parameter be const?
> 
> The function parameter can be const, but I think the result needs to be 
> signed for the missed ticks logic in start_timer() to work correctly.

If something requires signed arithmetic, then this should be enforced
there, not by the return type of an otherwise sufficiently generic
function. Then again NOW() also produces a signed value ...

>> > +{
>> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
>> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
>> > +    uint32_t start, end;
>> > +    __int128_t tsc;
>> > +    __int128_t scale;
>> 
>> I don't think you need both of them be 128 bits wide. I also don't
>> see why either would want to be of a signed type.
> 
> The spec says (as in the comment below):
> 
> "The partition reference time is computed by the following formula:
> 
> ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> 
> The multiplication is a 64 bit multiplication, which results in a 128 bit 
> number which is then shifted 64 times to the right to obtain the high 64 
> bits.TscScale"

Well, yes, you want a 128-bit result. But for that you don't need to
multiply 128-bit quantities. See e.g. our own scale_delta() or
hvm_scale_tsc().

>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    {
>> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
>> > +        struct viridian_stimer *vs = &v->arch.hvm.viridian->stimer[stimerx];
>> > +
>> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
>> > +            return X86EMUL_EXCEPTION;
>> > +
>> > +        stop_stimer(vs);
>> > +
>> > +        vs->config.raw = val;
>> > +
>> > +        if ( !vs->config.fields.sintx )
>> > +            vs->config.fields.enabled = 0;
>> > +
>> > +        if ( vs->config.fields.enabled )
>> > +            start_stimer(vs);
>> > +
>> > +        break;
>> > +    }
>> > +    case HV_X64_MSR_STIMER0_COUNT:
>> 
>> Missing blank line again (and also further down here as well as in the
>> rdmsr code).
>> 
> 
> Ok. TBH I've always thought the normal style was to omit the blank line if 
> the case statement has braces.

Not sure about this specific sub-case.

>> > @@ -201,6 +476,32 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>> >          break;
>> >      }
>> >
>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    {
>> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_CONFIG) / 2;
>> > +
>> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
>> > +            return X86EMUL_EXCEPTION;
>> > +
>> > +        *val = v->arch.hvm.viridian->stimer[stimerx].config.raw;
>> 
>> While more noticeable here and ...
>> 
>> > +        break;
>> > +    }
>> > +    case HV_X64_MSR_STIMER0_COUNT:
>> > +    case HV_X64_MSR_STIMER1_COUNT:
>> > +    case HV_X64_MSR_STIMER2_COUNT:
>> > +    case HV_X64_MSR_STIMER3_COUNT:
>> > +    {
>> > +        unsigned int stimerx = (idx - HV_X64_MSR_STIMER0_COUNT) / 2;
>> > +
>> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
>> > +            return X86EMUL_EXCEPTION;
>> > +
>> > +        *val = v->arch.hvm.viridian->stimer[stimerx].count;
>> 
>> ... here, array_access_nospec() are probably needed not just here,
>> but also in the wrmsr logic.
> 
> Really? stimerx is calculated based on hitting the case statement in the 
> first place.

And any of the branches of the switch() can be (mis)speculated into.

>> > --- a/xen/include/asm-x86/hvm/viridian.h
>> > +++ b/xen/include/asm-x86/hvm/viridian.h
>> > @@ -40,6 +40,33 @@ union viridian_sint_msr
>> >      } fields;
>> >  };
>> >
>> > +union viridian_stimer_config_msr
>> > +{
>> > +    uint64_t raw;
>> > +    struct
>> > +    {
>> > +        uint64_t enabled:1;
>> > +        uint64_t periodic:1;
>> > +        uint64_t lazy:1;
>> > +        uint64_t auto_enable:1;
>> > +        uint64_t vector:8;
>> > +        uint64_t direct_mode:1;
>> > +        uint64_t reserved_zero1:3;
>> > +        uint64_t sintx:4;
>> > +        uint64_t reserved_zero2:44;
>> > +    } fields;
>> > +};
>> > +
>> > +struct viridian_stimer {
>> > +    struct vcpu *v;
>> 
>> Isn't a full 8-byte pointer a little too much overhead here? You could
>> instead store the timer index ...
> 
> I think I need it in stimer_expire() which can be called in any vcpu context 
> IIUC.
> 
>> 
>> > +    struct timer timer;
>> > +    union viridian_stimer_config_msr config;
>> > +    uint64_t count;
>> > +    int64_t expiration;
>> > +    s_time_t timeout;
>> > +    bool started;
>> 
>> ... in a field using the 7-byte padding here, and use container_of()
>> to get at the outer structure.
> 
> That would get me as far as viridian_vcpu, but there's no pointer to struct 
> vcpu in there, and I need one to call vcpu_kick().

Oh, I see.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 11:47       ` Paul Durrant
@ 2019-03-06 12:59         ` Jan Beulich
  2019-03-06 13:03           ` Paul Durrant
  0 siblings, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2019-03-06 12:59 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, IanJackson, Roger Pau Monne

>>> On 06.03.19 at 12:47, <Paul.Durrant@citrix.com> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Paul Durrant
>> Sent: 06 March 2019 11:24
>> 
>> > -----Original Message-----
>> > From: Jan Beulich [mailto:JBeulich@suse.com]
>> > Sent: 25 February 2019 14:54
>> >
>> > >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
>> > > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
>> > >      return raw_trc_val(d) + trc->off;
>> > >  }
>> > >
>> > > +static int64_t time_now(struct domain *d)
>> > > +{
>> > > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
>> > > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
>> > > +    uint32_t start, end;
>> > > +    __int128_t tsc;
>> > > +    __int128_t scale;
>> >
>> > I don't think you need both of them be 128 bits wide. I also don't
>> > see why either would want to be of a signed type.
>> 
>> The spec says (as in the comment below):
>> 
>> "The partition reference time is computed by the following formula:
>> 
>> ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
>> 
>> The multiplication is a 64 bit multiplication, which results in a 128 bit number which is then shifted
>> 64 times to the right to obtain the high 64 bits.TscScale"
>> 
>> Again, I'm using signed arithmetic as I think it's necessary for the missed ticks logic to work
>> correctly in the event of an overflow.
> 
> FAOD the code that I am concerned about is:
> 
>             /*
>              * The timer is already started, so we're re-scheduling.
>              * Hence advance the timer expiration by one tick.
>              */
>             next = vs->expiration + vs->count;
> 
>             /* Now check to see if any expirations have been missed */
>             if ( now - next > 0 )
>                 missed = (now - next) / vs->count;
> 
> If now and next were unsigned then next may overflow such that (now - next) 
> ends up being very large, rather than negative, so I'd end up calculating a 
> completely bogus value for missed.

And this is also what I've been referring to: If signedness matters, there
should be a cast here rather than enforcing signedness onto everyone by
a function logically never returning a signed value.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 12:59         ` Jan Beulich
@ 2019-03-06 13:03           ` Paul Durrant
  0 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 13:03 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 06 March 2019 13:00
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> >>> On 06.03.19 at 12:47, <Paul.Durrant@citrix.com> wrote:
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Paul Durrant
> >> Sent: 06 March 2019 11:24
> >>
> >> > -----Original Message-----
> >> > From: Jan Beulich [mailto:JBeulich@suse.com]
> >> > Sent: 25 February 2019 14:54
> >> >
> >> > >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> >> > > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
> >> > >      return raw_trc_val(d) + trc->off;
> >> > >  }
> >> > >
> >> > > +static int64_t time_now(struct domain *d)
> >> > > +{
> >> > > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> >> > > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> >> > > +    uint32_t start, end;
> >> > > +    __int128_t tsc;
> >> > > +    __int128_t scale;
> >> >
> >> > I don't think you need both of them be 128 bits wide. I also don't
> >> > see why either would want to be of a signed type.
> >>
> >> The spec says (as in the comment below):
> >>
> >> "The partition reference time is computed by the following formula:
> >>
> >> ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> >>
> >> The multiplication is a 64 bit multiplication, which results in a 128 bit number which is then
> shifted
> >> 64 times to the right to obtain the high 64 bits.TscScale"
> >>
> >> Again, I'm using signed arithmetic as I think it's necessary for the missed ticks logic to work
> >> correctly in the event of an overflow.
> >
> > FAOD the code that I am concerned about is:
> >
> >             /*
> >              * The timer is already started, so we're re-scheduling.
> >              * Hence advance the timer expiration by one tick.
> >              */
> >             next = vs->expiration + vs->count;
> >
> >             /* Now check to see if any expirations have been missed */
> >             if ( now - next > 0 )
> >                 missed = (now - next) / vs->count;
> >
> > If now and next were unsigned then next may overflow such that (now - next)
> > ends up being very large, rather than negative, so I'd end up calculating a
> > completely bogus value for missed.
> 
> And this is also what I've been referring to: If signedness matters, there
> should be a cast here rather than enforcing signedness onto everyone by
> a function logically never returning a signed value.

Ok, I'll redefine the function to return an unsigned value and leave now and next as int64_t.

  Paul

> 
> Jan
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 12:56       ` Jan Beulich
@ 2019-03-06 13:05         ` Paul Durrant
  2019-03-06 13:09           ` Paul Durrant
  0 siblings, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 13:05 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 06 March 2019 12:57
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> >>> On 06.03.19 at 12:23, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 25 February 2019 14:54
> >>
> >> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> >> > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
> >> >      return raw_trc_val(d) + trc->off;
> >> >  }
> >> >
> >> > +static int64_t time_now(struct domain *d)
> >>
> >> Why would this return a signed value? And can't the function
> >> parameter be const?
> >
> > The function parameter can be const, but I think the result needs to be
> > signed for the missed ticks logic in start_timer() to work correctly.
> 
> If something requires signed arithmetic, then this should be enforced
> there, not by the return type of an otherwise sufficiently generic
> function. Then again NOW() also produces a signed value ...
> 
> >> > +{
> >> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> >> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> >> > +    uint32_t start, end;
> >> > +    __int128_t tsc;
> >> > +    __int128_t scale;
> >>
> >> I don't think you need both of them be 128 bits wide. I also don't
> >> see why either would want to be of a signed type.
> >
> > The spec says (as in the comment below):
> >
> > "The partition reference time is computed by the following formula:
> >
> > ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> >
> > The multiplication is a 64 bit multiplication, which results in a 128 bit
> > number which is then shifted 64 times to the right to obtain the high 64
> > bits.TscScale"
> 
> Well, yes, you want a 128-bit result. But for that you don't need to
> multiply 128-bit quantities. See e.g. our own scale_delta() or
> hvm_scale_tsc().

Testing showed that by not casting first things were broken. I assumed this was because the result of the multiplication was being truncated to 64-bits before assignment, but I can check the generated code. I'll also have a look at the examples you cite.

  Paul


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 13:05         ` Paul Durrant
@ 2019-03-06 13:09           ` Paul Durrant
  2019-03-06 14:38             ` Jan Beulich
  0 siblings, 1 reply; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 13:09 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: 'Stefano Stabellini',
	Wei Liu, 'Konrad Rzeszutek Wilk',
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, 'Julien Grall', 'xen-devel',
	Ian Jackson, Roger Pau Monne

> -----Original Message-----
> From: Paul Durrant
> Sent: 06 March 2019 13:06
> To: 'Jan Beulich' <JBeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: 06 March 2019 12:57
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> > <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> > <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> > xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> > (Xen.org) <tim@xen.org>
> > Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> >
> > >>> On 06.03.19 at 12:23, <Paul.Durrant@citrix.com> wrote:
> > >> From: Jan Beulich [mailto:JBeulich@suse.com]
> > >> Sent: 25 February 2019 14:54
> > >>
> > >> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> > >> > @@ -118,14 +119,237 @@ static int64_t time_ref_count(struct domain *d)
> > >> >      return raw_trc_val(d) + trc->off;
> > >> >  }
> > >> >
> > >> > +static int64_t time_now(struct domain *d)
> > >>
> > >> Why would this return a signed value? And can't the function
> > >> parameter be const?
> > >
> > > The function parameter can be const, but I think the result needs to be
> > > signed for the missed ticks logic in start_timer() to work correctly.
> >
> > If something requires signed arithmetic, then this should be enforced
> > there, not by the return type of an otherwise sufficiently generic
> > function. Then again NOW() also produces a signed value ...
> >
> > >> > +{
> > >> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> > >> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> > >> > +    uint32_t start, end;
> > >> > +    __int128_t tsc;
> > >> > +    __int128_t scale;
> > >>
> > >> I don't think you need both of them be 128 bits wide. I also don't
> > >> see why either would want to be of a signed type.
> > >
> > > The spec says (as in the comment below):
> > >
> > > "The partition reference time is computed by the following formula:
> > >
> > > ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> > >
> > > The multiplication is a 64 bit multiplication, which results in a 128 bit
> > > number which is then shifted 64 times to the right to obtain the high 64
> > > bits.TscScale"
> >
> > Well, yes, you want a 128-bit result. But for that you don't need to
> > multiply 128-bit quantities. See e.g. our own scale_delta() or
> > hvm_scale_tsc().
> 
> Testing showed that by not casting first things were broken. I assumed this was because the result of
> the multiplication was being truncated to 64-bits before assignment, but I can check the generated
> code. I'll also have a look at the examples you cite.

Both those examples do the multiplication by inline asm (presumably to avoid truncation). Is that what you'd prefer?

  Paul

> 
>   Paul


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 13:09           ` Paul Durrant
@ 2019-03-06 14:38             ` Jan Beulich
  2019-03-06 14:41               ` Paul Durrant
  0 siblings, 1 reply; 27+ messages in thread
From: Jan Beulich @ 2019-03-06 14:38 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, IanJackson, Roger Pau Monne

>>> On 06.03.19 at 14:09, <Paul.Durrant@citrix.com> wrote:
>> From: Paul Durrant
>> Sent: 06 March 2019 13:06
>> 
>> > From: Jan Beulich [mailto:JBeulich@suse.com]
>> > Sent: 06 March 2019 12:57
>> > Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
>> >
>> > >>> On 06.03.19 at 12:23, <Paul.Durrant@citrix.com> wrote:
>> > >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> > >> Sent: 25 February 2019 14:54
>> > >>
>> > >> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
>> > >> > +{
>> > >> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
>> > >> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
>> > >> > +    uint32_t start, end;
>> > >> > +    __int128_t tsc;
>> > >> > +    __int128_t scale;
>> > >>
>> > >> I don't think you need both of them be 128 bits wide. I also don't
>> > >> see why either would want to be of a signed type.
>> > >
>> > > The spec says (as in the comment below):
>> > >
>> > > "The partition reference time is computed by the following formula:
>> > >
>> > > ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
>> > >
>> > > The multiplication is a 64 bit multiplication, which results in a 128 bit
>> > > number which is then shifted 64 times to the right to obtain the high 64
>> > > bits.TscScale"
>> >
>> > Well, yes, you want a 128-bit result. But for that you don't need to
>> > multiply 128-bit quantities. See e.g. our own scale_delta() or
>> > hvm_scale_tsc().
>> 
>> Testing showed that by not casting first things were broken. I assumed this was because the result of
>> the multiplication was being truncated to 64-bits before assignment, but I can check the generated
>> code. I'll also have a look at the examples you cite.
> 
> Both those examples do the multiplication by inline asm (presumably to avoid 
> truncation). Is that what you'd prefer?

That would imo be best, not the least because of making us independent
of possible issues with 128-bit arithmetic on older compilers.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 8/9] viridian: add implementation of synthetic timers
  2019-03-06 14:38             ` Jan Beulich
@ 2019-03-06 14:41               ` Paul Durrant
  0 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2019-03-06 14:41 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 06 March 2019 14:38
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> 
> >>> On 06.03.19 at 14:09, <Paul.Durrant@citrix.com> wrote:
> >> From: Paul Durrant
> >> Sent: 06 March 2019 13:06
> >>
> >> > From: Jan Beulich [mailto:JBeulich@suse.com]
> >> > Sent: 06 March 2019 12:57
> >> > Subject: RE: [PATCH v3 8/9] viridian: add implementation of synthetic timers
> >> >
> >> > >>> On 06.03.19 at 12:23, <Paul.Durrant@citrix.com> wrote:
> >> > >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> > >> Sent: 25 February 2019 14:54
> >> > >>
> >> > >> >>> On 31.01.19 at 11:47, <paul.durrant@citrix.com> wrote:
> >> > >> > +{
> >> > >> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> >> > >> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> >> > >> > +    uint32_t start, end;
> >> > >> > +    __int128_t tsc;
> >> > >> > +    __int128_t scale;
> >> > >>
> >> > >> I don't think you need both of them be 128 bits wide. I also don't
> >> > >> see why either would want to be of a signed type.
> >> > >
> >> > > The spec says (as in the comment below):
> >> > >
> >> > > "The partition reference time is computed by the following formula:
> >> > >
> >> > > ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> >> > >
> >> > > The multiplication is a 64 bit multiplication, which results in a 128 bit
> >> > > number which is then shifted 64 times to the right to obtain the high 64
> >> > > bits.TscScale"
> >> >
> >> > Well, yes, you want a 128-bit result. But for that you don't need to
> >> > multiply 128-bit quantities. See e.g. our own scale_delta() or
> >> > hvm_scale_tsc().
> >>
> >> Testing showed that by not casting first things were broken. I assumed this was because the result
> of
> >> the multiplication was being truncated to 64-bits before assignment, but I can check the generated
> >> code. I'll also have a look at the examples you cite.
> >
> > Both those examples do the multiplication by inline asm (presumably to avoid
> > truncation). Is that what you'd prefer?
> 
> That would imo be best, not the least because of making us independent
> of possible issues with 128-bit arithmetic on older compilers.
> 

Ok, I think I have figured out the necessary runes so I'll do that.

  Paul

> Jan
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2019-03-06 14:44 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-31 10:47 [PATCH v3 0/9] viridian: implement more enlightenments Paul Durrant
2019-01-31 10:47 ` [PATCH v3 1/9] viridian: add init hooks Paul Durrant
2019-01-31 10:47 ` [PATCH v3 2/9] viridian: separately allocate domain and vcpu structures Paul Durrant
2019-01-31 10:47 ` [PATCH v3 3/9] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
2019-01-31 14:45   ` Wei Liu
2019-01-31 10:47 ` [PATCH v3 4/9] viridian: add missing context save helpers " Paul Durrant
2019-01-31 10:47 ` [PATCH v3 5/9] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
2019-01-31 10:47 ` [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
2019-01-31 14:46   ` Wei Liu
2019-02-25 14:12   ` Jan Beulich
2019-03-04 17:01     ` Paul Durrant
2019-01-31 10:47 ` [PATCH v3 7/9] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
2019-01-31 10:47 ` [PATCH v3 8/9] viridian: add implementation of synthetic timers Paul Durrant
2019-01-31 14:46   ` Wei Liu
2019-02-25 14:54   ` Jan Beulich
2019-03-06 11:23     ` Paul Durrant
2019-03-06 11:47       ` Paul Durrant
2019-03-06 12:59         ` Jan Beulich
2019-03-06 13:03           ` Paul Durrant
2019-03-06 12:56       ` Jan Beulich
2019-03-06 13:05         ` Paul Durrant
2019-03-06 13:09           ` Paul Durrant
2019-03-06 14:38             ` Jan Beulich
2019-03-06 14:41               ` Paul Durrant
2019-01-31 10:47 ` [PATCH v3 9/9] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
2019-01-31 14:46   ` Wei Liu
2019-02-25 15:00   ` Jan Beulich

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.