* [PATCH v5 00/11] viridian: implement more enlightenments
@ 2019-03-11 13:41 Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 01/11] viridian: add init hooks Paul Durrant
                   ` (10 more replies)
  0 siblings, 11 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This series adds three new enlightenments:

- Synthetic timers, which depend on the...
- Synthetic interrupt controller (or SynIC)
- Synthetic cluster IPI

All these enlightenments are implemented in current versions of QEMU/KVM
so this series closes the gap.

Paul Durrant (11):
  viridian: add init hooks
  viridian: separately allocate domain and vcpu structures
  viridian: use stack variables for viridian_vcpu and viridian_domain...
  viridian: make 'fields' struct anonymous...
  viridian: extend init/deinit hooks into synic and time modules
  viridian: add missing context save helpers into synic and time modules
  viridian: use viridian_map/unmap_guest_page() for reference tsc page
  viridian: stop directly calling
    viridian_time_ref_count_freeze/thaw()...
  viridian: add implementation of synthetic interrupt MSRs
  viridian: add implementation of synthetic timers
  viridian: add implementation of the HvSendSyntheticClusterIpi
    hypercall

 docs/man/xl.cfg.5.pod.in               |  18 +-
 tools/libxl/libxl.h                    |  18 +
 tools/libxl/libxl_dom.c                |  10 +
 tools/libxl/libxl_types.idl            |   3 +
 xen/arch/x86/domain.c                  |  12 +-
 xen/arch/x86/hvm/hvm.c                 |  10 +
 xen/arch/x86/hvm/viridian/private.h    |  31 +-
 xen/arch/x86/hvm/viridian/synic.c      | 367 +++++++++++++++--
 xen/arch/x86/hvm/viridian/time.c       | 519 ++++++++++++++++++++++---
 xen/arch/x86/hvm/viridian/viridian.c   | 243 ++++++++++--
 xen/arch/x86/hvm/vlapic.c              |  32 +-
 xen/include/asm-x86/hvm/domain.h       |   2 +-
 xen/include/asm-x86/hvm/hvm.h          |   7 +
 xen/include/asm-x86/hvm/vcpu.h         |   2 +-
 xen/include/asm-x86/hvm/viridian.h     |  74 +++-
 xen/include/asm-x86/hvm/vlapic.h       |   1 +
 xen/include/public/arch-x86/hvm/save.h |   4 +
 xen/include/public/hvm/params.h        |  17 +-
 18 files changed, 1231 insertions(+), 139 deletions(-)

v5:
 - Fix stuck domains (in patch #1) and unscaled TSC (in patch #10)

v4:
 - Add two cleanup patches (#3 and #4) and re-order #8 and #9

v3:
 - Add the synthetic cluster IPI patch (#11)

---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 01/11] viridian: add init hooks
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 12:23   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures Paul Durrant
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch adds domain and vcpu init hooks for viridian features. The init
hooks do not yet do anything; the functionality will be added by
subsequent patches.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - Put the call to viridian_domain_deinit() back into
   hvm_domain_relinquish_resources() where it should be

v3:
 - Re-instate call from domain deinit to vcpu deinit
 - Move deinit calls to avoid introducing new labels

v2:
 - Remove call from domain deinit to vcpu deinit
---
 xen/arch/x86/hvm/hvm.c               | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 10 ++++++++++
 xen/include/asm-x86/hvm/viridian.h   |  3 +++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8adbb61b57..11ce21fc08 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -666,6 +666,10 @@ int hvm_domain_initialise(struct domain *d)
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
+    rc = viridian_domain_init(d);
+    if ( rc )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -687,6 +691,7 @@ int hvm_domain_initialise(struct domain *d)
     hvm_destroy_cacheattr_region_list(d);
     destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
  fail:
+    viridian_domain_deinit(d);
     return rc;
 }
 
@@ -1526,6 +1531,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    rc = viridian_vcpu_init(v);
+    if ( rc )
+        goto fail5;
+
     rc = hvm_all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1553,6 +1562,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
  fail2:
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
+    viridian_vcpu_deinit(v);
     return rc;
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 425af56856..5b0eb8a8c7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -417,6 +417,16 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_domain_init(struct domain *d)
+{
+    return 0;
+}
+
 void viridian_vcpu_deinit(struct vcpu *v)
 {
     viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index ec5ef8d3f9..f072838955 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -80,6 +80,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
+int viridian_vcpu_init(struct vcpu *v);
+int viridian_domain_init(struct domain *d);
+
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
 
-- 
2.20.1




* [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 01/11] viridian: add init hooks Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 12:25   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain Paul Durrant
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the viridian_domain and viridian_vcpu structures are inline in
the hvm_domain and hvm_vcpu structures respectively. Subsequent patches
will need to add sizable extra fields to the viridian structures which
will cause the PAGE_SIZE limit of the overall vcpu structure to be
exceeded. This patch, therefore, uses the new init hooks to separately
allocate the structures and converts the 'viridian' fields in hvm_domain
and hvm_vcpu to be pointers to these allocations. These separate allocations
also allow some vcpu and domain pointers to become const.

Ideally, now that they are no longer inline, the allocations of the
viridian structures could be made conditional on whether the toolstack
is going to configure the viridian enlightenments. However the toolstack
is currently unable to convey this information to the domain creation code
so such an enhancement is deferred until that becomes possible.

NOTE: The patch also introduces the 'is_viridian_vcpu' macro to avoid
      introducing a second evaluation of 'is_viridian_domain' with an
      open-coded 'v->domain' argument. This macro will also be further
      used in a subsequent patch.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Const-ify some vcpu and domain pointers

v2:
 - use XFREE()
 - expand commit comment to point out why allocations are unconditional
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    | 46 ++++++++---------
 xen/arch/x86/hvm/viridian/time.c     | 38 +++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 75 ++++++++++++++++++----------
 xen/include/asm-x86/hvm/domain.h     |  2 +-
 xen/include/asm-x86/hvm/hvm.h        |  4 ++
 xen/include/asm-x86/hvm/vcpu.h       |  2 +-
 xen/include/asm-x86/hvm/viridian.h   | 10 ++--
 8 files changed, 101 insertions(+), 78 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 398b22f12d..46174f48cd 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -89,7 +89,7 @@ void viridian_time_load_domain_ctxt(
 
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index a6ebbbc9f5..28eda7798c 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -28,9 +28,9 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;
 
-void viridian_apic_assist_set(struct vcpu *v)
+void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,40 +40,40 @@ void viridian_apic_assist_set(struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian.apic_assist_pending )
+    if ( v->arch.hvm.viridian->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian.apic_assist_pending = true;
+    v->arch.hvm.viridian->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
-bool viridian_apic_assist_completed(struct vcpu *v)
+bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian.apic_assist_pending &&
+    if ( v->arch.hvm.viridian->apic_assist_pending &&
          !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian.apic_assist_pending = false;
+        v->arch.hvm.viridian->apic_assist_pending = false;
         return true;
     }
 
     return false;
 }
 
-void viridian_apic_assist_clear(struct vcpu *v)
+void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian.vp_assist.ptr;
+    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian.apic_assist_pending = false;
+    v->arch.hvm.viridian->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
@@ -95,12 +95,12 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian.vp_assist);
-        v->arch.hvm.viridian.vp_assist.msr.raw = val;
+        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+        v->arch.hvm.viridian->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian.vp_assist);
-        if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+                                 &v->arch.hvm.viridian->vp_assist);
+        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
         break;
 
     default:
@@ -132,7 +132,7 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian.vp_assist.msr.raw;
+        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
         break;
 
     default:
@@ -146,18 +146,18 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian.apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw;
+    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
+    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian.vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian.vp_assist);
+    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
 
-    v->arch.hvm.viridian.apic_assist_pending = ctxt->apic_assist_pending;
+    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 840a82b457..a7e94aadf0 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -27,7 +27,7 @@ typedef struct _HV_REFERENCE_TSC_PAGE
 
 static void dump_reference_tsc(const struct domain *d)
 {
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian.reference_tsc;
+    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
     if ( !rt->fields.enabled )
         return;
@@ -38,7 +38,7 @@ static void dump_reference_tsc(const struct domain *d)
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -107,7 +107,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     put_page_and_type(page);
 }
 
-static int64_t raw_trc_val(struct domain *d)
+static int64_t raw_trc_val(const struct domain *d)
 {
     uint64_t tsc;
     struct time_scale tsc_to_ns;
@@ -119,21 +119,19 @@ static int64_t raw_trc_val(struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }
 
-void viridian_time_ref_count_freeze(struct domain *d)
+void viridian_time_ref_count_freeze(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( test_and_clear_bit(_TRC_running, &trc->flags) )
         trc->val = raw_trc_val(d) + trc->off;
 }
 
-void viridian_time_ref_count_thaw(struct domain *d)
+void viridian_time_ref_count_thaw(const struct domain *d)
 {
-    struct viridian_time_ref_count *trc;
-
-    trc = &d->arch.hvm.viridian.time_ref_count;
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
 
     if ( !d->is_shutting_down &&
          !test_and_set_bit(_TRC_running, &trc->flags) )
@@ -150,9 +148,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian.reference_tsc.raw = val;
+        d->arch.hvm.viridian->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -189,13 +187,13 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian.reference_tsc.raw;
+        *val = d->arch.hvm.viridian->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
         struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian.time_ref_count;
+            &d->arch.hvm.viridian->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -219,17 +217,17 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian.time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian.reference_tsc.raw;
+    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
+    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian.time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian.reference_tsc.raw = ctxt->reference_tsc;
+    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
+    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 5b0eb8a8c7..7839718ef4 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -146,7 +146,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
+        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +191,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
+        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
+             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -224,7 +224,7 @@ static void dump_guest_os_id(const struct domain *d)
 {
     const union viridian_guest_os_id_msr *goi;
 
-    goi = &d->arch.hvm.viridian.guest_os_id;
+    goi = &d->arch.hvm.viridian->guest_os_id;
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
@@ -238,7 +238,7 @@ static void dump_hypercall(const struct domain *d)
 {
     const union viridian_page_msr *hg;
 
-    hg = &d->arch.hvm.viridian.hypercall_gpa;
+    hg = &d->arch.hvm.viridian->hypercall_gpa;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -247,7 +247,7 @@ static void dump_hypercall(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -288,14 +288,14 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian.guest_os_id.raw = val;
+        d->arch.hvm.viridian->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian.hypercall_gpa.raw = val;
+        d->arch.hvm.viridian->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
+        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +317,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian.crash_param[idx] = val;
+        v->arch.hvm.viridian->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +337,11 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian.crash_param[0],
-                v->arch.hvm.viridian.crash_param[1],
-                v->arch.hvm.viridian.crash_param[2],
-                v->arch.hvm.viridian.crash_param[3],
-                v->arch.hvm.viridian.crash_param[4]);
+                v->arch.hvm.viridian->crash_param[0],
+                v->arch.hvm.viridian->crash_param[1],
+                v->arch.hvm.viridian->crash_param[2],
+                v->arch.hvm.viridian->crash_param[3],
+                v->arch.hvm.viridian->crash_param[4]);
         break;
     }
 
@@ -364,11 +364,11 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian.guest_os_id.raw;
+        *val = d->arch.hvm.viridian->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
+        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +393,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian.crash_param[idx];
+        *val = v->arch.hvm.viridian->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -419,17 +419,33 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    ASSERT(!v->arch.hvm.viridian);
+    v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
+    if ( !v->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    ASSERT(!d->arch.hvm.viridian);
+    d->arch.hvm.viridian = xzalloc(struct viridian_domain);
+    if ( !d->arch.hvm.viridian )
+        return -ENOMEM;
+
     return 0;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
 {
-    viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    if ( !v->arch.hvm.viridian )
+        return;
+
+    if ( is_viridian_vcpu(v) )
+        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+
+    XFREE(v->arch.hvm.viridian);
 }
 
 void viridian_domain_deinit(struct domain *d)
@@ -438,6 +454,11 @@ void viridian_domain_deinit(struct domain *d)
 
     for_each_vcpu ( d, v )
         viridian_vcpu_deinit(v);
+
+    if ( !d->arch.hvm.viridian )
+        return;
+
+    XFREE(d->arch.hvm.viridian);
 }
 
 /*
@@ -591,7 +612,7 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.fields.pfn);
 }
 
-void viridian_map_guest_page(struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
 {
     struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.fields.pfn;
@@ -645,8 +666,8 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 {
     const struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa  = d->arch.hvm.viridian.hypercall_gpa.raw,
-        .guest_os_id    = d->arch.hvm.viridian.guest_os_id.raw,
+        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
+        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -665,8 +686,8 @@ static int viridian_load_domain_ctxt(struct domain *d,
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian.guest_os_id.raw    = ctxt.guest_os_id;
+    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
 
@@ -680,7 +701,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {};
 
-    if ( !is_viridian_domain(v->domain) )
+    if ( !is_viridian_vcpu(v) )
         return 0;
 
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3e7331817f..6c7c4f5aa6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -154,7 +154,7 @@ struct hvm_domain {
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
 
-    struct viridian_domain viridian;
+    struct viridian_domain *viridian;
 
     bool_t                 hap_enabled;
     bool_t                 mem_sharing_enabled;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 53ffebb2c5..37c3567a57 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -463,6 +463,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define is_viridian_domain(d) \
     (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
 
+#define is_viridian_vcpu(v) \
+    is_viridian_domain((v)->domain)
+
 #define has_viridian_time_ref_count(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_time_ref_count))
 
@@ -762,6 +765,7 @@ static inline bool hvm_has_set_descriptor_access_exiting(void)
 }
 
 #define is_viridian_domain(d) ((void)(d), false)
+#define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
 #define hvm_long_mode_active(v) ((void)(v), false)
 #define hvm_get_guest_time(v) ((void)(v), 0)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c84d5a5a6..d1589f3a96 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -205,7 +205,7 @@ struct hvm_vcpu {
     /* Pending hw/sw interrupt (.vector = -1 means nothing pending). */
     struct x86_event     inject_event;
 
-    struct viridian_vcpu viridian;
+    struct viridian_vcpu *viridian;
 };
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index f072838955..c562424332 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 int
 viridian_hypercall(struct cpu_user_regs *regs);
 
-void viridian_time_ref_count_freeze(struct domain *d);
-void viridian_time_ref_count_thaw(struct domain *d);
+void viridian_time_ref_count_freeze(const struct domain *d);
+void viridian_time_ref_count_thaw(const struct domain *d);
 
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
@@ -86,9 +86,9 @@ int viridian_domain_init(struct domain *d);
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
 
-void viridian_apic_assist_set(struct vcpu *v);
-bool viridian_apic_assist_completed(struct vcpu *v);
-void viridian_apic_assist_clear(struct vcpu *v);
+void viridian_apic_assist_set(const struct vcpu *v);
+bool viridian_apic_assist_completed(const struct vcpu *v);
+void viridian_apic_assist_clear(const struct vcpu *v);
 
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
-- 
2.20.1




* [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain...
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 01/11] viridian: add init hooks Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 12:33   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 04/11] viridian: make 'fields' struct anonymous Paul Durrant
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...where there is more than one dereference inside a function.

This shortens the code and makes it more readable. No functional change.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    | 49 ++++++++++++++++------------
 xen/arch/x86/hvm/viridian/time.c     | 27 ++++++++-------
 xen/arch/x86/hvm/viridian/viridian.c | 47 +++++++++++++-------------
 3 files changed, 69 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 28eda7798c..f3d9f7ae74 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -30,7 +30,8 @@ typedef union _HV_VP_ASSIST_PAGE
 
 void viridian_apic_assist_set(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
@@ -40,25 +41,25 @@ void viridian_apic_assist_set(const struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm.viridian->apic_assist_pending )
+    if ( vv->apic_assist_pending )
         domain_crash(v->domain);
 
-    v->arch.hvm.viridian->apic_assist_pending = true;
+    vv->apic_assist_pending = true;
     ptr->ApicAssist.no_eoi = 1;
 }
 
 bool viridian_apic_assist_completed(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return false;
 
-    if ( v->arch.hvm.viridian->apic_assist_pending &&
-         !ptr->ApicAssist.no_eoi )
+    if ( vv->apic_assist_pending && !ptr->ApicAssist.no_eoi )
     {
         /* An EOI has been avoided */
-        v->arch.hvm.viridian->apic_assist_pending = false;
+        vv->apic_assist_pending = false;
         return true;
     }
 
@@ -67,17 +68,20 @@ bool viridian_apic_assist_completed(const struct vcpu *v)
 
 void viridian_apic_assist_clear(const struct vcpu *v)
 {
-    HV_VP_ASSIST_PAGE *ptr = v->arch.hvm.viridian->vp_assist.ptr;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    HV_VP_ASSIST_PAGE *ptr = vv->vp_assist.ptr;
 
     if ( !ptr )
         return;
 
     ptr->ApicAssist.no_eoi = 0;
-    v->arch.hvm.viridian->apic_assist_pending = false;
+    vv->apic_assist_pending = false;
 }
 
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -95,12 +99,11 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         /* release any previous mapping */
-        viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
-        v->arch.hvm.viridian->vp_assist.msr.raw = val;
-        viridian_dump_guest_page(v, "VP_ASSIST",
-                                 &v->arch.hvm.viridian->vp_assist);
-        if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-            viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+        viridian_unmap_guest_page(&vv->vp_assist);
+        vv->vp_assist.msr.raw = val;
+        viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
+        if ( vv->vp_assist.msr.fields.enabled )
+            viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
     default:
@@ -146,18 +149,22 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
-    ctxt->apic_assist_pending = v->arch.hvm.viridian->apic_assist_pending;
-    ctxt->vp_assist_msr = v->arch.hvm.viridian->vp_assist.msr.raw;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    ctxt->apic_assist_pending = vv->apic_assist_pending;
+    ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
 }
 
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
-    v->arch.hvm.viridian->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( v->arch.hvm.viridian->vp_assist.msr.fields.enabled )
-        viridian_map_guest_page(v, &v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
+    if ( vv->vp_assist.msr.fields.enabled )
+        viridian_map_guest_page(v, &vv->vp_assist);
 
-    v->arch.hvm.viridian->apic_assist_pending = ctxt->apic_assist_pending;
+    vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index a7e94aadf0..76f9612001 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -141,6 +141,7 @@ void viridian_time_ref_count_thaw(const struct domain *d)
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -148,9 +149,9 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        d->arch.hvm.viridian->reference_tsc.raw = val;
+        vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.fields.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -165,7 +166,8 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     switch ( idx )
     {
@@ -187,13 +189,12 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = d->arch.hvm.viridian->reference_tsc.raw;
+        *val = vd->reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
-        struct viridian_time_ref_count *trc =
-            &d->arch.hvm.viridian->time_ref_count;
+        struct viridian_time_ref_count *trc = &vd->time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return X86EMUL_EXCEPTION;
@@ -217,17 +218,21 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
-    ctxt->time_ref_count = d->arch.hvm.viridian->time_ref_count.val;
-    ctxt->reference_tsc = d->arch.hvm.viridian->reference_tsc.raw;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    ctxt->time_ref_count = vd->time_ref_count.val;
+    ctxt->reference_tsc = vd->reference_tsc.raw;
 }
 
 void viridian_time_load_domain_ctxt(
     struct domain *d, const struct hvm_viridian_domain_context *ctxt)
 {
-    d->arch.hvm.viridian->time_ref_count.val = ctxt->time_ref_count;
-    d->arch.hvm.viridian->reference_tsc.raw = ctxt->reference_tsc;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
+
+    vd->time_ref_count.val = ctxt->time_ref_count;
+    vd->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( d->arch.hvm.viridian->reference_tsc.fields.enabled )
+    if ( vd->reference_tsc.fields.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 7839718ef4..710470fed7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -122,6 +122,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
                            uint32_t subleaf, struct cpuid_leaf *res)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
     ASSERT(leaf >= 0x40000000 && leaf < 0x40000100);
@@ -146,7 +147,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
          * Hypervisor information, but only if the guest has set its
          * own version number.
          */
-        if ( d->arch.hvm.viridian->guest_os_id.raw == 0 )
+        if ( vd->guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -191,8 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm.viridian->guest_os_id.raw == 0) ||
-             (d->arch.hvm.viridian->guest_os_id.fields.os < 4) )
+        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -281,21 +281,23 @@ static void enable_hypercall_page(struct domain *d)
 
 int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
+    struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        d->arch.hvm.viridian->guest_os_id.raw = val;
+        vd->guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        d->arch.hvm.viridian->hypercall_gpa.raw = val;
+        vd->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm.viridian->hypercall_gpa.fields.enabled )
+        if ( vd->hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -317,10 +319,10 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm.viridian->crash_param[idx] = val;
+        vv->crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -337,11 +339,8 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
         spin_unlock(&d->shutdown_lock);
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm.viridian->crash_param[0],
-                v->arch.hvm.viridian->crash_param[1],
-                v->arch.hvm.viridian->crash_param[2],
-                v->arch.hvm.viridian->crash_param[3],
-                v->arch.hvm.viridian->crash_param[4]);
+                vv->crash_param[0], vv->crash_param[1], vv->crash_param[2],
+                vv->crash_param[3], vv->crash_param[4]);
         break;
     }
 
@@ -357,18 +356,20 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = v->domain;
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
     case HV_X64_MSR_GUEST_OS_ID:
-        *val = d->arch.hvm.viridian->guest_os_id.raw;
+        *val = vd->guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
-        *val = d->arch.hvm.viridian->hypercall_gpa.raw;
+        *val = vd->hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -393,10 +394,10 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm.viridian->crash_param));
+                     ARRAY_SIZE(vv->crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm.viridian->crash_param[idx];
+        *val = vv->crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -665,9 +666,10 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
                                      hvm_domain_context_t *h)
 {
     const struct domain *d = v->domain;
+    const struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = d->arch.hvm.viridian->hypercall_gpa.raw,
-        .guest_os_id = d->arch.hvm.viridian->guest_os_id.raw,
+        .hypercall_gpa = vd->hypercall_gpa.raw,
+        .guest_os_id = vd->guest_os_id.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -681,13 +683,14 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
 static int viridian_load_domain_ctxt(struct domain *d,
                                      hvm_domain_context_t *h)
 {
+    struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt;
 
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm.viridian->hypercall_gpa.raw = ctxt.hypercall_gpa;
-    d->arch.hvm.viridian->guest_os_id.raw = ctxt.guest_os_id;
+    vd->hypercall_gpa.raw = ctxt.hypercall_gpa;
+    vd->guest_os_id.raw = ctxt.guest_os_id;
 
     viridian_time_load_domain_ctxt(d, &ctxt);
 
-- 
2.20.1



* [PATCH v5 04/11] viridian: make 'fields' struct anonymous...
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (2 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 12:34   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 05/11] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...inside viridian_page_msr and viridian_guest_os_id_msr unions.

There's no need to name it and the code is shortened by not doing so.
No functional change.
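As a standalone sketch of the change: C11 allows a struct member of a union to be anonymous, so its bit-fields are accessed directly on the union. The union below mirrors the shape in the patch but is illustrative only, and exact bit placement within `raw` is ABI-dependent:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of viridian_page_msr after the patch: the inner struct no
 * longer carries the 'fields' name, so accesses drop one level. */
union viridian_page_msr_like {
    uint64_t raw;
    struct {
        uint64_t enabled:1;
        uint64_t reserved_preserved:11;
        uint64_t pfn:48;
    };  /* anonymous: members read as m.enabled, not m.fields.enabled */
};
```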

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - New in v4
---
 xen/arch/x86/hvm/viridian/synic.c    |  4 ++--
 xen/arch/x86/hvm/viridian/time.c     | 10 +++++-----
 xen/arch/x86/hvm/viridian/viridian.c | 20 +++++++++-----------
 xen/include/asm-x86/hvm/viridian.h   |  4 ++--
 4 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index f3d9f7ae74..05d971b365 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -102,7 +102,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         viridian_unmap_guest_page(&vv->vp_assist);
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
-        if ( vv->vp_assist.msr.fields.enabled )
+        if ( vv->vp_assist.msr.enabled )
             viridian_map_guest_page(v, &vv->vp_assist);
         break;
 
@@ -161,7 +161,7 @@ void viridian_synic_load_vcpu_ctxt(
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
-    if ( vv->vp_assist.msr.fields.enabled )
+    if ( vv->vp_assist.msr.enabled )
         viridian_map_guest_page(v, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 76f9612001..909a3fb9e3 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -29,16 +29,16 @@ static void dump_reference_tsc(const struct domain *d)
 {
     const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
 
-    if ( !rt->fields.enabled )
+    if ( !rt->enabled )
         return;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->fields.pfn);
+           d->domain_id, (unsigned long)rt->pfn);
 }
 
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -151,7 +151,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
         vd->reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( vd->reference_tsc.fields.enabled )
+        if ( vd->reference_tsc.enabled )
             update_reference_tsc(d, true);
         break;
 
@@ -232,7 +232,7 @@ void viridian_time_load_domain_ctxt(
     vd->time_ref_count.val = ctxt->time_ref_count;
     vd->reference_tsc.raw = ctxt->reference_tsc;
 
-    if ( vd->reference_tsc.fields.enabled )
+    if ( vd->reference_tsc.enabled )
         update_reference_tsc(d, false);
 }
 
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 710470fed7..1a20d68aaf 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -192,7 +192,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.fields.os < 4 )
+        if ( vd->guest_os_id.raw == 0 || vd->guest_os_id.os < 4 )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -228,10 +228,8 @@ static void dump_guest_os_id(const struct domain *d)
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
-           d->domain_id,
-           goi->fields.vendor, goi->fields.os,
-           goi->fields.major, goi->fields.minor,
-           goi->fields.service_pack, goi->fields.build_number);
+           d->domain_id, goi->vendor, goi->os, goi->major, goi->minor,
+           goi->service_pack, goi->build_number);
 }
 
 static void dump_hypercall(const struct domain *d)
@@ -242,12 +240,12 @@ static void dump_hypercall(const struct domain *d)
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
-           hg->fields.enabled, (unsigned long)hg->fields.pfn);
+           hg->enabled, (unsigned long)hg->pfn);
 }
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -297,7 +295,7 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_HYPERCALL:
         vd->hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( vd->hypercall_gpa.fields.enabled )
+        if ( vd->hypercall_gpa.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -606,17 +604,17 @@ out:
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp)
 {
-    if ( !vp->msr.fields.enabled )
+    if ( !vp->msr.enabled )
         return;
 
     printk(XENLOG_G_INFO "%pv: VIRIDIAN %s: pfn: %lx\n",
-           v, name, (unsigned long)vp->msr.fields.pfn);
+           v, name, (unsigned long)vp->msr.pfn);
 }
 
 void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
 {
     struct domain *d = v->domain;
-    unsigned long gmfn = vp->msr.fields.pfn;
+    unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;
 
     if ( vp->ptr )
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index c562424332..abbbb36092 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -17,7 +17,7 @@ union viridian_page_msr
         uint64_t enabled:1;
         uint64_t reserved_preserved:11;
         uint64_t pfn:48;
-    } fields;
+    };
 };
 
 struct viridian_page
@@ -44,7 +44,7 @@ union viridian_guest_os_id_msr
         uint64_t major:8;
         uint64_t os:8;
         uint64_t vendor:16;
-    } fields;
+    };
 };
 
 struct viridian_time_ref_count
-- 
2.20.1



* [PATCH v5 05/11] viridian: extend init/deinit hooks into synic and time modules
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (3 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 04/11] viridian: make 'fields' struct anonymous Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 06/11] viridian: add missing context save helpers " Paul Durrant
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

This patch simply adds domain and vcpu init/deinit hooks into the synic
and time modules and wires them into viridian_[domain|vcpu]_[init|deinit]().
Only one of the hooks is currently needed (to unmap the 'VP Assist' page)
but subsequent patches will make use of the others.

NOTE: To perform the unmap of the VP Assist page,
      viridian_unmap_guest_page() is now directly called in the new
      viridian_synic_vcpu_deinit() function (which is safe even if
      is_viridian_vcpu() evaluates to false). This replaces the slightly
      hacky mechanism of faking a zero write to the
      HV_X64_MSR_VP_ASSIST_PAGE MSR in viridian_cpu_deinit().
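The init/deinit layering this introduces can be sketched as below. All names here are hypothetical stand-ins: each sub-module gets an init hook, and on any failure the caller runs the full deinit path, which must therefore be safe on partially-initialised (or never-initialised) state — here the unmap stand-in tolerates a NULL pointer, just as viridian_unmap_guest_page() is safe when no page was mapped:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct fake_vcpu {
    void *vp_assist_ptr;    /* stands in for the mapped VP Assist page */
};

static int synic_vcpu_init(struct fake_vcpu *v) { (void)v; return 0; }
static int time_vcpu_init(struct fake_vcpu *v)
{
    (void)v;
    return -1;              /* simulate a failing init hook */
}

static void synic_vcpu_deinit(struct fake_vcpu *v)
{
    free(v->vp_assist_ptr); /* free(NULL) is a no-op, so always safe */
    v->vp_assist_ptr = NULL;
}

static void time_vcpu_deinit(struct fake_vcpu *v) { (void)v; }

static int vcpu_init(struct fake_vcpu *v)
{
    int rc;

    rc = synic_vcpu_init(v);
    if ( rc )
        goto fail;

    rc = time_vcpu_init(v);
    if ( rc )
        goto fail;

    return 0;

 fail:
    /* Deinit in reverse order; each hook tolerates uninitialised state. */
    time_vcpu_deinit(v);
    synic_vcpu_deinit(v);

    return rc;
}
```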

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Constify vcpu and domain pointers

v2:
 - Pay attention to sync and time init hook return values
---
 xen/arch/x86/hvm/viridian/private.h  | 12 +++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 19 ++++++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 18 ++++++++++++++
 xen/arch/x86/hvm/viridian/viridian.c | 37 ++++++++++++++++++++++++++--
 4 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 46174f48cd..8c029f62c6 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -74,6 +74,12 @@
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+int viridian_synic_vcpu_init(const struct vcpu *v);
+int viridian_synic_domain_init(const struct domain *d);
+
+void viridian_synic_vcpu_deinit(const struct vcpu *v);
+void viridian_synic_domain_deinit(const struct domain *d);
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt);
 void viridian_synic_load_vcpu_ctxt(
@@ -82,6 +88,12 @@ void viridian_synic_load_vcpu_ctxt(
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+int viridian_time_vcpu_init(const struct vcpu *v);
+int viridian_time_domain_init(const struct domain *d);
+
+void viridian_time_vcpu_deinit(const struct vcpu *v);
+void viridian_time_domain_deinit(const struct domain *d);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 05d971b365..4b00dbe1b3 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -146,6 +146,25 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_synic_vcpu_init(const struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_synic_domain_init(const struct domain *d)
+{
+    return 0;
+}
+
+void viridian_synic_vcpu_deinit(const struct vcpu *v)
+{
+    viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+}
+
+void viridian_synic_domain_deinit(const struct domain *d)
+{
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 909a3fb9e3..48aca7e0ab 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -215,6 +215,24 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
+int viridian_time_vcpu_init(const struct vcpu *v)
+{
+    return 0;
+}
+
+int viridian_time_domain_init(const struct domain *d)
+{
+    return 0;
+}
+
+void viridian_time_vcpu_deinit(const struct vcpu *v)
+{
+}
+
+void viridian_time_domain_deinit(const struct domain *d)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 1a20d68aaf..f9a509d918 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -418,22 +418,52 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_vcpu_init(struct vcpu *v)
 {
+    int rc;
+
     ASSERT(!v->arch.hvm.viridian);
     v->arch.hvm.viridian = xzalloc(struct viridian_vcpu);
     if ( !v->arch.hvm.viridian )
         return -ENOMEM;
 
+    rc = viridian_synic_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_vcpu_init(v);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_vcpu_deinit(v);
+
+    return rc;
 }
 
 int viridian_domain_init(struct domain *d)
 {
+    int rc;
+
     ASSERT(!d->arch.hvm.viridian);
     d->arch.hvm.viridian = xzalloc(struct viridian_domain);
     if ( !d->arch.hvm.viridian )
         return -ENOMEM;
 
+    rc = viridian_synic_domain_init(d);
+    if ( rc )
+        goto fail;
+
+    rc = viridian_time_domain_init(d);
+    if ( rc )
+        goto fail;
+
     return 0;
+
+ fail:
+    viridian_domain_deinit(d);
+
+    return rc;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
@@ -441,8 +471,8 @@ void viridian_vcpu_deinit(struct vcpu *v)
     if ( !v->arch.hvm.viridian )
         return;
 
-    if ( is_viridian_vcpu(v) )
-        viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
+    viridian_time_vcpu_deinit(v);
+    viridian_synic_vcpu_deinit(v);
 
     XFREE(v->arch.hvm.viridian);
 }
@@ -457,6 +487,9 @@ void viridian_domain_deinit(struct domain *d)
     if ( !d->arch.hvm.viridian )
         return;
 
+    viridian_time_domain_deinit(d);
+    viridian_synic_domain_deinit(d);
+
     XFREE(d->arch.hvm.viridian);
 }
 
-- 
2.20.1



* [PATCH v5 06/11] viridian: add missing context save helpers into synic and time modules
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (4 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 05/11] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Currently the time module lacks vcpu context save helpers and the synic
module lacks domain context save helpers. These helpers are not yet
required, but subsequent patches will require at least some of them, so
this patch completes the set now rather than introducing them in an
ad-hoc way.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Add missing callers so that they are not added in an ad-hoc way
---
 xen/arch/x86/hvm/viridian/private.h  | 10 ++++++++++
 xen/arch/x86/hvm/viridian/synic.c    | 10 ++++++++++
 xen/arch/x86/hvm/viridian/time.c     | 10 ++++++++++
 xen/arch/x86/hvm/viridian/viridian.c |  4 ++++
 4 files changed, 34 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 8c029f62c6..5078b2d2ab 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -85,6 +85,11 @@ void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
 void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);
 
+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt);
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt);
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
@@ -94,6 +99,11 @@ int viridian_time_domain_init(const struct domain *d);
 void viridian_time_vcpu_deinit(const struct vcpu *v);
 void viridian_time_domain_deinit(const struct domain *d);
 
+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt);
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt);
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt);
 void viridian_time_load_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index 4b00dbe1b3..b8dab4b246 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -186,6 +186,16 @@ void viridian_synic_load_vcpu_ctxt(
     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
 
+void viridian_synic_save_domain_ctxt(
+    const struct domain *d, struct hvm_viridian_domain_context *ctxt)
+{
+}
+
+void viridian_synic_load_domain_ctxt(
+    struct domain *d, const struct hvm_viridian_domain_context *ctxt)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 48aca7e0ab..4399e62f54 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -233,6 +233,16 @@ void viridian_time_domain_deinit(const struct domain *d)
 {
 }
 
+void viridian_time_save_vcpu_ctxt(
+    const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
+void viridian_time_load_vcpu_ctxt(
+    struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
+{
+}
+
 void viridian_time_save_domain_ctxt(
     const struct domain *d, struct hvm_viridian_domain_context *ctxt)
 {
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index f9a509d918..742a988252 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -707,6 +707,7 @@ static int viridian_save_domain_ctxt(struct vcpu *v,
         return 0;
 
     viridian_time_save_domain_ctxt(d, &ctxt);
+    viridian_synic_save_domain_ctxt(d, &ctxt);
 
     return (hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0);
 }
@@ -723,6 +724,7 @@ static int viridian_load_domain_ctxt(struct domain *d,
     vd->hypercall_gpa.raw = ctxt.hypercall_gpa;
     vd->guest_os_id.raw = ctxt.guest_os_id;
 
+    viridian_synic_load_domain_ctxt(d, &ctxt);
     viridian_time_load_domain_ctxt(d, &ctxt);
 
     return 0;
@@ -738,6 +740,7 @@ static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
     if ( !is_viridian_vcpu(v) )
         return 0;
 
+    viridian_time_save_vcpu_ctxt(v, &ctxt);
     viridian_synic_save_vcpu_ctxt(v, &ctxt);
 
     return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
@@ -764,6 +767,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d,
         return -EINVAL;
 
     viridian_synic_load_vcpu_ctxt(v, &ctxt);
+    viridian_time_load_vcpu_ctxt(v, &ctxt);
 
     return 0;
 }
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v5 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (5 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 06/11] viridian: add missing context save helpers " Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

Whilst the reference tsc page does not currently need to be kept mapped
after it is initially set up (or updated after migrate), the code can
be simplified by using the common guest page map/unmap and dump functions.
New functionality added by a subsequent patch will also require the page to
be kept mapped for the lifetime of the domain.

NOTE: Because the reference tsc page is per-domain rather than per-vcpu
      this patch also changes viridian_map_guest_page() to take a domain
      pointer rather than a vcpu pointer. The domain pointer cannot be
      const, unlike the vcpu pointer.
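
      For reference, this is roughly how a guest consumes the page once it
      is mapped. The sketch below follows the TscSequence/TscScale/TscOffset
      read protocol described in the Hyper-V TLFS; the struct and the
      read_ref_time() helper are illustrative stand-ins, not code from this
      series.

```c
#include <assert.h>
#include <stdint.h>

/* Field layout mirrors the start of HV_REFERENCE_TSC_PAGE. */
typedef struct {
    uint32_t TscSequence;
    uint32_t Reserved1;
    uint64_t TscScale;
    int64_t  TscOffset;
} ref_tsc_page;

/*
 * Guest-side read protocol (per the TLFS): retry while the sequence
 * changes under our feet; a sequence of 0 means the page is invalid
 * and the guest must fall back to the TIME_REF_COUNT MSR.
 */
static int read_ref_time(const ref_tsc_page *p, uint64_t tsc, uint64_t *time)
{
    uint32_t seq;

    do {
        seq = p->TscSequence;
        if ( seq == 0 )
            return 0;
        /* 64x64 -> 128-bit multiply, keeping the high 64 bits. */
        *time = (uint64_t)(((unsigned __int128)tsc * p->TscScale) >> 64) +
                p->TscOffset;
    } while ( seq != p->TscSequence );

    return 1;
}
```

      This also shows why the hypervisor bumps TscSequence around updates
      and avoids the two 'invalid' sequence values.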

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/private.h  |  2 +-
 xen/arch/x86/hvm/viridian/synic.c    |  6 ++-
 xen/arch/x86/hvm/viridian/time.c     | 56 +++++++++-------------------
 xen/arch/x86/hvm/viridian/viridian.c |  3 +-
 xen/include/asm-x86/hvm/viridian.h   |  2 +-
 5 files changed, 25 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 5078b2d2ab..96a784b840 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -111,7 +111,7 @@ void viridian_time_load_domain_ctxt(
 
 void viridian_dump_guest_page(const struct vcpu *v, const char *name,
                               const struct viridian_page *vp);
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp);
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp);
 void viridian_unmap_guest_page(struct viridian_page *vp);
 
 #endif /* X86_HVM_VIRIDIAN_PRIVATE_H */
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index b8dab4b246..fb560bc162 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -81,6 +81,7 @@ void viridian_apic_assist_clear(const struct vcpu *v)
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     switch ( idx )
     {
@@ -103,7 +104,7 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         vv->vp_assist.msr.raw = val;
         viridian_dump_guest_page(v, "VP_ASSIST", &vv->vp_assist);
         if ( vv->vp_assist.msr.enabled )
-            viridian_map_guest_page(v, &vv->vp_assist);
+            viridian_map_guest_page(d, &vv->vp_assist);
         break;
 
     default:
@@ -178,10 +179,11 @@ void viridian_synic_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct domain *d = v->domain;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
-        viridian_map_guest_page(v, &vv->vp_assist);
+        viridian_map_guest_page(d, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
 }
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 4399e62f54..16fe41d411 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -25,33 +25,10 @@ typedef struct _HV_REFERENCE_TSC_PAGE
     uint64_t Reserved2[509];
 } HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
 
-static void dump_reference_tsc(const struct domain *d)
-{
-    const union viridian_page_msr *rt = &d->arch.hvm.viridian->reference_tsc;
-
-    if ( !rt->enabled )
-        return;
-
-    printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: pfn: %lx\n",
-           d->domain_id, (unsigned long)rt->pfn);
-}
-
 static void update_reference_tsc(struct domain *d, bool initialize)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->reference_tsc.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    HV_REFERENCE_TSC_PAGE *p;
-
-    if ( !page || !get_page_type(page, PGT_writable_page) )
-    {
-        if ( page )
-            put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
-        return;
-    }
-
-    p = __map_domain_page(page);
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
 
     if ( initialize )
         clear_page(p);
@@ -82,7 +59,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
 
         printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: invalidated\n",
                d->domain_id);
-        goto out;
+        return;
     }
 
     /*
@@ -100,11 +77,6 @@ static void update_reference_tsc(struct domain *d, bool initialize)
     if ( p->TscSequence == 0xFFFFFFFF ||
          p->TscSequence == 0 ) /* Avoid both 'invalid' values */
         p->TscSequence = 1;
-
- out:
-    unmap_domain_page(p);
-
-    put_page_and_type(page);
 }
 
 static int64_t raw_trc_val(const struct domain *d)
@@ -149,10 +121,14 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        vd->reference_tsc.raw = val;
-        dump_reference_tsc(d);
-        if ( vd->reference_tsc.enabled )
+        viridian_unmap_guest_page(&vd->reference_tsc);
+        vd->reference_tsc.msr.raw = val;
+        viridian_dump_guest_page(v, "REFERENCE_TSC", &vd->reference_tsc);
+        if ( vd->reference_tsc.msr.enabled )
+        {
+            viridian_map_guest_page(d, &vd->reference_tsc);
             update_reference_tsc(d, true);
+        }
         break;
 
     default:
@@ -189,7 +165,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
             return X86EMUL_EXCEPTION;
 
-        *val = vd->reference_tsc.raw;
+        *val = vd->reference_tsc.msr.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
@@ -231,6 +207,7 @@ void viridian_time_vcpu_deinit(const struct vcpu *v)
 
 void viridian_time_domain_deinit(const struct domain *d)
 {
+    viridian_unmap_guest_page(&d->arch.hvm.viridian->reference_tsc);
 }
 
 void viridian_time_save_vcpu_ctxt(
@@ -249,7 +226,7 @@ void viridian_time_save_domain_ctxt(
     const struct viridian_domain *vd = d->arch.hvm.viridian;
 
     ctxt->time_ref_count = vd->time_ref_count.val;
-    ctxt->reference_tsc = vd->reference_tsc.raw;
+    ctxt->reference_tsc = vd->reference_tsc.msr.raw;
 }
 
 void viridian_time_load_domain_ctxt(
@@ -258,10 +235,13 @@ void viridian_time_load_domain_ctxt(
     struct viridian_domain *vd = d->arch.hvm.viridian;
 
     vd->time_ref_count.val = ctxt->time_ref_count;
-    vd->reference_tsc.raw = ctxt->reference_tsc;
+    vd->reference_tsc.msr.raw = ctxt->reference_tsc;
 
-    if ( vd->reference_tsc.enabled )
+    if ( vd->reference_tsc.msr.enabled )
+    {
+        viridian_map_guest_page(d, &vd->reference_tsc);
         update_reference_tsc(d, false);
+    }
 }
 
 /*
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 742a988252..2b045ed88f 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -644,9 +644,8 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
            v, name, (unsigned long)vp->msr.pfn);
 }
 
-void viridian_map_guest_page(const struct vcpu *v, struct viridian_page *vp)
+void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    struct domain *d = v->domain;
     unsigned long gmfn = vp->msr.pfn;
     struct page_info *page;
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index abbbb36092..c65c044191 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -65,7 +65,7 @@ struct viridian_domain
     union viridian_guest_os_id_msr guest_os_id;
     union viridian_page_msr hypercall_gpa;
     struct viridian_time_ref_count time_ref_count;
-    union viridian_page_msr reference_tsc;
+    struct viridian_page reference_tsc;
 };
 
 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
-- 
2.20.1



* [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw()...
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (6 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 12:36   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...from arch_domain_shutdown/pause/unpause().

A subsequent patch will introduce an implementation of synthetic timers
which will also need freeze/thaw hooks, so make the exported hooks more
generic and call through to (re-named and static) time_ref_count_freeze/thaw
functions.

NOTE: This patch also introduces a new time_ref_count() helper to return
      the current counter value. This is currently only used by the MSR
      read handler but the synthetic timer code will also need to use it.
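
      The freeze/thaw/read arithmetic can be sketched in isolation as
      follows; trc_state and the trc_*() helpers are hypothetical stand-ins
      for struct viridian_time_ref_count and the (re-named) functions in
      the patch.

```c
#include <assert.h>
#include <stdint.h>

struct trc_state {
    int64_t  off;  /* offset applied to the raw counter */
    uint64_t val;  /* counter value captured at freeze time */
};

/* On freeze, capture the current guest-visible value. */
static void trc_freeze(struct trc_state *trc, int64_t raw)
{
    trc->val = raw + trc->off;
}

/* On thaw, recompute the offset so the counter resumes where it stopped. */
static void trc_thaw(struct trc_state *trc, int64_t raw)
{
    trc->off = (int64_t)trc->val - raw;
}

/* The new time_ref_count() helper: current guest-visible value. */
static int64_t trc_read(const struct trc_state *trc, int64_t raw)
{
    return raw + trc->off;
}
```

      The net effect is that raw ticks elapsing while the domain is paused
      never become visible to the guest.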

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/domain.c              | 12 ++++++------
 xen/arch/x86/hvm/viridian/time.c   | 24 +++++++++++++++++++++---
 xen/include/asm-x86/hvm/viridian.h |  4 ++--
 3 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8d579e2cf9..02afa7518e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -657,20 +657,20 @@ void arch_domain_destroy(struct domain *d)
 
 void arch_domain_shutdown(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }
 
 void arch_domain_pause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_freeze(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_freeze(d);
 }
 
 void arch_domain_unpause(struct domain *d)
 {
-    if ( has_viridian_time_ref_count(d) )
-        viridian_time_ref_count_thaw(d);
+    if ( is_viridian_domain(d) )
+        viridian_time_domain_thaw(d);
 }
 
 int arch_domain_soft_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 16fe41d411..71291d921c 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -91,7 +91,7 @@ static int64_t raw_trc_val(const struct domain *d)
     return scale_delta(tsc, &tsc_to_ns) / 100ul;
 }
 
-void viridian_time_ref_count_freeze(const struct domain *d)
+static void time_ref_count_freeze(const struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -100,7 +100,7 @@ void viridian_time_ref_count_freeze(const struct domain *d)
         trc->val = raw_trc_val(d) + trc->off;
 }
 
-void viridian_time_ref_count_thaw(const struct domain *d)
+static void time_ref_count_thaw(const struct domain *d)
 {
     struct viridian_time_ref_count *trc =
         &d->arch.hvm.viridian->time_ref_count;
@@ -110,6 +110,24 @@ void viridian_time_ref_count_thaw(const struct domain *d)
         trc->off = (int64_t)trc->val - raw_trc_val(d);
 }
 
+static int64_t time_ref_count(const struct domain *d)
+{
+    struct viridian_time_ref_count *trc =
+        &d->arch.hvm.viridian->time_ref_count;
+
+    return raw_trc_val(d) + trc->off;
+}
+
+void viridian_time_domain_freeze(const struct domain *d)
+{
+    time_ref_count_freeze(d);
+}
+
+void viridian_time_domain_thaw(const struct domain *d)
+{
+    time_ref_count_thaw(d);
+}
+
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
     struct domain *d = v->domain;
@@ -179,7 +197,7 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
             printk(XENLOG_G_INFO "d%d: VIRIDIAN MSR_TIME_REF_COUNT: accessed\n",
                    d->domain_id);
 
-        *val = raw_trc_val(d) + trc->off;
+        *val = time_ref_count(d);
         break;
     }
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index c65c044191..8146e2fc46 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -77,8 +77,8 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 int
 viridian_hypercall(struct cpu_user_regs *regs);
 
-void viridian_time_ref_count_freeze(const struct domain *d);
-void viridian_time_ref_count_thaw(const struct domain *d);
+void viridian_time_domain_freeze(const struct domain *d);
+void viridian_time_domain_thaw(const struct domain *d);
 
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
-- 
2.20.1



* [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (7 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 13:14   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 10/11] viridian: add implementation of synthetic timers Paul Durrant
  2019-03-11 13:41 ` [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the SCONTROL, SVERSION, SIEFP,
SIMP, EOM and SINT0-15 SynIC MSRs. No message source is added and, as such,
nothing will yet generate a synthetic interrupt. A subsequent patch will
add an implementation of synthetic timers which will need the infrastructure
added by this patch to deliver expiry messages to the guest.

NOTE: A 'synic' option is added to the toolstack viridian enlightenments
      enumeration but is deliberately not documented as enabling these
      SynIC registers without a message source is only useful for
      debugging.
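
      Each SINT0-15 MSR packs a vector plus control flags into one
      register. A sketch of that layout is below; the bit positions follow
      the Hyper-V TLFS, but the union and field names here are illustrative
      rather than copied from Xen's union viridian_sint_msr.

```c
#include <assert.h>
#include <stdint.h>

/*
 * SINTx MSR layout sketch (low bits first, assuming the usual x86-64
 * little-endian bit-field allocation).
 */
union sint_msr {
    uint64_t raw;
    struct {
        uint64_t vector:8;     /* must be 16-255; below 16 is invalid */
        uint64_t reserved0:8;
        uint64_t mask:1;       /* interrupt source is masked */
        uint64_t auto_eoi:1;   /* EOI performed implicitly on delivery */
        uint64_t polling:1;    /* guest polls; no interrupt delivered */
        uint64_t reserved1:45;
    };
};

/* Mirrors the wrmsr-side validation: vectors below 16 are rejected. */
static int sint_vector_valid(union sint_msr s)
{
    return s.vector >= 16;
}
```

      This is also why the load/save context code skips SINTs whose vector
      is below 16 when rebuilding the vector-to-SINTx mapping.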

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Address comments from Jan

v3:
 - Add the 'SintPollingModeAvailable' bit in CPUID leaf 3
---
 tools/libxl/libxl.h                    |   6 +
 tools/libxl/libxl_dom.c                |   3 +
 tools/libxl/libxl_types.idl            |   1 +
 xen/arch/x86/hvm/viridian/synic.c      | 234 ++++++++++++++++++++++++-
 xen/arch/x86/hvm/viridian/viridian.c   |  19 ++
 xen/arch/x86/hvm/vlapic.c              |  32 +++-
 xen/include/asm-x86/hvm/hvm.h          |   3 +
 xen/include/asm-x86/hvm/viridian.h     |  25 +++
 xen/include/asm-x86/hvm/vlapic.h       |   1 +
 xen/include/public/arch-x86/hvm/save.h |   2 +
 xen/include/public/hvm/params.h        |   7 +-
 11 files changed, 326 insertions(+), 7 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a38e5cdba2..a923a380d3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -318,6 +318,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_CRASH_CTL 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_SYNIC indicates that the 'synic' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_SYNIC 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 6160991af3..fb758d2ac3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -317,6 +317,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL))
         mask |= HVMPV_crash_ctl;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
+        mask |= HVMPV_synic;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index b685ac47ac..9860bcaf5f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -235,6 +235,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (4, "hcall_remote_tlb_flush"),
     (5, "apic_assist"),
     (6, "crash_ctl"),
+    (7, "synic"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index fb560bc162..f4510d3829 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -13,6 +13,7 @@
 
 #include <asm/apic.h>
 #include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
 
 #include "private.h"
 
@@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
     uint8_t ReservedZBytePadding[PAGE_SIZE];
 } HV_VP_ASSIST_PAGE;
 
+typedef enum HV_MESSAGE_TYPE {
+    HvMessageTypeNone,
+    HvMessageTimerExpired = 0x80000010,
+} HV_MESSAGE_TYPE;
+
+typedef struct HV_MESSAGE_FLAGS {
+    uint8_t MessagePending:1;
+    uint8_t Reserved:7;
+} HV_MESSAGE_FLAGS;
+
+typedef struct HV_MESSAGE_HEADER {
+    HV_MESSAGE_TYPE MessageType;
+    uint16_t Reserved1;
+    HV_MESSAGE_FLAGS MessageFlags;
+    uint8_t PayloadSize;
+    uint64_t Reserved2;
+} HV_MESSAGE_HEADER;
+
+#define HV_MESSAGE_SIZE 256
+#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
+
+typedef struct HV_MESSAGE {
+    HV_MESSAGE_HEADER Header;
+    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
+} HV_MESSAGE;
+
 void viridian_apic_assist_set(const struct vcpu *v)
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
@@ -83,6 +110,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
 
+    ASSERT(v == current || !v->is_running);
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -107,6 +136,76 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
             viridian_map_guest_page(d, &vv->vp_assist);
         break;
 
+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->scontrol = val;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        return X86EMUL_EXCEPTION;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->siefp = val;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        viridian_unmap_guest_page(&vv->simp);
+        vv->simp.msr.raw = val;
+        viridian_dump_guest_page(v, "SIMP", &vv->simp);
+        if ( vv->simp.msr.enabled )
+            viridian_map_guest_page(d, &vv->simp);
+        break;
+
+    case HV_X64_MSR_EOM:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        vv->msg_pending = 0;
+        break;
+
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
+                                                ARRAY_SIZE(vv->sint));
+        union viridian_sint_msr sint;
+        uint8_t vector;
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /* Vectors must be in the range 16-255 inclusive */
+        sint.raw = val;
+        if ( sint.vector < 16 )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * Invalidate any previous mapping by setting an out-of-range
+         * index before setting the new mapping.
+         */
+        vector = vv->sint[sintx].vector;
+        vv->vector_to_sintx[vector] = ARRAY_SIZE(vv->sint);
+
+        vector = sint.vector;
+        vv->vector_to_sintx[vector] = sintx;
+
+        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
+               vector);
+
+        if ( sint.polling )
+            __clear_bit(sintx, &vv->msg_pending);
+
+        vv->sint[sintx] = sint;
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
                  __func__, idx, val);
@@ -118,6 +217,9 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const struct domain *d = v->domain;
+
     switch ( idx )
     {
     case HV_X64_MSR_EOI:
@@ -131,14 +233,69 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         *val = ((uint64_t)icr2 << 32) | icr;
         break;
     }
+
     case HV_X64_MSR_TPR:
         *val = vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI);
         break;
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
-        *val = v->arch.hvm.viridian->vp_assist.msr.raw;
+        *val = vv->vp_assist.msr.raw;
+        break;
+
+    case HV_X64_MSR_SCONTROL:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->scontrol;
+        break;
+
+    case HV_X64_MSR_SVERSION:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        /*
+         * The specification says that the version number is 0x00000001
+         * and should be in the lower 32-bits of the MSR, while the
+         * upper 32-bits are reserved... but it doesn't say what they
+         * should be set to. Assume everything but the bottom bit
+         * should be zero.
+         */
+        *val = 1ul;
+        break;
+
+    case HV_X64_MSR_SIEFP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->siefp;
+        break;
+
+    case HV_X64_MSR_SIMP:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->simp.msr.raw;
+        break;
+
+    case HV_X64_MSR_EOM:
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = 0;
         break;
 
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+    {
+        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
+                                                ARRAY_SIZE(vv->sint));
+
+        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->sint[sintx].raw;
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
         return X86EMUL_EXCEPTION;
@@ -149,6 +306,20 @@ int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 
 int viridian_synic_vcpu_init(const struct vcpu *v)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    /*
+     * The specification says that all synthetic interrupts must be
+     * initially masked.
+     */
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+        vv->sint[i].mask = 1;
+
+    /* Initialize the mapping array with invalid values */
+    for ( i = 0; i < ARRAY_SIZE(vv->vector_to_sintx); i++ )
+        vv->vector_to_sintx[i] = ARRAY_SIZE(vv->sint);
+
     return 0;
 }
 
@@ -159,17 +330,58 @@ int viridian_synic_domain_init(const struct domain *d)
 
 void viridian_synic_vcpu_deinit(const struct vcpu *v)
 {
-    viridian_unmap_guest_page(&v->arch.hvm.viridian->vp_assist);
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+
+    viridian_unmap_guest_page(&vv->vp_assist);
+    viridian_unmap_guest_page(&vv->simp);
 }
 
 void viridian_synic_domain_deinit(const struct domain *d)
 {
 }
 
+void viridian_synic_poll_messages(const struct vcpu *v)
+{
+    /* There are currently no message sources */
+}
+
+bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
+                                     unsigned int vector)
+{
+    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int idx = vv->vector_to_sintx[vector];
+    unsigned int sintx = array_index_nospec(idx, ARRAY_SIZE(vv->sint));
+
+    if ( idx >= ARRAY_SIZE(vv->sint) )
+        return false;
+
+    return vv->sint[sintx].auto_eoi;
+}
+
+void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int idx = vv->vector_to_sintx[vector];
+    unsigned int sintx = array_index_nospec(idx, ARRAY_SIZE(vv->sint));
+
+    ASSERT(v == current);
+
+    if ( idx < ARRAY_SIZE(vv->sint) )
+        __clear_bit(sintx, &vv->msg_pending);
+}
+
 void viridian_synic_save_vcpu_ctxt(const struct vcpu *v,
                                    struct hvm_viridian_vcpu_context *ctxt)
 {
     const struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    BUILD_BUG_ON(ARRAY_SIZE(vv->sint) != ARRAY_SIZE(ctxt->sint_msr));
+
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+        ctxt->sint_msr[i] = vv->sint[i].raw;
+
+    ctxt->simp_msr = vv->simp.msr.raw;
 
     ctxt->apic_assist_pending = vv->apic_assist_pending;
     ctxt->vp_assist_msr = vv->vp_assist.msr.raw;
@@ -180,12 +392,30 @@ void viridian_synic_load_vcpu_ctxt(
 {
     struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
+    unsigned int i;
 
     vv->vp_assist.msr.raw = ctxt->vp_assist_msr;
     if ( vv->vp_assist.msr.enabled )
         viridian_map_guest_page(d, &vv->vp_assist);
 
     vv->apic_assist_pending = ctxt->apic_assist_pending;
+
+    vv->simp.msr.raw = ctxt->simp_msr;
+    if ( vv->simp.msr.enabled )
+        viridian_map_guest_page(d, &vv->simp);
+
+    for ( i = 0; i < ARRAY_SIZE(vv->sint); i++ )
+    {
+        uint8_t vector;
+
+        vv->sint[i].raw = ctxt->sint_msr[i];
+
+        vector = vv->sint[i].vector;
+        if ( vector < 16 )
+            continue;
+
+        vv->vector_to_sintx[vector] = i;
+    }
 }
 
 void viridian_synic_save_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 2b045ed88f..67d0121924 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -89,6 +89,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 
 /* Viridian CPUID leaf 3, Hypervisor Feature Indication */
 #define CPUID3D_CRASH_MSRS (1 << 10)
+#define CPUID3D_SINT_POLLING (1 << 17)
 
 /* Viridian CPUID leaf 4: Implementation Recommendations. */
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
@@ -178,6 +179,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             mask.AccessPartitionReferenceCounter = 1;
         if ( viridian_feature_mask(d) & HVMPV_reference_tsc )
             mask.AccessPartitionReferenceTsc = 1;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            mask.AccessSynicRegs = 1;
 
         u.mask = mask;
 
@@ -186,6 +189,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
         if ( viridian_feature_mask(d) & HVMPV_crash_ctl )
             res->d = CPUID3D_CRASH_MSRS;
+        if ( viridian_feature_mask(d) & HVMPV_synic )
+            res->d |= CPUID3D_SINT_POLLING;
 
         break;
     }
@@ -306,8 +311,16 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_wrmsr(v, idx, val);
 
+    case HV_X64_MSR_TSC_FREQUENCY:
+    case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
         return viridian_time_wrmsr(v, idx, val);
 
@@ -378,6 +391,12 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_ICR:
     case HV_X64_MSR_TPR:
     case HV_X64_MSR_VP_ASSIST_PAGE:
+    case HV_X64_MSR_SCONTROL:
+    case HV_X64_MSR_SVERSION:
+    case HV_X64_MSR_SIEFP:
+    case HV_X64_MSR_SIMP:
+    case HV_X64_MSR_EOM:
+    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
         return viridian_synic_rdmsr(v, idx, val);
 
     case HV_X64_MSR_TSC_FREQUENCY:
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index a1a43cd792..5a732858ca 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -461,10 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
 
 void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
 {
+    struct vcpu *v = vlapic_vcpu(vlapic);
     struct domain *d = vlapic_domain(vlapic);
 
+    /* All synic SINTx vectors are edge triggered. */
+
     if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(d, vector);
+    else if ( has_viridian_synic(v->domain) )
+        viridian_synic_ack_sint(v, vector);
 
     hvm_dpci_msi_eoi(d, vector);
 }
@@ -1301,13 +1306,23 @@ int vlapic_has_pending_irq(struct vcpu *v)
     if ( !vlapic_enabled(vlapic) )
         return -1;
 
+    /*
+     * Poll the viridian message queues before checking the IRR since
+     * a synthetic interrupt may be asserted during the poll.
+     */
+    if ( !vlapic->polled_synic && has_viridian_synic(v->domain) )
+    {
+        viridian_synic_poll_messages(v);
+        vlapic->polled_synic = true;
+    }
+
     irr = vlapic_find_highest_irr(vlapic);
     if ( irr == -1 )
-        return -1;
+        goto out;
 
     if ( hvm_funcs.virtual_intr_delivery_enabled &&
          !nestedhvm_vcpu_in_guestmode(v) )
-        return irr;
+        goto out;
 
     /*
      * If APIC assist was set then an EOI may have been avoided.
@@ -1328,9 +1343,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
          (irr & 0xf0) <= (isr & 0xf0) )
     {
         viridian_apic_assist_clear(v);
-        return -1;
+        irr = -1;
     }
 
+ out:
+    if ( irr == -1 )
+        vlapic->polled_synic = false;
+
     return irr;
 }
 
@@ -1360,8 +1379,13 @@ int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack)
     }
 
  done:
-    vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
+    if ( !has_viridian_synic(v->domain) ||
+         !viridian_synic_is_auto_eoi_sint(v, vector) )
+        vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
+
     vlapic_clear_irr(vector, vlapic);
+    vlapic->polled_synic = false;
+
     return 1;
 }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 37c3567a57..f67e9dbd12 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -472,6 +472,9 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 #define has_viridian_apic_assist(d) \
     (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_apic_assist))
 
+#define has_viridian_synic(d) \
+    (is_viridian_domain(d) && (viridian_feature_mask(d) & HVMPV_synic))
+
 static inline void hvm_inject_exception(
     unsigned int vector, unsigned int type,
     unsigned int insn_len, int error_code)
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 8146e2fc46..7446be492b 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -26,10 +26,30 @@ struct viridian_page
     void *ptr;
 };
 
+union viridian_sint_msr
+{
+    uint64_t raw;
+    struct
+    {
+        uint64_t vector:8;
+        uint64_t reserved_preserved1:8;
+        uint64_t mask:1;
+        uint64_t auto_eoi:1;
+        uint64_t polling:1;
+        uint64_t reserved_preserved2:45;
+    };
+};
+
 struct viridian_vcpu
 {
     struct viridian_page vp_assist;
     bool apic_assist_pending;
+    uint64_t scontrol;
+    uint64_t siefp;
+    struct viridian_page simp;
+    union viridian_sint_msr sint[16];
+    uint8_t vector_to_sintx[256];
+    unsigned long msg_pending;
     uint64_t crash_param[5];
 };
 
@@ -90,6 +110,11 @@ void viridian_apic_assist_set(const struct vcpu *v);
 bool viridian_apic_assist_completed(const struct vcpu *v);
 void viridian_apic_assist_clear(const struct vcpu *v);
 
+bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
+                                     unsigned int vector);
+void viridian_synic_poll_messages(const struct vcpu *v);
+void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector);
+
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
 /*
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index dde66b4f0f..7b2195c4eb 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -91,6 +91,7 @@ struct vlapic {
         uint32_t             icr, dest;
         struct tasklet       tasklet;
     } init_sipi;
+    bool                     polled_synic;
 };
 
 /* vlapic's frequence is 100 MHz */
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 40be84ecda..ec3e4df12c 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -602,6 +602,8 @@ struct hvm_viridian_vcpu_context {
     uint64_t vp_assist_msr;
     uint8_t  apic_assist_pending;
     uint8_t  _pad[7];
+    uint64_t simp_msr;
+    uint64_t sint_msr[16];
 };
 
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 72f633ef2d..e7e3c7c892 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -146,6 +146,10 @@
 #define _HVMPV_crash_ctl 6
 #define HVMPV_crash_ctl (1 << _HVMPV_crash_ctl)
 
+/* Enable SYNIC MSRs */
+#define _HVMPV_synic 7
+#define HVMPV_synic (1 << _HVMPV_synic)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -153,7 +157,8 @@
          HVMPV_reference_tsc | \
          HVMPV_hcall_remote_tlb_flush | \
          HVMPV_apic_assist | \
-         HVMPV_crash_ctl)
+         HVMPV_crash_ctl | \
+         HVMPV_synic)
 
 #endif
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (8 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 13:06   ` Paul Durrant
  2019-03-13 14:05   ` Jan Beulich
  2019-03-11 13:41 ` [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
  10 siblings, 2 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch introduces an implementation of the STIMER0-3_CONFIG/COUNT MSRs
and hence the first SynIC message source.

The new (and documented) 'stimer' viridian enlightenment group may be
specified to enable this feature.

While in the neighbourhood, this patch adds a missing check for an
attempt to write the time reference count MSR, which should result in an
exception (but not be reported as an unimplemented MSR).

NOTE: It is necessary for correct operation that timer expiration and
      message delivery time-stamping use the same time source as the guest.
      The specification is ambiguous but testing with a Windows 10 1803
      guest has shown that using the partition reference counter as a
      source whilst the guest is using RDTSC and the reference tsc page
      does not work correctly. Therefore the time_now() function is used.
      This implements the algorithm for acquiring partition reference time
      that is documented in the specification.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - Fix time_now() to read TSC as the guest would see it

v4:
 - Address comments from Jan

v3:
 - Re-worked missed ticks calculation
---
 docs/man/xl.cfg.5.pod.in               |  12 +-
 tools/libxl/libxl.h                    |   6 +
 tools/libxl/libxl_dom.c                |   4 +
 tools/libxl/libxl_types.idl            |   1 +
 xen/arch/x86/hvm/viridian/private.h    |   9 +-
 xen/arch/x86/hvm/viridian/synic.c      |  53 +++-
 xen/arch/x86/hvm/viridian/time.c       | 386 ++++++++++++++++++++++++-
 xen/arch/x86/hvm/viridian/viridian.c   |  19 ++
 xen/include/asm-x86/hvm/viridian.h     |  32 +-
 xen/include/public/arch-x86/hvm/save.h |   2 +
 xen/include/public/hvm/params.h        |   7 +-
 11 files changed, 523 insertions(+), 8 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index ad81af1ed8..355c654693 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2167,11 +2167,19 @@ This group incorporates the crash control MSRs. These enlightenments
 allow Windows to write crash information such that it can be logged
 by Xen.
 
+=item B<stimer>
+
+This set incorporates the SynIC and synthetic timer MSRs. Windows will
+use synthetic timers in preference to the emulated HPET as a source of
+ticks, so enabling this group ensures that ticks are consistent with
+the use of an enlightened time source (B<time_ref_count> or
+B<reference_tsc>).
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
-is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>
-and B<crash_ctl> groups.
+is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>,
+B<crash_ctl> and B<stimer> groups.
 
 =item B<all>
 
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a923a380d3..c8f219b0d3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -324,6 +324,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_SYNIC 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_STIMER indicates that the 'stimer' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_STIMER 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index fb758d2ac3..2ee0f82ee7 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -269,6 +269,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_TIME_REF_COUNT);
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST);
         libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL);
+        libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER);
     }
 
     libxl_for_each_set_bit(v, info->u.hvm.viridian_enable) {
@@ -320,6 +321,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
         mask |= HVMPV_synic;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
+        mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9860bcaf5f..1cce249de4 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -236,6 +236,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (5, "apic_assist"),
     (6, "crash_ctl"),
     (7, "synic"),
+    (8, "stimer"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
index 96a784b840..c272c34cda 100644
--- a/xen/arch/x86/hvm/viridian/private.h
+++ b/xen/arch/x86/hvm/viridian/private.h
@@ -74,6 +74,11 @@
 int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
+bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
+                                      unsigned int index,
+                                      uint64_t expiration,
+                                      uint64_t delivery);
+
 int viridian_synic_vcpu_init(const struct vcpu *v);
 int viridian_synic_domain_init(const struct domain *d);
 
@@ -93,7 +98,9 @@ void viridian_synic_load_domain_ctxt(
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
-int viridian_time_vcpu_init(const struct vcpu *v);
+void viridian_time_poll_timers(struct vcpu *v);
+
+int viridian_time_vcpu_init(struct vcpu *v);
 int viridian_time_domain_init(const struct domain *d);
 
 void viridian_time_vcpu_deinit(const struct vcpu *v);
diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
index f4510d3829..b5f3a79556 100644
--- a/xen/arch/x86/hvm/viridian/synic.c
+++ b/xen/arch/x86/hvm/viridian/synic.c
@@ -340,9 +340,58 @@ void viridian_synic_domain_deinit(const struct domain *d)
 {
 }
 
-void viridian_synic_poll_messages(const struct vcpu *v)
+void viridian_synic_poll_messages(struct vcpu *v)
 {
-    /* There are currently no message sources */
+    viridian_time_poll_timers(v);
+}
+
+bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
+                                      unsigned int index,
+                                      uint64_t expiration,
+                                      uint64_t delivery)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    const union viridian_sint_msr *vs = &vv->sint[sintx];
+    HV_MESSAGE *msg = vv->simp.ptr;
+    struct {
+        uint32_t TimerIndex;
+        uint32_t Reserved;
+        uint64_t ExpirationTime;
+        uint64_t DeliveryTime;
+    } payload = {
+        .TimerIndex = index,
+        .ExpirationTime = expiration,
+        .DeliveryTime = delivery,
+    };
+
+    if ( test_bit(sintx, &vv->msg_pending) )
+        return false;
+
+    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
+    msg += sintx;
+
+    /*
+     * To avoid using an atomic test-and-set, and barrier before calling
+     * vlapic_set_irq(), this function must be called in context of the
+     * vcpu receiving the message.
+     */
+    ASSERT(v == current);
+    if ( msg->Header.MessageType != HvMessageTypeNone )
+    {
+        msg->Header.MessageFlags.MessagePending = 1;
+        __set_bit(sintx, &vv->msg_pending);
+        return false;
+    }
+
+    msg->Header.MessageType = HvMessageTimerExpired;
+    msg->Header.MessageFlags.MessagePending = 0;
+    msg->Header.PayloadSize = sizeof(payload);
+    memcpy(msg->Payload, &payload, sizeof(payload));
+
+    if ( !vs->mask )
+        vlapic_set_irq(vcpu_vlapic(v), vs->vector, 0);
+
+    return true;
 }
 
 bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
index 71291d921c..12ce6c8f01 100644
--- a/xen/arch/x86/hvm/viridian/time.c
+++ b/xen/arch/x86/hvm/viridian/time.c
@@ -12,6 +12,7 @@
 #include <xen/version.h>
 
 #include <asm/apic.h>
+#include <asm/event.h>
 #include <asm/hvm/support.h>
 
 #include "private.h"
@@ -72,6 +73,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
      * ticks per 100ns shifted left by 64.
      */
     p->TscScale = ((10000ul << 32) / d->arch.tsc_khz) << 32;
+    smp_wmb();
 
     p->TscSequence++;
     if ( p->TscSequence == 0xFFFFFFFF ||
@@ -118,18 +120,265 @@ static int64_t time_ref_count(const struct domain *d)
     return raw_trc_val(d) + trc->off;
 }
 
+/*
+ * The specification says: "The partition reference time is computed
+ * by the following formula:
+ *
+ * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
+ *
+ * The multiplication is a 64 bit multiplication, which results in a
+ * 128 bit number which is then shifted 64 times to the right to obtain
+ * the high 64 bits."
+ */
+static uint64_t scale_tsc(uint64_t tsc, uint64_t scale, uint64_t offset)
+{
+    uint64_t result;
+
+    /*
+     * Quadword MUL takes an implicit operand in RAX, and puts the result
+     * in RDX:RAX. Because we only want the result of the multiplication
+     * after shifting right by 64 bits, we therefore only need the content
+     * of RDX.
+     */
+    asm ( "mulq %[scale]"
+          : "+a" (tsc), "=d" (result)
+          : [scale] "rm" (scale) );
+
+    return result + offset;
+}
+
+static uint64_t time_now(struct domain *d)
+{
+    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
+    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
+    uint32_t start, end;
+    uint64_t tsc;
+    uint64_t scale;
+    uint64_t offset;
+
+    /*
+     * If the reference TSC page is not enabled, or has been invalidated
+     * fall back to the partition reference counter.
+     */
+    if ( !p || !p->TscSequence )
+        return time_ref_count(d);
+
+    /*
+     * The following sampling algorithm for tsc, scale and offset is
+     * documented in the specification.
+     */
+    do {
+        start = p->TscSequence;
+        smp_rmb();
+
+        tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d));
+        scale = p->TscScale;
+        offset = p->TscOffset;
+
+        smp_rmb();
+        end = p->TscSequence;
+    } while ( end != start );
+
+    return scale_tsc(tsc, scale, offset);
+}
+
+static void stop_stimer(struct viridian_stimer *vs)
+{
+    const struct vcpu *v = vs->v;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int stimerx = vs - &vv->stimer[0];
+
+    if ( !vs->started )
+        return;
+
+    stop_timer(&vs->timer);
+    clear_bit(stimerx, &vv->stimer_pending);
+    vs->started = false;
+}
+
+static void stimer_expire(void *data)
+{
+    const struct viridian_stimer *vs = data;
+    struct vcpu *v = vs->v;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int stimerx = vs - &vv->stimer[0];
+
+    if ( !vs->config.fields.enabled )
+        return;
+
+    set_bit(stimerx, &vv->stimer_pending);
+    vcpu_kick(v);
+}
+
+static void start_stimer(struct viridian_stimer *vs)
+{
+    const struct vcpu *v = vs->v;
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int stimerx = vs - &vv->stimer[0];
+    int64_t now = time_now(v->domain);
+    s_time_t timeout;
+
+    if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) )
+        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
+               stimerx);
+
+    if ( vs->config.fields.periodic )
+    {
+        unsigned int missed = 0;
+        int64_t next;
+
+        /*
+         * The specification says that if the timer is lazy then we
+         * skip over any missed expirations so we can treat this case
+         * as the same as if the timer is currently stopped, i.e. we
+         * just schedule expiration to be 'count' ticks from now.
+         */
+        if ( !vs->started || vs->config.fields.lazy )
+            next = now + vs->count;
+        else
+        {
+            /*
+             * The timer is already started, so we're re-scheduling.
+             * Hence advance the timer expiration by one tick.
+             */
+            next = vs->expiration + vs->count;
+
+            /* Now check to see if any expirations have been missed */
+            if ( now - next > 0 )
+                missed = (now - next) / vs->count;
+
+            /*
+             * The specification says that if the timer is not lazy then
+             * a non-zero missed count should be used to reduce the period
+             * of the timer until it catches up, unless the count has
+             * reached a 'significant number', in which case the timer
+             * should be treated as lazy. Unfortunately the specification
+             * does not state what that number is so the choice of number
+             * here is a pure guess.
+             */
+            if ( missed > 3 )
+            {
+                missed = 0;
+                next = now + vs->count;
+            }
+        }
+
+        vs->expiration = next;
+        timeout = ((next - now) * 100ull) / (missed + 1);
+    }
+    else
+    {
+        vs->expiration = vs->count;
+        if ( vs->count - now <= 0 )
+        {
+            set_bit(stimerx, &vv->stimer_pending);
+            return;
+        }
+
+        timeout = (vs->expiration - now) * 100ull;
+    }
+
+    vs->started = true;
+    migrate_timer(&vs->timer, smp_processor_id());
+    set_timer(&vs->timer, timeout + NOW());
+}
+
+static void poll_stimer(struct vcpu *v, unsigned int stimerx)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    struct viridian_stimer *vs = &vv->stimer[stimerx];
+
+    if ( !test_bit(stimerx, &vv->stimer_pending) )
+        return;
+
+    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
+                                           stimerx, vs->expiration,
+                                           time_now(v->domain)) )
+        return;
+
+    clear_bit(stimerx, &vv->stimer_pending);
+
+    if ( vs->config.fields.periodic )
+        start_stimer(vs);
+    else
+        vs->config.fields.enabled = 0;
+}
+
+void viridian_time_poll_timers(struct vcpu *v)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    if ( !vv->stimer_pending )
+        return;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+        poll_stimer(v, i);
+}
+
+void viridian_time_vcpu_freeze(struct vcpu *v)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    if ( !is_viridian_vcpu(v) )
+        return;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        if ( vs->started )
+            stop_timer(&vs->timer);
+    }
+}
+
+void viridian_time_vcpu_thaw(struct vcpu *v)
+{
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    if ( !is_viridian_vcpu(v) )
+        return;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+    }
+}
+
 void viridian_time_domain_freeze(const struct domain *d)
 {
+    struct vcpu *v;
+
+    if ( !is_viridian_domain(d) )
+        return;
+
+    for_each_vcpu ( d, v )
+        viridian_time_vcpu_freeze(v);
+
     time_ref_count_freeze(d);
 }
 
 void viridian_time_domain_thaw(const struct domain *d)
 {
+    struct vcpu *v;
+
+    if ( !is_viridian_domain(d) )
+        return;
+
     time_ref_count_thaw(d);
+
+    for_each_vcpu ( d, v )
+        viridian_time_vcpu_thaw(v);
 }
 
 int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
     struct domain *d = v->domain;
     struct viridian_domain *vd = d->arch.hvm.viridian;
 
@@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
         }
         break;
 
+    case HV_X64_MSR_TIME_REF_COUNT:
+        return X86EMUL_EXCEPTION;
+
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    {
+        unsigned int stimerx =
+            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
+                               ARRAY_SIZE(vv->stimer));
+        struct viridian_stimer *vs = &vv->stimer[stimerx];
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        stop_stimer(vs);
+
+        vs->config.raw = val;
+
+        if ( !vs->config.fields.sintx )
+            vs->config.fields.enabled = 0;
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+
+        break;
+    }
+
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_COUNT:
+    {
+        unsigned int stimerx =
+            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
+                               ARRAY_SIZE(vv->stimer));
+        struct viridian_stimer *vs = &vv->stimer[stimerx];
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        stop_stimer(vs);
+
+        vs->count = val;
+
+        if ( !vs->count )
+            vs->config.fields.enabled = 0;
+        else if ( vs->config.fields.auto_enable )
+            vs->config.fields.enabled = 1;
+
+        if ( vs->config.fields.enabled )
+            start_stimer(vs);
+
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
                  __func__, idx, val);
@@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
 
 int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
     const struct domain *d = v->domain;
     struct viridian_domain *vd = d->arch.hvm.viridian;
 
@@ -201,6 +508,37 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
         break;
     }
 
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    {
+        unsigned int stimerx =
+            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
+                               ARRAY_SIZE(vv->stimer));
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->stimer[stimerx].config.raw;
+        break;
+    }
+
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_COUNT:
+    {
+        unsigned int stimerx =
+            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
+                               ARRAY_SIZE(vv->stimer));
+
+        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
+            return X86EMUL_EXCEPTION;
+
+        *val = vv->stimer[stimerx].count;
+        break;
+    }
+
     default:
         gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
         return X86EMUL_EXCEPTION;
@@ -209,8 +547,19 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
     return X86EMUL_OKAY;
 }
 
-int viridian_time_vcpu_init(const struct vcpu *v)
+int viridian_time_vcpu_init(struct vcpu *v)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        vs->v = v;
+        init_timer(&vs->timer, stimer_expire, vs, v->processor);
+    }
+
     return 0;
 }
 
@@ -221,6 +570,16 @@ int viridian_time_domain_init(const struct domain *d)
 
 void viridian_time_vcpu_deinit(const struct vcpu *v)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        kill_timer(&vs->timer);
+        vs->v = NULL;
+    }
 }
 
 void viridian_time_domain_deinit(const struct domain *d)
@@ -231,11 +590,36 @@ void viridian_time_domain_deinit(const struct domain *d)
 void viridian_time_save_vcpu_ctxt(
     const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
+                 ARRAY_SIZE(ctxt->stimer_config_msr));
+    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
+                 ARRAY_SIZE(ctxt->stimer_count_msr));
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        ctxt->stimer_config_msr[i] = vs->config.raw;
+        ctxt->stimer_count_msr[i] = vs->count;
+    }
 }
 
 void viridian_time_load_vcpu_ctxt(
     struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
 {
+    struct viridian_vcpu *vv = v->arch.hvm.viridian;
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
+    {
+        struct viridian_stimer *vs = &vv->stimer[i];
+
+        vs->config.raw = ctxt->stimer_config_msr[i];
+        vs->count = ctxt->stimer_count_msr[i];
+    }
 }
 
 void viridian_time_save_domain_ctxt(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 67d0121924..ec79cd9f15 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -181,6 +181,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             mask.AccessPartitionReferenceTsc = 1;
         if ( viridian_feature_mask(d) & HVMPV_synic )
             mask.AccessSynicRegs = 1;
+        if ( viridian_feature_mask(d) & HVMPV_stimer )
+            mask.AccessSyntheticTimerRegs = 1;
 
         u.mask = mask;
 
@@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
     case HV_X64_MSR_TSC_FREQUENCY:
     case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
+    case HV_X64_MSR_TIME_REF_COUNT:
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    case HV_X64_MSR_STIMER3_COUNT:
         return viridian_time_wrmsr(v, idx, val);
 
     case HV_X64_MSR_CRASH_P0:
@@ -403,6 +414,14 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
     case HV_X64_MSR_APIC_FREQUENCY:
     case HV_X64_MSR_REFERENCE_TSC:
     case HV_X64_MSR_TIME_REF_COUNT:
+    case HV_X64_MSR_STIMER0_CONFIG:
+    case HV_X64_MSR_STIMER0_COUNT:
+    case HV_X64_MSR_STIMER1_CONFIG:
+    case HV_X64_MSR_STIMER1_COUNT:
+    case HV_X64_MSR_STIMER2_CONFIG:
+    case HV_X64_MSR_STIMER2_COUNT:
+    case HV_X64_MSR_STIMER3_CONFIG:
+    case HV_X64_MSR_STIMER3_COUNT:
         return viridian_time_rdmsr(v, idx, val);
 
     case HV_X64_MSR_CRASH_P0:
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 7446be492b..c9cab32e06 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -40,6 +40,33 @@ union viridian_sint_msr
     };
 };
 
+union viridian_stimer_config_msr
+{
+    uint64_t raw;
+    struct
+    {
+        uint64_t enabled:1;
+        uint64_t periodic:1;
+        uint64_t lazy:1;
+        uint64_t auto_enable:1;
+        uint64_t vector:8;
+        uint64_t direct_mode:1;
+        uint64_t reserved_zero1:3;
+        uint64_t sintx:4;
+        uint64_t reserved_zero2:44;
+    } fields;
+};
+
+struct viridian_stimer {
+    struct vcpu *v;
+    struct timer timer;
+    union viridian_stimer_config_msr config;
+    uint64_t count;
+    uint64_t expiration;
+    s_time_t timeout;
+    bool started;
+};
+
 struct viridian_vcpu
 {
     struct viridian_page vp_assist;
@@ -50,6 +77,9 @@ struct viridian_vcpu
     union viridian_sint_msr sint[16];
     uint8_t vector_to_sintx[256];
     unsigned long msg_pending;
+    struct viridian_stimer stimer[4];
+    unsigned long stimer_enabled;
+    unsigned long stimer_pending;
     uint64_t crash_param[5];
 };
 
@@ -112,7 +142,7 @@ void viridian_apic_assist_clear(const struct vcpu *v);
 
 bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
                                      unsigned int vector);
-void viridian_synic_poll_messages(const struct vcpu *v);
+void viridian_synic_poll_messages(struct vcpu *v);
 void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector);
 
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index ec3e4df12c..8344aa471f 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -604,6 +604,8 @@ struct hvm_viridian_vcpu_context {
     uint8_t  _pad[7];
     uint64_t simp_msr;
     uint64_t sint_msr[16];
+    uint64_t stimer_config_msr[4];
+    uint64_t stimer_count_msr[4];
 };
 
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index e7e3c7c892..e06b0942d0 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -150,6 +150,10 @@
 #define _HVMPV_synic 7
 #define HVMPV_synic (1 << _HVMPV_synic)
 
+/* Enable STIMER MSRs */
+#define _HVMPV_stimer 8
+#define HVMPV_stimer (1 << _HVMPV_stimer)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -158,7 +162,8 @@
          HVMPV_hcall_remote_tlb_flush | \
          HVMPV_apic_assist | \
          HVMPV_crash_ctl | \
-         HVMPV_synic)
+         HVMPV_synic | \
+         HVMPV_stimer)
 
 #endif
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
  2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
                   ` (9 preceding siblings ...)
  2019-03-11 13:41 ` [PATCH v5 10/11] viridian: add implementation of synthetic timers Paul Durrant
@ 2019-03-11 13:41 ` Paul Durrant
  2019-03-13 14:08   ` Jan Beulich
  10 siblings, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-11 13:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monné

This patch adds an implementation of the hypercall as documented in the
specification [1], section 10.5.2. This enlightenment, as with others, is
advertised by CPUID leaf 0x40000004 and is under the control of a new
'hcall_ipi' option in libxl.

If used, this enlightenment should mean the guest only takes a single VMEXIT
to issue IPIs to multiple vCPUs rather than the multiple VMEXITs that would
result from using the emulated local APIC.

[1] https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Address comments from Jan

v3:
 - New in v3
---
 docs/man/xl.cfg.5.pod.in             |  6 +++
 tools/libxl/libxl.h                  |  6 +++
 tools/libxl/libxl_dom.c              |  3 ++
 tools/libxl/libxl_types.idl          |  1 +
 xen/arch/x86/hvm/viridian/viridian.c | 63 ++++++++++++++++++++++++++++
 xen/include/public/hvm/params.h      |  7 +++-
 6 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 355c654693..c7d70e618b 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2175,6 +2175,12 @@ ticks and hence enabling this group will ensure that ticks will be
 consistent with use of an enlightened time source (B<time_ref_count> or
 B<reference_tsc>).
 
+=item B<hcall_ipi>
+
+This set incorporates use of a hypercall for interprocessor interrupts.
+This enlightenment may improve performance of Windows guests with multiple
+virtual CPUs.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c8f219b0d3..482499a6c0 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -330,6 +330,12 @@
  */
 #define LIBXL_HAVE_VIRIDIAN_STIMER 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_HCALL_IPI indicates that the 'hcall_ipi' value
+ * is present in the viridian enlightenment enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_HCALL_IPI 1
+
 /*
  * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
  * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2ee0f82ee7..879c806139 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -324,6 +324,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
         mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
+        mask |= HVMPV_hcall_ipi;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 1cce249de4..cb4702fd7a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -237,6 +237,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (6, "crash_ctl"),
     (7, "synic"),
     (8, "stimer"),
+    (9, "hcall_ipi"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index ec79cd9f15..9c419d8cf3 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -28,6 +28,7 @@
 #define HvFlushVirtualAddressSpace 0x0002
 #define HvFlushVirtualAddressList  0x0003
 #define HvNotifyLongSpinWait       0x0008
+#define HvSendSyntheticClusterIpi  0x000b
 #define HvGetPartitionId           0x0046
 #define HvExtCallQueryCapabilities 0x8001
 
@@ -95,6 +96,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_HCALL_REMOTE_TLB_FLUSH (1 << 2)
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
+#define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -206,6 +208,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
         if ( !cpu_has_vmx_apic_reg_virt )
             res->a |= CPUID4A_MSR_BASED_APIC;
+        if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
+            res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
 
         /*
          * This value is the recommended number of attempts to try to
@@ -642,6 +646,65 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
+    case HvSendSyntheticClusterIpi:
+    {
+        struct vcpu *v;
+        uint32_t vector;
+        uint64_t vcpu_mask;
+
+        status = HV_STATUS_INVALID_PARAMETER;
+
+        /* Get input parameters. */
+        if ( input.fast )
+        {
+            if ( input_params_gpa >> 32 )
+                break;
+
+            vector = input_params_gpa;
+            vcpu_mask = output_params_gpa;
+        }
+        else
+        {
+            struct {
+                uint32_t vector;
+                uint8_t target_vtl;
+                uint8_t reserved_zero[3];
+                uint64_t vcpu_mask;
+            } input_params;
+
+            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                          sizeof(input_params)) !=
+                 HVMTRANS_okay )
+                break;
+
+            if ( input_params.target_vtl ||
+                 input_params.reserved_zero[0] ||
+                 input_params.reserved_zero[1] ||
+                 input_params.reserved_zero[2] )
+                break;
+
+            vector = input_params.vector;
+            vcpu_mask = input_params.vcpu_mask;
+        }
+
+        if ( vector < 0x10 || vector > 0xff )
+            break;
+
+        for_each_vcpu ( currd, v )
+        {
+            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+                break;
+
+            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+                continue;
+
+            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+        }
+
+        status = HV_STATUS_SUCCESS;
+        break;
+    }
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index e06b0942d0..36832e4b94 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -154,6 +154,10 @@
 #define _HVMPV_stimer 8
 #define HVMPV_stimer (1 << _HVMPV_stimer)
 
+/* Use Synthetic Cluster IPI Hypercall */
+#define _HVMPV_hcall_ipi 9
+#define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -163,7 +167,8 @@
          HVMPV_apic_assist | \
          HVMPV_crash_ctl | \
          HVMPV_synic | \
-         HVMPV_stimer)
+         HVMPV_stimer | \
+         HVMPV_hcall_ipi)
 
 #endif
 
-- 
2.20.1



* Re: [PATCH v5 01/11] viridian: add init hooks
  2019-03-11 13:41 ` [PATCH v5 01/11] viridian: add init hooks Paul Durrant
@ 2019-03-13 12:23   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 12:23 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> This patch adds domain and vcpu init hooks for viridian features. The init
> hooks do not yet do anything; the functionality will be added to by
> subsequent patches.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>

Non-Viridian parts
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures
  2019-03-11 13:41 ` [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures Paul Durrant
@ 2019-03-13 12:25   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 12:25 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> Currently the viridian_domain and viridian_vcpu structures are inline in
> the hvm_domain and hvm_vcpu structures respectively. Subsequent patches
> will need to add sizable extra fields to the viridian structures which
> will cause the PAGE_SIZE limit of the overall vcpu structure to be
> exceeded. This patch, therefore, uses the new init hooks to separately
> allocate the structures and converts the 'viridian' fields in hvm_domain
> and hvm_vcpu to be pointers to these allocations. These separate allocations
> also allow some vcpu and domain pointers to become const.
> 
> Ideally, now that they are no longer inline, the allocations of the
> viridian structures could be made conditional on whether the toolstack
> is going to configure the viridian enlightenments. However the toolstack
> is currently unable to convey this information to the domain creation code
> so such an enhancement is deferred until that becomes possible.
> 
> NOTE: The patch also introduced the 'is_viridian_vcpu' macro to avoid
>       introducing a second evaluation of 'is_viridian_domain' with an
>       open-coded 'v->domain' argument. This macro will also be further
>       used in a subsequent patch.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>

Non-Viridian parts
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain...
  2019-03-11 13:41 ` [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain Paul Durrant
@ 2019-03-13 12:33   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 12:33 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> ...where there is more than one dereference inside a function.
> 
> This shortens the code and makes it more readable. No functional change.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v5 04/11] viridian: make 'fields' struct anonymous...
  2019-03-11 13:41 ` [PATCH v5 04/11] viridian: make 'fields' struct anonymous Paul Durrant
@ 2019-03-13 12:34   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 12:34 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> ...inside viridian_page_msr and viridian_guest_os_id_msr unions.
> 
> There's no need to name it and the code is shortened by not doing so.
> No functional change.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw()...
  2019-03-11 13:41 ` [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
@ 2019-03-13 12:36   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 12:36 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> ...from arch_domain_shutdown/pause/unpause().
> 
> A subsequent patch will introduce an implementation of synthetic timers
> which will also need freeze/thaw hooks, so make the exported hooks more
> generic and call through to (re-named and static) time_ref_count_freeze/thaw
> functions.
> 
> NOTE: This patch also introduces a new time_ref_count() helper to return
>       the current counter value. This is currently only used by the MSR
>       read handler but the synthetic timer code will also need to use it.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> ---
>  xen/arch/x86/domain.c              | 12 ++++++------

Acked-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-11 13:41 ` [PATCH v5 10/11] viridian: add implementation of synthetic timers Paul Durrant
@ 2019-03-13 13:06   ` Paul Durrant
  2019-03-13 14:10     ` Jan Beulich
  2019-03-13 14:05   ` Jan Beulich
  1 sibling, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 13:06 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Jan Beulich, Ian Jackson,
	Roger Pau Monne



> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 11 March 2019 13:42
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant <Paul.Durrant@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien Grall <julien.grall@arm.com>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim
> (Xen.org) <tim@xen.org>; Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH v5 10/11] viridian: add implementation of synthetic timers
> 
> This patch introduces an implementation of the STIMER0-15_CONFIG/COUNT MSRs
> and hence the first SynIC message source.
> 
> The new (and documented) 'stimer' viridian enlightenment group may be
> specified to enable this feature.
> 
> While in the neighbourhood, this patch adds a missing check for an
> attempt to write the time reference count MSR, which should result in an
> exception (but not be reported as an unimplemented MSR).
> 
> NOTE: It is necessary for correct operation that timer expiration and
>       message delivery time-stamping use the same time source as the guest.
>       The specification is ambiguous but testing with a Windows 10 1803
>       guest has shown that using the partition reference counter as a
>       source whilst the guest is using RDTSC and the reference tsc page
>       does not work correctly. Therefore the time_now() function is used.
>       This implements the algorithm for acquiring partition reference time
>       that is documented in the specification.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v5:
>  - Fix time_now() to read TSC as the guest would see it
> 
> v4:
>  - Address comments from Jan
> 
> v3:
>  - Re-worked missed ticks calculation
> ---
>  docs/man/xl.cfg.5.pod.in               |  12 +-
>  tools/libxl/libxl.h                    |   6 +
>  tools/libxl/libxl_dom.c                |   4 +
>  tools/libxl/libxl_types.idl            |   1 +
>  xen/arch/x86/hvm/viridian/private.h    |   9 +-
>  xen/arch/x86/hvm/viridian/synic.c      |  53 +++-
>  xen/arch/x86/hvm/viridian/time.c       | 386 ++++++++++++++++++++++++-
>  xen/arch/x86/hvm/viridian/viridian.c   |  19 ++
>  xen/include/asm-x86/hvm/viridian.h     |  32 +-
>  xen/include/public/arch-x86/hvm/save.h |   2 +
>  xen/include/public/hvm/params.h        |   7 +-
>  11 files changed, 523 insertions(+), 8 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index ad81af1ed8..355c654693 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -2167,11 +2167,19 @@ This group incorporates the crash control MSRs. These enlightenments
>  allow Windows to write crash information such that it can be logged
>  by Xen.
> 
> +=item B<stimer>
> +
> +This set incorporates the SynIC and synthetic timer MSRs. Windows will
> +use synthetic timers in preference to emulated HPET for a source of
> +ticks and hence enabling this group will ensure that ticks will be
> +consistent with use of an enlightened time source (B<time_ref_count> or
> +B<reference_tsc>).
> +
>  =item B<defaults>
> 
>  This is a special value that enables the default set of groups, which
> -is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>
> -and B<crash_ctl> groups.
> +is currently the B<base>, B<freq>, B<time_ref_count>, B<apic_assist>,
> +B<crash_ctl> and B<stimer> groups.
> 
>  =item B<all>
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index a923a380d3..c8f219b0d3 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -324,6 +324,12 @@
>   */
>  #define LIBXL_HAVE_VIRIDIAN_SYNIC 1
> 
> +/*
> + * LIBXL_HAVE_VIRIDIAN_STIMER indicates that the 'stimer' value
> + * is present in the viridian enlightenment enumeration.
> + */
> +#define LIBXL_HAVE_VIRIDIAN_STIMER 1
> +
>  /*
>   * LIBXL_HAVE_BUILDINFO_HVM_ACPI_LAPTOP_SLATE indicates that
>   * libxl_domain_build_info has the u.hvm.acpi_laptop_slate field.
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index fb758d2ac3..2ee0f82ee7 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -269,6 +269,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
>          libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_TIME_REF_COUNT);
>          libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST);
>          libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_CRASH_CTL);
> +        libxl_bitmap_set(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER);
>      }
> 
>      libxl_for_each_set_bit(v, info->u.hvm.viridian_enable) {
> @@ -320,6 +321,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
>      if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_SYNIC))
>          mask |= HVMPV_synic;
> 
> +    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_STIMER))
> +        mask |= HVMPV_time_ref_count | HVMPV_synic | HVMPV_stimer;
> +
>      if (mask != 0 &&
>          xc_hvm_param_set(CTX->xch,
>                           domid,
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 9860bcaf5f..1cce249de4 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -236,6 +236,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
>      (5, "apic_assist"),
>      (6, "crash_ctl"),
>      (7, "synic"),
> +    (8, "stimer"),
>      ])
> 
>  libxl_hdtype = Enumeration("hdtype", [
> diff --git a/xen/arch/x86/hvm/viridian/private.h b/xen/arch/x86/hvm/viridian/private.h
> index 96a784b840..c272c34cda 100644
> --- a/xen/arch/x86/hvm/viridian/private.h
> +++ b/xen/arch/x86/hvm/viridian/private.h
> @@ -74,6 +74,11 @@
>  int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
>  int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
> 
> +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> +                                      unsigned int index,
> +                                      uint64_t expiration,
> +                                      uint64_t delivery);
> +
>  int viridian_synic_vcpu_init(const struct vcpu *v);
>  int viridian_synic_domain_init(const struct domain *d);
> 
> @@ -93,7 +98,9 @@ void viridian_synic_load_domain_ctxt(
>  int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val);
>  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val);
> 
> -int viridian_time_vcpu_init(const struct vcpu *v);
> +void viridian_time_poll_timers(struct vcpu *v);
> +
> +int viridian_time_vcpu_init(struct vcpu *v);
>  int viridian_time_domain_init(const struct domain *d);
> 
>  void viridian_time_vcpu_deinit(const struct vcpu *v);
> diff --git a/xen/arch/x86/hvm/viridian/synic.c b/xen/arch/x86/hvm/viridian/synic.c
> index f4510d3829..b5f3a79556 100644
> --- a/xen/arch/x86/hvm/viridian/synic.c
> +++ b/xen/arch/x86/hvm/viridian/synic.c
> @@ -340,9 +340,58 @@ void viridian_synic_domain_deinit(const struct domain *d)
>  {
>  }
> 
> -void viridian_synic_poll_messages(const struct vcpu *v)
> +void viridian_synic_poll_messages(struct vcpu *v)
>  {
> -    /* There are currently no message sources */
> +    viridian_time_poll_timers(v);
> +}
> +
> +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> +                                      unsigned int index,
> +                                      uint64_t expiration,
> +                                      uint64_t delivery)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    const union viridian_sint_msr *vs = &vv->sint[sintx];
> +    HV_MESSAGE *msg = vv->simp.ptr;
> +    struct {
> +        uint32_t TimerIndex;
> +        uint32_t Reserved;
> +        uint64_t ExpirationTime;
> +        uint64_t DeliveryTime;
> +    } payload = {
> +        .TimerIndex = index,
> +        .ExpirationTime = expiration,
> +        .DeliveryTime = delivery,
> +    };
> +
> +    if ( test_bit(sintx, &vv->msg_pending) )
> +        return false;
> +
> +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> +    msg += sintx;
> +
> +    /*
> +     * To avoid using an atomic test-and-set, and barrier before calling
> +     * vlapic_set_irq(), this function must be called in context of the
> +     * vcpu receiving the message.
> +     */
> +    ASSERT(v == current);
> +    if ( msg->Header.MessageType != HvMessageTypeNone )
> +    {
> +        msg->Header.MessageFlags.MessagePending = 1;
> +        __set_bit(sintx, &vv->msg_pending);
> +        return false;
> +    }
> +
> +    msg->Header.MessageType = HvMessageTimerExpired;
> +    msg->Header.MessageFlags.MessagePending = 0;
> +    msg->Header.PayloadSize = sizeof(payload);
> +    memcpy(msg->Payload, &payload, sizeof(payload));
> +
> +    if ( !vs->mask )
> +        vlapic_set_irq(vcpu_vlapic(v), vs->vector, 0);
> +
> +    return true;
>  }
> 
>  bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
> diff --git a/xen/arch/x86/hvm/viridian/time.c b/xen/arch/x86/hvm/viridian/time.c
> index 71291d921c..12ce6c8f01 100644
> --- a/xen/arch/x86/hvm/viridian/time.c
> +++ b/xen/arch/x86/hvm/viridian/time.c
> @@ -12,6 +12,7 @@
>  #include <xen/version.h>
> 
>  #include <asm/apic.h>
> +#include <asm/event.h>
>  #include <asm/hvm/support.h>
> 
>  #include "private.h"
> @@ -72,6 +73,7 @@ static void update_reference_tsc(struct domain *d, bool initialize)
>       * ticks per 100ns shifted left by 64.
>       */
>      p->TscScale = ((10000ul << 32) / d->arch.tsc_khz) << 32;
> +    smp_wmb();
> 
>      p->TscSequence++;
>      if ( p->TscSequence == 0xFFFFFFFF ||
> @@ -118,18 +120,265 @@ static int64_t time_ref_count(const struct domain *d)
>      return raw_trc_val(d) + trc->off;
>  }
> 
> +/*
> + * The specification says: "The partition reference time is computed
> + * by the following formula:
> + *
> + * ReferenceTime = ((VirtualTsc * TscScale) >> 64) + TscOffset
> + *
> + * The multiplication is a 64 bit multiplication, which results in a
> + * 128 bit number which is then shifted 64 times to the right to obtain
> + * the high 64 bits."
> + */
> +static uint64_t scale_tsc(uint64_t tsc, uint64_t scale, uint64_t offset)
> +{
> +    uint64_t result;
> +
> +    /*
> +     * Quadword MUL takes an implicit operand in RAX, and puts the result
> +     * in RDX:RAX. Because we only want the result of the multiplication
> +     * after shifting right by 64 bits, we therefore only need the content
> +     * of RDX.
> +     */
> +    asm ( "mulq %[scale]"
> +          : "+a" (tsc), "=d" (result)
> +          : [scale] "rm" (scale) );
> +
> +    return result + offset;
> +}
> +
> +static uint64_t time_now(struct domain *d)
> +{
> +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> +    uint32_t start, end;
> +    uint64_t tsc;
> +    uint64_t scale;
> +    uint64_t offset;
> +
> +    /*
> +     * If the reference TSC page is not enabled, or has been invalidated
> +     * fall back to the partition reference counter.
> +     */
> +    if ( !p || !p->TscSequence )
> +        return time_ref_count(d);
> +
> +    /*
> +     * The following sampling algorithm for tsc, scale and offset is
> +     * documented in the specification.
> +     */
> +    do {
> +        start = p->TscSequence;
> +        smp_rmb();
> +
> +        tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d));
> +        scale = p->TscScale;
> +        offset = p->TscOffset;
> +
> +        smp_rmb();
> +        end = p->TscSequence;
> +    } while (end != start);
> +
> +    return scale_tsc(tsc, scale, offset);
> +}
> +
> +static void stop_stimer(struct viridian_stimer *vs)
> +{
> +    const struct vcpu *v = vs->v;
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int stimerx = vs - &vv->stimer[0];
> +
> +    if ( !vs->started )
> +        return;
> +
> +    stop_timer(&vs->timer);
> +    clear_bit(stimerx, &vv->stimer_pending);
> +    vs->started = false;
> +}
> +
> +static void stimer_expire(void *data)
> +{
> +    const struct viridian_stimer *vs = data;
> +    struct vcpu *v = vs->v;
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int stimerx = vs - &vv->stimer[0];
> +
> +    if ( !vs->config.fields.enabled )
> +        return;
> +
> +    set_bit(stimerx, &vv->stimer_pending);
> +    vcpu_kick(v);
> +}
> +
> +static void start_stimer(struct viridian_stimer *vs)
> +{
> +    const struct vcpu *v = vs->v;
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int stimerx = vs - &vv->stimer[0];
> +    int64_t now = time_now(v->domain);
> +    s_time_t timeout;
> +
> +    if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) )
> +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> +               stimerx);
> +
> +    if ( vs->config.fields.periodic )
> +    {
> +        unsigned int missed = 0;
> +        int64_t next;
> +
> +        /*
> +         * The specification says that if the timer is lazy then we
> +         * skip over any missed expirations so we can treat this case
> +         * as the same as if the timer is currently stopped, i.e. we
> +         * just schedule expiration to be 'count' ticks from now.
> +         */
> +        if ( !vs->started || vs->config.fields.lazy )
> +            next = now + vs->count;
> +        else
> +        {
> +            /*
> +             * The timer is already started, so we're re-scheduling.
> +             * Hence advance the timer expiration by one tick.
> +             */
> +            next = vs->expiration + vs->count;
> +
> +            /* Now check to see if any expirations have been missed */
> +            if ( now - next > 0 )
> +                missed = (now - next) / vs->count;
> +
> +            /*
> +             * The specification says that if the timer is not lazy then
> +             * a non-zero missed count should be used to reduce the period
> +             * of the timer until it catches up, unless the count has
> +             * reached a 'significant number', in which case the timer
> +             * should be treated as lazy. Unfortunately the specification
> +             * does not state what that number is so the choice of number
> +             * here is a pure guess.
> +             */
> +            if ( missed > 3 )
> +            {
> +                missed = 0;
> +                next = now + vs->count;
> +            }
> +        }
> +
> +        vs->expiration = next;
> +        timeout = ((next - now) * 100ull) / (missed + 1);
> +    }
> +    else
> +    {
> +        vs->expiration = vs->count;
> +        if ( vs->count - now <= 0 )
> +        {
> +            set_bit(stimerx, &vv->stimer_pending);
> +            return;
> +        }
> +
> +        timeout = (vs->expiration - now) * 100ull;
> +    }
> +
> +    vs->started = true;
> +    migrate_timer(&vs->timer, smp_processor_id());
> +    set_timer(&vs->timer, timeout + NOW());
> +}
> +
> +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    struct viridian_stimer *vs = &vv->stimer[stimerx];
> +
> +    if ( !test_bit(stimerx, &vv->stimer_pending) )
> +        return;
> +
> +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
> +                                           stimerx, vs->expiration,
> +                                           time_now(v->domain)) )
> +        return;
> +
> +    clear_bit(stimerx, &vv->stimer_pending);
> +
> +    if ( vs->config.fields.periodic )
> +        start_stimer(vs);
> +    else
> +        vs->config.fields.enabled = 0;
> +}
> +
> +void viridian_time_poll_timers(struct vcpu *v)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    if ( !vv->stimer_pending )
> +       return;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +        poll_stimer(v, i);
> +}
> +
> +void viridian_time_vcpu_freeze(struct vcpu *v)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    if ( !is_viridian_vcpu(v) )
> +        return;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        if ( vs->started )
> +            stop_timer(&vs->timer);
> +    }
> +}
> +
> +void viridian_time_vcpu_thaw(struct vcpu *v)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    if ( !is_viridian_vcpu(v) )
> +        return;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        if ( vs->config.fields.enabled )
> +            start_stimer(vs);
> +    }
> +}
> +
>  void viridian_time_domain_freeze(const struct domain *d)
>  {
> +    struct vcpu *v;
> +
> +    if ( !is_viridian_domain(d) )
> +        return;
> +
> +    for_each_vcpu ( d, v )
> +        viridian_time_vcpu_freeze(v);
> +
>      time_ref_count_freeze(d);
>  }
> 
>  void viridian_time_domain_thaw(const struct domain *d)
>  {
> +    struct vcpu *v;
> +
> +    if ( !is_viridian_domain(d) )
> +        return;
> +
>      time_ref_count_thaw(d);
> +
> +    for_each_vcpu ( d, v )
> +        viridian_time_vcpu_thaw(v);
>  }
> 
>  int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>      struct domain *d = v->domain;
>      struct viridian_domain *vd = d->arch.hvm.viridian;
> 
> @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>          }
>          break;
> 
> +    case HV_X64_MSR_TIME_REF_COUNT:
> +        return X86EMUL_EXCEPTION;
> +
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));
> +        struct viridian_stimer *vs = &vv->stimer[stimerx];
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        stop_stimer(vs);
> +
> +        vs->config.raw = val;
> +
> +        if ( !vs->config.fields.sintx )
> +            vs->config.fields.enabled = 0;
> +
> +        if ( vs->config.fields.enabled )
> +            start_stimer(vs);
> +
> +        break;
> +    }
> +
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_COUNT:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));
> +        struct viridian_stimer *vs = &vv->stimer[stimerx];
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        stop_stimer(vs);
> +
> +        vs->count = val;
> +
> +        if ( !vs->count )
> +            vs->config.fields.enabled = 0;
> +        else if ( vs->config.fields.auto_enable )
> +            vs->config.fields.enabled = 1;
> +
> +        if ( vs->config.fields.enabled )
> +            start_stimer(vs);
> +
> +        break;
> +    }
> +
>      default:
>          gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x (%016"PRIx64")\n",
>                   __func__, idx, val);
> @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> 
>  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>      const struct domain *d = v->domain;
>      struct viridian_domain *vd = d->arch.hvm.viridian;
> 
> @@ -201,6 +508,37 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>          break;
>      }
> 
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = vv->stimer[stimerx].config.raw;
> +        break;
> +    }
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_COUNT:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = vv->stimer[stimerx].count;
> +        break;
> +    }
> +
>      default:
>          gdprintk(XENLOG_INFO, "%s: unimplemented MSR %#x\n", __func__, idx);
>          return X86EMUL_EXCEPTION;
> @@ -209,8 +547,19 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>      return X86EMUL_OKAY;
>  }
> 
> -int viridian_time_vcpu_init(const struct vcpu *v)
> +int viridian_time_vcpu_init(struct vcpu *v)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        vs->v = v;
> +        init_timer(&vs->timer, stimer_expire, vs, v->processor);
> +    }
> +
>      return 0;
>  }
> 
> @@ -221,6 +570,16 @@ int viridian_time_domain_init(const struct domain *d)
> 
>  void viridian_time_vcpu_deinit(const struct vcpu *v)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        kill_timer(&vs->timer);
> +        vs->v = NULL;
> +    }
>  }
> 
>  void viridian_time_domain_deinit(const struct domain *d)
> @@ -231,11 +590,36 @@ void viridian_time_domain_deinit(const struct domain *d)
>  void viridian_time_save_vcpu_ctxt(
>      const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> +                 ARRAY_SIZE(ctxt->stimer_config_msr));
> +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> +                 ARRAY_SIZE(ctxt->stimer_count_msr));
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        ctxt->stimer_config_msr[i] = vs->config.raw;
> +        ctxt->stimer_count_msr[i] = vs->count;
> +    }
>  }
> 
>  void viridian_time_load_vcpu_ctxt(
>      struct vcpu *v, const struct hvm_viridian_vcpu_context *ctxt)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];
> +
> +        vs->config.raw = ctxt->stimer_config_msr[i];
> +        vs->count = ctxt->stimer_count_msr[i];
> +    }
>  }
> 
>  void viridian_time_save_domain_ctxt(
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 67d0121924..ec79cd9f15 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -181,6 +181,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
>              mask.AccessPartitionReferenceTsc = 1;
>          if ( viridian_feature_mask(d) & HVMPV_synic )
>              mask.AccessSynicRegs = 1;
> +        if ( viridian_feature_mask(d) & HVMPV_stimer )
> +            mask.AccessSyntheticTimerRegs = 1;
> 
>          u.mask = mask;
> 
> @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
>      case HV_X64_MSR_TSC_FREQUENCY:
>      case HV_X64_MSR_APIC_FREQUENCY:
>      case HV_X64_MSR_REFERENCE_TSC:
> +    case HV_X64_MSR_TIME_REF_COUNT:
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    case HV_X64_MSR_STIMER3_COUNT:
>          return viridian_time_wrmsr(v, idx, val);
> 
>      case HV_X64_MSR_CRASH_P0:
> @@ -403,6 +414,14 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
>      case HV_X64_MSR_APIC_FREQUENCY:
>      case HV_X64_MSR_REFERENCE_TSC:
>      case HV_X64_MSR_TIME_REF_COUNT:
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    case HV_X64_MSR_STIMER3_COUNT:
>          return viridian_time_rdmsr(v, idx, val);
> 
>      case HV_X64_MSR_CRASH_P0:
> diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
> index 7446be492b..c9cab32e06 100644
> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -40,6 +40,33 @@ union viridian_sint_msr
>      };
>  };
> 
> +union viridian_stimer_config_msr
> +{
> +    uint64_t raw;
> +    struct
> +    {
> +        uint64_t enabled:1;
> +        uint64_t periodic:1;
> +        uint64_t lazy:1;
> +        uint64_t auto_enable:1;
> +        uint64_t vector:8;
> +        uint64_t direct_mode:1;
> +        uint64_t reserved_zero1:3;
> +        uint64_t sintx:4;
> +        uint64_t reserved_zero2:44;
> +    } fields;

Looks like I missed this. I'll send a v6 with it removed.

  Paul

> +};
> +
> +struct viridian_stimer {
> +    struct vcpu *v;
> +    struct timer timer;
> +    union viridian_stimer_config_msr config;
> +    uint64_t count;
> +    uint64_t expiration;
> +    s_time_t timeout;
> +    bool started;
> +};
> +
>  struct viridian_vcpu
>  {
>      struct viridian_page vp_assist;
> @@ -50,6 +77,9 @@ struct viridian_vcpu
>      union viridian_sint_msr sint[16];
>      uint8_t vector_to_sintx[256];
>      unsigned long msg_pending;
> +    struct viridian_stimer stimer[4];
> +    unsigned long stimer_enabled;
> +    unsigned long stimer_pending;
>      uint64_t crash_param[5];
>  };
> 
> @@ -112,7 +142,7 @@ void viridian_apic_assist_clear(const struct vcpu *v);
> 
>  bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
>                                       unsigned int vector);
> -void viridian_synic_poll_messages(const struct vcpu *v);
> +void viridian_synic_poll_messages(struct vcpu *v);
>  void viridian_synic_ack_sint(const struct vcpu *v, unsigned int vector);
> 
>  #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
> diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
> index ec3e4df12c..8344aa471f 100644
> --- a/xen/include/public/arch-x86/hvm/save.h
> +++ b/xen/include/public/arch-x86/hvm/save.h
> @@ -604,6 +604,8 @@ struct hvm_viridian_vcpu_context {
>      uint8_t  _pad[7];
>      uint64_t simp_msr;
>      uint64_t sint_msr[16];
> +    uint64_t stimer_config_msr[4];
> +    uint64_t stimer_count_msr[4];
>  };
> 
>  DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index e7e3c7c892..e06b0942d0 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -150,6 +150,10 @@
>  #define _HVMPV_synic 7
>  #define HVMPV_synic (1 << _HVMPV_synic)
> 
> +/* Enable STIMER MSRs */
> +#define _HVMPV_stimer 8
> +#define HVMPV_stimer (1 << _HVMPV_stimer)
> +
>  #define HVMPV_feature_mask \
>          (HVMPV_base_freq | \
>           HVMPV_no_freq | \
> @@ -158,7 +162,8 @@
>           HVMPV_hcall_remote_tlb_flush | \
>           HVMPV_apic_assist | \
>           HVMPV_crash_ctl | \
> -         HVMPV_synic)
> +         HVMPV_synic | \
> +         HVMPV_stimer)
> 
>  #endif
> 
> --
> 2.20.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-11 13:41 ` [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
@ 2019-03-13 13:14   ` Jan Beulich
  2019-03-13 13:25     ` Paul Durrant
  2019-03-13 16:13     ` Paul Durrant
  0 siblings, 2 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 13:14 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> @@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
>      uint8_t ReservedZBytePadding[PAGE_SIZE];
>  } HV_VP_ASSIST_PAGE;
>  
> +typedef enum HV_MESSAGE_TYPE {
> +    HvMessageTypeNone,
> +    HvMessageTimerExpired = 0x80000010,
> +} HV_MESSAGE_TYPE;
> +
> +typedef struct HV_MESSAGE_FLAGS {
> +    uint8_t MessagePending:1;
> +    uint8_t Reserved:7;
> +} HV_MESSAGE_FLAGS;
> +
> +typedef struct HV_MESSAGE_HEADER {
> +    HV_MESSAGE_TYPE MessageType;
> +    uint16_t Reserved1;
> +    HV_MESSAGE_FLAGS MessageFlags;
> +    uint8_t PayloadSize;
> +    uint64_t Reserved2;
> +} HV_MESSAGE_HEADER;
> +
> +#define HV_MESSAGE_SIZE 256
> +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30

Is this defined this way, or (given ...

> +typedef struct HV_MESSAGE {
> +    HV_MESSAGE_HEADER Header;
> +    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
> +} HV_MESSAGE;

... this) isn't it rather

#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT \
    ((HV_MESSAGE_SIZE - sizeof(HV_MESSAGE_HEADER)) / 8)

> +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> +    {
> +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
> +                                                ARRAY_SIZE(vv->sint));

While here I can see the usefulness of using the local variable (but
you're aware of the remaining issue with going this route?), ...

> +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> +    {
> +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
> +                                                ARRAY_SIZE(vv->sint));
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = vv->sint[sintx].raw;
> +        break;

... I think you would better use array_access_nospec() here, to
avoid that very risk.

> +bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
> +                                     unsigned int vector)
> +{
> +    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int idx = vv->vector_to_sintx[vector];
> +    unsigned int sintx = array_index_nospec(idx, ARRAY_SIZE(vv->sint));
> +
> +    if ( idx >= ARRAY_SIZE(vv->sint) )
> +        return false;
> +
> +    return vv->sint[sintx].auto_eoi;

Same here then.

> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -461,10 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>  
>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>  {
> +    struct vcpu *v = vlapic_vcpu(vlapic);
>      struct domain *d = vlapic_domain(vlapic);

Please use v->domain here.

> +    /* All synic SINTx vectors are edge triggered */
> +
>      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>          vioapic_update_EOI(d, vector);
> +    else if ( has_viridian_synic(v->domain) )

And please use d here.

> @@ -1301,13 +1306,23 @@ int vlapic_has_pending_irq(struct vcpu *v)
>      if ( !vlapic_enabled(vlapic) )
>          return -1;
>  
> +    /*
> +     * Poll the viridian message queues before checking the IRR since
> +     * a sythetic interrupt may be asserted during the poll.

synthetic

> @@ -1328,9 +1343,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
>           (irr & 0xf0) <= (isr & 0xf0) )
>      {
>          viridian_apic_assist_clear(v);
> -        return -1;
> +        irr = -1;
>      }
>  
> +out:
> +    if (irr == -1)

Style: label indentation and spaces inside the parentheses.

> +        vlapic->polled_synic = false;

I'm struggling to understand the purpose of this flag, and the
situation is not helped by viridian_synic_poll_messages() currently
doing nothing. It would be really nice if maintenance of a flag like
this - if needed in the first place - could be kept local to Viridian
code (but of course not at the expense of adding various new
hooks for that purpose).

> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -26,10 +26,30 @@ struct viridian_page
>      void *ptr;
>  };
>  
> +union viridian_sint_msr
> +{
> +    uint64_t raw;
> +    struct
> +    {
> +        uint64_t vector:8;
> +        uint64_t reserved_preserved1:8;
> +        uint64_t mask:1;
> +        uint64_t auto_eoi:1;
> +        uint64_t polling:1;
> +        uint64_t reserved_preserved2:45;
> +    };
> +};
> +
>  struct viridian_vcpu
>  {
>      struct viridian_page vp_assist;
>      bool apic_assist_pending;
> +    uint64_t scontrol;
> +    uint64_t siefp;
> +    struct viridian_page simp;
> +    union viridian_sint_msr sint[16];
> +    uint8_t vector_to_sintx[256];

I've been wondering about the wasted initial 16 bytes here, but looking
at the code trying to save that space would apparently complicate some
of the array accesses in undue fashion.

> +    unsigned long msg_pending;

Does this really need to be unsigned long? There are only 16 bits used
here, and bitops ought to work fine on an unsigned int. Once reduced,
the field could then fill the gap between apic_assist_pending and scontrol.

Jan




* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-13 13:14   ` Jan Beulich
@ 2019-03-13 13:25     ` Paul Durrant
  2019-03-13 14:23       ` Jan Beulich
  2019-03-13 16:13     ` Paul Durrant
  1 sibling, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 13:25 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Jan Beulich
> Sent: 13 March 2019 13:15
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim (Xen.org) <tim@xen.org>; Julien
> Grall <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Roger Pau Monne
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
> 
> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> > @@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
> >      uint8_t ReservedZBytePadding[PAGE_SIZE];
> >  } HV_VP_ASSIST_PAGE;
> >
> > +typedef enum HV_MESSAGE_TYPE {
> > +    HvMessageTypeNone,
> > +    HvMessageTimerExpired = 0x80000010,
> > +} HV_MESSAGE_TYPE;
> > +
> > +typedef struct HV_MESSAGE_FLAGS {
> > +    uint8_t MessagePending:1;
> > +    uint8_t Reserved:7;
> > +} HV_MESSAGE_FLAGS;
> > +
> > +typedef struct HV_MESSAGE_HEADER {
> > +    HV_MESSAGE_TYPE MessageType;
> > +    uint16_t Reserved1;
> > +    HV_MESSAGE_FLAGS MessageFlags;
> > +    uint8_t PayloadSize;
> > +    uint64_t Reserved2;
> > +} HV_MESSAGE_HEADER;
> > +
> > +#define HV_MESSAGE_SIZE 256
> > +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
> 
> Is this defined this way, or (given ...
> 
> > +typedef struct HV_MESSAGE {
> > +    HV_MESSAGE_HEADER Header;
> > +    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
> > +} HV_MESSAGE;
> 
> ... this) isn't it rather
> 
> #define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT \
>     ((HV_MESSAGE_SIZE - sizeof(HV_MESSAGE_HEADER)) / 8)
> 
> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> > +    {
> > +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
> > +                                                ARRAY_SIZE(vv->sint));
> 
> While here I can see the usefulness of using the local variable (but
> you're aware of the remaining issue with going this route?), ...

I guess I'm not aware. Given that using sintx cannot lead to an out-of-bounds access, what is the risk?

> 
> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> > +    {
> > +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
> > +                                                ARRAY_SIZE(vv->sint));
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        *val = vv->sint[sintx].raw;
> > +        break;
> 
> ... I think you would better use array_access_nospec() here, to
> avoid that very risk.
> 
> > +bool viridian_synic_is_auto_eoi_sint(const struct vcpu *v,
> > +                                     unsigned int vector)
> > +{
> > +    const struct viridian_vcpu *vv = v->arch.hvm.viridian;
> > +    unsigned int idx = vv->vector_to_sintx[vector];
> > +    unsigned int sintx = array_index_nospec(idx, ARRAY_SIZE(vv->sint));
> > +
> > +    if ( idx >= ARRAY_SIZE(vv->sint) )
> > +        return false;
> > +
> > +    return vv->sint[sintx].auto_eoi;
> 
> Same here then.
> 
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -461,10 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >
> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >  {
> > +    struct vcpu *v = vlapic_vcpu(vlapic);
> >      struct domain *d = vlapic_domain(vlapic);
> 
> Please use v->domain here.
> 
> > +    /* All synic SINTx vectors are edge triggered */
> > +
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >          vioapic_update_EOI(d, vector);
> > +    else if ( has_viridian_synic(v->domain) )
> 
> And please use d here.

Sorry, yes. I changed it and then apparently lost the change.

> 
> > @@ -1301,13 +1306,23 @@ int vlapic_has_pending_irq(struct vcpu *v)
> >      if ( !vlapic_enabled(vlapic) )
> >          return -1;
> >
> > +    /*
> > +     * Poll the viridian message queues before checking the IRR since
> > +     * a sythetic interrupt may be asserted during the poll.
> 
> synthetic
> 
> > @@ -1328,9 +1343,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
> >           (irr & 0xf0) <= (isr & 0xf0) )
> >      {
> >          viridian_apic_assist_clear(v);
> > -        return -1;
> > +        irr = -1;
> >      }
> >
> > +out:
> > +    if (irr == -1)
> 
> Style: label indentation and spaces inside the parentheses.

Oops.

> 
> > +        vlapic->polled_synic = false;
> 
> I'm struggling to understand the purpose of this flag, and the
> situation is not helped by viridian_synic_poll_messages() currently
> doing nothing. It would be really nice if maintenance of a flag like
> this - if needed in the first place - could be kept local to Viridian
> code (but of course not at the expense of adding various new
> hooks for that purpose).

The flag is there to stop viridian_synic_poll_messages() being called multiple times, as you requested. I can move the flag into the viridian code, but I'll have to add an extra call to unblock the poll in this case and in the ack function.

> 
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -26,10 +26,30 @@ struct viridian_page
> >      void *ptr;
> >  };
> >
> > +union viridian_sint_msr
> > +{
> > +    uint64_t raw;
> > +    struct
> > +    {
> > +        uint64_t vector:8;
> > +        uint64_t reserved_preserved1:8;
> > +        uint64_t mask:1;
> > +        uint64_t auto_eoi:1;
> > +        uint64_t polling:1;
> > +        uint64_t reserved_preserved2:45;
> > +    };
> > +};
> > +
> >  struct viridian_vcpu
> >  {
> >      struct viridian_page vp_assist;
> >      bool apic_assist_pending;
> > +    uint64_t scontrol;
> > +    uint64_t siefp;
> > +    struct viridian_page simp;
> > +    union viridian_sint_msr sint[16];
> > +    uint8_t vector_to_sintx[256];
> 
> I've been wondering about the wasted initial 16 bytes here, but looking
> at the code trying to save that space would apparently complicate some
> of the array accesses in undue fashion.
> 
> > +    unsigned long msg_pending;
> 
> Does this really need to be unsigned long? There are only 16 bits used
> here, and bitops ought to work fine on an unsigned int. Once reduced,
> the field could then fill the gap between apic_assist_pending and scontrol.

Ok.

  Paul

> 
> Jan
> 
> 


* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-11 13:41 ` [PATCH v5 10/11] viridian: add implementation of synthetic timers Paul Durrant
  2019-03-13 13:06   ` Paul Durrant
@ 2019-03-13 14:05   ` Jan Beulich
  2019-03-13 14:37     ` Paul Durrant
  1 sibling, 1 reply; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 14:05 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> +                                      unsigned int index,
> +                                      uint64_t expiration,
> +                                      uint64_t delivery)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    const union viridian_sint_msr *vs = &vv->sint[sintx];
> +    HV_MESSAGE *msg = vv->simp.ptr;
> +    struct {
> +        uint32_t TimerIndex;
> +        uint32_t Reserved;
> +        uint64_t ExpirationTime;
> +        uint64_t DeliveryTime;
> +    } payload = {
> +        .TimerIndex = index,
> +        .ExpirationTime = expiration,
> +        .DeliveryTime = delivery,
> +    };
> +
> +    if ( test_bit(sintx, &vv->msg_pending) )
> +        return false;
> +
> +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> +    msg += sintx;
> +
> +    /*
> +     * To avoid using an atomic test-and-set, and barrier before calling
> +     * vlapic_set_irq(), this function must be called in context of the
> +     * vcpu receiving the message.
> +     */
> +    ASSERT(v == current);
> +    if ( msg->Header.MessageType != HvMessageTypeNone )
> +    {
> +        msg->Header.MessageFlags.MessagePending = 1;
> +        __set_bit(sintx, &vv->msg_pending);
> +        return false;
> +    }
> +
> +    msg->Header.MessageType = HvMessageTimerExpired;
> +    msg->Header.MessageFlags.MessagePending = 0;
> +    msg->Header.PayloadSize = sizeof(payload);
> +    memcpy(msg->Payload, &payload, sizeof(payload));

Since you can't use plain assignment here, how about a
BUILD_BUG_ON(sizeof(payload) > sizeof(msg->Payload))?

> +static uint64_t time_now(struct domain *d)
> +{
> +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> +    uint32_t start, end;
> +    uint64_t tsc;
> +    uint64_t scale;
> +    uint64_t offset;

Would you mind grouping them all on one line, just like you do for start
and end?

> +    /*
> +     * If the reference TSC page is not enabled, or has been invalidated
> +     * fall back to the partition reference counter.
> +     */
> +    if ( !p || !p->TscSequence )
> +        return time_ref_count(d);
> +
> +    /*
> +     * The following sampling algorithm for tsc, scale and offset is
> +     * documented in the specification.
> +     */
> +    do {
> +        start = p->TscSequence;
> +        smp_rmb();
> +
> +        tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d));
> +        scale = p->TscScale;
> +        offset = p->TscOffset;
> +
> +        smp_rmb();
> +        end = p->TscSequence;
> +    } while (end != start);

Blanks immediately inside the parentheses please.

As to safety of this, I have two concerns:

1) TscSequence gets updated as a result of a guest action (an MSR
write). This makes it non-obvious that the loop above will get
exited in due course.

2) The way update_reference_tsc() deals with the two "invalid"
values suggests ~0 and 0 should be special cased in general. I
_think_ this is not necessary here, but it also seems to me as if
the VM ever having a way to observe either of those two values
would be wrong too. Shouldn't the function avoid to ever store
~0 into that field, i.e. increment into a local variable, update
that local variable to skip the two "invalid" values, and only then
store into the field?

Otoh, since entering that function is the result of an MSR write,
it may well be that the spec precludes the guest from reading
the reference page while an update was invoked from one of its
vCPU-s. If this was the case, then I also wouldn't have to
wonder any longer how this entire mechanism can be race free
in the first place (without a double increment like we do in the
pv-clock protocol).

> +static void start_stimer(struct viridian_stimer *vs)
> +{
> +    const struct vcpu *v = vs->v;
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int stimerx = vs - &vv->stimer[0];
> +    int64_t now = time_now(v->domain);
> +    s_time_t timeout;
> +
> +    if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) )
> +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> +               stimerx);
> +
> +    if ( vs->config.fields.periodic )
> +    {
> +        unsigned int missed = 0;
> +        int64_t next;
> +
> +        /*
> +         * The specification says that if the timer is lazy then we
> +         * skip over any missed expirations so we can treat this case
> +         * as the same as if the timer is currently stopped, i.e. we
> +         * just schedule expiration to be 'count' ticks from now.
> +         */
> +        if ( !vs->started || vs->config.fields.lazy )
> +            next = now + vs->count;
> +        else
> +        {
> +            /*
> +             * The timer is already started, so we're re-scheduling.
> +             * Hence advance the timer expiration by one tick.
> +             */
> +            next = vs->expiration + vs->count;
> +
> +            /* Now check to see if any expirations have been missed */
> +            if ( now - next > 0 )
> +                missed = (now - next) / vs->count;
> +
> +            /*
> +             * The specification says that if the timer is not lazy then
> +             * a non-zero missed count should be used to reduce the period
> +             * of the timer until it catches up, unless the count has
> +             * reached a 'significant number', in which case the timer
> +             * should be treated as lazy. Unfortunately the specification
> +             * does not state what that number is so the choice of number
> +             * here is a pure guess.
> +             */
> +            if ( missed > 3 )
> +            {
> +                missed = 0;
> +                next = now + vs->count;
> +            }
> +        }
> +
> +        vs->expiration = next;
> +        timeout = ((next - now) * 100ull) / (missed + 1);

The logic above guarantees next > now only if missed > 3. Yet
with next < now, the multiplication by 100ull will produce a huge
64-bit value, and division by a value larger than 1 will make it a
huge 62- or 63-bit value (sign bit lost). That's hardly the timeout
you mean.

> +    }
> +    else
> +    {
> +        vs->expiration = vs->count;
> +        if ( vs->count - now <= 0 )

Unless you really mean != 0 you have an issue here, due to
vs->count being uint64_t. Perhaps, taking also ...

> +        {
> +            set_bit(stimerx, &vv->stimer_pending);
> +            return;
> +        }
> +
> +        timeout = (vs->expiration - now) * 100ull;

... this, you want to calculate the difference into a local
variable of type int64_t instead, and use it in both places?

> +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    struct viridian_stimer *vs = &vv->stimer[stimerx];
> +
> +    if ( !test_bit(stimerx, &vv->stimer_pending) )
> +        return;
> +
> +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
> +                                           stimerx, vs->expiration,
> +                                           time_now(v->domain)) )
> +        return;
> +
> +    clear_bit(stimerx, &vv->stimer_pending);

While perhaps benign, wouldn't it be better to clear the pending bit
before delivering the message?

> +void viridian_time_vcpu_freeze(struct vcpu *v)
> +{
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> +    unsigned int i;
> +
> +    if ( !is_viridian_vcpu(v) )
> +        return;

Don't you also want/need to check the HVMPV_stimer bit here (and
also in the thaw counterpart)?

> @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>          }
>          break;
>  
> +    case HV_X64_MSR_TIME_REF_COUNT:
> +        return X86EMUL_EXCEPTION;
> +
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));
> +        struct viridian_stimer *vs = &vv->stimer[stimerx];

I again think you'd better use array_access_nospec() here (also
for the rdmsr counterparts).

> @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>  
>  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;

const?

> @@ -201,6 +508,37 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t 
> idx, uint64_t *val)
>          break;
>      }
>  
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    {
> +        unsigned int stimerx =
> +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> +                               ARRAY_SIZE(vv->stimer));;
> +
> +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> +            return X86EMUL_EXCEPTION;
> +
> +        *val = vv->stimer[stimerx].config.raw;
> +        break;
> +    }
> +    case HV_X64_MSR_STIMER0_COUNT:

Blank line between case blocks please.

> @@ -231,11 +590,36 @@ void viridian_time_domain_deinit(const struct domain *d)
>  void viridian_time_save_vcpu_ctxt(
>      const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
>  {
> +    struct viridian_vcpu *vv = v->arch.hvm.viridian;

const?

> +    unsigned int i;
> +
> +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> +                 ARRAY_SIZE(ctxt->stimer_config_msr));
> +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> +                 ARRAY_SIZE(ctxt->stimer_count_msr));
> +
> +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> +    {
> +        struct viridian_stimer *vs = &vv->stimer[i];

const (if you really consider the variable useful in the first place)?

> @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
>      case HV_X64_MSR_TSC_FREQUENCY:
>      case HV_X64_MSR_APIC_FREQUENCY:
>      case HV_X64_MSR_REFERENCE_TSC:
> +    case HV_X64_MSR_TIME_REF_COUNT:
> +    case HV_X64_MSR_STIMER0_CONFIG:
> +    case HV_X64_MSR_STIMER0_COUNT:
> +    case HV_X64_MSR_STIMER1_CONFIG:
> +    case HV_X64_MSR_STIMER1_COUNT:
> +    case HV_X64_MSR_STIMER2_CONFIG:
> +    case HV_X64_MSR_STIMER2_COUNT:
> +    case HV_X64_MSR_STIMER3_CONFIG:
> +    case HV_X64_MSR_STIMER3_COUNT:

For readability / brevity

    case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:

?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall
  2019-03-11 13:41 ` [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
@ 2019-03-13 14:08   ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 14:08 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Julien Grall, xen-devel, Roger Pau Monne

>>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> This patch adds an implementation of the hypercall as documented in the
> specification [1], section 10.5.2. This enlightenment, as with others, is
> advertised by CPUID leaf 0x40000004 and is under control of a new
> 'hcall_ipi' option in libxl.
> 
> If used, this enlightenment should mean the guest only takes a single VMEXIT
> to issue IPIs to multiple vCPUs rather than the multiple VMEXITs that would
> result from using the emulated local APIC.
> 
> [1] 
> https://github.com/MicrosoftDocs/Virtualization-Documentation/raw/live/tlfs/Hypervisor%20Top%20Level%20Functional%20Specification%20v5.0C.pdf
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> @@ -642,6 +646,65 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          break;
>      }
>  
> +    case HvSendSyntheticClusterIpi:
> +    {
> +        struct vcpu *v;
> +        uint32_t vector;
> +        uint64_t vcpu_mask;
> +
> +        status = HV_STATUS_INVALID_PARAMETER;
> +
> +        /* Get input parameters. */
> +        if ( input.fast )
> +        {
> +            if ( input_params_gpa >> 32 )
> +                break;
> +
> +            vector = input_params_gpa;
> +            vcpu_mask = output_params_gpa;
> +        }
> +        else
> +        {
> +            struct {
> +                uint32_t vector;
> +                uint8_t target_vtl;
> +                uint8_t reserved_zero[3];
> +                uint64_t vcpu_mask;
> +            } input_params;
> +
> +            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
> +                                          sizeof(input_params)) !=
> +                 HVMTRANS_okay )
> +                break;
> +
> +            if ( input_params.target_vtl ||
> +                 input_params.reserved_zero[0] ||
> +                 input_params.reserved_zero[1] ||
> +                 input_params.reserved_zero[2] )
> +                break;
> +
> +            vector = input_params.vector;
> +            vcpu_mask = input_params.vcpu_mask;
> +        }
> +
> +        if ( vector < 0x10 || vector > 0xff )
> +            break;

Elsewhere you've been using decimal.

Jan




* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-13 13:06   ` Paul Durrant
@ 2019-03-13 14:10     ` Jan Beulich
  0 siblings, 0 replies; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 14:10 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson, Roger Pau Monne

>>> On 13.03.19 at 14:06, <Paul.Durrant@citrix.com> wrote:
>> From: Paul Durrant [mailto:paul.durrant@citrix.com]
>> Sent: 11 March 2019 13:42
>> 
>> --- a/xen/include/asm-x86/hvm/viridian.h
>> +++ b/xen/include/asm-x86/hvm/viridian.h
>> @@ -40,6 +40,33 @@ union viridian_sint_msr
>>      };
>>  };
>> 
>> +union viridian_stimer_config_msr
>> +{
>> +    uint64_t raw;
>> +    struct
>> +    {
>> +        uint64_t enabled:1;
>> +        uint64_t periodic:1;
>> +        uint64_t lazy:1;
>> +        uint64_t auto_enable:1;
>> +        uint64_t vector:8;
>> +        uint64_t direct_mode:1;
>> +        uint64_t reserved_zero1:3;
>> +        uint64_t sintx:4;
>> +        uint64_t reserved_zero2:44;
>> +    } fields;
> 
> Looks like I missed this. I'll send a v6 with it removed.

Oh, and I didn't really notice.

But the real reason for my reply: Please trim the quoting in your
replies.

Jan




* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-13 13:25     ` Paul Durrant
@ 2019-03-13 14:23       ` Jan Beulich
  2019-03-13 14:51         ` Paul Durrant
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 14:23 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson, Roger Pau Monne

>>> On 13.03.19 at 14:25, <Paul.Durrant@citrix.com> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Jan Beulich
>> Sent: 13 March 2019 13:15
>> 
>> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
>> > +    {
>> > +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
>> > +                                                ARRAY_SIZE(vv->sint));
>> 
>> While here I can see the usefulness of using the local variable (but
>> you're aware of the remaining issue with going this route?), ...
> 
> I guess I'm not aware. Given that using sintx cannot lead to an 
> out-of-bounds access, what is the risk?

The workaround is effective only as long as the variable stays in a
register. If it gets read from memory before use, mis-speculation
is possible again from all we can tell.

>> > @@ -1328,9 +1343,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
>> >           (irr & 0xf0) <= (isr & 0xf0) )
>> >      {
>> >          viridian_apic_assist_clear(v);
>> > -        return -1;
>> > +        irr = -1;
>> >      }
>> >
>> > +out:
>> > +    if (irr == -1)
>> > +        vlapic->polled_synic = false;
>> 
>> I'm struggling to understand the purpose of this flag, and the
>> situation is no helped by viridian_synic_poll_messages() currently
>> doing nothing. It would be really nice if maintenance of a flag like
>> this - if needed in the first place - could be kept local to Viridian
>> code (but of course not at the expense of adding various new
>> hooks for that purpose).
> 
> The flag is there to stop viridian_synic_poll_messages() being called 
> multiple times as you requested. I can move the flag into the viridian code 
> but I'll have to add an extra call to unblock the poll in this case and in
> the ack function.

Well, in that case it's perhaps better to keep as is.

As to me having requested this - in an abstract way, yes, but to
be honest I couldn't have deduced that connection from the
name you've chosen. "polled_synic" reads to me like reflecting
some guest property. I admit though that I'm also having difficulty
suggesting a better alternative: "synic_polled", "synic_was_polled",
or "synic_poll_pending" come to mind.

Jan




* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-13 14:05   ` Jan Beulich
@ 2019-03-13 14:37     ` Paul Durrant
  2019-03-13 15:36       ` Jan Beulich
  2019-03-14  9:58       ` Paul Durrant
  0 siblings, 2 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 14:37 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 13 March 2019 14:06
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-
> devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org)
> <tim@xen.org>
> Subject: Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
> 
> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> > +                                      unsigned int index,
> > +                                      uint64_t expiration,
> > +                                      uint64_t delivery)
> > +{
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> > +    const union viridian_sint_msr *vs = &vv->sint[sintx];
> > +    HV_MESSAGE *msg = vv->simp.ptr;
> > +    struct {
> > +        uint32_t TimerIndex;
> > +        uint32_t Reserved;
> > +        uint64_t ExpirationTime;
> > +        uint64_t DeliveryTime;
> > +    } payload = {
> > +        .TimerIndex = index,
> > +        .ExpirationTime = expiration,
> > +        .DeliveryTime = delivery,
> > +    };
> > +
> > +    if ( test_bit(sintx, &vv->msg_pending) )
> > +        return false;
> > +
> > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> > +    msg += sintx;
> > +
> > +    /*
> > +     * To avoid using an atomic test-and-set, and barrier before calling
> > +     * vlapic_set_irq(), this function must be called in context of the
> > +     * vcpu receiving the message.
> > +     */
> > +    ASSERT(v == current);
> > +    if ( msg->Header.MessageType != HvMessageTypeNone )
> > +    {
> > +        msg->Header.MessageFlags.MessagePending = 1;
> > +        __set_bit(sintx, &vv->msg_pending);
> > +        return false;
> > +    }
> > +
> > +    msg->Header.MessageType = HvMessageTimerExpired;
> > +    msg->Header.MessageFlags.MessagePending = 0;
> > +    msg->Header.PayloadSize = sizeof(payload);
> > +    memcpy(msg->Payload, &payload, sizeof(payload));
> 
> Since you can't use plain assignment here, how about a
> BUILD_BUG_ON(sizeof(payload) <= sizeof(msg->payload))?

Surely '>' rather than '<='?

> 
> > +static uint64_t time_now(struct domain *d)
> > +{
> > +    const struct viridian_page *rt = &d->arch.hvm.viridian->reference_tsc;
> > +    HV_REFERENCE_TSC_PAGE *p = rt->ptr;
> > +    uint32_t start, end;
> > +    uint64_t tsc;
> > +    uint64_t scale;
> > +    uint64_t offset;
> 
> Would you mind grouping them all on one line, just as you do for start
> and end?

Sure.

> 
> > +    /*
> > +     * If the reference TSC page is not enabled, or has been invalidated
> > +     * fall back to the partition reference counter.
> > +     */
> > +    if ( !p || !p->TscSequence )
> > +        return time_ref_count(d);
> > +
> > +    /*
> > +     * The following sampling algorithm for tsc, scale and offset is
> > +     * documented in the specification.
> > +     */
> > +    do {
> > +        start = p->TscSequence;
> > +        smp_rmb();
> > +
> > +        tsc = hvm_get_guest_tsc(pt_global_vcpu_target(d));
> > +        scale = p->TscScale;
> > +        offset = p->TscOffset;
> > +
> > +        smp_rmb();
> > +        end = p->TscSequence;
> > +    } while (end != start);
> 
> Blanks immediately inside the parentheses please.
> 

Yes, missed that.

> As to safety of this, I have two concerns:
> 
> 1) TscSequence gets updated as a result of a guest action (an MSR
> write). This makes it non-obvious that the loop above will get
> exited in due course.
> 

True. The domain could try to DoS this call. This could be avoided by doing a domain_pause() if the test continuously fails for a number of iterations, or maybe just one iteration.

> 2) The way update_reference_tsc() deals with the two "invalid"
> values suggests ~0 and 0 should be special cased in general. I
> _think_ this is not necessary here, but it also seems to me as if
> the VM ever having a way to observe either of those two values
> would be wrong too. Shouldn't the function avoid ever storing
> ~0 into that field, i.e. increment into a local variable, update
> that local variable to skip the two "invalid" values, and only then
> store into the field?
> 
> Otoh, since entering that function is the result of an MSR write,
> it may well be that the spec precludes the guest from reading
> the reference page while an update was invoked from one of its
> vCPU-s. If this was the case, then I also wouldn't have to
> wonder any longer how this entire mechanism can be race free
> in the first place (without a double increment like we do in the
> pv-clock protocol).

From observation, it looks like Windows initializes the reference tsc page before it brings secondary CPUs online and then doesn't touch the MSR again, so we should probably only tolerate one mismatch in time_now() before doing domain_pause().
I also agree that the 'invalid' values should not be exposed to the guest, even though the guest is highly unlikely ever to observe one in practice.

> 
> > +static void start_stimer(struct viridian_stimer *vs)
> > +{
> > +    const struct vcpu *v = vs->v;
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> > +    unsigned int stimerx = vs - &vv->stimer[0];
> > +    int64_t now = time_now(v->domain);
> > +    s_time_t timeout;
> > +
> > +    if ( !test_and_set_bit(stimerx, &vv->stimer_enabled) )
> > +        printk(XENLOG_G_INFO "%pv: VIRIDIAN STIMER%u: enabled\n", v,
> > +               stimerx);
> > +
> > +    if ( vs->config.fields.periodic )
> > +    {
> > +        unsigned int missed = 0;
> > +        int64_t next;
> > +
> > +        /*
> > +         * The specification says that if the timer is lazy then we
> > +         * skip over any missed expirations so we can treat this case
> > +         * as the same as if the timer is currently stopped, i.e. we
> > +         * just schedule expiration to be 'count' ticks from now.
> > +         */
> > +        if ( !vs->started || vs->config.fields.lazy )
> > +            next = now + vs->count;
> > +        else
> > +        {
> > +            /*
> > +             * The timer is already started, so we're re-scheduling.
> > +             * Hence advance the timer expiration by one tick.
> > +             */
> > +            next = vs->expiration + vs->count;
> > +
> > +            /* Now check to see if any expirations have been missed */
> > +            if ( now - next > 0 )
> > +                missed = (now - next) / vs->count;
> > +
> > +            /*
> > +             * The specification says that if the timer is not lazy then
> > +             * a non-zero missed count should be used to reduce the period
> > +             * of the timer until it catches up, unless the count has
> > +             * reached a 'significant number', in which case the timer
> > +             * should be treated as lazy. Unfortunately the specification
> > +             * does not state what that number is so the choice of number
> > +             * here is a pure guess.
> > +             */
> > +            if ( missed > 3 )
> > +            {
> > +                missed = 0;
> > +                next = now + vs->count;
> > +            }
> > +        }
> > +
> > +        vs->expiration = next;
> > +        timeout = ((next - now) * 100ull) / (missed + 1);
> 
> The logic above guarantees next > now only if missed > 3. Yet
> with next < now, the multiplication by 100ull will produce a huge
> 64-bit value, and division by a value larger than 1 will make it a
> huge 62- or 63-bit value (sign bit lost). That's hardly the timeout
> you mean.

Indeed, I'll need to re-think that.

> 
> > +    }
> > +    else
> > +    {
> > +        vs->expiration = vs->count;
> > +        if ( vs->count - now <= 0 )
> 
> Unless you really mean != 0 you have an issue here, due to
> vs->count being uint64_t. Perhaps, taking also ...
> 
> > +        {
> > +            set_bit(stimerx, &vv->stimer_pending);
> > +            return;
> > +        }
> > +
> > +        timeout = (vs->expiration - now) * 100ull;
> 
> ... this, you want to calculate the difference into a local
> variable of type int64_t instead, and use it in both places?

Yes, this was broken when expiration and now became unsigned.

> 
> > +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
> > +{
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> > +    struct viridian_stimer *vs = &vv->stimer[stimerx];
> > +
> > +    if ( !test_bit(stimerx, &vv->stimer_pending) )
> > +        return;
> > +
> > +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
> > +                                           stimerx, vs->expiration,
> > +                                           time_now(v->domain)) )
> > +        return;
> > +
> > +    clear_bit(stimerx, &vv->stimer_pending);
> 
> While perhaps benign, wouldn't it be better to clear the pending bit
> before delivering the message?

No, because I only want to clear it if the delivery is successful.

> 
> > +void viridian_time_vcpu_freeze(struct vcpu *v)
> > +{
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> > +    unsigned int i;
> > +
> > +    if ( !is_viridian_vcpu(v) )
> > +        return;
> 
> Don't you also want/need to check the HVMPV_stimer bit here (and
> also in the thaw counterpart)?

Yes, I probably do.

> 
> > @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >          }
> >          break;
> >
> > +    case HV_X64_MSR_TIME_REF_COUNT:
> > +        return X86EMUL_EXCEPTION;
> > +
> > +    case HV_X64_MSR_STIMER0_CONFIG:
> > +    case HV_X64_MSR_STIMER1_CONFIG:
> > +    case HV_X64_MSR_STIMER2_CONFIG:
> > +    case HV_X64_MSR_STIMER3_CONFIG:
> > +    {
> > +        unsigned int stimerx =
> > +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> > +                               ARRAY_SIZE(vv->stimer));
> > +        struct viridian_stimer *vs = &vv->stimer[stimerx];
> 
> I again think you'd better use array_access_nospec() here (also
> for the rdmsr counterparts).

I don't follow. I *am* using array_index_nospec().

> 
> > @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >
> >  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >  {
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> 
> const?
> 

I don't think so. A read of the reference TSC MSR updates a flag.

> > @@ -201,6 +508,37 @@ int viridian_time_rdmsr(const struct vcpu *v, uint32_t
> > idx, uint64_t *val)
> >          break;
> >      }
> >
> > +    case HV_X64_MSR_STIMER0_CONFIG:
> > +    case HV_X64_MSR_STIMER1_CONFIG:
> > +    case HV_X64_MSR_STIMER2_CONFIG:
> > +    case HV_X64_MSR_STIMER3_CONFIG:
> > +    {
> > +        unsigned int stimerx =
> > +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> > +                               ARRAY_SIZE(vv->stimer));;
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_stimer) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        *val = vv->stimer[stimerx].config.raw;
> > +        break;
> > +    }
> > +    case HV_X64_MSR_STIMER0_COUNT:
> 
> Blank line between case blocks please.

Ok.

> 
> > @@ -231,11 +590,36 @@ void viridian_time_domain_deinit(const struct domain *d)
> >  void viridian_time_save_vcpu_ctxt(
> >      const struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
> >  {
> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> 
> const?

Possibly.

> 
> > +    unsigned int i;
> > +
> > +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> > +                 ARRAY_SIZE(ctxt->stimer_config_msr));
> > +    BUILD_BUG_ON(ARRAY_SIZE(vv->stimer) !=
> > +                 ARRAY_SIZE(ctxt->stimer_count_msr));
> > +
> > +    for ( i = 0; i < ARRAY_SIZE(vv->stimer); i++ )
> > +    {
> > +        struct viridian_stimer *vs = &vv->stimer[i];
> 
> const (if you really consider the variable useful in the first place)?
> 

Ok.

> > @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
> >      case HV_X64_MSR_TSC_FREQUENCY:
> >      case HV_X64_MSR_APIC_FREQUENCY:
> >      case HV_X64_MSR_REFERENCE_TSC:
> > +    case HV_X64_MSR_TIME_REF_COUNT:
> > +    case HV_X64_MSR_STIMER0_CONFIG:
> > +    case HV_X64_MSR_STIMER0_COUNT:
> > +    case HV_X64_MSR_STIMER1_CONFIG:
> > +    case HV_X64_MSR_STIMER1_COUNT:
> > +    case HV_X64_MSR_STIMER2_CONFIG:
> > +    case HV_X64_MSR_STIMER2_COUNT:
> > +    case HV_X64_MSR_STIMER3_CONFIG:
> > +    case HV_X64_MSR_STIMER3_COUNT:
> 
> For readability / brevity
> 
>     case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:
> 
> ?

Certainly brevity, but I'm not sure about readability. I'll make the change.

  Paul

> 
> Jan



* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-13 14:23       ` Jan Beulich
@ 2019-03-13 14:51         ` Paul Durrant
  0 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 14:51 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 13 March 2019 14:23
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
> 
> >>> On 13.03.19 at 14:25, <Paul.Durrant@citrix.com> wrote:
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Jan Beulich
> >> Sent: 13 March 2019 13:15
> >>
> >> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
> >> > +    {
> >> > +        unsigned int sintx = array_index_nospec(idx - HV_X64_MSR_SINT0,
> >> > +                                                ARRAY_SIZE(vv->sint));
> >>
> >> While here I can see the usefulness of using the local variable (but
> >> you're aware of the remaining issue with going this route?), ...
> >
> > I guess I'm not aware. Given that using sintx cannot lead to an
> > out-of-bounds access, what is the risk?
> 
> The workaround is effective only as long as the variable stays in a
> register. If it gets read from memory before use, mis-speculation
> is possible again from all we can tell.

Ah, ok. So, having talked to Andrew it would seem better to immediately calculate the pointer into the array and use that. I can do that here. In other cases, as long as the stack variable is only used once, that would seem the better way to construct the code.

> 
> >> > @@ -1328,9 +1343,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
> >> >           (irr & 0xf0) <= (isr & 0xf0) )
> >> >      {
> >> >          viridian_apic_assist_clear(v);
> >> > -        return -1;
> >> > +        irr = -1;
> >> >      }
> >> >
> >> > +out:
> >> > +    if (irr == -1)
> >> > +        vlapic->polled_synic = false;
> >>
> >> I'm struggling to understand the purpose of this flag, and the
> >> situation is no helped by viridian_synic_poll_messages() currently
> >> doing nothing. It would be really nice if maintenance of a flag like
> >> this - if needed in the first place - could be kept local to Viridian
> >> code (but of course not at the expense of adding various new
> >> hooks for that purpose).
> >
> > The flag is there to stop viridian_synic_poll_messages() being called
> > multiple times as you requested. I can move the flag into the viridian code
> > but I'll have to add an extra call to unblock the poll in this case and in
> > the ack function.
> 
> Well, in that case it's perhaps better to keep as is.
> 
> As to me having requested this - in an abstract way, yes, but to
> be honest I couldn't have deduced that connection from the
> name you've chosen. "polled_synic" reads to me like reflecting
> some guest property. I admit though that I'm also having difficulty
> suggesting a better alternative: "synic_polled", "synic_was_polled",
> or "synic_poll_pending" come to mind.
> 

Given the confusion, I think keeping the flag within the viridian code may well be better.

  Paul

> Jan
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-13 14:37     ` Paul Durrant
@ 2019-03-13 15:36       ` Jan Beulich
  2019-03-13 15:43         ` Paul Durrant
  2019-03-14  9:58       ` Paul Durrant
  1 sibling, 1 reply; 32+ messages in thread
From: Jan Beulich @ 2019-03-13 15:36 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson, Roger Pau Monne

>>> On 13.03.19 at 15:37, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 13 March 2019 14:06
>> 
>> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
>> > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
>> > +                                      unsigned int index,
>> > +                                      uint64_t expiration,
>> > +                                      uint64_t delivery)
>> > +{
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> > +    const union viridian_sint_msr *vs = &vv->sint[sintx];
>> > +    HV_MESSAGE *msg = vv->simp.ptr;
>> > +    struct {
>> > +        uint32_t TimerIndex;
>> > +        uint32_t Reserved;
>> > +        uint64_t ExpirationTime;
>> > +        uint64_t DeliveryTime;
>> > +    } payload = {
>> > +        .TimerIndex = index,
>> > +        .ExpirationTime = expiration,
>> > +        .DeliveryTime = delivery,
>> > +    };
>> > +
>> > +    if ( test_bit(sintx, &vv->msg_pending) )
>> > +        return false;
>> > +
>> > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
>> > +    msg += sintx;
>> > +
>> > +    /*
>> > +     * To avoid using an atomic test-and-set, and barrier before calling
>> > +     * vlapic_set_irq(), this function must be called in context of the
>> > +     * vcpu receiving the message.
>> > +     */
>> > +    ASSERT(v == current);
>> > +    if ( msg->Header.MessageType != HvMessageTypeNone )
>> > +    {
>> > +        msg->Header.MessageFlags.MessagePending = 1;
>> > +        __set_bit(sintx, &vv->msg_pending);
>> > +        return false;
>> > +    }
>> > +
>> > +    msg->Header.MessageType = HvMessageTimerExpired;
>> > +    msg->Header.MessageFlags.MessagePending = 0;
>> > +    msg->Header.PayloadSize = sizeof(payload);
>> > +    memcpy(msg->Payload, &payload, sizeof(payload));
>> 
>> Since you can't use plain assignment here, how about a
>> BUILD_BUG_ON(sizeof(payload) <= sizeof(msg->payload))?
> 
> Surely '>' rather than '<='?

Oops, yes - I was apparently thinking the ASSERT() way.

>> As to safety of this, I have two concerns:
>> 
>> 1) TscSequence gets updated as a result of a guest action (an MSR
>> write). This makes it non-obvious that the loop above will get
>> exited in due course.
>> 
> 
> True. The domain could try to DoS this call. This could be avoided by doing 
> a domain_pause() if the test continuously fails for a number of iterations, or
> maybe just one iteration.

As per what you say further down, one iteration ought to be enough
indeed. Otherwise I would have suggested a handful.

>> > +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
>> > +{
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> > +    struct viridian_stimer *vs = &vv->stimer[stimerx];
>> > +
>> > +    if ( !test_bit(stimerx, &vv->stimer_pending) )
>> > +        return;
>> > +
>> > +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
>> > +                                           stimerx, vs->expiration,
>> > +                                           time_now(v->domain)) )
>> > +        return;
>> > +
>> > +    clear_bit(stimerx, &vv->stimer_pending);
>> 
>> While perhaps benign, wouldn't it be better to clear the pending bit
>> before delivering the message?
> 
> No, because I only want to clear it if the delivery is successful.

Ah, I see.

>> > @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>> >          }
>> >          break;
>> >
>> > +    case HV_X64_MSR_TIME_REF_COUNT:
>> > +        return X86EMUL_EXCEPTION;
>> > +
>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    {
>> > +        unsigned int stimerx =
>> > +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
>> > +                               ARRAY_SIZE(vv->stimer));
>> > +        struct viridian_stimer *vs = &vv->stimer[stimerx];
>> 
>> I again think you'd better use array_access_nospec() here (also
>> for the rdmsr counterparts).
> 
> I don't follow. I *am* using array_index_nospec().

But "index" != "access".

>> > @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
>> >
>> >  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
>> >  {
>> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
>> 
>> const?
>> 
> 
> I don't think so. A read of the reference TSC MSR updates a flag.

But you don't make any existing code use vv in this patch. And
the new code you add doesn't look to require it to be non-const.
I can see why vd (introduced by an earlier patch in the series)
can't be constified for the reason you name.

>> > @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
>> >      case HV_X64_MSR_TSC_FREQUENCY:
>> >      case HV_X64_MSR_APIC_FREQUENCY:
>> >      case HV_X64_MSR_REFERENCE_TSC:
>> > +    case HV_X64_MSR_TIME_REF_COUNT:
>> > +    case HV_X64_MSR_STIMER0_CONFIG:
>> > +    case HV_X64_MSR_STIMER0_COUNT:
>> > +    case HV_X64_MSR_STIMER1_CONFIG:
>> > +    case HV_X64_MSR_STIMER1_COUNT:
>> > +    case HV_X64_MSR_STIMER2_CONFIG:
>> > +    case HV_X64_MSR_STIMER2_COUNT:
>> > +    case HV_X64_MSR_STIMER3_CONFIG:
>> > +    case HV_X64_MSR_STIMER3_COUNT:
>> 
>> For readability / brevity
>> 
>>     case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:
>> 
>> ?
> 
> Certainly brevity, but I'm not sure about readability. I'll make the change.

Well, you're the maintainer, so I don't want to talk you into
something you're really opposed to.

Jan



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-13 15:36       ` Jan Beulich
@ 2019-03-13 15:43         ` Paul Durrant
  0 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 15:43 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 13 March 2019 15:37
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v5 10/11] viridian: add implementation of synthetic timers
> 
> >>> On 13.03.19 at 15:37, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 13 March 2019 14:06
> >>
> >> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> >> > +bool viridian_synic_deliver_timer_msg(struct vcpu *v, unsigned int sintx,
> >> > +                                      unsigned int index,
> >> > +                                      uint64_t expiration,
> >> > +                                      uint64_t delivery)
> >> > +{
> >> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> >> > +    const union viridian_sint_msr *vs = &vv->sint[sintx];
> >> > +    HV_MESSAGE *msg = vv->simp.ptr;
> >> > +    struct {
> >> > +        uint32_t TimerIndex;
> >> > +        uint32_t Reserved;
> >> > +        uint64_t ExpirationTime;
> >> > +        uint64_t DeliveryTime;
> >> > +    } payload = {
> >> > +        .TimerIndex = index,
> >> > +        .ExpirationTime = expiration,
> >> > +        .DeliveryTime = delivery,
> >> > +    };
> >> > +
> >> > +    if ( test_bit(sintx, &vv->msg_pending) )
> >> > +        return false;
> >> > +
> >> > +    BUILD_BUG_ON(sizeof(*msg) != HV_MESSAGE_SIZE);
> >> > +    msg += sintx;
> >> > +
> >> > +    /*
> >> > +     * To avoid using an atomic test-and-set, and barrier before calling
> >> > +     * vlapic_set_irq(), this function must be called in context of the
> >> > +     * vcpu receiving the message.
> >> > +     */
> >> > +    ASSERT(v == current);
> >> > +    if ( msg->Header.MessageType != HvMessageTypeNone )
> >> > +    {
> >> > +        msg->Header.MessageFlags.MessagePending = 1;
> >> > +        __set_bit(sintx, &vv->msg_pending);
> >> > +        return false;
> >> > +    }
> >> > +
> >> > +    msg->Header.MessageType = HvMessageTimerExpired;
> >> > +    msg->Header.MessageFlags.MessagePending = 0;
> >> > +    msg->Header.PayloadSize = sizeof(payload);
> >> > +    memcpy(msg->Payload, &payload, sizeof(payload));
> >>
> >> Since you can't use plain assignment here, how about a
> >> BUILD_BUG_ON(sizeof(payload) <= sizeof(msg->payload))?
> >
> > Surely '>' rather than '<='?
> 
> Oops, yes - I was apparently thinking the ASSERT() way.
> 
> >> As to safety of this, I have two concerns:
> >>
> >> 1) TscSequence gets updated as a result of a guest action (an MSR
> >> write). This makes it non-obvious that the loop above will get
> >> exited in due course.
> >>
> >
> > True. The domain could try to DoS this call. This could be avoided by doing
> > a domain_pause() if the test continuously fails for a number of iterations, or
> > maybe just one iteration.
> 
> As per what you say further down, one iteration ought to be enough
> indeed. Otherwise I would have suggested a handful.
> 
> >> > +static void poll_stimer(struct vcpu *v, unsigned int stimerx)
> >> > +{
> >> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> >> > +    struct viridian_stimer *vs = &vv->stimer[stimerx];
> >> > +
> >> > +    if ( !test_bit(stimerx, &vv->stimer_pending) )
> >> > +        return;
> >> > +
> >> > +    if ( !viridian_synic_deliver_timer_msg(v, vs->config.fields.sintx,
> >> > +                                           stimerx, vs->expiration,
> >> > +                                           time_now(v->domain)) )
> >> > +        return;
> >> > +
> >> > +    clear_bit(stimerx, &vv->stimer_pending);
> >>
> >> While perhaps benign, wouldn't it be better to clear the pending bit
> >> before delivering the message?
> >
> > No, because I only want to clear it if the delivery is successful.
> 
> Ah, I see.
> 
> >> > @@ -149,6 +398,63 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >> >          }
> >> >          break;
> >> >
> >> > +    case HV_X64_MSR_TIME_REF_COUNT:
> >> > +        return X86EMUL_EXCEPTION;
> >> > +
> >> > +    case HV_X64_MSR_STIMER0_CONFIG:
> >> > +    case HV_X64_MSR_STIMER1_CONFIG:
> >> > +    case HV_X64_MSR_STIMER2_CONFIG:
> >> > +    case HV_X64_MSR_STIMER3_CONFIG:
> >> > +    {
> >> > +        unsigned int stimerx =
> >> > +            array_index_nospec((idx - HV_X64_MSR_STIMER0_CONFIG) / 2,
> >> > +                               ARRAY_SIZE(vv->stimer));
> >> > +        struct viridian_stimer *vs = &vv->stimer[stimerx];
> >>
> >> I again think you'd better use array_access_nospec() here (also
> >> for the rdmsr counterparts).
> >
> > I don't follow. I *am* using array_index_nospec().
> 
> But "index" != "access".

Ah, I was blinkered by the 'nospec'... yes, I'll use that rather than rolling my own.

> 
> >> > @@ -160,6 +466,7 @@ int viridian_time_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >> >
> >> >  int viridian_time_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >> >  {
> >> > +    struct viridian_vcpu *vv = v->arch.hvm.viridian;
> >>
> >> const?
> >>
> >
> > I don't think so. A read of the reference TSC MSR updates a flag.
> 
> But you don't make any existing code use vv in this patch. And
> the new code you add doesn't look to require it to be non-const.
> I can see why vd (introduced by an earlier patch in the series)
> can't be constified for the reason you name.

Ah, true, I was thinking of vd. Although I'm sure when I tried to const vv I got a compiler error. I'll try again... something else might have changed.

> 
> >> > @@ -322,6 +324,15 @@ int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
> >> >      case HV_X64_MSR_TSC_FREQUENCY:
> >> >      case HV_X64_MSR_APIC_FREQUENCY:
> >> >      case HV_X64_MSR_REFERENCE_TSC:
> >> > +    case HV_X64_MSR_TIME_REF_COUNT:
> >> > +    case HV_X64_MSR_STIMER0_CONFIG:
> >> > +    case HV_X64_MSR_STIMER0_COUNT:
> >> > +    case HV_X64_MSR_STIMER1_CONFIG:
> >> > +    case HV_X64_MSR_STIMER1_COUNT:
> >> > +    case HV_X64_MSR_STIMER2_CONFIG:
> >> > +    case HV_X64_MSR_STIMER2_COUNT:
> >> > +    case HV_X64_MSR_STIMER3_CONFIG:
> >> > +    case HV_X64_MSR_STIMER3_COUNT:
> >>
> >> For readability / brevity
> >>
> >>     case HV_X64_MSR_STIMER0_CONFIG ... HV_X64_MSR_STIMER3_COUNT:
> >>
> >> ?
> >
> > Certainly brevity, but I'm not sure about readability. I'll make the change.
> 
> Well, you're the maintainer, so I don't want to talk you into
> something you're really opposed to.
> 

I'm not that bothered so I'll make the change.

  Paul

> Jan



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-13 13:14   ` Jan Beulich
  2019-03-13 13:25     ` Paul Durrant
@ 2019-03-13 16:13     ` Paul Durrant
  2019-03-14  7:47       ` Jan Beulich
  1 sibling, 1 reply; 32+ messages in thread
From: Paul Durrant @ 2019-03-13 16:13 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Jan Beulich
> Sent: 13 March 2019 13:15
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim (Xen.org) <tim@xen.org>; Julien
> Grall <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Roger Pau Monne
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
> 
> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> > @@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
> >      uint8_t ReservedZBytePadding[PAGE_SIZE];
> >  } HV_VP_ASSIST_PAGE;
> >
> > +typedef enum HV_MESSAGE_TYPE {
> > +    HvMessageTypeNone,
> > +    HvMessageTimerExpired = 0x80000010,
> > +} HV_MESSAGE_TYPE;
> > +
> > +typedef struct HV_MESSAGE_FLAGS {
> > +    uint8_t MessagePending:1;
> > +    uint8_t Reserved:7;
> > +} HV_MESSAGE_FLAGS;
> > +
> > +typedef struct HV_MESSAGE_HEADER {
> > +    HV_MESSAGE_TYPE MessageType;
> > +    uint16_t Reserved1;
> > +    HV_MESSAGE_FLAGS MessageFlags;
> > +    uint8_t PayloadSize;
> > +    uint64_t Reserved2;
> > +} HV_MESSAGE_HEADER;
> > +
> > +#define HV_MESSAGE_SIZE 256
> > +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
> 

Missed this one before...

> Is this defined this way, or (given ...
> 
> > +typedef struct HV_MESSAGE {
> > +    HV_MESSAGE_HEADER Header;
> > +    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
> > +} HV_MESSAGE;
> 
> ... this) isn't it rather
> 
> #define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT \
>     ((HV_MESSAGE_SIZE - sizeof(HV_MESSAGE_HEADER)) / 8)
> 

I need the definition for the array in the struct so that sizeof(HV_MESSAGE) == HV_MESSAGE_SIZE (for which there is a BUILD_BUG_ON() later). It's also written that way in the spec, so I'd rather leave it as-is.

  Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-13 16:13     ` Paul Durrant
@ 2019-03-14  7:47       ` Jan Beulich
  2019-03-14  8:46         ` Paul Durrant
  0 siblings, 1 reply; 32+ messages in thread
From: Jan Beulich @ 2019-03-14  7:47 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim Deegan, george.dunlap, Julien Grall,
	xen-devel, Ian Jackson, Roger Pau Monne

>>> On 13.03.19 at 17:13, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of 
> Jan Beulich
>> Sent: 13 March 2019 13:15
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; 
> Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew 
> Cooper
>> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim (Xen.org) 
> <tim@xen.org>; Julien
>> Grall <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Roger 
> Pau Monne
>> <roger.pau@citrix.com>
>> Subject: Re: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of 
> synthetic interrupt MSRs
>> 
>> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
>> > @@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
>> >      uint8_t ReservedZBytePadding[PAGE_SIZE];
>> >  } HV_VP_ASSIST_PAGE;
>> >
>> > +typedef enum HV_MESSAGE_TYPE {
>> > +    HvMessageTypeNone,
>> > +    HvMessageTimerExpired = 0x80000010,
>> > +} HV_MESSAGE_TYPE;
>> > +
>> > +typedef struct HV_MESSAGE_FLAGS {
>> > +    uint8_t MessagePending:1;
>> > +    uint8_t Reserved:7;
>> > +} HV_MESSAGE_FLAGS;
>> > +
>> > +typedef struct HV_MESSAGE_HEADER {
>> > +    HV_MESSAGE_TYPE MessageType;
>> > +    uint16_t Reserved1;
>> > +    HV_MESSAGE_FLAGS MessageFlags;
>> > +    uint8_t PayloadSize;
>> > +    uint64_t Reserved2;
>> > +} HV_MESSAGE_HEADER;
>> > +
>> > +#define HV_MESSAGE_SIZE 256
>> > +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
>> 
> 
> Missed this one before...
> 
>> Is this defined this way, or (given ...
>> 
>> > +typedef struct HV_MESSAGE {
>> > +    HV_MESSAGE_HEADER Header;
>> > +    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
>> > +} HV_MESSAGE;
>> 
>> ... this) isn't it rather
>> 
>> #define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT \
>>     ((HV_MESSAGE_SIZE - sizeof(HV_MESSAGE_HEADER)) / 8)
>> 
> 
> I need the definition for the array in the struct so that sizeof(HV_MESSAGE) 
> == HV_MESSAGE_SIZE (for which there is a BUILD_BUG_ON() later).

I don't understand this part - I'm not asking to ditch the #define.
As to the BUILD_BUG_ON() - I see now, but that's only in patch
10, and in a specific message handler. I think this would belong
here, and in the main viridian.c file.

> It's also written that way in the spec. so I'd rather leave it as-is.

Well, okay then - I was sort of expecting this to be spelled out
there in such a way. That doesn't change my overall opinion,
but I can see your point of wanting to match the spec, and you
being the maintainer I have no basis to insist anyway.

Jan




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
  2019-03-14  7:47       ` Jan Beulich
@ 2019-03-14  8:46         ` Paul Durrant
  0 siblings, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-14  8:46 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, xen-devel, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 14 March 2019 07:48
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Julien Grall <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> xen-devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim
> (Xen.org) <tim@xen.org>
> Subject: RE: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs
> 
> >>> On 13.03.19 at 17:13, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of
> > Jan Beulich
> >> Sent: 13 March 2019 13:15
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>;
> > Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew
> > Cooper
> >> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Tim (Xen.org)
> > <tim@xen.org>; Julien
> >> Grall <julien.grall@arm.com>; xen-devel <xen-devel@lists.xenproject.org>; Roger
> > Pau Monne
> >> <roger.pau@citrix.com>
> >> Subject: Re: [Xen-devel] [PATCH v5 09/11] viridian: add implementation of
> > synthetic interrupt MSRs
> >>
> >> >>> On 11.03.19 at 14:41, <paul.durrant@citrix.com> wrote:
> >> > @@ -28,6 +29,32 @@ typedef union _HV_VP_ASSIST_PAGE
> >> >      uint8_t ReservedZBytePadding[PAGE_SIZE];
> >> >  } HV_VP_ASSIST_PAGE;
> >> >
> >> > +typedef enum HV_MESSAGE_TYPE {
> >> > +    HvMessageTypeNone,
> >> > +    HvMessageTimerExpired = 0x80000010,
> >> > +} HV_MESSAGE_TYPE;
> >> > +
> >> > +typedef struct HV_MESSAGE_FLAGS {
> >> > +    uint8_t MessagePending:1;
> >> > +    uint8_t Reserved:7;
> >> > +} HV_MESSAGE_FLAGS;
> >> > +
> >> > +typedef struct HV_MESSAGE_HEADER {
> >> > +    HV_MESSAGE_TYPE MessageType;
> >> > +    uint16_t Reserved1;
> >> > +    HV_MESSAGE_FLAGS MessageFlags;
> >> > +    uint8_t PayloadSize;
> >> > +    uint64_t Reserved2;
> >> > +} HV_MESSAGE_HEADER;
> >> > +
> >> > +#define HV_MESSAGE_SIZE 256
> >> > +#define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT 30
> >>
> >
> > Missed this one before...
> >
> >> Is this defined this way, or (given ...
> >>
> >> > +typedef struct HV_MESSAGE {
> >> > +    HV_MESSAGE_HEADER Header;
> >> > +    uint64_t Payload[HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT];
> >> > +} HV_MESSAGE;
> >>
> >> ... this) isn't it rather
> >>
> >> #define HV_MESSAGE_MAX_PAYLOAD_QWORD_COUNT \
> >>     ((HV_MESSAGE_SIZE - sizeof(HV_MESSAGE_HEADER)) / 8)
> >>
> >
> > I need the definition for the array in the struct so that sizeof(HV_MESSAGE)
> > == HV_MESSAGE_SIZE (for which there is a BUILD_BUG_ON()) later.
> 
> I don't understand this part - I'm not asking to ditch the #define.
> As to the BUILD_BUG_ON() - I see now, but that's only in patch
> 10, and in a specific message handler. I think this would belong
> here, and in the main viridian.c file.

Ok, I can see that it is more logical to co-locate the BUILD_BUG_ON() with the definitions.

  Paul

> 
> > It's also written that way in the spec. so I'd rather leave it as-is.
> 
> Well, okay then - I was sort of expecting this to be spelled out
> there in such a way. That doesn't change my overall opinion,
> but I can see your point of wanting to match the spec, and you
> being the maintainer I have no basis to insist anyway.
> 
> Jan
> 



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 10/11] viridian: add implementation of synthetic timers
  2019-03-13 14:37     ` Paul Durrant
  2019-03-13 15:36       ` Jan Beulich
@ 2019-03-14  9:58       ` Paul Durrant
  1 sibling, 0 replies; 32+ messages in thread
From: Paul Durrant @ 2019-03-14  9:58 UTC (permalink / raw)
  To: Paul Durrant, 'Jan Beulich'
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Ian Jackson, xen-devel,
	Roger Pau Monne

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf Of Paul Durrant
> Sent: 13 March 2019 14:37
> To: 'Jan Beulich' <JBeulich@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wei.liu2@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Tim (Xen.org) <tim@xen.org>;
> George Dunlap <George.Dunlap@citrix.com>; Julien Grall <julien.grall@arm.com>; xen-devel <xen-
> devel@lists.xenproject.org>; Ian Jackson <Ian.Jackson@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v5 10/11] viridian: add implementation of synthetic timers
> 
[snip]
> 
> > As to safety of this, I have two concerns:
> >
> > 1) TscSequence gets updated as a result of a guest action (an MSR
> > write). This makes it non-obvious that the loop above will get
> > exited in due course.
> >
> 
> True. The domain could try to DoS this call. This could be avoided by doing a domain_pause() if the
> test continuously fails for a number of iterations, or maybe just one iteration.
> 
> > 2) The way update_reference_tsc() deals with the two "invalid"
> > values suggests ~0 and 0 should be special cased in general. I
> > _think_ this is not necessary here, but it also seems to me as if
> > the VM ever having a way to observe either of those two values
> > would be wrong too. Shouldn't the function avoid to ever store
> > ~0 into that field, i.e. increment into a local variable, update
> > that local variable to skip the two "invalid" values, and only then
> > store into the field?
> >
> > Otoh, making it into that function being a result of an MSR write,
> > it may welll be that the spec precludes the guest from reading
> > the reference page while an update was invoked from one of its
> > vCPU-s. If this was the case, then I also wouldn't have to
> > wonder any longer how this entire mechanism can be race free
> > in the first place (without a double increment like we do in the
> > pv-clock protocol).
> 
> From observation, it looks like Windows initializes the reference tsc page before it brings secondary
> CPUs online and then doesn't touch the MSR again, so we should probably only tolerate one mismatch in
> time_now() before doing domain_pause().

Actually it occurred to me last night that I'm being completely thick by coding it this way. The viridian code sets TscScale, not the guest, so we don't even need to reference the HV_REFERENCE_TSC_PAGE struct.

Looking again, I'm also concerned that there's a small TOCTOU race in testing whether the reference tsc page is valid, where the guest could unmap it on another CPU and cause a NULL pointer deref in time_now(), so I'll re-work this entirely.

  Paul


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2019-03-14  9:58 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-11 13:41 [PATCH v5 00/11] viridian: implement more enlightenments Paul Durrant
2019-03-11 13:41 ` [PATCH v5 01/11] viridian: add init hooks Paul Durrant
2019-03-13 12:23   ` Jan Beulich
2019-03-11 13:41 ` [PATCH v5 02/11] viridian: separately allocate domain and vcpu structures Paul Durrant
2019-03-13 12:25   ` Jan Beulich
2019-03-11 13:41 ` [PATCH v5 03/11] viridian: use stack variables for viridian_vcpu and viridian_domain Paul Durrant
2019-03-13 12:33   ` Jan Beulich
2019-03-11 13:41 ` [PATCH v5 04/11] viridian: make 'fields' struct anonymous Paul Durrant
2019-03-13 12:34   ` Jan Beulich
2019-03-11 13:41 ` [PATCH v5 05/11] viridian: extend init/deinit hooks into synic and time modules Paul Durrant
2019-03-11 13:41 ` [PATCH v5 06/11] viridian: add missing context save helpers " Paul Durrant
2019-03-11 13:41 ` [PATCH v5 07/11] viridian: use viridian_map/unmap_guest_page() for reference tsc page Paul Durrant
2019-03-11 13:41 ` [PATCH v5 08/11] viridian: stop directly calling viridian_time_ref_count_freeze/thaw() Paul Durrant
2019-03-13 12:36   ` Jan Beulich
2019-03-11 13:41 ` [PATCH v5 09/11] viridian: add implementation of synthetic interrupt MSRs Paul Durrant
2019-03-13 13:14   ` Jan Beulich
2019-03-13 13:25     ` Paul Durrant
2019-03-13 14:23       ` Jan Beulich
2019-03-13 14:51         ` Paul Durrant
2019-03-13 16:13     ` Paul Durrant
2019-03-14  7:47       ` Jan Beulich
2019-03-14  8:46         ` Paul Durrant
2019-03-11 13:41 ` [PATCH v5 10/11] viridian: add implementation of synthetic timers Paul Durrant
2019-03-13 13:06   ` Paul Durrant
2019-03-13 14:10     ` Jan Beulich
2019-03-13 14:05   ` Jan Beulich
2019-03-13 14:37     ` Paul Durrant
2019-03-13 15:36       ` Jan Beulich
2019-03-13 15:43         ` Paul Durrant
2019-03-14  9:58       ` Paul Durrant
2019-03-11 13:41 ` [PATCH v5 11/11] viridian: add implementation of the HvSendSyntheticClusterIpi hypercall Paul Durrant
2019-03-13 14:08   ` Jan Beulich
