* [PATCH 0/6] clean up MSR save/restore code
@ 2019-01-07 12:02 Paul Durrant
  2019-01-07 12:02 ` [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code Paul Durrant
                   ` (5 more replies)
  0 siblings, 6 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Jun Nakajima, Roger Pau Monné

Patch #6 is not strictly related to save/restore. It's just clean-up of
something I noticed along the way.

Paul Durrant (6):
  x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation
    code
  x86: save GUEST_BNDCFGS on context switch...
  x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs
  x86: stop handling MSR_IA32_XSS save/restore in implementation code
  x86: remove defunct init/load/save_msr() hvm_funcs
  x86: introduce dr_mask_idx() helper function...

 xen/arch/x86/domain.c          |   2 +-
 xen/arch/x86/hvm/hvm.c         | 102 ++++++++++++++++-----------------
 xen/arch/x86/hvm/vmx/vmx.c     |  83 +++------------------------
 xen/arch/x86/msr.c             |  48 +++++++++++-----
 xen/arch/x86/pv/emul-priv-op.c |   2 +-
 xen/include/asm-x86/hvm/hvm.h  |  17 ++----
 xen/include/asm-x86/hvm/vcpu.h |   1 -
 xen/include/asm-x86/msr.h      |  33 ++++++++++-
 8 files changed, 133 insertions(+), 155 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
-- 
2.20.1



* [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-01-28  2:18   ` Tian, Kevin
  2019-03-08 16:39   ` Jan Beulich
  2019-01-07 12:02 ` [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch Paul Durrant
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Jun Nakajima, Roger Pau Monné

Saving and restoring the value of this MSR is currently handled by
implementation-specific code despite it being architectural. This patch
moves handling of accesses to this MSR from hvm.c into msr.c, thus
allowing the common MSR save/restore code to handle it.

This patch also changes hvm_get/set_guest_bndcfgs() to check CPUID policy
for the appropriate feature, rather than hardware, and also re-works
the get/set_guest_bndcfgs() hvm_funcs so they are no longer boolean. Uses
of u64 are also converted to uint64_t.

NOTE: Because vmx_get/set_guest_bndcfgs() call vmx_vmcs_enter(), the
      struct vcpu pointer passed in cannot be const.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/hvm.c         | 44 +++++++++++++++++++++-------------
 xen/arch/x86/hvm/vmx/vmx.c     | 31 +++++-------------------
 xen/arch/x86/msr.c             | 14 ++++++++++-
 xen/arch/x86/pv/emul-priv-op.c |  2 +-
 xen/include/asm-x86/hvm/hvm.h  | 13 ++++------
 xen/include/asm-x86/msr.h      |  2 +-
 6 files changed, 52 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 401c4a9312..5fd5478b7d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -308,11 +308,16 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
     return 1;
 }
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
+bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
 {
-    if ( !hvm_funcs.set_guest_bndcfgs ||
-         !is_canonical_address(val) ||
-         (val & IA32_BNDCFGS_RESERVED) )
+    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+
+    if ( !cp->feat.mpx )
+        return false;
+
+    ASSERT(hvm_funcs.set_guest_bndcfgs);
+
+    if ( !is_canonical_address(val) || (val & IA32_BNDCFGS_RESERVED) )
         return false;
 
     /*
@@ -342,7 +347,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
             /* nothing, best effort only */;
     }
 
-    return hvm_funcs.set_guest_bndcfgs(v, val);
+    hvm_funcs.set_guest_bndcfgs(v, val);
+
+    return true;
+}
+
+bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val)
+{
+    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+
+    if ( !cp->feat.mpx )
+        return false;
+
+    ASSERT(hvm_funcs.get_guest_bndcfgs);
+    *val = hvm_funcs.get_guest_bndcfgs(v);
+
+    return true;
 }
 
 /*
@@ -1312,6 +1332,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 static const uint32_t msrs_to_send[] = {
     MSR_SPEC_CTRL,
     MSR_INTEL_MISC_FEATURES_ENABLES,
+    MSR_IA32_BNDCFGS,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
@@ -1449,6 +1470,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 
         case MSR_SPEC_CTRL:
         case MSR_INTEL_MISC_FEATURES_ENABLES:
+        case MSR_IA32_BNDCFGS:
         case MSR_AMD64_DR0_ADDRESS_MASK:
         case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
             rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
@@ -3472,12 +3494,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = v->arch.hvm.msr_xss;
         break;
 
-    case MSR_IA32_BNDCFGS:
-        if ( !d->arch.cpuid->feat.mpx ||
-             !hvm_get_guest_bndcfgs(v, msr_content) )
-            goto gp_fault;
-        break;
-
     case MSR_K8_ENABLE_C1E:
     case MSR_AMD64_NB_CFG:
          /*
@@ -3624,12 +3640,6 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         v->arch.hvm.msr_xss = msr_content;
         break;
 
-    case MSR_IA32_BNDCFGS:
-        if ( !d->arch.cpuid->feat.mpx ||
-             !hvm_set_guest_bndcfgs(v, msr_content) )
-            goto gp_fault;
-        break;
-
     case MSR_AMD64_NB_CFG:
         /* ignore the write */
         break;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 64af8bf943..4bfabe8d0e 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -805,17 +805,6 @@ static unsigned int __init vmx_init_msr(void)
 
 static void vmx_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
 {
-    vmx_vmcs_enter(v);
-
-    if ( cpu_has_mpx && cpu_has_vmx_mpx )
-    {
-        __vmread(GUEST_BNDCFGS, &ctxt->msr[ctxt->count].val);
-        if ( ctxt->msr[ctxt->count].val )
-            ctxt->msr[ctxt->count++].index = MSR_IA32_BNDCFGS;
-    }
-
-    vmx_vmcs_exit(v);
-
     if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
     {
         ctxt->msr[ctxt->count].val = v->arch.hvm.msr_xss;
@@ -835,14 +824,6 @@ static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
     {
         switch ( ctxt->msr[i].index )
         {
-        case MSR_IA32_BNDCFGS:
-            if ( cpu_has_mpx && cpu_has_vmx_mpx &&
-                 is_canonical_address(ctxt->msr[i].val) &&
-                 !(ctxt->msr[i].val & IA32_BNDCFGS_RESERVED) )
-                __vmwrite(GUEST_BNDCFGS, ctxt->msr[i].val);
-            else if ( ctxt->msr[i].val )
-                err = -ENXIO;
-            break;
         case MSR_IA32_XSS:
             if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
                 v->arch.hvm.msr_xss = ctxt->msr[i].val;
@@ -1204,26 +1185,26 @@ static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
     return 1;
 }
 
-static bool vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+static void vmx_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
 {
     ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
 
     vmx_vmcs_enter(v);
     __vmwrite(GUEST_BNDCFGS, val);
     vmx_vmcs_exit(v);
-
-    return true;
 }
 
-static bool vmx_get_guest_bndcfgs(struct vcpu *v, u64 *val)
+static uint64_t vmx_get_guest_bndcfgs(struct vcpu *v)
 {
+    uint64_t val;
+
     ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
 
     vmx_vmcs_enter(v);
-    __vmread(GUEST_BNDCFGS, val);
+    __vmread(GUEST_BNDCFGS, &val);
     vmx_vmcs_exit(v);
 
-    return true;
+    return val;
 }
 
 static void vmx_handle_cd(struct vcpu *v, unsigned long value)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9bb38b6d66..a3406c29a8 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -115,7 +115,7 @@ int init_vcpu_msr_policy(struct vcpu *v)
     return 0;
 }
 
-int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
+int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 {
     const struct vcpu *curr = current;
     const struct domain *d = v->domain;
@@ -158,6 +158,12 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         ret = guest_rdmsr_x2apic(v, msr, val);
         break;
 
+    case MSR_IA32_BNDCFGS:
+        if ( !is_hvm_domain(d) || !hvm_get_guest_bndcfgs(v, val) )
+            goto gp_fault;
+
+        break;
+
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
@@ -319,6 +325,12 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         ret = guest_wrmsr_x2apic(v, msr, val);
         break;
 
+    case MSR_IA32_BNDCFGS:
+        if ( !is_hvm_domain(d) || !hvm_set_guest_bndcfgs(v, val) )
+            goto gp_fault;
+
+        break;
+
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 942ece2ca0..678dad3792 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -807,7 +807,7 @@ static inline bool is_cpufreq_controller(const struct domain *d)
 static int read_msr(unsigned int reg, uint64_t *val,
                     struct x86_emulate_ctxt *ctxt)
 {
-    const struct vcpu *curr = current;
+    struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
     bool vpmu_msr = false;
     int ret;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0a10b51554..5c8237e087 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -149,8 +149,8 @@ struct hvm_function_table {
     int  (*get_guest_pat)(struct vcpu *v, u64 *);
     int  (*set_guest_pat)(struct vcpu *v, u64);
 
-    bool (*get_guest_bndcfgs)(struct vcpu *v, u64 *);
-    bool (*set_guest_bndcfgs)(struct vcpu *v, u64);
+    uint64_t (*get_guest_bndcfgs)(struct vcpu *v);
+    void (*set_guest_bndcfgs)(struct vcpu *v, uint64_t);
 
     void (*set_tsc_offset)(struct vcpu *v, u64 offset, u64 at_tsc);
 
@@ -283,8 +283,6 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
 void hvm_set_segment_register(struct vcpu *v, enum x86_segment seg,
                               struct segment_register *reg);
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
-
 bool hvm_check_cpuid_faulting(struct vcpu *v);
 void hvm_migrate_timers(struct vcpu *v);
 void hvm_do_resume(struct vcpu *v);
@@ -446,11 +444,8 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
     return hvm_funcs.get_shadow_gs_base(v);
 }
 
-static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
-{
-    return hvm_funcs.get_guest_bndcfgs &&
-           hvm_funcs.get_guest_bndcfgs(v, val);
-}
+bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val);
+bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val);
 
 #define has_hvm_params(d) \
     ((d)->arch.hvm.params != NULL)
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index adfa2fa05b..ad8688a61f 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -320,7 +320,7 @@ int init_vcpu_msr_policy(struct vcpu *v);
  * These functions are also used by the migration logic, so need to cope with
  * being used outside of v's context.
  */
-int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val);
+int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val);
 int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val);
 
 #endif /* !__ASSEMBLY__ */
-- 
2.20.1



* [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
  2019-01-07 12:02 ` [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-01-28  2:33   ` Tian, Kevin
  2019-03-08 16:42   ` Jan Beulich
  2019-01-07 12:02 ` [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs Paul Durrant
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Jun Nakajima, Roger Pau Monné

...to avoid the need for a VMCS reload when the value of MSR_IA32_BNDCFGS is
read by the tool-stack.
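
For context, the read path that benefits is roughly the following (call
chain reconstructed from patch #1 of this series, shown for illustration
only):

    /*
     * hvm_save_cpu_msrs(v, h)                    tool-stack save, v is
     *   guest_rdmsr(v, MSR_IA32_BNDCFGS, &val)   typically not current
     *     hvm_get_guest_bndcfgs(v, &val)         with this patch, served
     *                                            from the cached value in
     *                                            v->arch.msrs->bndcfgs
     */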

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/hvm.c     | 18 +++++++++++++++---
 xen/arch/x86/hvm/vmx/vmx.c |  3 +++
 xen/include/asm-x86/msr.h  |  5 +++++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5fd5478b7d..b86aed7c24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -311,6 +311,7 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
 bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
 {
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    struct vcpu_msrs *msrs = v->arch.msrs;
 
     if ( !cp->feat.mpx )
         return false;
@@ -347,7 +348,8 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
             /* nothing, best effort only */;
     }
 
-    hvm_funcs.set_guest_bndcfgs(v, val);
+    msrs->bndcfgs.raw = val;
+    hvm_funcs.set_guest_bndcfgs(v, msrs->bndcfgs.raw);
 
     return true;
 }
@@ -355,12 +357,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
 bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val)
 {
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    struct vcpu_msrs *msrs = v->arch.msrs;
 
     if ( !cp->feat.mpx )
         return false;
 
-    ASSERT(hvm_funcs.get_guest_bndcfgs);
-    *val = hvm_funcs.get_guest_bndcfgs(v);
+    /*
+     * The value only need be read in current context as a context
+     * switch will save the value into msrs->bndcfgs.
+     */
+    if ( v == current )
+    {
+        ASSERT(hvm_funcs.get_guest_bndcfgs);
+        msrs->bndcfgs.raw = hvm_funcs.get_guest_bndcfgs(v);
+    }
+
+    *val = msrs->bndcfgs.raw;
 
     return true;
 }
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4bfabe8d0e..7dba92da45 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -497,6 +497,9 @@ static void vmx_restore_host_msrs(void)
 
 static void vmx_save_guest_msrs(struct vcpu *v)
 {
+    if ( cpu_has_mpx && cpu_has_vmx_mpx )
+        __vmread(GUEST_BNDCFGS, &v->arch.msrs->bndcfgs.raw);
+
     /*
      * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
      * be updated at any time via SWAPGS, which we cannot trap.
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index ad8688a61f..c69ce56963 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -305,6 +305,11 @@ struct vcpu_msrs
      * values here may be stale in current context.
      */
     uint32_t dr_mask[4];
+
+    /* 0x00000d90 - MSR_IA32_BNDCFGS */
+    struct {
+        uint64_t raw;
+    } bndcfgs;
 };
 
 void init_guest_msr_policy(void);
-- 
2.20.1



* [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
  2019-01-07 12:02 ` [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code Paul Durrant
  2019-01-07 12:02 ` [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-01-28  2:34   ` Tian, Kevin
  2019-03-08 16:43   ` Jan Beulich
  2019-01-07 12:02 ` [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code Paul Durrant
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Jun Nakajima, Roger Pau Monné

Currently the value is saved directly in struct hvm_vcpu. This patch simply
co-locates it with other saved MSR values. No functional change.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/domain.c          | 2 +-
 xen/arch/x86/hvm/hvm.c         | 4 ++--
 xen/arch/x86/hvm/vmx/vmx.c     | 4 ++--
 xen/include/asm-x86/hvm/vcpu.h | 1 -
 xen/include/asm-x86/msr.h      | 5 +++++
 5 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 32dc4253ff..c5f4b6a38a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1704,7 +1704,7 @@ static void __context_switch(void)
                 BUG();
 
             if ( cpu_has_xsaves && is_hvm_vcpu(n) )
-                set_msr_xss(n->arch.hvm.msr_xss);
+                set_msr_xss(n->arch.msrs->xss.raw);
         }
         vcpu_restore_fpu_nonlazy(n, false);
         nd->arch.ctxt_switch->to(n);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b86aed7c24..b55bb8b081 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3503,7 +3503,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_XSS:
         if ( !d->arch.cpuid->xstate.xsaves )
             goto gp_fault;
-        *msr_content = v->arch.hvm.msr_xss;
+        *msr_content = v->arch.msrs->xss.raw;
         break;
 
     case MSR_K8_ENABLE_C1E:
@@ -3649,7 +3649,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         /* No XSS features currently supported for guests. */
         if ( !d->arch.cpuid->xstate.xsaves || msr_content != 0 )
             goto gp_fault;
-        v->arch.hvm.msr_xss = msr_content;
+        v->arch.msrs->xss.raw = msr_content;
         break;
 
     case MSR_AMD64_NB_CFG:
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 7dba92da45..ec87607ec7 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -810,7 +810,7 @@ static void vmx_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
 {
     if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
     {
-        ctxt->msr[ctxt->count].val = v->arch.hvm.msr_xss;
+        ctxt->msr[ctxt->count].val = v->arch.msrs->xss.raw;
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_IA32_XSS;
     }
@@ -829,7 +829,7 @@ static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
         {
         case MSR_IA32_XSS:
             if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
-                v->arch.hvm.msr_xss = ctxt->msr[i].val;
+                v->arch.msrs->xss.raw = ctxt->msr[i].val;
             else
                 err = -ENXIO;
             break;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index c8a40f6d55..a8cfd13987 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -171,7 +171,6 @@ struct hvm_vcpu {
     struct hvm_vcpu_asid n1asid;
 
     u64                 msr_tsc_adjust;
-    u64                 msr_xss;
 
     union {
         struct vmx_vcpu vmx;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index c69ce56963..e9c685613e 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -310,6 +310,11 @@ struct vcpu_msrs
     struct {
         uint64_t raw;
     } bndcfgs;
+
+    /* 0x00000da0 - MSR_IA32_XSS */
+    struct {
+        uint64_t raw;
+    } xss;
 };
 
 void init_guest_msr_policy(void);
-- 
2.20.1



* [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
                   ` (2 preceding siblings ...)
  2019-01-07 12:02 ` [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-01-28  2:37   ` Tian, Kevin
  2019-03-08 16:46   ` Jan Beulich
  2019-01-07 12:02 ` [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs Paul Durrant
  2019-01-07 12:02 ` [PATCH 6/6] x86: introduce dr_mask_idx() helper function Paul Durrant
  5 siblings, 2 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Paul Durrant,
	Jun Nakajima, Roger Pau Monné

Saving and restoring the value of this MSR is currently handled by
implementation-specific code despite it being architectural. This patch
moves handling of accesses to this MSR from hvm.c into msr.c, thus
allowing the common MSR save/restore code to handle it.

This patch also adds proper checks of CPUID policy in the new get/set code.

NOTE: MSR_IA32_XSS is the last MSR to be saved and restored by
      implementation-specific code. This patch therefore removes the
      (VMX) definitions of the init_msr(), save_msr() and
      load_msr() hvm_funcs, as they are no longer necessary. The
      declarations of and calls to those hvm_funcs will be cleaned up
      by a subsequent patch.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/hvm.c     | 15 ++----------
 xen/arch/x86/hvm/vmx/vmx.c | 49 --------------------------------------
 xen/arch/x86/msr.c         | 18 ++++++++++++++
 3 files changed, 20 insertions(+), 62 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b55bb8b081..856dcf696b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1345,6 +1345,7 @@ static const uint32_t msrs_to_send[] = {
     MSR_SPEC_CTRL,
     MSR_INTEL_MISC_FEATURES_ENABLES,
     MSR_IA32_BNDCFGS,
+    MSR_IA32_XSS,
     MSR_AMD64_DR0_ADDRESS_MASK,
     MSR_AMD64_DR1_ADDRESS_MASK,
     MSR_AMD64_DR2_ADDRESS_MASK,
@@ -1483,6 +1484,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
         case MSR_SPEC_CTRL:
         case MSR_INTEL_MISC_FEATURES_ENABLES:
         case MSR_IA32_BNDCFGS:
+        case MSR_IA32_XSS:
         case MSR_AMD64_DR0_ADDRESS_MASK:
         case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
             rc = guest_wrmsr(v, ctxt->msr[i].index, ctxt->msr[i].val);
@@ -3500,12 +3502,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = var_range_base[index];
         break;
 
-    case MSR_IA32_XSS:
-        if ( !d->arch.cpuid->xstate.xsaves )
-            goto gp_fault;
-        *msr_content = v->arch.msrs->xss.raw;
-        break;
-
     case MSR_K8_ENABLE_C1E:
     case MSR_AMD64_NB_CFG:
          /*
@@ -3645,13 +3641,6 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         break;
 
-    case MSR_IA32_XSS:
-        /* No XSS features currently supported for guests. */
-        if ( !d->arch.cpuid->xstate.xsaves || msr_content != 0 )
-            goto gp_fault;
-        v->arch.msrs->xss.raw = msr_content;
-        break;
-
     case MSR_AMD64_NB_CFG:
         /* ignore the write */
         break;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ec87607ec7..f175b79b4b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -800,52 +800,6 @@ static int vmx_load_vmcs_ctxt(struct vcpu *v, struct hvm_hw_cpu *ctxt)
     return 0;
 }
 
-static unsigned int __init vmx_init_msr(void)
-{
-    return (cpu_has_mpx && cpu_has_vmx_mpx) +
-           (cpu_has_xsaves && cpu_has_vmx_xsaves);
-}
-
-static void vmx_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
-{
-    if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
-    {
-        ctxt->msr[ctxt->count].val = v->arch.msrs->xss.raw;
-        if ( ctxt->msr[ctxt->count].val )
-            ctxt->msr[ctxt->count++].index = MSR_IA32_XSS;
-    }
-}
-
-static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
-{
-    unsigned int i;
-    int err = 0;
-
-    vmx_vmcs_enter(v);
-
-    for ( i = 0; i < ctxt->count; ++i )
-    {
-        switch ( ctxt->msr[i].index )
-        {
-        case MSR_IA32_XSS:
-            if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
-                v->arch.msrs->xss.raw = ctxt->msr[i].val;
-            else
-                err = -ENXIO;
-            break;
-        default:
-            continue;
-        }
-        if ( err )
-            break;
-        ctxt->msr[i]._rsvd = 1;
-    }
-
-    vmx_vmcs_exit(v);
-
-    return err;
-}
-
 static void vmx_fpu_enter(struct vcpu *v)
 {
     vcpu_restore_fpu_lazy(v);
@@ -2283,9 +2237,6 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
     .load_cpu_ctxt        = vmx_load_vmcs_ctxt,
-    .init_msr             = vmx_init_msr,
-    .save_msr             = vmx_save_msr,
-    .load_msr             = vmx_load_msr,
     .get_interrupt_shadow = vmx_get_interrupt_shadow,
     .set_interrupt_shadow = vmx_set_interrupt_shadow,
     .guest_x86_mode       = vmx_guest_x86_mode,
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a3406c29a8..3aa79031cf 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -164,6 +164,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
         break;
 
+    case MSR_IA32_XSS:
+        if ( !is_hvm_domain(d) || !cp->xstate.xsaves )
+            goto gp_fault;
+
+        *val = msrs->xss.raw;
+        break;
+
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
@@ -331,6 +338,17 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
         break;
 
+    case MSR_IA32_XSS:
+        if ( !is_hvm_domain(d) || !cp->xstate.xsaves )
+            goto gp_fault;
+
+        /* No XSS features currently supported for guests */
+        if ( val != 0 )
+            goto gp_fault;
+
+        msrs->xss.raw = val;
+        break;
+
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
-- 
2.20.1



* [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
                   ` (3 preceding siblings ...)
  2019-01-07 12:02 ` [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-03-08 16:50   ` Jan Beulich
  2019-01-07 12:02 ` [PATCH 6/6] x86: introduce dr_mask_idx() helper function Paul Durrant
  5 siblings, 1 reply; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

These hvm_funcs are no longer required since no MSR values are saved or
restored by implementation-specific code.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hvm.c        | 31 ++++++++++---------------------
 xen/include/asm-x86/hvm/hvm.h |  4 ----
 2 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 856dcf696b..199aa0c148 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1351,7 +1351,6 @@ static const uint32_t msrs_to_send[] = {
     MSR_AMD64_DR2_ADDRESS_MASK,
     MSR_AMD64_DR3_ADDRESS_MASK,
 };
-static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
 static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
 {
@@ -1361,7 +1360,7 @@ static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
     int err;
 
     err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
-                             HVM_CPU_MSR_SIZE(msr_count_max));
+                             HVM_CPU_MSR_SIZE(ARRAY_SIZE(msrs_to_send)));
     if ( err )
         return err;
     ctxt = (struct hvm_msr *)&h->data[h->cur];
@@ -1394,10 +1393,7 @@ static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
         ctxt->msr[ctxt->count++].val = val;
     }
 
-    if ( hvm_funcs.save_msr )
-        hvm_funcs.save_msr(v, ctxt);
-
-    ASSERT(ctxt->count <= msr_count_max);
+    ASSERT(ctxt->count <= ARRAY_SIZE(msrs_to_send));
 
     for ( i = 0; i < ctxt->count; ++i )
         ctxt->msr[i]._rsvd = 0;
@@ -1472,10 +1468,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
             return -EOPNOTSUPP;
     /* Checking finished */
 
-    if ( hvm_funcs.load_msr )
-        err = hvm_funcs.load_msr(v, ctxt);
-
-    for ( i = 0; !err && i < ctxt->count; ++i )
+    for ( i = 0; i < ctxt->count; ++i )
     {
         switch ( ctxt->msr[i].index )
         {
@@ -1516,17 +1509,13 @@ static int __init hvm_register_CPU_save_and_restore(void)
                             sizeof(struct hvm_save_descriptor),
                         HVMSR_PER_VCPU);
 
-    if ( hvm_funcs.init_msr )
-        msr_count_max += hvm_funcs.init_msr();
-
-    if ( msr_count_max )
-        hvm_register_savevm(CPU_MSR_CODE,
-                            "CPU_MSR",
-                            hvm_save_cpu_msrs,
-                            hvm_load_cpu_msrs,
-                            HVM_CPU_MSR_SIZE(msr_count_max) +
-                                sizeof(struct hvm_save_descriptor),
-                            HVMSR_PER_VCPU);
+    hvm_register_savevm(CPU_MSR_CODE,
+                        "CPU_MSR",
+                        hvm_save_cpu_msrs,
+                        hvm_load_cpu_msrs,
+                        HVM_CPU_MSR_SIZE(ARRAY_SIZE(msrs_to_send)) +
+                            sizeof(struct hvm_save_descriptor),
+                        HVMSR_PER_VCPU);
 
     return 0;
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5c8237e087..a7b08d1db3 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -115,10 +115,6 @@ struct hvm_function_table {
     void (*save_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
     int (*load_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
 
-    unsigned int (*init_msr)(void);
-    void (*save_msr)(struct vcpu *, struct hvm_msr *);
-    int (*load_msr)(struct vcpu *, struct hvm_msr *);
-
     /* Examine specifics of the guest state. */
     unsigned int (*get_interrupt_shadow)(struct vcpu *v);
     void (*set_interrupt_shadow)(struct vcpu *v, unsigned int intr_shadow);
-- 
2.20.1



* [PATCH 6/6] x86: introduce dr_mask_idx() helper function...
  2019-01-07 12:02 [PATCH 0/6] clean up MSR save/restore code Paul Durrant
                   ` (4 preceding siblings ...)
  2019-01-07 12:02 ` [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs Paul Durrant
@ 2019-01-07 12:02 ` Paul Durrant
  2019-03-08 16:58   ` Jan Beulich
  5 siblings, 1 reply; 26+ messages in thread
From: Paul Durrant @ 2019-01-07 12:02 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

...to avoid repeated open-coding.

Unfortunately the mapping from MSR index MSR_AMD64_DR<idx>_ADDRESS_MASK
to the 'idx' value is non-trivial since the MSR index corresponding to
idx value 0 is non-consecutive with the MSR indices corresponding to
idx values 1-3. This mapping is currently dealt with by near-identical
open coding in guest_rdmsr() and guest_wrmsr().

This patch adds a helper function, dr_mask_idx(), to handle the mapping
and then uses this in guest_rdmsr() and guest_wrmsr() instead, thus
making the code somewhat neater.
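
For reference, the layout being described is roughly as follows (MSR
numbers quoted from the AMD manuals purely for illustration; they are
not part of this patch):

    /*
     * MSR_AMD64_DR0_ADDRESS_MASK  (0xc0011027)  ->  dr_mask[0]
     * MSR_AMD64_DR1_ADDRESS_MASK  (0xc0011019)  ->  dr_mask[1]
     * MSR_AMD64_DR2_ADDRESS_MASK  (0xc001101a)  ->  dr_mask[2]
     * MSR_AMD64_DR3_ADDRESS_MASK  (0xc001101b)  ->  dr_mask[3]
     */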

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/msr.c        | 18 +++++-------------
 xen/include/asm-x86/msr.h | 21 +++++++++++++++++++++
 2 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 3aa79031cf..4e4c9c805a 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -21,7 +21,6 @@
 
 #include <xen/init.h>
 #include <xen/lib.h>
-#include <xen/nospec.h>
 #include <xen/sched.h>
 
 #include <asm/debugreg.h>
@@ -121,7 +120,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     const struct domain *d = v->domain;
     const struct cpuid_policy *cp = d->arch.cpuid;
     const struct msr_policy *mp = d->arch.msr;
-    const struct vcpu_msrs *msrs = v->arch.msrs;
+    struct vcpu_msrs *msrs = v->arch.msrs;
     int ret = X86EMUL_OKAY;
 
     switch ( msr )
@@ -202,13 +201,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
          */
 #ifdef CONFIG_HVM
         if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
-            rdmsrl(msr, *val);
-        else
+            rdmsrl(msr, msrs->dr_mask[dr_mask_idx(msr)]);
 #endif
-            *val = msrs->dr_mask[
-                array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
-                                   ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
-                                   ARRAY_SIZE(msrs->dr_mask))];
+
+        *val = msrs->dr_mask[dr_mask_idx(msr)];
         break;
 
     default:
@@ -377,11 +373,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( !cp->extd.dbext || val != (uint32_t)val )
             goto gp_fault;
 
-        msrs->dr_mask[
-            array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
-                               ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
-                               ARRAY_SIZE(msrs->dr_mask))] = val;
-
+        msrs->dr_mask[dr_mask_idx(msr)] = val;
         if ( v == curr && (curr->arch.dr7 & DR7_ACTIVE_MASK) )
             wrmsrl(msr, val);
         break;
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index e9c685613e..5eb5e6dace 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -5,6 +5,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <xen/nospec.h>
 #include <xen/types.h>
 #include <xen/percpu.h>
 #include <xen/errno.h>
@@ -317,6 +318,26 @@ struct vcpu_msrs
     } xss;
 };
 
+static inline unsigned int dr_mask_idx(uint32_t msr)
+{
+    switch (msr)
+    {
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case MSR_AMD64_DR0_ADDRESS_MASK:
+        return 0;
+
+    case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
+    {
+        struct vcpu_msrs msrs; /* only used for ARRAY_SIZE() */
+
+        return array_index_nospec(msr - MSR_AMD64_DR1_ADDRESS_MASK + 1,
+                                  ARRAY_SIZE(msrs.dr_mask));
+    }
+    }
+}
+
 void init_guest_msr_policy(void);
 int init_domain_msr_policy(struct domain *d);
 int init_vcpu_msr_policy(struct vcpu *v);
-- 
2.20.1



* Re: [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
  2019-01-07 12:02 ` [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code Paul Durrant
@ 2019-01-28  2:18   ` Tian, Kevin
  2019-03-08 16:39   ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2019-01-28  2:18 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
> 
> Saving and restoring the value of this MSR is currently handled by
> implementation-specific code despite it being architectural. This patch
> moves handling of accesses to this MSR from hvm.c into msr.c, thus
> allowing the common MSR save/restore code to handle it.
> 
> This patch also changes hvm_get/set_guest_bndcfgs() to check CPUID policy
> for the appropriate feature, rather than hardware, and also re-works
> the get/set_guest_bndcfgs() hvm_funcs so they are no longer boolean.
> Uses
> of u64 are also converted to uint64_t.
> 
> NOTE: Because vmx_get/set_guest_bndcfgs() call vmx_vmcs_enter(), the
>       struct vcpu pointer passed in cannot be const.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
  2019-01-07 12:02 ` [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch Paul Durrant
@ 2019-01-28  2:33   ` Tian, Kevin
  2019-01-28  9:59     ` Paul Durrant
  2019-03-08 16:42   ` Jan Beulich
  1 sibling, 1 reply; 26+ messages in thread
From: Tian, Kevin @ 2019-01-28  2:33 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
> 
> ...to avoid the need for a VMCS reload when the value of
> MSR_IA32_BNDCFGS is
> read by the tool-stack.

The frequency of context switches is much higher than that
of tool-stack reads (at least in the general case), so is it
the right thing to add a vmread on every context switch
to save a VMCS reload in the tool-stack path?

> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> ---
>  xen/arch/x86/hvm/hvm.c     | 18 +++++++++++++++---
>  xen/arch/x86/hvm/vmx/vmx.c |  3 +++
>  xen/include/asm-x86/msr.h  |  5 +++++
>  3 files changed, 23 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5fd5478b7d..b86aed7c24 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -311,6 +311,7 @@ int hvm_set_guest_pat(struct vcpu *v, u64
> guest_pat)
>  bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
>  {
>      const struct cpuid_policy *cp = v->domain->arch.cpuid;
> +    struct vcpu_msrs *msrs = v->arch.msrs;
> 
>      if ( !cp->feat.mpx )
>          return false;
> @@ -347,7 +348,8 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v,
> uint64_t val)
>              /* nothing, best effort only */;
>      }
> 
> -    hvm_funcs.set_guest_bndcfgs(v, val);
> +    msrs->bndcfgs.raw = val;
> +    hvm_funcs.set_guest_bndcfgs(v, msrs->bndcfgs.raw);
> 
>      return true;
>  }
> @@ -355,12 +357,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v,
> uint64_t val)
>  bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val)
>  {
>      const struct cpuid_policy *cp = v->domain->arch.cpuid;
> +    struct vcpu_msrs *msrs = v->arch.msrs;
> 
>      if ( !cp->feat.mpx )
>          return false;
> 
> -    ASSERT(hvm_funcs.get_guest_bndcfgs);
> -    *val = hvm_funcs.get_guest_bndcfgs(v);
> +    /*
> +     * The value only need be read in current context as a context
> +     * switch will save the value into msrs->bndcfgs.
> +     */
> +    if ( v == current )
> +    {
> +        ASSERT(hvm_funcs.get_guest_bndcfgs);
> +        msrs->bndcfgs.raw = hvm_funcs.get_guest_bndcfgs(v);
> +    }
> +
> +    *val = msrs->bndcfgs.raw;
> 
>      return true;
>  }
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 4bfabe8d0e..7dba92da45 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -497,6 +497,9 @@ static void vmx_restore_host_msrs(void)
> 
>  static void vmx_save_guest_msrs(struct vcpu *v)
>  {
> +    if ( cpu_has_mpx && cpu_has_vmx_mpx )
> +        __vmread(GUEST_BNDCFGS, &v->arch.msrs->bndcfgs.raw);
> +
>      /*
>       * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
>       * be updated at any time via SWAPGS, which we cannot trap.
> diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
> index ad8688a61f..c69ce56963 100644
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -305,6 +305,11 @@ struct vcpu_msrs
>       * values here may be stale in current context.
>       */
>      uint32_t dr_mask[4];
> +
> +    /* 0x00000d90 - MSR_IA32_BNDCFGS */
> +    struct {
> +        uint64_t raw;
> +    } bndcfgs;
>  };
> 
>  void init_guest_msr_policy(void);
> --
> 2.20.1


* Re: [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs
  2019-01-07 12:02 ` [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs Paul Durrant
@ 2019-01-28  2:34   ` Tian, Kevin
  2019-03-08 16:43   ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2019-01-28  2:34 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
> 
> Currently the value is saved directly in struct hvm_vcpu. This patch simply
> co-locates it with other saved MSR values. No functional change.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code
  2019-01-07 12:02 ` [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code Paul Durrant
@ 2019-01-28  2:37   ` Tian, Kevin
  2019-03-08 16:46   ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2019-01-28  2:37 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Andrew Cooper, Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
> 
> Saving and restoring the value of this MSR is currently handled by
> implementation-specific code despite it being architectural. This patch
> moves handling of accesses to this MSR from hvm.c into msr.c, thus
> allowing the common MSR save/restore code to handle it.
> 
> This patch also adds proper checks of CPUID policy in the new get/set code.
> 
> NOTE: MSR_IA32_XSS is the last MSR to be saved and restored by
>       implementation-specific code. This patch therefore removes the
>       (VMX) definitions of the init_msr(), save_msr() and
>       load_msr() hvm_funcs, as they are no longer necessary. The
>       declarations of and calls to those hvm_funcs will be cleaned up
>       by a subsequent patch.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
  2019-01-28  2:33   ` Tian, Kevin
@ 2019-01-28  9:59     ` Paul Durrant
  0 siblings, 0 replies; 26+ messages in thread
From: Paul Durrant @ 2019-01-28  9:59 UTC (permalink / raw)
  To: Kevin Tian, xen-devel, Andrew Cooper
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monne

> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: 28 January 2019 02:34
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>; Roger Pau
> Monne <roger.pau@citrix.com>; Nakajima, Jun <jun.nakajima@intel.com>
> Subject: RE: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
> 
> > From: Paul Durrant [mailto:paul.durrant@citrix.com]
> > Sent: Monday, January 7, 2019 8:03 PM
> >
> > ...to avoid the need for a VMCS reload when the value of
> > MSR_IA32_BNDCFGS is
> > read by the tool-stack.
> 
> The frequency of context switches is much higher than that
> of tool-stack reads (at least in the general case), so is it
> the right thing to add a vmread on every context switch
> to save a VMCS reload in the tool-stack path?

AIUI the cost of the VMCS reload is sufficiently high (and the extra vmread sufficiently cheap) to make this desirable. 

Andrew, do you have any comment?

  Paul
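
For readers outside the VMX code, a rough sketch of the two costs being
compared, based on my reading of vmx_vmcs_enter(); treat the cost
description as an assumption rather than a measurement:

    uint64_t val;

    /* Without the patch: tool-stack read of a non-current vCPU. */
    vmx_vmcs_enter(v);      /* may have to IPI the pCPU holding v's VMCS
                             * and VMPTRLD it locally; the owning pCPU
                             * then has to reload the VMCS afterwards */
    __vmread(GUEST_BNDCFGS, &val);
    vmx_vmcs_exit(v);

    /* With the patch: one extra vmread while v's VMCS is still current. */
    __vmread(GUEST_BNDCFGS, &v->arch.msrs->bndcfgs.raw);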

> 
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > Cc: Jun Nakajima <jun.nakajima@intel.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > ---
> >  xen/arch/x86/hvm/hvm.c     | 18 +++++++++++++++---
> >  xen/arch/x86/hvm/vmx/vmx.c |  3 +++
> >  xen/include/asm-x86/msr.h  |  5 +++++
> >  3 files changed, 23 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 5fd5478b7d..b86aed7c24 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -311,6 +311,7 @@ int hvm_set_guest_pat(struct vcpu *v, u64
> > guest_pat)
> >  bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
> >  {
> >      const struct cpuid_policy *cp = v->domain->arch.cpuid;
> > +    struct vcpu_msrs *msrs = v->arch.msrs;
> >
> >      if ( !cp->feat.mpx )
> >          return false;
> > @@ -347,7 +348,8 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v,
> > uint64_t val)
> >              /* nothing, best effort only */;
> >      }
> >
> > -    hvm_funcs.set_guest_bndcfgs(v, val);
> > +    msrs->bndcfgs.raw = val;
> > +    hvm_funcs.set_guest_bndcfgs(v, msrs->bndcfgs.raw);
> >
> >      return true;
> >  }
> > @@ -355,12 +357,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v,
> > uint64_t val)
> >  bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val)
> >  {
> >      const struct cpuid_policy *cp = v->domain->arch.cpuid;
> > +    struct vcpu_msrs *msrs = v->arch.msrs;
> >
> >      if ( !cp->feat.mpx )
> >          return false;
> >
> > -    ASSERT(hvm_funcs.get_guest_bndcfgs);
> > -    *val = hvm_funcs.get_guest_bndcfgs(v);
> > +    /*
> > +     * The value only need be read in current context as a context
> > +     * switch will save the value into msrs->bndcfgs.
> > +     */
> > +    if ( v == current )
> > +    {
> > +        ASSERT(hvm_funcs.get_guest_bndcfgs);
> > +        msrs->bndcfgs.raw = hvm_funcs.get_guest_bndcfgs(v);
> > +    }
> > +
> > +    *val = msrs->bndcfgs.raw;
> >
> >      return true;
> >  }
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index 4bfabe8d0e..7dba92da45 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -497,6 +497,9 @@ static void vmx_restore_host_msrs(void)
> >
> >  static void vmx_save_guest_msrs(struct vcpu *v)
> >  {
> > +    if ( cpu_has_mpx && cpu_has_vmx_mpx )
> > +        __vmread(GUEST_BNDCFGS, &v->arch.msrs->bndcfgs.raw);
> > +
> >      /*
> >       * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
> >       * be updated at any time via SWAPGS, which we cannot trap.
> > diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
> > index ad8688a61f..c69ce56963 100644
> > --- a/xen/include/asm-x86/msr.h
> > +++ b/xen/include/asm-x86/msr.h
> > @@ -305,6 +305,11 @@ struct vcpu_msrs
> >       * values here may be stale in current context.
> >       */
> >      uint32_t dr_mask[4];
> > +
> > +    /* 0x00000d90 - MSR_IA32_BNDCFGS */
> > +    struct {
> > +        uint64_t raw;
> > +    } bndcfgs;
> >  };
> >
> >  void init_guest_msr_policy(void);
> > --
> > 2.20.1


* Re: [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
  2019-01-07 12:02 ` [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code Paul Durrant
  2019-01-28  2:18   ` Tian, Kevin
@ 2019-03-08 16:39   ` Jan Beulich
  2019-03-11 16:10     ` Paul Durrant
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:39 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -308,11 +308,16 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
>      return 1;
>  }
>  
> -bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
> +bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
>  {
> -    if ( !hvm_funcs.set_guest_bndcfgs ||
> -         !is_canonical_address(val) ||
> -         (val & IA32_BNDCFGS_RESERVED) )
> +    const struct cpuid_policy *cp = v->domain->arch.cpuid;
> +
> +    if ( !cp->feat.mpx )

Is the local variable really worth it?

> +        return false;
> +
> +    ASSERT(hvm_funcs.set_guest_bndcfgs);

I think it would be better (safer for release builds) if this were
left as part of the if(). Otherwise, ...

> @@ -342,7 +347,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
>              /* nothing, best effort only */;
>      }
>  
> -    return hvm_funcs.set_guest_bndcfgs(v, val);
> +    hvm_funcs.set_guest_bndcfgs(v, val);

... you risk calling NULL here.

> @@ -3472,12 +3494,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content = v->arch.hvm.msr_xss;
>          break;
>  
> -    case MSR_IA32_BNDCFGS:
> -        if ( !d->arch.cpuid->feat.mpx ||
> -             !hvm_get_guest_bndcfgs(v, msr_content) )
> -            goto gp_fault;
> -        break;

Note how this already checks the CPUID policy, somewhat contrary to
what you say in the commit message. I think this wants to remain this
way in guest_{rd,wr}msr() - see all the other CPUID policy checks there.
That way hvm_get_guest_bndcfgs() can be made consistent with the
hook pointer type again, and it can remain an inline function as before.
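
A minimal sketch of the suggested shape, as I read it (fragments only,
for illustration):

    /* xen/arch/x86/msr.c: keep the policy check at the call site */
    case MSR_IA32_BNDCFGS:
        if ( !is_hvm_domain(d) || !cp->feat.mpx ||
             !hvm_get_guest_bndcfgs(v, val) )
            goto gp_fault;
        break;

    /* xen/include/asm-x86/hvm/hvm.h: wrapper stays inline and boolean */
    static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val)
    {
        return hvm_funcs.get_guest_bndcfgs &&
               hvm_funcs.get_guest_bndcfgs(v, val);
    }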

Jan




* Re: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
  2019-01-07 12:02 ` [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch Paul Durrant
  2019-01-28  2:33   ` Tian, Kevin
@ 2019-03-08 16:42   ` Jan Beulich
  2019-03-11 17:34     ` Paul Durrant
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:42 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> ...to avoid the need for a VMCS reload when the value of MSR_IA32_BNDCFGS is
> read by the tool-stack.

Is this a good trade-off? I.e. are tool stack reads at least as frequent as
context switches?

Jan




* Re: [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs
  2019-01-07 12:02 ` [PATCH 3/6] x86: move the saved value of MSR_IA32_XSS into struct vcpu_msrs Paul Durrant
  2019-01-28  2:34   ` Tian, Kevin
@ 2019-03-08 16:43   ` Jan Beulich
  1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:43 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> Currently the value is saved directly in struct hvm_vcpu. This patch simply
> co-locates it with other saved MSR values. No functional change.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code
  2019-01-07 12:02 ` [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code Paul Durrant
  2019-01-28  2:37   ` Tian, Kevin
@ 2019-03-08 16:46   ` Jan Beulich
  2019-03-11 17:41     ` Paul Durrant
  1 sibling, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:46 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -164,6 +164,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>  
>          break;
>  
> +    case MSR_IA32_XSS:
> +        if ( !is_hvm_domain(d) || !cp->xstate.xsaves )

Why the is_hvm_domain() check here and below? The CPUID policy bit
alone should properly reflect things, and there should be no change
needed here if we decided to support XSAVES also for PV guests. With
them dropped
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
  2019-01-07 12:02 ` [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs Paul Durrant
@ 2019-03-08 16:50   ` Jan Beulich
  2019-03-11 17:45     ` Paul Durrant
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:50 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> @@ -1472,10 +1468,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>              return -EOPNOTSUPP;
>      /* Checking finished */
>  
> -    if ( hvm_funcs.load_msr )
> -        err = hvm_funcs.load_msr(v, ctxt);
> -
> -    for ( i = 0; !err && i < ctxt->count; ++i )
> +    for ( i = 0; i < ctxt->count; ++i )
>      {
>          switch ( ctxt->msr[i].index )
>          {

err can become set to non-zero in the loop body further down from here,
so I don't think you should alter the loop header.

Jan
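
In other words, only the hook invocation needs to go; the !err guard in the loop header still matters because err is set by the MSR cases inside the loop body. A sketch of the corrected hunk, derived from the diff quoted above rather than from the respun patch:

    -    if ( hvm_funcs.load_msr )
    -        err = hvm_funcs.load_msr(v, ctxt);
    -
         for ( i = 0; !err && i < ctxt->count; ++i )
         {
             switch ( ctxt->msr[i].index )
             {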




* Re: [PATCH 6/6] x86: introduce dr_mask_idx() helper function...
  2019-01-07 12:02 ` [PATCH 6/6] x86: introduce dr_mask_idx() helper function Paul Durrant
@ 2019-03-08 16:58   ` Jan Beulich
  2019-03-08 17:10     ` Andrew Cooper
  0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2019-03-08 16:58 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> @@ -202,13 +201,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>           */
>  #ifdef CONFIG_HVM
>          if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
> -            rdmsrl(msr, *val);
> -        else
> +            rdmsrl(msr, msrs->dr_mask[dr_mask_idx(msr)]);
>  #endif
> -            *val = msrs->dr_mask[
> -                array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
> -                                   ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
> -                                   ARRAY_SIZE(msrs->dr_mask))];
> +
> +        *val = msrs->dr_mask[dr_mask_idx(msr)];
>          break;

While I don't really mind this behavioral change (of updating *msrs),
I'd like to get Andrew's opinion on this from a conceptual pov.

> @@ -317,6 +318,26 @@ struct vcpu_msrs
>      } xss;
>  };
>  
> +static inline unsigned int dr_mask_idx(uint32_t msr)
> +{
> +    switch (msr)

Missing blanks immediately inside the parentheses.

> +    {
> +    default:
> +        ASSERT_UNREACHABLE();
> +        /* Fallthrough */
> +    case MSR_AMD64_DR0_ADDRESS_MASK:
> +        return 0;
> +
> +    case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
> +    {
> +        struct vcpu_msrs msrs; /* only used for ARRAY_SIZE() */

I don't think you need this - you can use e.g.
ARRAY_SIZE(current->arch.vcpu.msrs.dr_mask), can't you?

Jan




* Re: [PATCH 6/6] x86: introduce dr_mask_idx() helper function...
  2019-03-08 16:58   ` Jan Beulich
@ 2019-03-08 17:10     ` Andrew Cooper
  2019-03-11 15:59       ` Paul Durrant
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Cooper @ 2019-03-08 17:10 UTC (permalink / raw)
  To: Jan Beulich, Paul Durrant; +Cc: xen-devel, Wei Liu, Roger Pau Monne

On 08/03/2019 16:58, Jan Beulich wrote:
>>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
>> @@ -202,13 +201,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>           */
>>  #ifdef CONFIG_HVM
>>          if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
>> -            rdmsrl(msr, *val);
>> -        else
>> +            rdmsrl(msr, msrs->dr_mask[dr_mask_idx(msr)]);
>>  #endif
>> -            *val = msrs->dr_mask[
>> -                array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
>> -                                   ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
>> -                                   ARRAY_SIZE(msrs->dr_mask))];
>> +
>> +        *val = msrs->dr_mask[dr_mask_idx(msr)];
>>          break;
> While I don't really mind this behavioral change (of updating *msrs),
> I'd like to get Andrew's opinion on this from a conceptual pov.

So, considering...

>
>> @@ -317,6 +318,26 @@ struct vcpu_msrs
>>      } xss;
>>  };
>>  
>> +static inline unsigned int dr_mask_idx(uint32_t msr)
>> +{
>> +    switch (msr)
> Missing blanks immediately inside the parentheses.
>
>> +    {
>> +    default:
>> +        ASSERT_UNREACHABLE();
>> +        /* Fallthrough */
>> +    case MSR_AMD64_DR0_ADDRESS_MASK:
>> +        return 0;

... this reintroduces a half-Spectre-v1 gadget, I'm -1 for the change.

I don't anticipate this translation being needed anywhere else, which is
why I didn't introduce a helper the first time around.

I'd just drop the patch and leave the code as it is.

~Andrew
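
For reference, the form being retained (taken from the hunk quoted above) clamps the MSR-derived index with array_index_nospec() before the array access, which is the speculation hardening the switch-based helper drops:

    *val = msrs->dr_mask[
        array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
                           ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
                           ARRAY_SIZE(msrs->dr_mask))];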


* Re: [PATCH 6/6] x86: introduce dr_mask_idx() helper function...
  2019-03-08 17:10     ` Andrew Cooper
@ 2019-03-11 15:59       ` Paul Durrant
  0 siblings, 0 replies; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 15:59 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich; +Cc: xen-devel, Wei Liu, Roger Pau Monne

> -----Original Message-----
> From: Andrew Cooper
> Sent: 08 March 2019 17:10
> To: Jan Beulich <JBeulich@suse.com>; Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>
> Subject: Re: [PATCH 6/6] x86: introduce dr_mask_idx() helper function...
> 
> On 08/03/2019 16:58, Jan Beulich wrote:
> >>>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> >> @@ -202,13 +201,10 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >>           */
> >>  #ifdef CONFIG_HVM
> >>          if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
> >> -            rdmsrl(msr, *val);
> >> -        else
> >> +            rdmsrl(msr, msrs->dr_mask[dr_mask_idx(msr)]);
> >>  #endif
> >> -            *val = msrs->dr_mask[
> >> -                array_index_nospec((msr == MSR_AMD64_DR0_ADDRESS_MASK)
> >> -                                   ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
> >> -                                   ARRAY_SIZE(msrs->dr_mask))];
> >> +
> >> +        *val = msrs->dr_mask[dr_mask_idx(msr)];
> >>          break;
> > While I don't really mind this behavioral change (of updating *msrs),
> > I'd like to get Andrew's opinion on this from a conceptual pov.
> 
> So, considering...
> 
> >
> >> @@ -317,6 +318,26 @@ struct vcpu_msrs
> >>      } xss;
> >>  };
> >>
> >> +static inline unsigned int dr_mask_idx(uint32_t msr)
> >> +{
> >> +    switch (msr)
> > Missing blanks immediately inside the parentheses.
> >
> >> +    {
> >> +    default:
> >> +        ASSERT_UNREACHABLE();
> >> +        /* Fallthrough */
> >> +    case MSR_AMD64_DR0_ADDRESS_MASK:
> >> +        return 0;
> 
> ... this reintroduces a half-Spectre-v1 gadget, I'm -1 for the change.
> 
> I don't anticipate this translation being needed anywhere else, which is
> why I didn't introduce a helper the first time around.
> 
> I'd just drop the patch and leave the code as it is.

Ok, I'll drop it.

  Paul

> 
> ~Andrew

* Re: [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
  2019-03-08 16:39   ` Jan Beulich
@ 2019-03-11 16:10     ` Paul Durrant
  2019-03-11 16:17       ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 16:10 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 March 2019 16:40
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
> 
> >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -308,11 +308,16 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
> >      return 1;
> >  }
> >
> > -bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
> > +bool hvm_set_guest_bndcfgs(struct vcpu *v, uint64_t val)
> >  {
> > -    if ( !hvm_funcs.set_guest_bndcfgs ||
> > -         !is_canonical_address(val) ||
> > -         (val & IA32_BNDCFGS_RESERVED) )
> > +    const struct cpuid_policy *cp = v->domain->arch.cpuid;
> > +
> > +    if ( !cp->feat.mpx )
> 
> Is the local variable really worth it?

Maybe not.

> 
> > +        return false;
> > +
> > +    ASSERT(hvm_funcs.set_guest_bndcfgs);
> 
> I think it would be better (more safe for release builds) if this was
> left as part of the if(). Otherwise, ...
> 
> > @@ -342,7 +347,22 @@ bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
> >              /* nothing, best effort only */;
> >      }
> >
> > -    return hvm_funcs.set_guest_bndcfgs(v, val);
> > +    hvm_funcs.set_guest_bndcfgs(v, val);
> 
> ... you risk calling NULL here.

Ok, I'll return false instead.

> 
> > @@ -3472,12 +3494,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >          *msr_content = v->arch.hvm.msr_xss;
> >          break;
> >
> > -    case MSR_IA32_BNDCFGS:
> > -        if ( !d->arch.cpuid->feat.mpx ||
> > -             !hvm_get_guest_bndcfgs(v, msr_content) )
> > -            goto gp_fault;
> > -        break;
> 
> Note how this already checks the CPUID policy, somewhat contrary to
> what you say in the commit message. I think this wants to remain this
> way in guest_{rd,wr}msr() - see all the other CPUID policy checks there.
> That way hvm_get_guest_bndcfgs() can be made consistent with the
> hook pointer type again, and it can remain an inline function as before.

I thought it was neater to put the check inside the helper but I can pull it out into the switch statement if that's preferred.

  Paul

> 
> Jan
> 
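
A sketch of the guard being agreed to here: the hook NULL check stays in the if() so release builds can never call through a NULL pointer, and the function simply fails the write instead (reconstructed from the hunks quoted above, not from the respun patch):

        if ( !hvm_funcs.set_guest_bndcfgs ||
             !is_canonical_address(val) ||
             (val & IA32_BNDCFGS_RESERVED) )
            return false;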



* Re: [PATCH 1/6] x86: stop handling MSR_IA32_BNDCFGS save/restore in implementation code
  2019-03-11 16:10     ` Paul Durrant
@ 2019-03-11 16:17       ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2019-03-11 16:17 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

>>> On 11.03.19 at 17:10, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 08 March 2019 16:40
>> 
>> >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
>> > @@ -3472,12 +3494,6 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>> >          *msr_content = v->arch.hvm.msr_xss;
>> >          break;
>> >
>> > -    case MSR_IA32_BNDCFGS:
>> > -        if ( !d->arch.cpuid->feat.mpx ||
>> > -             !hvm_get_guest_bndcfgs(v, msr_content) )
>> > -            goto gp_fault;
>> > -        break;
>> 
>> Note how this already checks the CPUID policy, somewhat contrary to
>> what you say in the commit message. I think this wants to remain this
>> way in guest_{rd,wr}msr() - see all the other CPUID policy checks there.
>> That way hvm_get_guest_bndcfgs() can be made consistent with the
>> hook pointer type again, and it can remain an inline function as before.
> 
> I thought it was neater to put the check inside the helper but I can pull it 
> out into the switch statement if that's preferred.

Unless Andrew explicitly tells you otherwise, please do for the sake of
consistency with other code in that switch().

Jan
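
The write side would then mirror the read side, with the policy check sitting in the guest_wrmsr() switch rather than inside the helper. A minimal sketch (val is passed by value in guest_wrmsr(); this is an illustration, not the respun patch):

    case MSR_IA32_BNDCFGS:
        if ( !cp->feat.mpx ||
             !hvm_set_guest_bndcfgs(v, val) )
            goto gp_fault;
        break;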




* Re: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
  2019-03-08 16:42   ` Jan Beulich
@ 2019-03-11 17:34     ` Paul Durrant
  0 siblings, 0 replies; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 17:34 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 March 2019 16:43
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 2/6] x86: save GUEST_BNDCFGS on context switch...
> 
> >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> > ...to avoid the need for a VMCS reload when the value of MSR_IA32_BNDCFGS is
> > read by the tool-stack.
> 
> Is this a good trade-off? I.e. are tool stack reads at least as frequent as
> context switches?
> 

In the absence of any comment from Andrew on relative cost, I'll drop this.

> Jan
> 



* Re: [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code
  2019-03-08 16:46   ` Jan Beulich
@ 2019-03-11 17:41     ` Paul Durrant
  0 siblings, 0 replies; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 17:41 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Jun Nakajima, xen-devel,
	Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 March 2019 16:47
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 4/6] x86: stop handling MSR_IA32_XSS save/restore in implementation code
> 
> >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -164,6 +164,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >
> >          break;
> >
> > +    case MSR_IA32_XSS:
> > +        if ( !is_hvm_domain(d) || !cp->xstate.xsaves )
> 
> Why the is_hvm_domain() check here and below?

Only there because the older code was HVM-specific.

> The CPUID policy bit
> alone should properly reflect things, and there should be no change
> needed here if we decided to support XSAVES also for PV guests. With
> them dropped
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Ok. Thanks,

  Paul

> 
> Jan
> 



* Re: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
  2019-03-08 16:50   ` Jan Beulich
@ 2019-03-11 17:45     ` Paul Durrant
  2019-03-11 17:47       ` Paul Durrant
  0 siblings, 1 reply; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 17:45 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 March 2019 16:50
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
> 
> >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> > @@ -1472,10 +1468,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
> >              return -EOPNOTSUPP;
> >      /* Checking finished */
> >
> > -    if ( hvm_funcs.load_msr )
> > -        err = hvm_funcs.load_msr(v, ctxt);
> > -
> > -    for ( i = 0; !err && i < ctxt->count; ++i )
> > +    for ( i = 0; i < ctxt->count; ++i )
> >      {
> >          switch ( ctxt->msr[i].index )
> >          {
> 
> err can become set to non-zero in the loop body further down from here,
> so I don't think you should alter the loop header.

True. Bailing early is a change of semantic, so I'll drop the check.

  Paul

> 
> Jan
> 



* Re: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
  2019-03-11 17:45     ` Paul Durrant
@ 2019-03-11 17:47       ` Paul Durrant
  0 siblings, 0 replies; 26+ messages in thread
From: Paul Durrant @ 2019-03-11 17:47 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: Andrew Cooper, Wei Liu, 'xen-devel', Roger Pau Monne

> -----Original Message-----
> From: Paul Durrant
> Sent: 11 March 2019 17:46
> To: 'Jan Beulich' <JBeulich@suse.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: RE: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: 08 March 2019 16:50
> > To: Paul Durrant <Paul.Durrant@citrix.com>
> > Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu
> > <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> > Subject: Re: [PATCH 5/6] x86: remove defunct init/load/save_msr() hvm_funcs
> >
> > >>> On 07.01.19 at 13:02, <paul.durrant@citrix.com> wrote:
> > > @@ -1472,10 +1468,7 @@ static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
> > >              return -EOPNOTSUPP;
> > >      /* Checking finished */
> > >
> > > -    if ( hvm_funcs.load_msr )
> > > -        err = hvm_funcs.load_msr(v, ctxt);
> > > -
> > > -    for ( i = 0; !err && i < ctxt->count; ++i )
> > > +    for ( i = 0; i < ctxt->count; ++i )
> > >      {
> > >          switch ( ctxt->msr[i].index )
> > >          {
> >
> > err can become set to non-zero in the loop body further down from here,
> > so I don't think you should alter the loop header.
> 
> True. Bailing early is a change of semantic, so I'll drop the check.

       ^ Not

> 
>   Paul
> 
> >
> > Jan
> >


