All of lore.kernel.org
* [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges
@ 2018-09-11 18:56 Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Andrew Cooper @ 2018-09-11 18:56 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

Split apart from v2, to make all the dispatching code obvious in patch 1.  See
individual patches for changes.

Andrew Cooper (3):
  x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr,rd}msr()
  x86/viridian: Clean up Viridian MSR infrastructure
  x86: Clean up the Xen MSR infrastructure

 xen/arch/x86/hvm/svm/svm.c         | 27 +++-------------------
 xen/arch/x86/hvm/viridian.c        | 46 ++++++++++++++------------------------
 xen/arch/x86/hvm/vmx/vmx.c         | 28 ++++-------------------
 xen/arch/x86/msr.c                 | 35 +++++++++++++++++++++++++----
 xen/arch/x86/pv/emul-priv-op.c     |  6 -----
 xen/arch/x86/traps.c               | 29 +++++++++++-------------
 xen/include/asm-x86/hvm/viridian.h | 11 ++-------
 xen/include/asm-x86/processor.h    |  4 ++--
 8 files changed, 72 insertions(+), 114 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr()
  2018-09-11 18:56 [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges Andrew Cooper
@ 2018-09-11 18:56 ` Andrew Cooper
  2018-09-12 12:00   ` [PATCH v4 " Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridian: Clean up Viridian MSR infrastructure Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 3/3] x86: Clean up the Xen " Andrew Cooper
  2 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2018-09-11 18:56 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Despite the complicated diff in {svm,vmx}_msr_write_intercept(), it is just
the 0 case losing one level of indentation, as part of removing the call to
wrmsr_hypervisor_regs().

The case blocks in guest_{wr,rd}msr() use raw numbers, partly for consistency
with the CPUID side of things, but mainly because this is clearer code to
follow.  In particular, the Xen block may overlap with the Viridian block if
Viridian is not enabled for the domain, and trying to express this with named
literals caused more confusion than it solved.

Future changes will clean up the individual APIs, including allowing these
MSRs to be usable for vcpus other than current (no callers exist with v !=
current).
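
The range overlap described above can be sketched with a toy dispatcher
(illustrative only, not the actual Xen code; it uses GCC's case-range
extension, as the hypervisor does):

```c
#include <stdbool.h>
#include <stdint.h>

enum handler { H_NONE, H_VIRIDIAN, H_XEN };

/* Toy dispatcher: with Viridian enabled, 0x40000000-0x400001ff belongs to
 * the Viridian handler; without it, the block falls through and Xen's
 * handler claims that range in addition to 0x40000200-0x400002ff. */
static enum handler dispatch(uint32_t msr, bool viridian)
{
    switch ( msr )
    {
    case 0x40000000 ... 0x400001ff:
        if ( viridian )
            return H_VIRIDIAN;
        /* Fallthrough. */
    case 0x40000200 ... 0x400002ff:
        return H_XEN;
    default:
        return H_NONE;
    }
}
```

Naming 0x40000000 is awkward precisely because it is a Viridian MSR for some
domains and a Xen MSR for others; a raw number avoids picking a misleading
name.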

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v3:
 * Split out of previous series.  Retain appropriate R-by's
---
 xen/arch/x86/hvm/svm/svm.c     | 27 +++------------------------
 xen/arch/x86/hvm/vmx/vmx.c     | 28 ++++------------------------
 xen/arch/x86/msr.c             | 39 +++++++++++++++++++++++++++++++++++----
 xen/arch/x86/pv/emul-priv-op.c |  6 ------
 4 files changed, 42 insertions(+), 58 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 34d55b4..ef8f271 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2016,10 +2016,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         else if ( ret )
             break;
 
-        if ( rdmsr_viridian_regs(msr, msr_content) ||
-             rdmsr_hypervisor_regs(msr, msr_content) )
-            break;
-
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
 
@@ -2218,28 +2214,11 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         else if ( ret )
             break;
 
-        if ( wrmsr_viridian_regs(msr, msr_content) )
+        /* Match up with the RDMSR side; ultimately this should go away. */
+        if ( rdmsr_safe(msr, msr_content) == 0 )
             break;
 
-        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
-        {
-        case -ERESTART:
-            result = X86EMUL_RETRY;
-            break;
-        case 0:
-            /*
-             * Match up with the RDMSR side for now; ultimately this entire
-             * case block should go away.
-             */
-            if ( rdmsr_safe(msr, msr_content) == 0 )
-                break;
-            goto gpf;
-        case 1:
-            break;
-        default:
-            goto gpf;
-        }
-        break;
+        goto gpf;
     }
 
     return result;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b2e1a28..bf90e22 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2965,10 +2965,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             break;
         }
 
-        if ( rdmsr_viridian_regs(msr, msr_content) ||
-             rdmsr_hypervisor_regs(msr, msr_content) )
-            break;
-
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
 
@@ -3249,31 +3245,15 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         if ( passive_domain_do_wrmsr(msr, msr_content) )
             return X86EMUL_OKAY;
 
-        if ( wrmsr_viridian_regs(msr, msr_content) ) 
-            break;
-
         if ( vmx_write_guest_msr(v, msr, msr_content) == 0 ||
              is_last_branch_msr(msr) )
             break;
 
-        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
-        {
-        case -ERESTART:
-            return X86EMUL_RETRY;
-        case 0:
-            /*
-             * Match up with the RDMSR side for now; ultimately this
-             * entire case block should go away.
-             */
-            if ( rdmsr_safe(msr, msr_content) == 0 )
-                break;
-            goto gp_fault;
-        case 1:
+        /* Match up with the RDMSR side; ultimately this should go away. */
+        if ( rdmsr_safe(msr, msr_content) == 0 )
             break;
-        default:
-            goto gp_fault;
-        }
-        break;
+
+        goto gp_fault;
     }
 
     return X86EMUL_OKAY;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b675f3a..7f60e85 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -114,9 +114,11 @@ int init_vcpu_msr_policy(struct vcpu *v)
 
 int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
-    const struct msr_policy *mp = v->domain->arch.msr;
+    const struct domain *d = v->domain;
+    const struct cpuid_policy *cp = d->arch.cpuid;
+    const struct msr_policy *mp = d->arch.msr;
     const struct vcpu_msrs *msrs = v->arch.msrs;
+    int ret = X86EMUL_OKAY;
 
     switch ( msr )
     {
@@ -145,11 +147,25 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = msrs->misc_features_enables.raw;
         break;
 
+    case 0x40000000 ... 0x400001ff:
+        if ( is_viridian_domain(d) )
+        {
+            ret = (rdmsr_viridian_regs(msr, val)
+                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            break;
+        }
+
+        /* Fallthrough. */
+    case 0x40000200 ... 0x400002ff:
+        ret = (rdmsr_hypervisor_regs(msr, val)
+               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+    return ret;
 
  gp_fault:
     return X86EMUL_EXCEPTION;
@@ -162,6 +178,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     const struct cpuid_policy *cp = d->arch.cpuid;
     const struct msr_policy *mp = d->arch.msr;
     struct vcpu_msrs *msrs = v->arch.msrs;
+    int ret = X86EMUL_OKAY;
 
     switch ( msr )
     {
@@ -252,11 +269,25 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         break;
     }
 
+    case 0x40000000 ... 0x400001ff:
+        if ( is_viridian_domain(d) )
+        {
+            ret = (wrmsr_viridian_regs(msr, val)
+                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            break;
+        }
+
+        /* Fallthrough. */
+    case 0x40000200 ... 0x400002ff:
+        ret = (wrmsr_hypervisor_regs(msr, val) == 1
+               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+    return ret;
 
  gp_fault:
     return X86EMUL_EXCEPTION;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 45941ea..6422f91 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -950,9 +950,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
         }
         /* fall through */
     default:
-        if ( rdmsr_hypervisor_regs(reg, val) )
-            return X86EMUL_OKAY;
-
         rc = vmce_rdmsr(reg, val);
         if ( rc < 0 )
             break;
@@ -1149,9 +1146,6 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        if ( wrmsr_hypervisor_regs(reg, val) == 1 )
-            return X86EMUL_OKAY;
-
         rc = vmce_wrmsr(reg, val);
         if ( rc < 0 )
             break;
-- 
2.1.4




* [PATCH v3 2/3] x86/viridian: Clean up Viridian MSR infrastructure
  2018-09-11 18:56 [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
@ 2018-09-11 18:56 ` Andrew Cooper
  2018-09-12  8:14   ` Sergey Dyasli
  2018-09-13 12:28   ` Jan Beulich
  2018-09-11 18:56 ` [PATCH v3 3/3] x86: Clean up the Xen " Andrew Cooper
  2 siblings, 2 replies; 19+ messages in thread
From: Andrew Cooper @ 2018-09-11 18:56 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Sergey Dyasli, Wei Liu, Jan Beulich, Roger Pau Monné

Rename the functions to guest_{rd,wr}msr_viridian() for consistency, and
because the _regs() suffix isn't very appropriate.

Update them to take a vcpu pointer rather than presuming that they act on
current, which is safe for all implemented operations, and switch their return
ABI to use X86EMUL_*.

The default cases no longer need to deal with MSRs outside the Viridian range.
Drop the printks to debug builds only, and have them identify the value being
written.
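
The ABI change can be sketched with a toy handler (hypothetical stand-in
structures and illustrative X86EMUL_* values, not the real Xen definitions):
where the old functions returned 0 for "not handled" and 1 for "handled", the
new ones return X86EMUL_OKAY or X86EMUL_EXCEPTION directly.

```c
#include <stdbool.h>
#include <stdint.h>

#define X86EMUL_OKAY      0  /* illustrative values */
#define X86EMUL_EXCEPTION 2

#define HV_X64_MSR_TSC_FREQUENCY 0x40000022

/* Toy stand-ins for Xen's structures. */
struct domain { bool no_freq; uint32_t tsc_khz; };
struct vcpu { struct domain *domain; };

/* New-style handler: explicit vcpu parameter, X86EMUL_* return.  The
 * caller is assumed to have already checked is_viridian_domain(). */
static int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx,
                                uint64_t *val)
{
    const struct domain *d = v->domain;

    switch ( idx )
    {
    case HV_X64_MSR_TSC_FREQUENCY:
        if ( d->no_freq )
            return X86EMUL_EXCEPTION;       /* was "return 0" */
        *val = (uint64_t)d->tsc_khz * 1000ull;
        return X86EMUL_OKAY;                /* was "return 1" */

    default:
        return X86EMUL_EXCEPTION;           /* unimplemented => #GP */
    }
}
```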

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>

v3:
 * Clean up after splitting the series.  Retain appropriate R-by's
---
 xen/arch/x86/hvm/viridian.c        | 46 ++++++++++++++------------------------
 xen/arch/x86/msr.c                 |  6 ++---
 xen/include/asm-x86/hvm/viridian.h | 11 ++-------
 3 files changed, 21 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index a23d087..48f7a31 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -88,9 +88,6 @@
 #define HV_X64_MSR_CRASH_P4                      0x40000104
 #define HV_X64_MSR_CRASH_CTL                     0x40000105
 
-#define VIRIDIAN_MSR_MIN HV_X64_MSR_GUEST_OS_ID
-#define VIRIDIAN_MSR_MAX HV_X64_MSR_CRASH_CTL
-
 /* Viridian Hypercall Status Codes. */
 #define HV_STATUS_SUCCESS                       0x0000
 #define HV_STATUS_INVALID_HYPERCALL_CODE        0x0002
@@ -554,13 +551,11 @@ static void update_reference_tsc(struct domain *d, bool_t initialize)
     put_page_and_type(page);
 }
 
-int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
+int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val)
 {
-    struct vcpu *v = current;
     struct domain *d = v->domain;
 
-    if ( !is_viridian_domain(d) )
-        return 0;
+    ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
@@ -615,7 +610,7 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
 
     case HV_X64_MSR_REFERENCE_TSC:
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
-            return 0;
+            return X86EMUL_EXCEPTION;
 
         perfc_incr(mshv_wrmsr_tsc_msr);
         d->arch.hvm.viridian.reference_tsc.raw = val;
@@ -659,14 +654,12 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
     }
 
     default:
-        if ( idx >= VIRIDIAN_MSR_MIN && idx <= VIRIDIAN_MSR_MAX )
-            gprintk(XENLOG_WARNING, "write to unimplemented MSR %#x\n",
-                    idx);
-
-        return 0;
+        gdprintk(XENLOG_INFO,
+                 "Write %016"PRIx64" to unimplemented MSR %#x\n", val, idx);
+        return X86EMUL_EXCEPTION;
     }
 
-    return 1;
+    return X86EMUL_OKAY;
 }
 
 static int64_t raw_trc_val(struct domain *d)
@@ -702,13 +695,11 @@ void viridian_time_ref_count_thaw(struct domain *d)
         trc->off = (int64_t)trc->val - raw_trc_val(d);
 }
 
-int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
+int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct vcpu *v = current;
     struct domain *d = v->domain;
-    
-    if ( !is_viridian_domain(d) )
-        return 0;
+
+    ASSERT(is_viridian_domain(d));
 
     switch ( idx )
     {
@@ -729,7 +720,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
 
     case HV_X64_MSR_TSC_FREQUENCY:
         if ( viridian_feature_mask(d) & HVMPV_no_freq )
-            return 0;
+            return X86EMUL_EXCEPTION;
 
         perfc_incr(mshv_rdmsr_tsc_frequency);
         *val = (uint64_t)d->arch.tsc_khz * 1000ull;
@@ -737,7 +728,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
 
     case HV_X64_MSR_APIC_FREQUENCY:
         if ( viridian_feature_mask(d) & HVMPV_no_freq )
-            return 0;
+            return X86EMUL_EXCEPTION;
 
         perfc_incr(mshv_rdmsr_apic_frequency);
         *val = 1000000000ull / APIC_BUS_CYCLE_NS;
@@ -761,7 +752,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
 
     case HV_X64_MSR_REFERENCE_TSC:
         if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
-            return 0;
+            return X86EMUL_EXCEPTION;
 
         perfc_incr(mshv_rdmsr_tsc_msr);
         *val = d->arch.hvm.viridian.reference_tsc.raw;
@@ -774,7 +765,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
         trc = &d->arch.hvm.viridian.time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
-            return 0;
+            return X86EMUL_EXCEPTION;
 
         if ( !test_and_set_bit(_TRC_accessed, &trc->flags) )
             printk(XENLOG_G_INFO "d%d: VIRIDIAN MSR_TIME_REF_COUNT: accessed\n",
@@ -808,14 +799,11 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
     }
 
     default:
-        if ( idx >= VIRIDIAN_MSR_MIN && idx <= VIRIDIAN_MSR_MAX )
-            gprintk(XENLOG_WARNING, "read from unimplemented MSR %#x\n",
-                    idx);
-
-        return 0;
+        gdprintk(XENLOG_INFO, "Read from unimplemented MSR %#x\n", idx);
+        return X86EMUL_EXCEPTION;
     }
 
-    return 1;
+    return X86EMUL_OKAY;
 }
 
 void viridian_vcpu_deinit(struct vcpu *v)
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 7f60e85..cf0dc27 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -150,8 +150,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
-            ret = (rdmsr_viridian_regs(msr, val)
-                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            ret = guest_rdmsr_viridian(v, msr, val);
             break;
         }
 
@@ -272,8 +271,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     case 0x40000000 ... 0x400001ff:
         if ( is_viridian_domain(d) )
         {
-            ret = (wrmsr_viridian_regs(msr, val)
-                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            ret = guest_wrmsr_viridian(v, msr, val);
             break;
         }
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 4cbd133..071fb44 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -101,15 +101,8 @@ struct viridian_domain
 void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
                            uint32_t subleaf, struct cpuid_leaf *res);
 
-int
-wrmsr_viridian_regs(
-    uint32_t idx,
-    uint64_t val);
-
-int
-rdmsr_viridian_regs(
-    uint32_t idx,
-    uint64_t *val);
+int guest_wrmsr_viridian(struct vcpu *v, uint32_t idx, uint64_t val);
+int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val);
 
 int
 viridian_hypercall(struct cpu_user_regs *regs);
-- 
2.1.4




* [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-11 18:56 [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
  2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridian: Clean up Viridian MSR infrastructure Andrew Cooper
@ 2018-09-11 18:56 ` Andrew Cooper
  2018-09-12  8:29   ` Sergey Dyasli
  2 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2018-09-11 18:56 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Sergey Dyasli, Wei Liu, Jan Beulich, Roger Pau Monné

Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
suffix isn't very appropriate.

Update them to take a vcpu pointer rather than presuming that they act on
current, and switch to using X86EMUL_* return values.
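
The base-shifting trick visible in the diff (moving Xen's MSRs out of the way
when Viridian occupies the architectural 0x400000xx range) can be modelled
minimally; the helper names here are made up for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Xen's hypercall-page MSR is index 0 from the base; the base itself
 * depends on whether Viridian owns 0x40000000-0x400001ff. */
static uint32_t xen_msr_base(bool viridian)
{
    return viridian ? 0x40000200 : 0x40000000;
}

static bool is_hypercall_page_msr(uint32_t idx, bool viridian)
{
    /* Unsigned subtraction: anything below the base wraps to a huge
     * value and compares unequal, so no separate lower-bound check. */
    return (idx - xen_msr_base(viridian)) == 0;
}
```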

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>

v3:
 * Clean up after splitting the series.
---
 xen/arch/x86/msr.c              |  6 ++----
 xen/arch/x86/traps.c            | 29 +++++++++++++----------------
 xen/include/asm-x86/processor.h |  4 ++--
 3 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index cf0dc27..8f02a89 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
         /* Fallthrough. */
     case 0x40000200 ... 0x400002ff:
-        ret = (rdmsr_hypervisor_regs(msr, val)
-               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+        ret = guest_rdmsr_xen(v, msr, val);
         break;
 
     default:
@@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
         /* Fallthrough. */
     case 0x40000200 ... 0x400002ff:
-        ret = (wrmsr_hypervisor_regs(msr, val) == 1
-               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+        ret = guest_wrmsr_xen(v, msr, val);
         break;
 
     default:
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7c17806..3988753 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
           trapnr, trapstr(trapnr), regs->error_code);
 }
 
-/* Returns 0 if not handled, and non-0 for success. */
-int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
+int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
 {
-    struct domain *d = current->domain;
+    const struct domain *d = v->domain;
     /* Optionally shift out of the way of Viridian architectural MSRs. */
     uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
 
     switch ( idx - base )
     {
     case 0: /* Write hypercall page MSR.  Read as zero. */
-    {
         *val = 0;
-        return 1;
-    }
+        return X86EMUL_OKAY;
     }
 
-    return 0;
+    return X86EMUL_EXCEPTION;
 }
 
-/* Returns 1 if handled, 0 if not and -Exx for error. */
-int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
+int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
 {
-    struct domain *d = current->domain;
+    struct domain *d = v->domain;
     /* Optionally shift out of the way of Viridian architectural MSRs. */
     uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
 
@@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
             gdprintk(XENLOG_WARNING,
                      "wrmsr hypercall page index %#x unsupported\n",
                      page_index);
-            return 0;
+            return X86EMUL_EXCEPTION;
         }
 
         page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
@@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
             if ( p2m_is_paging(t) )
             {
                 p2m_mem_paging_populate(d, gmfn);
-                return -ERESTART;
+                return X86EMUL_RETRY;
             }
 
             gdprintk(XENLOG_WARNING,
                      "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
                      gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
-            return 0;
+            return X86EMUL_EXCEPTION;
         }
 
         hypercall_page = __map_domain_page(page);
@@ -836,11 +832,12 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
         unmap_domain_page(hypercall_page);
 
         put_page_and_type(page);
-        return 1;
-    }
+        return X86EMUL_OKAY;
     }
 
-    return 0;
+    default:
+        return X86EMUL_EXCEPTION;
+    }
 }
 
 void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index a166802..03555e1 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -554,8 +554,8 @@ unsigned long alloc_stub_page(unsigned int cpu, unsigned long *mfn);
 
 void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
                              uint32_t subleaf, struct cpuid_leaf *res);
-int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
-int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
+int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val);
+int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
 int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
-- 
2.1.4




* Re: [PATCH v3 2/3] x86/viridian: Clean up Viridian MSR infrastructure
  2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridian: Clean up Viridian MSR infrastructure Andrew Cooper
@ 2018-09-12  8:14   ` Sergey Dyasli
  2018-09-13 12:28   ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Sergey Dyasli @ 2018-09-12  8:14 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> Rename the functions to guest_{rd,wr}msr_viridian() for consistency, and
> because the _regs() suffix isn't very appropriate.
> 
> Update them to take a vcpu pointer rather than presuming that they act on
> current, which is safe for all implemented operations, and switch their return
> ABI to use X86EMUL_*.
> 
> The default cases no longer need to deal with MSRs outside the Viridian range.
> Drop the printks to debug builds only, and have them identify the value being
> written.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> 
> v3:
>  * Clean up after splitting the series.  Retain appropriate R-by's
> ---
>  xen/arch/x86/hvm/viridian.c        | 46 ++++++++++++++------------------------
>  xen/arch/x86/msr.c                 |  6 ++---
>  xen/include/asm-x86/hvm/viridian.h | 11 ++-------
>  3 files changed, 21 insertions(+), 42 deletions(-)
> 

Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

-- 
Thanks,
Sergey


* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-11 18:56 ` [PATCH v3 3/3] x86: Clean up the Xen " Andrew Cooper
@ 2018-09-12  8:29   ` Sergey Dyasli
  2018-09-12  9:12     ` Andrew Cooper
  0 siblings, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2018-09-12  8:29 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
> suffix isn't very appropriate.
> 
> Update them to take a vcpu pointer rather than presuming that they act on
> current, and switch to using X86EMUL_* return values.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> 
> v3:
>  * Clean up after splitting the series.
> ---
>  xen/arch/x86/msr.c              |  6 ++----
>  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
>  xen/include/asm-x86/processor.h |  4 ++--
>  3 files changed, 17 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index cf0dc27..8f02a89 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  
>          /* Fallthrough. */
>      case 0x40000200 ... 0x400002ff:
> -        ret = (rdmsr_hypervisor_regs(msr, val)
> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> +        ret = guest_rdmsr_xen(v, msr, val);
>          break;
>  
>      default:
> @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  
>          /* Fallthrough. */
>      case 0x40000200 ... 0x400002ff:
> -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> +        ret = guest_wrmsr_xen(v, msr, val);
>          break;
>  
>      default:
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 7c17806..3988753 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
>            trapnr, trapstr(trapnr), regs->error_code);
>  }
>  
> -/* Returns 0 if not handled, and non-0 for success. */
> -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
> +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
>  {
> -    struct domain *d = current->domain;
> +    const struct domain *d = v->domain;
>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>  
>      switch ( idx - base )
>      {
>      case 0: /* Write hypercall page MSR.  Read as zero. */
> -    {
>          *val = 0;
> -        return 1;
> -    }
> +        return X86EMUL_OKAY;
>      }
>  
> -    return 0;
> +    return X86EMUL_EXCEPTION;
>  }
>  
> -/* Returns 1 if handled, 0 if not and -Exx for error. */
> -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
>  {
> -    struct domain *d = current->domain;
> +    struct domain *d = v->domain;
>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>  
> @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>              gdprintk(XENLOG_WARNING,
>                       "wrmsr hypercall page index %#x unsupported\n",
>                       page_index);
> -            return 0;
> +            return X86EMUL_EXCEPTION;
>          }
>  
>          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>              if ( p2m_is_paging(t) )
>              {
>                  p2m_mem_paging_populate(d, gmfn);
> -                return -ERESTART;
> +                return X86EMUL_RETRY;

Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
with this patch, X86EMUL_RETRY will actually be returned. I don't think
that callers can handle this situation.

E.g. the code from vmx_vmexit_handler():

    case EXIT_REASON_MSR_WRITE:
        switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
        {
        case X86EMUL_OKAY:
            update_guest_eip(); /* Safe: WRMSR */
            break;

        case X86EMUL_EXCEPTION:
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
            break;
        }
        break;
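
To make the gap concrete, here is a toy model of that dispatch (illustrative
constants, not the Xen source): with no case for X86EMUL_RETRY, the handler
neither advances the guest's instruction pointer nor injects a fault, so the
guest would simply re-execute WRMSR.

```c
#define X86EMUL_OKAY      0  /* illustrative values */
#define X86EMUL_EXCEPTION 2
#define X86EMUL_RETRY     3

enum action { NOTHING, ADVANCE_EIP, INJECT_GP };

static enum action handle_wrmsr_exit(int rc)
{
    enum action act = NOTHING;

    switch ( rc )
    {
    case X86EMUL_OKAY:
        act = ADVANCE_EIP;
        break;
    case X86EMUL_EXCEPTION:
        act = INJECT_GP;
        break;
    /* No case for X86EMUL_RETRY: it silently falls out of the switch. */
    }

    return act;
}
```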

>              }
>  
>              gdprintk(XENLOG_WARNING,
>                       "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
>                       gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
> -            return 0;
> +            return X86EMUL_EXCEPTION;
>          }
>  
>          hypercall_page = __map_domain_page(page);
> @@ -836,11 +832,12 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>          unmap_domain_page(hypercall_page);
>  
>          put_page_and_type(page);
> -        return 1;
> -    }
> +        return X86EMUL_OKAY;
>      }
>  
> -    return 0;
> +    default:
> +        return X86EMUL_EXCEPTION;
> +    }
>  }
>  
>  void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
> index a166802..03555e1 100644
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -554,8 +554,8 @@ unsigned long alloc_stub_page(unsigned int cpu, unsigned long *mfn);
>  
>  void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>                               uint32_t subleaf, struct cpuid_leaf *res);
> -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
> -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
> +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val);
> +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val);
>  
>  void microcode_set_module(unsigned int);
>  int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
-- 
Thanks,
Sergey


* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  8:29   ` Sergey Dyasli
@ 2018-09-12  9:12     ` Andrew Cooper
  2018-09-12  9:17       ` Jan Beulich
  2018-09-12  9:46       ` Sergey Dyasli
  0 siblings, 2 replies; 19+ messages in thread
From: Andrew Cooper @ 2018-09-12  9:12 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On 12/09/18 09:29, Sergey Dyasli wrote:
> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>> Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
>> suffix isn't very appropriate.
>>
>> Update them to take a vcpu pointer rather than presuming that they act on
>> current, and switch to using X86EMUL_* return values.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Sergey Dyasli <sergey.dyasli@citrix.com>
>>
>> v3:
>>  * Clean up after splitting the series.
>> ---
>>  xen/arch/x86/msr.c              |  6 ++----
>>  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
>>  xen/include/asm-x86/processor.h |  4 ++--
>>  3 files changed, 17 insertions(+), 22 deletions(-)
>>
>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
>> index cf0dc27..8f02a89 100644
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>  
>>          /* Fallthrough. */
>>      case 0x40000200 ... 0x400002ff:
>> -        ret = (rdmsr_hypervisor_regs(msr, val)
>> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
>> +        ret = guest_rdmsr_xen(v, msr, val);
>>          break;
>>  
>>      default:
>> @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>  
>>          /* Fallthrough. */
>>      case 0x40000200 ... 0x400002ff:
>> -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
>> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
>> +        ret = guest_wrmsr_xen(v, msr, val);
>>          break;
>>  
>>      default:
>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index 7c17806..3988753 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
>>            trapnr, trapstr(trapnr), regs->error_code);
>>  }
>>  
>> -/* Returns 0 if not handled, and non-0 for success. */
>> -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
>> +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
>>  {
>> -    struct domain *d = current->domain;
>> +    const struct domain *d = v->domain;
>>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>>  
>>      switch ( idx - base )
>>      {
>>      case 0: /* Write hypercall page MSR.  Read as zero. */
>> -    {
>>          *val = 0;
>> -        return 1;
>> -    }
>> +        return X86EMUL_OKAY;
>>      }
>>  
>> -    return 0;
>> +    return X86EMUL_EXCEPTION;
>>  }
>>  
>> -/* Returns 1 if handled, 0 if not and -Exx for error. */
>> -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>> +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
>>  {
>> -    struct domain *d = current->domain;
>> +    struct domain *d = v->domain;
>>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>>  
>> @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>              gdprintk(XENLOG_WARNING,
>>                       "wrmsr hypercall page index %#x unsupported\n",
>>                       page_index);
>> -            return 0;
>> +            return X86EMUL_EXCEPTION;
>>          }
>>  
>>          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>              if ( p2m_is_paging(t) )
>>              {
>>                  p2m_mem_paging_populate(d, gmfn);
>> -                return -ERESTART;
>> +                return X86EMUL_RETRY;
> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
> with this patch, X86EMUL_RETRY will actually be returned. I don't think
> that callers can handle this situation.
>
> E.g. the code from vmx_vmexit_handler():
>
>     case EXIT_REASON_MSR_WRITE:
>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>         {
>         case X86EMUL_OKAY:
>             update_guest_eip(); /* Safe: WRMSR */
>             break;
>
>         case X86EMUL_EXCEPTION:
>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>             break;
>         }
>         break;

Hmm lovely, so it was broken before, but should be correct now.

RETRY has caused an entry to go onto the paging ring, which will pause
the vcpu until a reply occurs, after which we will re-enter the guest
without having moved RIP forwards, re-execute the wrmsr instruction, and
this time succeed because the frame has been paged in.

~Andrew


* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  9:12     ` Andrew Cooper
@ 2018-09-12  9:17       ` Jan Beulich
  2018-09-12  9:24         ` Andrew Cooper
  2018-09-12  9:46       ` Sergey Dyasli
  1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-09-12  9:17 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, xen-devel, Wei Liu, Roger Pau Monne

>>> On 12.09.18 at 11:12, <andrew.cooper3@citrix.com> wrote:
> On 12/09/18 09:29, Sergey Dyasli wrote:
>> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>              if ( p2m_is_paging(t) )
>>>              {
>>>                  p2m_mem_paging_populate(d, gmfn);
>>> -                return -ERESTART;
>>> +                return X86EMUL_RETRY;
>> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
>> with this patch, X86EMUL_RETRY will actually be returned. I don't think
>> that callers can handle this situation.
>>
>> E.g. the code from vmx_vmexit_handler():
>>
>>     case EXIT_REASON_MSR_WRITE:
>>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>>         {
>>         case X86EMUL_OKAY:
>>             update_guest_eip(); /* Safe: WRMSR */
>>             break;
>>
>>         case X86EMUL_EXCEPTION:
>>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>             break;
>>         }
>>         break;
> 
> Hmm lovely, so it was broken before, but should be correct now.
> 
> RETRY has caused an entry to go onto the paging ring, which will pause
> the vcpu until a reply occurs, after which we will re-enter the guest
> without having moved RIP forwards, re-execute the wrmsr instruction, and
> this time succeed because the frame has been paged in.

But then perhaps split out the actual bugfix into a prereq patch,
especially as that one may want backporting?

Jan




* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  9:17       ` Jan Beulich
@ 2018-09-12  9:24         ` Andrew Cooper
  2018-09-12 10:05           ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2018-09-12  9:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Sergey Dyasli, xen-devel, Wei Liu, Roger Pau Monne

On 12/09/18 10:17, Jan Beulich wrote:
>>>> On 12.09.18 at 11:12, <andrew.cooper3@citrix.com> wrote:
>> On 12/09/18 09:29, Sergey Dyasli wrote:
>>> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>>>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>              if ( p2m_is_paging(t) )
>>>>              {
>>>>                  p2m_mem_paging_populate(d, gmfn);
>>>> -                return -ERESTART;
>>>> +                return X86EMUL_RETRY;
>>> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
>>> with this patch, X86EMUL_RETRY will actually be returned. I don't think
>>> that callers can handle this situation.
>>>
>>> E.g. the code from vmx_vmexit_handler():
>>>
>>>     case EXIT_REASON_MSR_WRITE:
>>>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>>>         {
>>>         case X86EMUL_OKAY:
>>>             update_guest_eip(); /* Safe: WRMSR */
>>>             break;
>>>
>>>         case X86EMUL_EXCEPTION:
>>>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>             break;
>>>         }
>>>         break;
>> Hmm lovely, so it was broken before, but should be correct now.
>>
>> RETRY has caused an entry to go onto the paging ring, which will pause
>> the vcpu until a reply occurs, after which we will re-enter the guest
>> without having moved RIP forwards, re-execute the wrmsr instruction, and
>> this time succeed because the frame has been paged in.
> But then perhaps split out the actual bugfix into a prereq patch,
> especially as that one may want backporting?

Sorry, but I don't think you can have that.  It will depend on the
internals of the hooks, but I suspect you'd need the entire rest of my
MSR cleanup work if you wanted to cleanly backport this bugfix.

~Andrew


* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  9:12     ` Andrew Cooper
  2018-09-12  9:17       ` Jan Beulich
@ 2018-09-12  9:46       ` Sergey Dyasli
  2018-09-12 10:23         ` Andrew Cooper
  1 sibling, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2018-09-12  9:46 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On Wed, 2018-09-12 at 10:12 +0100, Andrew Cooper wrote:
> On 12/09/18 09:29, Sergey Dyasli wrote:
> > On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> > > Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
> > > suffix isn't very appropriate.
> > > 
> > > Update them to take a vcpu pointer rather than presuming that they act on
> > > current, and switch to using X86EMUL_* return values.
> > > 
> > > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > ---
> > > CC: Jan Beulich <JBeulich@suse.com>
> > > CC: Wei Liu <wei.liu2@citrix.com>
> > > CC: Roger Pau Monné <roger.pau@citrix.com>
> > > CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> > > 
> > > v3:
> > >  * Clean up after splitting the series.
> > > ---
> > >  xen/arch/x86/msr.c              |  6 ++----
> > >  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
> > >  xen/include/asm-x86/processor.h |  4 ++--
> > >  3 files changed, 17 insertions(+), 22 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > > index cf0dc27..8f02a89 100644
> > > --- a/xen/arch/x86/msr.c
> > > +++ b/xen/arch/x86/msr.c
> > > @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> > >  
> > >          /* Fallthrough. */
> > >      case 0x40000200 ... 0x400002ff:
> > > -        ret = (rdmsr_hypervisor_regs(msr, val)
> > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > +        ret = guest_rdmsr_xen(v, msr, val);
> > >          break;
> > >  
> > >      default:
> > > @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> > >  
> > >          /* Fallthrough. */
> > >      case 0x40000200 ... 0x400002ff:
> > > -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
> > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > +        ret = guest_wrmsr_xen(v, msr, val);
> > >          break;
> > >  
> > >      default:
> > > diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> > > index 7c17806..3988753 100644
> > > --- a/xen/arch/x86/traps.c
> > > +++ b/xen/arch/x86/traps.c
> > > @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
> > >            trapnr, trapstr(trapnr), regs->error_code);
> > >  }
> > >  
> > > -/* Returns 0 if not handled, and non-0 for success. */
> > > -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
> > > +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
> > >  {
> > > -    struct domain *d = current->domain;
> > > +    const struct domain *d = v->domain;
> > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > >  
> > >      switch ( idx - base )
> > >      {
> > >      case 0: /* Write hypercall page MSR.  Read as zero. */
> > > -    {
> > >          *val = 0;
> > > -        return 1;
> > > -    }
> > > +        return X86EMUL_OKAY;
> > >      }
> > >  
> > > -    return 0;
> > > +    return X86EMUL_EXCEPTION;
> > >  }
> > >  
> > > -/* Returns 1 if handled, 0 if not and -Exx for error. */
> > > -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
> > >  {
> > > -    struct domain *d = current->domain;
> > > +    struct domain *d = v->domain;
> > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > >  
> > > @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > >              gdprintk(XENLOG_WARNING,
> > >                       "wrmsr hypercall page index %#x unsupported\n",
> > >                       page_index);
> > > -            return 0;
> > > +            return X86EMUL_EXCEPTION;
> > >          }
> > >  
> > >          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
> > > @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > >              if ( p2m_is_paging(t) )
> > >              {
> > >                  p2m_mem_paging_populate(d, gmfn);
> > > -                return -ERESTART;
> > > +                return X86EMUL_RETRY;
> > 
> > Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
> > with this patch, X86EMUL_RETRY will actually be returned. I don't think
> > that callers can handle this situation.
> > 
> > E.g. the code from vmx_vmexit_handler():
> > 
> >     case EXIT_REASON_MSR_WRITE:
> >         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
> >         {
> >         case X86EMUL_OKAY:
> >             update_guest_eip(); /* Safe: WRMSR */
> >             break;
> > 
> >         case X86EMUL_EXCEPTION:
> >             hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >             break;
> >         }
> >         break;
> 
> Hmm lovely, so it was broken before, but should be correct now.
> 
> RETRY has caused an entry to go onto the paging ring, which will pause
> the vcpu until a reply occurs, after which we will re-enter the guest
> without having moved RIP forwards, re-execute the wrmsr instruction, and
> this time succeed because the frame has been paged in.

Actually, the current VMX/SVM (but not PV) code does:

        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
        {
        case -ERESTART:
            return X86EMUL_RETRY;

This code is removed in 1/3 patch but I wasn't CCed.

-- 
Thanks,
Sergey

* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  9:24         ` Andrew Cooper
@ 2018-09-12 10:05           ` Jan Beulich
  2018-09-12 12:02             ` Andrew Cooper
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2018-09-12 10:05 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, xen-devel, Wei Liu, Roger Pau Monne

>>> On 12.09.18 at 11:24, <andrew.cooper3@citrix.com> wrote:
> On 12/09/18 10:17, Jan Beulich wrote:
>>>>> On 12.09.18 at 11:12, <andrew.cooper3@citrix.com> wrote:
>>> On 12/09/18 09:29, Sergey Dyasli wrote:
>>>> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>>>>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>>              if ( p2m_is_paging(t) )
>>>>>              {
>>>>>                  p2m_mem_paging_populate(d, gmfn);
>>>>> -                return -ERESTART;
>>>>> +                return X86EMUL_RETRY;
>>>> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
>>>> with this patch, X86EMUL_RETRY will actually be returned. I don't think
>>>> that callers can handle this situation.
>>>>
>>>> E.g. the code from vmx_vmexit_handler():
>>>>
>>>>     case EXIT_REASON_MSR_WRITE:
>>>>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>>>>         {
>>>>         case X86EMUL_OKAY:
>>>>             update_guest_eip(); /* Safe: WRMSR */
>>>>             break;
>>>>
>>>>         case X86EMUL_EXCEPTION:
>>>>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>>             break;
>>>>         }
>>>>         break;
>>> Hmm lovely, so it was broken before, but should be correct now.
>>>
>>> RETRY has caused an entry to go onto the paging ring, which will pause
>>> the vcpu until a reply occurs, after which we will re-enter the guest
>>> without having moved RIP forwards, re-execute the wrmsr instruction, and
>>> this time succeed because the frame has been paged in.
>> But then perhaps split out the actual bugfix into a prereq patch,
>> especially as that one may want backporting?
> 
> Sorry, but I don't think you can have that.  It will depend on the
> internals of the hooks, but I suspect you'd need the entire rest of my
> MSR cleanup work if you wanted to cleanly backport this bugfix.

Ouch, well - then: No.

Jan




* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12  9:46       ` Sergey Dyasli
@ 2018-09-12 10:23         ` Andrew Cooper
  2018-09-13  7:57           ` Sergey Dyasli
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2018-09-12 10:23 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On 12/09/18 10:46, Sergey Dyasli wrote:
> On Wed, 2018-09-12 at 10:12 +0100, Andrew Cooper wrote:
>> On 12/09/18 09:29, Sergey Dyasli wrote:
>>> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>>>> Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
>>>> suffix isn't very appropriate.
>>>>
>>>> Update them to take a vcpu pointer rather than presuming that they act on
>>>> current, and switch to using X86EMUL_* return values.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> ---
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>>> CC: Sergey Dyasli <sergey.dyasli@citrix.com>
>>>>
>>>> v3:
>>>>  * Clean up after splitting the series.
>>>> ---
>>>>  xen/arch/x86/msr.c              |  6 ++----
>>>>  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
>>>>  xen/include/asm-x86/processor.h |  4 ++--
>>>>  3 files changed, 17 insertions(+), 22 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
>>>> index cf0dc27..8f02a89 100644
>>>> --- a/xen/arch/x86/msr.c
>>>> +++ b/xen/arch/x86/msr.c
>>>> @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>>>  
>>>>          /* Fallthrough. */
>>>>      case 0x40000200 ... 0x400002ff:
>>>> -        ret = (rdmsr_hypervisor_regs(msr, val)
>>>> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
>>>> +        ret = guest_rdmsr_xen(v, msr, val);
>>>>          break;
>>>>  
>>>>      default:
>>>> @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>>  
>>>>          /* Fallthrough. */
>>>>      case 0x40000200 ... 0x400002ff:
>>>> -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
>>>> -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
>>>> +        ret = guest_wrmsr_xen(v, msr, val);
>>>>          break;
>>>>  
>>>>      default:
>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>>>> index 7c17806..3988753 100644
>>>> --- a/xen/arch/x86/traps.c
>>>> +++ b/xen/arch/x86/traps.c
>>>> @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
>>>>            trapnr, trapstr(trapnr), regs->error_code);
>>>>  }
>>>>  
>>>> -/* Returns 0 if not handled, and non-0 for success. */
>>>> -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
>>>> +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
>>>>  {
>>>> -    struct domain *d = current->domain;
>>>> +    const struct domain *d = v->domain;
>>>>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>>>>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>>>>  
>>>>      switch ( idx - base )
>>>>      {
>>>>      case 0: /* Write hypercall page MSR.  Read as zero. */
>>>> -    {
>>>>          *val = 0;
>>>> -        return 1;
>>>> -    }
>>>> +        return X86EMUL_OKAY;
>>>>      }
>>>>  
>>>> -    return 0;
>>>> +    return X86EMUL_EXCEPTION;
>>>>  }
>>>>  
>>>> -/* Returns 1 if handled, 0 if not and -Exx for error. */
>>>> -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>> +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
>>>>  {
>>>> -    struct domain *d = current->domain;
>>>> +    struct domain *d = v->domain;
>>>>      /* Optionally shift out of the way of Viridian architectural MSRs. */
>>>>      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
>>>>  
>>>> @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>              gdprintk(XENLOG_WARNING,
>>>>                       "wrmsr hypercall page index %#x unsupported\n",
>>>>                       page_index);
>>>> -            return 0;
>>>> +            return X86EMUL_EXCEPTION;
>>>>          }
>>>>  
>>>>          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
>>>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>              if ( p2m_is_paging(t) )
>>>>              {
>>>>                  p2m_mem_paging_populate(d, gmfn);
>>>> -                return -ERESTART;
>>>> +                return X86EMUL_RETRY;
>>> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
>>> with this patch, X86EMUL_RETRY will actually be returned. I don't think
>>> that callers can handle this situation.
>>>
>>> E.g. the code from vmx_vmexit_handler():
>>>
>>>     case EXIT_REASON_MSR_WRITE:
>>>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>>>         {
>>>         case X86EMUL_OKAY:
>>>             update_guest_eip(); /* Safe: WRMSR */
>>>             break;
>>>
>>>         case X86EMUL_EXCEPTION:
>>>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>             break;
>>>         }
>>>         break;
>> Hmm lovely, so it was broken before, but should be correct now.
>>
>> RETRY has caused an entry to go onto the paging ring, which will pause
>> the vcpu until a reply occurs, after which we will re-enter the guest
>> without having moved RIP forwards, re-execute the wrmsr instruction, and
>> this time succeed because the frame has been paged in.
> Actually, the current VMX/SVM (but not PV) code does:
>
>         switch ( wrmsr_hypervisor_regs(msr, msr_content) )
>         {
>         case -ERESTART:
>             return X86EMUL_RETRY;
>
> This code is removed in 1/3 patch but I wasn't CCed.

Ah right, in which case I need to temporarily transplant this switch
into patch 1.  Given it's only the PV side which is then broken, I can
probably see about doing a bugfix for that.

~Andrew


* [PATCH v4 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr()
  2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
@ 2018-09-12 12:00   ` Andrew Cooper
  2018-09-13  7:54     ` Sergey Dyasli
  2018-09-13 12:25     ` Jan Beulich
  0 siblings, 2 replies; 19+ messages in thread
From: Andrew Cooper @ 2018-09-12 12:00 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Sergey Dyasli, Wei Liu, Jan Beulich, Roger Pau Monné

Despite the complicated diff in {svm,vmx}_msr_write_intercept(), it is just
the 0 case losing one level of indentation, as part of removing the call to
wrmsr_hypervisor_regs().

The case blocks in guest_{wr,rd}msr() use raw numbers, partly for consistency
with the CPUID side of things, but mainly because this is clearer code to
follow.  In particular, the Xen block may overlap with the Viridian block if
Viridian is not enabled for the domain, and trying to express this with named
literals caused more confusion than it solved.

Future changes will clean up the individual APIs, including allowing these
MSRs to be usable for vcpus other than current (no callers exist with v !=
current).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Sergey Dyasli <sergey.dyasli@citrix.com>

v3:
 * Split out of previous series.  Retain appropriate R-by's
v4:
 * Retain switch() for interpreting the result of wrmsr_hypervisor_regs()
---
 xen/arch/x86/hvm/svm/svm.c     | 27 +++-----------------------
 xen/arch/x86/hvm/vmx/vmx.c     | 28 ++++-----------------------
 xen/arch/x86/msr.c             | 43 ++++++++++++++++++++++++++++++++++++++----
 xen/arch/x86/pv/emul-priv-op.c |  6 ------
 4 files changed, 46 insertions(+), 58 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 34d55b4..ef8f271 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2016,10 +2016,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         else if ( ret )
             break;
 
-        if ( rdmsr_viridian_regs(msr, msr_content) ||
-             rdmsr_hypervisor_regs(msr, msr_content) )
-            break;
-
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
 
@@ -2218,28 +2214,11 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         else if ( ret )
             break;
 
-        if ( wrmsr_viridian_regs(msr, msr_content) )
+        /* Match up with the RDMSR side; ultimately this should go away. */
+        if ( rdmsr_safe(msr, msr_content) == 0 )
             break;
 
-        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
-        {
-        case -ERESTART:
-            result = X86EMUL_RETRY;
-            break;
-        case 0:
-            /*
-             * Match up with the RDMSR side for now; ultimately this entire
-             * case block should go away.
-             */
-            if ( rdmsr_safe(msr, msr_content) == 0 )
-                break;
-            goto gpf;
-        case 1:
-            break;
-        default:
-            goto gpf;
-        }
-        break;
+        goto gpf;
     }
 
     return result;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b2e1a28..bf90e22 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2965,10 +2965,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             break;
         }
 
-        if ( rdmsr_viridian_regs(msr, msr_content) ||
-             rdmsr_hypervisor_regs(msr, msr_content) )
-            break;
-
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
 
@@ -3249,31 +3245,15 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         if ( passive_domain_do_wrmsr(msr, msr_content) )
             return X86EMUL_OKAY;
 
-        if ( wrmsr_viridian_regs(msr, msr_content) ) 
-            break;
-
         if ( vmx_write_guest_msr(v, msr, msr_content) == 0 ||
              is_last_branch_msr(msr) )
             break;
 
-        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
-        {
-        case -ERESTART:
-            return X86EMUL_RETRY;
-        case 0:
-            /*
-             * Match up with the RDMSR side for now; ultimately this
-             * entire case block should go away.
-             */
-            if ( rdmsr_safe(msr, msr_content) == 0 )
-                break;
-            goto gp_fault;
-        case 1:
+        /* Match up with the RDMSR side; ultimately this should go away. */
+        if ( rdmsr_safe(msr, msr_content) == 0 )
             break;
-        default:
-            goto gp_fault;
-        }
-        break;
+
+        goto gp_fault;
     }
 
     return X86EMUL_OKAY;
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b675f3a..d7c38f1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -114,9 +114,11 @@ int init_vcpu_msr_policy(struct vcpu *v)
 
 int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
-    const struct msr_policy *mp = v->domain->arch.msr;
+    const struct domain *d = v->domain;
+    const struct cpuid_policy *cp = d->arch.cpuid;
+    const struct msr_policy *mp = d->arch.msr;
     const struct vcpu_msrs *msrs = v->arch.msrs;
+    int ret = X86EMUL_OKAY;
 
     switch ( msr )
     {
@@ -145,11 +147,25 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = msrs->misc_features_enables.raw;
         break;
 
+    case 0x40000000 ... 0x400001ff:
+        if ( is_viridian_domain(d) )
+        {
+            ret = (rdmsr_viridian_regs(msr, val)
+                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            break;
+        }
+
+        /* Fallthrough. */
+    case 0x40000200 ... 0x400002ff:
+        ret = (rdmsr_hypervisor_regs(msr, val)
+               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+    return ret;
 
  gp_fault:
     return X86EMUL_EXCEPTION;
@@ -162,6 +178,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     const struct cpuid_policy *cp = d->arch.cpuid;
     const struct msr_policy *mp = d->arch.msr;
     struct vcpu_msrs *msrs = v->arch.msrs;
+    int ret = X86EMUL_OKAY;
 
     switch ( msr )
     {
@@ -252,11 +269,29 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         break;
     }
 
+    case 0x40000000 ... 0x400001ff:
+        if ( is_viridian_domain(d) )
+        {
+            ret = (wrmsr_viridian_regs(msr, val)
+                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
+            break;
+        }
+
+        /* Fallthrough. */
+    case 0x40000200 ... 0x400002ff:
+        switch ( wrmsr_hypervisor_regs(msr, val) )
+        {
+        case -ERESTART: ret = X86EMUL_RETRY;     break;
+        case 1:         ret = X86EMUL_OKAY;      break;
+        default:        ret = X86EMUL_EXCEPTION; break;
+        }
+        break;
+
     default:
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+    return ret;
 
  gp_fault:
     return X86EMUL_EXCEPTION;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 45941ea..6422f91 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -950,9 +950,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
         }
         /* fall through */
     default:
-        if ( rdmsr_hypervisor_regs(reg, val) )
-            return X86EMUL_OKAY;
-
         rc = vmce_rdmsr(reg, val);
         if ( rc < 0 )
             break;
@@ -1149,9 +1146,6 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        if ( wrmsr_hypervisor_regs(reg, val) == 1 )
-            return X86EMUL_OKAY;
-
         rc = vmce_wrmsr(reg, val);
         if ( rc < 0 )
             break;
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
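
[Editor's note: the overlapping-range dispatch introduced by this patch can be
modelled in isolation.  The sketch below is hypothetical standalone C, not Xen
code — the handlers and the enum are stand-ins — showing how the Viridian
range claims 0x40000000-0x400001ff only when Viridian is enabled for the
domain, and otherwise falls through to the Xen range handler, which also
covers 0x40000200-0x400002ff.  The case ranges use the GCC extension that the
Xen codebase already relies on.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

enum { X86EMUL_OKAY, X86EMUL_EXCEPTION, X86EMUL_UNHANDLEABLE };

static bool viridian_enabled; /* stand-in for is_viridian_domain(d) */

static int model_rdmsr(uint32_t msr, uint64_t *val)
{
    switch ( msr )
    {
    case 0x40000000 ... 0x400001ff:
        if ( viridian_enabled )
        {
            *val = 0; /* stand-in for rdmsr_viridian_regs() */
            return X86EMUL_OKAY;
        }

        /* Fallthrough: without Viridian, Xen owns this range too. */
    case 0x40000200 ... 0x400002ff:
        *val = 0; /* stand-in for rdmsr_hypervisor_regs() */
        return X86EMUL_OKAY;

    default:
        return X86EMUL_UNHANDLEABLE;
    }
}
```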

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12 10:05           ` Jan Beulich
@ 2018-09-12 12:02             ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2018-09-12 12:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Sergey Dyasli, xen-devel, Wei Liu, Roger Pau Monne

On 12/09/18 11:05, Jan Beulich wrote:
>>>> On 12.09.18 at 11:24, <andrew.cooper3@citrix.com> wrote:
>> On 12/09/18 10:17, Jan Beulich wrote:
>>>>>> On 12.09.18 at 11:12, <andrew.cooper3@citrix.com> wrote:
>>>> On 12/09/18 09:29, Sergey Dyasli wrote:
>>>>> On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
>>>>>> @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
>>>>>>              if ( p2m_is_paging(t) )
>>>>>>              {
>>>>>>                  p2m_mem_paging_populate(d, gmfn);
>>>>>> -                return -ERESTART;
>>>>>> +                return X86EMUL_RETRY;
>>>>> Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
>>>>> with this patch, X86EMUL_RETRY will actually be returned. I don't think
>>>>> that callers can handle this situation.
>>>>>
>>>>> E.g. the code from vmx_vmexit_handler():
>>>>>
>>>>>     case EXIT_REASON_MSR_WRITE:
>>>>>         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
>>>>>         {
>>>>>         case X86EMUL_OKAY:
>>>>>             update_guest_eip(); /* Safe: WRMSR */
>>>>>             break;
>>>>>
>>>>>         case X86EMUL_EXCEPTION:
>>>>>             hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>>>             break;
>>>>>         }
>>>>>         break;
>>>> Hmm lovely, so it was broken before, but should be correct now.
>>>>
>>>> RETRY has caused an entry to go onto the paging ring, which will pause
>>>> the vcpu until a reply occurs, after which we will re-enter the guest
>>>> without having moved RIP forwards, re-execute the wrmsr instruction, and
>>>> this time succeed because the frame has been paged in.
>>> But then perhaps split out the actual bugfix into a prereq patch,
>>> especially as that one may want backporting?
>> Sorry, but I don't think you can have that.  It will depend on the
>> internals of the hooks, but I suspect you'd need the entire rest of my
>> MSR cleanup work if you wanted to cleanly backport this bugfix.
> Ouch, well - then: No.

Turns out it was a transitory bug introduced by patch 1, so no cause for
alarm.

It is latently buggy on the PV side, but there is no paging support for
PV guests yet, so no need to backport any fixes.

~Andrew
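
[Editor's note: the retry convention discussed above can be modelled with a
minimal sketch — hypothetical names and types, not the actual VMX exit
handler.  On X86EMUL_RETRY the instruction pointer is deliberately left
untouched, so the paused vcpu re-executes WRMSR once the paging-ring reply
populates the frame; only X86EMUL_OKAY advances rIP, and X86EMUL_EXCEPTION
injects #GP without advancing.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

enum { X86EMUL_OKAY, X86EMUL_RETRY, X86EMUL_EXCEPTION };

struct fake_vcpu { uint64_t rip; bool gp_pending; };

static int wrmsr_result; /* what the emulated MSR write returned */

static void handle_msr_write_exit(struct fake_vcpu *v, unsigned int insn_len)
{
    switch ( wrmsr_result )
    {
    case X86EMUL_OKAY:
        v->rip += insn_len;   /* advance past the WRMSR instruction */
        break;

    case X86EMUL_RETRY:
        /* vcpu is paused on the paging ring; rIP deliberately unchanged,
         * so the guest re-executes WRMSR after the page is brought in. */
        break;

    case X86EMUL_EXCEPTION:
        v->gp_pending = true; /* inject #GP(0); rIP unchanged */
        break;
    }
}
```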


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr()
  2018-09-12 12:00   ` [PATCH v4 " Andrew Cooper
@ 2018-09-13  7:54     ` Sergey Dyasli
  2018-09-13 12:25     ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Sergey Dyasli @ 2018-09-13  7:54 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On Wed, 2018-09-12 at 13:00 +0100, Andrew Cooper wrote:
> Despite the complicated diff in {svm,vmx}_msr_write_intercept(), it is just
> the 0 case losing one level of indentation, as part of removing the call to
> wrmsr_hypervisor_regs().
> 
> The case blocks in guest_{wr,rd}msr() use raw numbers, partly for consistency
> with the CPUID side of things, but mainly because this is clearer code to
> follow.  In particular, the Xen block may overlap with the Viridian block if
> Viridian is not enabled for the domain, and trying to express this with named
> literals caused more confusion than it solved.
> 
> Future changes will clean up the individual APIs, including allowing these
> MSRs to be usable for vcpus other than current (no callers exist with v !=
> current).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> 
> v3:
>  * Split out of previous series.  Retain appropriate R-by's
> v4:
>  * Retain switch() for interpreting the result of wrmsr_hypervisor_regs()
> ---
>  xen/arch/x86/hvm/svm/svm.c     | 27 +++-----------------------
>  xen/arch/x86/hvm/vmx/vmx.c     | 28 ++++-----------------------
>  xen/arch/x86/msr.c             | 43 ++++++++++++++++++++++++++++++++++++++----
>  xen/arch/x86/pv/emul-priv-op.c |  6 ------
>  4 files changed, 46 insertions(+), 58 deletions(-)

Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

-- 
Thanks,
Sergey

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-12 10:23         ` Andrew Cooper
@ 2018-09-13  7:57           ` Sergey Dyasli
  2018-09-13 12:30             ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2018-09-13  7:57 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Wei Liu, JBeulich, Roger Pau Monne

On Wed, 2018-09-12 at 11:23 +0100, Andrew Cooper wrote:
> On 12/09/18 10:46, Sergey Dyasli wrote:
> > On Wed, 2018-09-12 at 10:12 +0100, Andrew Cooper wrote:
> > > On 12/09/18 09:29, Sergey Dyasli wrote:
> > > > On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> > > > > Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
> > > > > suffix isn't very appropriate.
> > > > > 
> > > > > Update them to take a vcpu pointer rather than presuming that they act on
> > > > > current, and switch to using X86EMUL_* return values.
> > > > > 
> > > > > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > > > ---
> > > > > CC: Jan Beulich <JBeulich@suse.com>
> > > > > CC: Wei Liu <wei.liu2@citrix.com>
> > > > > CC: Roger Pau Monné <roger.pau@citrix.com>
> > > > > CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> > > > > 
> > > > > v3:
> > > > >  * Clean up after splitting the series.
> > > > > ---
> > > > >  xen/arch/x86/msr.c              |  6 ++----
> > > > >  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
> > > > >  xen/include/asm-x86/processor.h |  4 ++--
> > > > >  3 files changed, 17 insertions(+), 22 deletions(-)
> > > > > 
> > > > > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > > > > index cf0dc27..8f02a89 100644
> > > > > --- a/xen/arch/x86/msr.c
> > > > > +++ b/xen/arch/x86/msr.c
> > > > > @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> > > > >  
> > > > >          /* Fallthrough. */
> > > > >      case 0x40000200 ... 0x400002ff:
> > > > > -        ret = (rdmsr_hypervisor_regs(msr, val)
> > > > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > > > +        ret = guest_rdmsr_xen(v, msr, val);
> > > > >          break;
> > > > >  
> > > > >      default:
> > > > > @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> > > > >  
> > > > >          /* Fallthrough. */
> > > > >      case 0x40000200 ... 0x400002ff:
> > > > > -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
> > > > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > > > +        ret = guest_wrmsr_xen(v, msr, val);
> > > > >          break;
> > > > >  
> > > > >      default:
> > > > > diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> > > > > index 7c17806..3988753 100644
> > > > > --- a/xen/arch/x86/traps.c
> > > > > +++ b/xen/arch/x86/traps.c
> > > > > @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
> > > > >            trapnr, trapstr(trapnr), regs->error_code);
> > > > >  }
> > > > >  
> > > > > -/* Returns 0 if not handled, and non-0 for success. */
> > > > > -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
> > > > > +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
> > > > >  {
> > > > > -    struct domain *d = current->domain;
> > > > > +    const struct domain *d = v->domain;
> > > > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > > > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > > > >  
> > > > >      switch ( idx - base )
> > > > >      {
> > > > >      case 0: /* Write hypercall page MSR.  Read as zero. */
> > > > > -    {
> > > > >          *val = 0;
> > > > > -        return 1;
> > > > > -    }
> > > > > +        return X86EMUL_OKAY;
> > > > >      }
> > > > >  
> > > > > -    return 0;
> > > > > +    return X86EMUL_EXCEPTION;
> > > > >  }
> > > > >  
> > > > > -/* Returns 1 if handled, 0 if not and -Exx for error. */
> > > > > -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > > +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
> > > > >  {
> > > > > -    struct domain *d = current->domain;
> > > > > +    struct domain *d = v->domain;
> > > > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > > > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > > > >  
> > > > > @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > >              gdprintk(XENLOG_WARNING,
> > > > >                       "wrmsr hypercall page index %#x unsupported\n",
> > > > >                       page_index);
> > > > > -            return 0;
> > > > > +            return X86EMUL_EXCEPTION;
> > > > >          }
> > > > >  
> > > > >          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
> > > > > @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > > >              if ( p2m_is_paging(t) )
> > > > >              {
> > > > >                  p2m_mem_paging_populate(d, gmfn);
> > > > > -                return -ERESTART;
> > > > > +                return X86EMUL_RETRY;
> > > > 
> > > > Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
> > > > with this patch, X86EMUL_RETRY will actually be returned. I don't think
> > > > that callers can handle this situation.
> > > > 
> > > > E.g. the code from vmx_vmexit_handler():
> > > > 
> > > >     case EXIT_REASON_MSR_WRITE:
> > > >         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
> > > >         {
> > > >         case X86EMUL_OKAY:
> > > >             update_guest_eip(); /* Safe: WRMSR */
> > > >             break;
> > > > 
> > > >         case X86EMUL_EXCEPTION:
> > > >             hvm_inject_hw_exception(TRAP_gp_fault, 0);
> > > >             break;
> > > >         }
> > > >         break;
> > > 
> > > Hmm lovely, so it was broken before, but should be correct now.
> > > 
> > > RETRY has caused an entry to go onto the paging ring, which will pause
> > > the vcpu until a reply occurs, after which we will re-enter the guest
> > > without having moved RIP forwards, re-execute the wrmsr instruction, and
> > > this time succeed because the frame has been paged in.
> > 
> > Actually, the current VMX/SVM (but not PV) code does:
> > 
> >         switch ( wrmsr_hypervisor_regs(msr, msr_content) )
> >         {
> >         case -ERESTART:
> >             return X86EMUL_RETRY;
> > 
> > This code is removed in 1/3 patch but I wasn't CCed.
> 
> Ah right, in which case I need to temporarily transplant this switch
> into patch 1.  Given it's only the PV side which is then broken, I can
> probably see about doing a bugfix for that.

With this being rebased on top of v4 1/3:

Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

-- 
Thanks,
Sergey
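
[Editor's note: the base-shifting trick visible in the quoted
guest_rdmsr_xen() hunk can also be modelled standalone.  The sketch below is
hypothetical C with simplified return codes and no real Xen types, showing how
enabling Viridian moves Xen's MSR base from 0x40000000 to 0x40000200, leaving
the hypercall-page MSR at offset 0 of whichever base applies.  Note the
unsigned wraparound of idx - base means out-of-range indices simply miss
every case label.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };

static int model_rdmsr_xen(bool viridian, uint32_t idx, uint64_t *val)
{
    /* Optionally shift out of the way of Viridian architectural MSRs. */
    uint32_t base = viridian ? 0x40000200 : 0x40000000;

    switch ( idx - base )
    {
    case 0: /* Hypercall page MSR.  Write-only; reads as zero. */
        *val = 0;
        return X86EMUL_OKAY;
    }

    return X86EMUL_EXCEPTION;
}
```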

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr()
  2018-09-12 12:00   ` [PATCH v4 " Andrew Cooper
  2018-09-13  7:54     ` Sergey Dyasli
@ 2018-09-13 12:25     ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-09-13 12:25 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 12.09.18 at 14:00, <andrew.cooper3@citrix.com> wrote:
> Despite the complicated diff in {svm,vmx}_msr_write_intercept(), it is just
> the 0 case losing one level of indentation, as part of removing the call to
> wrmsr_hypervisor_regs().
> 
> The case blocks in guest_{wr,rd}msr() use raw numbers, partly for consistency
> with the CPUID side of things, but mainly because this is clearer code to
> follow.  In particular, the Xen block may overlap with the Viridian block if
> Viridian is not enabled for the domain, and trying to express this with named
> literals caused more confusion than it solved.

Well, imo it's not much better this way, but also not much worse.

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -114,9 +114,11 @@ int init_vcpu_msr_policy(struct vcpu *v)
>  
>  int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  {
> -    const struct cpuid_policy *cp = v->domain->arch.cpuid;
> -    const struct msr_policy *mp = v->domain->arch.msr;
> +    const struct domain *d = v->domain;
> +    const struct cpuid_policy *cp = d->arch.cpuid;
> +    const struct msr_policy *mp = d->arch.msr;
>      const struct vcpu_msrs *msrs = v->arch.msrs;
> +    int ret = X86EMUL_OKAY;
>  
>      switch ( msr )
>      {
> @@ -145,11 +147,25 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>          *val = msrs->misc_features_enables.raw;
>          break;
>  
> +    case 0x40000000 ... 0x400001ff:
> +        if ( is_viridian_domain(d) )
> +        {
> +            ret = (rdmsr_viridian_regs(msr, val)
> +                   ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> +            break;
> +        }
> +
> +        /* Fallthrough. */
> +    case 0x40000200 ... 0x400002ff:
> +        ret = (rdmsr_hypervisor_regs(msr, val)
> +               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> +        break;
> +
>      default:
>          return X86EMUL_UNHANDLEABLE;

Considering the "return" here, could I talk you into using "return"
for your additions as well, rather than introducing a local variable?
Same for the wrmsr side then. In any event
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 2/3] x86/viridan: Clean up Viridian MSR infrastructure
  2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridan: Clean up Viridian MSR infrastructure Andrew Cooper
  2018-09-12  8:14   ` Sergey Dyasli
@ 2018-09-13 12:28   ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-09-13 12:28 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 11.09.18 at 20:56, <andrew.cooper3@citrix.com> wrote:
> Rename the functions to guest_{rd,wr}msr_viridian() for consistency, and
> because the _regs() suffix isn't very appropriate.
> 
> Update them to take a vcpu pointer rather than presuming that they act on
> current, which is safe for all implemented operations, and switch their return
> ABI to use X86EMUL_*.
> 
> The default cases no longer need to deal with MSRs out of the Viridian range.
> The printks are demoted to debug builds only, and now identify the value
> whose write was attempted.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
  2018-09-13  7:57           ` Sergey Dyasli
@ 2018-09-13 12:30             ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2018-09-13 12:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, xen-devel, Wei Liu, Roger Pau Monne

>>> On 13.09.18 at 09:57, <sergey.dyasli@citrix.com> wrote:
> With this being rebased on top of v4 1/3:
> 
> Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2018-09-13 12:30 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-11 18:56 [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges Andrew Cooper
2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
2018-09-12 12:00   ` [PATCH v4 " Andrew Cooper
2018-09-13  7:54     ` Sergey Dyasli
2018-09-13 12:25     ` Jan Beulich
2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridan: Clean up Viridian MSR infrastructure Andrew Cooper
2018-09-12  8:14   ` Sergey Dyasli
2018-09-13 12:28   ` Jan Beulich
2018-09-11 18:56 ` [PATCH v3 3/3] x86: Clean up the Xen " Andrew Cooper
2018-09-12  8:29   ` Sergey Dyasli
2018-09-12  9:12     ` Andrew Cooper
2018-09-12  9:17       ` Jan Beulich
2018-09-12  9:24         ` Andrew Cooper
2018-09-12 10:05           ` Jan Beulich
2018-09-12 12:02             ` Andrew Cooper
2018-09-12  9:46       ` Sergey Dyasli
2018-09-12 10:23         ` Andrew Cooper
2018-09-13  7:57           ` Sergey Dyasli
2018-09-13 12:30             ` Jan Beulich
