* [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling
@ 2017-07-19 11:57 Andrew Cooper
  2017-07-19 11:57 ` [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr() Andrew Cooper
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

This fixes a bug where Hyper-V (Windows Server 2012) crashes because it
attempts to intercept the x2APIC MSRs for its L2 guests, but Xen uses a stale
mapping of gfn 0 for the MSR_BITMAP.

There are many more bugs in this area.  I'm fairly sure the merging of
low/high half accesses isn't yet correct.

Andrew Cooper (6):
  x86/vmx: Improvements to vmx_{dis,en}able_intercept_for_msr()
  x86/vpmu: Use vmx_{clear,set}_msr_intercept() rather than opencoding them
  x86/vmx: Introduce and use struct vmx_msr_bitmap
  x86/vvmx: Switch nested MSR intercept handling to use struct vmx_msr_bitmap
  x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing
  x86/vvmx: Fix auditing of MSR_BITMAP parameter

 xen/arch/x86/cpu/vpmu_intel.c      |  64 +++++++------------
 xen/arch/x86/hvm/vmx/vmcs.c        | 126 +++++++++++++++++--------------------
 xen/arch/x86/hvm/vmx/vmx.c         |  34 ++++------
 xen/arch/x86/hvm/vmx/vvmx.c        |  44 ++++++++-----
 xen/include/asm-x86/hvm/vmx/vmcs.h |  28 ++++++---
 5 files changed, 143 insertions(+), 153 deletions(-)

-- 
2.1.4



* [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr()
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-27  5:43   ` Tian, Kevin
  2017-07-19 11:57 ` [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them Andrew Cooper
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

 * Shorten the names to vmx_{clear,set}_msr_intercept()
 * Use an enumeration for MSR_TYPE rather than a plain integer
 * Introduce VMX_MSR_RW, as most callers alter both the read and write
   intercept at the same time.

No functional change.
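
As a rough before/after sketch of one call site (both forms are taken from the
hunks below, rather than being an extra change):

    /* Before: plain integer type, with separate R/W flags. */
    vmx_disable_intercept_for_msr(v, MSR_FS_BASE, MSR_TYPE_R | MSR_TYPE_W);

    /* After: shorter name, enum type, and a combined VMX_MSR_RW value. */
    vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);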

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 38 ++++++++++++++++++++------------------
 xen/arch/x86/hvm/vmx/vmx.c         | 34 +++++++++++++---------------------
 xen/include/asm-x86/hvm/vmx/vmcs.h | 15 ++++++++++-----
 3 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 8103b20..e36a908 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -802,7 +802,8 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             enum vmx_msr_intercept_type type)
 {
     unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -821,17 +822,17 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
      */
     if ( msr <= 0x1fff )
     {
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             clear_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             clear_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             clear_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             clear_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
     }
     else
@@ -842,7 +843,8 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 
 }
 
-void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           enum vmx_msr_intercept_type type)
 {
     unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
 
@@ -857,17 +859,17 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
      */
     if ( msr <= 0x1fff )
     {
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             set_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             set_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & MSR_TYPE_R )
+        if ( type & VMX_MSR_R )
             set_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
-        if ( type & MSR_TYPE_W )
+        if ( type & VMX_MSR_W )
             set_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
     }
     else
@@ -1104,17 +1106,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm_vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_disable_intercept_for_msr(v, MSR_FS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_GS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_SHADOW_GS_BASE, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_CS, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_ESP, MSR_TYPE_R | MSR_TYPE_W);
-        vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP, MSR_TYPE_R | MSR_TYPE_W);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
         if ( paging_mode_hap(d) && (!iommu_enabled || iommu_snoop) )
-            vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_disable_intercept_for_msr(v, MSR_IA32_BNDCFGS, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 69ce3aa..ff97e6a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1374,8 +1374,7 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_enable_intercept_for_msr(v, MSR_IA32_CR_PAT,
-                                         MSR_TYPE_R | MSR_TYPE_W);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1386,8 +1385,7 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !iommu_enabled || iommu_snoop )
-                vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT,
-                                              MSR_TYPE_R | MSR_TYPE_W);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -2127,7 +2125,7 @@ static void vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_enable_intercept_for_msr(v, msr, MSR_TYPE_W);
+        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
 }
 
 static bool_t vmx_is_singlestep_supported(void)
@@ -3031,23 +3029,17 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
             {
                 for ( msr = MSR_IA32_APICBASE_MSR;
                       msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
-                    vmx_disable_intercept_for_msr(v, msr, MSR_TYPE_R);
-
-                vmx_enable_intercept_for_msr(v, MSR_IA32_APICPPR_MSR,
-                                             MSR_TYPE_R);
-                vmx_enable_intercept_for_msr(v, MSR_IA32_APICTMICT_MSR,
-                                             MSR_TYPE_R);
-                vmx_enable_intercept_for_msr(v, MSR_IA32_APICTMCCT_MSR,
-                                             MSR_TYPE_R);
+                    vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+
+                vmx_set_msr_intercept(v, MSR_IA32_APICPPR_MSR, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_IA32_APICTMICT_MSR, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_IA32_APICTMCCT_MSR, VMX_MSR_R);
             }
             if ( cpu_has_vmx_virtual_intr_delivery )
             {
-                vmx_disable_intercept_for_msr(v, MSR_IA32_APICTPR_MSR,
-                                              MSR_TYPE_W);
-                vmx_disable_intercept_for_msr(v, MSR_IA32_APICEOI_MSR,
-                                              MSR_TYPE_W);
-                vmx_disable_intercept_for_msr(v, MSR_IA32_APICSELF_MSR,
-                                              MSR_TYPE_W);
+                vmx_clear_msr_intercept(v, MSR_IA32_APICTPR_MSR, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_IA32_APICEOI_MSR, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_IA32_APICSELF_MSR, VMX_MSR_W);
             }
         }
         else
@@ -3058,7 +3050,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
            SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_IA32_APICBASE_MSR;
               msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
-            vmx_enable_intercept_for_msr(v, msr, MSR_TYPE_R | MSR_TYPE_W);
+            vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3107,7 +3099,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
                 for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
                     if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
                     {
-                        vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
+                        vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
                         if ( lbr_tsx_fixup_needed )
                             v->arch.hvm_vmx.lbr_fixup_enabled |= FIXUP_LBR_TSX;
                         if ( bdw_erratum_bdf14_fixup_needed )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index e3cdfdf..e318dc2 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -498,9 +498,6 @@ enum vmcs_field {
 
 #define VMCS_VPID_WIDTH 16
 
-#define MSR_TYPE_R 1
-#define MSR_TYPE_W 2
-
 #define VMX_GUEST_MSR 0
 #define VMX_HOST_MSR  1
 
@@ -521,8 +518,16 @@ enum vmx_insn_errno
     VMX_INSN_FAIL_INVALID                  = ~0,
 };
 
-void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
-void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
+enum vmx_msr_intercept_type {
+    VMX_MSR_R  = 1,
+    VMX_MSR_W  = 2,
+    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
+};
+
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             enum vmx_msr_intercept_type type);
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           enum vmx_msr_intercept_type type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 struct vmx_msr_entry *vmx_find_msr(u32 msr, int type);
-- 
2.1.4



* [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
  2017-07-19 11:57 ` [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr() Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-19 12:17   ` Andrew Cooper
  2017-07-19 13:33   ` Boris Ostrovsky
  2017-07-19 11:57 ` [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap Andrew Cooper
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel
  Cc: Boris Ostrovsky, Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/cpu/vpmu_intel.c | 64 ++++++++++++++++---------------------------
 1 file changed, 23 insertions(+), 41 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 6d768cb..d58eca3 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -231,68 +231,50 @@ static inline int msraddr_to_bitpos(int x)
     return x;
 }
 
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 {
-    int i;
+    unsigned int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
 
         if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL(i)), msr_bitmap);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
 
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
 }
 
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
-    int i;
+    unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
 
         if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL(i)), msr_bitmap);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
 
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -327,7 +309,7 @@ static int core2_vpmu_save(struct vcpu *v, bool_t to_guest)
     /* Unset PMU MSR bitmap to trap lazy load. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
          cpu_has_vmx_msr_bitmap )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+        core2_vpmu_unset_msr_bitmap(v);
 
     if ( to_guest )
     {
@@ -541,9 +523,9 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( is_hvm_vcpu(current) &&
-             cpu_has_vmx_msr_bitmap )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+
+        if ( is_hvm_vcpu(current) && cpu_has_vmx_msr_bitmap )
+            core2_vpmu_set_msr_bitmap(current);
     }
     return 1;
 }
@@ -860,7 +842,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
     xfree(vpmu->priv_context);
     vpmu->priv_context = NULL;
     if ( is_hvm_vcpu(v) && cpu_has_vmx_msr_bitmap )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+        core2_vpmu_unset_msr_bitmap(v);
     release_pmu_ownership(PMU_OWNER_HVM);
     vpmu_clear(vpmu);
 }
-- 
2.1.4



* [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
  2017-07-19 11:57 ` [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr() Andrew Cooper
  2017-07-19 11:57 ` [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-27  6:02   ` Tian, Kevin
  2017-07-19 11:57 ` [PATCH 4/6] x86/vvmx: Switch nested MSR intercept handling to " Andrew Cooper
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

This avoids opencoding the bitmap bases in accessor functions.  Introduce a
build_assertions() function to check the structure layout against the manual
definition.  In addition, drop some stale comments and ASSERT() that callers
pass an in-range MSR.
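
As a sketch of the correspondence (assembled from the hunks below, not an
additional change), the open-coded byte offsets map onto the new fields as:

    struct vmx_msr_bitmap {
        /* Byte 0x000: reads  of MSRs 0x00000000 -> 0x00001fff. */
        unsigned long read_low  [0x2000 / BITS_PER_LONG];
        /* Byte 0x400: reads  of MSRs 0xc0000000 -> 0xc0001fff. */
        unsigned long read_high [0x2000 / BITS_PER_LONG];
        /* Byte 0x800: writes of MSRs 0x00000000 -> 0x00001fff. */
        unsigned long write_low [0x2000 / BITS_PER_LONG];
        /* Byte 0xc00: writes of MSRs 0xc0000000 -> 0xc0001fff. */
        unsigned long write_high[0x2000 / BITS_PER_LONG];
    };

so e.g. "clear_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG)" becomes
"clear_bit(msr, msr_bitmap->write_low)".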

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

Slightly RFC: I think these asserts would probably be better if they were (my
planned but not yet implemented) VM_BUG_ON(), which would allow us to print
out a rather more useful error and crash the VM rather than bringing the host
down.
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 57 ++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/vmx/vmcs.h | 10 ++++++-
 2 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e36a908..0e1a142 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -805,7 +805,7 @@ static void vmx_set_host_env(struct vcpu *v)
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
                              enum vmx_msr_intercept_type type)
 {
-    unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
+    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
     struct domain *d = v->domain;
 
     /* VMX MSR bitmap supported? */
@@ -815,68 +815,51 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
     if ( unlikely(monitored_msr(d, msr)) )
         return;
 
-    /*
-     * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
-     * have the write-low and read-high bitmap offsets the wrong way round.
-     * We can control MSRs 0x00000000-0x00001fff and 0xc0000000-0xc0001fff.
-     */
     if ( msr <= 0x1fff )
     {
         if ( type & VMX_MSR_R )
-            clear_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+            clear_bit(msr, msr_bitmap->read_low);
         if ( type & VMX_MSR_W )
-            clear_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+            clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
         if ( type & VMX_MSR_R )
-            clear_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+            clear_bit(msr, msr_bitmap->read_high);
         if ( type & VMX_MSR_W )
-            clear_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+            clear_bit(msr, msr_bitmap->write_high);
     }
     else
-        HVM_DBG_LOG(DBG_LEVEL_MSR,
-                   "msr %x is out of the control range"
-                   "0x00000000-0x00001fff and 0xc0000000-0xc0001fff"
-                   "RDMSR or WRMSR will cause a VM exit", msr); 
-
+        ASSERT(!"MSR out of range for interception\n");
 }
 
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
                            enum vmx_msr_intercept_type type)
 {
-    unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
+    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
 
     /* VMX MSR bitmap supported? */
     if ( msr_bitmap == NULL )
         return;
 
-    /*
-     * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
-     * have the write-low and read-high bitmap offsets the wrong way round.
-     * We can control MSRs 0x00000000-0x00001fff and 0xc0000000-0xc0001fff.
-     */
     if ( msr <= 0x1fff )
     {
         if ( type & VMX_MSR_R )
-            set_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
+            set_bit(msr, msr_bitmap->read_low);
         if ( type & VMX_MSR_W )
-            set_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
+            set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
         if ( type & VMX_MSR_R )
-            set_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
+            set_bit(msr, msr_bitmap->read_high);
         if ( type & VMX_MSR_W )
-            set_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
+            set_bit(msr, msr_bitmap->write_high);
     }
     else
-        HVM_DBG_LOG(DBG_LEVEL_MSR,
-                   "msr %x is out of the control range"
-                   "0x00000000-0x00001fff and 0xc0000000-0xc0001fff"
-                   "RDMSR or WRMSR will cause a VM exit", msr); 
+        ASSERT(!"MSR out of range for interception\n");
 }
 
 /*
@@ -1094,7 +1077,7 @@ static int construct_vmcs(struct vcpu *v)
     /* MSR access bitmap. */
     if ( cpu_has_vmx_msr_bitmap )
     {
-        unsigned long *msr_bitmap = alloc_xenheap_page();
+        struct vmx_msr_bitmap *msr_bitmap = alloc_xenheap_page();
 
         if ( msr_bitmap == NULL )
         {
@@ -1958,6 +1941,20 @@ void __init setup_vmcs_dump(void)
     register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
 }
 
+static void __init __maybe_unused build_assertions(void)
+{
+    struct vmx_msr_bitmap bitmap;
+
+    BUILD_BUG_ON(sizeof(bitmap)            != PAGE_SIZE);
+    BUILD_BUG_ON(sizeof(bitmap.read_low)   != 1024);
+    BUILD_BUG_ON(sizeof(bitmap.read_high)  != 1024);
+    BUILD_BUG_ON(sizeof(bitmap.write_low)  != 1024);
+    BUILD_BUG_ON(sizeof(bitmap.write_high) != 1024);
+    BUILD_BUG_ON(offsetof(struct vmx_msr_bitmap, read_low)   != 0);
+    BUILD_BUG_ON(offsetof(struct vmx_msr_bitmap, read_high)  != 1024);
+    BUILD_BUG_ON(offsetof(struct vmx_msr_bitmap, write_low)  != 2048);
+    BUILD_BUG_ON(offsetof(struct vmx_msr_bitmap, write_high) != 3072);
+}
 
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index e318dc2..926e792 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -64,6 +64,14 @@ struct vmx_domain {
     unsigned int status;
 };
 
+/* Layout of the MSR bitmap, as interpreted by hardware. */
+struct vmx_msr_bitmap {
+    unsigned long read_low  [0x2000 / BITS_PER_LONG];
+    unsigned long read_high [0x2000 / BITS_PER_LONG];
+    unsigned long write_low [0x2000 / BITS_PER_LONG];
+    unsigned long write_high[0x2000 / BITS_PER_LONG];
+};
+
 struct pi_desc {
     DECLARE_BITMAP(pir, NR_VECTORS);
     union {
@@ -116,7 +124,7 @@ struct arch_vmx_struct {
     uint64_t             cstar;
     uint64_t             sfmask;
 
-    unsigned long       *msr_bitmap;
+    struct vmx_msr_bitmap *msr_bitmap;
     unsigned int         msr_count;
     struct vmx_msr_entry *msr_area;
     unsigned int         host_msr_count;
-- 
2.1.4



* [PATCH 4/6] x86/vvmx: Switch nested MSR intercept handling to use struct vmx_msr_bitmap
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
                   ` (2 preceding siblings ...)
  2017-07-19 11:57 ` [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-27  6:09   ` Tian, Kevin
  2017-07-19 11:57 ` [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing Andrew Cooper
  2017-07-19 11:57 ` [PATCH 6/6] x86/vvmx: Fix auditing of MSR_BITMAP parameter Andrew Cooper
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

Rename vmx_check_msr_bitmap() to vmx_msr_is_intercepted() in order to more
clearly identify what the boolean return value means.  Change the int
access_type to bool is_write.

The NULL pointer check is moved out, as it doesn't pertain to whether the MSR
is intercepted or not.  The check is moved into nvmx_n2_vmexit_handler(),
where it becomes a hard error in the case that ACTIVATE_MSR_BITMAP is set.
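
As a minimal usage sketch (lifted from the vvmx.c hunk below):

    /* A NULL bitmap is now the caller's problem - see nvmx_n2_vmexit_handler(). */
    nvcpu->nv_vmexit_pending =
        vmx_msr_is_intercepted(nvmx->msrbitmap, regs->ecx,
                               exit_reason == EXIT_REASON_MSR_WRITE);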

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 31 +++++++++----------------------
 xen/arch/x86/hvm/vmx/vvmx.c        | 23 ++++++++++++-----------
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 ++-
 3 files changed, 23 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0e1a142..f1427f8 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -862,31 +862,18 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
         ASSERT(!"MSR out of range for interception\n");
 }
 
-/*
- * access_type: read == 0, write == 1
- */
-int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type)
+bool vmx_msr_is_intercepted(struct vmx_msr_bitmap *msr_bitmap,
+                            unsigned int msr, bool is_write)
 {
-    int ret = 1;
-    if ( !msr_bitmap )
-        return 1;
-
     if ( msr <= 0x1fff )
-    {
-        if ( access_type == 0 )
-            ret = test_bit(msr, msr_bitmap + 0x000/BYTES_PER_LONG); /* read-low */
-        else if ( access_type == 1 )
-            ret = test_bit(msr, msr_bitmap + 0x800/BYTES_PER_LONG); /* write-low */
-    }
+        return test_bit(msr, is_write ? msr_bitmap->write_low
+                                      : msr_bitmap->read_low);
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
-    {
-        msr &= 0x1fff;
-        if ( access_type == 0 )
-            ret = test_bit(msr, msr_bitmap + 0x400/BYTES_PER_LONG); /* read-high */
-        else if ( access_type == 1 )
-            ret = test_bit(msr, msr_bitmap + 0xc00/BYTES_PER_LONG); /* write-high */
-    }
-    return ret;
+        return test_bit(msr & 0x1fff, is_write ? msr_bitmap->write_high
+                                               : msr_bitmap->read_high);
+    else
+        /* MSRs outside the bitmap ranges are always intercepted. */
+        return true;
 }
 
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2c8cf63..0d08789 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -2284,22 +2284,23 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
+
     case EXIT_REASON_MSR_READ:
     case EXIT_REASON_MSR_WRITE:
-    {
-        int status;
         ctrl = __n2_exec_control(v);
-        if ( ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
-        {
-            status = vmx_check_msr_bitmap(nvmx->msrbitmap, regs->ecx,
-                         !!(exit_reason == EXIT_REASON_MSR_WRITE));
-            if ( status )
-                nvcpu->nv_vmexit_pending = 1;
-        }
-        else
+
+        /* Without ACTIVATE_MSR_BITMAP, all MSRs are intercepted. */
+        if ( !(ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP) )
             nvcpu->nv_vmexit_pending = 1;
+        else if ( !nvmx->msrbitmap )
+            /* ACTIVATE_MSR_BITMAP set, but L2 bitmap not mapped??? */
+            domain_crash(v->domain);
+        else
+            nvcpu->nv_vmexit_pending =
+                vmx_msr_is_intercepted(nvmx->msrbitmap, regs->ecx,
+                                       exit_reason == EXIT_REASON_MSR_WRITE);
         break;
-    }
+
     case EXIT_REASON_IO_INSTRUCTION:
         ctrl = __n2_exec_control(v);
         if ( ctrl & CPU_BASED_ACTIVATE_IO_BITMAP )
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 926e792..32a6732 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -543,7 +543,8 @@ int vmx_add_msr(u32 msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
+bool vmx_msr_is_intercepted(struct vmx_msr_bitmap *msr_bitmap,
+                            unsigned int msr, bool is_write) __nonnull(1);
 void virtual_vmcs_enter(const struct vcpu *);
 void virtual_vmcs_exit(const struct vcpu *);
 u64 virtual_vmcs_vmread(const struct vcpu *, u32 encoding);
-- 
2.1.4



* [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
                   ` (3 preceding siblings ...)
  2017-07-19 11:57 ` [PATCH 4/6] x86/vvmx: Switch nested MSR intercept handling to " Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-26  8:50   ` Sergey Dyasli
  2017-07-27  6:11   ` Tian, Kevin
  2017-07-19 11:57 ` [PATCH 6/6] x86/vvmx: Fix auditing of MSR_BITMAP parameter Andrew Cooper
  5 siblings, 2 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

Currently, the following sequence of actions:

 * VMPTRLD (creates a mapping, likely pointing at gfn 0 for an empty vmcs)
 * VMWRITE CPU_BASED_VM_EXEC_CONTROL (completed by hardware)
 * VMWRITE MSR_BITMAP (completed by hardware)
 * VMLAUNCH

results in an L2 guest running with ACTIVATE_MSR_BITMAP set, but Xen using a
stale mapping (likely gfn 0) when reading the interception bitmap.  The
MSR_BITMAP field needs unconditionally intercepting even with VMCS shadowing,
so Xen's mapping of the bitmap can be updated.
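
A condensed sketch of the resulting initialisation in nvmx_vcpu_initialise()
(taken from the hunk below; the IO_BITMAP entries are already present):

    /* Encodings which must still vmexit to Xen, even with VMCS shadowing. */
    set_bit(IO_BITMAP_A, vw);  set_bit(VMCS_HIGH(IO_BITMAP_A), vw);
    set_bit(IO_BITMAP_B, vw);  set_bit(VMCS_HIGH(IO_BITMAP_B), vw);
    set_bit(MSR_BITMAP, vw);   set_bit(VMCS_HIGH(MSR_BITMAP), vw);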

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0d08789..f84478e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -98,13 +98,15 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         clear_page(vw);
 
         /*
-         * For the following 4 encodings, we need to handle them in VMM.
+         * For the following 6 encodings, we need to handle them in VMM.
          * Let them vmexit as usual.
          */
         set_bit(IO_BITMAP_A, vw);
         set_bit(VMCS_HIGH(IO_BITMAP_A), vw);
         set_bit(IO_BITMAP_B, vw);
         set_bit(VMCS_HIGH(IO_BITMAP_B), vw);
+        set_bit(MSR_BITMAP, vw);
+        set_bit(VMCS_HIGH(MSR_BITMAP), vw);
 
         unmap_domain_page(vw);
     }
-- 
2.1.4



* [PATCH 6/6] x86/vvmx: Fix auditing of MSR_BITMAP parameter
  2017-07-19 11:57 [PATCH 0/6] x86/vvmx: Fixes to MSR_BITMAP interception handling Andrew Cooper
                   ` (4 preceding siblings ...)
  2017-07-19 11:57 ` [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing Andrew Cooper
@ 2017-07-19 11:57 ` Andrew Cooper
  2017-07-27  6:13   ` Tian, Kevin
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 11:57 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich

The MSR_BITMAP field is required to be page aligned.  Also switch gpa to be a
uint64_t, as the MSR_BITMAP is strictly a 64bit VMCS field.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index f84478e..6ee5385 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -754,14 +754,27 @@ static void __clear_current_vvmcs(struct vcpu *v)
         __vmpclear(nvcpu->nv_n2vmcx_pa);
 }
 
-static bool_t __must_check _map_msr_bitmap(struct vcpu *v)
+/*
+ * Refreshes the MSR bitmap mapping for the current nested vcpu.  Returns true
+ * for a successful mapping, and returns false for MSR_BITMAP parameter errors or
+ * gfn mapping errors.
+ */
+static bool __must_check _map_msr_bitmap(struct vcpu *v)
 {
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
-    unsigned long gpa;
+    uint64_t gpa;
 
     if ( nvmx->msrbitmap )
+    {
         hvm_unmap_guest_frame(nvmx->msrbitmap, 1);
+        nvmx->msrbitmap = NULL;
+    }
+
     gpa = get_vvmcs(v, MSR_BITMAP);
+
+    if ( !IS_ALIGNED(gpa, PAGE_SIZE) )
+        return false;
+
     nvmx->msrbitmap = hvm_map_guest_frame_ro(gpa >> PAGE_SHIFT, 1);
 
     return nvmx->msrbitmap != NULL;
-- 
2.1.4



* Re: [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them
  2017-07-19 11:57 ` [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them Andrew Cooper
@ 2017-07-19 12:17   ` Andrew Cooper
  2017-07-27  5:46     ` Tian, Kevin
  2017-07-19 13:33   ` Boris Ostrovsky
  1 sibling, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-19 12:17 UTC (permalink / raw)
  To: Xen-devel; +Cc: Boris Ostrovsky, Kevin Tian, Jun Nakajima, Jan Beulich

On 19/07/17 12:57, Andrew Cooper wrote:
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I have just realised I can now drop msraddr_to_bitpos(), so have folded
the additional hunk into this patch.

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index d58eca3..207e2e7 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -225,12 +225,6 @@ static int is_core2_vpmu_msr(u32 msr_index, int
*type, int *index)
     }
 }
 
-static inline int msraddr_to_bitpos(int x)
-{
-    ASSERT(x == (x & 0x1fff));
-    return x;
-}
-
 static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;



* Re: [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them
  2017-07-19 11:57 ` [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them Andrew Cooper
  2017-07-19 12:17   ` Andrew Cooper
@ 2017-07-19 13:33   ` Boris Ostrovsky
  1 sibling, 0 replies; 18+ messages in thread
From: Boris Ostrovsky @ 2017-07-19 13:33 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima

On 07/19/2017 07:57 AM, Andrew Cooper wrote:
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



* Re: [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing
  2017-07-19 11:57 ` [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing Andrew Cooper
@ 2017-07-26  8:50   ` Sergey Dyasli
  2017-07-27  6:11   ` Tian, Kevin
  1 sibling, 0 replies; 18+ messages in thread
From: Sergey Dyasli @ 2017-07-26  8:50 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel
  Cc: Sergey Dyasli, Kevin Tian, jun.nakajima, JBeulich

On Wed, 2017-07-19 at 12:57 +0100, Andrew Cooper wrote:
> Currently, the following sequence of actions:
> 
>  * VMPTRLD (creates a mapping, likely pointing at gfn 0 for an empty vmcs)
>  * VMWRITE CPU_BASED_VM_EXEC_CONTROL (completed by hardware)
>  * VMWRITE MSR_BITMAP (completed by hardware)
>  * VMLAUNCH
> 
> results in an L2 guest running with ACTIVATE_MSR_BITMAP set, but Xen using a
> stale mapping (likely gfn 0) when reading the interception bitmap.  The
> MSR_BITMAP field needs unconditionally intercepting even with VMCS shadowing,
> so Xen's mapping of the bitmap can be updated.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

-- 
Thanks,
Sergey

* Re: [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr()
  2017-07-19 11:57 ` [PATCH 1/6] x86/vmx: Improvements to vmx_{dis, en}able_intercept_for_msr() Andrew Cooper
@ 2017-07-27  5:43   ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  5:43 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 7:58 PM
> 
>  * Shorten the names to vmx_{clear,set}_msr_intercept()
>  * Use an enumeration for MSR_TYPE rather than a plain integer
>  * Introduce VMX_MSR_RW, as most callers alter both the read and write
>    intercept at the same time.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 2/6] x86/vpmu: Use vmx_{clear, set}_msr_intercept() rather than opencoding them
  2017-07-19 12:17   ` Andrew Cooper
@ 2017-07-27  5:46     ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  5:46 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Boris Ostrovsky, Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 8:18 PM
> 
> On 19/07/17 12:57, Andrew Cooper wrote:
> > No functional change.
> >
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> I have just realise I can now drop msraddr_to_bitpos(), so have folded
> the additional hunk into this patch.
> 
> diff --git a/xen/arch/x86/cpu/vpmu_intel.c
> b/xen/arch/x86/cpu/vpmu_intel.c
> index d58eca3..207e2e7 100644
> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
> @@ -225,12 +225,6 @@ static int is_core2_vpmu_msr(u32 msr_index, int
> *type, int *index)
>      }
>  }
> 
> -static inline int msraddr_to_bitpos(int x)
> -{
> -    ASSERT(x == (x & 0x1fff));
> -    return x;
> -}
> -
>  static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
>  {
>      unsigned int i;

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap
  2017-07-19 11:57 ` [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap Andrew Cooper
@ 2017-07-27  6:02   ` Tian, Kevin
  2017-07-27  8:30     ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  6:02 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 7:58 PM
> 
> This avoids opencoding the bitmap bases in accessor functions.  Introduce a
> build_assertions() function to check the structure layout against the manual
> definiton.  In addition, drop some stale comments and ASSERT() that callers
> pass an in-range MSR.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>, with a small comment:
[...]

> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-
> x86/hvm/vmx/vmcs.h
> index e318dc2..926e792 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -64,6 +64,14 @@ struct vmx_domain {
>      unsigned int status;
>  };
> 
> +/* Layout of the MSR bitmap, as interpreted by hardware. */
> +struct vmx_msr_bitmap {
> +    unsigned long read_low  [0x2000 / BITS_PER_LONG];
> +    unsigned long read_high [0x2000 / BITS_PER_LONG];
> +    unsigned long write_low [0x2000 / BITS_PER_LONG];
> +    unsigned long write_high[0x2000 / BITS_PER_LONG];
> +};
> +

what about taking this chance to define 0x2000 into a macro
for better readability?

Thanks
Kevin


* Re: [PATCH 4/6] x86/vvmx: Switch nested MSR intercept handling to use struct vmx_msr_bitmap
  2017-07-19 11:57 ` [PATCH 4/6] x86/vvmx: Switch nested MSR intercept handling to " Andrew Cooper
@ 2017-07-27  6:09   ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  6:09 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 7:58 PM
> 
> Rename vmx_check_msr_bitmap() to vmx_msr_is_intercepted() in order to
> more
> clearly identify what the boolean return value means.  Change the int
> access_type to bool is_write.
> 
> The NULL pointer check is moved out, as it doesn't pertain to whether the
> MSR
> is intercepted or not.  The check is moved into nvmx_n2_vmexit_handler(),
> where it becomes a hard error in the case that ACTIVATE_MSR_BITMAP is set.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing
  2017-07-19 11:57 ` [PATCH 5/6] x86/vvmx: Fix handing of the MSR_BITMAP field with VMCS shadowing Andrew Cooper
  2017-07-26  8:50   ` Sergey Dyasli
@ 2017-07-27  6:11   ` Tian, Kevin
  1 sibling, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  6:11 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 7:58 PM
> 
> Currently, the following sequence of actions:
> 
>  * VMPTRLD (creates a mapping, likely pointing at gfn 0 for an empty vmcs)
>  * VMWRITE CPU_BASED_VM_EXEC_CONTROL (completed by hardware)
>  * VMWRITE MSR_BITMAP (completed by hardware)
>  * VMLAUNCH
> 
> results in an L2 guest running with ACTIVATE_MSR_BITMAP set, but Xen
> using a
> stale mapping (likely gfn 0) when reading the interception bitmap.  The
> MSR_BITMAP field needs unconditionally intercepting even with VMCS
> shadowing,
> so Xen's mapping of the bitmap can be updated.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 6/6] x86/vvmx: Fix auditing of MSR_BITMAP parameter
  2017-07-19 11:57 ` [PATCH 6/6] x86/vvmx: Fix auditing of MSR_BITMAP parameter Andrew Cooper
@ 2017-07-27  6:13   ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  6:13 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, July 19, 2017 7:58 PM
> 
> The MSR_BITMAP field is required to be page aligned.  Also switch gpa to be
> a
> uint64_t, as the MSR_BITMAP is strictly a 64bit VMCS field.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap
  2017-07-27  6:02   ` Tian, Kevin
@ 2017-07-27  8:30     ` Andrew Cooper
  2017-07-27  9:01       ` Tian, Kevin
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-07-27  8:30 UTC (permalink / raw)
  To: Tian, Kevin, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

On 27/07/2017 07:02, Tian, Kevin wrote:
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> Sent: Wednesday, July 19, 2017 7:58 PM
>>
>> This avoids opencoding the bitmap bases in accessor functions.  Introduce a
>> build_assertions() function to check the structure layout against the manual
>> definiton.  In addition, drop some stale comments and ASSERT() that callers
>> pass an in-range MSR.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>, with a small comment:
> [...]
>
>> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-
>> x86/hvm/vmx/vmcs.h
>> index e318dc2..926e792 100644
>> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
>> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
>> @@ -64,6 +64,14 @@ struct vmx_domain {
>>      unsigned int status;
>>  };
>>
>> +/* Layout of the MSR bitmap, as interpreted by hardware. */
>> +struct vmx_msr_bitmap {
>> +    unsigned long read_low  [0x2000 / BITS_PER_LONG];
>> +    unsigned long read_high [0x2000 / BITS_PER_LONG];
>> +    unsigned long write_low [0x2000 / BITS_PER_LONG];
>> +    unsigned long write_high[0x2000 / BITS_PER_LONG];
>> +};
>> +
> what about taking this chance to define 0x2000 into a macro
> for better readability?

What would you suggest to make this more readable?

The current way it is expressed by the manuals is that the bitmaps
cover MSRs 0 -> 0x1fff and 0xc0000000 -> 0xc0001fff, which is where the
0x2000 is derived from.

Would this be better?

/* Layout of the MSR bitmap, as interpreted by hardware. */
struct vmx_msr_bitmap {
    unsigned long read_low  [0x2000 / BITS_PER_LONG]; /* Reads:  0 -> 0x1fff. */
    unsigned long read_high [0x2000 / BITS_PER_LONG]; /* Reads:  0xc0000000 -> 0xc0001fff. */
    unsigned long write_low [0x2000 / BITS_PER_LONG]; /* Writes: 0 -> 0x1fff. */
    unsigned long write_high[0x2000 / BITS_PER_LONG]; /* Writes: 0xc0000000 -> 0xc0001fff. */
};

~Andrew


* Re: [PATCH 3/6] x86/vmx: Introduce and use struct vmx_msr_bitmap
  2017-07-27  8:30     ` Andrew Cooper
@ 2017-07-27  9:01       ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2017-07-27  9:01 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Nakajima, Jun, Jan Beulich

> From: Andrew Cooper [mailto:amc96@hermes.cam.ac.uk] On Behalf Of
> Andrew Cooper
> 
> On 27/07/2017 07:02, Tian, Kevin wrote:
> >> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> >> Sent: Wednesday, July 19, 2017 7:58 PM
> >>
> >> This avoids opencoding the bitmap bases in accessor functions.  Introduce
> a
> >> build_assertions() function to check the structure layout against the
> manual
> >> definiton.  In addition, drop some stale comments and ASSERT() that
> callers
> >> pass an in-range MSR.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Acked-by: Kevin Tian <kevin.tian@intel.com>, with a small comment:
> > [...]
> >
> >> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-
> >> x86/hvm/vmx/vmcs.h
> >> index e318dc2..926e792 100644
> >> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> >> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> >> @@ -64,6 +64,14 @@ struct vmx_domain {
> >>      unsigned int status;
> >>  };
> >>
> >> +/* Layout of the MSR bitmap, as interpreted by hardware. */
> >> +struct vmx_msr_bitmap {
> >> +    unsigned long read_low  [0x2000 / BITS_PER_LONG];
> >> +    unsigned long read_high [0x2000 / BITS_PER_LONG];
> >> +    unsigned long write_low [0x2000 / BITS_PER_LONG];
> >> +    unsigned long write_high[0x2000 / BITS_PER_LONG];
> >> +};
> >> +
> > what about taking this chance to define 0x2000 into a macro
> > for better readability?
> 
> What would you suggest to make this more readable?
> 
> The current way it is expressed by the manuals is that the bitmaps
> covers MSRs 0 -> 0x1fff and 0xc0000000 -> 0xc0001fff which is where the
> 0x2000 is derived from.
> 
> Would this be better?
> 
> /* Layout of the MSR bitmap, as interpreted by hardware. */
> struct vmx_msr_bitmap {
>     unsigned long read_low  [0x2000 / BITS_PER_LONG]; /* Reads:  0 -> 0x1fff. */
>     unsigned long read_high [0x2000 / BITS_PER_LONG]; /* Reads:  0xc0000000 -> 0xc0001fff. */
>     unsigned long write_low [0x2000 / BITS_PER_LONG]; /* Writes: 0 -> 0x1fff. */
>     unsigned long write_high[0x2000 / BITS_PER_LONG]; /* Writes: 0xc0000000 -> 0xc0001fff. */
> };
> 

yes, this way is better.
