* [PATCH v2 0/3] x86/vmx: fix for vmentry failure with TSX bits in LBR
@ 2017-02-17 15:42 Sergey Dyasli
  2017-02-17 15:42 ` [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr() Sergey Dyasli
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-17 15:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

The first 2 patches are a general optimization which is nice to have
prior to the 3rd patch, which contains the real fix.

A similar bug was fixed in Linux's perf subsystem in Jun 2016:

    commit 19fc9ddd61e059cc45464bdf6e8fa304bb94080f
    ("perf/x86/intel: Fix MSR_LAST_BRANCH_FROM_x bug when no TSX")

But the issue affects virtualized systems more severely, since all
LBR entries have to be fixed during vmentry.

v1 --> v2:
- qsort (heap sort) is replaced with a variant of insertion sort
- both host and guest MSR lists are now kept sorted
- vmx_find_guest_msr() is replaced with more generic vmx_find_msr()
- factored out changes to vmx_read/write_guest_msr() into a separate patch
- LBR_FORMAT_MSR_IA32_PERF_CAP define is added
- enum for LBR_FORMAT_* is added
- LBR_FROM_SIGNEXT_2MSB is now a define instead of static const
- u32/64 replaced with uint32/64_t in vmx.c
- Renamed "last_in_from_ip" --> "lbr_lastint_from"
- Added defines for the layout of struct lbr_info[]
- Added a comment to vmx_lbr_tsx_fixup() about MSR array being sorted
- Fixed tabs for cpu_has_* defines
- arch_vmx_struct::lbr_tsx_fixup_enabled is moved to fill up 1-byte hole
- Made lbr_from_start, lbr_from_end and lbr_lastint_from
  global static __read_mostly.
  Moved their initialization into vmx_lbr_tsx_fixup_check().
- Information about Live Migration is added to the commit message of 3/3

Sergey Dyasli (3):
  x86/vmx: introduce vmx_find_msr()
  x86/vmx: optimize vmx_read/write_guest_msr()
  x86/vmx: fix vmentry failure with TSX bits in LBR

 xen/arch/x86/hvm/vmx/vmcs.c        | 71 +++++++++++++++++++++---------
 xen/arch/x86/hvm/vmx/vmx.c         | 90 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/cpufeature.h   |  3 ++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  2 +
 xen/include/asm-x86/msr-index.h    |  2 +
 5 files changed, 148 insertions(+), 20 deletions(-)

-- 
2.9.3



* [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr()
  2017-02-17 15:42 [PATCH v2 0/3] x86/vmx: fix for vmentry failure with TSX bits in LBR Sergey Dyasli
@ 2017-02-17 15:42 ` Sergey Dyasli
       [not found]   ` <AADFC41AFE54684AB9EE6CBC0274A5D190C40BC0@SHSMSX101.ccr.corp.intel.com>
  2017-02-17 15:42 ` [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr() Sergey Dyasli
  2017-02-17 15:42 ` [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR Sergey Dyasli
  2 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-17 15:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

Modify vmx_add_msr() to use a variation of the insertion sort algorithm:
find a place for the new entry and shift all subsequent elements before
insertion.

The new vmx_find_msr() exploits the fact that the MSR list is now sorted
and reuses the existing code for binary search.
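
For illustration, the following is a minimal, self-contained sketch of the
same pattern in plain C (sorted insert followed by a bsearch() lookup).  It
is not the Xen code itself, and it assumes the caller has reserved room for
the new entry:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct msr_entry { uint32_t index; uint64_t data; };

    static int entry_cmp(const void *key, const void *elt)
    {
        const uint32_t *msr = key;
        const struct msr_entry *entry = elt;

        return (*msr > entry->index) - (*msr < entry->index);
    }

    /* Insert msr while keeping entries[] sorted; returns the new count. */
    static size_t insert_sorted(struct msr_entry *entries, size_t count,
                                uint32_t msr)
    {
        size_t idx;

        for ( idx = 0; idx < count && entries[idx].index <= msr; idx++ )
            if ( entries[idx].index == msr )
                return count;                     /* already present */

        /* Shift the tail up by one slot and place the new entry. */
        memmove(&entries[idx + 1], &entries[idx],
                (count - idx) * sizeof(*entries));
        entries[idx] = (struct msr_entry){ .index = msr };

        return count + 1;
    }

    static struct msr_entry *find_entry(struct msr_entry *entries,
                                        size_t count, uint32_t msr)
    {
        /* The array is sorted, so a standard binary search suffices. */
        return bsearch(&msr, entries, count, sizeof(*entries), entry_cmp);
    }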

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v1 --> v2:
- qsort (heap sort) is replaced with a variant of insertion sort
- both host and guest MSR lists are now kept sorted
- vmx_find_guest_msr() is replaced with more generic vmx_find_msr()

 xen/arch/x86/hvm/vmx/vmcs.c        | 45 ++++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 454d444..49587d6 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1307,6 +1307,44 @@ static int construct_vmcs(struct vcpu *v)
     return 0;
 }
 
+static int vmx_msr_entry_key_cmp(const void *key, const void *elt)
+{
+    const u32 *msr = key;
+    const struct vmx_msr_entry *entry = elt;
+
+    if ( *msr > entry->index )
+        return 1;
+    if ( *msr < entry->index )
+        return -1;
+
+    return 0;
+}
+
+struct vmx_msr_entry *vmx_find_msr(u32 msr, int type)
+{
+    struct vcpu *curr = current;
+    unsigned int msr_count;
+    struct vmx_msr_entry *msr_area;
+
+    if ( type == VMX_GUEST_MSR )
+    {
+        msr_count = curr->arch.hvm_vmx.msr_count;
+        msr_area = curr->arch.hvm_vmx.msr_area;
+    }
+    else
+    {
+        ASSERT(type == VMX_HOST_MSR);
+        msr_count = curr->arch.hvm_vmx.host_msr_count;
+        msr_area = curr->arch.hvm_vmx.host_msr_area;
+    }
+
+    if ( msr_area == NULL )
+        return NULL;
+
+    return bsearch(&msr, msr_area, msr_count, sizeof(struct vmx_msr_entry),
+                   vmx_msr_entry_key_cmp);
+}
+
 int vmx_read_guest_msr(u32 msr, u64 *val)
 {
     struct vcpu *curr = current;
@@ -1375,14 +1413,17 @@ int vmx_add_msr(u32 msr, int type)
             __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
     }
 
-    for ( idx = 0; idx < *msr_count; idx++ )
+    for ( idx = 0; (*msr_area)[idx].index <= msr && idx < *msr_count; idx++ )
         if ( (*msr_area)[idx].index == msr )
             return 0;
 
     if ( *msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
         return -ENOSPC;
 
-    msr_area_elem = *msr_area + *msr_count;
+    memmove(*msr_area + idx + 1, *msr_area + idx,
+            sizeof(*msr_area_elem) * (*msr_count - idx));
+
+    msr_area_elem = *msr_area + idx;
     msr_area_elem->index = msr;
     msr_area_elem->mbz = 0;
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index c30aab6..903af51 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -529,6 +529,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
+struct vmx_msr_entry *vmx_find_msr(u32 msr, int type);
 int vmx_add_msr(u32 msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.9.3



* [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr()
  2017-02-17 15:42 [PATCH v2 0/3] x86/vmx: fix for vmentry failure with TSX bits in LBR Sergey Dyasli
  2017-02-17 15:42 ` [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr() Sergey Dyasli
@ 2017-02-17 15:42 ` Sergey Dyasli
  2017-02-21  9:31   ` Tian, Kevin
  2017-02-22 10:16   ` Jan Beulich
  2017-02-17 15:42 ` [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR Sergey Dyasli
  2 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-17 15:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

Replace the linear scan with vmx_find_msr().  This way the time complexity
of searching for the required MSR is reduced from linear to logarithmic.
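
As a usage illustration (a hypothetical caller, not part of this patch, and
using MSR_IA32_LASTINTFROMIP purely as an example MSR), the helpers keep
their existing 0 / -ESRCH contract; only the lookup underneath them changes:

    uint64_t val;
    int rc = vmx_read_guest_msr(MSR_IA32_LASTINTFROMIP, &val);

    if ( rc == -ESRCH )
    {
        /* The MSR is not in the guest load list yet: add it and retry. */
        if ( (rc = vmx_add_guest_msr(MSR_IA32_LASTINTFROMIP)) == 0 )
            rc = vmx_read_guest_msr(MSR_IA32_LASTINTFROMIP, &val);
    }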

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v1 --> v2:
- a new patch, factored out from v1 1/2

 xen/arch/x86/hvm/vmx/vmcs.c | 26 ++++++++------------------
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 49587d6..6c86147 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1347,17 +1347,12 @@ struct vmx_msr_entry *vmx_find_msr(u32 msr, int type)
 
 int vmx_read_guest_msr(u32 msr, u64 *val)
 {
-    struct vcpu *curr = current;
-    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
-    const struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    struct vmx_msr_entry *ent;
 
-    for ( i = 0; i < msr_count; i++ )
+    if ( (ent = vmx_find_msr(msr, VMX_GUEST_MSR)) != NULL )
     {
-        if ( msr_area[i].index == msr )
-        {
-            *val = msr_area[i].data;
-            return 0;
-        }
+        *val = ent->data;
+        return 0;
     }
 
     return -ESRCH;
@@ -1365,17 +1360,12 @@ int vmx_read_guest_msr(u32 msr, u64 *val)
 
 int vmx_write_guest_msr(u32 msr, u64 val)
 {
-    struct vcpu *curr = current;
-    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    struct vmx_msr_entry *ent;
 
-    for ( i = 0; i < msr_count; i++ )
+    if ( (ent = vmx_find_msr(msr, VMX_GUEST_MSR)) != NULL )
     {
-        if ( msr_area[i].index == msr )
-        {
-            msr_area[i].data = val;
-            return 0;
-        }
+        ent->data = val;
+        return 0;
     }
 
     return -ESRCH;
-- 
2.9.3



* [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-17 15:42 [PATCH v2 0/3] x86/vmx: fix for vmentry failure with TSX bits in LBR Sergey Dyasli
  2017-02-17 15:42 ` [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr() Sergey Dyasli
  2017-02-17 15:42 ` [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr() Sergey Dyasli
@ 2017-02-17 15:42 ` Sergey Dyasli
  2017-02-21  9:13   ` Tian, Kevin
  2017-02-22 10:26   ` Jan Beulich
  2 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-17 15:42 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

During VM entry, H/W will automatically load the guest's MSRs from the
MSR-load area in the same way as they would be written by WRMSR.

However, under the following conditions:

    1. LBR (Last Branch Record) MSRs were placed in the MSR-load area
    2. Address format of LBR includes TSX bits 61:62
    3. CPU has TSX support disabled

VM entry will fail with a message in the log similar to:

    (XEN) [   97.239514] d1v0 vmentry failure (reason 0x80000022): MSR loading (entry 3)
    (XEN) [   97.239516]   msr 00000680 val 1fff800000102e60 (mbz 0)

This happens because of the following behaviour:

    - When capturing branches, LBR H/W will always clear bits 61:62
      regardless of the sign extension
    - For WRMSR, bits 61:62 are considered part of the sign extension

This bug affects only certain pCPUs (e.g. Haswell) with vCPUs that
use LBR.  Fix it by sign-extending TSX bits in all LBR entries during
VM entry in affected cases.
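
As a worked example (an editorial sketch, not part of the patch): the fixup
ORs bits 59:60 into bits 61:62 while leaving bit 63 untouched.  Applied to
the value rejected in the log above:

    #define LBR_FROM_SIGNEXT_2MSB  ((1ULL << 59) | (1ULL << 60))

    uint64_t data = 0x1fff800000102e60ULL;        /* rejected on vmentry */

    data |= (LBR_FROM_SIGNEXT_2MSB & data) << 2;  /* now 0x7fff800000102e60 */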

LBR MSRs are currently not Live Migrated. In order to implement such
functionality, the MSR levelling work has to be done first because
hosts can have different LBR formats.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v1 --> v2:
- LBR_FORMAT_MSR_IA32_PERF_CAP define is added
- enum for LBR_FORMAT_* is added
- LBR_FROM_SIGNEXT_2MSB is now a define instead of static const
- u32/64 replaced with uint32/64_t
- Renamed "last_in_from_ip" --> "lbr_lastint_from"
- Added defines for the layout of struct lbr_info[]
- Added a comment to vmx_lbr_tsx_fixup() about MSR array being sorted
- Fixed tabs for cpu_has_* defines
- arch_vmx_struct::lbr_tsx_fixup_enabled is moved to fill up 1-byte hole
- Made lbr_from_start, lbr_from_end and lbr_lastint_from
  global static __read_mostly.
  Moved their initialization into vmx_lbr_tsx_fixup_check().
- Information about Live Migration is added to the commit message

 xen/arch/x86/hvm/vmx/vmx.c         | 90 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/cpufeature.h   |  3 ++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
 xen/include/asm-x86/msr-index.h    |  2 +
 4 files changed, 96 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 42f4fbd..3547b78 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2284,6 +2284,8 @@ static void pi_notification_interrupt(struct cpu_user_regs *regs)
     raise_softirq(VCPU_KICK_SOFTIRQ);
 }
 
+static void __init vmx_lbr_tsx_fixup_check(void);
+
 const struct hvm_function_table * __init start_vmx(void)
 {
     set_in_cr4(X86_CR4_VMXE);
@@ -2353,6 +2355,8 @@ const struct hvm_function_table * __init start_vmx(void)
 
     setup_vmcs_dump();
 
+    vmx_lbr_tsx_fixup_check();
+
     return &vmx_function_table;
 }
 
@@ -2513,6 +2517,14 @@ static int vmx_cr_access(unsigned long exit_qualification)
     return X86EMUL_OKAY;
 }
 
+/* This defines the layout of struct lbr_info[] */
+#define LBR_LASTINT_FROM_IDX    0
+#define LBR_LASTINT_TO_IDX      1
+#define LBR_LASTBRANCH_TOS_IDX  2
+#define LBR_LASTBRANCH_FROM_IDX 3
+#define LBR_LASTBRANCH_TO_IDX   4
+#define LBR_LASTBRANCH_INFO     5
+
 static const struct lbr_info {
     u32 base, count;
 } p4_lbr[] = {
@@ -2618,6 +2630,56 @@ static const struct lbr_info *last_branch_msr_get(void)
     return NULL;
 }
 
+enum
+{
+    LBR_FORMAT_32                 = 0x0, /* 32-bit record format */
+    LBR_FORMAT_LIP                = 0x1, /* 64-bit LIP record format */
+    LBR_FORMAT_EIP                = 0x2, /* 64-bit EIP record format */
+    LBR_FORMAT_EIP_FLAGS          = 0x3, /* 64-bit EIP, Flags */
+    LBR_FORMAT_EIP_FLAGS_TSX      = 0x4, /* 64-bit EIP, Flags, TSX */
+    LBR_FORMAT_EIP_FLAGS_TSX_INFO = 0x5, /* 64-bit EIP, Flags, TSX, LBR_INFO */
+    LBR_FORMAT_EIP_FLAGS_CYCLES   = 0x6, /* 64-bit EIP, Flags, Cycles */
+};
+
+#define LBR_FROM_SIGNEXT_2MSB  ((1ULL << 59) | (1ULL << 60))
+
+static bool __read_mostly vmx_lbr_tsx_fixup_needed;
+static uint32_t __read_mostly lbr_from_start;
+static uint32_t __read_mostly lbr_from_end;
+static uint32_t __read_mostly lbr_lastint_from;
+
+static void __init vmx_lbr_tsx_fixup_check(void)
+{
+    bool tsx_support = cpu_has_hle || cpu_has_rtm;
+    uint64_t caps;
+    uint32_t lbr_format;
+
+    /* Fixup is needed only when TSX support is disabled ... */
+    if ( tsx_support )
+        return;
+
+    if ( !cpu_has_pdcm )
+        return;
+
+    rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+    lbr_format = caps & LBR_FORMAT_MSR_IA32_PERF_CAP;
+
+    /* ... and the address format of LBR includes TSX bits 61:62 */
+    if ( lbr_format == LBR_FORMAT_EIP_FLAGS_TSX )
+    {
+        const struct lbr_info *lbr = last_branch_msr_get();
+
+        if ( lbr == NULL )
+            return;
+
+        lbr_lastint_from = lbr[LBR_LASTINT_FROM_IDX].base;
+        lbr_from_start = lbr[LBR_LASTBRANCH_FROM_IDX].base;
+        lbr_from_end = lbr_from_start + lbr[LBR_LASTBRANCH_FROM_IDX].count;
+
+        vmx_lbr_tsx_fixup_needed = true;
+    }
+}
+
 static int is_last_branch_msr(u32 ecx)
 {
     const struct lbr_info *lbr = last_branch_msr_get();
@@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             for ( ; (rc == 0) && lbr->count; lbr++ )
                 for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
                     if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
+                    {
                         vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
+                        if ( vmx_lbr_tsx_fixup_needed )
+                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
+                    }
         }
 
         if ( (rc < 0) ||
@@ -3897,6 +3963,27 @@ out:
     }
 }
 
+static void vmx_lbr_tsx_fixup(void)
+{
+    struct vcpu *curr = current;
+    unsigned int msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    struct vmx_msr_entry *msr;
+
+    if ( (msr = vmx_find_msr(lbr_from_start, VMX_GUEST_MSR)) != NULL )
+    {
+        /*
+         * Sign extend into bits 61:62 while preserving bit 63
+         * The loop relies on the fact that MSR array is sorted.
+         */
+        for ( ; msr < msr_area + msr_count && msr->index < lbr_from_end; msr++ )
+            msr->data |= ((LBR_FROM_SIGNEXT_2MSB & msr->data) << 2);
+    }
+
+    if ( (msr = vmx_find_msr(lbr_lastint_from, VMX_GUEST_MSR)) != NULL )
+        msr->data |= ((LBR_FROM_SIGNEXT_2MSB & msr->data) << 2);
+}
+
 void vmx_vmenter_helper(const struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -3953,6 +4040,9 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
     }
 
  out:
+    if ( unlikely(curr->arch.hvm_vmx.lbr_tsx_fixup_enabled) )
+        vmx_lbr_tsx_fixup();
+
     HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
 
     __vmwrite(GUEST_RIP,    regs->rip);
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 39ad582..3b187ac 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -78,6 +78,9 @@
 #define cpu_has_cmp_legacy	boot_cpu_has(X86_FEATURE_CMP_LEGACY)
 #define cpu_has_tbm		boot_cpu_has(X86_FEATURE_TBM)
 #define cpu_has_itsc		boot_cpu_has(X86_FEATURE_ITSC)
+#define cpu_has_hle		boot_cpu_has(X86_FEATURE_HLE)
+#define cpu_has_rtm		boot_cpu_has(X86_FEATURE_RTM)
+#define cpu_has_pdcm		boot_cpu_has(X86_FEATURE_PDCM)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 903af51..2c9c3c9 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -147,6 +147,7 @@ struct arch_vmx_struct {
     uint8_t              vmx_realmode;
     /* Are we emulating rather than VMENTERing? */
     uint8_t              vmx_emulate;
+    bool                 lbr_tsx_fixup_enabled;
     /* Bitmask of segments that we can't safely use in virtual 8086 mode */
     uint16_t             vm86_segment_mask;
     /* Shadow CS, SS, DS, ES, FS, GS, TR while in virtual 8086 mode */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 98dbff1..8841419 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -55,6 +55,8 @@
 #define MSR_IA32_PEBS_ENABLE		0x000003f1
 #define MSR_IA32_DS_AREA		0x00000600
 #define MSR_IA32_PERF_CAPABILITIES	0x00000345
+/* Lower 6 bits define the format of the address in the LBR stack */
+#define LBR_FORMAT_MSR_IA32_PERF_CAP	0x3f
 
 #define MSR_IA32_BNDCFGS		0x00000d90
 #define IA32_BNDCFGS_ENABLE		0x00000001
-- 
2.9.3



* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-17 15:42 ` [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR Sergey Dyasli
@ 2017-02-21  9:13   ` Tian, Kevin
  2017-02-21 11:25     ` Andrew Cooper
  2017-02-22 10:26   ` Jan Beulich
  1 sibling, 1 reply; 17+ messages in thread
From: Tian, Kevin @ 2017-02-21  9:13 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Friday, February 17, 2017 11:43 PM
> 
> During VM entry, H/W will automatically load guest's MSRs from MSR-load
> area in the same way as they would be written by WRMSR.
> 
> However, under the following conditions:
> 
>     1. LBR (Last Branch Record) MSRs were placed in the MSR-load area
>     2. Address format of LBR includes TSX bits 61:62
>     3. CPU has TSX support disabled
> 
> VM entry will fail with a message in the log similar to:
> 
>     (XEN) [   97.239514] d1v0 vmentry failure (reason 0x80000022): MSR loading (entry
> 3)
>     (XEN) [   97.239516]   msr 00000680 val 1fff800000102e60 (mbz 0)
> 
> This happens because of the following behaviour:
> 
>     - When capturing branches, LBR H/W will always clear bits 61:62
>       regardless of the sign extension
>     - For WRMSR, bits 61:62 are considered the part of the sign extension
> 
> This bug affects only certain pCPUs (e.g. Haswell) with vCPUs that
> use LBR.  Fix it by sign-extending TSX bits in all LBR entries during
> VM entry in affected cases.
> 
> LBR MSRs are currently not Live Migrated. In order to implement such
> functionality, the MSR levelling work has to be done first because
> hosts can have different LBR formats.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> ---
> v1 --> v2:
> - LBR_FORMAT_MSR_IA32_PERF_CAP define is added
> - enum for LBR_FORMAT_* is added
> - LBR_FROM_SIGNEXT_2MSB is now a define instead of static const
> - u32/64 replaced with uint32/64_t
> - Renamed "last_in_from_ip" --> "lbr_lastint_from"
> - Added defines for the layout of struct lbr_info[]
> - Added a comment to vmx_lbr_tsx_fixup() about MSR array being sorted
> - Fixed tabs for cpu_has_* defines
> - arch_vmx_struct::lbr_tsx_fixup_enabled is moved to fill up 1-byte hole
> - Made lbr_from_start, lbr_from_end and lbr_lastint_from
>   global static __read_mostly.
>   Moved their initialization into vmx_lbr_tsx_fixup_check().
>  - Information about Live Migration is added to the commit message
> 
>  xen/arch/x86/hvm/vmx/vmx.c         | 90
> ++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/cpufeature.h   |  3 ++
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
>  xen/include/asm-x86/msr-index.h    |  2 +
>  4 files changed, 96 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 42f4fbd..3547b78 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2284,6 +2284,8 @@ static void pi_notification_interrupt(struct cpu_user_regs
> *regs)
>      raise_softirq(VCPU_KICK_SOFTIRQ);
>  }
> 
> +static void __init vmx_lbr_tsx_fixup_check(void);
> +
>  const struct hvm_function_table * __init start_vmx(void)
>  {
>      set_in_cr4(X86_CR4_VMXE);
> @@ -2353,6 +2355,8 @@ const struct hvm_function_table * __init start_vmx(void)
> 
>      setup_vmcs_dump();
> 
> +    vmx_lbr_tsx_fixup_check();
> +
>      return &vmx_function_table;
>  }
> 
> @@ -2513,6 +2517,14 @@ static int vmx_cr_access(unsigned long exit_qualification)
>      return X86EMUL_OKAY;
>  }
> 
> +/* This defines the layout of struct lbr_info[] */
> +#define LBR_LASTINT_FROM_IDX    0
> +#define LBR_LASTINT_TO_IDX      1
> +#define LBR_LASTBRANCH_TOS_IDX  2
> +#define LBR_LASTBRANCH_FROM_IDX 3
> +#define LBR_LASTBRANCH_TO_IDX   4
> +#define LBR_LASTBRANCH_INFO     5
> +
>  static const struct lbr_info {
>      u32 base, count;
>  } p4_lbr[] = {
> @@ -2618,6 +2630,56 @@ static const struct lbr_info *last_branch_msr_get(void)
>      return NULL;
>  }
> 
> +enum
> +{
> +    LBR_FORMAT_32                 = 0x0, /* 32-bit record format */
> +    LBR_FORMAT_LIP                = 0x1, /* 64-bit LIP record format */
> +    LBR_FORMAT_EIP                = 0x2, /* 64-bit EIP record format */
> +    LBR_FORMAT_EIP_FLAGS          = 0x3, /* 64-bit EIP, Flags */
> +    LBR_FORMAT_EIP_FLAGS_TSX      = 0x4, /* 64-bit EIP, Flags, TSX */
> +    LBR_FORMAT_EIP_FLAGS_TSX_INFO = 0x5, /* 64-bit EIP, Flags, TSX, LBR_INFO */
> +    LBR_FORMAT_EIP_FLAGS_CYCLES   = 0x6, /* 64-bit EIP, Flags, Cycles */
> +};
> +
> +#define LBR_FROM_SIGNEXT_2MSB  ((1ULL << 59) | (1ULL << 60))
> +
> +static bool __read_mostly vmx_lbr_tsx_fixup_needed;
> +static uint32_t __read_mostly lbr_from_start;
> +static uint32_t __read_mostly lbr_from_end;
> +static uint32_t __read_mostly lbr_lastint_from;
> +
> +static void __init vmx_lbr_tsx_fixup_check(void)
> +{
> +    bool tsx_support = cpu_has_hle || cpu_has_rtm;
> +    uint64_t caps;
> +    uint32_t lbr_format;
> +
> +    /* Fixup is needed only when TSX support is disabled ... */
> +    if ( tsx_support )
> +        return;

!tsx_support

> +
> +    if ( !cpu_has_pdcm )
> +        return;

combine two ifs into one

> +
> +    rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
> +    lbr_format = caps & LBR_FORMAT_MSR_IA32_PERF_CAP;
> +
> +    /* ... and the address format of LBR includes TSX bits 61:62 */

what's "..."?

> +    if ( lbr_format == LBR_FORMAT_EIP_FLAGS_TSX )
> +    {
> +        const struct lbr_info *lbr = last_branch_msr_get();
> +
> +        if ( lbr == NULL )
> +            return;

move above check before rdmsrl. At least avoid one unnecessary
msr read when LBR is not available.

> +
> +        lbr_lastint_from = lbr[LBR_LASTINT_FROM_IDX].base;
> +        lbr_from_start = lbr[LBR_LASTBRANCH_FROM_IDX].base;
> +        lbr_from_end = lbr_from_start + lbr[LBR_LASTBRANCH_FROM_IDX].count;
> +
> +        vmx_lbr_tsx_fixup_needed = true;
> +    }
> +}
> +
>  static int is_last_branch_msr(u32 ecx)
>  {
>      const struct lbr_info *lbr = last_branch_msr_get();
> @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t
> msr_content)
>              for ( ; (rc == 0) && lbr->count; lbr++ )
>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
>                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> +                    {
>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R |
> MSR_TYPE_W);
> +                        if ( vmx_lbr_tsx_fixup_needed )
> +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
> +                    }
>          }
> 
>          if ( (rc < 0) ||
> @@ -3897,6 +3963,27 @@ out:
>      }
>  }
> 
> +static void vmx_lbr_tsx_fixup(void)
> +{
> +    struct vcpu *curr = current;
> +    unsigned int msr_count = curr->arch.hvm_vmx.msr_count;
> +    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    struct vmx_msr_entry *msr;
> +
> +    if ( (msr = vmx_find_msr(lbr_from_start, VMX_GUEST_MSR)) != NULL )

Curious whether NULL is a valid situation here. If not we need handle it.
Otherwise please ignore this comment.

> +    {
> +        /*
> +         * Sign extend into bits 61:62 while preserving bit 63
> +         * The loop relies on the fact that MSR array is sorted.
> +         */
> +        for ( ; msr < msr_area + msr_count && msr->index < lbr_from_end; msr++ )
> +            msr->data |= ((LBR_FROM_SIGNEXT_2MSB & msr->data) << 2);
> +    }
> +
> +    if ( (msr = vmx_find_msr(lbr_lastint_from, VMX_GUEST_MSR)) != NULL )
> +        msr->data |= ((LBR_FROM_SIGNEXT_2MSB & msr->data) << 2);
> +}
> +
>  void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>  {
>      struct vcpu *curr = current;
> @@ -3953,6 +4040,9 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>      }
> 
>   out:
> +    if ( unlikely(curr->arch.hvm_vmx.lbr_tsx_fixup_enabled) )
> +        vmx_lbr_tsx_fixup();
> +
>      HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
> 
>      __vmwrite(GUEST_RIP,    regs->rip);
> diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
> index 39ad582..3b187ac 100644
> --- a/xen/include/asm-x86/cpufeature.h
> +++ b/xen/include/asm-x86/cpufeature.h
> @@ -78,6 +78,9 @@
>  #define cpu_has_cmp_legacy	boot_cpu_has(X86_FEATURE_CMP_LEGACY)
>  #define cpu_has_tbm		boot_cpu_has(X86_FEATURE_TBM)
>  #define cpu_has_itsc		boot_cpu_has(X86_FEATURE_ITSC)
> +#define cpu_has_hle		boot_cpu_has(X86_FEATURE_HLE)
> +#define cpu_has_rtm		boot_cpu_has(X86_FEATURE_RTM)
> +#define cpu_has_pdcm		boot_cpu_has(X86_FEATURE_PDCM)
> 
>  enum _cache_type {
>      CACHE_TYPE_NULL = 0,
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h
> b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 903af51..2c9c3c9 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -147,6 +147,7 @@ struct arch_vmx_struct {
>      uint8_t              vmx_realmode;
>      /* Are we emulating rather than VMENTERing? */
>      uint8_t              vmx_emulate;
> +    bool                 lbr_tsx_fixup_enabled;
>      /* Bitmask of segments that we can't safely use in virtual 8086 mode */
>      uint16_t             vm86_segment_mask;
>      /* Shadow CS, SS, DS, ES, FS, GS, TR while in virtual 8086 mode */
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index 98dbff1..8841419 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -55,6 +55,8 @@
>  #define MSR_IA32_PEBS_ENABLE		0x000003f1
>  #define MSR_IA32_DS_AREA		0x00000600
>  #define MSR_IA32_PERF_CAPABILITIES	0x00000345
> +/* Lower 6 bits define the format of the address in the LBR stack */
> +#define LBR_FORMAT_MSR_IA32_PERF_CAP	0x3f
> 
>  #define MSR_IA32_BNDCFGS		0x00000d90
>  #define IA32_BNDCFGS_ENABLE		0x00000001
> --
> 2.9.3



* Re: [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr()
  2017-02-17 15:42 ` [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr() Sergey Dyasli
@ 2017-02-21  9:31   ` Tian, Kevin
  2017-02-22 10:16   ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-02-21  9:31 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Friday, February 17, 2017 11:43 PM
> 
> Replace linear scan with vmx_find_msr().  This way the time complexity
> of searching for required MSR reduces from linear to logarithmic.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-21  9:13   ` Tian, Kevin
@ 2017-02-21 11:25     ` Andrew Cooper
  2017-02-22  2:56       ` Tian, Kevin
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Cooper @ 2017-02-21 11:25 UTC (permalink / raw)
  To: Tian, Kevin, Sergey Dyasli, xen-devel; +Cc: Jan Beulich, Nakajima, Jun

On 21/02/17 09:13, Tian, Kevin wrote:
>> @@ -2618,6 +2630,56 @@ static const struct lbr_info *last_branch_msr_get(void)
>>      return NULL;
>>  }
>>
>> +enum
>> +{
>> +    LBR_FORMAT_32                 = 0x0, /* 32-bit record format */
>> +    LBR_FORMAT_LIP                = 0x1, /* 64-bit LIP record format */
>> +    LBR_FORMAT_EIP                = 0x2, /* 64-bit EIP record format */
>> +    LBR_FORMAT_EIP_FLAGS          = 0x3, /* 64-bit EIP, Flags */
>> +    LBR_FORMAT_EIP_FLAGS_TSX      = 0x4, /* 64-bit EIP, Flags, TSX */
>> +    LBR_FORMAT_EIP_FLAGS_TSX_INFO = 0x5, /* 64-bit EIP, Flags, TSX, LBR_INFO */
>> +    LBR_FORMAT_EIP_FLAGS_CYCLES   = 0x6, /* 64-bit EIP, Flags, Cycles */
>> +};
>> +
>> +#define LBR_FROM_SIGNEXT_2MSB  ((1ULL << 59) | (1ULL << 60))
>> +
>> +static bool __read_mostly vmx_lbr_tsx_fixup_needed;
>> +static uint32_t __read_mostly lbr_from_start;
>> +static uint32_t __read_mostly lbr_from_end;
>> +static uint32_t __read_mostly lbr_lastint_from;
>> +
>> +static void __init vmx_lbr_tsx_fixup_check(void)
>> +{
>> +    bool tsx_support = cpu_has_hle || cpu_has_rtm;
>> +    uint64_t caps;
>> +    uint32_t lbr_format;
>> +
>> +    /* Fixup is needed only when TSX support is disabled ... */
>> +    if ( tsx_support )
>> +        return;
> !tsx_support

The original code is correct.  This problem only manifests when TSX is
unavailable.

>
>> +
>> +    if ( !cpu_has_pdcm )
>> +        return;
> combine two ifs into one
>
>> +
>> +    rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
>> +    lbr_format = caps & LBR_FORMAT_MSR_IA32_PERF_CAP;
>> +
>> +    /* ... and the address format of LBR includes TSX bits 61:62 */
> what's "..."?

From the previous comment.

>
>> +    if ( lbr_format == LBR_FORMAT_EIP_FLAGS_TSX )
>> +    {
>> +        const struct lbr_info *lbr = last_branch_msr_get();
>> +
>> +        if ( lbr == NULL )
>> +            return;
> move above check before rdmsrl. At least avoid one unnecessary
> msr read when LBR is not available.

Which check and which rdmsr?  In reality, if cpu_has_pdcm is set, we
would always expect to have LBR.

>
>> +
>> +        lbr_lastint_from = lbr[LBR_LASTINT_FROM_IDX].base;
>> +        lbr_from_start = lbr[LBR_LASTBRANCH_FROM_IDX].base;
>> +        lbr_from_end = lbr_from_start + lbr[LBR_LASTBRANCH_FROM_IDX].count;
>> +
>> +        vmx_lbr_tsx_fixup_needed = true;
>> +    }
>> +}
>> +
>>  static int is_last_branch_msr(u32 ecx)
>>  {
>>      const struct lbr_info *lbr = last_branch_msr_get();
>> @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t
>> msr_content)
>>              for ( ; (rc == 0) && lbr->count; lbr++ )
>>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
>>                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
>> +                    {
>>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R |
>> MSR_TYPE_W);
>> +                        if ( vmx_lbr_tsx_fixup_needed )
>> +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
>> +                    }
>>          }
>>
>>          if ( (rc < 0) ||
>> @@ -3897,6 +3963,27 @@ out:
>>      }
>>  }
>>
>> +static void vmx_lbr_tsx_fixup(void)
>> +{
>> +    struct vcpu *curr = current;
>> +    unsigned int msr_count = curr->arch.hvm_vmx.msr_count;
>> +    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
>> +    struct vmx_msr_entry *msr;
>> +
>> +    if ( (msr = vmx_find_msr(lbr_from_start, VMX_GUEST_MSR)) != NULL )
> Curious whether NULL is a valid situation here. If not we need handle it.
> Otherwise please ignore this comment.

It is safer in the case that this function gets called and there are no
LBR MSRs in the load list.

~Andrew


* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-21 11:25     ` Andrew Cooper
@ 2017-02-22  2:56       ` Tian, Kevin
  0 siblings, 0 replies; 17+ messages in thread
From: Tian, Kevin @ 2017-02-22  2:56 UTC (permalink / raw)
  To: Andrew Cooper, Sergey Dyasli, xen-devel; +Cc: Jan Beulich, Nakajima, Jun

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Tuesday, February 21, 2017 7:25 PM
> 
> On 21/02/17 09:13, Tian, Kevin wrote:
> >> @@ -2618,6 +2630,56 @@ static const struct lbr_info *last_branch_msr_get(void)
> >>      return NULL;
> >>  }
> >>
> >> +enum
> >> +{
> >> +    LBR_FORMAT_32                 = 0x0, /* 32-bit record format */
> >> +    LBR_FORMAT_LIP                = 0x1, /* 64-bit LIP record format */
> >> +    LBR_FORMAT_EIP                = 0x2, /* 64-bit EIP record format */
> >> +    LBR_FORMAT_EIP_FLAGS          = 0x3, /* 64-bit EIP, Flags */
> >> +    LBR_FORMAT_EIP_FLAGS_TSX      = 0x4, /* 64-bit EIP, Flags, TSX */
> >> +    LBR_FORMAT_EIP_FLAGS_TSX_INFO = 0x5, /* 64-bit EIP, Flags, TSX, LBR_INFO
> */
> >> +    LBR_FORMAT_EIP_FLAGS_CYCLES   = 0x6, /* 64-bit EIP, Flags, Cycles */
> >> +};
> >> +
> >> +#define LBR_FROM_SIGNEXT_2MSB  ((1ULL << 59) | (1ULL << 60))
> >> +
> >> +static bool __read_mostly vmx_lbr_tsx_fixup_needed;
> >> +static uint32_t __read_mostly lbr_from_start;
> >> +static uint32_t __read_mostly lbr_from_end;
> >> +static uint32_t __read_mostly lbr_lastint_from;
> >> +
> >> +static void __init vmx_lbr_tsx_fixup_check(void)
> >> +{
> >> +    bool tsx_support = cpu_has_hle || cpu_has_rtm;
> >> +    uint64_t caps;
> >> +    uint32_t lbr_format;
> >> +
> >> +    /* Fixup is needed only when TSX support is disabled ... */
> >> +    if ( tsx_support )
> >> +        return;
> > !tsx_support
> 
> The original code is correct.  This problem only manifests when TSX is
> unavailable.
> 

yes. my bad.

> >
> >> +
> >> +    if ( !cpu_has_pdcm )
> >> +        return;
> > combine two ifs into one
> >
> >> +
> >> +    rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
> >> +    lbr_format = caps & LBR_FORMAT_MSR_IA32_PERF_CAP;
> >> +
> >> +    /* ... and the address format of LBR includes TSX bits 61:62 */
> > what's "..."?
> 
> From the previous comment.
> 
> >
> >> +    if ( lbr_format == LBR_FORMAT_EIP_FLAGS_TSX )
> >> +    {
> >> +        const struct lbr_info *lbr = last_branch_msr_get();
> >> +
> >> +        if ( lbr == NULL )
> >> +            return;
> > move above check before rdmsrl. At least avoid one unnecessary
> > msr read when LBR is not available.
> 
> Which check and which rdmsr?  In reality, if cpu_has_pdcm is set, we
> would always expect to have LBR.

I meant moving the above 4 lines before reading PERF_CAPABILITIES, but
it looks unnecessary on second thought.
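
(For reference, the suggested ordering was roughly the following sketch,
with caps and lbr_format declared as in the patch:)

    const struct lbr_info *lbr = last_branch_msr_get();

    if ( lbr == NULL )
        return;

    rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
    lbr_format = caps & LBR_FORMAT_MSR_IA32_PERF_CAP;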

> 
> >
> >> +
> >> +        lbr_lastint_from = lbr[LBR_LASTINT_FROM_IDX].base;
> >> +        lbr_from_start = lbr[LBR_LASTBRANCH_FROM_IDX].base;
> >> +        lbr_from_end = lbr_from_start + lbr[LBR_LASTBRANCH_FROM_IDX].count;
> >> +
> >> +        vmx_lbr_tsx_fixup_needed = true;
> >> +    }
> >> +}
> >> +
> >>  static int is_last_branch_msr(u32 ecx)
> >>  {
> >>      const struct lbr_info *lbr = last_branch_msr_get();
> >> @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr,
> uint64_t
> >> msr_content)
> >>              for ( ; (rc == 0) && lbr->count; lbr++ )
> >>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
> >>                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> >> +                    {
> >>                          vmx_disable_intercept_for_msr(v, lbr->base + i,
> MSR_TYPE_R |
> >> MSR_TYPE_W);
> >> +                        if ( vmx_lbr_tsx_fixup_needed )
> >> +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
> >> +                    }
> >>          }
> >>
> >>          if ( (rc < 0) ||
> >> @@ -3897,6 +3963,27 @@ out:
> >>      }
> >>  }
> >>
> >> +static void vmx_lbr_tsx_fixup(void)
> >> +{
> >> +    struct vcpu *curr = current;
> >> +    unsigned int msr_count = curr->arch.hvm_vmx.msr_count;
> >> +    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> >> +    struct vmx_msr_entry *msr;
> >> +
> >> +    if ( (msr = vmx_find_msr(lbr_from_start, VMX_GUEST_MSR)) != NULL )
> > Curious whether NULL is a valid situation here. If not we need handle it.
> > Otherwise please ignore this comment.
> 
> It is safer in the case that this function gets called and there are no
> LBR MSRs in the load list.
> 




* Re: [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr()
       [not found]   ` <AADFC41AFE54684AB9EE6CBC0274A5D190C40BC0@SHSMSX101.ccr.corp.intel.com>
@ 2017-02-22  8:37     ` Sergey Dyasli
  2017-02-22  8:40       ` Tian, Kevin
  0 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-22  8:37 UTC (permalink / raw)
  To: Kevin Tian
  Cc: Andrew Cooper, xen-devel, jun.nakajima, jbeulich, Sergey Dyasli

On Tue, 2017-02-21 at 09:29 +0000, Tian, Kevin wrote:
> > From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> > Sent: Friday, February 17, 2017 11:43 PM
> > 
> > Modify vmx_add_msr() to use a variation of insertion sort algorithm:
> > find a place for the new entry and shift all subsequent elements before
> > insertion.
> > 
> > The new vmx_find_msr() exploits the fact that MSR list is now sorted
> > and reuses the existing code for binary search.
> > 
> > Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> > ---
> > v1 --> v2:
> > - qsort (heap sort) is replaced with a variant of insertion sort
> > - both host and guest MSR lists are now kept sorted
> > - vmx_find_guest_msr() is replaced with more generic vmx_find_msr()
> > 
> >  xen/arch/x86/hvm/vmx/vmcs.c        | 45
> > ++++++++++++++++++++++++++++++++++++--
> >  xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
> >  2 files changed, 44 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> > index 454d444..49587d6 100644
> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -1307,6 +1307,44 @@ static int construct_vmcs(struct vcpu *v)
> >      return 0;
> >  }
> > 
> > +static int vmx_msr_entry_key_cmp(const void *key, const void *elt)
> 
> what is 'elt' short for?

The prototype was taken directly from bsearch():

    int (*cmp)(const void *key, const void *elt)

I believe that elt stands for element.

> 
> > +{
> > +    const u32 *msr = key;
> > +    const struct vmx_msr_entry *entry = elt;
> > +
> > +    if ( *msr > entry->index )
> > +        return 1;
> > +    if ( *msr < entry->index )
> > +        return -1;
> > +
> > +    return 0;
> > +}
> > +
> > +struct vmx_msr_entry *vmx_find_msr(u32 msr, int type)
> > +{
> > +    struct vcpu *curr = current;
> > +    unsigned int msr_count;
> > +    struct vmx_msr_entry *msr_area;
> > +
> > +    if ( type == VMX_GUEST_MSR )
> > +    {
> > +        msr_count = curr->arch.hvm_vmx.msr_count;
> > +        msr_area = curr->arch.hvm_vmx.msr_area;
> > +    }
> > +    else
> > +    {
> > +        ASSERT(type == VMX_HOST_MSR);
> > +        msr_count = curr->arch.hvm_vmx.host_msr_count;
> > +        msr_area = curr->arch.hvm_vmx.host_msr_area;
> > +    }
> > +
> > +    if ( msr_area == NULL )
> > +        return NULL;
> > +
> > +    return bsearch(&msr, msr_area, msr_count, sizeof(struct vmx_msr_entry),
> > +                   vmx_msr_entry_key_cmp);
> > +}
> > +
> >  int vmx_read_guest_msr(u32 msr, u64 *val)
> >  {
> >      struct vcpu *curr = current;
> > @@ -1375,14 +1413,17 @@ int vmx_add_msr(u32 msr, int type)
> >              __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
> >      }
> > 
> > -    for ( idx = 0; idx < *msr_count; idx++ )
> > +    for ( idx = 0; (*msr_area)[idx].index <= msr && idx < *msr_count; idx++ )
> 
> risk of out-of-boundary access. 

How exactly is an out-of-bounds access possible? The original condition

    idx < *msr_count

is still being checked on each loop iteration.

> 
> >          if ( (*msr_area)[idx].index == msr )
> >              return 0;
> > 
> >      if ( *msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
> >          return -ENOSPC;
> > 
> > -    msr_area_elem = *msr_area + *msr_count;
> > +    memmove(*msr_area + idx + 1, *msr_area + idx,
> > +            sizeof(*msr_area_elem) * (*msr_count - idx));
> > +
> > +    msr_area_elem = *msr_area + idx;
> >      msr_area_elem->index = msr;
> >      msr_area_elem->mbz = 0;
> > 
> > diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > index c30aab6..903af51 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > @@ -529,6 +529,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int
> > type);
> >  void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
> >  int vmx_read_guest_msr(u32 msr, u64 *val);
> >  int vmx_write_guest_msr(u32 msr, u64 val);
> > +struct vmx_msr_entry *vmx_find_msr(u32 msr, int type);
> >  int vmx_add_msr(u32 msr, int type);
> >  void vmx_vmcs_switch(paddr_t from, paddr_t to);
> >  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
> > --
> > 2.9.3
> 
> 
-- 
Thanks,
Sergey

* Re: [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr()
  2017-02-22  8:37     ` Sergey Dyasli
@ 2017-02-22  8:40       ` Tian, Kevin
  2017-02-22  8:45         ` Sergey Dyasli
  0 siblings, 1 reply; 17+ messages in thread
From: Tian, Kevin @ 2017-02-22  8:40 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: Andrew Cooper, Nakajima, Jun, jbeulich, xen-devel

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, February 22, 2017 4:38 PM
> 
> > >
> > > -    for ( idx = 0; idx < *msr_count; idx++ )
> > > +    for ( idx = 0; (*msr_area)[idx].index <= msr && idx < *msr_count; idx++ )
> >
> > risk of out-of-boundary access.
> 
> How exactly out-of-bounds access is possible? The original condition
> 
>     idx < *msr_count
> 
> Is still being checked on each loop iteration.
> 

Isn't "(*msr_area[idx]).index <= msr" checked before "idx < *msr_count"?

So if idx==*msr_count, you first hit an out-of-boundary access...

I think we should change the condition order here.
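
A sketch of the reordered check (testing the bound before dereferencing the
array) would be:

    for ( idx = 0; idx < *msr_count && (*msr_area)[idx].index <= msr; idx++ )
        if ( (*msr_area)[idx].index == msr )
            return 0;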

Thanks
Kevin

* Re: [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr()
  2017-02-22  8:40       ` Tian, Kevin
@ 2017-02-22  8:45         ` Sergey Dyasli
  2017-02-22 10:14           ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-22  8:45 UTC (permalink / raw)
  To: Kevin Tian
  Cc: Sergey Dyasli, xen-devel, jbeulich, jun.nakajima, Andrew Cooper

On Wed, 2017-02-22 at 08:40 +0000, Tian, Kevin wrote:
> > From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> > Sent: Wednesday, February 22, 2017 4:38 PM
> > 
> > > > 
> > > > -    for ( idx = 0; idx < *msr_count; idx++ )
> > > > +    for ( idx = 0; (*msr_area)[idx].index <= msr && idx < *msr_count; idx++ )
> > > 
> > > risk of out-of-boundary access.
> > 
> > How exactly out-of-bounds access is possible? The original condition
> > 
> >     idx < *msr_count
> > 
> > Is still being checked on each loop iteration.
> > 
> 
> Isn't "(*msr_area[idx]).index <= msr" checked before "idx < *msr_count"?
> 
> So if idx==*msr_count, you first hit an out-of-boundary access...
> 
> I think we should change the condition order here.
> 

You are right. I will fix this in v3.

-- 
Thanks,
Sergey

* Re: [PATCH v2 1/3] x86/vmx: introduce vmx_find_msr()
  2017-02-22  8:45         ` Sergey Dyasli
@ 2017-02-22 10:14           ` Jan Beulich
  0 siblings, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2017-02-22 10:14 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: Andrew Cooper, Kevin Tian, jun.nakajima, xen-devel

>>> On 22.02.17 at 09:45, <sergey.dyasli@citrix.com> wrote:
> On Wed, 2017-02-22 at 08:40 +0000, Tian, Kevin wrote:
>> > From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
>> > Sent: Wednesday, February 22, 2017 4:38 PM
>> > 
>> > > > 
>> > > > -    for ( idx = 0; idx < *msr_count; idx++ )
>> > > > +    for ( idx = 0; (*msr_area)[idx].index <= msr && idx < *msr_count; idx++ 
> )
>> > > 
>> > > risk of out-of-boundary access.
>> > 
>> > How exactly out-of-bounds access is possible? The original condition
>> > 
>> >     idx < *msr_count
>> > 
>> > Is still being checked on each loop iteration.
>> > 
>> 
>> Isn't "(*msr_area[idx]).index <= msr" checked before "idx < *msr_count"?
>> 
>> So if idx==*msr_count, you first hit an out-of-boundary access...
>> 
>> I think we should change the condition order here.
>> 
> 
> You are right. I will fix this in v3.

And with that taken care of
Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr()
  2017-02-17 15:42 ` [PATCH v2 2/3] x86/vmx: optimize vmx_read/write_guest_msr() Sergey Dyasli
  2017-02-21  9:31   ` Tian, Kevin
@ 2017-02-22 10:16   ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2017-02-22 10:16 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, xen-devel

>>> On 17.02.17 at 16:42, <sergey.dyasli@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1347,17 +1347,12 @@ struct vmx_msr_entry *vmx_find_msr(u32 msr, int type)
>  
>  int vmx_read_guest_msr(u32 msr, u64 *val)
>  {
> -    struct vcpu *curr = current;
> -    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
> -    const struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    struct vmx_msr_entry *ent;
>  
> -    for ( i = 0; i < msr_count; i++ )
> +    if ( (ent = vmx_find_msr(msr, VMX_GUEST_MSR)) != NULL )

I think this would read better (less parentheses etc) as

    struct vmx_msr_entry *ent = vmx_find_msr(msr, VMX_GUEST_MSR);
 
    if ( ent )

but either way
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-17 15:42 ` [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR Sergey Dyasli
  2017-02-21  9:13   ` Tian, Kevin
@ 2017-02-22 10:26   ` Jan Beulich
  2017-02-22 13:58     ` Sergey Dyasli
  1 sibling, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2017-02-22 10:26 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, xen-devel

>>> On 17.02.17 at 16:42, <sergey.dyasli@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2284,6 +2284,8 @@ static void pi_notification_interrupt(struct cpu_user_regs *regs)
>      raise_softirq(VCPU_KICK_SOFTIRQ);
>  }
>  
> +static void __init vmx_lbr_tsx_fixup_check(void);

vmx_ prefixes are pretty pointless for static functions in this
particular source file (there's at least one more below).

> @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>              for ( ; (rc == 0) && lbr->count; lbr++ )
>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
>                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> +                    {
>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
> +                        if ( vmx_lbr_tsx_fixup_needed )
> +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;

Is there anything wrong with

    v->arch.hvm_vmx.lbr_tsx_fixup_enabled = vmx_lbr_tsx_fixup_needed;

?

> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -147,6 +147,7 @@ struct arch_vmx_struct {
>      uint8_t              vmx_realmode;
>      /* Are we emulating rather than VMENTERing? */
>      uint8_t              vmx_emulate;
> +    bool                 lbr_tsx_fixup_enabled;
>      /* Bitmask of segments that we can't safely use in virtual 8086 mode */
>      uint16_t             vm86_segment_mask;
>      /* Shadow CS, SS, DS, ES, FS, GS, TR while in virtual 8086 mode */

Please put blank lines around your addition.

> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -55,6 +55,8 @@
>  #define MSR_IA32_PEBS_ENABLE		0x000003f1
>  #define MSR_IA32_DS_AREA		0x00000600
>  #define MSR_IA32_PERF_CAPABILITIES	0x00000345
> +/* Lower 6 bits define the format of the address in the LBR stack */
> +#define LBR_FORMAT_MSR_IA32_PERF_CAP	0x3f

Commonly the MSR name precedes the field specific name component.

Jan



* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-22 10:26   ` Jan Beulich
@ 2017-02-22 13:58     ` Sergey Dyasli
  2017-02-22 14:04       ` Andrew Cooper
  2017-02-22 14:13       ` Jan Beulich
  0 siblings, 2 replies; 17+ messages in thread
From: Sergey Dyasli @ 2017-02-22 13:58 UTC (permalink / raw)
  To: JBeulich
  Cc: Sergey Dyasli, Kevin Tian, Andrew Cooper, jun.nakajima, xen-devel

On Wed, 2017-02-22 at 03:26 -0700, Jan Beulich wrote:
> > > > On 17.02.17 at 16:42, <sergey.dyasli@citrix.com> wrote:
> > 
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -2284,6 +2284,8 @@ static void pi_notification_interrupt(struct cpu_user_regs *regs)
> >      raise_softirq(VCPU_KICK_SOFTIRQ);
> >  }
> >  
> > +static void __init vmx_lbr_tsx_fixup_check(void);
> 
> vmx_ prefixes are pretty pointless for static functions in this
> particular source file (there's at least one more below).

I agree on that. Will remove in v3.

> 
> > @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >              for ( ; (rc == 0) && lbr->count; lbr++ )
> >                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
> >                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> > +                    {
> >                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
> > +                        if ( vmx_lbr_tsx_fixup_needed )
> > +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
> 
> Is there anything wrong with
> 
>     v->arch.hvm_vmx.lbr_tsx_fixup_enabled = vmx_lbr_tsx_fixup_needed;
> 
> ?

Only the 80-character limit prevented me from doing it that way.

> 
> > --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > @@ -147,6 +147,7 @@ struct arch_vmx_struct {
> >      uint8_t              vmx_realmode;
> >      /* Are we emulating rather than VMENTERing? */
> >      uint8_t              vmx_emulate;
> > +    bool                 lbr_tsx_fixup_enabled;
> >      /* Bitmask of segments that we can't safely use in virtual 8086 mode */
> >      uint16_t             vm86_segment_mask;
> >      /* Shadow CS, SS, DS, ES, FS, GS, TR while in virtual 8086 mode */
> 
> Please put blank lines around your addition.

Will do in v3.

> 
> > --- a/xen/include/asm-x86/msr-index.h
> > +++ b/xen/include/asm-x86/msr-index.h
> > @@ -55,6 +55,8 @@
> >  #define MSR_IA32_PEBS_ENABLE		0x000003f1
> >  #define MSR_IA32_DS_AREA		0x00000600
> >  #define MSR_IA32_PERF_CAPABILITIES	0x00000345
> > +/* Lower 6 bits define the format of the address in the LBR stack */
> > +#define LBR_FORMAT_MSR_IA32_PERF_CAP	0x3f
> 
> Commonly the MSR name precedes the field specific name component.

I was trying to follow the naming style in that file but I might've
gotten it wrong. The new define can be renamed to a more appropriate
name, I'm open to suggestions.

-- 
Thanks,
Sergey

* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-22 13:58     ` Sergey Dyasli
@ 2017-02-22 14:04       ` Andrew Cooper
  2017-02-22 14:13       ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2017-02-22 14:04 UTC (permalink / raw)
  To: Sergey Dyasli, JBeulich; +Cc: Kevin Tian, jun.nakajima, xen-devel

On 22/02/17 13:58, Sergey Dyasli wrote:
>
>>> @@ -2876,7 +2938,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>              for ( ; (rc == 0) && lbr->count; lbr++ )
>>>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
>>>                      if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
>>> +                    {
>>>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
>>> +                        if ( vmx_lbr_tsx_fixup_needed )
>>> +                            v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
>> Is there anything wrong with
>>
>>     v->arch.hvm_vmx.lbr_tsx_fixup_enabled = vmx_lbr_tsx_fixup_needed;
>>
>> ?
> Only 80 characters limit prevented me from doing it that way.

Splitting after the = is acceptable, e.g.

v->arch.hvm_vmx.lbr_tsx_fixup_enabled =
    vmx_lbr_tsx_fixup_needed;

~Andrew


* Re: [PATCH v2 3/3] x86/vmx: fix vmentry failure with TSX bits in LBR
  2017-02-22 13:58     ` Sergey Dyasli
  2017-02-22 14:04       ` Andrew Cooper
@ 2017-02-22 14:13       ` Jan Beulich
  1 sibling, 0 replies; 17+ messages in thread
From: Jan Beulich @ 2017-02-22 14:13 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: AndrewCooper, Kevin Tian, jun.nakajima, xen-devel

>>> On 22.02.17 at 14:58, <sergey.dyasli@citrix.com> wrote:
> On Wed, 2017-02-22 at 03:26 -0700, Jan Beulich wrote:
>> > > > On 17.02.17 at 16:42, <sergey.dyasli@citrix.com> wrote:
>> > --- a/xen/include/asm-x86/msr-index.h
>> > +++ b/xen/include/asm-x86/msr-index.h
>> > @@ -55,6 +55,8 @@
>> >  #define MSR_IA32_PEBS_ENABLE		0x000003f1
>> >  #define MSR_IA32_DS_AREA		0x00000600
>> >  #define MSR_IA32_PERF_CAPABILITIES	0x00000345
>> > +/* Lower 6 bits define the format of the address in the LBR stack */
>> > +#define LBR_FORMAT_MSR_IA32_PERF_CAP	0x3f
>> 
>> Commonly the MSR name precedes the field specific name component.
> 
> I was trying to follow the naming style in that file but I might've
> gotten it wrong. The new define can be renamed to a more appropriate
> name, I'm open to suggestions.

MSR_IA32_PERF_CAP_LBR_FORMAT

Jan


