* [PATCH 0/6] x86/vmx: Misc fixes and improvements
@ 2018-05-28 14:27 Andrew Cooper
  2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit Andrew Cooper
                   ` (5 more replies)
  0 siblings, 6 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

Patch 1 might want to be considered for 4.11.  All others are 4.12 material at
this point.

Andrew Cooper (6):
  x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  x86: Improvements to ler debugging
  x86/pat: Simplify host PAT handling
  x86/vmx: Simplify PAT handling during vcpu construction
  x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  x86/vmx: Drop VMX signal for full real-mode

 docs/misc/xen-command-line.markdown |  6 ++++
 xen/arch/x86/acpi/suspend.c         |  3 +-
 xen/arch/x86/cpu/common.c           |  9 +-----
 xen/arch/x86/hvm/mtrr.c             |  2 +-
 xen/arch/x86/hvm/vmx/entry.S        |  9 ++++++
 xen/arch/x86/hvm/vmx/vmcs.c         | 22 +++++---------
 xen/arch/x86/hvm/vmx/vmx.c          | 14 ++-------
 xen/arch/x86/traps.c                | 58 ++++++++++++++++++-------------------
 xen/arch/x86/x86_64/traps.c         | 14 ++++++---
 xen/include/asm-x86/cpufeature.h    |  2 +-
 xen/include/asm-x86/cpufeatures.h   |  1 +
 xen/include/asm-x86/msr.h           |  2 +-
 xen/include/asm-x86/processor.h     |  7 ++++-
 xen/include/asm-x86/x86_64/page.h   |  8 +++++
 14 files changed, 83 insertions(+), 74 deletions(-)

-- 
2.1.4



* [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-05-29 10:33   ` Jan Beulich
  2018-05-30 17:34   ` [PATCH v2 " Andrew Cooper
  2018-05-28 14:27 ` [PATCH 2/6] x86: Improvements to ler debugging Andrew Cooper
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
updates a host MSR load list entry with the current hardware value of
MSR_DEBUGCTL.  This is wrong.

On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  The only case
where different behaviour is needed is if Xen is debugging itself, and this
needs setting up unconditionally for the lifetime of the VM, rather than based
on guest actions.

This could be fixed by using a host MSR load list entry set up during
construct_vmcs().  However, a more efficient option is to use an alternative
block in the VMExit path, keyed on whether hypervisor debugging has been
enabled.

In order to set this up, drop the per cpu ler_msr variable (as there is no
point having it per cpu when it will be the same everywhere), and use a single
read_mostly variable instead.  Split calc_ler_msr() out of percpu_traps_init()
for clarity, and only perform the calculation on the first call.

Finally, clean up do_debug().  Rearrange it to have a common tail, and call
out the LBR behaviour specifically.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

Initially, I tried to have a common xen_msr_debugctl variable, but
rip-relative addresses don't resolve correctly in alternative blocks.
LBR-only has been fine for ages, and I don't see that changing any time soon.
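
For reference, a sketch of roughly what the rejected variant looked like
(illustrative only; xen_msr_debugctl is the hypothetical variable mentioned
above):

        .macro restore_debugctl
            mov  xen_msr_debugctl(%rip), %eax      /* low 32 bits */
            mov  xen_msr_debugctl + 4(%rip), %edx  /* high 32 bits */
            mov  $MSR_IA32_DEBUGCTLMSR, %ecx
            wrmsr
        .endm
        ALTERNATIVE "", restore_debugctl, X86_FEATURE_XEN_LBR

The %rip-relative displacement ends up encoded against the replacement
section rather than the patch site, hence the immediate-only restore_lbr
below.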
---
 xen/arch/x86/hvm/vmx/entry.S      |  9 ++++++
 xen/arch/x86/hvm/vmx/vmx.c        |  3 +-
 xen/arch/x86/traps.c              | 58 +++++++++++++++++++--------------------
 xen/arch/x86/x86_64/traps.c       |  7 +++--
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 xen/include/asm-x86/msr.h         |  2 +-
 7 files changed, 45 insertions(+), 36 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index aa2f103..afd552f 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -41,6 +41,15 @@ ENTRY(vmx_asm_vmexit_handler)
         SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
+        .macro restore_lbr
+            mov $IA32_DEBUGCTLMSR_LBR, %eax
+            mov $MSR_IA32_DEBUGCTLMSR, %ecx
+            xor %edx, %edx
+            wrmsr
+        .endm
+        ALTERNATIVE "", restore_lbr, X86_FEATURE_XEN_LBR
+
         mov  %rsp,%rdi
         call vmx_vmexit_handler
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9707514..33d39f6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3120,8 +3120,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
                     }
         }
 
-        if ( (rc < 0) ||
-             (msr_content && (vmx_add_host_load_msr(msr) < 0)) )
+        if ( rc < 0 )
             hvm_inject_hw_exception(TRAP_machine_check, X86_EVENT_NO_EC);
         else
             __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 8a99174..74784a2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -96,8 +96,6 @@ string_param("nmi", opt_nmi);
 DEFINE_PER_CPU(uint64_t, efer);
 static DEFINE_PER_CPU(unsigned long, last_extable_addr);
 
-DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
-
 DEFINE_PER_CPU_READ_MOSTLY(struct desc_struct *, gdt_table);
 DEFINE_PER_CPU_READ_MOSTLY(struct desc_struct *, compat_gdt_table);
 
@@ -117,6 +115,9 @@ integer_param("debug_stack_lines", debug_stack_lines);
 static bool opt_ler;
 boolean_param("ler", opt_ler);
 
+/* LastExceptionFromIP on this hardware.  Zero if LER is not in use. */
+uint32_t __read_mostly ler_msr;
+
 #define stack_words_per_line 4
 #define ESP_BEFORE_EXCEPTION(regs) ((unsigned long *)regs->rsp)
 
@@ -1764,17 +1765,6 @@ void do_device_not_available(struct cpu_user_regs *regs)
     return;
 }
 
-static void ler_enable(void)
-{
-    u64 debugctl;
-
-    if ( !this_cpu(ler_msr) )
-        return;
-
-    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
-    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | IA32_DEBUGCTLMSR_LBR);
-}
-
 void do_debug(struct cpu_user_regs *regs)
 {
     unsigned long dr6;
@@ -1870,13 +1860,13 @@ void do_debug(struct cpu_user_regs *regs)
     v->arch.debugreg[6] |= (dr6 & ~X86_DR6_DEFAULT);
     v->arch.debugreg[6] &= (dr6 | ~X86_DR6_DEFAULT);
 
-    ler_enable();
     pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
-    return;
 
  out:
-    ler_enable();
-    return;
+
+    /* #DB automatically disabled LBR.  Reinstate it if debugging Xen. */
+    if ( cpu_has_xen_lbr )
+        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);
 }
 
 static void __init noinline __set_intr_gate(unsigned int n,
@@ -1920,38 +1910,46 @@ void load_TR(void)
         : "=m" (old_gdt) : "rm" (TSS_ENTRY << 3), "m" (tss_gdt) : "memory" );
 }
 
-void percpu_traps_init(void)
+static uint32_t calc_ler_msr(void)
 {
-    subarch_percpu_traps_init();
-
-    if ( !opt_ler )
-        return;
-
     switch ( boot_cpu_data.x86_vendor )
     {
     case X86_VENDOR_INTEL:
         switch ( boot_cpu_data.x86 )
         {
         case 6:
-            this_cpu(ler_msr) = MSR_IA32_LASTINTFROMIP;
-            break;
+            return MSR_IA32_LASTINTFROMIP;
+
         case 15:
-            this_cpu(ler_msr) = MSR_P4_LER_FROM_LIP;
-            break;
+            return MSR_P4_LER_FROM_LIP;
         }
         break;
+
     case X86_VENDOR_AMD:
         switch ( boot_cpu_data.x86 )
         {
         case 6:
         case 0xf ... 0x17:
-            this_cpu(ler_msr) = MSR_IA32_LASTINTFROMIP;
-            break;
+            return MSR_IA32_LASTINTFROMIP;
         }
         break;
     }
 
-    ler_enable();
+    return 0;
+}
+
+void percpu_traps_init(void)
+{
+    subarch_percpu_traps_init();
+
+    if ( !opt_ler )
+        return;
+
+    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
+        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
+
+    if ( cpu_has_xen_lbr )
+        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);
 }
 
 void __init init_idt_traps(void)
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index f7f6928..b040185 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -144,11 +144,12 @@ void show_registers(const struct cpu_user_regs *regs)
     printk("CPU:    %d\n", smp_processor_id());
     _show_registers(&fault_regs, fault_crs, context, v);
 
-    if ( this_cpu(ler_msr) && !guest_mode(regs) )
+    if ( ler_msr && !guest_mode(regs) )
     {
         u64 from, to;
-        rdmsrl(this_cpu(ler_msr), from);
-        rdmsrl(this_cpu(ler_msr) + 1, to);
+
+        rdmsrl(ler_msr, from);
+        rdmsrl(ler_msr + 1, to);
         printk("ler: %016lx -> %016lx\n", from, to);
     }
 }
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 2cf8f7e..b237da1 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -113,6 +113,7 @@
 #define cpu_has_aperfmperf      boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_lfence_dispatch boot_cpu_has(X86_FEATURE_LFENCE_DISPATCH)
 #define cpu_has_no_xpti         boot_cpu_has(X86_FEATURE_NO_XPTI)
+#define cpu_has_xen_lbr         boot_cpu_has(X86_FEATURE_XEN_LBR)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index b90aa2d..8e5cc53 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -32,3 +32,4 @@ XEN_CPUFEATURE(SC_RSB_PV,       (FSCAPINTS+0)*32+18) /* RSB overwrite needed for
 XEN_CPUFEATURE(SC_RSB_HVM,      (FSCAPINTS+0)*32+19) /* RSB overwrite needed for HVM */
 XEN_CPUFEATURE(NO_XPTI,         (FSCAPINTS+0)*32+20) /* XPTI mitigation not in use */
 XEN_CPUFEATURE(SC_MSR_IDLE,     (FSCAPINTS+0)*32+21) /* (SC_MSR_PV || SC_MSR_HVM) && default_xen_spec_ctrl */
+XEN_CPUFEATURE(XEN_LBR,         (FSCAPINTS+0)*32+22) /* Xen uses MSR_DEBUGCTL.LBR */
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index f14f265..9f6d3b2 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -241,7 +241,7 @@ static inline void write_efer(uint64_t val)
     wrmsrl(MSR_EFER, val);
 }
 
-DECLARE_PER_CPU(u32, ler_msr);
+extern uint32_t ler_msr;
 
 DECLARE_PER_CPU(uint32_t, tsc_aux);
 
-- 
2.1.4



* [PATCH 2/6] x86: Improvements to ler debugging
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
  2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-05-29 11:39   ` Jan Beulich
  2018-06-05  7:55   ` Tian, Kevin
  2018-05-28 14:27 ` [PATCH 3/6] x86/pat: Simplify host PAT handling Andrew Cooper
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

 * Command line documentation for what the option does.
 * Implement a canonicalise_addr() helper and replace the opencoded use in
   sign_extend_msr()
 * Canonicalise the ler pointers and print symbol information.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
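
As a reviewer aid (not part of the patch), a freestanding sketch of what the
new helper does, using illustrative constants for a 48-bit virtual address
width:

    #include <stdio.h>

    #define VADDR_BITS      48
    #define VADDR_TOP_BIT   (1UL << (VADDR_BITS - 1))  /* bit 47 */
    #define CANONICAL_MASK  (~0UL << VADDR_BITS)       /* bits 63:48 */

    /* Sign-extend bit 47 into the upper 16 bits. */
    static unsigned long canonicalise_addr(unsigned long addr)
    {
        if ( addr & VADDR_TOP_BIT )
            return addr | CANONICAL_MASK;
        else
            return addr & ~CANONICAL_MASK;
    }

    int main(void)
    {
        /* An LER value whose upper bits hold metadata rather than address bits. */
        unsigned long from = 0x2f0082d080345678UL;

        printf("%016lx -> %016lx\n", from, canonicalise_addr(from));
        /* Prints: 2f0082d080345678 -> ffff82d080345678 */
        return 0;
    }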
---
 docs/misc/xen-command-line.markdown | 6 ++++++
 xen/arch/x86/hvm/vmx/vmx.c          | 7 +------
 xen/arch/x86/x86_64/traps.c         | 7 ++++++-
 xen/include/asm-x86/x86_64/page.h   | 8 ++++++++
 4 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 8712a83..1212ebd 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1245,6 +1245,12 @@ if left disabled by the BIOS.
 ### ler (x86)
 > `= <boolean>`
 
+> Default: 0
+
+This option is intended for debugging purposes only.  Enable MSR_DEBUGCTL.LBR
+in hypervisor context to be able to dump the Last Interrupt/Exception To/From
+record with other registers.
+
 ### loglvl
 > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 33d39f6..8049264 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4183,12 +4183,7 @@ static void sign_extend_msr(u32 msr, int type)
     struct vmx_msr_entry *entry;
 
     if ( (entry = vmx_find_msr(msr, type)) != NULL )
-    {
-        if ( entry->data & VADDR_TOP_BIT )
-            entry->data |= CANONICAL_MASK;
-        else
-            entry->data &= ~CANONICAL_MASK;
-    }
+        entry->data = canonicalise_addr(entry->data);
 }
 
 static void bdw_erratum_bdf14_fixup(void)
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index b040185..ed02b78 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -150,7 +150,12 @@ void show_registers(const struct cpu_user_regs *regs)
 
         rdmsrl(ler_msr, from);
         rdmsrl(ler_msr + 1, to);
-        printk("ler: %016lx -> %016lx\n", from, to);
+
+        /* Upper bits may store metadata.  Re-canonicalise for printing. */
+        printk("ler: from %016"PRIx64" [%ps]\n",
+               from, _p(canonicalise_addr(from)));
+        printk("       to %016"PRIx64" [%ps]\n",
+               to, _p(canonicalise_addr(to)));
     }
 }
 
diff --git a/xen/include/asm-x86/x86_64/page.h b/xen/include/asm-x86/x86_64/page.h
index 05a0334..4fe0205 100644
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -34,6 +34,14 @@
 
 #ifndef __ASSEMBLY__
 
+static inline unsigned long canonicalise_addr(unsigned long addr)
+{
+    if ( addr & VADDR_TOP_BIT )
+        return addr | CANONICAL_MASK;
+    else
+        return addr & ~CANONICAL_MASK;
+}
+
 #include <asm/types.h>
 
 #include <xen/pdx.h>
-- 
2.1.4



* [PATCH 3/6] x86/pat: Simplify host PAT handling
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
  2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit Andrew Cooper
  2018-05-28 14:27 ` [PATCH 2/6] x86: Improvements to ler debugging Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-05-29 11:40   ` Jan Beulich
  2018-06-06  9:39   ` Roger Pau Monné
  2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

With the removal of the 32bit hypervisor build, host_pat is a constant value.
Drop the variable and the redundant cpu_has_pat predicate, and use a define
instead.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
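
For reference (not part of the patch), a freestanding sketch decoding the
constant to show that it covers all of the architectural memory types; the
encodings are the standard IA32_PAT values from the SDM:

    #include <stdio.h>

    #define XEN_MSR_PAT 0x050100070406UL

    static const char *pat_type(unsigned int e)
    {
        switch ( e )
        {
        case 0x00: return "UC";
        case 0x01: return "WC";
        case 0x04: return "WT";
        case 0x05: return "WP";
        case 0x06: return "WB";
        case 0x07: return "UC-";
        default:   return "reserved";
        }
    }

    int main(void)
    {
        unsigned int i;

        /* One byte per PAT entry, PAT0 in the lowest byte. */
        for ( i = 0; i < 8; i++ )
            printf("PAT%u = %s\n", i, pat_type((XEN_MSR_PAT >> (i * 8)) & 0xff));

        return 0;
    }

i.e. WB, WT, UC-, UC, WC, WP, UC, UC for PAT0..PAT7.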
---
 xen/arch/x86/acpi/suspend.c      | 3 +--
 xen/arch/x86/cpu/common.c        | 9 +--------
 xen/arch/x86/hvm/mtrr.c          | 2 +-
 xen/include/asm-x86/cpufeature.h | 1 -
 xen/include/asm-x86/processor.h  | 7 ++++++-
 5 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/acpi/suspend.c b/xen/arch/x86/acpi/suspend.c
index 044bd81..eecf357 100644
--- a/xen/arch/x86/acpi/suspend.c
+++ b/xen/arch/x86/acpi/suspend.c
@@ -95,8 +95,7 @@ void restore_rest_processor_state(void)
     /* Reload FPU state on next FPU use. */
     stts();
 
-    if (cpu_has_pat)
-        wrmsrl(MSR_IA32_CR_PAT, host_pat);
+    wrmsrl(MSR_IA32_CR_PAT, XEN_MSR_PAT);
 
     mtrr_bp_restore();
 }
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 528aff1..3548b12 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -47,12 +47,6 @@ unsigned int paddr_bits __read_mostly = 36;
 unsigned int hap_paddr_bits __read_mostly = 36;
 unsigned int vaddr_bits __read_mostly = VADDR_BITS;
 
-/*
- * Default host IA32_CR_PAT value to cover all memory types.
- * BIOS usually sets it to 0x07040600070406.
- */
-u64 host_pat = 0x050100070406;
-
 static unsigned int cleared_caps[NCAPINTS];
 static unsigned int forced_caps[NCAPINTS];
 
@@ -814,8 +808,7 @@ void cpu_init(void)
 	if (opt_cpu_info)
 		printk("Initializing CPU#%d\n", cpu);
 
-	if (cpu_has_pat)
-		wrmsrl(MSR_IA32_CR_PAT, host_pat);
+	wrmsrl(MSR_IA32_CR_PAT, XEN_MSR_PAT);
 
 	/* Install correct page table. */
 	write_ptbase(current);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index a61cc1e..c78e5c1 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -125,7 +125,7 @@ static int __init hvm_mtrr_pat_init(void)
     {
         for ( j = 0; j < PAT_TYPE_NUMS; j++ )
         {
-            if ( pat_cr_2_paf(host_pat, j) == i )
+            if ( pat_cr_2_paf(XEN_MSR_PAT, j) == i )
             {
                 pat_entry_tbl[i] = j;
                 break;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index b237da1..4bc6c91 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -37,7 +37,6 @@
 #define cpu_has_sep             boot_cpu_has(X86_FEATURE_SEP)
 #define cpu_has_mtrr            1
 #define cpu_has_pge             1
-#define cpu_has_pat             1
 #define cpu_has_pse36           boot_cpu_has(X86_FEATURE_PSE36)
 #define cpu_has_clflush         boot_cpu_has(X86_FEATURE_CLFLUSH)
 #define cpu_has_mmx             1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 9924cdf..ac1577c 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -97,6 +97,12 @@
                           X86_EFLAGS_NT|X86_EFLAGS_DF|X86_EFLAGS_IF|    \
                           X86_EFLAGS_TF)
 
+/*
+ * Host IA32_CR_PAT value to cover all memory types.  This is not the default
+ * MSR_PAT value, and is an ABI with PV guests.
+ */
+#define XEN_MSR_PAT 0x050100070406ul
+
 #ifndef __ASSEMBLY__
 
 struct domain;
@@ -145,7 +151,6 @@ extern bool probe_cpuid_faulting(void);
 extern void ctxt_switch_levelling(const struct vcpu *next);
 extern void (*ctxt_switch_masking)(const struct vcpu *next);
 
-extern u64 host_pat;
 extern bool_t opt_cpu_info;
 extern u32 cpuid_ext_features;
 extern u64 trampoline_misc_enable_off;
-- 
2.1.4



* [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-05-28 14:27 ` [PATCH 3/6] x86/pat: Simplify host PAT handling Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-05-29 11:41   ` Jan Beulich
                     ` (2 more replies)
  2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
  2018-05-28 14:27 ` [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode Andrew Cooper
  5 siblings, 3 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

The host PAT value is a compile time constant, and doesn't need to be read out
of hardware.  Merge this if block into the previous block, which has an
identical condition.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 70c2fb7..be02be1 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1246,17 +1246,9 @@ static int construct_vmcs(struct vcpu *v)
 
         ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
         __vmwrite(EPT_POINTER, ept->eptp);
-    }
-
-    if ( paging_mode_hap(d) )
-    {
-        u64 host_pat, guest_pat;
-
-        rdmsrl(MSR_IA32_CR_PAT, host_pat);
-        guest_pat = MSR_IA32_CR_PAT_RESET;
 
-        __vmwrite(HOST_PAT, host_pat);
-        __vmwrite(GUEST_PAT, guest_pat);
+        __vmwrite(HOST_PAT, XEN_MSR_PAT);
+        __vmwrite(GUEST_PAT, MSR_IA32_CR_PAT_RESET);
     }
     if ( cpu_has_vmx_mpx )
         __vmwrite(GUEST_BNDCFGS, 0);
-- 
2.1.4



* [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-05-29 11:43   ` Jan Beulich
                     ` (2 more replies)
  2018-05-28 14:27 ` [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode Andrew Cooper
  5 siblings, 3 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
VMCS being constructed.  Avoid dropping and re-acquiring the reference
multiple times.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
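
The shape being aimed for, in the abstract (illustrative sketch only; the
helper names are made up and merely stand in for vmx_vmcs_enter()/exit() and
the real setup):

    static int construct_foo(struct vcpu *v)
    {
        int rc = 0;

        enter_critical(v);        /* take the reference exactly once */

        if ( !alloc_something(v) )
        {
            rc = -ENOMEM;
            goto out;             /* no early release-and-return */
        }

        finish_setup(v);          /* remaining setup stays under the reference */

     out:
        exit_critical(v);         /* single place dropping the reference */
        return rc;
    }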
---
 xen/arch/x86/hvm/vmx/vmcs.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index be02be1..ce78f19 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -996,6 +996,7 @@ static int construct_vmcs(struct vcpu *v)
     struct domain *d = v->domain;
     u32 vmexit_ctl = vmx_vmexit_control;
     u32 vmentry_ctl = vmx_vmentry_control;
+    int rc;
 
     vmx_vmcs_enter(v);
 
@@ -1083,8 +1084,8 @@ static int construct_vmcs(struct vcpu *v)
 
         if ( msr_bitmap == NULL )
         {
-            vmx_vmcs_exit(v);
-            return -ENOMEM;
+            rc = -ENOMEM;
+            goto out;
         }
 
         memset(msr_bitmap, ~0, PAGE_SIZE);
@@ -1258,13 +1259,14 @@ static int construct_vmcs(struct vcpu *v)
     if ( cpu_has_vmx_tsc_scaling )
         __vmwrite(TSC_MULTIPLIER, d->arch.hvm_domain.tsc_scaling_ratio);
 
-    vmx_vmcs_exit(v);
-
     /* will update HOST & GUEST_CR3 as reqd */
     paging_update_paging_modes(v);
 
     vmx_vlapic_msr_changed(v);
 
+ out:
+    vmx_vmcs_exit(v);
+
     return 0;
 }
 
-- 
2.1.4



* [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode
  2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
@ 2018-05-28 14:27 ` Andrew Cooper
  2018-06-05  8:01   ` Tian, Kevin
  2018-06-06 10:03   ` Roger Pau Monné
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-28 14:27 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

The hvmloader code which used this signal was deleted 10 years ago (c/s
50b12df83 "x86 vmx: Remove vmxassist").  Furthermore, the value gets discarded
anyway because the HVM domain builder unconditionally sets %rax to 0 in the
same action it uses to set %rip to the appropriate entrypoint.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 8049264..4318b15 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -462,10 +462,6 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 
     vmx_install_vlapic_mapping(v);
 
-    /* %eax == 1 signals full real-mode support to the guest loader. */
-    if ( v->vcpu_id == 0 )
-        v->arch.user_regs.rax = 1;
-
     return 0;
 }
 
-- 
2.1.4



* Re: [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit Andrew Cooper
@ 2018-05-29 10:33   ` Jan Beulich
  2018-05-29 18:08     ` Andrew Cooper
  2018-05-30 17:34   ` [PATCH v2 " Andrew Cooper
  1 sibling, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2018-05-29 10:33 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
> updates a host MSR load list entry with the current hardware value of
> MSR_DEBUGCTL.  This is wrong.

"This is wrong" goes too far for my taste: It is not very efficient to do it that
way, but it's still correct. Unless, of course, the zeroing of the register
happens after the processing of the MSR load list (which I doubt it does).

> Initially, I tried to have a common xen_msr_debugctl variable, but
> rip-relative addresses don't resolve correctly in alternative blocks.
> LBR-only has been fine for ages, and I don't see that changing any time 
> soon.

The chosen solution is certainly fine, but the issue could have been
avoided by doing the load from memory ahead of the alternative block
(accepting that it also happens when the value isn't actually needed).

Another option would be to invert the sense of the feature flag,
patching NOPs over the register setup plus WRMSR.
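
I.e. something along these lines (untested, and the inverted feature name is
made up for illustration):

        .macro restore_lbr
            mov $IA32_DEBUGCTLMSR_LBR, %eax
            mov $MSR_IA32_DEBUGCTLMSR, %ecx
            xor %edx, %edx
            wrmsr
        .endm
        /* Replacement is empty, i.e. NOPs get patched over the WRMSR. */
        ALTERNATIVE restore_lbr, "", X86_FEATURE_XEN_NO_LBR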

> @@ -1764,17 +1765,6 @@ void do_device_not_available(struct cpu_user_regs *regs)
>      return;
>  }
>  
> -static void ler_enable(void)
> -{
> -    u64 debugctl;
> -
> -    if ( !this_cpu(ler_msr) )
> -        return;
> -
> -    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
> -    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | IA32_DEBUGCTLMSR_LBR);
> -}
> -
>  void do_debug(struct cpu_user_regs *regs)
>  {
>      unsigned long dr6;
> @@ -1870,13 +1860,13 @@ void do_debug(struct cpu_user_regs *regs)
>      v->arch.debugreg[6] |= (dr6 & ~X86_DR6_DEFAULT);
>      v->arch.debugreg[6] &= (dr6 | ~X86_DR6_DEFAULT);
>  
> -    ler_enable();
>      pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
> -    return;
>  
>   out:
> -    ler_enable();
> -    return;
> +
> +    /* #DB automatically disabled LBR.  Reinstate it if debugging Xen. */
> +    if ( cpu_has_xen_lbr )
> +        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);

While I can see that we don't currently need anything more than this one
bit, it still doesn't feel quite right not to do a read-modify-write cycle here.

In any event, rather than moving the write further towards the end of
the function, could I ask you to move it further up, so that in the (unlikely)
event of do_debug() itself triggering an exception we'd get a proper
indication of the last branch before that?

> @@ -1920,38 +1910,46 @@ void load_TR(void)
>          : "=m" (old_gdt) : "rm" (TSS_ENTRY << 3), "m" (tss_gdt) : "memory" );
>  }
>  
> -void percpu_traps_init(void)
> +static uint32_t calc_ler_msr(void)

Here and elsewhere "unsigned int" would be more appropriate to use.
We don't require MSR indexes to be exactly 32 bits wide, but only at
least as wide.

> +void percpu_traps_init(void)
> +{
> +    subarch_percpu_traps_init();
> +
> +    if ( !opt_ler )
> +        return;
> +
> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);

This does not hold up with the promise the description makes: If running
on an unrecognized model, calc_ler_msr() is going to be called more than
once. If it really was called just once, it could also become __init. With
the inverted sense of the feature flag (as suggested above) you could
check whether the flag bit is set or ler_msr is non-zero.

Jan




* Re: [PATCH 2/6] x86: Improvements to ler debugging
  2018-05-28 14:27 ` [PATCH 2/6] x86: Improvements to ler debugging Andrew Cooper
@ 2018-05-29 11:39   ` Jan Beulich
  2018-05-29 18:09     ` Andrew Cooper
  2018-06-05  7:55   ` Tian, Kevin
  1 sibling, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2018-05-29 11:39 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

 >>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
> * Command line documentation for what the option does.
>  * Implement a canonicalise_addr() helper and replace the opencoded use in
>    sign_extend_msr()
>  * Canonicalise the ler pointers and print symbol information.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -1245,6 +1245,12 @@ if left disabled by the BIOS.
>  ### ler (x86)
>  > `= <boolean>`
>  
> +> Default: 0

Perhaps better `false`?

Jan




* Re: [PATCH 3/6] x86/pat: Simplify host PAT handling
  2018-05-28 14:27 ` [PATCH 3/6] x86/pat: Simplify host PAT handling Andrew Cooper
@ 2018-05-29 11:40   ` Jan Beulich
  2018-06-06  9:39   ` Roger Pau Monné
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2018-05-29 11:40 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
> With the removal of the 32bit hypervisor build, host_pat is a constant value.
> Drop the variable and the redundant cpu_has_pat predicate, and use a define
> instead.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction
  2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
@ 2018-05-29 11:41   ` Jan Beulich
  2018-06-05  7:57   ` Tian, Kevin
  2018-06-06  9:42   ` Roger Pau Monné
  2 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2018-05-29 11:41 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
> The host PAT value is a compile time constant, and doesn't need to be read out
> of hardware.  Merge this if block into the previous block, which has an
> identical condition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
@ 2018-05-29 11:43   ` Jan Beulich
  2018-06-05  8:00   ` Tian, Kevin
  2018-06-06  9:45   ` Roger Pau Monné
  2 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2018-05-29 11:43 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
> paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
> VMCS being constructed.  Avoid dropping and re-acquiring the reference
> multiple times.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-29 10:33   ` Jan Beulich
@ 2018-05-29 18:08     ` Andrew Cooper
  2018-05-30  7:32       ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2018-05-29 18:08 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

On 29/05/18 11:33, Jan Beulich wrote:
>>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
>> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
>> updates a host MSR load list entry with the current hardware value of
>> MSR_DEBUGCTL.  This is wrong.
> "This is wrong" goes too far for my taste: It is not very efficient to do it that
> way, but it's still correct. Unless, of course, the zeroing of the register
> happens after the processing of the MSR load list (which I doubt it does).

It is functionally broken.  Restoration of Xen's debugging setting must
happen from the first vmexit, not the first vmexit after the guest plays
with MSR_DEBUGCTL.

With the current behaviour, Xen loses its MSR_DEBUGCTL setting on any
pcpu where an HVM guest has been scheduled, and then feeds the current
value (0) into the host load list, even when it was attempting to set a
non-zero value.

>
>> Initially, I tried to have a common xen_msr_debugctl variable, but
>> rip-relative addresses don't resolve correctly in alternative blocks.
>> LBR-only has been fine for ages, and I don't see that changing any time 
>> soon.
> The chosen solution is certainly fine, but the issue could have been
> avoided by doing the load from memory ahead of the alternative block
> (accepting that it also happens when the value isn't actually needed).
>
> Another option would be to invert the sense of the feature flag,
> patching NOPs over the register setup plus WRMSR.

I considered both, but until it is necessary, there is little point.

>
>> @@ -1764,17 +1765,6 @@ void do_device_not_available(struct cpu_user_regs *regs)
>>      return;
>>  }
>>  
>> -static void ler_enable(void)
>> -{
>> -    u64 debugctl;
>> -
>> -    if ( !this_cpu(ler_msr) )
>> -        return;
>> -
>> -    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
>> -    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | IA32_DEBUGCTLMSR_LBR);
>> -}
>> -
>>  void do_debug(struct cpu_user_regs *regs)
>>  {
>>      unsigned long dr6;
>> @@ -1870,13 +1860,13 @@ void do_debug(struct cpu_user_regs *regs)
>>      v->arch.debugreg[6] |= (dr6 & ~X86_DR6_DEFAULT);
>>      v->arch.debugreg[6] &= (dr6 | ~X86_DR6_DEFAULT);
>>  
>> -    ler_enable();
>>      pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>> -    return;
>>  
>>   out:
>> -    ler_enable();
>> -    return;
>> +
>> +    /* #DB automatically disabled LBR.  Reinstate it if debugging Xen. */
>> +    if ( cpu_has_xen_lbr )
>> +        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);
> While I can see that we don't currently need anything more than this one
>> bit, it still doesn't feel quite right not to do a read-modify-write cycle here.

We should never be using a RMW cycle.  All that risks doing is
accumulating unexpected debugging controls.

If/when it becomes a variable, the correct code here is:

if ( xen_debugctl_val & IA32_DEBUGCTLMSR_LBR )
    wrmsrl(MSR_IA32_DEBUGCTLMSR, xen_debugctl_val);

(except that since writing this patch, I've found that BTF is also
cleared on AMD hardware, so that probably wants to be taken into account).

> In any event, rather than moving the write further towards the end of
> the function, could I ask you to move it further up, so that in the (unlikely)
> event of do_debug() itself triggering an exception we'd get a proper
> indication of the last branch before that?

Ok.  It can move to immediately after resetting %dr6.

>
>> @@ -1920,38 +1910,46 @@ void load_TR(void)
>>          : "=m" (old_gdt) : "rm" (TSS_ENTRY << 3), "m" (tss_gdt) : "memory" );
>>  }
>>  
>> -void percpu_traps_init(void)
>> +static uint32_t calc_ler_msr(void)
> Here and elsewhere "unsigned int" would be more appropriate to use.
> We don't require MSR indexes to be exactly 32 bits wide, but only at
> least as wide.

MSR indices are architecturally 32 bits wide.

>
>> +void percpu_traps_init(void)
>> +{
>> +    subarch_percpu_traps_init();
>> +
>> +    if ( !opt_ler )
>> +        return;
>> +
>> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
>> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
> This does not hold up with the promise the description makes: If running
> on an unrecognized model, calc_ler_msr() is going to be called more than
> once. If it really was called just once, it could also become __init. With
> the inverted sense of the feature flag (as suggested above) you could
> check whether the flag bit is set or ler_msr is non-zero.

Hmm - I suppose it doesn't quite match the description, but does it
matter (if I tweak the description)?  It is debugging functionality, and
I don't see any 64bit models missing from the list.

~Andrew


* Re: [PATCH 2/6] x86: Improvements to ler debugging
  2018-05-29 11:39   ` Jan Beulich
@ 2018-05-29 18:09     ` Andrew Cooper
  0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-29 18:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

On 29/05/18 12:39, Jan Beulich wrote:
>  >>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
>> * Command line documentation for what the option does.
>>  * Implement a canonicalise_addr() helper and replace the opencoded use in
>>    sign_extend_msr()
>>  * Canonicalise the ler pointers and print symbol information.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

> with one remark:
>
>> --- a/docs/misc/xen-command-line.markdown
>> +++ b/docs/misc/xen-command-line.markdown
>> @@ -1245,6 +1245,12 @@ if left disabled by the BIOS.
>>  ### ler (x86)
>>  > `= <boolean>`
>>  
>> +> Default: 0
> Perhaps better `false`?

Fixed.

~Andrew


* Re: [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-29 18:08     ` Andrew Cooper
@ 2018-05-30  7:32       ` Jan Beulich
  2018-05-30 10:28         ` Andrew Cooper
  0 siblings, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2018-05-30  7:32 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 29.05.18 at 20:08, <andrew.cooper3@citrix.com> wrote:
> On 29/05/18 11:33, Jan Beulich wrote:
>>>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
>>> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
>>> updates a host MSR load list entry with the current hardware value of
>>> MSR_DEBUGCTL.  This is wrong.
>> "This is wrong" goes too far for my taste: It is not very efficient to do it that
>> way, but it's still correct. Unless, of course, the zeroing of the register
>> happens after the processing of the MSR load list (which I doubt it does).
> 
> It is functionally broken.  Restoration of Xen's debugging setting must
> happen from the first vmexit, not the first vmexit after the guest plays
> with MSR_DEBUGCTL.
> 
> With the current behaviour, Xen loses its MSR_DEBUGCTL setting on any
> pcpu where an HVM guest has been scheduled, and then feeds the current
> value (0) into the host load list, even when it was attempting to set a
> non-zero value.

Oh, indeed, you're right.

>>> @@ -1920,38 +1910,46 @@ void load_TR(void)
>>>          : "=m" (old_gdt) : "rm" (TSS_ENTRY << 3), "m" (tss_gdt) : "memory" );
>>>  }
>>>  
>>> -void percpu_traps_init(void)
>>> +static uint32_t calc_ler_msr(void)
>> Here and elsewhere "unsigned int" would be more appropriate to use.
>> We don't require MSR indexes to be exactly 32 bits wide, but only at
>> least as wide.
> 
> MSR indices are architecturally 32 bits wide.

Correct. Hence for communicating such values between functions we need
a type at least as wide as 32 bits, not exactly as wide. There's a reason the
standard also defined uint_least32_t et al; it's just that I think unsigned int
is less ugly to read and fulfills the same purpose (with the prereq that we
assume we run only on architectures where unsigned int is at least 32 bits
wide, just like we make an even stricter assumption on unsigned long).

>>> +void percpu_traps_init(void)
>>> +{
>>> +    subarch_percpu_traps_init();
>>> +
>>> +    if ( !opt_ler )
>>> +        return;
>>> +
>>> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
>>> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
>> This does not hold up with the promise the description makes: If running
>> on an unrecognized model, calc_ler_msr() is going to be called more than
>> once. If it really was called just once, it could also become __init. With
>> the inverted sense of the feature flag (as suggested above) you could
>> check whether the flag bit is set or ler_msr is non-zero.
> 
> Hmm - I suppose it doesn't quite match the description, but does it
> matter (if I tweak the description)?  It is debugging functionality, and
> I don't see any 64bit models missing from the list.

Non-Intel, non-AMD CPUs are clearly missing. We have Centaur (VIA)
support, and we're going to gain support for one more right after the
tree was branched for 4.11.

Jan




* Re: [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-30  7:32       ` Jan Beulich
@ 2018-05-30 10:28         ` Andrew Cooper
  2018-05-30 10:49           ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2018-05-30 10:28 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

On 30/05/18 08:32, Jan Beulich wrote:
>>>> On 29.05.18 at 20:08, <andrew.cooper3@citrix.com> wrote:
>> On 29/05/18 11:33, Jan Beulich wrote:
>>>>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
>>>> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
>>>> updates a host MSR load list entry with the current hardware value of
>>>> MSR_DEBUGCTL.  This is wrong.
>>> "This is wrong" goes too far for my taste: It is not very efficient to do it that
>>> way, but it's still correct. Unless, of course, the zeroing of the register
>>> happens after the processing of the MSR load list (which I doubt it does).
>> It is functionally broken.  Restoration of Xen's debugging setting must
>> happen from the first vmexit, not the first vmexit after the guest plays
>> with MSR_DEBUGCTL.
>>
>> With the current behaviour, Xen loses its MSR_DEBUGCTL setting on any
>> pcpu where an HVM guest has been scheduled, and then feeds the current
>> value (0) into the host load list, even when it was attempting to set a
>> non-zero value.
> Oh, indeed, you're right.

I've rewritten this bit of the commit message.  How about:

Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
updates a host MSR load list entry with the current hardware value of
MSR_DEBUGCTL.

On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when the
guest writes to MSR_DEBUGCTL, the current value in hardware (0) is fed back
into the host load list.  As a practical result, `ler` debugging gets lost on
any PCPU which has ever scheduled an HVM vcpu, and in the common case when
`ler` debugging isn't active, guest actions result in an unnecessary load list
entry repeating the MSR_DEBUGCTL reset.

Restoration of Xen's debugging setting needs to happen from the very first
vmexit.  Due to the automatic reset, Xen need take no action in the general
case, and only needs to load a value when debugging is active.

>>>> +void percpu_traps_init(void)
>>>> +{
>>>> +    subarch_percpu_traps_init();
>>>> +
>>>> +    if ( !opt_ler )
>>>> +        return;
>>>> +
>>>> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
>>>> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
>>> This does not hold up with the promise the description makes: If running
>>> on an unrecognized model, calc_ler_msr() is going to be called more than
>>> once. If it really was called just once, it could also become __init. With
>>> the inverted sense of the feature flag (as suggested above) you could
>>> check whether the flag bit is set or ler_msr is non-zero.
>> Hmm - I suppose it doesn't quite match the description, but does it
>> matter (if I tweak the description)?  It is debugging functionality, and
>> I don't see any 64bit models missing from the list.
> Non-Intel, non-AMD CPUs are clearly missing. We have Centaur (VIA)
> support, and we're going to gain support for one more right after the
> tree was branched for 4.11.

Ok, but all of this is behind !opt_ler which means it doesn't get
executed in the general case.

~Andrew


* Re: [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-30 10:28         ` Andrew Cooper
@ 2018-05-30 10:49           ` Jan Beulich
  0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2018-05-30 10:49 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 30.05.18 at 12:28, <andrew.cooper3@citrix.com> wrote:
> On 30/05/18 08:32, Jan Beulich wrote:
>>>>> On 29.05.18 at 20:08, <andrew.cooper3@citrix.com> wrote:
>>> On 29/05/18 11:33, Jan Beulich wrote:
>>>>>>> On 28.05.18 at 16:27, <andrew.cooper3@citrix.com> wrote:
>>>>> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
>>>>> updates a host MSR load list entry with the current hardware value of
>>>>> MSR_DEBUGCTL.  This is wrong.
>>>> "This is wrong" goes too far for my taste: It is not very efficient to do
>>>> it that way, but it's still correct. Unless, of course, the zeroing of the
>>>> register happens after the processing of the MSR load list (which I doubt
>>>> it does).
>>> It is functionally broken.  Restoration of Xen's debugging setting must
>>> happen from the first vmexit, not the first vmexit after the guest plays
>>> with MSR_DEBUGCTL.
>>>
>>> With the current behaviour, Xen loses its MSR_DEBUGCTL setting on any
>>> pcpu where an HVM guest has been scheduled, and then feeds the current
>>> value (0) into the host load list, even when it was attempting to set a
>>> non-zero value.
>> Oh, indeed, you're right.
> 
> I've rewritten this bit of the commit message.  How about:
> 
> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
> updates a host MSR load list entry with the current hardware value of
> MSR_DEBUGCTL.
> 
> On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when the
> guest writes to MSR_DEBUGCTL, the current value in hardware (0) is fed back
> into the host load list.  As a practical result, `ler` debugging gets lost on
> any PCPU which has ever scheduled an HVM vcpu, and in the common case when
> `ler` debugging isn't active, guest actions result in an unnecessary load list
> entry repeating the MSR_DEBUGCTL reset.
> 
> Restoration of Xen's debugging setting needs to happen from the very first
> vmexit.  Due to the automatic reset, Xen need take no action in the general
> case, and only needs to load a value when debugging is active.

Thanks, this makes things more explicit imo.

>>>>> +void percpu_traps_init(void)
>>>>> +{
>>>>> +    subarch_percpu_traps_init();
>>>>> +
>>>>> +    if ( !opt_ler )
>>>>> +        return;
>>>>> +
>>>>> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
>>>>> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
>>>> This does not hold up with the promise the description makes: If running
>>>> on an unrecognized model, calc_ler_msr() is going to be called more than
>>>> once. If it really was called just once, it could also become __init. With
>>>> the inverted sense of the feature flag (as suggested above) you could
>>>> check whether the flag bit is set or ler_msr is non-zero.
>>> Hmm - I suppose it doesn't quite match the description, but does it
>>> matter (if I tweak the description)?  It is debugging functionality, and
>>> I don't see any 64bit models missing from the list.
>> Non-Intel, non-AMD CPUs are clearly missing. We have Centaur (VIA)
>> support, and we're going to gain support for one more right after the
>> tree was branched for 4.11.
> 
> Ok, but all of this is behind !opt_ler which means it doesn't get
> executed in the general case.

Of course.

Jan




* [PATCH v2 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
  2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit Andrew Cooper
  2018-05-29 10:33   ` Jan Beulich
@ 2018-05-30 17:34   ` Andrew Cooper
  2018-06-01  9:28     ` Jan Beulich
  2018-06-05  7:54     ` Tian, Kevin
  1 sibling, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-05-30 17:34 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
updates a host MSR load list entry with the current hardware value of
MSR_DEBUGCTL.

On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when the
guest writes to MSR_DEBUGCTL, the current value in hardware (0) is fed back
into the host load list.  As a practical result, `ler` debugging gets lost on
any PCPU which has ever scheduled an HVM vcpu, and in the common case when
`ler` debugging isn't active, guest actions result in an unnecessary load list
entry repeating the MSR_DEBUGCTL reset.

Restoration of Xen's debugging setting needs to happen from the very first
vmexit.  Due to the automatic reset, Xen need take no action in the general
case, and only needs to load a value when debugging is active.

This could be fixed by using a host MSR load list entry set up during
construct_vmcs().  However, a more efficient option is to use an alternative
block in the VMExit path, keyed on whether hypervisor debugging has been
enabled.

In order to set this up, drop the per cpu ler_msr variable (as there is no
point having it per cpu when it will be the same everywhere), and use a single
read_mostly variable instead.  Split calc_ler_msr() out of percpu_traps_init()
for clarity.

Finally, clean up do_debug().  Reinstate LBR early to help catch cascade
errors, which allows for the removal of the out label.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Reinstate LBR early in do_debug()
 * Rewrite the commit message.
---
 xen/arch/x86/hvm/vmx/entry.S      |  9 ++++++
 xen/arch/x86/hvm/vmx/vmx.c        |  3 +-
 xen/arch/x86/traps.c              | 64 ++++++++++++++++++---------------------
 xen/arch/x86/x86_64/traps.c       |  7 +++--
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 xen/include/asm-x86/msr.h         |  2 +-
 7 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index aa2f103..afd552f 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -41,6 +41,15 @@ ENTRY(vmx_asm_vmexit_handler)
         SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+        /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
+        .macro restore_lbr
+            mov $IA32_DEBUGCTLMSR_LBR, %eax
+            mov $MSR_IA32_DEBUGCTLMSR, %ecx
+            xor %edx, %edx
+            wrmsr
+        .endm
+        ALTERNATIVE "", restore_lbr, X86_FEATURE_XEN_LBR
+
         mov  %rsp,%rdi
         call vmx_vmexit_handler
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9707514..33d39f6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3120,8 +3120,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
                     }
         }
 
-        if ( (rc < 0) ||
-             (msr_content && (vmx_add_host_load_msr(msr) < 0)) )
+        if ( rc < 0 )
             hvm_inject_hw_exception(TRAP_machine_check, X86_EVENT_NO_EC);
         else
             __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 8a99174..2b9c276 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -96,8 +96,6 @@ string_param("nmi", opt_nmi);
 DEFINE_PER_CPU(uint64_t, efer);
 static DEFINE_PER_CPU(unsigned long, last_extable_addr);
 
-DEFINE_PER_CPU_READ_MOSTLY(u32, ler_msr);
-
 DEFINE_PER_CPU_READ_MOSTLY(struct desc_struct *, gdt_table);
 DEFINE_PER_CPU_READ_MOSTLY(struct desc_struct *, compat_gdt_table);
 
@@ -117,6 +115,9 @@ integer_param("debug_stack_lines", debug_stack_lines);
 static bool opt_ler;
 boolean_param("ler", opt_ler);
 
+/* LastExceptionFromIP on this hardware.  Zero if LER is not in use. */
+uint32_t __read_mostly ler_msr;
+
 #define stack_words_per_line 4
 #define ESP_BEFORE_EXCEPTION(regs) ((unsigned long *)regs->rsp)
 
@@ -1764,17 +1765,6 @@ void do_device_not_available(struct cpu_user_regs *regs)
     return;
 }
 
-static void ler_enable(void)
-{
-    u64 debugctl;
-
-    if ( !this_cpu(ler_msr) )
-        return;
-
-    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
-    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | IA32_DEBUGCTLMSR_LBR);
-}
-
 void do_debug(struct cpu_user_regs *regs)
 {
     unsigned long dr6;
@@ -1807,6 +1797,10 @@ void do_debug(struct cpu_user_regs *regs)
      */
     write_debugreg(6, X86_DR6_DEFAULT);
 
+    /* #DB automatically disabled LBR.  Reinstate it if debugging Xen. */
+    if ( cpu_has_xen_lbr )
+        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);
+
     if ( !guest_mode(regs) )
     {
         if ( regs->eflags & X86_EFLAGS_TF )
@@ -1817,7 +1811,7 @@ void do_debug(struct cpu_user_regs *regs)
             {
                 if ( regs->rip == (unsigned long)sysenter_eflags_saved )
                     regs->eflags &= ~X86_EFLAGS_TF;
-                goto out;
+                return;
             }
             if ( !debugger_trap_fatal(TRAP_debug, regs) )
             {
@@ -1863,20 +1857,14 @@ void do_debug(struct cpu_user_regs *regs)
                 regs->cs, _p(regs->rip), _p(regs->rip),
                 regs->ss, _p(regs->rsp), dr6);
 
-        goto out;
+        return;
     }
 
     /* Save debug status register where guest OS can peek at it */
     v->arch.debugreg[6] |= (dr6 & ~X86_DR6_DEFAULT);
     v->arch.debugreg[6] &= (dr6 | ~X86_DR6_DEFAULT);
 
-    ler_enable();
     pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
-    return;
-
- out:
-    ler_enable();
-    return;
 }
 
 static void __init noinline __set_intr_gate(unsigned int n,
@@ -1920,38 +1908,46 @@ void load_TR(void)
         : "=m" (old_gdt) : "rm" (TSS_ENTRY << 3), "m" (tss_gdt) : "memory" );
 }
 
-void percpu_traps_init(void)
+static unsigned int calc_ler_msr(void)
 {
-    subarch_percpu_traps_init();
-
-    if ( !opt_ler )
-        return;
-
     switch ( boot_cpu_data.x86_vendor )
     {
     case X86_VENDOR_INTEL:
         switch ( boot_cpu_data.x86 )
         {
         case 6:
-            this_cpu(ler_msr) = MSR_IA32_LASTINTFROMIP;
-            break;
+            return MSR_IA32_LASTINTFROMIP;
+
         case 15:
-            this_cpu(ler_msr) = MSR_P4_LER_FROM_LIP;
-            break;
+            return MSR_P4_LER_FROM_LIP;
         }
         break;
+
     case X86_VENDOR_AMD:
         switch ( boot_cpu_data.x86 )
         {
         case 6:
         case 0xf ... 0x17:
-            this_cpu(ler_msr) = MSR_IA32_LASTINTFROMIP;
-            break;
+            return MSR_IA32_LASTINTFROMIP;
         }
         break;
     }
 
-    ler_enable();
+    return 0;
+}
+
+void percpu_traps_init(void)
+{
+    subarch_percpu_traps_init();
+
+    if ( !opt_ler )
+        return;
+
+    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
+        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);
+
+    if ( cpu_has_xen_lbr )
+        wrmsrl(MSR_IA32_DEBUGCTLMSR, IA32_DEBUGCTLMSR_LBR);
 }
 
 void __init init_idt_traps(void)
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index f7f6928..b040185 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -144,11 +144,12 @@ void show_registers(const struct cpu_user_regs *regs)
     printk("CPU:    %d\n", smp_processor_id());
     _show_registers(&fault_regs, fault_crs, context, v);
 
-    if ( this_cpu(ler_msr) && !guest_mode(regs) )
+    if ( ler_msr && !guest_mode(regs) )
     {
         u64 from, to;
-        rdmsrl(this_cpu(ler_msr), from);
-        rdmsrl(this_cpu(ler_msr) + 1, to);
+
+        rdmsrl(ler_msr, from);
+        rdmsrl(ler_msr + 1, to);
         printk("ler: %016lx -> %016lx\n", from, to);
     }
 }
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 2cf8f7e..b237da1 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -113,6 +113,7 @@
 #define cpu_has_aperfmperf      boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_lfence_dispatch boot_cpu_has(X86_FEATURE_LFENCE_DISPATCH)
 #define cpu_has_no_xpti         boot_cpu_has(X86_FEATURE_NO_XPTI)
+#define cpu_has_xen_lbr         boot_cpu_has(X86_FEATURE_XEN_LBR)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index b90aa2d..8e5cc53 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -32,3 +32,4 @@ XEN_CPUFEATURE(SC_RSB_PV,       (FSCAPINTS+0)*32+18) /* RSB overwrite needed for
 XEN_CPUFEATURE(SC_RSB_HVM,      (FSCAPINTS+0)*32+19) /* RSB overwrite needed for HVM */
 XEN_CPUFEATURE(NO_XPTI,         (FSCAPINTS+0)*32+20) /* XPTI mitigation not in use */
 XEN_CPUFEATURE(SC_MSR_IDLE,     (FSCAPINTS+0)*32+21) /* (SC_MSR_PV || SC_MSR_HVM) && default_xen_spec_ctrl */
+XEN_CPUFEATURE(XEN_LBR,         (FSCAPINTS+0)*32+22) /* Xen uses MSR_DEBUGCTL.LBR */
diff --git a/xen/include/asm-x86/msr.h b/xen/include/asm-x86/msr.h
index f14f265..9f6d3b2 100644
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -241,7 +241,7 @@ static inline void write_efer(uint64_t val)
     wrmsrl(MSR_EFER, val);
 }
 
-DECLARE_PER_CPU(u32, ler_msr);
+extern uint32_t ler_msr;
 
 DECLARE_PER_CPU(uint32_t, tsc_aux);
 
-- 
2.1.4



* Re: [PATCH v2 1/6] x86/vmx: Fix handing of MSR_DEBUGCTL on VMExit
  2018-05-30 17:34   ` [PATCH v2 " Andrew Cooper
@ 2018-06-01  9:28     ` Jan Beulich
  2018-06-05  7:54     ` Tian, Kevin
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2018-06-01  9:28 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 30.05.18 at 19:34, <andrew.cooper3@citrix.com> wrote:
> @@ -117,6 +115,9 @@ integer_param("debug_stack_lines", debug_stack_lines);
>  static bool opt_ler;
>  boolean_param("ler", opt_ler);
>  
> +/* LastExceptionFromIP on this hardware.  Zero if LER is not in use. */
> +uint32_t __read_mostly ler_msr;

Hmm, this is still uint32_t rather than unsigned int, while you did change
calc_ler_msr()'s return type.
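
Presumably that wants to end up as something like (sketch only):

    /* asm-x86/msr.h */
    extern unsigned int ler_msr;

    /* traps.c */
    unsigned int __read_mostly ler_msr;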

> +void percpu_traps_init(void)
> +{
> +    subarch_percpu_traps_init();
> +
> +    if ( !opt_ler )
> +        return;
> +
> +    if ( !ler_msr && (ler_msr = calc_ler_msr()) )
> +        setup_force_cpu_cap(X86_FEATURE_XEN_LBR);

I assume it was on purpose that you've adjusted the commit message
rather than the code here regarding the possibility of pointless multiple
invocation?

With preferably the type inconsistency addressed
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v2 1/6] x86/vmx: Fix handing of MSR_DEBUGCTL on VMExit
  2018-05-30 17:34   ` [PATCH v2 " Andrew Cooper
  2018-06-01  9:28     ` Jan Beulich
@ 2018-06-05  7:54     ` Tian, Kevin
  1 sibling, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2018-06-05  7:54 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Thursday, May 31, 2018 1:35 AM
> 
> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
> updates a host MSR load list entry with the current hardware value of
> MSR_DEBUGCTL.
> 
> On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when the
> guest writes to MSR_DEBUGCTL, the current value in hardware (0) is fed back
> into the host MSR load list.  As a practical result, `ler` debugging gets lost
> on any PCPU which has ever scheduled an HVM vcpu, and in the common case, when
> `ler` debugging isn't active, guest actions result in an unnecessary load list
> entry repeating the MSR_DEBUGCTL reset.
> 
> Restoration of Xen's debugging setting needs to happen from the very first
> vmexit.  Due to the automatic reset, Xen need take no action in the general
> case, and only needs to load a value when debugging is active.
> 
> This could be fixed by using a host MSR load list entry set up during
> construct_vmcs().  However, a more efficient option is to use an alternative
> block in the VMExit path, keyed on whether hypervisor debugging has been
> enabled.
> 
> In order to set this up, drop the per cpu ler_msr variable (as there is no
> point having it per cpu when it will be the same everywhere), and use a single
> read_mostly variable instead.  Split calc_ler_msr() out of percpu_traps_init()
> for clarity.
> 
> Finally, clean up do_debug().  Reinstate LBR early to help catch cascade
> errors, which allows for the removal of the out label.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

nice cleanup. 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 2/6] x86: Improvements to ler debugging
  2018-05-28 14:27 ` [PATCH 2/6] x86: Improvements to ler debugging Andrew Cooper
  2018-05-29 11:39   ` Jan Beulich
@ 2018-06-05  7:55   ` Tian, Kevin
  1 sibling, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2018-06-05  7:55 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Monday, May 28, 2018 10:28 PM
> 
>  * Command line documentation for what the option does.
>  * Implement a canonicalise_addr() helper and replace the opencoded use in
>    sign_extend_msr()
>  * Canonicalise the ler pointers and print symbol information.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction
  2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
  2018-05-29 11:41   ` Jan Beulich
@ 2018-06-05  7:57   ` Tian, Kevin
  2018-06-06  9:42   ` Roger Pau Monné
  2 siblings, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2018-06-05  7:57 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Monday, May 28, 2018 10:28 PM
> 
> The host PAT value is a compile time constant, and doesn't need to be read out
> of hardware.  Merge this if block into the previous block, which has an
> identical condition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
  2018-05-29 11:43   ` Jan Beulich
@ 2018-06-05  8:00   ` Tian, Kevin
  2018-06-06  9:45   ` Roger Pau Monné
  2 siblings, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2018-06-05  8:00 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Monday, May 28, 2018 10:28 PM
> 
> paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
> VMCS being constructed.  Avoid dropping and re-acquiring the reference
> multiple times.
> 

Acked-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode
  2018-05-28 14:27 ` [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode Andrew Cooper
@ 2018-06-05  8:01   ` Tian, Kevin
  2018-06-06 10:03   ` Roger Pau Monné
  1 sibling, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2018-06-05  8:01 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Monday, May 28, 2018 10:28 PM
> 
> The hvmloader code which used this signal was deleted 10 years ago (c/s
> 50b12df83 "x86 vmx: Remove vmxassist").  Furthermore, the value gets discarded
> anyway because the HVM domain builder unconditionally sets %rax to 0 in the
> same action it uses to set %rip to the appropriate entrypoint.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 3/6] x86/pat: Simplify host PAT handling
  2018-05-28 14:27 ` [PATCH 3/6] x86/pat: Simplify host PAT handling Andrew Cooper
  2018-05-29 11:40   ` Jan Beulich
@ 2018-06-06  9:39   ` Roger Pau Monné
  1 sibling, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-06-06  9:39 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Mon, May 28, 2018 at 03:27:55PM +0100, Andrew Cooper wrote:
> With the removal of the 32bit hypervisor build, host_pat is a constant value.
> Drop the variable and the redundant cpu_has_pat predicate, and use a define
> instead.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
> index 9924cdf..ac1577c 100644
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -97,6 +97,12 @@
>                            X86_EFLAGS_NT|X86_EFLAGS_DF|X86_EFLAGS_IF|    \
>                            X86_EFLAGS_TF)
>  
> +/*
> + * Host IA32_CR_PAT value to cover all memory types.  This is not the default
> + * MSR_PAT value, and is an ABI with PV guests.
> + */
> +#define XEN_MSR_PAT 0x050100070406ul

Not sure whether it would make sense to use MASK_INSR and define each
page attribute field in order to create this value.
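
For illustration, something along these lines would reproduce 0x050100070406
while making each slot's type explicit (PAT_ENT() is a made-up helper here, the
PAT_TYPE_* names are assumed to be the existing constants, and plain shifts are
used rather than MASK_INSR, but the idea is the same):

    #define PAT_ENT(t, slot) ((uint64_t)(t) << ((slot) * 8))

    #define XEN_MSR_PAT (PAT_ENT(PAT_TYPE_WRBACK,     0) | /* PA0: WB  */ \
                         PAT_ENT(PAT_TYPE_WRTHROUGH,  1) | /* PA1: WT  */ \
                         PAT_ENT(PAT_TYPE_UC_MINUS,   2) | /* PA2: UC- */ \
                         PAT_ENT(PAT_TYPE_UNCACHABLE, 3) | /* PA3: UC  */ \
                         PAT_ENT(PAT_TYPE_WRCOMB,     4) | /* PA4: WC  */ \
                         PAT_ENT(PAT_TYPE_WRPROT,     5) | /* PA5: WP  */ \
                         PAT_ENT(PAT_TYPE_UNCACHABLE, 6) | /* PA6: UC  */ \
                         PAT_ENT(PAT_TYPE_UNCACHABLE, 7))  /* PA7: UC  */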

Thanks, Roger.


* Re: [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction
  2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
  2018-05-29 11:41   ` Jan Beulich
  2018-06-05  7:57   ` Tian, Kevin
@ 2018-06-06  9:42   ` Roger Pau Monné
  2 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-06-06  9:42 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On Mon, May 28, 2018 at 03:27:56PM +0100, Andrew Cooper wrote:
> The host PAT value is a compile time constant, and doesn't need to be read out
> of hardware.  Merge this if block into the previous block, which has an
> identical condition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


* Re: [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
  2018-05-29 11:43   ` Jan Beulich
  2018-06-05  8:00   ` Tian, Kevin
@ 2018-06-06  9:45   ` Roger Pau Monné
  2018-06-06 10:11     ` Andrew Cooper
  2 siblings, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2018-06-06  9:45 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On Mon, May 28, 2018 at 03:27:57PM +0100, Andrew Cooper wrote:
> paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
> VMCS being constructed.  Avoid dropping and re-acquiring the reference
> multiple times.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/hvm/vmx/vmcs.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index be02be1..ce78f19 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -996,6 +996,7 @@ static int construct_vmcs(struct vcpu *v)
>      struct domain *d = v->domain;
>      u32 vmexit_ctl = vmx_vmexit_control;
>      u32 vmentry_ctl = vmx_vmentry_control;
> +    int rc;
>  
>      vmx_vmcs_enter(v);
>  
> @@ -1083,8 +1084,8 @@ static int construct_vmcs(struct vcpu *v)
>  
>          if ( msr_bitmap == NULL )
>          {
> -            vmx_vmcs_exit(v);
> -            return -ENOMEM;
> +            rc = -ENOMEM;
> +            goto out;
>          }
>  
>          memset(msr_bitmap, ~0, PAGE_SIZE);
> @@ -1258,13 +1259,14 @@ static int construct_vmcs(struct vcpu *v)
>      if ( cpu_has_vmx_tsc_scaling )
>          __vmwrite(TSC_MULTIPLIER, d->arch.hvm_domain.tsc_scaling_ratio);
>  
> -    vmx_vmcs_exit(v);
> -
>      /* will update HOST & GUEST_CR3 as reqd */
>      paging_update_paging_modes(v);
>  
>      vmx_vlapic_msr_changed(v);
>  
> + out:
> +    vmx_vmcs_exit(v);
> +
>      return 0;

Shouldn't you return rc here? Or else you lose the error value.

Roger.


* Re: [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode
  2018-05-28 14:27 ` [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode Andrew Cooper
  2018-06-05  8:01   ` Tian, Kevin
@ 2018-06-06 10:03   ` Roger Pau Monné
  1 sibling, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-06-06 10:03 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On Mon, May 28, 2018 at 03:27:58PM +0100, Andrew Cooper wrote:
> The hvmloader code which used this signal was deleted 10 years ago (c/s
> 50b12df83 "x86 vmx: Remove vmxassist").  Furthermore, the value gets discarded
> anyway because the HVM domain builder unconditionally sets %rax to 0 in the
> same action it uses to set %rip to the appropriate entrypoint.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


* Re: [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
  2018-06-06  9:45   ` Roger Pau Monné
@ 2018-06-06 10:11     ` Andrew Cooper
  0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-06-06 10:11 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On 06/06/18 10:45, Roger Pau Monné wrote:
> On Mon, May 28, 2018 at 03:27:57PM +0100, Andrew Cooper wrote:
>> paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
>> VMCS being constructed.  Avoid dropping and re-acquiring the reference
>> multiple times.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Jun Nakajima <jun.nakajima@intel.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/hvm/vmx/vmcs.c | 10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>> index be02be1..ce78f19 100644
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -996,6 +996,7 @@ static int construct_vmcs(struct vcpu *v)
>>      struct domain *d = v->domain;
>>      u32 vmexit_ctl = vmx_vmexit_control;
>>      u32 vmentry_ctl = vmx_vmentry_control;
>> +    int rc;
>>  
>>      vmx_vmcs_enter(v);
>>  
>> @@ -1083,8 +1084,8 @@ static int construct_vmcs(struct vcpu *v)
>>  
>>          if ( msr_bitmap == NULL )
>>          {
>> -            vmx_vmcs_exit(v);
>> -            return -ENOMEM;
>> +            rc = -ENOMEM;
>> +            goto out;
>>          }
>>  
>>          memset(msr_bitmap, ~0, PAGE_SIZE);
>> @@ -1258,13 +1259,14 @@ static int construct_vmcs(struct vcpu *v)
>>      if ( cpu_has_vmx_tsc_scaling )
>>          __vmwrite(TSC_MULTIPLIER, d->arch.hvm_domain.tsc_scaling_ratio);
>>  
>> -    vmx_vmcs_exit(v);
>> -
>>      /* will update HOST & GUEST_CR3 as reqd */
>>      paging_update_paging_modes(v);
>>  
>>      vmx_vlapic_msr_changed(v);
>>  
>> + out:
>> +    vmx_vmcs_exit(v);
>> +
>>      return 0;
> Shouldn't you return rc here? Or else you lose the error value.

Yeah - Coverity told me the same...  Fixed up.
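
i.e. roughly the following shape (a sketch of the fix-up, with rc set to 0 on
the success path rather than returning a constant):

    vmx_vlapic_msr_changed(v);

    rc = 0;

 out:
    vmx_vmcs_exit(v);

    return rc;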

~Andrew


end of thread, newest: 2018-06-06 10:11 UTC

Thread overview: 29+ messages
2018-05-28 14:27 [PATCH 0/6] x86/vmx: Misc fixes and improvements Andrew Cooper
2018-05-28 14:27 ` [PATCH 1/6] x86/vmx: Fix handing of MSR_DEBUGCTL on VMExit Andrew Cooper
2018-05-29 10:33   ` Jan Beulich
2018-05-29 18:08     ` Andrew Cooper
2018-05-30  7:32       ` Jan Beulich
2018-05-30 10:28         ` Andrew Cooper
2018-05-30 10:49           ` Jan Beulich
2018-05-30 17:34   ` [PATCH v2 " Andrew Cooper
2018-06-01  9:28     ` Jan Beulich
2018-06-05  7:54     ` Tian, Kevin
2018-05-28 14:27 ` [PATCH 2/6] x86: Improvements to ler debugging Andrew Cooper
2018-05-29 11:39   ` Jan Beulich
2018-05-29 18:09     ` Andrew Cooper
2018-06-05  7:55   ` Tian, Kevin
2018-05-28 14:27 ` [PATCH 3/6] x86/pat: Simplify host PAT handling Andrew Cooper
2018-05-29 11:40   ` Jan Beulich
2018-06-06  9:39   ` Roger Pau Monné
2018-05-28 14:27 ` [PATCH 4/6] x86/vmx: Simplify PAT handling during vcpu construction Andrew Cooper
2018-05-29 11:41   ` Jan Beulich
2018-06-05  7:57   ` Tian, Kevin
2018-06-06  9:42   ` Roger Pau Monné
2018-05-28 14:27 ` [PATCH 5/6] x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs() Andrew Cooper
2018-05-29 11:43   ` Jan Beulich
2018-06-05  8:00   ` Tian, Kevin
2018-06-06  9:45   ` Roger Pau Monné
2018-06-06 10:11     ` Andrew Cooper
2018-05-28 14:27 ` [PATCH 6/6] x86/vmx: Drop VMX signal for full real-mode Andrew Cooper
2018-06-05  8:01   ` Tian, Kevin
2018-06-06 10:03   ` Roger Pau Monné
