* [PATCH 0/6] x86: Improvements to handling of various CPU bugs
@ 2018-12-28 12:39 Andrew Cooper
  2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
                   ` (5 more replies)
  0 siblings, 6 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Pu Wen, Roger Pau Monné

This series borrows the cpu_bug_* idea from Linux.  Using that, it reworks
some of the AMD issues which impact the ongoing effort to upstream Hygon CPU
support, and it works around the Harpertown SCE issue which breaks 64bit HVM
guests.

Andrew Cooper (6):
  x86/AMD Split init_amd() into per-uarch helpers
  x86/feature: Generalise synth and introduce a bug word
  x86/AMD: Rework XSA-9 / Erratum 121 handling entirely
  x86/AMD: Introduce and use X86_BUG_NULL_SEG
  x86/AMD: Fix handling of FPU pointer on Zen hardware
  x86/VT-x: Fix 64bit HVM guests on Harpertown cores

 xen/arch/x86/cpu/amd.c                      | 305 ++++++++++++++++------------
 xen/arch/x86/cpu/intel.c                    |   7 +
 xen/arch/x86/domain.c                       |  37 ++--
 xen/arch/x86/hvm/vmx/vmx.c                  |  30 ++-
 xen/arch/x86/xstate.c                       |   6 +-
 xen/include/asm-x86/amd.h                   |   5 -
 xen/include/asm-x86/cpufeature.h            |   6 +
 xen/include/asm-x86/cpufeatures.h           |  62 +++---
 xen/include/asm-x86/domain.h                |   2 +
 xen/include/asm-x86/hvm/hvm.h               |  10 +-
 xen/include/public/arch-x86/cpufeatureset.h |   1 +
 11 files changed, 289 insertions(+), 182 deletions(-)

-- 
2.1.4



* [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:14   ` Roger Pau Monné
  2019-01-08 16:15   ` Jan Beulich
  2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This reduces the complexity of init_amd(), and collects related
workarounds together.

It also offers us the opportunity to stop performing workarounds when
virtualised - doing so is wasteful, as it all involves poking MSRs which
no hypervisor will let us touch in practice.

As amd.c has diverged almost entirely from its Linux heritage, start
switching it to using Xen hypervisor style.
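
The end result is that init_amd() itself reduces to a dispatch into the
per-family helpers (a summary of the hunk below):

    switch ( c->x86 )
    {
    case 0xf:  init_amd_k8(c); break;
    case 0x10: init_amd_k10(c); break;
    case 0x12: init_amd_lynx(c); break;
    case 0x15: init_amd_bulldozer(c); break;
    case 0x16: init_amd_jaguar(c); break;
    }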

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

I've dropped various printk()s about working around some errata, because
their use was completely inconsistent.  They should be uniform: either
present for every workaround, or absent from all of them.
---
 xen/arch/x86/cpu/amd.c | 266 ++++++++++++++++++++++++++++---------------------
 1 file changed, 153 insertions(+), 113 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index c790416..c3aa1f4 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -534,22 +534,165 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 	ctxt_switch_levelling(NULL);
 }
 
+static void init_amd_k8(struct cpuinfo_x86 *c)
+{
+    uint64_t val;
+
+    /*
+     * Skip errata workarounds if we are virtualised.  We won't have
+     * sufficient control of hardware to do anything useful.
+     */
+    if ( !cpu_has_hypervisor )
+        return;
+
+    /*
+     * Disable TLB flush filter by setting HWCR.FFDIS bit 6
+     *
+     * Erratum 63 for SH-B3 steppings
+     * Erratum 122 for all steppings (F+ have it disabled by default)
+     */
+    rdmsrl(MSR_K7_HWCR, val);
+    if ( !(val & (1u << 6)) )
+        wrmsrl(MSR_K7_HWCR, val | (1u << 6));
+
+    /*
+     * Some BIOSes incorrectly force LAHF_LM, but only revisions D and later
+     * actually support it.
+     *
+     * AMD Erratum #110, docId: 25759.
+     */
+    if ( c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM) )
+    {
+        unsigned int l, h;
+
+        __clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);
+        if ( !rdmsr_amd_safe(0xc001100d, &l, &h) )
+            wrmsr_amd_safe(0xc001100d, l,
+                           h & ~cpufeat_mask(X86_FEATURE_LAHF_LM));
+    }
+}
+
+static void init_amd_k10(struct cpuinfo_x86 *c)
+{
+    uint64_t val;
+
+    /* Pointless to use MWAIT on Family10 as it does not deep sleep. */
+    __clear_bit(X86_FEATURE_MONITOR, c->x86_capability);
+
+    /*
+     * Skip errata workarounds if we are virtualised.  We won't have
+     * sufficient control of hardware to do anything useful.
+     */
+    if ( !cpu_has_hypervisor )
+        return;
+
+    if ( c == &boot_cpu_data )
+        check_enable_amd_mmconf_dmi();
+    fam10h_check_enable_mmcfg();
+
+    /*
+     * On family 10h BIOS may not have properly enabled WC+ support, causing
+     * it to be converted to CD memtype.  This may result in performance
+     * degradation for certain nested-paging guests.  Prevent this conversion
+     * by clearing bit 24 in MSR_F10_BU_CFG2.
+     */
+    rdmsrl(MSR_F10_BU_CFG2, val);
+    if ( val & (1ul << 24) )
+        wrmsrl(MSR_F10_BU_CFG2, val & ~(1ul << 24));
+}
+
+static void init_amd_lynx(struct cpuinfo_x86 *c) /* Fam 12h */
+{
+    uint64_t val;
+
+    /*
+     * Skip errata workarounds if we are virtualised.  We won't have
+     * sufficient control of hardware to do anything useful.
+     */
+    if ( !cpu_has_hypervisor )
+        return;
+
+    /* Erratum 665 - Integer divide may cause unpredictable system behaviour. */
+    rdmsrl(MSR_AMD64_DE_CFG, val);
+    if ( !(val & (1u << 31)) )
+        wrmsrl(MSR_AMD64_DE_CFG, val | (1u << 31));
+}
+
+static void init_amd_bulldozer(struct cpuinfo_x86 *c) /* Fam 15h */
+{
+    uint64_t val;
+
+    /*
+     * Skip errata workarounds if we are virtualised.  We won't have
+     * sufficient control of hardware to do anything useful.
+     */
+    if ( !cpu_has_hypervisor )
+        return;
+
+    /* re-enable TopologyExtensions if switched off by BIOS */
+    if ( c->x86_model >= 0x10 && c->x86_model <= 0x1f &&
+         !cpu_has(c, X86_FEATURE_TOPOEXT) &&
+         !rdmsr_safe(MSR_K8_EXT_FEATURE_MASK, val) )
+    {
+        __set_bit(X86_FEATURE_TOPOEXT, c->x86_capability);
+        wrmsr_safe(MSR_K8_EXT_FEATURE_MASK, val | (1ull << 54));
+    }
+
+    /*
+     * The way access filter has a performance penalty on some workloads.
+     * Disable it on the affected CPUs.
+     */
+    if ( c->x86_model >= 0x02 && c->x86_model < 0x20 &&
+         !rdmsr_safe(MSR_AMD64_IC_CFG, val) && (val & 0x1e) != 0x1e )
+        wrmsr_safe(MSR_AMD64_IC_CFG, val | 0x1e);
+}
+
+static void init_amd_jaguar(struct cpuinfo_x86 *c) /* Fam 16h */
+{
+    uint64_t val;
+
+    /*
+     * Skip errata workarounds if we are virtualised.  We won't have
+     * sufficient control of hardware to do anything useful.
+     */
+    if ( !cpu_has_hypervisor )
+        return;
+
+    if ( c->x86_model <= 0xf )
+    {
+        /* Erratum #792. */
+        if ( c == &boot_cpu_data )
+        {
+            unsigned int l = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
+            unsigned int h = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
+
+            if ( l & 0x1f )
+                pci_conf_write32(0, 0, 0x18, 0x3, 0x58, l & ~0x1f);
+
+            if ( h & 0x1 )
+                pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, h & ~0x1);
+        }
+
+        /* Erratum #793. */
+        rdmsrl(MSR_AMD64_LS_CFG, val);
+        if ( !(val & (1 << 15)) )
+            wrmsrl(MSR_AMD64_LS_CFG, val | (1 << 15));
+    }
+}
+
 static void init_amd(struct cpuinfo_x86 *c)
 {
 	u32 l, h;
 
 	unsigned long long value;
 
-	/* Disable TLB flush filter by setting HWCR.FFDIS on K8
-	 * bit 6 of msr C001_0015
-	 *
-	 * Errata 63 for SH-B3 steppings
-	 * Errata 122 for all steppings (F+ have it disabled by default)
-	 */
-	if (c->x86 == 15) {
-		rdmsrl(MSR_K7_HWCR, value);
-		value |= 1 << 6;
-		wrmsrl(MSR_K7_HWCR, value);
+	switch ( c->x86 )
+	{
+	case 0xf:  init_amd_k8(c); break;
+	case 0x10: init_amd_k10(c); break;
+	case 0x12: init_amd_lynx(c); break;
+	case 0x15: init_amd_bulldozer(c); break;
+	case 0x16: init_amd_jaguar(c); break;
 	}
 
 	/*
@@ -558,18 +701,6 @@ static void init_amd(struct cpuinfo_x86 *c)
 	 * alias, leaving 3DNow in the extended leaf.
 	 */
 	__clear_bit(X86_FEATURE_PBE, c->x86_capability);
-	
-	if (c->x86 == 0xf && c->x86_model < 0x14
-	    && cpu_has(c, X86_FEATURE_LAHF_LM)) {
-		/*
-		 * Some BIOSes incorrectly force this feature, but only K8
-		 * revision D (model = 0x14) and later actually support it.
-		 * (AMD Erratum #110, docId: 25759).
-		 */
-		__clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);
-		if (!rdmsr_amd_safe(0xc001100d, &l, &h))
-			wrmsr_amd_safe(0xc001100d, l, h & ~1);
-	}
 
 	/*
 	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
@@ -651,35 +782,8 @@ static void init_amd(struct cpuinfo_x86 *c)
 		}
 	}
 
-	/* re-enable TopologyExtensions if switched off by BIOS */
-	if ((c->x86 == 0x15) &&
-	    (c->x86_model >= 0x10) && (c->x86_model <= 0x1f) &&
-	    !cpu_has(c, X86_FEATURE_TOPOEXT) &&
-	    !rdmsr_safe(MSR_K8_EXT_FEATURE_MASK, value)) {
-		value |= 1ULL << 54;
-		wrmsr_safe(MSR_K8_EXT_FEATURE_MASK, value);
-		rdmsrl(MSR_K8_EXT_FEATURE_MASK, value);
-		if (value & (1ULL << 54)) {
-			__set_bit(X86_FEATURE_TOPOEXT, c->x86_capability);
-			printk(KERN_INFO "CPU: Re-enabling disabled "
-			       "Topology Extensions Support\n");
-		}
-	}
-
-	/*
-	 * The way access filter has a performance penalty on some workloads.
-	 * Disable it on the affected CPUs.
-	 */
-	if (c->x86 == 0x15 && c->x86_model >= 0x02 && c->x86_model < 0x20 &&
-	    !rdmsr_safe(MSR_AMD64_IC_CFG, value) && (value & 0x1e) != 0x1e)
-		wrmsr_safe(MSR_AMD64_IC_CFG, value | 0x1e);
-
         amd_get_topology(c);
 
-	/* Pointless to use MWAIT on Family10 as it does not deep sleep. */
-	if (c->x86 == 0x10)
-		__clear_bit(X86_FEATURE_MONITOR, c->x86_capability);
-
 	if (!cpu_has_amd_erratum(c, AMD_ERRATUM_121))
 		opt_allow_unsafe = 1;
 	else if (opt_allow_unsafe < 0)
@@ -694,73 +798,9 @@ static void init_amd(struct cpuinfo_x86 *c)
 		       "*** Pass \"allow_unsafe\" if you're trusting"
 		       " all your (PV) guest kernels. ***\n");
 
-	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
-		if (c == &boot_cpu_data) {
-			l = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
-			h = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
-			if ((l & 0x1f) | (h & 0x1))
-				printk(KERN_WARNING
-				       "Applying workaround for erratum 792: %s%s%s\n",
-				       (l & 0x1f) ? "clearing D18F3x58[4:0]" : "",
-				       ((l & 0x1f) && (h & 0x1)) ? " and " : "",
-				       (h & 0x1) ? "clearing D18F3x5C[0]" : "");
-
-			if (l & 0x1f)
-				pci_conf_write32(0, 0, 0x18, 0x3, 0x58,
-						 l & ~0x1f);
-
-			if (h & 0x1)
-				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c,
-						 h & ~0x1);
-		}
-
-		rdmsrl(MSR_AMD64_LS_CFG, value);
-		if (!(value & (1 << 15))) {
-			static bool_t warned;
-
-			if (c == &boot_cpu_data || opt_cpu_info ||
-			    !test_and_set_bool(warned))
-				printk(KERN_WARNING
-				       "CPU%u: Applying workaround for erratum 793\n",
-				       smp_processor_id());
-			wrmsrl(MSR_AMD64_LS_CFG, value | (1 << 15));
-		}
-	} else if (c->x86 == 0x12) {
-		rdmsrl(MSR_AMD64_DE_CFG, value);
-		if (!(value & (1U << 31))) {
-			static bool warned;
-
-			if (c == &boot_cpu_data || opt_cpu_info ||
-			    !test_and_set_bool(warned))
-				printk(KERN_WARNING
-				       "CPU%u: Applying workaround for erratum 665\n",
-				       smp_processor_id());
-			wrmsrl(MSR_AMD64_DE_CFG, value | (1U << 31));
-		}
-	}
-
 	/* AMD CPUs do not support SYSENTER outside of legacy mode. */
 	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
 
-	if (c->x86 == 0x10) {
-		/* do this for boot cpu */
-		if (c == &boot_cpu_data)
-			check_enable_amd_mmconf_dmi();
-
-		fam10h_check_enable_mmcfg();
-
-		/*
-		 * On family 10h BIOS may not have properly enabled WC+
-		 * support, causing it to be converted to CD memtype. This may
-		 * result in performance degradation for certain nested-paging
-		 * guests. Prevent this conversion by clearing bit 24 in
-		 * MSR_F10_BU_CFG2.
-		 */
-		rdmsrl(MSR_F10_BU_CFG2, value);
-		value &= ~(1ULL << 24);
-		wrmsrl(MSR_F10_BU_CFG2, value);
-	}
-
 	/*
 	 * Family 0x12 and above processors have APIC timer
 	 * running in deep C states.
-- 
2.1.4



* [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
  2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:18   ` Roger Pau Monné
                     ` (2 more replies)
  2018-12-28 12:39 ` [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely Andrew Cooper
                   ` (3 subsequent siblings)
  5 siblings, 3 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Future changes are going to want to use cpu_bug_* in a manner similar to
Linux.  Introduce one bug word, and generalise the calculation of
NCAPINTS.
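
To illustrate the resulting layout (using a made-up FSCAPINTS of 10; the
real value comes from the autogenerated featureset), the words line up
as:

    #define FSCAPINTS    10                        /* featureset: words 0-9 */
    #define X86_SYNTH(x) (FSCAPINTS * 32 + (x))               /* word 10    */
    #define X86_BUG(x)   ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x)) /* word 11 */
    #define NCAPINTS     (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* == 12   */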

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/cpufeatures.h | 57 ++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index 0c06274..ba468ea 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -4,30 +4,39 @@
 
 #include <xen/lib/x86/cpuid-autogen.h>
 
+/* Number of capability words covered by the featureset words. */
 #define FSCAPINTS FEATURESET_NR_ENTRIES
 
-#define NCAPINTS (FSCAPINTS + 1) /* N 32-bit words worth of info */
+/* Synthetic words follow the featureset words. */
+#define X86_NR_SYNTH 1
+#define X86_SYNTH(x) (FSCAPINTS * 32 + (x))
 
-/* Other features, Xen-defined mapping. */
-/* This range is used for feature bits which conflict or are synthesized */
-XEN_CPUFEATURE(CONSTANT_TSC,    (FSCAPINTS+0)*32+ 0) /* TSC ticks at a constant rate */
-XEN_CPUFEATURE(NONSTOP_TSC,     (FSCAPINTS+0)*32+ 1) /* TSC does not stop in C states */
-XEN_CPUFEATURE(ARAT,            (FSCAPINTS+0)*32+ 2) /* Always running APIC timer */
-XEN_CPUFEATURE(ARCH_PERFMON,    (FSCAPINTS+0)*32+ 3) /* Intel Architectural PerfMon */
-XEN_CPUFEATURE(TSC_RELIABLE,    (FSCAPINTS+0)*32+ 4) /* TSC is known to be reliable */
-XEN_CPUFEATURE(XTOPOLOGY,       (FSCAPINTS+0)*32+ 5) /* cpu topology enum extensions */
-XEN_CPUFEATURE(CPUID_FAULTING,  (FSCAPINTS+0)*32+ 6) /* cpuid faulting */
-XEN_CPUFEATURE(CLFLUSH_MONITOR, (FSCAPINTS+0)*32+ 7) /* clflush reqd with monitor */
-XEN_CPUFEATURE(APERFMPERF,      (FSCAPINTS+0)*32+ 8) /* APERFMPERF */
-XEN_CPUFEATURE(MFENCE_RDTSC,    (FSCAPINTS+0)*32+ 9) /* MFENCE synchronizes RDTSC */
-XEN_CPUFEATURE(XEN_SMEP,        (FSCAPINTS+0)*32+10) /* SMEP gets used by Xen itself */
-XEN_CPUFEATURE(XEN_SMAP,        (FSCAPINTS+0)*32+11) /* SMAP gets used by Xen itself */
-XEN_CPUFEATURE(LFENCE_DISPATCH, (FSCAPINTS+0)*32+12) /* lfence set as Dispatch Serialising */
-XEN_CPUFEATURE(IND_THUNK_LFENCE,(FSCAPINTS+0)*32+13) /* Use IND_THUNK_LFENCE */
-XEN_CPUFEATURE(IND_THUNK_JMP,   (FSCAPINTS+0)*32+14) /* Use IND_THUNK_JMP */
-XEN_CPUFEATURE(SC_MSR_PV,       (FSCAPINTS+0)*32+16) /* MSR_SPEC_CTRL used by Xen for PV */
-XEN_CPUFEATURE(SC_MSR_HVM,      (FSCAPINTS+0)*32+17) /* MSR_SPEC_CTRL used by Xen for HVM */
-XEN_CPUFEATURE(SC_RSB_PV,       (FSCAPINTS+0)*32+18) /* RSB overwrite needed for PV */
-XEN_CPUFEATURE(SC_RSB_HVM,      (FSCAPINTS+0)*32+19) /* RSB overwrite needed for HVM */
-XEN_CPUFEATURE(SC_MSR_IDLE,     (FSCAPINTS+0)*32+21) /* (SC_MSR_PV || SC_MSR_HVM) && default_xen_spec_ctrl */
-XEN_CPUFEATURE(XEN_LBR,         (FSCAPINTS+0)*32+22) /* Xen uses MSR_DEBUGCTL.LBR */
+/* Synthetic features */
+XEN_CPUFEATURE(CONSTANT_TSC,      X86_SYNTH( 0)) /* TSC ticks at a constant rate */
+XEN_CPUFEATURE(NONSTOP_TSC,       X86_SYNTH( 1)) /* TSC does not stop in C states */
+XEN_CPUFEATURE(ARAT,              X86_SYNTH( 2)) /* Always running APIC timer */
+XEN_CPUFEATURE(ARCH_PERFMON,      X86_SYNTH( 3)) /* Intel Architectural PerfMon */
+XEN_CPUFEATURE(TSC_RELIABLE,      X86_SYNTH( 4)) /* TSC is known to be reliable */
+XEN_CPUFEATURE(XTOPOLOGY,         X86_SYNTH( 5)) /* cpu topology enum extensions */
+XEN_CPUFEATURE(CPUID_FAULTING,    X86_SYNTH( 6)) /* cpuid faulting */
+XEN_CPUFEATURE(CLFLUSH_MONITOR,   X86_SYNTH( 7)) /* clflush reqd with monitor */
+XEN_CPUFEATURE(APERFMPERF,        X86_SYNTH( 8)) /* APERFMPERF */
+XEN_CPUFEATURE(MFENCE_RDTSC,      X86_SYNTH( 9)) /* MFENCE synchronizes RDTSC */
+XEN_CPUFEATURE(XEN_SMEP,          X86_SYNTH(10)) /* SMEP gets used by Xen itself */
+XEN_CPUFEATURE(XEN_SMAP,          X86_SYNTH(11)) /* SMAP gets used by Xen itself */
+XEN_CPUFEATURE(LFENCE_DISPATCH,   X86_SYNTH(12)) /* lfence set as Dispatch Serialising */
+XEN_CPUFEATURE(IND_THUNK_LFENCE,  X86_SYNTH(13)) /* Use IND_THUNK_LFENCE */
+XEN_CPUFEATURE(IND_THUNK_JMP,     X86_SYNTH(14)) /* Use IND_THUNK_JMP */
+XEN_CPUFEATURE(SC_MSR_PV,         X86_SYNTH(16)) /* MSR_SPEC_CTRL used by Xen for PV */
+XEN_CPUFEATURE(SC_MSR_HVM,        X86_SYNTH(17)) /* MSR_SPEC_CTRL used by Xen for HVM */
+XEN_CPUFEATURE(SC_RSB_PV,         X86_SYNTH(18)) /* RSB overwrite needed for PV */
+XEN_CPUFEATURE(SC_RSB_HVM,        X86_SYNTH(19)) /* RSB overwrite needed for HVM */
+XEN_CPUFEATURE(SC_MSR_IDLE,       X86_SYNTH(21)) /* (SC_MSR_PV || SC_MSR_HVM) && default_xen_spec_ctrl */
+XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
+
+/* Bug words follow the synthetic words. */
+#define X86_NR_BUG 1
+#define X86_BUG(x) ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x))
+
+/* Total number of capability words, inc synth and bug words. */
+#define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
-- 
2.1.4



* [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
  2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
  2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:26   ` Roger Pau Monné
  2019-01-09 10:16   ` Jan Beulich
  2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

There are multiple problems:

 * The opt_allow_unsafe < 0 logic is dead since 2012 (c/s 0c7a6966511
   "x86-64: refine the XSA-9 fix").
 * Given that opt_allow_unsafe was deliberately intended not to be
   specific to #121 alone, setting it to true for the not-affected case
   will cause a security issue if a second use of this option ever
   appears.
 * Calling cpu_has_amd_erratum() on every domain creation is wasteful,
   given that the answer is static after boot.

Move opt_allow_unsafe into domain.c, as a better location for it to
live, and switch it to be a straight boolean.

Use the new cpu_bug_* infrastructure to precompute erratum 121 during
boot, rather than repeatedly at runtime.  Leave a comment beside the
check in arch_domain_create() to explain why we may refuse to boot
DomU's.

Reflow the printed messages for grep-ability, and fix them for
correctness and brevity.
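
The resulting pattern (a sketch stitched together from the hunks below,
with the warning printk omitted) is to latch the erratum once while
identifying the boot CPU, and to test the precomputed bit at domain
creation:

    /* amd.c, Fam 0xf (K8) init: */
    setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);

    /* domain.c, arch_domain_create(): */
    if ( unlikely(cpu_bug_amd_erratum_121) && d->domain_id &&
         !opt_allow_unsafe )
        return -EPERM;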

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/amd.c            | 26 ++++++++------------------
 xen/arch/x86/domain.c             | 19 +++++++++++++------
 xen/include/asm-x86/amd.h         |  5 -----
 xen/include/asm-x86/cpufeature.h  |  3 +++
 xen/include/asm-x86/cpufeatures.h |  2 ++
 xen/include/asm-x86/domain.h      |  2 ++
 6 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index c3aa1f4..8089fb9 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -40,10 +40,6 @@ integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
 static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
 integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
 
-/* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
-s8 __read_mostly opt_allow_unsafe;
-boolean_param("allow_unsafe", opt_allow_unsafe);
-
 /* Signal whether the ACPI C1E quirk is required. */
 bool __read_mostly amd_acpi_c1e_quirk;
 
@@ -538,6 +534,14 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
 {
     uint64_t val;
 
+    setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);
+
+    if ( c == &boot_cpu_data && !opt_allow_unsafe )
+        printk(KERN_WARNING
+               "*** Xen will not allow DomU creation on this CPU for security reasons ***\n"
+               KERN_WARNING
+               "*** Pass \"allow_unsafe\" if you trust all your guest kernels ***\n");
+
     /*
      * Skip errata workarounds if we are virtualised.  We won't have
      * sufficient control of hardware to do anything useful.
@@ -784,20 +788,6 @@ static void init_amd(struct cpuinfo_x86 *c)
 
         amd_get_topology(c);
 
-	if (!cpu_has_amd_erratum(c, AMD_ERRATUM_121))
-		opt_allow_unsafe = 1;
-	else if (opt_allow_unsafe < 0)
-		panic("Xen will not boot on this CPU for security reasons"
-		      "Pass \"allow_unsafe\" if you're trusting all your"
-		      " (PV) guest kernels.\n");
-	else if (!opt_allow_unsafe && c == &boot_cpu_data)
-		printk(KERN_WARNING
-		       "*** Xen will not allow creation of DomU-s on"
-		       " this CPU for security reasons. ***\n"
-		       KERN_WARNING
-		       "*** Pass \"allow_unsafe\" if you're trusting"
-		       " all your (PV) guest kernels. ***\n");
-
 	/* AMD CPUs do not support SYSENTER outside of legacy mode. */
 	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 32dc4253..beeb1d7 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -71,6 +71,10 @@
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
+/* Permit creating domains on unsafe systems? */
+bool __read_mostly opt_allow_unsafe;
+boolean_param("allow_unsafe", opt_allow_unsafe);
+
 static void default_idle(void);
 void (*pm_idle) (void) __read_mostly = default_idle;
 void (*dead_idle) (void) __read_mostly = default_dead_idle;
@@ -506,17 +510,20 @@ int arch_domain_create(struct domain *d,
         return -EINVAL;
     }
 
-    if ( d->domain_id && cpu_has_amd_erratum(&boot_cpu_data, AMD_ERRATUM_121) )
+    /*
+     * AMD Erratum 121 results in a core hang when executing into the
+     * non-canonical region.  64bit PV and HVM guests can pull this off, and
+     * there is nothing Xen can do to mitigate.
+     */
+    if ( unlikely(cpu_bug_amd_erratum_121) && d->domain_id )
     {
         if ( !opt_allow_unsafe )
         {
-            printk(XENLOG_G_ERR "Xen does not allow DomU creation on this CPU"
-                   " for security reasons.\n");
+            printk(XENLOG_G_ERR
+                   "Xen does not allow DomU creation on this CPU for security reasons\n");
             return -EPERM;
         }
-        printk(XENLOG_G_WARNING
-               "Dom%d may compromise security on this CPU.\n",
-               d->domain_id);
+        printk(XENLOG_G_WARNING "%pd may compromise security on this CPU\n", d);
     }
 
     d->arch.s3_integrity = config->flags & XEN_DOMCTL_CDF_s3_integrity;
diff --git a/xen/include/asm-x86/amd.h b/xen/include/asm-x86/amd.h
index a82382e..3d82db1 100644
--- a/xen/include/asm-x86/amd.h
+++ b/xen/include/asm-x86/amd.h
@@ -124,9 +124,6 @@
 #define AMD_MODEL_RANGE_START(range)    (((range) >> 12) & 0xfff)
 #define AMD_MODEL_RANGE_END(range)      ((range) & 0xfff)
 
-#define AMD_ERRATUM_121                                                 \
-    AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x0f, 0x0, 0x0, 0x3f, 0xf))
-
 #define AMD_ERRATUM_170                                                 \
     AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x0f, 0x0, 0x0, 0x67, 0xf))
 
@@ -143,8 +140,6 @@
 struct cpuinfo_x86;
 int cpu_has_amd_erratum(const struct cpuinfo_x86 *, int, ...);
 
-extern s8 opt_allow_unsafe;
-
 void fam10h_check_enable_mmcfg(void);
 void check_enable_amd_mmconf_dmi(void);
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 5592e17..4ed7be3 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -122,6 +122,9 @@
 
 #define cpu_has_msr_tsc_aux     (cpu_has_rdtscp || cpu_has_rdpid)
 
+/* Bugs. */
+#define cpu_bug_amd_erratum_121 boot_cpu_has(X86_BUG_AMD_ERRATUM_121)
+
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
     CACHE_TYPE_DATA = 1,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index ba468ea..a19116c 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -38,5 +38,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 #define X86_NR_BUG 1
 #define X86_BUG(x) ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x))
 
+#define X86_BUG_AMD_ERRATUM_121   X86_BUG( 0) /* Hang on fetch across non-canonical boundary. */
+
 /* Total number of capability words, inc synth and bug words. */
 #define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 277f99f..62bafe3 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -12,6 +12,8 @@
 #include <public/vcpu.h>
 #include <public/hvm/hvm_info_table.h>
 
+extern bool opt_allow_unsafe;
+
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 
 #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
-- 
2.1.4



* [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-12-28 12:39 ` [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:36   ` Roger Pau Monné
                     ` (2 more replies)
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
  2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
  5 siblings, 3 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Pu Wen, Jan Beulich, Roger Pau Monné

AMD processors don't clear the base or limit fields when loading a NULL
segment, and Hygon processors inherit this behaviour.

Express the logic in terms of cpu_bug_null_seg, and rearrange
preload_segment() to have the more predictable condition first and to
avoid referencing AMD specifically.

Tweak the inline ASM, as `mov %sreg` can be encoded with a memory
operand.
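
i.e. the reload becomes (per the hunk below), with the compiler free to
pick a register or memory operand for the selector value:

    asm volatile ( "mov %k0, %%" #seg :: "rm" (FLAT_USER_DS32) );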

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Pu Wen <puwen@hygon.cn>
---
 xen/arch/x86/cpu/amd.c            |  6 ++++++
 xen/arch/x86/domain.c             | 18 +++++++++---------
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 4 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 8089fb9..21c82bb 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -707,6 +707,12 @@ static void init_amd(struct cpuinfo_x86 *c)
 	__clear_bit(X86_FEATURE_PBE, c->x86_capability);
 
 	/*
+	 * AMD CPUs don't zero a segments base and limit when loading a NULL
+	 * selector.
+	 */
+	setup_force_cpu_cap(X86_BUG_NULL_SEG);
+
+	/*
 	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
 	 * certainly isn't virtualised (and Xen at least will leak the real
 	 * value in but silently discard writes), as well as being per-core
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index beeb1d7..725d0a1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1307,16 +1307,16 @@ arch_do_vcpu_op(
 }
 
 /*
- * Loading a nul selector does not clear bases and limits on AMD CPUs. Be on
- * the safe side and re-initialize both to flat segment values before loading
- * a nul selector.
+ * Loading a NULL selector doesn't always clear bases and limits.  Be on the
+ * safe side and re-initialize both to flat segment values before loading a
+ * NULL selector.
  */
-#define preload_segment(seg, value) do {              \
-    if ( !((value) & ~3) &&                           \
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD ) \
-        asm volatile ( "movl %k0, %%" #seg            \
-                       :: "r" (FLAT_USER_DS32) );     \
-} while ( false )
+#define preload_segment(seg, value)                     \
+    do {                                                \
+        if ( cpu_bug_null_seg && !((value) & ~3) )      \
+            asm volatile ( "mov %k0, %%" #seg           \
+                           :: "rm" (FLAT_USER_DS32) );  \
+    } while ( 0 )
 
 #define loadsegment(seg,value) ({               \
     int __r = 1;                                \
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 4ed7be3..de45b6c 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -124,6 +124,7 @@
 
 /* Bugs. */
 #define cpu_bug_amd_erratum_121 boot_cpu_has(X86_BUG_AMD_ERRATUM_121)
+#define cpu_bug_null_seg        boot_cpu_has(X86_BUG_NULL_SEG)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index a19116c..8a73a09 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -39,6 +39,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 #define X86_BUG(x) ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x))
 
 #define X86_BUG_AMD_ERRATUM_121   X86_BUG( 0) /* Hang on fetch across non-canonical boundary. */
+#define X86_BUG_NULL_SEG          X86_BUG( 1) /* NULL-ing a selector preserves the base and limit. */
 
 /* Total number of capability words, inc synth and bug words. */
 #define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
-- 
2.1.4



* [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:43   ` Roger Pau Monné
                     ` (3 more replies)
  2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
  5 siblings, 4 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Wei Liu, Pu Wen, Jan Beulich, Roger Pau Monné

AMD hardware before Zen doesn't save/restore the FPU error pointers
unless an unmasked FPU exception is pending.  Zen processors have a
feature bit indicating that this (mis)behaviour no longer exists.

Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
processors (being Zen derivatives) won't inherit this behaviour.

While at it, fix a performance issue with the workaround, which I should
have noticed by now.  Looking at the FPU state slows the context switch
path down, as it is a moderately complicated unpredictable condition
which will evaluate to true for all 64bit OSes and most 32bit ones.

Leave the easily-predictable cpu_bug_fpu_err_ptr as the sole
condition, as the asm sequence is needed in ~100% of cases these days
on affected hardware.
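
Put together (a sketch assembled from the hunks below), the bug is
latched at boot on hardware lacking the Zen feature bit, and the
xrstor() condition collapses to a single test of it:

    /* amd.c: pre-Zen parts lack XSAVEERRPTR, so flag the leak once. */
    if ( !cpu_has(c, X86_FEATURE_XSAVEERRPTR) )
        setup_force_cpu_cap(X86_BUG_FPU_PTR_LEAK);

    /* xstate.c: */
    if ( cpu_bug_fpu_err_ptr )
        asm volatile ( "fnclex\n\t"        /* clear exceptions */
                       "ffree %%st(7)\n\t" /* clear stack tag */
                       "fildl %0"          /* load to clear state */
                       :: "m" (ptr->fpu_sse) );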

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Pu Wen <puwen@hygon.cn>
---
 xen/arch/x86/cpu/amd.c                      | 7 +++++++
 xen/arch/x86/xstate.c                       | 6 ++----
 xen/include/asm-x86/cpufeature.h            | 1 +
 xen/include/asm-x86/cpufeatures.h           | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 5 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 21c82bb..8c4521c 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -713,6 +713,13 @@ static void init_amd(struct cpuinfo_x86 *c)
 	setup_force_cpu_cap(X86_BUG_NULL_SEG);
 
 	/*
+	 * Older AMD CPUs don't save/load FOP/FIP/FDP unless an FPU exception
+	 * is pending.
+	 */
+	if ( !cpu_has(c, X86_FEATURE_XSAVEERRPTR) )
+		setup_force_cpu_cap(X86_BUG_FPU_PTR_LEAK);
+
+	/*
 	 * Attempt to set lfence to be Dispatch Serialising.  This MSR almost
 	 * certainly isn't virtualised (and Xen at least will leak the real
 	 * value in but silently discard writes), as well as being per-core
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 15edd5d..7ca5684 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -369,15 +369,13 @@ void xrstor(struct vcpu *v, uint64_t mask)
     unsigned int faults, prev_faults;
 
     /*
-     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
      * is pending. Clear the x87 state here by setting it to fixed
      * values. The hypervisor data segment can be sometimes 0 and
      * sometimes new user value. Both should be ok. Use the FPU saved
      * data block as a safe address because it should be in L1.
      */
-    if ( (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) &&
-         !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f) &&
-         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+    if ( cpu_bug_fpu_err_ptr )
         asm volatile ( "fnclex\n\t"        /* clear exceptions */
                        "ffree %%st(7)\n\t" /* clear stack tag */
                        "fildl %0"          /* load to clear state */
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index de45b6c..0f3bb5a 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -125,6 +125,7 @@
 /* Bugs. */
 #define cpu_bug_amd_erratum_121 boot_cpu_has(X86_BUG_AMD_ERRATUM_121)
 #define cpu_bug_null_seg        boot_cpu_has(X86_BUG_NULL_SEG)
+#define cpu_bug_fpu_err_ptr     boot_cpu_has(X86_BUG_FPU_PTR_LEAK)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index 8a73a09..e7d4171 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -40,6 +40,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 
 #define X86_BUG_AMD_ERRATUM_121   X86_BUG( 0) /* Hang on fetch across non-canonical boundary. */
 #define X86_BUG_NULL_SEG          X86_BUG( 1) /* NULL-ing a selector preserves the base and limit. */
+#define X86_BUG_FPU_PTR_LEAK      X86_BUG( 2) /* (F)XRSTOR doesn't load FOP/FIP/FDP. */
 
 /* Total number of capability words, inc synth and bug words. */
 #define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index fbc68fa..6beaab9 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -237,6 +237,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
 
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
+XEN_CPUFEATURE(XSAVEERRPTR,   8*32+ 2) /*A  (F)XSAVE Error pointers always updated. */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */
-- 
2.1.4



* [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores
  2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
@ 2018-12-28 12:39 ` Andrew Cooper
  2018-12-28 15:54   ` Roger Pau Monné
                     ` (2 more replies)
  5 siblings, 3 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 12:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

c/s fd32dcfe4c "x86/vmx: Don't leak EFER.NXE into guest context" had an
unintended consequence on Harpertown cores which, as it turns out, don't
load MSR_EFER fully from the MSR Load List - on reentry to the guest,
EFER.SCE is clear irrespective of the value in the load list.

This, being catastrophic to 64bit guests, is far worse than the EFER.NXE
leakage which the change was trying to fix.

Introduce cpu_bug_msr_ll_efer_sce to encapsulate this partial revert.
Avoid adding MSR_EFER to the Load Lists on impacted hardware, and
reintroduce the logic to use the guest's EFER.SCE setting.

In the common case of running 64bit HVM guests, these extra adjustments
to EFER should only be hit during guest boot.
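
In sketch form (assembled from the hunks below), the detection and the
fallback handling are:

    /* intel.c: Harpertown is Family 6 Model 0x17. */
    if (c->x86 == 0x6 && c->x86_model == 0x17)
        setup_force_cpu_cap(X86_BUG_MSR_LL_EFER_SCE);

    /*
     * vmx.c, vmx_update_guest_efer(), when VMCS EFER loading isn't
     * available: keep Xen's NX, hand-carry the guest's SCE instead of
     * using the MSR Load List.
     */
    if ( (guest_efer ^ xen_efer) & EFER_SCE )
        write_efer((xen_efer & ~EFER_SCE) | (guest_efer & EFER_SCE));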

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

This is RFC for now, because the test lab is full at the moment
and I don't have a Harpertown CPU to hand.  I'm fairly sure the change
is complete and will test when it becomes available, but I don't expect
to make any code changes.
---
 xen/arch/x86/cpu/intel.c          |  7 +++++++
 xen/arch/x86/hvm/vmx/vmx.c        | 30 +++++++++++++++++++++++++++++-
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 xen/include/asm-x86/hvm/hvm.h     | 10 ++++++++--
 5 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 65fa3d6..7d1cb1d 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -331,6 +331,13 @@ static void init_intel(struct cpuinfo_x86 *c)
 	/* Work around errata */
 	Intel_errata_workarounds(c);
 
+	/*
+	 * Harpertown cores don't context switch MSR_EFER correctly when it
+	 * appears in an MSR Load List.
+	 */
+	if (c->x86 == 0x6 && c->x86_model == 0x17)
+		setup_force_cpu_cap(X86_BUG_MSR_LL_EFER_SCE);
+
 	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
 		(c->x86 == 0x6 && c->x86_model >= 0x0e))
 		__set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 64af8bf..051f5c1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -899,6 +899,15 @@ static void vmx_fpu_leave(struct vcpu *v)
 
 static void vmx_ctxt_switch_from(struct vcpu *v)
 {
+    /* Always restore to Xen's SCE setting when leaving VT-x context. */
+    if ( unlikely(cpu_bug_msr_ll_efer_sce) )
+    {
+        uint64_t efer = read_efer();
+
+        if ( !(efer & EFER_SCE) )
+            write_efer(efer | EFER_SCE);
+    }
+
     /*
      * Return early if trying to do a context switch without VMX enabled,
      * this can happen when the hypervisor shuts down with HVM guests
@@ -931,6 +940,14 @@ static void vmx_ctxt_switch_from(struct vcpu *v)
 
 static void vmx_ctxt_switch_to(struct vcpu *v)
 {
+    /*
+     * When we can't context switch MSR_EFER correctly, we need to load the
+     * guests SCE value to ensure that guest userspace functions correctly.
+     */
+    if ( unlikely(cpu_bug_msr_ll_efer_sce) &&
+         !(v->arch.hvm.guest_efer & EFER_SCE) )
+        write_efer(read_efer() & ~EFER_SCE);
+
     vmx_restore_guest_msrs(v);
     vmx_restore_dr(v);
 
@@ -1684,7 +1701,18 @@ static void vmx_update_guest_efer(struct vcpu *v)
     /* We expect to use EFER loading in the common case, but... */
     if ( likely(cpu_has_vmx_efer) )
         __vmwrite(GUEST_EFER, guest_efer);
-
+    /*
+     * On Harpertown cores, putting MSR_EFER in the MSR Load List causes
+     * EFER.SCE to be loaded as 0 when VMRESUME completes (although EFER.NX is
+     * loaded correctly).  This is catastrophic for 64bit guests.  In such
+     * cases, we run with the guests choice of SCE in EFER, and the guest gets
+     * Xen's choice of NX.
+     */
+    else if ( unlikely(cpu_bug_msr_ll_efer_sce) )
+    {
+        if ( (guest_efer ^ xen_efer) & EFER_SCE )
+            write_efer((xen_efer & ~EFER_SCE) | (guest_efer & EFER_SCE));
+    }
     /* ... on Gen1 VT-x hardware, we have to use MSR load/save lists instead. */
     else
     {
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 0f3bb5a..4daf52a 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -126,6 +126,7 @@
 #define cpu_bug_amd_erratum_121 boot_cpu_has(X86_BUG_AMD_ERRATUM_121)
 #define cpu_bug_null_seg        boot_cpu_has(X86_BUG_NULL_SEG)
 #define cpu_bug_fpu_err_ptr     boot_cpu_has(X86_BUG_FPU_PTR_LEAK)
+#define cpu_bug_msr_ll_efer_sce boot_cpu_has(X86_BUG_MSR_LL_EFER_SCE)
 
 enum _cache_type {
     CACHE_TYPE_NULL = 0,
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index e7d4171..435abea 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -41,6 +41,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 #define X86_BUG_AMD_ERRATUM_121   X86_BUG( 0) /* Hang on fetch across non-canonical boundary. */
 #define X86_BUG_NULL_SEG          X86_BUG( 1) /* NULL-ing a selector preserves the base and limit. */
 #define X86_BUG_FPU_PTR_LEAK      X86_BUG( 2) /* (F)XRSTOR doesn't load FOP/FIP/FDP. */
+#define X86_BUG_MSR_LL_EFER_SCE   X86_BUG( 3) /* MSR Load List clears EFER.SCE. */
 
 /* Total number of capability words, inc synth and bug words. */
 #define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0a10b51..3c2c1ec 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -366,8 +366,14 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
     (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMEP))
 #define hvm_smap_enabled(v) \
     (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMAP))
-#define hvm_nx_enabled(v) \
-    ((v)->arch.hvm.guest_efer & EFER_NX)
+/*
+ * A consequence of not being able to put MSR_EFER in the MSR Load List is
+ * that we can't context switch EFER.NXE correctly for guests.  The guest gets
+ * Xen's value, and has no choice in the matter.
+ */
+#define hvm_nx_enabled(v)                                         \
+    ((likely(!cpu_bug_msr_ll_efer_sce) ? (v)->arch.hvm.guest_efer \
+                                       : read_efer()) & EFER_NX)
 #define hvm_pku_enabled(v) \
     (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_PKE))
 
-- 
2.1.4



* Re: [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers
  2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
@ 2018-12-28 15:14   ` Roger Pau Monné
  2018-12-28 16:12     ` Andrew Cooper
  2019-01-08 16:15   ` Jan Beulich
  1 sibling, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:14 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:31PM +0000, Andrew Cooper wrote:
> This reduces the complexity of init_amd(), and collects related
> workarounds together.
> 
> It also offers us the opportunity to stop performing workarounds when
> virtualised - doing so is wasteful, as it all involves poking MSRs which
> no hypervisor will let us touch in practice.
> 
> As amd.c has diverged almost entirely from its Linux heritage, start
> switching it to using Xen hypervisor style.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> I've dropped various printk()s about working around some errata, because
> their use was completely inconsistent.  They should be uniform: either
> present for every workaround, or absent from all of them.
> ---
>  xen/arch/x86/cpu/amd.c | 266 ++++++++++++++++++++++++++++---------------------
>  1 file changed, 153 insertions(+), 113 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index c790416..c3aa1f4 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -534,22 +534,165 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>  	ctxt_switch_levelling(NULL);
>  }
>  
> +static void init_amd_k8(struct cpuinfo_x86 *c)
> +{
> +    uint64_t val;
> +
> +    /*
> +     * Skip errata workarounds if we are virtualised.  We won't have
> +     * sufficient control of hardware to do anything useful.
> +     */
> +    if ( !cpu_has_hypervisor )

I think you want `if ( cpu_has_hypervisor ) return;` according to the
comment (here and below).
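
i.e. something like:

    /*
     * Skip errata workarounds if we are virtualised.  We won't have
     * sufficient control of hardware to do anything useful.
     */
    if ( cpu_has_hypervisor )
        return;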

The rest LGTM, as is mostly code movement AFAICT:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word
  2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
@ 2018-12-28 15:18   ` Roger Pau Monné
  2019-01-02 12:48   ` Wei Liu
  2019-01-08 16:20   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:32PM +0000, Andrew Cooper wrote:
> Future changes are going to want to use cpu_bug_* in a manner similar to
> Linux.  Introduce one bug word, and generalise the calculation of
> NCAPINTS.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Should the commit message also note the introduction of X86_SYNTH? I
think it's quite helpful.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely
  2018-12-28 12:39 ` [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely Andrew Cooper
@ 2018-12-28 15:26   ` Roger Pau Monné
  2018-12-28 16:06     ` Andrew Cooper
  2019-01-09 10:16   ` Jan Beulich
  1 sibling, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:26 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:33PM +0000, Andrew Cooper wrote:
> There are multiple problems:
> 
>  * The opt_allow_unsafe < 0 logic is dead since 2012 (c/s 0c7a6966511
>    "x86-64: refine the XSA-9 fix").
>  * Given that opt_allow_unsafe was deliberately intended not to be
>    specific to #121 alone, setting it to true for the not-affected case
>    will cause a security issue if a second use of this option ever
>    appears.
>  * Calling cpu_has_amd_erratum() on every domain creation is wasteful,
>    given that the answer is static after boot.
> 
> Move opt_allow_unsafe into domain.c, as a better location for it to
> live, and switch it to be a straight boolean.
> 
> Use the new cpu_bug_* infrastructure to precompute erratum 121 during
> boot, rather than repeatedly at runtime.  Leave a comment beside the
> check in arch_domain_create() to explain why we may refuse to boot
> DomU's.
> 
> Reflow the printed messages for grep-ability, and fix them for
> correctness and brevity.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/cpu/amd.c            | 26 ++++++++------------------
>  xen/arch/x86/domain.c             | 19 +++++++++++++------
>  xen/include/asm-x86/amd.h         |  5 -----
>  xen/include/asm-x86/cpufeature.h  |  3 +++
>  xen/include/asm-x86/cpufeatures.h |  2 ++
>  xen/include/asm-x86/domain.h      |  2 ++
>  6 files changed, 28 insertions(+), 29 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index c3aa1f4..8089fb9 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -40,10 +40,6 @@ integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
>  static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
>  integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
>  
> -/* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
> -s8 __read_mostly opt_allow_unsafe;
> -boolean_param("allow_unsafe", opt_allow_unsafe);
> -
>  /* Signal whether the ACPI C1E quirk is required. */
>  bool __read_mostly amd_acpi_c1e_quirk;
>  
> @@ -538,6 +534,14 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
>  {
>      uint64_t val;
>  
> +    setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);
> +
> +    if ( c == &boot_cpu_data && !opt_allow_unsafe )
> +        printk(KERN_WARNING
> +               "*** Xen will not allow DomU creation on this CPU for security reasons ***\n"
> +               KERN_WARNING
> +               "*** Pass \"allow_unsafe\" if you trust all your guest kernels ***\n");

Since you are switching the file to match Xen's coding style, I would
use XENLOG_WARNING instead of KERN_WARNING.

> +
>      /*
>       * Skip errata workarounds if we are virtualised.  We won't have
>       * sufficient control of hardware to do anything useful.
> @@ -784,20 +788,6 @@ static void init_amd(struct cpuinfo_x86 *c)
>  
>          amd_get_topology(c);
>  
> -	if (!cpu_has_amd_erratum(c, AMD_ERRATUM_121))
> -		opt_allow_unsafe = 1;
> -	else if (opt_allow_unsafe < 0)
> -		panic("Xen will not boot on this CPU for security reasons"
> -		      "Pass \"allow_unsafe\" if you're trusting all your"
> -		      " (PV) guest kernels.\n");
> -	else if (!opt_allow_unsafe && c == &boot_cpu_data)
> -		printk(KERN_WARNING
> -		       "*** Xen will not allow creation of DomU-s on"
> -		       " this CPU for security reasons. ***\n"
> -		       KERN_WARNING
> -		       "*** Pass \"allow_unsafe\" if you're trusting"
> -		       " all your (PV) guest kernels. ***\n");
> -
>  	/* AMD CPUs do not support SYSENTER outside of legacy mode. */
>  	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
>  
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 32dc4253..beeb1d7 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -71,6 +71,10 @@
>  
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>  
> +/* Permit creating domains on unsafe systems? */
> +bool __read_mostly opt_allow_unsafe;

I think you can make this static now, since you have removed the only
external user which was amd.c.

Thanks, Roger.


* Re: [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG
  2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
@ 2018-12-28 15:36   ` Roger Pau Monné
  2019-01-02 12:55   ` Wei Liu
  2019-01-09 10:32   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:36 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Pu Wen, Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:34PM +0000, Andrew Cooper wrote:
> AMD processors don't clear the base or limit fields when loading a NULL
> segment, and Hygon processors inherit this behaviour.
> 
> Express the logic in terms of cpu_bug_null_seg, and rearrange
> preload_segment() to have the more predictable condition first and to
> avoid referencing AMD specifically.
> 
> Tweak the inline ASM, as `mov %sreg` can be encoded with a memory
> operand.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
@ 2018-12-28 15:43   ` Roger Pau Monné
  2018-12-28 16:11     ` Andrew Cooper
  2018-12-29  9:52   ` Pu Wen
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:43 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Pu Wen, Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:35PM +0000, Andrew Cooper wrote:
> AMD hardware before Zen doesn't save/restore the FPU error pointers
> unless an unmasked FPU exception is pending.  Zen processors have a
> feature bit indicating that this (mis)behaviour no longer exists.
> 
> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
> processors (being Zen derivatives) won't inherit this behaviour.
> 
> While at it, fix a performance issue with the workaround, which I should
> have noticed by now.  Looking at the FPU state slows the context switch
> path down, as it is a moderately complicated unpredictable condition
> which will evaluate to true for all 64bit OSes and most 32bit ones.
> 
> Leave the easily-predictable cpu_bug_fpu_err_ptr as the sole
> condition, as the asm sequence is needed in ~100% of cases these days
> on affected hardware.

I'm not that familiar with the FPU registers, but I assume current
code checks whether there's an exception pending before deciding
whether the error pointers should be cleared?

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Apart from the question above, LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores
  2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
@ 2018-12-28 15:54   ` Roger Pau Monné
  2019-01-02  5:10   ` Tian, Kevin
  2019-01-03 11:01   ` Andrew Cooper
  2 siblings, 0 replies; 29+ messages in thread
From: Roger Pau Monné @ 2018-12-28 15:54 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:36PM +0000, Andrew Cooper wrote:
> c/s fd32dcfe4c "x86/vmx: Don't leak EFER.NXE into guest context" had an
> unintended consequence on Harpertown cores which, as it turns out, don't
> load MSR_EFER fully from the MSR Load List - on reentry to the guest,
> EFER.SCE is clear irrespective of the value in load list.
> 
> This, being catastrophic to 64bit guests, is far worse than the EFER.NXE
> leakage which was trying to be fixed.
> 
> Introduce cpu_bug_msr_ll_efer_sce to encapsulate this partial revert.
> Avoid adding MSR_EFER to the Load Lists on impacted hardware, and
> reintroduce the logic to use the guests EFER.SCE setting.
> 
> In the common case of running 64bit HVM guests, these extra adjustments
> to EFER should only be hit during guest boot.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 0a10b51..3c2c1ec 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -366,8 +366,14 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
>      (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMEP))
>  #define hvm_smap_enabled(v) \
>      (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMAP))
> -#define hvm_nx_enabled(v) \
> -    ((v)->arch.hvm.guest_efer & EFER_NX)
> +/*
> + * A consequence of not being able to put MSR_EFER in the MSR Load List is
> + * that we can't context switch EFER.NXE correctly for guests.  The guest gets
> + * Xen's value, and has no choice in the matter.
> + */
> +#define hvm_nx_enabled(v)                                         \
> +    ((likely(!cpu_bug_msr_ll_efer_sce) ? (v)->arch.hvm.guest_efer \

Instead of likely(!...) I would use the unlikely(...) variant
because I think it's easier to read.
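
For reference the two spellings are equivalent, the second merely puts
the rare case first (A and B stand for the two operands, which are cut
off in the quote above):

    likely(!cpu_bug_msr_ll_efer_sce)  ? A : B
    unlikely(cpu_bug_msr_ll_efer_sce) ? B : A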

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely
  2018-12-28 15:26   ` Roger Pau Monné
@ 2018-12-28 16:06     ` Andrew Cooper
  0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 16:06 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 28/12/2018 15:26, Roger Pau Monné wrote:
> On Fri, Dec 28, 2018 at 12:39:33PM +0000, Andrew Cooper wrote:
>> There are multiple problems:
>>
>>  * The opt_allow_unsafe < 0 logic is dead since 2012 (c/s 0c7a6966511
>>    "x86-64: refine the XSA-9 fix").
>>  * Given that opt_allow_unsafe was deliberately intended not to be
>>    specific to #121 alone, setting it to true for the not-affected case
>>    will cause a security issue if a second use of this option ever
>>    appears.
>>  * Calling cpu_has_amd_erratum() on every domain creation is wasteful,
>>    given that the answer is static after boot.
>>
>> Move opt_allow_unsafe into domain.c, as a better location for it to
>> live, and switch it to be a straight boolean.
>>
>> Use the new cpu_bug_* infrastructure to precompute erratum 121 during
>> boot, rather than repeatedly at runtime.  Leave a comment beside the
>> check in arch_domain_create() to explain why we may refuse to boot
>> DomU's.
>>
>> Reflow the printed information for grep-ability, and fix them for
>> correctness and brevity.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/cpu/amd.c            | 26 ++++++++------------------
>>  xen/arch/x86/domain.c             | 19 +++++++++++++------
>>  xen/include/asm-x86/amd.h         |  5 -----
>>  xen/include/asm-x86/cpufeature.h  |  3 +++
>>  xen/include/asm-x86/cpufeatures.h |  2 ++
>>  xen/include/asm-x86/domain.h      |  2 ++
>>  6 files changed, 28 insertions(+), 29 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
>> index c3aa1f4..8089fb9 100644
>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -40,10 +40,6 @@ integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
>>  static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
>>  integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
>>  
>> -/* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
>> -s8 __read_mostly opt_allow_unsafe;
>> -boolean_param("allow_unsafe", opt_allow_unsafe);
>> -
>>  /* Signal whether the ACPI C1E quirk is required. */
>>  bool __read_mostly amd_acpi_c1e_quirk;
>>  
>> @@ -538,6 +534,14 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
>>  {
>>      uint64_t val;
>>  
>> +    setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);
>> +
>> +    if ( c == &boot_cpu_data && !opt_allow_unsafe )
>> +        printk(KERN_WARNING
>> +               "*** Xen will not allow DomU creation on this CPU for security reasons ***\n"
>> +               KERN_WARNING
>> +               "*** Pass \"allow_unsafe\" if you trust all your guest kernels ***\n");
> Since you are switching the file to match Xen's coding style, I would
> use XENLOG_WARNING instead of KERN_WARNING.

Oops - right you are.
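
For v2 this would become something like (same message text, only the
log level changed):

    if ( c == &boot_cpu_data && !opt_allow_unsafe )
        printk(XENLOG_WARNING
               "*** Xen will not allow DomU creation on this CPU for security reasons ***\n"
               XENLOG_WARNING
               "*** Pass \"allow_unsafe\" if you trust all your guest kernels ***\n");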

>
>> +
>>      /*
>>       * Skip errata workarounds if we are virtualised.  We won't have
>>       * sufficient control of hardware to do anything useful.
>> @@ -784,20 +788,6 @@ static void init_amd(struct cpuinfo_x86 *c)
>>  
>>          amd_get_topology(c);
>>  
>> -	if (!cpu_has_amd_erratum(c, AMD_ERRATUM_121))
>> -		opt_allow_unsafe = 1;
>> -	else if (opt_allow_unsafe < 0)
>> -		panic("Xen will not boot on this CPU for security reasons"
>> -		      "Pass \"allow_unsafe\" if you're trusting all your"
>> -		      " (PV) guest kernels.\n");
>> -	else if (!opt_allow_unsafe && c == &boot_cpu_data)
>> -		printk(KERN_WARNING
>> -		       "*** Xen will not allow creation of DomU-s on"
>> -		       " this CPU for security reasons. ***\n"
>> -		       KERN_WARNING
>> -		       "*** Pass \"allow_unsafe\" if you're trusting"
>> -		       " all your (PV) guest kernels. ***\n");
>> -
>>  	/* AMD CPUs do not support SYSENTER outside of legacy mode. */
>>  	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
>>  
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index 32dc4253..beeb1d7 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -71,6 +71,10 @@
>>  
>>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>>  
>> +/* Permit creating domains on unsafe systems? */
>> +bool __read_mostly opt_allow_unsafe;
> I think you can make this static now, since you have removed the only
> external user which was amd.c.

There is still a user in amd.c, for the printk() which needs adjusting.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 15:43   ` Roger Pau Monné
@ 2018-12-28 16:11     ` Andrew Cooper
  2019-01-04 10:27       ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 16:11 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Pu Wen, Wei Liu, Jan Beulich, Xen-devel

On 28/12/2018 15:43, Roger Pau Monné wrote:
> On Fri, Dec 28, 2018 at 12:39:35PM +0000, Andrew Cooper wrote:
>> AMD hardware before Zen doesn't safe/restore the FPU error pointers
>> unless an unmasked FPU exception is pending.  Zen processors have a
>> feature bit indicating that this (mis)behaviour no longer exists.
>>
>> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
>> processors (being Zen derivatives) won't inherit this behaviour.
>>
>> While at it, fix a performance issue with the workaround, which I should
>> have noticed by now.  Looking at the FPU state slows the context switch
>> path down, as it is a moderately complicated unpredictable condition
>> which will evaluate to true for all 64bit OSes and most 32bit ones.
>>
>> Leave the sole condition being the easily-predictable
>> cpu_bug_fpu_err_ptr as the asm sequence is needed in ~100% of cases
>> these days on affected hardware.
> I'm not that familiar with the FPU registers, but I assume current
> code checks whether there's an exception pending before deciding
> whether the error pointers should be cleared?

The current code has two checks.  One, whether you're trying to load
floating point state, and the second is whether there is an unmasked
floating point exception.

All callers, except those out of the emulator, restore FP state, and the
statistical likelihood of there being an unmasked floating point
exception pending is low when the x87 FPU is in use, and 0 on 64bit
systems which use SSE2 as their base floating point support.
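
Concretely, the two conditions (beside the vendor check) in the current
xrstor() code are:

    /* 1) x87 state is actually being restored: */
    mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP
    /* 2) no unmasked x87 exception is pending: */
    !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f)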

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers
  2018-12-28 15:14   ` Roger Pau Monné
@ 2018-12-28 16:12     ` Andrew Cooper
  0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-28 16:12 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Wei Liu, Jan Beulich, Xen-devel

On 28/12/2018 15:14, Roger Pau Monné wrote:
> On Fri, Dec 28, 2018 at 12:39:31PM +0000, Andrew Cooper wrote:
>> This reduces the complexity of init_amd(), and collects related
>> workarounds together.
>>
>> It also offers us the opportunity to stop performing workarounds when
>> virtualised - doing so is wasteful, as it all involves poking MSRs which
>> no hypervisor will let us touch in practice.
>>
>> As amd.c has diverged almost entirely from its Linux heratage, start
>> switching it to using Xen hypervisor style.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> I've dropped various printk()s about working around some errata, because
>> their use was completely inconsistent.  They want to be uniform, whether
>> that is missing or present.
>> ---
>>  xen/arch/x86/cpu/amd.c | 266 ++++++++++++++++++++++++++++---------------------
>>  1 file changed, 153 insertions(+), 113 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
>> index c790416..c3aa1f4 100644
>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -534,22 +534,165 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>>  	ctxt_switch_levelling(NULL);
>>  }
>>  
>> +static void init_amd_k8(struct cpuinfo_x86 *c)
>> +{
>> +    uint64_t val;
>> +
>> +    /*
>> +     * Skip errata workarounds if we are virtualised.  We won't have
>> +     * sufficient control of hardware to do anything useful.
>> +     */
>> +    if ( !cpu_has_hypervisor )
> I think you want `if ( cpu_has_hypervisor ) return;` according to the
> comment (here and below).

/sigh - I spent ages double checking that I'd got that the right way
round...

Will fix in v2.
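
i.e. the guard becomes:

    /*
     * Skip errata workarounds if we are virtualised.  We won't have
     * sufficient control of hardware to do anything useful.
     */
    if ( cpu_has_hypervisor )
        return;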

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
  2018-12-28 15:43   ` Roger Pau Monné
@ 2018-12-29  9:52   ` Pu Wen
  2018-12-29 13:02     ` Andrew Cooper
  2019-01-02 12:56   ` Wei Liu
  2019-01-09 11:04   ` Jan Beulich
  3 siblings, 1 reply; 29+ messages in thread
From: Pu Wen @ 2018-12-29  9:52 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Wei Liu, Jan Beulich, Roger Pau Monné

On 2018/12/28 20:40, Andrew Cooper wrote:
> AMD hardware before Zen doesn't safe/restore the FPU error pointers
> unless an unmasked FPU exception is pending.  Zen processors have a
> feature bit indicating that this (mis)behaviour no longer exists.
> 
> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
> processors (being Zen derivatives) won't inherit this behaviour.
...
> diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
> index 15edd5d..7ca5684 100644
> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -369,15 +369,13 @@ void xrstor(struct vcpu *v, uint64_t mask)
>       unsigned int faults, prev_faults;
>   
>       /*
> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>        * is pending. Clear the x87 state here by setting it to fixed
>        * values. The hypervisor data segment can be sometimes 0 and
>        * sometimes new user value. Both should be ok. Use the FPU saved
>        * data block as a safe address because it should be in L1.
>        */
> -    if ( (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) &&
> -         !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> +    if ( cpu_bug_fpu_err_ptr )
>           asm volatile ( "fnclex\n\t"        /* clear exceptions */
>                          "ffree %%st(7)\n\t" /* clear stack tag */
>                          "fildl %0"          /* load to clear state */

It seems that the functions fpu_fxrstor and fpu_fxsave have the same
problem, even though they are not called on the Hygon Dhyana platform.
Should they be fixed in the same way?

-- 
Regards,
Pu Wen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-29  9:52   ` Pu Wen
@ 2018-12-29 13:02     ` Andrew Cooper
  0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2018-12-29 13:02 UTC (permalink / raw)
  To: Pu Wen, Xen-devel; +Cc: Wei Liu, Jan Beulich, Roger Pau Monné

On 29/12/2018 09:52, Pu Wen wrote:
> On 2018/12/28 20:40, Andrew Cooper wrote:
>> AMD hardware before Zen doesn't safe/restore the FPU error pointers
>> unless an unmasked FPU exception is pending.  Zen processors have a
>> feature bit indicating that this (mis)behaviour no longer exists.
>>
>> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
>> processors (being Zen derivatives) won't inherit this behaviour.
> ...
>> diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
>> index 15edd5d..7ca5684 100644
>> --- a/xen/arch/x86/xstate.c
>> +++ b/xen/arch/x86/xstate.c
>> @@ -369,15 +369,13 @@ void xrstor(struct vcpu *v, uint64_t mask)
>>       unsigned int faults, prev_faults;
>>   
>>       /*
>> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
>> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>>        * is pending. Clear the x87 state here by setting it to fixed
>>        * values. The hypervisor data segment can be sometimes 0 and
>>        * sometimes new user value. Both should be ok. Use the FPU saved
>>        * data block as a safe address because it should be in L1.
>>        */
>> -    if ( (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) &&
>> -         !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f) &&
>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
>> +    if ( cpu_bug_fpu_err_ptr )
>>           asm volatile ( "fnclex\n\t"        /* clear exceptions */
>>                          "ffree %%st(7)\n\t" /* clear stack tag */
>>                          "fildl %0"          /* load to clear state */
> It seems that the functions fpu_fxrstor and fpu_fxsave have the same
> problem, even though they are not called on the Hygon Dhyana platform.
> Should they be fixed in the same way?

Oops yes - they should be fixed. They will get used if the user passes
"xsave=0" on the command line.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores
  2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
  2018-12-28 15:54   ` Roger Pau Monné
@ 2019-01-02  5:10   ` Tian, Kevin
  2019-01-03 11:01   ` Andrew Cooper
  2 siblings, 0 replies; 29+ messages in thread
From: Tian, Kevin @ 2019-01-02  5:10 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Friday, December 28, 2018 8:40 PM
> 
> c/s fd32dcfe4c "x86/vmx: Don't leak EFER.NXE into guest context" had an
> unintended consequence on Harpertown cores which, as it turns out, don't
> load MSR_EFER fully from the MSR Load List - on reentry to the guest,
> EFER.SCE is clear irrespective of the value in load list.
> 
> This, being catastrophic to 64bit guests, is far worse than the EFER.NXE
> leakage which was trying to be fixed.
> 
> Introduce cpu_bug_msr_ll_efer_sce to encapsulate this partial revert.
> Avoid adding MSR_EFER to the Load Lists on impacted hardware, and
> reintroduce the logic to use the guests EFER.SCE setting.
> 
> In the common case of running 64bit HVM guests, these extra adjustments
> to EFER should only be hit during guest boot.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word
  2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
  2018-12-28 15:18   ` Roger Pau Monné
@ 2019-01-02 12:48   ` Wei Liu
  2019-01-08 16:20   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Wei Liu @ 2019-01-02 12:48 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:32PM +0000, Andrew Cooper wrote:
> Future changes are going to want to use cpu_bug_* in a mannor similar to

mannor -> manner.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG
  2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
  2018-12-28 15:36   ` Roger Pau Monné
@ 2019-01-02 12:55   ` Wei Liu
  2019-01-09 10:32   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Wei Liu @ 2019-01-02 12:55 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Pu Wen, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:34PM +0000, Andrew Cooper wrote:
> AMD processors don't clear the base or limit fields when loading a NULL
> segment, and Hygon processors inherit this behaviour.
> 
> Express the logic in terms of cpu_bug_null_seg, and rearrange
> preload_segment() have the more predictable condition first, not
> reference AMD specifically.
> 
> Tweak the inline ASM, as `mov %sreg` can be encoded with a memory
> operand.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


Reviewed-by: Wei Liu <wei.liu2@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
  2018-12-28 15:43   ` Roger Pau Monné
  2018-12-29  9:52   ` Pu Wen
@ 2019-01-02 12:56   ` Wei Liu
  2019-01-09 11:04   ` Jan Beulich
  3 siblings, 0 replies; 29+ messages in thread
From: Wei Liu @ 2019-01-02 12:56 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Pu Wen, Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Fri, Dec 28, 2018 at 12:39:35PM +0000, Andrew Cooper wrote:
> AMD hardware before Zen doesn't safe/restore the FPU error pointers

safe -> save.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores
  2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
  2018-12-28 15:54   ` Roger Pau Monné
  2019-01-02  5:10   ` Tian, Kevin
@ 2019-01-03 11:01   ` Andrew Cooper
  2 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2019-01-03 11:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Jan Beulich, Roger Pau Monné

On 28/12/2018 12:39, Andrew Cooper wrote:
> c/s fd32dcfe4c "x86/vmx: Don't leak EFER.NXE into guest context" had an
> unintended consequence on Harpertown cores which, as it turns out, don't
> load MSR_EFER fully from the MSR Load List - on reentry to the guest,
> EFER.SCE is clear irrespective of the value in load list.
>
> This, being catastrophic to 64bit guests, is far worse than the EFER.NXE
> leakage which was trying to be fixed.
>
> Introduce cpu_bug_msr_ll_efer_sce to encapsulate this partial revert.
> Avoid adding MSR_EFER to the Load Lists on impacted hardware, and
> reintroduce the logic to use the guests EFER.SCE setting.
>
> In the common case of running 64bit HVM guests, these extra adjustments
> to EFER should only be hit during guest boot.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
>
> This is RFC at the moment, because the test lab is full at the moment
> and I don't have a Harpertown CPU to hand.  I'm fairly sure the change
> is complete and will test when it becomes available, but I don't expect
> to make any code changes.

Sadly testing says no.  I'll try and figure out what is going on here.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 16:11     ` Andrew Cooper
@ 2019-01-04 10:27       ` Jan Beulich
  0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-04 10:27 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Pu Wen, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 17:11, <andrew.cooper3@citrix.com> wrote:
> On 28/12/2018 15:43, Roger Pau Monné wrote:
>> On Fri, Dec 28, 2018 at 12:39:35PM +0000, Andrew Cooper wrote:
>>> AMD hardware before Zen doesn't safe/restore the FPU error pointers
>>> unless an unmasked FPU exception is pending.  Zen processors have a
>>> feature bit indicating that this (mis)behaviour no longer exists.
>>>
>>> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
>>> processors (being Zen derivatives) won't inherit this behaviour.
>>>
>>> While at it, fix a performance issue with the workaround, which I should
>>> have noticed by now.  Looking at the FPU state slows the context switch
>>> path down, as it is a moderately complicated unpredictable condition
>>> which will evaluate to true for all 64bit OSes and most 32bit ones.
>>>
>>> Leave the sole condition being the easily-predictable
>>> cpu_bug_fpu_err_ptr as the asm sequence is needed in ~100% of cases
>>> these days on affected hardware.
>> I'm not that familiar with the FPU registers, but I assume current
>> code checks whether there's an exception pending before deciding
>> whether the error pointers should be cleared?
> 
> The current code has two checks.  One, whether you're trying to load
> floating point state, and the second is whether there is an unmasked
> floating point exception.
> 
> All callers, except those out of the emulator, restore FP state, and the
> statistical likelihood of there being an unmasked floating point
> exception pending is low when the x87 FPU is in use, and 0 on 64bit
> systems which use SSE2 as their base floating point support.

Well, not exactly zero, but even lower: 80-bit math (i.e. long double
plus the necessary compiler option) requires use of the x87 FPU even
in 64-bit mode.
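
E.g. (illustration only - on x86-64 this still compiles to x87
instructions rather than SSE2):

    long double scale(long double x) { return x * 1.5L; }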

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers
  2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
  2018-12-28 15:14   ` Roger Pau Monné
@ 2019-01-08 16:15   ` Jan Beulich
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-08 16:15 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 13:39, <andrew.cooper3@citrix.com> wrote:
> This reduces the complexity of init_amd(), and collects related
> workarounds together.
> 
> It also offers us the opportunity to stop performing workarounds when
> virtualised - doing so is wasteful, as it all involves poking MSRs which
> no hypervisor will let us touch in practice.

No _current_ hypervisor - perhaps. But a really good one might
emulate at least some of them.

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -534,22 +534,165 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>  	ctxt_switch_levelling(NULL);
>  }
>  
> +static void init_amd_k8(struct cpuinfo_x86 *c)
> +{
> +    uint64_t val;
> +
> +    /*
> +     * Skip errata workarounds if we are virtualised.  We won't have
> +     * sufficient control of hardware to do anything useful.
> +     */
> +    if ( !cpu_has_hypervisor )
> +        return;

Note how you then also skip ...

> +    /*
> +     * Disable TLB flush filter by setting HWCR.FFDIS bit 6
> +     *
> +     * Erratum 63 for SH-B3 steppings
> +     * Erratum 122 for all steppings (F+ have it disabled by default)
> +     */
> +    rdmsrl(MSR_K7_HWCR, val);
> +    if ( !(val & (1u << 6)) )
> +        wrmsrl(MSR_K7_HWCR, val | (1u << 6));
> +
> +    /*
> +     * Some BIOSes incorrectly force LAHF_LM, but only revisions D and later
> +     * actually support it.
> +     *
> +     * AMD Erratum #110, docId: 25759.
> +     */
> +    if ( c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM) )
> +    {
> +        unsigned int l, h;
> +
> +        __clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);

... this. If we expect lower level hypervisors to have fixed this already, we
should imo panic() here (and perhaps in further places) if we still find it wrong
(or what we consider wrong).
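
E.g. (sketch only, message text merely illustrative):

    if ( cpu_has_hypervisor &&
         c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM) )
        panic("Hypervisor did not hide bogus LAHF_LM\n");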

> +static void init_amd_k10(struct cpuinfo_x86 *c)
> +{
> +    uint64_t val;
> +
> +    /* Pointless to use MWAIT on Family10 as it does not deep sleep. */
> +    __clear_bit(X86_FEATURE_MONITOR, c->x86_capability);

See how you have this ahead of the virtualization check here?

> +static void init_amd_bulldozer(struct cpuinfo_x86 *c) /* Fam 15h */
> +{
> +    uint64_t val;
> +
> +    /*
> +     * Skip errata workarounds if we are virtualised.  We won't have
> +     * sufficient control of hardware to do anything useful.
> +     */
> +    if ( !cpu_has_hypervisor )
> +        return;
> +
> +    /* re-enable TopologyExtensions if switched off by BIOS */
> +    if ( c->x86_model >= 0x10 && c->x86_model <= 0x1f &&
> +         !cpu_has(c, X86_FEATURE_TOPOEXT) &&
> +         !rdmsr_safe(MSR_K8_EXT_FEATURE_MASK, val) )
> +    {
> +        __set_bit(X86_FEATURE_TOPOEXT, c->x86_capability);
> +        wrmsr_safe(MSR_K8_EXT_FEATURE_MASK, val | (1ull << 54));
> +    }

This is one of the cases where I'd expect a good hypervisor to permit control.
Independent of this I think now would be a good opportunity to __set_bit()
only if WRMSR actually managed to flip the bit.
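
I.e. something like (assuming wrmsr_safe()'s zero-on-success return is
good enough as an indication of the bit having been flipped):

    if ( c->x86_model >= 0x10 && c->x86_model <= 0x1f &&
         !cpu_has(c, X86_FEATURE_TOPOEXT) &&
         !rdmsr_safe(MSR_K8_EXT_FEATURE_MASK, val) &&
         !wrmsr_safe(MSR_K8_EXT_FEATURE_MASK, val | (1ull << 54)) )
        __set_bit(X86_FEATURE_TOPOEXT, c->x86_capability);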

> +    /*
> @@ -694,73 +798,9 @@ static void init_amd(struct cpuinfo_x86 *c)
>  		       "*** Pass \"allow_unsafe\" if you're trusting"
>  		       " all your (PV) guest kernels. ***\n");
>  
> -	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> -		if (c == &boot_cpu_data) {
> -			l = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> -			h = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> -			if ((l & 0x1f) | (h & 0x1))
> -				printk(KERN_WARNING
> -				       "Applying workaround for erratum 792: %s%s%s\n",
> -				       (l & 0x1f) ? "clearing D18F3x58[4:0]" : "",
> -				       ((l & 0x1f) && (h & 0x1)) ? " and " : "",
> -				       (h & 0x1) ? "clearing D18F3x5C[0]" : "");
> -
> -			if (l & 0x1f)
> -				pci_conf_write32(0, 0, 0x18, 0x3, 0x58,
> -						 l & ~0x1f);
> -
> -			if (h & 0x1)
> -				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c,
> -						 h & ~0x1);
> -		}
> -
> -		rdmsrl(MSR_AMD64_LS_CFG, value);
> -		if (!(value & (1 << 15))) {
> -			static bool_t warned;
> -
> -			if (c == &boot_cpu_data || opt_cpu_info ||
> -			    !test_and_set_bool(warned))
> -				printk(KERN_WARNING
> -				       "CPU%u: Applying workaround for erratum 793\n",
> -				       smp_processor_id());
> -			wrmsrl(MSR_AMD64_LS_CFG, value | (1 << 15));
> -		}
> -	} else if (c->x86 == 0x12) {
> -		rdmsrl(MSR_AMD64_DE_CFG, value);
> -		if (!(value & (1U << 31))) {
> -			static bool warned;
> -
> -			if (c == &boot_cpu_data || opt_cpu_info ||
> -			    !test_and_set_bool(warned))
> -				printk(KERN_WARNING
> -				       "CPU%u: Applying workaround for erratum 665\n",
> -				       smp_processor_id());

While I agree with your consistency argument, messages like this one are meant to
hint at an outdated BIOS, which I think is helpful, and hence I'm not convinced they
should be dropped despite similar messages missing elsewhere.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word
  2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
  2018-12-28 15:18   ` Roger Pau Monné
  2019-01-02 12:48   ` Wei Liu
@ 2019-01-08 16:20   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-08 16:20 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 13:39, <andrew.cooper3@citrix.com> wrote:
> Future changes are going to want to use cpu_bug_* in a mannor similar to
> Linux.  Introduce one bug word, and generalise the calculation of
> NCAPINTS.

As said elsewhere, unless for the purpose of alternatives patching
I'm not convinced of the benefits of this over simple boolean
variables.
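
I.e. plain flags along the lines of the existing one in amd.c (name just
illustrative):

    bool __read_mostly amd_erratum_121_present;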

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely
  2018-12-28 12:39 ` [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely Andrew Cooper
  2018-12-28 15:26   ` Roger Pau Monné
@ 2019-01-09 10:16   ` Jan Beulich
  1 sibling, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-09 10:16 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 13:39, <andrew.cooper3@citrix.com> wrote:
> There are multiple problems:
> 
>  * The opt_allow_unsafe < 0 logic is dead since 2012 (c/s 0c7a6966511
>    "x86-64: refine the XSA-9 fix").

Not really, no, especially since said commit only introduced it. Please
pay attention to the description saying "by means of a single line
change" as well as "don't default to boot denial". The thing I agree
was not possible (no idea how I overlooked it) was to set the
value to -1 without it defaulting to that.

>  * Given that opt_allow_unsafe was deliberately intended not to be
>    specific to #121 alone, setting it to true for the not-affected case
>    will cause a security issue if a second use of this option ever
>    appears.
>  * Calling cpu_has_amd_erratum() on every domain creation is wasteful,
>    given that the answer is static after boot.

Hardly a big performance impact, I would say.

> Move opt_allow_unsafe into domain.c, as a better location for it to
> live, and switch it to be a straight boolean.

So you think boot denial is not useful, and hence not worthwhile to
retain (and make properly command line controllable)?

> @@ -538,6 +534,14 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
>  {
>      uint64_t val;
>  
> +    setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);
> +        printk(KERN_WARNING
> +               "*** Xen will not allow DomU creation on this CPU for security reasons ***\n"
> +               KERN_WARNING
> +               "*** Pass \"allow_unsafe\" if you trust all your guest kernels ***\n");

This would affect all of Fam0F, while models 0x40 and higher aren't
affected from all I know.
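
I.e. perhaps (assuming the model 0x40 cutoff is indeed the right one):

    if ( c->x86_model < 0x40 )
        setup_force_cpu_cap(X86_BUG_AMD_ERRATUM_121);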

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG
  2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
  2018-12-28 15:36   ` Roger Pau Monné
  2019-01-02 12:55   ` Wei Liu
@ 2019-01-09 10:32   ` Jan Beulich
  2 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-09 10:32 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Pu Wen, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 13:39, <andrew.cooper3@citrix.com> wrote:
> AMD processors don't clear the base or limit fields when loading a NULL
> segment, and Hygon processors inherit this behaviour.
> 
> Express the logic in terms of cpu_bug_null_seg,

If this behavior was considered a bug, AMD surely would have fixed
it by now. It's a quirk at best, but personally I would call it just an
implementation detail. Hence cpu_bug_ as a prefix is simply
inappropriate.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1307,16 +1307,16 @@ arch_do_vcpu_op(
>  }
>  
>  /*
> - * Loading a nul selector does not clear bases and limits on AMD CPUs. Be on
> - * the safe side and re-initialize both to flat segment values before loading
> - * a nul selector.
> + * Loading a NULL selector doesn't always clear bases and limits.  Be on the
> + * safe side and re-initialize both to flat segment values before loading a
> + * NULL selector.

To be honest I dislike the abuse of NULL in cases like this one: in
C code NULL should stand for just the one meaning the language
assigns to it. Hence "nul" was the better term (or, if you dislike
its spelling, "null" would still be preferable).

>   */
> -#define preload_segment(seg, value) do {              \
> -    if ( !((value) & ~3) &&                           \
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD ) \
> -        asm volatile ( "movl %k0, %%" #seg            \
> -                       :: "r" (FLAT_USER_DS32) );     \
> -} while ( false )
> +#define preload_segment(seg, value)                     \
> +    do {                                                \
> +        if ( cpu_bug_null_seg && !((value) & ~3) )      \
> +            asm volatile ( "mov %k0, %%" #seg           \
> +                           :: "rm" (FLAT_USER_DS32) );  \

As you say in the description, "mov %sreg" allows for a memory
operand, so from an abstract perspective the change is fine. It
won't have any practical effect though (and hence be potentially
misleading), as "m" can't be combined with a literal number as
value (you may want to try out removing the r from the
constraint) - for that to work the compiler would have to allocate
an unnamed stack variable, which it doesn't do (instead you get
"memory input <N> is not directly addressable").

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware
  2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
                     ` (2 preceding siblings ...)
  2019-01-02 12:56   ` Wei Liu
@ 2019-01-09 11:04   ` Jan Beulich
  3 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2019-01-09 11:04 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Pu Wen, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.12.18 at 13:39, <andrew.cooper3@citrix.com> wrote:
> AMD hardware before Zen doesn't safe/restore the FPU error pointers
> unless an unmasked FPU exception is pending.  Zen processors have a
> feature bit indicating that this (mis)behaviour no longer exists.
> 
> Express the common logic in terms of cpu_bug_fpu_err_ptr as Hygon
> processors (being Zen derivatives) won't inherit this behaviour.

While they've decided to change the behavior, I still don't think this
was to be considered a bug, as it was well documented for a long
time (albeit perhaps not from the beginning). I again question the
validity of cpu_bug_ as a prefix here.

> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -369,15 +369,13 @@ void xrstor(struct vcpu *v, uint64_t mask)
>      unsigned int faults, prev_faults;
>  
>      /*
> -     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
> +     * Some CPUs don't save/restore FDP/FIP/FOP unless an exception
>       * is pending. Clear the x87 state here by setting it to fixed
>       * values. The hypervisor data segment can be sometimes 0 and
>       * sometimes new user value. Both should be ok. Use the FPU saved
>       * data block as a safe address because it should be in L1.
>       */
> -    if ( (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) &&
> -         !(ptr->fpu_sse.fsw & ~ptr->fpu_sse.fcw & 0x003f) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
> +    if ( cpu_bug_fpu_err_ptr )
>          asm volatile ( "fnclex\n\t"        /* clear exceptions */
>                         "ffree %%st(7)\n\t" /* clear stack tag */
>                         "fildl %0"          /* load to clear state */

This change will result in clobbering FPU state when X86_XCR0_FP
is not set in "mask & ptr->xsave_hdr.xstate_bv". I'm anyway
unconvinced that dropping the conditions is a good thing, even
less so in an unrelated patch (a dedicated patch would otherwise
also adjust the similar FXRSTOR code).
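
I.e. at a minimum the X86_XCR0_FP qualification wants to stay, e.g.:

    if ( cpu_bug_fpu_err_ptr &&
         (mask & ptr->xsave_hdr.xstate_bv & X86_XCR0_FP) )
        /* fnclex / ffree / fildl sequence as before */;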

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -237,6 +237,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
>  
>  /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
>  XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
> +XEN_CPUFEATURE(XSAVEERRPTR,   8*32+ 2) /*A  (F)XSAVE Error pointers always updated. */

Any particular reason why you don't derive from AMD's name
(RstrFpErrPtrs)? Also if their documentation is to be trusted,
then the comment is wrongly mentioning FXSAVE; it's
questionable whether mentioning XSAVE but not XRSTOR is a
good thing.
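
E.g. (name derived from AMD's, comment deliberately kept
instruction-agnostic; just a suggestion):

    XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  FPU error pointers always saved/restored */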

As a side note - this is an example of a CPUID bit which would
better not require white listing for guest exposure.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread

Thread overview: 29+ messages
2018-12-28 12:39 [PATCH 0/6] x86: Improvements to handling of various CPU bugs Andrew Cooper
2018-12-28 12:39 ` [PATCH 1/6] x86/AMD Split init_amd() into per-uarch helpers Andrew Cooper
2018-12-28 15:14   ` Roger Pau Monné
2018-12-28 16:12     ` Andrew Cooper
2019-01-08 16:15   ` Jan Beulich
2018-12-28 12:39 ` [PATCH 2/6] x86/feature: Generalise synth and introduce a bug word Andrew Cooper
2018-12-28 15:18   ` Roger Pau Monné
2019-01-02 12:48   ` Wei Liu
2019-01-08 16:20   ` Jan Beulich
2018-12-28 12:39 ` [PATCH 3/6] x86/AMD: Rework XSA-9 / Erratum 121 handling entirely Andrew Cooper
2018-12-28 15:26   ` Roger Pau Monné
2018-12-28 16:06     ` Andrew Cooper
2019-01-09 10:16   ` Jan Beulich
2018-12-28 12:39 ` [PATCH 4/6] x86/AMD: Introduce and use X86_BUG_NULL_SEG Andrew Cooper
2018-12-28 15:36   ` Roger Pau Monné
2019-01-02 12:55   ` Wei Liu
2019-01-09 10:32   ` Jan Beulich
2018-12-28 12:39 ` [PATCH 5/6] x86/AMD: Fix handling of FPU pointer on Zen hardware Andrew Cooper
2018-12-28 15:43   ` Roger Pau Monné
2018-12-28 16:11     ` Andrew Cooper
2019-01-04 10:27       ` Jan Beulich
2018-12-29  9:52   ` Pu Wen
2018-12-29 13:02     ` Andrew Cooper
2019-01-02 12:56   ` Wei Liu
2019-01-09 11:04   ` Jan Beulich
2018-12-28 12:39 ` [PATCH 6/6] x86/VT-x: Fix 64bit HVM guests on Harpertown cores Andrew Cooper
2018-12-28 15:54   ` Roger Pau Monné
2019-01-02  5:10   ` Tian, Kevin
2019-01-03 11:01   ` Andrew Cooper
