* [PATCH v1] AMD SSB bits.
@ 2018-06-01 14:59 Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO Konrad Rzeszutek Wilk
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-01 14:59 UTC (permalink / raw)
  To: linux-kernel, kvm, x86, tglx, andrew.cooper3; +Cc: Konrad Rzeszutek Wilk

Hi,

I was reading the AMD whitepaper on SSBD and noticed that they have added
two new bits in CPUID 8000_0008 EBX:
 1) Bit[26] - similar to Intel's SSB_NO: the speculative store bypass
    mitigation is not needed anymore.
 2) Bit[24] - use the SPEC_CTRL MSR (0x48) instead of the VIRT SPEC_CTRL
    MSR (0xC001_011f).

See 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
(A copy of this document is available at
    https://bugzilla.kernel.org/show_bug.cgi?id=199889)

Since I don't have the hardware (I'm not even sure AMD has produced it yet),
I ended up cobbling together a DEBUG patch - the last one, which is, well,
debug only (see below).
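
For the curious, a minimal user-space sketch to check the two new bits
(not part of the patches; it assumes GCC's <cpuid.h>, with the bit
positions taken from the whitepaper above):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Extended leaf 8000_0008; __get_cpuid() validates the max leaf. */
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
                return 1;

        printf("AMD_SSBD   (EBX[24]): %u\n", (ebx >> 24) & 1);
        printf("AMD_SSB_NO (EBX[26]): %u\n", (ebx >> 26) & 1);
        return 0;
}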

QEMU patches will be sent in another patchset.

 arch/x86/include/asm/cpufeatures.h |  2 ++
 arch/x86/kernel/cpu/bugs.c         | 13 +++++--------
 arch/x86/kernel/cpu/common.c       |  9 ++++++++-
 arch/x86/kvm/cpuid.c               | 10 ++++++++--
 arch/x86/kvm/svm.c                 |  8 +++++---
 5 files changed, 28 insertions(+), 14 deletions(-)
Konrad Rzeszutek Wilk (3):
      x86/bugs: Add AMD's variant of SSB_NO.
      x86/bugs: Add AMD's SPEC_CTRL MSR usage
      x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features


From 3d120f90731dae7e9a6f0c941c8bc228ed346baa Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 31 May 2018 20:56:08 -0400
Subject: [PATCH] DEBUG HACK DEBUG

Expose the two new bits to the guest depending on the module
parameters.

Also show the otherwise hidden flags in /proc/cpuinfo.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/cpufeatures.h | 14 +++++++-------
 arch/x86/kvm/cpuid.c               | 12 ++++++++++++
 arch/x86/kvm/svm.c                 | 13 -------------
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 5701f5cecd31..05b74564089a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -206,15 +206,15 @@
 #define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
-#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /*  MSR SPEC_CTRL is implemented */
 #define X86_FEATURE_SSBD		( 7*32+17) /* Speculative Store Bypass Disable */
 #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
 #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
 #define X86_FEATURE_SEV			( 7*32+20) /* AMD Secure Encrypted Virtualization */
 #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
-#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
-#define X86_FEATURE_LS_CFG_SSBD		( 7*32+24)  /* "" AMD SSBD implementation via LS_CFG MSR */
+#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /*  Disable Speculative Store Bypass. */
+#define X86_FEATURE_LS_CFG_SSBD		( 7*32+24)  /*  AMD SSBD implementation via LS_CFG MSR */
 #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
 #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
 #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
@@ -279,12 +279,12 @@
 #define X86_FEATURE_CLZERO		(13*32+ 0) /* CLZERO instruction */
 #define X86_FEATURE_IRPERF		(13*32+ 1) /* Instructions Retired Count */
 #define X86_FEATURE_XSAVEERPTR		(13*32+ 2) /* Always save/restore FP error pointers */
-#define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
+#define X86_FEATURE_AMD_IBPB		(13*32+12) /*  Indirect Branch Prediction Barrier */
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
-#define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
-#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
+#define X86_FEATURE_AMD_STIBP		(13*32+15) /*  Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_SSBD		(13*32+24) /*  Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
-#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /*  Speculative Store Bypass is fixed in hardware. */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f4f30d0c25c4..67c5d4eb32ac 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -27,6 +27,12 @@
 #include "trace.h"
 #include "pmu.h"
 
+static bool __read_mostly expose_amd_ssb_no = 0;
+module_param(expose_amd_ssb_no, bool, S_IRUGO | S_IWUSR);
+
+static bool __read_mostly expose_amd_spec_ctrl = 0;
+module_param(expose_amd_spec_ctrl, bool, S_IRUGO | S_IWUSR);
+
 static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 {
 	int feature_bit = 0;
@@ -672,6 +678,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
 		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			entry->ebx |= F(VIRT_SSBD);
+
+		if (expose_amd_spec_ctrl && !boot_cpu_has(X86_FEATURE_HYPERVISOR))
+			entry->ebx |= F(AMD_SSBD);
+
+		if (expose_amd_ssb_no && !boot_cpu_has(X86_FEATURE_HYPERVISOR))
+			entry->ebx |= F(AMD_SSB_NO);
 		break;
 	}
 	case 0x80000019:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 950ec50f77c3..a4c71b37df74 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -288,7 +288,6 @@ static const struct svm_direct_access_msrs {
 	{ .index = MSR_CSTAR,				.always = true  },
 	{ .index = MSR_SYSCALL_MASK,			.always = true  },
 #endif
-	{ .index = MSR_IA32_SPEC_CTRL,			.always = false },
 	{ .index = MSR_IA32_PRED_CMD,			.always = false },
 	{ .index = MSR_IA32_LASTBRANCHFROMIP,		.always = false },
 	{ .index = MSR_IA32_LASTBRANCHTOIP,		.always = false },
@@ -4231,18 +4230,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		if (!data)
 			break;
 
-		/*
-		 * For non-nested:
-		 * When it's written (to non-zero) for the first time, pass
-		 * it through.
-		 *
-		 * For nested:
-		 * The handling of the MSR bitmap for L2 guests is done in
-		 * nested_svm_vmrun_msrpm.
-		 * We update the L1 MSR bit as well since it will end up
-		 * touching the MSR anyway now.
-		 */
-		set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
 		break;
 	case MSR_IA32_PRED_CMD:
 		if (!msr->host_initiated &&
-- 
2.13.4

* [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO.
  2018-06-01 14:59 [PATCH v1] AMD SSB bits Konrad Rzeszutek Wilk
@ 2018-06-01 14:59 ` Konrad Rzeszutek Wilk
  2018-06-06 12:15   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage Konrad Rzeszutek Wilk
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-01 14:59 UTC (permalink / raw)
  To: linux-kernel, kvm, x86, tglx, andrew.cooper3
  Cc: Konrad Rzeszutek Wilk, Ingo Molnar, H. Peter Anvin,
	Borislav Petkov, David Woodhouse, Tom Lendacky,
	Janakarajan Natarajan, Andy Lutomirski

The AMD document outlining the SSBD handling,
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf,
mentions that CPUID 8000_0008.EBX[26], when set, means that the
speculative store bypass disable mitigation is no longer needed.

A copy of this document is available at
    https://bugzilla.kernel.org/show_bug.cgi?id=199889

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/common.c       | 3 ++-
 arch/x86/kvm/cpuid.c               | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index fb00a2fca990..b6d7ce32927a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -283,6 +283,7 @@
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
+#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 38276f58d3bf..494735cf63f5 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -972,7 +972,8 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
 
 	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
-	   !(ia32_cap & ARCH_CAP_SSB_NO))
+	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (x86_match_cpu(cpu_no_meltdown))
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 92bf2f2e7cdd..132f8a58692e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
-- 
2.13.4

* [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-01 14:59 [PATCH v1] AMD SSB bits Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO Konrad Rzeszutek Wilk
@ 2018-06-01 14:59 ` Konrad Rzeszutek Wilk
  2018-06-02  1:04   ` Tom Lendacky
  2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Konrad Rzeszutek Wilk
  2018-06-05 13:23 ` [PATCH v1] AMD SSB bits Tom Lendacky
  3 siblings, 2 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-01 14:59 UTC (permalink / raw)
  To: linux-kernel, kvm, x86, tglx, andrew.cooper3
  Cc: Konrad Rzeszutek Wilk, Ingo Molnar, H. Peter Anvin,
	Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, David Woodhouse, Tom Lendacky,
	Janakarajan Natarajan, Kees Cook, KarimAllah Ahmed,
	Andy Lutomirski

The AMD document outlining the SSBD handling
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
mentions that if CPUID 8000_0008.EBX[24] is set we should be using
the SPEC_CTRL MSR (0x48) over the VIRT SPEC_CTRL MSR (0xC001_011f)
for speculative store bypass disable.

This in effect means we should clear the X86_FEATURE_VIRT_SSBD
flag so that we would prefer the SPEC_CTRL MSR.

See the document titled:
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf

A copy of this document is available at
   https://bugzilla.kernel.org/show_bug.cgi?id=199889

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/kernel/cpu/bugs.c         | 12 +++++++-----
 arch/x86/kernel/cpu/common.c       |  6 ++++++
 arch/x86/kvm/cpuid.c               | 10 ++++++++--
 arch/x86/kvm/svm.c                 |  8 +++++---
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b6d7ce32927a..5701f5cecd31 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -282,6 +282,7 @@
 #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7416fc206b4a..6bea81855cdd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -529,18 +529,20 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 	if (mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
-		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD uses
-		 * a completely different MSR and bit dependent on family.
+		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
+		 * use a completely different MSR and bit dependent on family.
 		 */
 		switch (boot_cpu_data.x86_vendor) {
 		case X86_VENDOR_INTEL:
+		case X86_VENDOR_AMD:
+			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+				x86_amd_ssb_disable();
+				break;
+			}
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			break;
-		case X86_VENDOR_AMD:
-			x86_amd_ssb_disable();
-			break;
 		}
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 494735cf63f5..d08a29bd0385 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -783,6 +783,12 @@ static void init_speculation_control(struct cpuinfo_x86 *c)
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
 	}
+
+	if (cpu_has(c, X86_FEATURE_AMD_SSBD)) {
+		set_cpu_cap(c, X86_FEATURE_SSBD);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
+		clear_cpu_cap(c, X86_FEATURE_VIRT_SSBD);
+	}
 }
 
 void get_cpu_cap(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 132f8a58692e..f4f30d0c25c4 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
+		F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
@@ -664,7 +665,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			entry->ebx |= F(VIRT_SSBD);
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
-		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+		/*
+		 * The preference is to use SPEC CTRL MSR instead of the
+		 * VIRT_SPEC MSR.
+		 */
+		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			entry->ebx |= F(VIRT_SSBD);
 		break;
 	}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 26110c202b19..950ec50f77c3 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		msr_info->data = svm->spec_ctrl;
@@ -4217,11 +4218,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
-		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
+		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
 			return 1;
 
 		svm->spec_ctrl = data;
-- 
2.13.4

* [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-01 14:59 [PATCH v1] AMD SSB bits Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO Konrad Rzeszutek Wilk
  2018-06-01 14:59 ` [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage Konrad Rzeszutek Wilk
@ 2018-06-01 14:59 ` Konrad Rzeszutek Wilk
  2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
  2018-06-08 21:30   ` [PATCH v1 3/3] " Tom Lendacky
  2018-06-05 13:23 ` [PATCH v1] AMD SSB bits Tom Lendacky
  3 siblings, 2 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-01 14:59 UTC (permalink / raw)
  To: linux-kernel, kvm, x86, tglx, andrew.cooper3
  Cc: Konrad Rzeszutek Wilk, Ingo Molnar, H. Peter Anvin,
	Borislav Petkov, David Woodhouse, Kees Cook, KarimAllah Ahmed

Both AMD and Intel can use the SPEC_CTRL MSR for SSBD.

However AMD also has two other ways of doing it, neither of which
uses the SPEC_CTRL MSR.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: KarimAllah Ahmed <karahmed@amazon.de>
---
 arch/x86/kernel/cpu/bugs.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6bea81855cdd..cd0fda1fff6d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -532,17 +532,12 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
 		 * use a completely different MSR and bit dependent on family.
 		 */
-		switch (boot_cpu_data.x86_vendor) {
-		case X86_VENDOR_INTEL:
-		case X86_VENDOR_AMD:
-			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
-				x86_amd_ssb_disable();
-				break;
-			}
+		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+			x86_amd_ssb_disable();
+		else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-			break;
 		}
 	}
 
-- 
2.13.4

* Re: [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-01 14:59 ` [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage Konrad Rzeszutek Wilk
@ 2018-06-02  1:04   ` Tom Lendacky
  2018-06-04 20:20     ` Konrad Rzeszutek Wilk
  2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 18+ messages in thread
From: Tom Lendacky @ 2018-06-02  1:04 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, linux-kernel, kvm, x86, tglx, andrew.cooper3
  Cc: Ingo Molnar, H. Peter Anvin, Paolo Bonzini,
	Radim Krčmář,
	Joerg Roedel, Borislav Petkov, David Woodhouse,
	Janakarajan Natarajan, Kees Cook, KarimAllah Ahmed,
	Andy Lutomirski

On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:

Hi Konrad,

Thanks for doing this.  It was on my to-do list to get this
support out after everything settled down.

Just some questions/comments below.

> The AMD document outlining the SSBD handling
> 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> mentions that if CPUID 8000_0008.EBX[24] is set we should be using
> the SPEC_CTRL MSR (0x48) over the VIRT SPEC_CTRL MSR (0xC001_011f)
> for speculative store bypass disable.
> 
> This in effect means we should clear the X86_FEATURE_VIRT_SSBD
> flag so that we would prefer the SPEC_CTRL MSR.
> 
> See the document titled:
> 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> 
> A copy of this document is available at
>    https://bugzilla.kernel.org/show_bug.cgi?id=199889
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: David Woodhouse <dwmw@amazon.co.uk>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: KarimAllah Ahmed <karahmed@amazon.de>
> Cc: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/include/asm/cpufeatures.h |  1 +
>  arch/x86/kernel/cpu/bugs.c         | 12 +++++++-----
>  arch/x86/kernel/cpu/common.c       |  6 ++++++
>  arch/x86/kvm/cpuid.c               | 10 ++++++++--
>  arch/x86/kvm/svm.c                 |  8 +++++---
>  5 files changed, 27 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index b6d7ce32927a..5701f5cecd31 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -282,6 +282,7 @@
>  #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
>  #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
> +#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
>  #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
>  #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
>  
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 7416fc206b4a..6bea81855cdd 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -529,18 +529,20 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
>  	if (mode == SPEC_STORE_BYPASS_DISABLE) {
>  		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
>  		/*
> -		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD uses
> -		 * a completely different MSR and bit dependent on family.
> +		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
> +		 * use a completely different MSR and bit dependent on family.
>  		 */
>  		switch (boot_cpu_data.x86_vendor) {
>  		case X86_VENDOR_INTEL:
> +		case X86_VENDOR_AMD:
> +			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> +				x86_amd_ssb_disable();
> +				break;
> +			}
>  			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
>  			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  			break;
> -		case X86_VENDOR_AMD:
> -			x86_amd_ssb_disable();
> -			break;
>  		}
>  	}
>  
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 494735cf63f5..d08a29bd0385 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -783,6 +783,12 @@ static void init_speculation_control(struct cpuinfo_x86 *c)
>  		set_cpu_cap(c, X86_FEATURE_STIBP);
>  		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
>  	}
> +
> +	if (cpu_has(c, X86_FEATURE_AMD_SSBD)) {
> +		set_cpu_cap(c, X86_FEATURE_SSBD);
> +		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
> +		clear_cpu_cap(c, X86_FEATURE_VIRT_SSBD);
> +	}
>  }
>  
>  void get_cpu_cap(struct cpuinfo_x86 *c)
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 132f8a58692e..f4f30d0c25c4 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -379,7 +379,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  
>  	/* cpuid 0x80000008.ebx */
>  	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> -		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
> +		F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
> +		F(AMD_SSB_NO);
>  
>  	/* cpuid 0xC0000001.edx */
>  	const u32 kvm_cpuid_C000_0001_edx_x86_features =
> @@ -664,7 +665,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  			entry->ebx |= F(VIRT_SSBD);
>  		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
>  		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
> -		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> +		/*
> +		 * The preference is to use SPEC CTRL MSR instead of the
> +		 * VIRT_SPEC MSR.
> +		 */
> +		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
> +		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  			entry->ebx |= F(VIRT_SSBD);
>  		break;
>  	}
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 26110c202b19..950ec50f77c3 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr_info->host_initiated &&
> -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))

Shouldn't the IBRS/SSBD check be an "or" check?  I don't think it's
necessarily true that IBRS and SSBD have to both be set.  Maybe something
like:

	if (!msr_info->host_initiated &&
	    !(guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) ||
	      guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))

Does that make sense?

>  			return 1;
>  
>  		msr_info->data = svm->spec_ctrl;
> @@ -4217,11 +4218,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr->host_initiated &&
> -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))

Same question as above.

Thanks,
Tom

>  			return 1;
>  
>  		/* The STIBP bit doesn't fault even if it's not advertised */
> -		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
> +		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
>  			return 1;
>  
>  		svm->spec_ctrl = data;
> 

* Re: [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-02  1:04   ` Tom Lendacky
@ 2018-06-04 20:20     ` Konrad Rzeszutek Wilk
  2018-06-04 20:43       ` Tom Lendacky
  0 siblings, 1 reply; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-04 20:20 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3, Ingo Molnar,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, David Woodhouse,
	Janakarajan Natarajan, Kees Cook, KarimAllah Ahmed,
	Andy Lutomirski

> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > index 26110c202b19..950ec50f77c3 100644
> > --- a/arch/x86/kvm/svm.c
> > +++ b/arch/x86/kvm/svm.c
> > @@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		break;
> >  	case MSR_IA32_SPEC_CTRL:
> >  		if (!msr_info->host_initiated &&
> > -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
> > +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
> > +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> 
> Shouldn't the IBRS/SSBD check be an "or" check?  I don't think it's
> necessarily true that IBRS and SSBD have to both be set.  Maybe something
> like:
> 
> 	if (!msr_info->host_initiated &&
> 	    !(guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) ||
> 	      guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> 
> Does that make sense?

The '!' on each of the CPUID checks combined with the '&&' makes this the same. See:


 AMD_IBRS set	|  AMD_SSBD set	| !AMD_IBRS && !AMD_SSBD | !(AMD_IBRS || AMD_SSBD)
	0	|	0	| 1 && 1 -> return 1	 | !(0) -> 1 -> return 1
	1	|	0	| 0 && 1, continue	 | !(1 || 0) -> continue
	1	|	1	| 0 && 0, continue	 | !(1 || 1) -> continue
	0	|	1	| 1 && 0, continue	 | !(0 || 1) -> continue

Meaning we will return 1 only if:
 the host has not initiated it, and
 the guest CPUID does not have the AMD_IBRS flag, and
 the guest CPUID does not have the AMD_SSBD flag.

I am fine modifying it the way you had in mind, but in the past the logic
was to use '!' and '&&', hence I stuck to that.
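
If in doubt, a quick standalone sketch confirms the De Morgan
equivalence for all four cases:

#include <assert.h>

int main(void)
{
        int ibrs, ssbd;

        for (ibrs = 0; ibrs <= 1; ibrs++)
                for (ssbd = 0; ssbd <= 1; ssbd++)
                        /* !A && !B is equivalent to !(A || B) */
                        assert((!ibrs && !ssbd) == !(ibrs || ssbd));
        return 0;
}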
> 
> >  			return 1;
> >  
> >  		msr_info->data = svm->spec_ctrl;
> > @@ -4217,11 +4218,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> >  		break;
> >  	case MSR_IA32_SPEC_CTRL:
> >  		if (!msr->host_initiated &&
> > -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
> > +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
> > +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> 
> Same question as above.
> 
> Thanks,
> Tom
> 
> >  			return 1;
> >  
> >  		/* The STIBP bit doesn't fault even if it's not advertised */
> > -		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
> > +		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
> >  			return 1;
> >  
> >  		svm->spec_ctrl = data;
> > 

* Re: [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-04 20:20     ` Konrad Rzeszutek Wilk
@ 2018-06-04 20:43       ` Tom Lendacky
  2018-06-04 20:54         ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 18+ messages in thread
From: Tom Lendacky @ 2018-06-04 20:43 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3, Ingo Molnar,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, David Woodhouse,
	Janakarajan Natarajan, Kees Cook, KarimAllah Ahmed,
	Andy Lutomirski

On 6/4/2018 3:20 PM, Konrad Rzeszutek Wilk wrote:
>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>> index 26110c202b19..950ec50f77c3 100644
>>> --- a/arch/x86/kvm/svm.c
>>> +++ b/arch/x86/kvm/svm.c
>>> @@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>  		break;
>>>  	case MSR_IA32_SPEC_CTRL:
>>>  		if (!msr_info->host_initiated &&
>>> -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
>>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
>>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
>>
>> Shouldn't the IBRS/SSBD check be an "or" check?  I don't think it's
>> necessarily true that IBRS and SSBD have to both be set.  Maybe something
>> like:
>>
>> 	if (!msr_info->host_initiated &&
>> 	    !(guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) ||
>> 	      guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
>>
>> Does that make sense?
> 
> The '!' on each of the CPUID checks combined with the '&&' makes this the same. See:

Doh!  Yes, I don't know what I was thinking.  Just the end of a long
week I guess.

> 
> 
>  AMD_IBRS set	|  AMD_SSBD set	| !AMD_IBRS && !AMD_SSBD | !(AMD_IBRS || AMD_SSBD)
> 	0	|	0	| 1 && 1 -> return 1	 | !(0) -> 1 -> return 1
> 	1	|	0	| 0 && 1, continue	 | !(1 || 0) -> continue
> 	1	|	1	| 0 && 0, continue	 | !(1 || 1) -> continue
> 	0	|	1	| 1 && 0, continue	 | !(0 || 1) -> continue
> 
> Meaning we will return 1 only if:
>  the host has not initiated it, and
>  the guest CPUID does not have the AMD_IBRS flag, and
>  the guest CPUID does not have the AMD_SSBD flag.
> 
> I am fine modifying it the way you had in mind, but in the past the logic
> was to use '!' and '&&', hence I stuck to that.

No reason to change, it's fine the way you have it.

Thanks,
Tom

>>
>>>  			return 1;
>>>  
>>>  		msr_info->data = svm->spec_ctrl;
>>> @@ -4217,11 +4218,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>>>  		break;
>>>  	case MSR_IA32_SPEC_CTRL:
>>>  		if (!msr->host_initiated &&
>>> -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
>>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
>>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
>>
>> Same question as above.
>>
>> Thanks,
>> Tom
>>
>>>  			return 1;
>>>  
>>>  		/* The STIBP bit doesn't fault even if it's not advertised */
>>> -		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
>>> +		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
>>>  			return 1;
>>>  
>>>  		svm->spec_ctrl = data;
>>>

* Re: [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-04 20:43       ` Tom Lendacky
@ 2018-06-04 20:54         ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-04 20:54 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3, Ingo Molnar,
	H. Peter Anvin, Paolo Bonzini, Radim Krčmář,
	Joerg Roedel, Borislav Petkov, David Woodhouse,
	Janakarajan Natarajan, Kees Cook, KarimAllah Ahmed,
	Andy Lutomirski

On Mon, Jun 04, 2018 at 03:43:17PM -0500, Tom Lendacky wrote:
> On 6/4/2018 3:20 PM, Konrad Rzeszutek Wilk wrote:
> >>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> >>> index 26110c202b19..950ec50f77c3 100644
> >>> --- a/arch/x86/kvm/svm.c
> >>> +++ b/arch/x86/kvm/svm.c
> >>> @@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>>  		break;
> >>>  	case MSR_IA32_SPEC_CTRL:
> >>>  		if (!msr_info->host_initiated &&
> >>> -		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
> >>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
> >>> +		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> >>
> >> Shouldn't the IBRS/SSBD check be an "or" check?  I don't think it's
> >> necessarily true that IBRS and SSBD have to both be set.  Maybe something
> >> like:
> >>
> >> 	if (!msr_info->host_initiated &&
> >> 	    !(guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) ||
> >> 	      guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> >>
> >> Does that make sense?
> > 
> > The '!' on each of the CPUID checks combined with the '&&' makes this the same. See:
> 
> Doh!  Yes, I don't know what I was thinking.  Just the end of a long
> week I guess.

<grins> I can imagine!
> 
> > 
> > 
> >  AMD_IBRS set	|  AMD_SSBD set	| !AMD_IBRS && !AMD_SSBD | !(AMD_IBRS || AMD_SSBD)
> > 	0	|	0	| 1 && 1 -> return 1	 | !(0) -> 1 -> return 1
> > 	1	|	0	| 0 && 1, continue	 | !(1 || 0) -> continue
> > 	1	|	1	| 0 && 0, continue	 | !(1 || 1) -> continue
> > 	0	|	1	| 1 && 0, continue	 | !(0 || 1) -> continue
> > 
> > Meaning we will return 1 only if:
> >  the host has not initiated it, and
> >  the guest CPUID does not have the AMD_IBRS flag, and
> >  the guest CPUID does not have the AMD_SSBD flag.
> > 
> > I am fine modifying it the way you had in mind, but in the past the logic
> > was to use '!' and '&&', hence I stuck to that.
> 
> No reason to change, it's fine the way you have it.

Excellent. Would you be OK giving it an Acked-by or such?

Thanks.

* Re: [PATCH v1] AMD SSB bits.
  2018-06-01 14:59 [PATCH v1] AMD SSB bits Konrad Rzeszutek Wilk
                   ` (2 preceding siblings ...)
  2018-06-01 14:59 ` [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Konrad Rzeszutek Wilk
@ 2018-06-05 13:23 ` Tom Lendacky
  2018-06-05 20:56   ` Konrad Rzeszutek Wilk
  3 siblings, 1 reply; 18+ messages in thread
From: Tom Lendacky @ 2018-06-05 13:23 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, linux-kernel, kvm, x86, tglx, andrew.cooper3

On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:
> Hi,
> 
> I was reading the AMD whitepaper on SSBD and noticed that they have added
> two new bits in CPUID 8000_0008 EBX:
>  1) Bit[26] - similar to Intel's SSB_NO: the speculative store bypass
>     mitigation is not needed anymore.
>  2) Bit[24] - use the SPEC_CTRL MSR (0x48) instead of the VIRT SPEC_CTRL
>     MSR (0xC001_011f).
> 
> See 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> (A copy of this document is available at
>     https://bugzilla.kernel.org/show_bug.cgi?id=199889)
> 
> Since I don't have the hardware (I'm not even sure AMD has produced it yet),
> I ended up cobbling together a DEBUG patch - the last one, which is, well,
> debug only (see below).

I'm not sure what is debug and what isn't, so I'm just commenting as if
none of it were debug.  If this patch is just for debug, then you can
probably ignore this.

> 
> QEMU patches will be sent in another patchset.
> 
>  arch/x86/include/asm/cpufeatures.h |  2 ++
>  arch/x86/kernel/cpu/bugs.c         | 13 +++++--------
>  arch/x86/kernel/cpu/common.c       |  9 ++++++++-
>  arch/x86/kvm/cpuid.c               | 10 ++++++++--
>  arch/x86/kvm/svm.c                 |  8 +++++---
>  5 files changed, 28 insertions(+), 14 deletions(-)
> Konrad Rzeszutek Wilk (3):
>       x86/bugs: Add AMD's variant of SSB_NO.
>       x86/bugs: Add AMD's SPEC_CTRL MSR usage
>       x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
> 
> 
> From 3d120f90731dae7e9a6f0c941c8bc228ed346baa Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date: Thu, 31 May 2018 20:56:08 -0400
> Subject: [PATCH] DEBUG HACK DEBUG
> 
> Expose the two new bits to the guest depending on the module
> parameters.
> 
> Also show the otherwise hidden flags in /proc/cpuinfo.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/include/asm/cpufeatures.h | 14 +++++++-------
>  arch/x86/kvm/cpuid.c               | 12 ++++++++++++
>  arch/x86/kvm/svm.c                 | 13 -------------
>  3 files changed, 19 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 5701f5cecd31..05b74564089a 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -206,15 +206,15 @@
>  #define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
>  #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
>  #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
> -#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
> +#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /*  MSR SPEC_CTRL is implemented */
>  #define X86_FEATURE_SSBD		( 7*32+17) /* Speculative Store Bypass Disable */
>  #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
>  #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
>  #define X86_FEATURE_SEV			( 7*32+20) /* AMD Secure Encrypted Virtualization */
>  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
>  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> -#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
> -#define X86_FEATURE_LS_CFG_SSBD		( 7*32+24)  /* "" AMD SSBD implementation via LS_CFG MSR */
> +#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /*  Disable Speculative Store Bypass. */
> +#define X86_FEATURE_LS_CFG_SSBD		( 7*32+24)  /*  AMD SSBD implementation via LS_CFG MSR */
>  #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
>  #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
> @@ -279,12 +279,12 @@
>  #define X86_FEATURE_CLZERO		(13*32+ 0) /* CLZERO instruction */
>  #define X86_FEATURE_IRPERF		(13*32+ 1) /* Instructions Retired Count */
>  #define X86_FEATURE_XSAVEERPTR		(13*32+ 2) /* Always save/restore FP error pointers */
> -#define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
> +#define X86_FEATURE_AMD_IBPB		(13*32+12) /*  Indirect Branch Prediction Barrier */

Do you really want to remove the double quotes?  This will cause lscpu /
cpuinfo to display ibpb and amd_ibpb in the flags.  I think just having
the ibpb flag is what was intended.  Ditto below on stibp and ssbd, too.
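
For reference, the "" convention: the build script that generates the
/proc/cpuinfo flag names from these comments treats a comment beginning
with "" as an empty display name, which hides the flag.  Roughly (as of
this era of the tree; the comment text here is abbreviated):

#define X86_FEATURE_AMD_IBPB	(13*32+12) /* "" IBPB */   <- hidden
#define X86_FEATURE_IBPB	( 7*32+26) /* IBPB */      <- shown as "ibpb"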

>  #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
> -#define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
> -#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
> +#define X86_FEATURE_AMD_STIBP		(13*32+15) /*  Single Thread Indirect Branch Predictors */
> +#define X86_FEATURE_AMD_SSBD		(13*32+24) /*  Speculative Store Bypass Disable */
>  #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
> -#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
> +#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /*  Speculative Store Bypass is fixed in hardware. */
>  
>  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index f4f30d0c25c4..67c5d4eb32ac 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -27,6 +27,12 @@
>  #include "trace.h"
>  #include "pmu.h"
>  
> +static bool __read_mostly expose_amd_ssb_no = 0;
> +module_param(expose_amd_ssb_no, bool, S_IRUGO | S_IWUSR);
> +
> +static bool __read_mostly expose_amd_spec_ctrl = 0;
> +module_param(expose_amd_spec_ctrl, bool, S_IRUGO | S_IWUSR);
> +
>  static u32 xstate_required_size(u64 xstate_bv, bool compacted)
>  {
>  	int feature_bit = 0;
> @@ -672,6 +678,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
>  		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  			entry->ebx |= F(VIRT_SSBD);
> +
> +		if (expose_amd_spec_ctrl && !boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +			entry->ebx |= F(AMD_SSBD);
> +
> +		if (expose_amd_ssb_no && !boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +			entry->ebx |= F(AMD_SSB_NO);

I'm not sure about the purpose of the module parameters.  Shouldn't you
add F(AMD_SSBD) and F(AMD_SSB_NO) to kvm_cpuid_8000_0008_ebx_x86_features
and allow cpuid_mask() to keep or clear them?
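
For context, a rough paraphrase of the masking machinery in this era's
arch/x86/kvm/cpuid.c (exact definitions hedged, but the shape is this):

/* F() reduces a feature define to its bit within the 32-bit leaf word. */
#define F(x) (1u << (X86_FEATURE_##x & 31))   /* F(AMD_SSBD) == 1u << 24 */

/*
 * cpuid_mask() then clears any bits the host does not itself enumerate:
 *	*word &= boot_cpu_data.x86_capability[wordnum];
 * so a bit listed in kvm_cpuid_8000_0008_ebx_x86_features is only
 * exposed to the guest when the host actually has the feature.
 */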

>  		break;
>  	}
>  	case 0x80000019:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 950ec50f77c3..a4c71b37df74 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -288,7 +288,6 @@ static const struct svm_direct_access_msrs {
>  	{ .index = MSR_CSTAR,				.always = true  },
>  	{ .index = MSR_SYSCALL_MASK,			.always = true  },
>  #endif
> -	{ .index = MSR_IA32_SPEC_CTRL,			.always = false },
>  	{ .index = MSR_IA32_PRED_CMD,			.always = false },
>  	{ .index = MSR_IA32_LASTBRANCHFROMIP,		.always = false },
>  	{ .index = MSR_IA32_LASTBRANCHTOIP,		.always = false },
> @@ -4231,18 +4230,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  		if (!data)
>  			break;
>  
> -		/*
> -		 * For non-nested:
> -		 * When it's written (to non-zero) for the first time, pass
> -		 * it through.
> -		 *
> -		 * For nested:
> -		 * The handling of the MSR bitmap for L2 guests is done in
> -		 * nested_svm_vmrun_msrpm.
> -		 * We update the L1 MSR bit as well since it will end up
> -		 * touching the MSR anyway now.
> -		 */
> -		set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);

Removing this will cause the MSR to always be intercepted, when, in fact,
we don't want it to be intercepted after the first time it is written as
non-zero.
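
A sketch of that lazy pass-through pattern (illustrative only: the
seen_nonzero flag below is hypothetical - the code being removed simply
calls set_msr_interception() on every non-zero write, which is
idempotent):

        svm->spec_ctrl = data;
        /*
         * Once the guest writes a non-zero value, stop intercepting
         * SPEC_CTRL so subsequent guest accesses avoid a VM exit.
         */
        if (data && !svm->seen_nonzero_spec_ctrl) {
                set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
                svm->seen_nonzero_spec_ctrl = true;
        }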

Thanks,
Tom

>  		break;
>  	case MSR_IA32_PRED_CMD:
>  		if (!msr->host_initiated &&
> 

* Re: [PATCH v1] AMD SSB bits.
  2018-06-05 13:23 ` [PATCH v1] AMD SSB bits Tom Lendacky
@ 2018-06-05 20:56   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-05 20:56 UTC (permalink / raw)
  To: Tom Lendacky; +Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3

On Tue, Jun 05, 2018 at 08:23:13AM -0500, Tom Lendacky wrote:
> On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:
> > Hi,
> > 
> > I was reading the AMD whitepaper on SSBD and noticed that they have added
> > two new bits in CPUID 8000_0008 EBX:
> >  1) Bit[26] - similar to Intel's SSB_NO: the speculative store bypass
> >     mitigation is not needed anymore.
> >  2) Bit[24] - use the SPEC_CTRL MSR (0x48) instead of the VIRT SPEC_CTRL
> >     MSR (0xC001_011f).
> > 
> > See 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
> > (A copy of this document is available at
> >     https://bugzilla.kernel.org/show_bug.cgi?id=199889)
> > 
> > Since I don't have the hardware (I'm not even sure AMD has produced it yet),
> > I ended up cobbling together a DEBUG patch - the last one, which is, well,
> > debug only (see below).
> 
> I'm not sure what is debug and what isn't, so I'm just commenting as if
> none of it were debug.  If this patch is just for debug, then you can
> probably ignore this.

It is just for debugging.

* [tip:x86/pti] x86/bugs: Add AMD's variant of SSB_NO
  2018-06-01 14:59 ` [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO Konrad Rzeszutek Wilk
@ 2018-06-06 12:15   ` tip-bot for Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Konrad Rzeszutek Wilk @ 2018-06-06 12:15 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: konrad.wilk, dwmw, thomas.lendacky, bp, Janakarajan.Natarajan,
	hpa, tglx, linux-kernel, mingo, luto

Commit-ID:  24809860012e0130fbafe536709e08a22b3e959e
Gitweb:     https://git.kernel.org/tip/24809860012e0130fbafe536709e08a22b3e959e
Author:     Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
AuthorDate: Fri, 1 Jun 2018 10:59:19 -0400
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 6 Jun 2018 14:13:16 +0200

x86/bugs: Add AMD's variant of SSB_NO

The AMD document outlining the SSBD handling,
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf,
mentions that CPUID 8000_0008.EBX[26], when set, means that the
speculative store bypass disable mitigation is no longer needed.

A copy of this document is available at:
    https://bugzilla.kernel.org/show_bug.cgi?id=199889

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: kvm@vger.kernel.org
Cc: andrew.cooper3@citrix.com
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180601145921.9500-2-konrad.wilk@oracle.com

---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/common.c       | 3 ++-
 arch/x86/kvm/cpuid.c               | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index fb00a2fca990..b6d7ce32927a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -283,6 +283,7 @@
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
+#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 95c8e507580d..8e3f062f462d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -992,7 +992,8 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
 
 	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
-	   !(ia32_cap & ARCH_CAP_SSB_NO))
+	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (x86_match_cpu(cpu_no_meltdown))
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 92bf2f2e7cdd..132f8a58692e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =

* [tip:x86/pti] x86/bugs: Add AMD's SPEC_CTRL MSR usage
  2018-06-01 14:59 ` [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage Konrad Rzeszutek Wilk
  2018-06-02  1:04   ` Tom Lendacky
@ 2018-06-06 12:16   ` tip-bot for Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 18+ messages in thread
From: tip-bot for Konrad Rzeszutek Wilk @ 2018-06-06 12:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: keescook, linux-kernel, tglx, hpa, joro, konrad.wilk,
	thomas.lendacky, pbonzini, luto, Janakarajan.Natarajan, rkrcmar,
	karahmed, dwmw, bp, mingo

Commit-ID:  6ac2f49edb1ef5446089c7c660017732886d62d6
Gitweb:     https://git.kernel.org/tip/6ac2f49edb1ef5446089c7c660017732886d62d6
Author:     Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
AuthorDate: Fri, 1 Jun 2018 10:59:20 -0400
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 6 Jun 2018 14:13:16 +0200

x86/bugs: Add AMD's SPEC_CTRL MSR usage

The AMD document outlining the SSBD handling
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
mentions that if CPUID 8000_0008.EBX[24] is set we should be using
the SPEC_CTRL MSR (0x48) over the VIRT SPEC_CTRL MSR (0xC001_011f)
for speculative store bypass disable.

This in effect means we should clear the X86_FEATURE_VIRT_SSBD
flag so that we would prefer the SPEC_CTRL MSR.

See the document titled:
   124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf

A copy of this document is available at
   https://bugzilla.kernel.org/show_bug.cgi?id=199889

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: kvm@vger.kernel.org
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: andrew.cooper3@citrix.com
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20180601145921.9500-3-konrad.wilk@oracle.com

---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/kernel/cpu/bugs.c         | 12 +++++++-----
 arch/x86/kernel/cpu/common.c       |  6 ++++++
 arch/x86/kvm/cpuid.c               | 10 ++++++++--
 arch/x86/kvm/svm.c                 |  8 +++++---
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b6d7ce32927a..5701f5cecd31 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -282,6 +282,7 @@
 #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7416fc206b4a..6bea81855cdd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -529,18 +529,20 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 	if (mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
-		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD uses
-		 * a completely different MSR and bit dependent on family.
+		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
+		 * use a completely different MSR and bit dependent on family.
 		 */
 		switch (boot_cpu_data.x86_vendor) {
 		case X86_VENDOR_INTEL:
+		case X86_VENDOR_AMD:
+			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+				x86_amd_ssb_disable();
+				break;
+			}
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			break;
-		case X86_VENDOR_AMD:
-			x86_amd_ssb_disable();
-			break;
 		}
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8e3f062f462d..910b47ee8078 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -803,6 +803,12 @@ static void init_speculation_control(struct cpuinfo_x86 *c)
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
 	}
+
+	if (cpu_has(c, X86_FEATURE_AMD_SSBD)) {
+		set_cpu_cap(c, X86_FEATURE_SSBD);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
+		clear_cpu_cap(c, X86_FEATURE_VIRT_SSBD);
+	}
 }
 
 void get_cpu_cap(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 132f8a58692e..f4f30d0c25c4 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
+		F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
@@ -664,7 +665,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			entry->ebx |= F(VIRT_SSBD);
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
-		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+		/*
+		 * The preference is to use SPEC CTRL MSR instead of the
+		 * VIRT_SPEC MSR.
+		 */
+		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			entry->ebx |= F(VIRT_SSBD);
 		break;
 	}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 26110c202b19..950ec50f77c3 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4115,7 +4115,8 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		msr_info->data = svm->spec_ctrl;
@@ -4217,11 +4218,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
-		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
+		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
 			return 1;
 
 		svm->spec_ctrl = data;
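
For anyone who wants to double-check what a guest actually sees, the
three bits can be probed from userspace with a trivial CPUID dump
(illustrative only; uses the GCC/clang <cpuid.h> helper):

	#include <stdio.h>
	#include <cpuid.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID leaf 0x8000_0008; the SSBD bits live in EBX. */
		if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
			return 1;

		printf("AMD_SSBD   (EBX bit 24): %u\n", (ebx >> 24) & 1);
		printf("VIRT_SSBD  (EBX bit 25): %u\n", (ebx >> 25) & 1);
		printf("AMD_SSB_NO (EBX bit 26): %u\n", (ebx >> 26) & 1);
		return 0;
	}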


* [tip:x86/pti] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-01 14:59 ` [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Konrad Rzeszutek Wilk
@ 2018-06-06 12:16   ` tip-bot for Konrad Rzeszutek Wilk
  2018-06-08 21:30   ` [PATCH v1 3/3] " Tom Lendacky
  1 sibling, 0 replies; 18+ messages in thread
From: tip-bot for Konrad Rzeszutek Wilk @ 2018-06-06 12:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: konrad.wilk, dwmw, mingo, tglx, hpa, bp, keescook, linux-kernel,
	karahmed

Commit-ID:  108fab4b5c8f12064ef86e02cb0459992affb30f
Gitweb:     https://git.kernel.org/tip/108fab4b5c8f12064ef86e02cb0459992affb30f
Author:     Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
AuthorDate: Fri, 1 Jun 2018 10:59:21 -0400
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 6 Jun 2018 14:13:17 +0200

x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features

Both AMD and Intel can have the SPEC_CTRL MSR for SSBD.

However, AMD also has two other ways of doing it, which do
not use the SPEC_CTRL MSR.
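
For background, the three possibilities line up roughly like this
(an illustrative sketch only, reusing the existing helpers and MSR
defines; not the code being changed below):

	if (boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
		/* CPUID 8000_0008 EBX[24]: architectural SPEC_CTRL MSR (0x48). */
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | SPEC_CTRL_SSBD);
	} else if (boot_cpu_has(X86_FEATURE_VIRT_SSBD)) {
		/* CPUID 8000_0008 EBX[25]: paravirt VIRT_SPEC_CTRL MSR (0xC001_011F). */
		wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, SPEC_CTRL_SSBD);
	} else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
		/* Non-architectural, family-dependent bit in the LS_CFG MSR. */
		wrmsrl(MSR_AMD64_LS_CFG,
		       x86_amd_ls_cfg_base | x86_amd_ls_cfg_ssbd_mask);
	}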

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: kvm@vger.kernel.org
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: andrew.cooper3@citrix.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180601145921.9500-4-konrad.wilk@oracle.com

---
 arch/x86/kernel/cpu/bugs.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6bea81855cdd..cd0fda1fff6d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -532,17 +532,12 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
 		 * use a completely different MSR and bit dependent on family.
 		 */
-		switch (boot_cpu_data.x86_vendor) {
-		case X86_VENDOR_INTEL:
-		case X86_VENDOR_AMD:
-			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
-				x86_amd_ssb_disable();
-				break;
-			}
+		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+			x86_amd_ssb_disable();
+		else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-			break;
 		}
 	}
 


* Re: [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-01 14:59 ` [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Konrad Rzeszutek Wilk
  2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
@ 2018-06-08 21:30   ` Tom Lendacky
  2018-06-11 14:01     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 18+ messages in thread
From: Tom Lendacky @ 2018-06-08 21:30 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, linux-kernel, kvm, x86, tglx, andrew.cooper3
  Cc: Ingo Molnar, H. Peter Anvin, Borislav Petkov, David Woodhouse,
	Kees Cook, KarimAllah Ahmed

On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:
> Both AMD and Intel can have the SPEC_CTRL MSR for SSBD.
> 
> However, AMD also has two other ways of doing it, which do
> not use the SPEC_CTRL MSR.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: David Woodhouse <dwmw@amazon.co.uk>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: KarimAllah Ahmed <karahmed@amazon.de>
> ---
>  arch/x86/kernel/cpu/bugs.c | 11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 6bea81855cdd..cd0fda1fff6d 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -532,17 +532,12 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
>  		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
>  		 * use a completely different MSR and bit dependent on family.
>  		 */
> -		switch (boot_cpu_data.x86_vendor) {
> -		case X86_VENDOR_INTEL:
> -		case X86_VENDOR_AMD:
> -			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> -				x86_amd_ssb_disable();
> -				break;
> -			}
> +		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> +			x86_amd_ssb_disable();
> +		else {

As I think about this more, I don't think we can do this for AMD.  The
X86_FEATURE_SSBD could be true because of the LS_CFG support and not the
AMD_SSBD CPUID bit.  But if the IBRS CPUID bit was set, we would now try
to use the SPEC_CTRL register for SSBD, which is not valid.

I think you will have to keep the case statements and explicitly check for
X86_FEATURE_AMD_SSBD before using SPEC_CTRL.
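
Roughly something like this (untested sketch):

		switch (boot_cpu_data.x86_vendor) {
		case X86_VENDOR_INTEL:
			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
			break;
		case X86_VENDOR_AMD:
			if (static_cpu_has(X86_FEATURE_AMD_SSBD)) {
				/* Only then is SPEC_CTRL valid for SSBD on AMD. */
				x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
				x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
				wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
			} else {
				x86_amd_ssb_disable();
			}
			break;
		}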

Thanks,
Tom

>  			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
>  			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> -			break;
>  		}
>  	}
>  
> 


* Re: [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-08 21:30   ` [PATCH v1 3/3] " Tom Lendacky
@ 2018-06-11 14:01     ` Konrad Rzeszutek Wilk
  2018-06-12 14:38       ` Tom Lendacky
  0 siblings, 1 reply; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-11 14:01 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3, Ingo Molnar,
	H. Peter Anvin, Borislav Petkov, David Woodhouse, Kees Cook,
	KarimAllah Ahmed

On Fri, Jun 08, 2018 at 04:30:15PM -0500, Tom Lendacky wrote:
> On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:
> > Both AMD and Intel can have the SPEC_CTRL MSR for SSBD.
> > 
> > However, AMD also has two other ways of doing it, which do
> > not use the SPEC_CTRL MSR.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > ---
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Borislav Petkov <bp@suse.de>
> > Cc: David Woodhouse <dwmw@amazon.co.uk>
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: KarimAllah Ahmed <karahmed@amazon.de>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 11 +++--------
> >  1 file changed, 3 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 6bea81855cdd..cd0fda1fff6d 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -532,17 +532,12 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
> >  		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
> >  		 * use a completely different MSR and bit dependent on family.
> >  		 */
> > -		switch (boot_cpu_data.x86_vendor) {
> > -		case X86_VENDOR_INTEL:
> > -		case X86_VENDOR_AMD:
> > -			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> > -				x86_amd_ssb_disable();
> > -				break;
> > -			}
> > +		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> > +			x86_amd_ssb_disable();
> > +		else {
> 
> As I think about this more, I don't think we can do this for AMD.  The
> X86_FEATURE_SSBD could be true because of the LS_CFG support and not the
> AMD_SSBD CPUID bit.  But if the IBRS CPUID bit was set, we would now try
> to use the SPEC_CTRL register for SSBD, which is not valid.

I was reading the AMD docs, and while SPEC_CTRL provides IBRS, my
understanding (from reading between the lines) is that AMD would never
actually implement it.

That is, it would expose the 'Enhanced IBRS' bit, if anything, but nothing else.

Granted, this is tea-reading at its best, so ..
> 
> I think you will have to keep the case statements and explicitly check for
> X86_FEATURE_AMD_SSBD before using SPEC_CTRL.

.. we could, or alternatively add an extra check for X86_FEATURE_AMD_SSBD?
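
E.g. something like (untested):

		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL) ||
		    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
		     !static_cpu_has(X86_FEATURE_AMD_SSBD))) {
			x86_amd_ssb_disable();
		} else {
			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
		}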


I think Thomas already sent this out, but it should be no problem to
add a fix as there is no hardware with this yet, so it isn't like we
are breaking anything :-)


* Re: [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-11 14:01     ` Konrad Rzeszutek Wilk
@ 2018-06-12 14:38       ` Tom Lendacky
  2018-06-15 18:57         ` Thomas Gleixner
  0 siblings, 1 reply; 18+ messages in thread
From: Tom Lendacky @ 2018-06-12 14:38 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: linux-kernel, kvm, x86, tglx, andrew.cooper3, Ingo Molnar,
	H. Peter Anvin, Borislav Petkov, David Woodhouse, Kees Cook,
	KarimAllah Ahmed

On 6/11/2018 9:01 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jun 08, 2018 at 04:30:15PM -0500, Tom Lendacky wrote:
>> On 6/1/2018 9:59 AM, Konrad Rzeszutek Wilk wrote:
>>> Both AMD and Intel can have the SPEC_CTRL MSR for SSBD.
>>>
>>> However, AMD also has two other ways of doing it, which do
>>> not use the SPEC_CTRL MSR.
>>>
>>> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>>
>>> ---
>>> Cc: Thomas Gleixner <tglx@linutronix.de>
>>> Cc: Ingo Molnar <mingo@redhat.com>
>>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Cc: Borislav Petkov <bp@suse.de>
>>> Cc: David Woodhouse <dwmw@amazon.co.uk>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Cc: KarimAllah Ahmed <karahmed@amazon.de>
>>> ---
>>>  arch/x86/kernel/cpu/bugs.c | 11 +++--------
>>>  1 file changed, 3 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>>> index 6bea81855cdd..cd0fda1fff6d 100644
>>> --- a/arch/x86/kernel/cpu/bugs.c
>>> +++ b/arch/x86/kernel/cpu/bugs.c
>>> @@ -532,17 +532,12 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
>>>  		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
>>>  		 * use a completely different MSR and bit dependent on family.
>>>  		 */
>>> -		switch (boot_cpu_data.x86_vendor) {
>>> -		case X86_VENDOR_INTEL:
>>> -		case X86_VENDOR_AMD:
>>> -			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
>>> -				x86_amd_ssb_disable();
>>> -				break;
>>> -			}
>>> +		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>>> +			x86_amd_ssb_disable();
>>> +		else {
>>
>> As I think about this more, I don't think we can do this for AMD.  The
>> X86_FEATURE_SSBD could be true because of the LS_CFG support and not the
>> AMD_SSBD CPUID bit.  But if the IBRS CPUID bit was set, we would now try
>> to use the SPEC_CTRL register for SSBD, which is not valid.
> 
> I was reading the AMD docs, and while SPEC_CTRL provides IBRS, my
> understanding (from reading between the lines) is that AMD would never
> actually implement it.
> 
> That is, it would expose the 'Enhanced IBRS' bit, if anything, but nothing else.
> 
> Granted, this is tea-reading at its best, so ..
>>
>> I think you will have to keep the case statements and explicitly check for
>> X86_FEATURE_AMD_SSBD before using SPEC_CTRL.
> 
> .. we could, or alternatively add an extra check for X86_FEATURE_AMD_SSBD?

Whichever you feel is best, so long as we only use SPEC_CTRL for SSBD on
AMD when X86_FEATURE_AMD_SSBD is present.

Thanks,
Tom

> 
> 
> I think Thomas already sent this out, but it should be no problem to
> add a fix as there is no hardware with this yet, so it isn't like we
> are breaking anything :-)
> 


* Re: [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-12 14:38       ` Tom Lendacky
@ 2018-06-15 18:57         ` Thomas Gleixner
  2018-06-15 19:38           ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 18+ messages in thread
From: Thomas Gleixner @ 2018-06-15 18:57 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: Konrad Rzeszutek Wilk, linux-kernel, kvm, x86, andrew.cooper3,
	Ingo Molnar, H. Peter Anvin, Borislav Petkov, David Woodhouse,
	Kees Cook, KarimAllah Ahmed

On Tue, 12 Jun 2018, Tom Lendacky wrote:
> On 6/11/2018 9:01 AM, Konrad Rzeszutek Wilk wrote:
> >> I think you will have to keep the case statements and explicitly check for
> >> X86_FEATURE_AMD_SSBD before using SPEC_CTRL.
> > 
> > .. we could, or alternatively add an extra check for X86_FEATURE_AMD_SSBD?
> 
> Whichever you feel is best, so long as we only use SPEC_CTRL for SSBD on
> AMD when X86_FEATURE_AMD_SSBD is present.

Is there anyone working on a fix or has this been forgotten?

Thanks,

	tglx


* Re: [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
  2018-06-15 18:57         ` Thomas Gleixner
@ 2018-06-15 19:38           ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-06-15 19:38 UTC (permalink / raw)
  To: Thomas Gleixner, alexandre.chartre
  Cc: Tom Lendacky, linux-kernel, kvm, x86, andrew.cooper3,
	Ingo Molnar, H. Peter Anvin, Borislav Petkov, David Woodhouse,
	Kees Cook, KarimAllah Ahmed

On Fri, Jun 15, 2018 at 08:57:40PM +0200, Thomas Gleixner wrote:
> On Tue, 12 Jun 2018, Tom Lendacky wrote:
> > On 6/11/2018 9:01 AM, Konrad Rzeszutek Wilk wrote:
> > >> I think you will have to keep the case statements and explicitly check for
> > >> X86_FEATURE_AMD_SSBD before using SPEC_CTRL.
> > > 
> > > .. we could, or alternatively add an extra check for X86_FEATURE_AMD_SSBD?
> > 
> > Whichever you feel is best, so long as we only use SPEC_CTRL for SSBD on
> > AMD when X86_FEATURE_AMD_SSBD is present.
> 
> Is there anyone working on a fix or has this been forgotten?

I have it on my TODO.

But I am a bit busy with the lazy FPU work, so I will get it done next week.

Maybe along with X86_FEATURE_IA32_ARCH_CAPS Bit(1) (IBRS_ALL), to
disable retpolines on those machines.
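
That is, something along these lines (a sketch; assumes the existing
MSR_IA32_ARCH_CAPABILITIES/ARCH_CAP_IBRS_ALL defines from msr-index.h):

	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
		u64 ia32_cap;

		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
		if (ia32_cap & ARCH_CAP_IBRS_ALL) {
			/*
			 * Enhanced IBRS: IBRS stays in effect across
			 * privilege transitions, so retpolines could
			 * be skipped on such machines.
			 */
		}
	}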

CCing Alex

> 
> Thanks,
> 
> 	tglx


end of thread

Thread overview: 18+ messages
2018-06-01 14:59 [PATCH v1] AMD SSB bits Konrad Rzeszutek Wilk
2018-06-01 14:59 ` [PATCH v1 1/3] x86/bugs: Add AMD's variant of SSB_NO Konrad Rzeszutek Wilk
2018-06-06 12:15   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
2018-06-01 14:59 ` [PATCH v1 2/3] x86/bugs: Add AMD's SPEC_CTRL MSR usage Konrad Rzeszutek Wilk
2018-06-02  1:04   ` Tom Lendacky
2018-06-04 20:20     ` Konrad Rzeszutek Wilk
2018-06-04 20:43       ` Tom Lendacky
2018-06-04 20:54         ` Konrad Rzeszutek Wilk
2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
2018-06-01 14:59 ` [PATCH v1 3/3] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Konrad Rzeszutek Wilk
2018-06-06 12:16   ` [tip:x86/pti] " tip-bot for Konrad Rzeszutek Wilk
2018-06-08 21:30   ` [PATCH v1 3/3] " Tom Lendacky
2018-06-11 14:01     ` Konrad Rzeszutek Wilk
2018-06-12 14:38       ` Tom Lendacky
2018-06-15 18:57         ` Thomas Gleixner
2018-06-15 19:38           ` Konrad Rzeszutek Wilk
2018-06-05 13:23 ` [PATCH v1] AMD SSB bits Tom Lendacky
2018-06-05 20:56   ` Konrad Rzeszutek Wilk
