All of lore.kernel.org
 help / color / mirror / Atom feed
* [patch 00/15] Hidden 0
@ 2018-05-13 14:00 Thomas Gleixner
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
                   ` (15 more replies)
  0 siblings, 16 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

The following patch series adds the necessary glue for:

 - AMD Zen: software coordination of sibling SSBD state

 - AMD/KVM: Support for MSR_VIRT_SPEC_CTRL

 - Use x86_spec_ctrl_mask to sanitize the guest_spec_ctrl values

 - Simplifications of the code base

Git bundle comes in follow up mail.

Thanks,

	tglx
---
 include/asm/cpufeatures.h   |   20 ++++---
 include/asm/msr-index.h     |    2 
 include/asm/nospec-branch.h |   18 +-----
 include/asm/spec-ctrl.h     |   56 +++++++++++++++++---
 kernel/cpu/amd.c            |    8 --
 kernel/cpu/bugs.c           |   98 +++++++++++++++++++----------------
 kernel/cpu/common.c         |   23 ++++++--
 kernel/cpu/intel.c          |    1 
 kernel/process.c            |  123 +++++++++++++++++++++++++++++++++++++++++---
 kernel/smpboot.c            |    5 +
 kvm/cpuid.c                 |   19 ++++--
 kvm/svm.c                   |   58 ++++++++++++++------
 kvm/vmx.c                   |   13 +---
 13 files changed, 322 insertions(+), 122 deletions(-)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 01/15] Hidden 1
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-13 22:17   ` [MODERATED] " Borislav Petkov
                     ` (2 more replies)
  2018-05-13 14:00 ` [patch 02/15] Hidden 2 Thomas Gleixner
                   ` (14 subsequent siblings)
  15 siblings, 3 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

svm_vcpu_run() invokes x86_spec_ctrl_restore_host() after VMEXIT, but
before the host GS is restored. x86_spec_ctrl_restore_host() uses 'current'
to determine the host SSBD state of the thread. 'current' is GS based, but
host GS is not yet restored and the access causes a triple fault.

Move the call after the host GS restore.

Fixes: 5cf687548705 ("x86/bugs, KVM: Support the combination of guest and host IBRS")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kvm/svm.c |   24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5651,6 +5651,18 @@ static void svm_vcpu_run(struct kvm_vcpu
 #endif
 		);
 
+	/* Eliminate branch target predictions from guest mode */
+	vmexit_fill_RSB();
+
+#ifdef CONFIG_X86_64
+	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
+#else
+	loadsegment(fs, svm->host.fs);
+#ifndef CONFIG_X86_32_LAZY_GS
+	loadsegment(gs, svm->host.gs);
+#endif
+#endif
+
 	/*
 	 * We do not use IBRS in the kernel. If this vCPU has used the
 	 * SPEC_CTRL MSR it may have left it on; save the value and
@@ -5671,18 +5683,6 @@ static void svm_vcpu_run(struct kvm_vcpu
 
 	x86_spec_ctrl_restore_host(svm->spec_ctrl);
 
-	/* Eliminate branch target predictions from guest mode */
-	vmexit_fill_RSB();
-
-#ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
-#else
-	loadsegment(fs, svm->host.fs);
-#ifndef CONFIG_X86_32_LAZY_GS
-	loadsegment(gs, svm->host.gs);
-#endif
-#endif
-
 	reload_tss(vcpu);
 
 	local_irq_disable();

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 02/15] Hidden 2
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-16  2:39   ` [MODERATED] " Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 03/15] Hidden 3 Thomas Gleixner
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

Intel and AMD have different CPUID bits for those so use synthetic bits
which get set on the respective vendor in init_speculation_control(). So
that debacles like the commit message of

  c65732e4f721 ("x86/cpu: Restore CPUID_8000_0008_EBX reload")

talks about don't happen anymore.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: J��rg Otte <jrg.otte@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Link: https://lkml.kernel.org/r/20180504161815.GG9257@pd.tnic
---
 arch/x86/include/asm/cpufeatures.h |   10 ++++++----
 arch/x86/kernel/cpu/common.c       |   14 ++++++++++----
 arch/x86/kvm/cpuid.c               |   10 +++++-----
 arch/x86/kvm/svm.c                 |    6 +++---
 arch/x86/kvm/vmx.c                 |    9 ++-------
 5 files changed, 26 insertions(+), 23 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -198,7 +198,6 @@
 #define X86_FEATURE_CAT_L2		( 7*32+ 5) /* Cache Allocation Technology L2 */
 #define X86_FEATURE_CDP_L3		( 7*32+ 6) /* Code and Data Prioritization L3 */
 #define X86_FEATURE_INVPCID_SINGLE	( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
-
 #define X86_FEATURE_HW_PSTATE		( 7*32+ 8) /* AMD HW-PState */
 #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
@@ -216,6 +215,9 @@
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
 #define X86_FEATURE_AMD_SSBD		( 7*32+24)  /* "" AMD SSBD implementation */
+#define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
+#define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+#define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
@@ -276,9 +278,9 @@
 #define X86_FEATURE_CLZERO		(13*32+ 0) /* CLZERO instruction */
 #define X86_FEATURE_IRPERF		(13*32+ 1) /* Instructions Retired Count */
 #define X86_FEATURE_XSAVEERPTR		(13*32+ 2) /* Always save/restore FP error pointers */
-#define X86_FEATURE_IBPB		(13*32+12) /* Indirect Branch Prediction Barrier */
-#define X86_FEATURE_IBRS		(13*32+14) /* Indirect Branch Restricted Speculation */
-#define X86_FEATURE_STIBP		(13*32+15) /* Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
+#define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
+#define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -757,17 +757,23 @@ static void init_speculation_control(str
 	 * and they also have a different bit for STIBP support. Also,
 	 * a hypervisor might have set the individual AMD bits even on
 	 * Intel CPUs, for finer-grained selection of what's available.
-	 *
-	 * We use the AMD bits in 0x8000_0008 EBX as the generic hardware
-	 * features, which are visible in /proc/cpuinfo and used by the
-	 * kernel. So set those accordingly from the Intel bits.
 	 */
 	if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
 		set_cpu_cap(c, X86_FEATURE_IBRS);
 		set_cpu_cap(c, X86_FEATURE_IBPB);
 	}
+
 	if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
 		set_cpu_cap(c, X86_FEATURE_STIBP);
+
+	if (cpu_has(c, X86_FEATURE_AMD_IBRS))
+		set_cpu_cap(c, X86_FEATURE_IBRS);
+
+	if (cpu_has(c, X86_FEATURE_AMD_IBPB))
+		set_cpu_cap(c, X86_FEATURE_IBPB);
+
+	if (cpu_has(c, X86_FEATURE_AMD_STIBP))
+		set_cpu_cap(c, X86_FEATURE_STIBP);
 }
 
 void get_cpu_cap(struct cpuinfo_x86 *c)
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,7 @@ static inline int __do_cpuid_ent(struct
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(IBPB) | F(IBRS);
+		F(AMD_IBPB) | F(AMD_IBRS);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
@@ -648,10 +648,10 @@ static inline int __do_cpuid_ent(struct
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
 		/* IBRS and IBPB aren't necessarily present in hardware cpuid */
-		if (boot_cpu_has(X86_FEATURE_IBPB))
-			entry->ebx |= F(IBPB);
-		if (boot_cpu_has(X86_FEATURE_IBRS))
-			entry->ebx |= F(IBRS);
+		if (boot_cpu_has(X86_FEATURE_AMD_IBPB))
+			entry->ebx |= F(AMD_IBPB);
+		if (boot_cpu_has(X86_FEATURE_AMD_IBRS))
+			entry->ebx |= F(AMD_IBRS);
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
 		break;
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4108,7 +4108,7 @@ static int svm_get_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
 			return 1;
 
 		msr_info->data = svm->spec_ctrl;
@@ -4203,7 +4203,7 @@ static int svm_set_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
@@ -4230,7 +4230,7 @@ static int svm_set_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_PRED_CMD:
 		if (!msr->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
 			return 1;
 
 		if (data & ~PRED_CMD_IBPB)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3523,9 +3523,7 @@ static int vmx_get_msr(struct kvm_vcpu *
 		return kvm_get_msr_common(vcpu, msr_info);
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_SSBD))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
 			return 1;
 
 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
@@ -3643,9 +3641,7 @@ static int vmx_set_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_SSBD))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
@@ -3675,7 +3671,6 @@ static int vmx_set_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_PRED_CMD:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
 			return 1;
 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 03/15] Hidden 3
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
  2018-05-13 14:00 ` [patch 02/15] Hidden 2 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 10:02   ` [MODERATED] " Borislav Petkov
  2018-05-16  2:49   ` Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 04/15] Hidden 4 Thomas Gleixner
                   ` (12 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

The availability of the SPEC_CTRL MSR is enumerated by a CPUID bit on
Intel and implied by IBRS or STIBP support on AMD. That's just confusing
and in case an AMD CPU has IBRS not supported because the underlying
problem has been fixed but has another bit valid in the SPEC_CTRL MSR,
the thing falls apart.

Add a synthetic feature bit X86_FEATURE_MSR_SPEC_CTRL to denote the
availability on both Intel and AMD.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/kernel/cpu/bugs.c         |   18 +++++++++++-------
 arch/x86/kernel/cpu/common.c       |    9 +++++++--
 arch/x86/kernel/cpu/intel.c        |    1 +
 4 files changed, 20 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -206,6 +206,7 @@
 #define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
+#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
 
 #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
 #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -64,7 +64,7 @@ void __init check_bugs(void)
 	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
 	 * init code as it is not enumerated and depends on the family.
 	 */
-	if (boot_cpu_has(X86_FEATURE_IBRS))
+	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 
 	/* Select the proper spectre mitigation before patching alternatives */
@@ -145,7 +145,7 @@ u64 x86_spec_ctrl_get_default(void)
 {
 	u64 msrval = x86_spec_ctrl_base;
 
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
 		msrval |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
 	return msrval;
 }
@@ -155,10 +155,12 @@ void x86_spec_ctrl_set_guest(u64 guest_s
 {
 	u64 host = x86_spec_ctrl_base;
 
-	if (!boot_cpu_has(X86_FEATURE_IBRS))
+	/* Is MSR_SPEC_CTRL implemented ? */
+	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		return;
 
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+	/* Intel controls SSB in MSR_SPEC_CTRL */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
 		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
 
 	if (host != guest_spec_ctrl)
@@ -170,10 +172,12 @@ void x86_spec_ctrl_restore_host(u64 gues
 {
 	u64 host = x86_spec_ctrl_base;
 
-	if (!boot_cpu_has(X86_FEATURE_IBRS))
+	/* Is MSR_SPEC_CTRL implemented ? */
+	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		return;
 
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+	/* Intel controls SSB in MSR_SPEC_CTRL */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
 		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
 
 	if (host != guest_spec_ctrl)
@@ -631,7 +635,7 @@ int arch_prctl_spec_ctrl_get(struct task
 
 void x86_spec_ctrl_setup_ap(void)
 {
-	if (boot_cpu_has(X86_FEATURE_IBRS))
+	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		x86_spec_ctrl_set(x86_spec_ctrl_base & ~x86_spec_ctrl_mask);
 
 	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -761,19 +761,24 @@ static void init_speculation_control(str
 	if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
 		set_cpu_cap(c, X86_FEATURE_IBRS);
 		set_cpu_cap(c, X86_FEATURE_IBPB);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
 	}
 
 	if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 
-	if (cpu_has(c, X86_FEATURE_AMD_IBRS))
+	if (cpu_has(c, X86_FEATURE_AMD_IBRS)) {
 		set_cpu_cap(c, X86_FEATURE_IBRS);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
+	}
 
 	if (cpu_has(c, X86_FEATURE_AMD_IBPB))
 		set_cpu_cap(c, X86_FEATURE_IBPB);
 
-	if (cpu_has(c, X86_FEATURE_AMD_STIBP))
+	if (cpu_has(c, X86_FEATURE_AMD_STIBP)) {
 		set_cpu_cap(c, X86_FEATURE_STIBP);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
+	}
 }
 
 void get_cpu_cap(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -188,6 +188,7 @@ static void early_init_intel(struct cpui
 		setup_clear_cpu_cap(X86_FEATURE_IBPB);
 		setup_clear_cpu_cap(X86_FEATURE_STIBP);
 		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
+		setup_clear_cpu_cap(X86_FEATURE_MSR_SPEC_CTRL);
 		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
 		setup_clear_cpu_cap(X86_FEATURE_SSBD);
 	}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 04/15] Hidden 4
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (2 preceding siblings ...)
  2018-05-13 14:00 ` [patch 03/15] Hidden 3 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 11:11   ` [MODERATED] " Borislav Petkov
  2018-05-16  2:53   ` Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 05/15] Hidden 5 Thomas Gleixner
                   ` (11 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

The SSBD enumeration is similarly to the other bits magically shared
between Intel and AMD though the mechanisms are different.

Make X86_FEATURE_SSBD synthetic and set it depending on the vendor specific
features or family dependent setup.

Change the Intel bit to X86_FEATURE_SPEC_CTRL_SSBD to denote that SSBD is
controlled via MSR_SPEC_CTRL and fix up the usage sites.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cpufeatures.h |    7 +++----
 arch/x86/kernel/cpu/amd.c          |    7 +------
 arch/x86/kernel/cpu/bugs.c         |   10 +++++-----
 arch/x86/kernel/cpu/common.c       |    3 +++
 arch/x86/kernel/process.c          |    2 +-
 5 files changed, 13 insertions(+), 16 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -207,15 +207,14 @@
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
-
+#define X86_FEATURE_SSBD		( 7*32+17) /* Speculative Store Bypass Disable */
 #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
 #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
 #define X86_FEATURE_SEV			( 7*32+20) /* AMD Secure Encrypted Virtualization */
-
 #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
-#define X86_FEATURE_AMD_SSBD		( 7*32+24)  /* "" AMD SSBD implementation */
+#define X86_FEATURE_LS_CFG_SSBD		( 7*32+24)  /* "" AMD SSBD implementation via LS_CFG MSR */
 #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
 #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
 #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
@@ -339,7 +338,7 @@
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
-#define X86_FEATURE_SSBD		(18*32+31) /* Speculative Store Bypass Disable */
+#define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
 
 /*
  * BUG word(s)
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -570,8 +570,8 @@ static void bsp_init_amd(struct cpuinfo_
 		 * avoid RMW. If that faults, do not enable SSBD.
 		 */
 		if (!rdmsrl_safe(MSR_AMD64_LS_CFG, &x86_amd_ls_cfg_base)) {
+			setup_force_cpu_cap(X86_FEATURE_LS_CFG_SSBD);
 			setup_force_cpu_cap(X86_FEATURE_SSBD);
-			setup_force_cpu_cap(X86_FEATURE_AMD_SSBD);
 			x86_amd_ls_cfg_ssbd_mask = 1ULL << bit;
 		}
 	}
@@ -919,11 +919,6 @@ static void init_amd(struct cpuinfo_x86
 	/* AMD CPUs don't reset SS attributes on SYSRET, Xen does. */
 	if (!cpu_has(c, X86_FEATURE_XENPV))
 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
-
-	if (boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
-		set_cpu_cap(c, X86_FEATURE_SSBD);
-		set_cpu_cap(c, X86_FEATURE_AMD_SSBD);
-	}
 }
 
 #ifdef CONFIG_X86_32
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -159,8 +159,8 @@ void x86_spec_ctrl_set_guest(u64 guest_s
 	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		return;
 
-	/* Intel controls SSB in MSR_SPEC_CTRL */
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
+	/* SSBD controlled in MSR_SPEC_CTRL */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
 		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
 
 	if (host != guest_spec_ctrl)
@@ -176,8 +176,8 @@ void x86_spec_ctrl_restore_host(u64 gues
 	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		return;
 
-	/* Intel controls SSB in MSR_SPEC_CTRL */
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
+	/* SSBD controlled in MSR_SPEC_CTRL */
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
 		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
 
 	if (host != guest_spec_ctrl)
@@ -189,7 +189,7 @@ static void x86_amd_ssb_disable(void)
 {
 	u64 msrval = x86_amd_ls_cfg_base | x86_amd_ls_cfg_ssbd_mask;
 
-	if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
+	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -767,6 +767,9 @@ static void init_speculation_control(str
 	if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 
+	if (cpu_has(c, X86_FEATURE_SPEC_CTRL_SSBD))
+		set_cpu_cap(c, X86_FEATURE_SSBD);
+
 	if (cpu_has(c, X86_FEATURE_AMD_IBRS)) {
 		set_cpu_cap(c, X86_FEATURE_IBRS);
 		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -283,7 +283,7 @@ static __always_inline void __speculativ
 {
 	u64 msr;
 
-	if (static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
 		msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
 		wrmsrl(MSR_AMD64_LS_CFG, msr);
 	} else {

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 05/15] Hidden 5
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (3 preceding siblings ...)
  2018-05-13 14:00 ` [patch 04/15] Hidden 4 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 11:18   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:24   ` Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
                   ` (10 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

Add a ZEN feature bit so family-dependent static_cpu_has() optimizations
can be built for ZEN.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/kernel/cpu/amd.c          |    1 +
 2 files changed, 2 insertions(+)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -218,6 +218,7 @@
 #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
 #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
 #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -812,6 +812,7 @@ static void init_amd_bd(struct cpuinfo_x
 
 static void init_amd_zn(struct cpuinfo_x86 *c)
 {
+	set_cpu_cap(c, X86_FEATURE_ZEN);
 	/*
 	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
 	 * all up to and including B1.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 06/15] Hidden 6
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (4 preceding siblings ...)
  2018-05-13 14:00 ` [patch 05/15] Hidden 5 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 12:01   ` [MODERATED] " Borislav Petkov
                     ` (2 more replies)
  2018-05-13 14:00 ` [patch 07/15] Hidden 7 Thomas Gleixner
                   ` (9 subsequent siblings)
  15 siblings, 3 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
hyperthreading is enabled the SSBD bit toggle needs to take both cores into
account. Otherwise the following situation can happen:

CPU0		CPU1

disable SSB
		disable SSB
		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well

So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
again.

On Intel the SSBD control is per core as well, but the synchronization
logic is implemented behind the per thread SPEC_CTRL MSR.

Add the necessary synchronization logic for AMD family 17H. Unfortunately
that requires a spinlock to serialize the access to the MSR, but the locks
are only shared between siblings.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h |    6 ++
 arch/x86/kernel/process.c        |  108 ++++++++++++++++++++++++++++++++++++---
 arch/x86/kernel/smpboot.c        |    5 +
 3 files changed, 113 insertions(+), 6 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -33,6 +33,12 @@ static inline u64 ssbd_tif_to_amd_ls_cfg
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
 }
 
+#ifdef CONFIG_SMP
+extern void speculative_store_bypass_ht_init(void);
+#else
+static inline void speculative_store_bypass_ht_init(void) { }
+#endif
+
 extern void speculative_store_bypass_update(void);
 
 #endif
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -279,22 +279,118 @@ static inline void switch_to_bitmap(stru
 	}
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+#ifdef CONFIG_SMP
+
+struct ssb_state {
+	struct ssb_state	*shared_state;
+	raw_spinlock_t		lock;
+	unsigned int		disable_state;
+	unsigned long		local_state;
+};
+
+#define LSTATE_SSB	0
+
+static DEFINE_PER_CPU(struct ssb_state, ssb_state);
+
+void speculative_store_bypass_ht_init(void)
 {
-	u64 msr;
+	struct ssb_state *st = this_cpu_ptr(&ssb_state);
+	unsigned int this_cpu = smp_processor_id();
+	unsigned int cpu;
+
+	st->local_state = 0;
+	if (st->shared_state)
+		return;
+
+	raw_spin_lock_init(&st->lock);
+
+	/* Go over HT siblings: */
+	for_each_cpu(cpu, topology_sibling_cpumask(this_cpu)) {
+		if (cpu == this_cpu)
+			continue;
 
-	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
-		msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
+		if (!per_cpu(ssb_state, cpu).shared_state)
+			continue;
+
+		/* Link it to the state of the sibling: */
+		st->shared_state = per_cpu(ssb_state, cpu).shared_state;
+		return;
+	}
+	/* Link shared state of the first HT sibling to itself. */
+	st->shared_state = st;
+}
+
+/*
+ * Logic is: first HT sibling enables SSBD for both siblings in the core and
+ * last sibling to disable it, disables it for the whole core.
+ */
+static __always_inline void amd_set_core_ssb_state(unsigned long tifn)
+{
+	struct ssb_state *st = this_cpu_ptr(&ssb_state);
+	u64 msr = x86_amd_ls_cfg_base;
+
+	if (!static_cpu_has(X86_FEATURE_ZEN)) {
+		msr |= ssbd_tif_to_amd_ls_cfg(tifn);
 		wrmsrl(MSR_AMD64_LS_CFG, msr);
+		return;
+	}
+
+	if (tifn & _TIF_SSBD) {
+		/*
+		 * Since this can race with prctl(), block reentry on the
+		 * same CPU.
+		 */
+		if (__test_and_set_bit(LSTATE_SSB, &st->local_state))
+			return;
+
+		msr |= x86_amd_ls_cfg_ssbd_mask;
+
+		raw_spin_lock(&st->shared_state->lock);
+		/* First sibling enables SSBD: */
+		if (!st->shared_state->disable_state)
+			wrmsrl(MSR_AMD64_LS_CFG, msr);
+		st->shared_state->disable_state++;
+		raw_spin_unlock(&st->shared_state->lock);
 	} else {
-		msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
-		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+		if (!__test_and_clear_bit(LSTATE_SSB, &st->local_state))
+			return;
+
+		raw_spin_lock(&st->shared_state->lock);
+		st->shared_state->disable_state--;
+		if (!st->shared_state->disable_state)
+			wrmsrl(MSR_AMD64_LS_CFG, msr);
+		raw_spin_unlock(&st->shared_state->lock);
 	}
 }
+#else
+static __always_inline void amd_set_core_ssb_state(unsigned long tifn)
+{
+	u64 msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
+
+	wrmsrl(MSR_AMD64_LS_CFG, msr);
+}
+#endif
+
+static __always_inline void intel_set_ssb_state(unsigned long tifn)
+{
+	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+
+	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+}
+
+static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+{
+	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+		amd_set_core_ssb_state(tifn);
+	else
+		intel_set_ssb_state(tifn);
+}
 
 void speculative_store_bypass_update(void)
 {
+	preempt_disable();
 	__speculative_store_bypass_update(current_thread_info()->flags);
+	preempt_enable();
 }
 
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -79,6 +79,7 @@
 #include <asm/qspinlock.h>
 #include <asm/intel-family.h>
 #include <asm/cpu_device_id.h>
+#include <asm/spec-ctrl.h>
 
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
@@ -244,6 +245,8 @@ static void notrace start_secondary(void
 	 */
 	check_tsc_sync_target();
 
+	speculative_store_bypass_ht_init();
+
 	/*
 	 * Lock vector_lock, set CPU online and bring the vector
 	 * allocator online. Online must be set with vector_lock held
@@ -1292,6 +1295,8 @@ void __init native_smp_prepare_cpus(unsi
 	set_mtrr_aps_delayed_init();
 
 	smp_quirk_init_udelay();
+
+	speculative_store_bypass_ht_init();
 }
 
 void arch_enable_nonboot_cpus_begin(void)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 07/15] Hidden 7
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (5 preceding siblings ...)
  2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 17:07   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:22   ` Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 08/15] Hidden 8 Thomas Gleixner
                   ` (8 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
about the bit position of the SSBD bit and thus facilitate migration.
Also, the sibling coordination on Family 17H CPUs can only be done on
the host.

Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an
extra argument for the VIRT_SPEC_CTRL MSR.

Hand in 0 from VMX and in SVM add a new virt_spec_ctrl member to the CPU
data structure which is going to be used in later patches for the actual
implementation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h |    9 ++++++---
 arch/x86/kernel/cpu/bugs.c       |   20 ++++++++++++++++++--
 arch/x86/kvm/svm.c               |   11 +++++++++--
 arch/x86/kvm/vmx.c               |    4 ++--
 4 files changed, 35 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -10,10 +10,13 @@
  * the guest has, while on VMEXIT we restore the host view. This
  * would be easier if SPEC_CTRL were architecturally maskable or
  * shadowable for guests but this is not (currently) the case.
- * Takes the guest view of SPEC_CTRL MSR as a parameter.
+ * Takes the guest view of SPEC_CTRL MSR as a parameter and also
+ * the guest's version of VIRT_SPEC_CTRL, if emulated.
  */
-extern void x86_spec_ctrl_set_guest(u64);
-extern void x86_spec_ctrl_restore_host(u64);
+extern void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl,
+				    u64 guest_virt_spec_ctrl);
+extern void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl,
+				       u64 guest_virt_spec_ctrl);
 
 /* AMD specific Speculative Store Bypass MSR data */
 extern u64 x86_amd_ls_cfg_base;
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -151,7 +151,15 @@ u64 x86_spec_ctrl_get_default(void)
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_get_default);
 
-void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl)
+/**
+ * x86_spec_ctrl_set_guest - Set speculation control registers for the guest
+ * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *				(may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
 {
 	u64 host = x86_spec_ctrl_base;
 
@@ -168,7 +176,15 @@ void x86_spec_ctrl_set_guest(u64 guest_s
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
 
-void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
+/**
+ * x86_spec_ctrl_restore_host - Restore host speculation control registers
+ * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *				(may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
 {
 	u64 host = x86_spec_ctrl_base;
 
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -213,6 +213,12 @@ struct vcpu_svm {
 	} host;
 
 	u64 spec_ctrl;
+	/*
+	 * Contains guest-controlled bits of VIRT_SPEC_CTRL, which
+	 * will be translated into the appropriate bits to perform
+	 * speculative control.
+	 */
+	u64 virt_spec_ctrl;
 
 	u32 *msrpm;
 
@@ -2060,6 +2066,7 @@ static void svm_vcpu_reset(struct kvm_vc
 
 	vcpu->arch.microcode_version = 0x01000065;
 	svm->spec_ctrl = 0;
+	svm->virt_spec_ctrl = 0;
 
 	if (!init_event) {
 		svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
@@ -5557,7 +5564,7 @@ static void svm_vcpu_run(struct kvm_vcpu
 	 * is no need to worry about the conditional branch over the wrmsr
 	 * being speculatively taken.
 	 */
-	x86_spec_ctrl_set_guest(svm->spec_ctrl);
+	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
 
 	asm volatile (
 		"push %%" _ASM_BP "; \n\t"
@@ -5681,7 +5688,7 @@ static void svm_vcpu_run(struct kvm_vcpu
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-	x86_spec_ctrl_restore_host(svm->spec_ctrl);
+	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
 
 	reload_tss(vcpu);
 
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9717,7 +9717,7 @@ static void __noclone vmx_vcpu_run(struc
 	 * is no need to worry about the conditional branch over the wrmsr
 	 * being speculatively taken.
 	 */
-	x86_spec_ctrl_set_guest(vmx->spec_ctrl);
+	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
 
 	vmx->__launched = vmx->loaded_vmcs->launched;
 
@@ -9865,7 +9865,7 @@ static void __noclone vmx_vcpu_run(struc
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-	x86_spec_ctrl_restore_host(vmx->spec_ctrl);
+	x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
 
 	/* Eliminate branch target predictions from guest mode */
 	vmexit_fill_RSB();

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 08/15] Hidden 8
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (6 preceding siblings ...)
  2018-05-13 14:00 ` [patch 07/15] Hidden 7 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 17:58   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:31   ` Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 09/15] Hidden 9 Thomas Gleixner
                   ` (7 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

Some AMD processors only support a non-architectural means of enabling
speculative store bypass disable (SSBD).  To allow a simplified view of
this to a guest, an architectural definition has been created through a new
CPUID bit, 0x80000008_EBX[25], and a new MSR, 0xc001011f.  With this, a
hypervisor can virtualize the existence of this definition and provide an
architectural method for using SSBD to a guest.

Add the new CPUID feature, the new MSR and update the existing SSBD
support to use this MSR when present.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/include/asm/msr-index.h   |    2 ++
 arch/x86/kernel/cpu/bugs.c         |    4 +++-
 arch/x86/kernel/process.c          |   13 ++++++++++++-
 4 files changed, 18 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -282,6 +282,7 @@
 #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -347,6 +347,8 @@
 #define MSR_AMD64_SEV_ENABLED_BIT	0
 #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
 
+#define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
+
 /* Fam 17h MSRs */
 #define MSR_F17H_IRPERF			0xc00000e9
 
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -205,7 +205,9 @@ static void x86_amd_ssb_disable(void)
 {
 	u64 msrval = x86_amd_ls_cfg_base | x86_amd_ls_cfg_ssbd_mask;
 
-	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+	if (boot_cpu_has(X86_FEATURE_VIRT_SSBD))
+		wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, SPEC_CTRL_SSBD);
+	else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -366,6 +366,15 @@ static __always_inline void amd_set_core
 }
 #endif
 
+static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
+{
+	/*
+	 * SSBD has the same definition in SPEC_CTRL and VIRT_SPEC_CTRL,
+	 * so ssbd_tif_to_spec_ctrl() just works.
+	 */
+	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
+}
+
 static __always_inline void intel_set_ssb_state(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
@@ -375,7 +384,9 @@ static __always_inline void intel_set_ss
 
 static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
 {
-	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+		amd_set_ssb_virt_state(tifn);
+	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
 		intel_set_ssb_state(tifn);

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 09/15] Hidden 9
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (7 preceding siblings ...)
  2018-05-13 14:00 ` [patch 08/15] Hidden 8 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 19:49   ` [MODERATED] " Borislav Petkov
  2018-05-13 14:00 ` [patch 10/15] Hidden 10 Thomas Gleixner
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

The upcoming support for the virtual SPEC_CTRL MSR on AMD needs to reuse
speculative_store_bypass_update() to avoid code duplication. Add an
argument for supplying a thread info (TIF) value and create a wrapper
speculative_store_bypass_update_current() which is used at the existing
call site.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h |    7 ++++++-
 arch/x86/kernel/cpu/bugs.c       |    2 +-
 arch/x86/kernel/process.c        |    4 ++--
 3 files changed, 9 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -42,6 +42,11 @@ extern void speculative_store_bypass_ht_
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(void);
+extern void speculative_store_bypass_update(unsigned long tif);
+
+static inline void speculative_store_bypass_update_current(void)
+{
+	speculative_store_bypass_update(current_thread_info()->flags);
+}
 
 #endif
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -598,7 +598,7 @@ static int ssb_prctl_set(struct task_str
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update();
+		speculative_store_bypass_update_current();
 
 	return 0;
 }
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -392,10 +392,10 @@ static __always_inline void __speculativ
 		intel_set_ssb_state(tifn);
 }
 
-void speculative_store_bypass_update(void)
+void speculative_store_bypass_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(current_thread_info()->flags);
+	__speculative_store_bypass_update(tif);
 	preempt_enable();
 }
 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 10/15] Hidden 10
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (8 preceding siblings ...)
  2018-05-13 14:00 ` [patch 09/15] Hidden 9 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-16  3:38   ` [MODERATED] " Konrad Rzeszutek Wilk
  2018-05-13 14:00 ` [patch 11/15] Hidden 11 Thomas Gleixner
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

Function bodies are very similar and are going to grow more almost
identical code. Add a bool arg to determine whether SPEC_CTRL is being set
for the guest or restored to the host.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h |   33 ++++++++++++++++++---
 arch/x86/kernel/cpu/bugs.c       |   60 +++++++++------------------------------
 2 files changed, 44 insertions(+), 49 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -13,10 +13,35 @@
  * Takes the guest view of SPEC_CTRL MSR as a parameter and also
  * the guest's version of VIRT_SPEC_CTRL, if emulated.
  */
-extern void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl,
-				    u64 guest_virt_spec_ctrl);
-extern void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl,
-				       u64 guest_virt_spec_ctrl);
+extern void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest);
+
+/**
+ * x86_spec_ctrl_set_guest - Set speculation control registers for the guest
+ * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *				(may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+static inline
+void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
+{
+	x86_virt_spec_ctrl(guest_spec_ctrl, guest_virt_spec_ctrl, true);
+}
+
+/**
+ * x86_spec_ctrl_restore_host - Restore host speculation control registers
+ * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *				(may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+static inline
+void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
+{
+	x86_virt_spec_ctrl(guest_spec_ctrl, guest_virt_spec_ctrl, false);
+}
 
 /* AMD specific Speculative Store Bypass MSR data */
 extern u64 x86_amd_ls_cfg_base;
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -151,55 +151,25 @@ u64 x86_spec_ctrl_get_default(void)
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_get_default);
 
-/**
- * x86_spec_ctrl_set_guest - Set speculation control registers for the guest
- * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
- * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
- *				(may get translated to MSR_AMD64_LS_CFG bits)
- *
- * Avoids writing to the MSR if the content/bits are the same
- */
-void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
+void
+x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
 {
-	u64 host = x86_spec_ctrl_base;
+	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
+	u64 msr, host = x86_spec_ctrl_base;
 
 	/* Is MSR_SPEC_CTRL implemented ? */
-	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		return;
-
-	/* SSBD controlled in MSR_SPEC_CTRL */
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
-		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
-
-	if (host != guest_spec_ctrl)
-		wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
-}
-EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
-
-/**
- * x86_spec_ctrl_restore_host - Restore host speculation control registers
- * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
- * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
- *				(may get translated to MSR_AMD64_LS_CFG bits)
- *
- * Avoids writing to the MSR if the content/bits are the same
- */
-void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
-{
-	u64 host = x86_spec_ctrl_base;
-
-	/* Is MSR_SPEC_CTRL implemented ? */
-	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		return;
-
-	/* SSBD controlled in MSR_SPEC_CTRL */
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
-		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
-
-	if (host != guest_spec_ctrl)
-		wrmsrl(MSR_IA32_SPEC_CTRL, host);
+	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+		/* SSBD controlled in MSR_SPEC_CTRL */
+		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
+			host |= hostssbd;
+
+		if (host != guest_spec_ctrl) {
+			msr = guest ? guest_spec_ctrl : host;
+			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+		}
+	}
 }
-EXPORT_SYMBOL_GPL(x86_spec_ctrl_restore_host);
+EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
 
 static void x86_amd_ssb_disable(void)
 {

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 11/15] Hidden 11
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (9 preceding siblings ...)
  2018-05-13 14:00 ` [patch 10/15] Hidden 10 Thomas Gleixner
@ 2018-05-13 14:00 ` Thomas Gleixner
  2018-05-14 20:02   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:35   ` Konrad Rzeszutek Wilk
  2018-05-13 14:01 ` [patch 12/15] Hidden 12 Thomas Gleixner
                   ` (4 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:00 UTC (permalink / raw)
  To: speck

x86_spec_ctrl_base is the system wide default value for MSR_SPEC_CTRL.
x86_spec_ctrl_get_default() returns x86_spec_ctrl_base and was intended to
prevent modification to that variable. Though the variable is read only
after init and globaly visible already.

Remove the function and export the variable instead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/nospec-branch.h |   16 +++++-----------
 arch/x86/include/asm/spec-ctrl.h     |    3 ---
 arch/x86/kernel/cpu/bugs.c           |   13 ++-----------
 3 files changed, 7 insertions(+), 25 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -217,16 +217,7 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS,
 };
 
-/*
- * The Intel specification for the SPEC_CTRL MSR requires that we
- * preserve any already set reserved bits at boot time (e.g. for
- * future additions that this kernel is not currently aware of).
- * We then set any additional mitigation bits that we want
- * ourselves and always use this as the base for SPEC_CTRL.
- * We also use this when handling guest entry/exit as below.
- */
 extern void x86_spec_ctrl_set(u64);
-extern u64 x86_spec_ctrl_get_default(void);
 
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
@@ -278,6 +269,9 @@ static inline void indirect_branch_predi
 	alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB);
 }
 
+/* The Intel SPEC CTRL MSR base value cache */
+extern u64 x86_spec_ctrl_base;
+
 /*
  * With retpoline, we must use IBRS to restrict branch prediction
  * before calling into firmware.
@@ -286,7 +280,7 @@ static inline void indirect_branch_predi
  */
 #define firmware_restrict_branch_speculation_start()			\
 do {									\
-	u64 val = x86_spec_ctrl_get_default() | SPEC_CTRL_IBRS;		\
+	u64 val = x86_spec_ctrl_base | SPEC_CTRL_IBRS;			\
 									\
 	preempt_disable();						\
 	alternative_msr_write(MSR_IA32_SPEC_CTRL, val,			\
@@ -295,7 +289,7 @@ do {									\
 
 #define firmware_restrict_branch_speculation_end()			\
 do {									\
-	u64 val = x86_spec_ctrl_get_default();				\
+	u64 val = x86_spec_ctrl_base;					\
 									\
 	alternative_msr_write(MSR_IA32_SPEC_CTRL, val,			\
 			      X86_FEATURE_USE_IBRS_FW);			\
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -47,9 +47,6 @@ void x86_spec_ctrl_restore_host(u64 gues
 extern u64 x86_amd_ls_cfg_base;
 extern u64 x86_amd_ls_cfg_ssbd_mask;
 
-/* The Intel SPEC CTRL MSR base value cache */
-extern u64 x86_spec_ctrl_base;
-
 static inline u64 ssbd_tif_to_spec_ctrl(u64 tifn)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -36,12 +36,13 @@ static void __init ssb_select_mitigation
  * writes to SPEC_CTRL contain whatever reserved bits have been set.
  */
 u64 __ro_after_init x86_spec_ctrl_base;
+EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
 
 /*
  * The vendor and possibly platform specific bits which can be modified in
  * x86_spec_ctrl_base.
  */
-static u64 __ro_after_init x86_spec_ctrl_mask = ~SPEC_CTRL_IBRS;
+static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
 
 /*
  * AMD specific MSR info for Speculative Store Bypass control.
@@ -141,16 +142,6 @@ void x86_spec_ctrl_set(u64 val)
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_set);
 
-u64 x86_spec_ctrl_get_default(void)
-{
-	u64 msrval = x86_spec_ctrl_base;
-
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
-		msrval |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
-	return msrval;
-}
-EXPORT_SYMBOL_GPL(x86_spec_ctrl_get_default);
-
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
 {

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 12/15] Hidden 12
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (10 preceding siblings ...)
  2018-05-13 14:00 ` [patch 11/15] Hidden 11 Thomas Gleixner
@ 2018-05-13 14:01 ` Thomas Gleixner
  2018-05-14 20:18   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:40   ` Konrad Rzeszutek Wilk
  2018-05-13 14:01 ` [patch 13/15] Hidden 13 Thomas Gleixner
                   ` (3 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:01 UTC (permalink / raw)
  To: speck

x86_spec_ctrl_set() is only used in bugs.c and the extra mask checks there
provide no real value as both call sites can just write x86_spec_ctrl_base
to MSR_SPEC_CTRL. x86_spec_ctrl_base is valid and does not need any extra
masking or checking.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/nospec-branch.h |    2 --
 arch/x86/kernel/cpu/bugs.c           |   13 ++-----------
 2 files changed, 2 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -217,8 +217,6 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS,
 };
 
-extern void x86_spec_ctrl_set(u64);
-
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -133,15 +133,6 @@ static const char *spectre_v2_strings[]
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
-void x86_spec_ctrl_set(u64 val)
-{
-	if (val & x86_spec_ctrl_mask)
-		WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
-	else
-		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
-}
-EXPORT_SYMBOL_GPL(x86_spec_ctrl_set);
-
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
 {
@@ -503,7 +494,7 @@ static enum ssb_mitigation __init __ssb_
 		case X86_VENDOR_INTEL:
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask &= ~SPEC_CTRL_SSBD;
-			x86_spec_ctrl_set(SPEC_CTRL_SSBD);
+			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			break;
 		case X86_VENDOR_AMD:
 			x86_amd_ssb_disable();
@@ -615,7 +606,7 @@ int arch_prctl_spec_ctrl_get(struct task
 void x86_spec_ctrl_setup_ap(void)
 {
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		x86_spec_ctrl_set(x86_spec_ctrl_base & ~x86_spec_ctrl_mask);
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 
 	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
 		x86_amd_ssb_disable();

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 13/15] Hidden 13
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (11 preceding siblings ...)
  2018-05-13 14:01 ` [patch 12/15] Hidden 12 Thomas Gleixner
@ 2018-05-13 14:01 ` Thomas Gleixner
  2018-05-15  9:27   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:42   ` Konrad Rzeszutek Wilk
  2018-05-13 14:01 ` [patch 14/15] Hidden 14 Thomas Gleixner
                   ` (2 subsequent siblings)
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:01 UTC (permalink / raw)
  To: speck

x86_spec_ctrL_mask is intended to mask out bits from a MSR_SPEC_CTRL value
which are not to be modified. Though the implementation is not really used
and the bitmask is inverted for no real reason. Aside of that it is missing
the STIBP bit if it is supported by the platform, so if the mask would be
used in x86_virt_spec_ctrl() then it would prevent a guest from setting
STIBP.

Add the STIBP bit if supported and use the mask in x86_spec_ctrl_set_guest()
to sanitize the value which is supplied by the guest.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |   22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,10 @@ void __init check_bugs(void)
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
 		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 
+	/* Allow STIBP in MSR_SPEC_CTRL if supported */
+	if (boot_cpu_has(X86_FEATURE_STIBP))
+		x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
+
 	/* Select the proper spectre mitigation before patching alternatives */
 	spectre_v2_select_mitigation();
 
@@ -134,19 +138,27 @@ static enum spectre_v2_mitigation spectr
 	SPECTRE_V2_NONE;
 
 void
-x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
+x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
 	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
-	u64 msr, host = x86_spec_ctrl_base;
+	u64 msr, guest, host = x86_spec_ctrl_base;
 
 	/* Is MSR_SPEC_CTRL implemented ? */
 	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+		/*
+		 * Restrict guest_spec_ctrl to supported values. Clear the
+		 * modifiable bits in the host base value and or the
+		 * modifiable bits from the guest value.
+		 */
+		guest = host & ~x86_spec_ctrl_mask;
+		guest |= guest_spec_ctrl & x86_spec_ctrl_mask;
+
 		/* SSBD controlled in MSR_SPEC_CTRL */
 		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
 			host |= hostssbd;
 
-		if (host != guest_spec_ctrl) {
-			msr = guest ? guest_spec_ctrl : host;
+		if (host != guest) {
+			msr = setguest ? guest : host;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 		}
 	}
@@ -493,7 +505,7 @@ static enum ssb_mitigation __init __ssb_
 		switch (boot_cpu_data.x86_vendor) {
 		case X86_VENDOR_INTEL:
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
-			x86_spec_ctrl_mask &= ~SPEC_CTRL_SSBD;
+			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			break;
 		case X86_VENDOR_AMD:

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 14/15] Hidden 14
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (12 preceding siblings ...)
  2018-05-13 14:01 ` [patch 13/15] Hidden 13 Thomas Gleixner
@ 2018-05-13 14:01 ` Thomas Gleixner
  2018-05-15 15:35   ` [MODERATED] " Borislav Petkov
  2018-05-16  3:44   ` Konrad Rzeszutek Wilk
  2018-05-13 14:01 ` [patch 15/15] Hidden 15 Thomas Gleixner
  2018-05-13 14:22 ` [patch 00/15] Hidden 0 Thomas Gleixner
  15 siblings, 2 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:01 UTC (permalink / raw)
  To: speck

Add the necessary logic for supporting the emulated VIRT_SPEC_CTRL MSR to
x86_virt_spec_ctrl().  If either X86_FEATURE_LS_CFG_SSBD or
X86_FEATURE_VIRT_SPEC_CTRL is set then use the new guest_virt_spec_ctrl
argument to check whether the state must be modified on the host. The
update reuses speculative_store_bypass_update() so the ZEN-specific sibling
coordination can be reused.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h |    6 ++++++
 arch/x86/kernel/cpu/bugs.c       |   24 ++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -56,6 +56,12 @@ static inline u64 ssbd_tif_to_spec_ctrl(
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -162,6 +162,30 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
 			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 		}
 	}
+
+	/*
+	 * If SSBD is not handled in MSR_SPEC_CTRL on AMD update
+	 * MSR_AMD64_L2_CFG or MSR_VIRT_SPEC_CTRL if supported.
+	 */
+	if (!static_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+	    !static_cpu_has(X86_FEATURE_VIRT_SSBD))
+		return;
+
+	/* If host has SSBD disabled via command line, force it */
+	if (static_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
+		hostssbd |= SPEC_CTRL_SSBD;
+
+	/* Sanitize the guest value */
+	guest = guest_virt_spec_ctrl & SPEC_CTRL_SSBD;
+
+	if (hostssbd != guest) {
+		unsigned long tif;
+
+		tif = setguest ? ssbd_spec_ctrl_to_tif(guest) :
+				 ssbd_spec_ctrl_to_tif(hostssbd);
+
+		speculative_store_bypass_update(tif);
+	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [patch 15/15] Hidden 15
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (13 preceding siblings ...)
  2018-05-13 14:01 ` [patch 14/15] Hidden 14 Thomas Gleixner
@ 2018-05-13 14:01 ` Thomas Gleixner
  2018-05-13 14:22 ` [patch 00/15] Hidden 0 Thomas Gleixner
  15 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:01 UTC (permalink / raw)
  To: speck

Expose the new virtualized architectural mechanism, VIRT_SSBD, for using
speculative store bypass disable (SSBD) under SVM.  This will allow guests
to use SSBD on hardware that uses non-architectural mechanisms for enabling
SSBD.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/common.c |    3 ++-
 arch/x86/kvm/cpuid.c         |   11 +++++++++--
 arch/x86/kvm/svm.c           |   17 +++++++++++++++++
 3 files changed, 28 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -767,7 +767,8 @@ static void init_speculation_control(str
 	if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 
-	if (cpu_has(c, X86_FEATURE_SPEC_CTRL_SSBD))
+	if (cpu_has(c, X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    cpu_has(c, X86_FEATURE_VIRT_SSBD))
 		set_cpu_cap(c, X86_FEATURE_SSBD);
 
 	if (cpu_has(c, X86_FEATURE_AMD_IBRS)) {
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -379,7 +379,7 @@ static inline int __do_cpuid_ent(struct
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
@@ -647,13 +647,20 @@ static inline int __do_cpuid_ent(struct
 			g_phys_as = phys_as;
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
-		/* IBRS and IBPB aren't necessarily present in hardware cpuid */
+		/*
+		 * IBRS, IBPB and VIRT_SSBD aren't necessarily present in
+		 * hardware cpuid
+		 */
 		if (boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			entry->ebx |= F(AMD_IBPB);
 		if (boot_cpu_has(X86_FEATURE_AMD_IBRS))
 			entry->ebx |= F(AMD_IBRS);
+		if (boot_cpu_has(X86_FEATURE_VIRT_SSBD))
+			entry->ebx |= F(VIRT_SSBD);
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
+		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			entry->ebx |= F(VIRT_SSBD);
 		break;
 	}
 	case 0x80000019:
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4120,6 +4120,13 @@ static int svm_get_msr(struct kvm_vcpu *
 
 		msr_info->data = svm->spec_ctrl;
 		break;
+	case MSR_AMD64_VIRT_SPEC_CTRL:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_VIRT_SSBD))
+			return 1;
+
+		msr_info->data = svm->virt_spec_ctrl;
+		break;
 	case MSR_F15H_IC_CFG: {
 
 		int family, model;
@@ -4251,6 +4258,16 @@ static int svm_set_msr(struct kvm_vcpu *
 			break;
 		set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
 		break;
+	case MSR_AMD64_VIRT_SPEC_CTRL:
+		if (!msr->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_VIRT_SSBD))
+			return 1;
+
+		if (data & ~SPEC_CTRL_SSBD)
+			return 1;
+
+		svm->virt_spec_ctrl = data;
+		break;
 	case MSR_STAR:
 		svm->vmcb->save.star = data;
 		break;

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 00/15] Hidden 0
  2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
                   ` (14 preceding siblings ...)
  2018-05-13 14:01 ` [patch 15/15] Hidden 15 Thomas Gleixner
@ 2018-05-13 14:22 ` Thomas Gleixner
  15 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-13 14:22 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 399 bytes --]

On Sun, 13 May 2018, speck for Thomas Gleixner wrote:

> The following patch series adds the necessary glue for:
> 
>  - AMD Zen: software coordination of sibling SSBD state
> 
>  - AMD/KVM: Support for MSR_VIRT_SPEC_CTRL
> 
>  - Use x86_spec_ctrl_mask to sanitize the guest_spec_ctrl values
> 
>  - Simplifications of the code base
> 
> Git bundle comes in follow up mail.

Attached

Thanks,

	tglx

[-- Attachment #2: Type: application/octet-stream, Size: 72274 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 01/15] Hidden 1
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
@ 2018-05-13 22:17   ` Borislav Petkov
  2018-05-15  9:30   ` Paolo Bonzini
  2018-05-16  2:32   ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-13 22:17 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:49PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 01/15] KVM: SVM: Move spec control call after restore of GS

Make the subject prefix "kvm/svm: ... "

> From: Thomas Gleixner <tglx@linutronix.de>
> 
> svm_vcpu_run() invokes x86_spec_ctrl_restore_host() after VMEXIT, but
> before the host GS is restored. x86_spec_ctrl_restore_host() uses 'current'
> to determine the host SSBD state of the thread. 'current' is GS based, but
> host GS is not yet restored and the access causes a triple fault.
> 
> Move the call after the host GS restore.
> 
> Fixes: 5cf687548705 ("x86/bugs, KVM: Support the combination of guest and host IBRS")
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kvm/svm.c |   24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 03/15] Hidden 3
  2018-05-13 14:00 ` [patch 03/15] Hidden 3 Thomas Gleixner
@ 2018-05-14 10:02   ` Borislav Petkov
  2018-05-16  2:49   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 10:02 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:51PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 03/15] x86/cpufeatures: Disentangle MSR_SPEC_CTRL enumeration from IBRS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The availability of the SPEC_CTRL MSR is enumerated by a CPUID bit on
> Intel and implied by IBRS or STIBP support on AMD. That's just confusing
> and in case an AMD CPU has IBRS not supported because the underlying
> problem has been fixed but has another bit valid in the SPEC_CTRL MSR,
> the thing falls apart.
> 
> Add a synthetic feature bit X86_FEATURE_MSR_SPEC_CTRL to denote the
> availability on both Intel and AMD.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/kernel/cpu/bugs.c         |   18 +++++++++++-------
>  arch/x86/kernel/cpu/common.c       |    9 +++++++--
>  arch/x86/kernel/cpu/intel.c        |    1 +
>  4 files changed, 20 insertions(+), 9 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 04/15] Hidden 4
  2018-05-13 14:00 ` [patch 04/15] Hidden 4 Thomas Gleixner
@ 2018-05-14 11:11   ` Borislav Petkov
  2018-05-16  2:53   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 11:11 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:52PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 04/15] x86/cpufeatures: Disentangle SSBD enumeration
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The SSBD enumeration is similarly to the other bits magically shared
> between Intel and AMD though the mechanisms are different.
> 
> Make X86_FEATURE_SSBD synthetic and set it depending on the vendor specific
> features or family dependent setup.
> 
> Change the Intel bit to X86_FEATURE_SPEC_CTRL_SSBD to denote that SSBD is
> controlled via MSR_SPEC_CTRL and fix up the usage sites.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    7 +++----
>  arch/x86/kernel/cpu/amd.c          |    7 +------
>  arch/x86/kernel/cpu/bugs.c         |   10 +++++-----
>  arch/x86/kernel/cpu/common.c       |    3 +++
>  arch/x86/kernel/process.c          |    2 +-
>  5 files changed, 13 insertions(+), 16 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 05/15] Hidden 5
  2018-05-13 14:00 ` [patch 05/15] Hidden 5 Thomas Gleixner
@ 2018-05-14 11:18   ` Borislav Petkov
  2018-05-16  3:24   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 11:18 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:53PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 05/15] x86/cpufeatures: Add FEATURE_ZEN
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a ZEN feature bit so family-dependent static_cpu_has() optimizations
> can be built for ZEN.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/kernel/cpu/amd.c          |    1 +
>  2 files changed, 2 insertions(+)
> 
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -218,6 +218,7 @@
>  #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
>  #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
> +#define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
>  
>  /* Virtualization flags: Linux defined, word 8 */
>  #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -812,6 +812,7 @@ static void init_amd_bd(struct cpuinfo_x
>  
>  static void init_amd_zn(struct cpuinfo_x86 *c)
>  {
> +	set_cpu_cap(c, X86_FEATURE_ZEN);
>  	/*
>  	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
>  	 * all up to and including B1.
> 

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 06/15] Hidden 6
  2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
@ 2018-05-14 12:01   ` Borislav Petkov
  2018-05-14 12:09   ` Peter Zijlstra
  2018-05-16  3:15   ` [MODERATED] " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 12:01 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:54PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 06/15] x86/speculation: Handle HT correctly on AMD
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
> hyperthreading is enabled the SSBD bit toggle needs to take both cores into
> account. Otherwise the following situation can happen:
> 
> CPU0		CPU1
> 
> disable SSB
> 		disable SSB
> 		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well
> 
> So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
> again.
> 
> On Intel the SSBD control is per core as well, but the synchronization
> logic is implemented behind the per thread SPEC_CTRL MSR.
> 
> Add the necessary synchronization logic for AMD family 17H. Unfortunately
> that requires a spinlock to serialize the access to the MSR, but the locks
> are only shared between siblings.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    6 ++
>  arch/x86/kernel/process.c        |  108 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/kernel/smpboot.c        |    5 +
>  3 files changed, 113 insertions(+), 6 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 06/15] Hidden 6
  2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
  2018-05-14 12:01   ` [MODERATED] " Borislav Petkov
@ 2018-05-14 12:09   ` Peter Zijlstra
  2018-05-14 12:46     ` Thomas Gleixner
  2018-05-16  3:15   ` [MODERATED] " Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2018-05-14 12:09 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:54PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 06/15] x86/speculation: Handle HT correctly on AMD
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
> hyperthreading is enabled the SSBD bit toggle needs to take both cores into
> account. Otherwise the following situation can happen:
> 
> CPU0		CPU1
> 
> disable SSB
> 		disable SSB
> 		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well
> 
> So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
> again.
> 
> On Intel the SSBD control is per core as well, but the synchronization
> logic is implemented behind the per thread SPEC_CTRL MSR.
> 
> Add the necessary synchronization logic for AMD family 17H. Unfortunately
> that requires a spinlock to serialize the access to the MSR, but the locks
> are only shared between siblings.

So we don't need this from NMI context anymore?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 06/15] Hidden 6
  2018-05-14 12:09   ` Peter Zijlstra
@ 2018-05-14 12:46     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-14 12:46 UTC (permalink / raw)
  To: speck

On Mon, 14 May 2018, speck for Peter Zijlstra wrote:
> On Sun, May 13, 2018 at 04:00:54PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 06/15] x86/speculation: Handle HT correctly on AMD
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
> > hyperthreading is enabled the SSBD bit toggle needs to take both cores into
> > account. Otherwise the following situation can happen:
> > 
> > CPU0		CPU1
> > 
> > disable SSB
> > 		disable SSB
> > 		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well
> > 
> > So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
> > again.
> > 
> > On Intel the SSBD control is per core as well, but the synchronization
> > logic is implemented behind the per thread SPEC_CTRL MSR.
> > 
> > Add the necessary synchronization logic for AMD family 17H. Unfortunately
> > that requires a spinlock to serialize the access to the MSR, but the locks
> > are only shared between siblings.
> 
> So we don't need this from NMI context anymore?

Well, toggling that stuff from NMI is horrible anyway and I don't see how
that would ever work with the Zen sibling coordination.

The current goal is to focus on software mitigations in BPF and avoid the
toggle stuff from random contexts completely.  Alexei will show up on the
list soon...

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 07/15] Hidden 7
  2018-05-13 14:00 ` [patch 07/15] Hidden 7 Thomas Gleixner
@ 2018-05-14 17:07   ` Borislav Petkov
  2018-05-16  3:22   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 17:07 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:55PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 07/15] x86/bugs, KVM: Extend speculation control for VIRT_SPEC_CTRL
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
> Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
> about the bit position of the SSBD bit and thus facilitate migration.
> Also, the sibling coordination on Family 17H CPUs can only be done on
> the host.
> 
> Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an
> extra argument for the VIRT_SPEC_CTRL MSR.
> 
> Hand in 0 from VMX and in SVM add a new virt_spec_ctrl member to the CPU
> data structure which is going to be used in later patches for the actual
> implementation.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    9 ++++++---
>  arch/x86/kernel/cpu/bugs.c       |   20 ++++++++++++++++++--
>  arch/x86/kvm/svm.c               |   11 +++++++++--
>  arch/x86/kvm/vmx.c               |    4 ++--
>  4 files changed, 35 insertions(+), 9 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 08/15] Hidden 8
  2018-05-13 14:00 ` [patch 08/15] Hidden 8 Thomas Gleixner
@ 2018-05-14 17:58   ` Borislav Petkov
  2018-05-16  3:31   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 17:58 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:56PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 08/15] x86/speculation: Add virtualized speculative store bypass disable support
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Some AMD processors only support a non-architectural means of enabling
> speculative store bypass disable (SSBD).  To allow a simplified view of
> this to a guest, an architectural definition has been created through a new
> CPUID bit, 0x80000008_EBX[25], and a new MSR, 0xc001011f.  With this, a
> hypervisor can virtualize the existence of this definition and provide an
> architectural method for using SSBD to a guest.
> 
> Add the new CPUID feature, the new MSR and update the existing SSBD
> support to use this MSR when present.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/include/asm/msr-index.h   |    2 ++
>  arch/x86/kernel/cpu/bugs.c         |    4 +++-
>  arch/x86/kernel/process.c          |   13 ++++++++++++-
>  4 files changed, 18 insertions(+), 2 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 09/15] Hidden 9
  2018-05-13 14:00 ` [patch 09/15] Hidden 9 Thomas Gleixner
@ 2018-05-14 19:49   ` Borislav Petkov
  0 siblings, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 19:49 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:57PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 09/15] x86/speculation: Rework speculative_store_bypass_update()
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The upcoming support for the virtual SPEC_CTRL MSR on AMD needs to reuse
> speculative_store_bypass_update() to avoid code duplication. Add an
> argument for supplying a thread info (TIF) value and create a wrapper
> speculative_store_bypass_update_current() which is used at the existing
> call site.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    7 ++++++-
>  arch/x86/kernel/cpu/bugs.c       |    2 +-
>  arch/x86/kernel/process.c        |    4 ++--
>  3 files changed, 9 insertions(+), 4 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 11/15] Hidden 11
  2018-05-13 14:00 ` [patch 11/15] Hidden 11 Thomas Gleixner
@ 2018-05-14 20:02   ` Borislav Petkov
  2018-05-16  3:35   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 20:02 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:59PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 11/15] x86/bugs: Expose x86_spec_ctrl_base directly
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrl_base is the system wide default value for MSR_SPEC_CTRL.
> x86_spec_ctrl_get_default() returns x86_spec_ctrl_base and was intended to
> prevent modification to that variable. Though the variable is read only
> after init and globaly visible already.
> 
> Remove the function and export the variable instead.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/nospec-branch.h |   16 +++++-----------
>  arch/x86/include/asm/spec-ctrl.h     |    3 ---
>  arch/x86/kernel/cpu/bugs.c           |   13 ++-----------
>  3 files changed, 7 insertions(+), 25 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 12/15] Hidden 12
  2018-05-13 14:01 ` [patch 12/15] Hidden 12 Thomas Gleixner
@ 2018-05-14 20:18   ` Borislav Petkov
  2018-05-16  3:40   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-14 20:18 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:00PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 12/15] x86/bugs: Remove x86_spec_ctrl_set()
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrl_set() is only used in bugs.c and the extra mask checks there
> provide no real value as both call sites can just write x86_spec_ctrl_base
> to MSR_SPEC_CTRL. x86_spec_ctrl_base is valid and does not need any extra
> masking or checking.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/nospec-branch.h |    2 --
>  arch/x86/kernel/cpu/bugs.c           |   13 ++-----------
>  2 files changed, 2 insertions(+), 13 deletions(-)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 13/15] Hidden 13
  2018-05-13 14:01 ` [patch 13/15] Hidden 13 Thomas Gleixner
@ 2018-05-15  9:27   ` Borislav Petkov
  2018-05-16  3:42   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-15  9:27 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:01PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 13/15] x86/bugs: Rework spec_ctrl base and mask logic
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrL_mask is intended to mask out bits from a MSR_SPEC_CTRL value
> which are not to be modified. Though the implementation is not really used

s/Though/However,/

> and the bitmask is inverted for no real reason. Aside of that it is missing
> the STIBP bit if it is supported by the platform, so if the mask would be
> used in x86_virt_spec_ctrl() then it would prevent a guest from setting
> STIBP.
> 
> Add the STIBP bit if supported and use the mask in x86_spec_ctrl_set_guest()
> to sanitize the value which is supplied by the guest.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/bugs.c |   22 +++++++++++++++++-----
>  1 file changed, 17 insertions(+), 5 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -68,6 +68,10 @@ void __init check_bugs(void)
>  	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>  		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  
> +	/* Allow STIBP in MSR_SPEC_CTRL if supported */
> +	if (boot_cpu_has(X86_FEATURE_STIBP))
> +		x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
> +
>  	/* Select the proper spectre mitigation before patching alternatives */
>  	spectre_v2_select_mitigation();
>  
> @@ -134,19 +138,27 @@ static enum spectre_v2_mitigation spectr
>  	SPECTRE_V2_NONE;
>  
>  void
> -x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
> +x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
>  {
>  	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> -	u64 msr, host = x86_spec_ctrl_base;
> +	u64 msr, guest, host = x86_spec_ctrl_base;
>  
>  	/* Is MSR_SPEC_CTRL implemented ? */
>  	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> +		/*
> +		 * Restrict guest_spec_ctrl to supported values. Clear the
> +		 * modifiable bits in the host base value and or the
> +		 * modifiable bits from the guest value.
> +		 */
> +		guest = host & ~x86_spec_ctrl_mask;
> +		guest |= guest_spec_ctrl & x86_spec_ctrl_mask;
> +
>  		/* SSBD controlled in MSR_SPEC_CTRL */
>  		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
>  			host |= hostssbd;
>  
> -		if (host != guest_spec_ctrl) {
> -			msr = guest ? guest_spec_ctrl : host;
> +		if (host != guest) {
> +			msr = setguest ? guest : host;

Just a nitpick:
			msrval = setguest ? guest : host;
			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);

calling it "msrval" is a bit clearer as it shows that's you're selecting
the MSR *value* and not the MSR itself. (And yes, we do select which
MSRs to access in other places).

With that

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 01/15] Hidden 1
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
  2018-05-13 22:17   ` [MODERATED] " Borislav Petkov
@ 2018-05-15  9:30   ` Paolo Bonzini
  2018-05-16  2:32   ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 57+ messages in thread
From: Paolo Bonzini @ 2018-05-15  9:30 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1965 bytes --]

On 13/05/2018 16:00, speck for Thomas Gleixner wrote:
> 
> 
> Subject: [patch 01/15] KVM: SVM: Move spec control call after restore of GS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> svm_vcpu_run() invokes x86_spec_ctrl_restore_host() after VMEXIT, but
> before the host GS is restored. x86_spec_ctrl_restore_host() uses 'current'
> to determine the host SSBD state of the thread. 'current' is GS based, but
> host GS is not yet restored and the access causes a triple fault.
> 
> Move the call after the host GS restore.
> 
> Fixes: 5cf687548705 ("x86/bugs, KVM: Support the combination of guest and host IBRS")
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Thanks,

Paolo

> ---
>  arch/x86/kvm/svm.c |   24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5651,6 +5651,18 @@ static void svm_vcpu_run(struct kvm_vcpu
>  #endif
>  		);
>  
> +	/* Eliminate branch target predictions from guest mode */
> +	vmexit_fill_RSB();
> +
> +#ifdef CONFIG_X86_64
> +	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> +#else
> +	loadsegment(fs, svm->host.fs);
> +#ifndef CONFIG_X86_32_LAZY_GS
> +	loadsegment(gs, svm->host.gs);
> +#endif
> +#endif
> +
>  	/*
>  	 * We do not use IBRS in the kernel. If this vCPU has used the
>  	 * SPEC_CTRL MSR it may have left it on; save the value and
> @@ -5671,18 +5683,6 @@ static void svm_vcpu_run(struct kvm_vcpu
>  
>  	x86_spec_ctrl_restore_host(svm->spec_ctrl);
>  
> -	/* Eliminate branch target predictions from guest mode */
> -	vmexit_fill_RSB();
> -
> -#ifdef CONFIG_X86_64
> -	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> -#else
> -	loadsegment(fs, svm->host.fs);
> -#ifndef CONFIG_X86_32_LAZY_GS
> -	loadsegment(gs, svm->host.gs);
> -#endif
> -#endif
> -
>  	reload_tss(vcpu);
>  
>  	local_irq_disable();
> 



^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 14/15] Hidden 14
  2018-05-13 14:01 ` [patch 14/15] Hidden 14 Thomas Gleixner
@ 2018-05-15 15:35   ` Borislav Petkov
  2018-05-16  3:44   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-15 15:35 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:02PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 14/15] x86/speculation, KVM: Implement support for VIRT_SPEC_CTRL/LS_CFG
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add the necessary logic for supporting the emulated VIRT_SPEC_CTRL MSR to
> x86_virt_spec_ctrl().  If either X86_FEATURE_LS_CFG_SSBD or
> X86_FEATURE_VIRT_SPEC_CTRL is set then use the new guest_virt_spec_ctrl
> argument to check whether the state must be modified on the host. The
> update reuses speculative_store_bypass_update() so the ZEN-specific sibling
> coordination can be reused.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    6 ++++++
>  arch/x86/kernel/cpu/bugs.c       |   24 ++++++++++++++++++++++++
>  2 files changed, 30 insertions(+)
> 
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -56,6 +56,12 @@ static inline u64 ssbd_tif_to_spec_ctrl(
>  	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
> +{
> +	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
> +	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
> +}
> +
>  static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
>  {
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -162,6 +162,30 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  		}
>  	}
> +
> +	/*
> +	 * If SSBD is not handled in MSR_SPEC_CTRL on AMD update
> +	 * MSR_AMD64_L2_CFG or MSR_VIRT_SPEC_CTRL if supported.
> +	 */
> +	if (!static_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
> +	    !static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +		return;
> +
> +	/* If host has SSBD disabled via command line, force it */

Maybe

	/* If host has SSBD mitigation enabled, force it: */

is a bit clearer?

Other than that:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 01/15] Hidden 1
  2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
  2018-05-13 22:17   ` [MODERATED] " Borislav Petkov
  2018-05-15  9:30   ` Paolo Bonzini
@ 2018-05-16  2:32   ` Konrad Rzeszutek Wilk
  2018-05-16  7:51     ` Thomas Gleixner
  2 siblings, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  2:32 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:49PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 01/15] KVM: SVM: Move spec control call after restore of GS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> svm_vcpu_run() invokes x86_spec_ctrl_restore_host() after VMEXIT, but
> before the host GS is restored. x86_spec_ctrl_restore_host() uses 'current'
> to determine the host SSBD state of the thread. 'current' is GS based, but
> host GS is not yet restored and the access causes a triple fault.
> 
> Move the call after the host GS restore.
> 
> Fixes: 5cf687548705 ("x86/bugs, KVM: Support the combination of guest and host IBRS")

But that commit didn't use 'current' at all. It had:

void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
{
       if (!boot_cpu_has(X86_FEATURE_IBRS))
               return;
       if (x86_spec_ctrl_base != guest_spec_ctrl)
               wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}
EXPORT_SYMBOL_GPL(x86_spec_ctrl_restore_host);

I think you meant:

885f82bfbc6f x86/process: Allow runtime control of Speculative Store Bypass

with that:

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kvm/svm.c |   24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5651,6 +5651,18 @@ static void svm_vcpu_run(struct kvm_vcpu
>  #endif
>  		);
>  
> +	/* Eliminate branch target predictions from guest mode */
> +	vmexit_fill_RSB();
> +
> +#ifdef CONFIG_X86_64
> +	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> +#else
> +	loadsegment(fs, svm->host.fs);
> +#ifndef CONFIG_X86_32_LAZY_GS
> +	loadsegment(gs, svm->host.gs);
> +#endif
> +#endif
> +
>  	/*
>  	 * We do not use IBRS in the kernel. If this vCPU has used the
>  	 * SPEC_CTRL MSR it may have left it on; save the value and
> @@ -5671,18 +5683,6 @@ static void svm_vcpu_run(struct kvm_vcpu
>  
>  	x86_spec_ctrl_restore_host(svm->spec_ctrl);
>  
> -	/* Eliminate branch target predictions from guest mode */
> -	vmexit_fill_RSB();
> -
> -#ifdef CONFIG_X86_64
> -	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
> -#else
> -	loadsegment(fs, svm->host.fs);
> -#ifndef CONFIG_X86_32_LAZY_GS
> -	loadsegment(gs, svm->host.gs);
> -#endif
> -#endif
> -
>  	reload_tss(vcpu);
>  
>  	local_irq_disable();
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 02/15] Hidden 2
  2018-05-13 14:00 ` [patch 02/15] Hidden 2 Thomas Gleixner
@ 2018-05-16  2:39   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  2:39 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:50PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 02/15] x86/speculation: Use synthetic bits for IBRS/IBPB/STIBP
> From: Borislav Petkov <bp@suse.de>
> 
> Intel and AMD have different CPUID bits for those so use synthetic bits

s/for those so use/hence for those use/
> which get set on the respective vendor in init_speculation_control(). So

s/vendor/vendor's/
> that debacles like the commit message of

s/like the/like what the/
> 
>   c65732e4f721 ("x86/cpu: Restore CPUID_8000_0008_EBX reload")
> 
> talks about don't happen anymore.
> 
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Jörg Otte <jrg.otte@gmail.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Link: https://lkml.kernel.org/r/20180504161815.GG9257@pd.tnic

.. with that change above:


Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 03/15] Hidden 3
  2018-05-13 14:00 ` [patch 03/15] Hidden 3 Thomas Gleixner
  2018-05-14 10:02   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  2:49   ` Konrad Rzeszutek Wilk
  2018-05-16  8:07     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  2:49 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:51PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 03/15] x86/cpufeatures: Disentangle MSR_SPEC_CTRL enumeration from IBRS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The availability of the SPEC_CTRL MSR is enumerated by a CPUID bit on
> Intel and implied by IBRS or STIBP support on AMD. That's just confusing
> and in case an AMD CPU has IBRS not supported because the underlying
> problem has been fixed but has another bit valid in the SPEC_CTRL MSR,
> the thing falls apart.
> 
> Add a synthetic feature bit X86_FEATURE_MSR_SPEC_CTRL to denote the
> availability on both Intel and AMD.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/kernel/cpu/bugs.c         |   18 +++++++++++-------
>  arch/x86/kernel/cpu/common.c       |    9 +++++++--
>  arch/x86/kernel/cpu/intel.c        |    1 +
>  4 files changed, 20 insertions(+), 9 deletions(-)
> 
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -206,6 +206,7 @@
>  #define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
>  #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
>  #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
> +#define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
>  
>  #define X86_FEATURE_MBA			( 7*32+18) /* Memory Bandwidth Allocation */
>  #define X86_FEATURE_RSB_CTXSW		( 7*32+19) /* "" Fill RSB on context switches */
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -64,7 +64,7 @@ void __init check_bugs(void)
>  	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
>  	 * init code as it is not enumerated and depends on the family.
>  	 */
> -	if (boot_cpu_has(X86_FEATURE_IBRS))
> +	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>  		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  
>  	/* Select the proper spectre mitigation before patching alternatives */
> @@ -145,7 +145,7 @@ u64 x86_spec_ctrl_get_default(void)
>  {
>  	u64 msrval = x86_spec_ctrl_base;
>  
> -	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))

You are using the static_cpu_has which ends up doing alternative
patching - neat.

But what if the machine at bootup has no SPEC_CTRL MSR support at all,
and then we end up loading a new microcode with this and then suddenly
SPEC_CTRL MSR is available?

Wouldn't the static_cpu_has in this scenario end up always returning false?

[And this may all be moot - perhaps we don't support this scenario?]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 04/15] Hidden 4
  2018-05-13 14:00 ` [patch 04/15] Hidden 4 Thomas Gleixner
  2018-05-14 11:11   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  2:53   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  2:53 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:52PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 04/15] x86/cpufeatures: Disentangle SSBD enumeration
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The SSBD enumeration is similarly to the other bits magically shared
> between Intel and AMD though the mechanisms are different.
> 
> Make X86_FEATURE_SSBD synthetic and set it depending on the vendor specific
> features or family dependent setup.
> 
> Change the Intel bit to X86_FEATURE_SPEC_CTRL_SSBD to denote that SSBD is
> controlled via MSR_SPEC_CTRL and fix up the usage sites.

I think you are missing the clearing of the X86_FEATURE_SPEC_CTRL_SSBD in
early_init_intel when we detect the microcode is busted?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 06/15] Hidden 6
  2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
  2018-05-14 12:01   ` [MODERATED] " Borislav Petkov
  2018-05-14 12:09   ` Peter Zijlstra
@ 2018-05-16  3:15   ` Konrad Rzeszutek Wilk
  2018-05-16  8:44     ` Thomas Gleixner
  2 siblings, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:15 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:54PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 06/15] x86/speculation: Handle HT correctly on AMD
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
> hyperthreading is enabled the SSBD bit toggle needs to take both cores into
> account. Otherwise the following situation can happen:
> 
> CPU0		CPU1
> 
> disable SSB
> 		disable SSB
> 		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well
> 
> So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
> again.
> 
> On Intel the SSBD control is per core as well, but the synchronization
> logic is implemented behind the per thread SPEC_CTRL MSR.

I am missing something here. You speak of hardware synchronization which
would mean you would get the same exact behavior as AMD? That is whacking
the MSR would synchronize the state on both siblings? That is if you
enable memory disambiguation on one sibling it would enable it on the other?


has sibling level granularity? So if one sibling is running with SSBD
ON and the other with OFF it has the brains to figure this out? Or would
the brains be to keep it OFF for both siblings?

In which case why the 'On Intel the SSBD control is per core as well' ?

Or.. oh, you are saying it keeps the state latched - so if one has memory
disambiguation disabled then _both_ siblings have it so - even if the other
tries to enable it back on. Gosh, I hope this is spelled out in the SDM
when that comes out.

Perhaps then:
"s/synchronization/synchronization (keep it disabled even if another
sibling enables - only enable it if both siblings set this)/" ?

> 
> Add the necessary synchronization logic for AMD family 17H. Unfortunately
> that requires a spinlock to serialize the access to the MSR, but the locks
> are only shared between siblings.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    6 ++
>  arch/x86/kernel/process.c        |  108 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/kernel/smpboot.c        |    5 +
>  3 files changed, 113 insertions(+), 6 deletions(-)
> 
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -33,6 +33,12 @@ static inline u64 ssbd_tif_to_amd_ls_cfg
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
>  }
>  
> +#ifdef CONFIG_SMP
> +extern void speculative_store_bypass_ht_init(void);
> +#else
> +static inline void speculative_store_bypass_ht_init(void) { }
> +#endif
> +
>  extern void speculative_store_bypass_update(void);
>  
>  #endif
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -279,22 +279,118 @@ static inline void switch_to_bitmap(stru
>  	}
>  }
>  
> -static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
> +#ifdef CONFIG_SMP
> +
> +struct ssb_state {
> +	struct ssb_state	*shared_state;
> +	raw_spinlock_t		lock;
> +	unsigned int		disable_state;
> +	unsigned long		local_state;
> +};
> +
> +#define LSTATE_SSB	0
> +
> +static DEFINE_PER_CPU(struct ssb_state, ssb_state);
> +
> +void speculative_store_bypass_ht_init(void)
>  {
> -	u64 msr;
> +	struct ssb_state *st = this_cpu_ptr(&ssb_state);
> +	unsigned int this_cpu = smp_processor_id();
> +	unsigned int cpu;
> +
> +	st->local_state = 0;
> +	if (st->shared_state)
> +		return;
> +
> +	raw_spin_lock_init(&st->lock);

Should we also hold this lock in the CPU hotplug code? That is when
you power off an CPU?
> +
> +	/* Go over HT siblings: */
> +	for_each_cpu(cpu, topology_sibling_cpumask(this_cpu)) {

.. As could you (on a bad of course), access the sibling here - right
when the sibling is powered-off?

> +		if (cpu == this_cpu)
> +			continue;
>  
> -	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
> -		msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
> +		if (!per_cpu(ssb_state, cpu).shared_state)
> +			continue;
> +
> +		/* Link it to the state of the sibling: */
> +		st->shared_state = per_cpu(ssb_state, cpu).shared_state;

And then this would refer to a dead per-cpu area. Do we clear the per-cpu
area when offlining? Aka is the per_cpu(.., cpu) where CPU is offline
end up with a NULL pointer?


> +		return;
> +	}
> +	/* Link shared state of the first HT sibling to itself. */
> +	st->shared_state = st;
> +}
> +
> +/*
> + * Logic is: first HT sibling enables SSBD for both siblings in the core and
> + * last sibling to disable it, disables it for the whole core.

Would it make sense to say this follows how the Intel CPU has it implemented?

> + */

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 07/15] Hidden 7
  2018-05-13 14:00 ` [patch 07/15] Hidden 7 Thomas Gleixner
  2018-05-14 17:07   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:22   ` Konrad Rzeszutek Wilk
  2018-05-16  8:46     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:22 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:55PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 07/15] x86/bugs, KVM: Extend speculation control for VIRT_SPEC_CTRL
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
> Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
> about the bit position of the SSBD bit and thus facilitate migration.

Oh, that is news to me. Is there an MSR value they had in mind?


> Also, the sibling coordination on Family 17H CPUs can only be done on
> the host.
> 
> Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an
> extra argument for the VIRT_SPEC_CTRL MSR.
> 
> Hand in 0 from VMX and in SVM add a new virt_spec_ctrl member to the CPU
> data structure which is going to be used in later patches for the actual
> implementation.

Why not expand it on VMX? That is couldn't this virtualized MSR be generic
to do the appropiate mitigation on both AMD and Intel?

After all this is a software emulated MSR - so why make VMX ignore it?
(And yes I know the counter-argument - on Intel it should whack the SPEC_CTRL
MSR instead of fiddling with this - but then why have an 'virtualized' MSR
that is suppose to be generic!).

..snip..
> +++ b/arch/x86/kvm/svm.c
> @@ -213,6 +213,12 @@ struct vcpu_svm {
>  	} host;
>  
>  	u64 spec_ctrl;
> +	/*
> +	 * Contains guest-controlled bits of VIRT_SPEC_CTRL, which
> +	 * will be translated into the appropriate bits to perform

s/bits/bits on the host/ ?
> +	 * speculative control.
> +	 */
> +	u64 virt_spec_ctrl;
>  
>  	u32 *msrpm;
>  

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 05/15] Hidden 5
  2018-05-13 14:00 ` [patch 05/15] Hidden 5 Thomas Gleixner
  2018-05-14 11:18   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:24   ` Konrad Rzeszutek Wilk
  2018-05-16  9:09     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:24 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:53PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 05/15] x86/cpufeatures: Add FEATURE_ZEN
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a ZEN feature bit so family-dependent static_cpu_has() optimizations
> can be built for ZEN.

.. In the future this may cover other architectures as well. Would it make
sense to call this something more generic? Say

X86_FEATURE_SSBD_SYNC_SMT

?
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/kernel/cpu/amd.c          |    1 +
>  2 files changed, 2 insertions(+)
> 
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -218,6 +218,7 @@
>  #define X86_FEATURE_IBRS		( 7*32+25) /* Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
>  #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
> +#define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
>  
>  /* Virtualization flags: Linux defined, word 8 */
>  #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -812,6 +812,7 @@ static void init_amd_bd(struct cpuinfo_x
>  
>  static void init_amd_zn(struct cpuinfo_x86 *c)
>  {
> +	set_cpu_cap(c, X86_FEATURE_ZEN);
>  	/*
>  	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
>  	 * all up to and including B1.
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 08/15] Hidden 8
  2018-05-13 14:00 ` [patch 08/15] Hidden 8 Thomas Gleixner
  2018-05-14 17:58   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:31   ` Konrad Rzeszutek Wilk
  2018-05-16 12:22     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:31 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:56PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 08/15] x86/speculation: Add virtualized speculative store bypass disable support
> From: Tom Lendacky <thomas.lendacky@amd.com>
> 
> Some AMD processors only support a non-architectural means of enabling
> speculative store bypass disable (SSBD).  To allow a simplified view of
> this to a guest, an architectural definition has been created through a new
> CPUID bit, 0x80000008_EBX[25], and a new MSR, 0xc001011f.  With this, a
> hypervisor can virtualize the existence of this definition and provide an
> architectural method for using SSBD to a guest.

Which would imply you should be able to use this on Intel. That is an
'software' and 'virtualized' MSR should be CPU vendor agnostic.

Why have it on 0x80000008_EBX instead of the virtualized CPUID leafs?

Oh wait, future AMD chips will do the right thing in the microcode.

And it may be that Intel will do it too? In which case see below..

> 
> Add the new CPUID feature, the new MSR and update the existing SSBD
> support to use this MSR when present.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/cpufeatures.h |    1 +
>  arch/x86/include/asm/msr-index.h   |    2 ++
>  arch/x86/kernel/cpu/bugs.c         |    4 +++-
>  arch/x86/kernel/process.c          |   13 ++++++++++++-
>  4 files changed, 18 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -282,6 +282,7 @@
>  #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
>  #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
>  #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
> +#define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
>  
>  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -347,6 +347,8 @@
>  #define MSR_AMD64_SEV_ENABLED_BIT	0
>  #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
>  
> +#define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f

/me scratches his head. this looks off, but it is probably my editor.

> +
>  /* Fam 17h MSRs */
>  #define MSR_F17H_IRPERF			0xc00000e9
>  
..snip..
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -366,6 +366,15 @@ static __always_inline void amd_set_core
>  }
>  #endif
>  
> +static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)

s/amd_// ?

> +{
> +	/*
> +	 * SSBD has the same definition in SPEC_CTRL and VIRT_SPEC_CTRL,
> +	 * so ssbd_tif_to_spec_ctrl() just works.
> +	 */
> +	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
> +}
> +
>  static __always_inline void intel_set_ssb_state(unsigned long tifn)
>  {
>  	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> @@ -375,7 +384,9 @@ static __always_inline void intel_set_ss
>  
>  static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
>  {
> -	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> +	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +		amd_set_ssb_virt_state(tifn);

and here too obviously..
> +	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>  		amd_set_core_ssb_state(tifn);
>  	else
>  		intel_set_ssb_state(tifn);
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 11/15] Hidden 11
  2018-05-13 14:00 ` [patch 11/15] Hidden 11 Thomas Gleixner
  2018-05-14 20:02   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:35   ` Konrad Rzeszutek Wilk
  2018-05-16  8:50     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:35 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:00:59PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 11/15] x86/bugs: Expose x86_spec_ctrl_base directly
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrl_base is the system wide default value for MSR_SPEC_CTRL.
> x86_spec_ctrl_get_default() returns x86_spec_ctrl_base and was intended to
> prevent modification to that variable. Though the variable is read only
> after init and globaly visible already.
> 
> Remove the function and export the variable instead.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

..snip..
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -36,12 +36,13 @@ static void __init ssb_select_mitigation
>   * writes to SPEC_CTRL contain whatever reserved bits have been set.
>   */
>  u64 __ro_after_init x86_spec_ctrl_base;
> +EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
>  
>  /*
>   * The vendor and possibly platform specific bits which can be modified in
>   * x86_spec_ctrl_base.
>   */
> -static u64 __ro_after_init x86_spec_ctrl_mask = ~SPEC_CTRL_IBRS;
> +static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;

Why the flip? The commit description does not talk about this at all?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 10/15] Hidden 10
  2018-05-13 14:00 ` [patch 10/15] Hidden 10 Thomas Gleixner
@ 2018-05-16  3:38   ` Konrad Rzeszutek Wilk
  2018-05-16  8:51     ` Thomas Gleixner
  0 siblings, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:38 UTC (permalink / raw)
  To: speck

> +void
> +x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
>  {
> -	u64 host = x86_spec_ctrl_base;
> +	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);

Would it make sense to move the:
	hostssbd = ssbd_tif_to..

> +	u64 msr, host = x86_spec_ctrl_base;
>  
>  	/* Is MSR_SPEC_CTRL implemented ? */
> -	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> -		return;
> -
> -	/* SSBD controlled in MSR_SPEC_CTRL */
> -	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> -		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> -
> -	if (host != guest_spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
> -}
> -EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
> -
> -/**
> - * x86_spec_ctrl_restore_host - Restore host speculation control registers
> - * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
> - * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
> - *				(may get translated to MSR_AMD64_LS_CFG bits)
> - *
> - * Avoids writing to the MSR if the content/bits are the same
> - */
> -void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
> -{
> -	u64 host = x86_spec_ctrl_base;
> -
> -	/* Is MSR_SPEC_CTRL implemented ? */
> -	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> -		return;
> -
> -	/* SSBD controlled in MSR_SPEC_CTRL */
> -	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> -		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> -
> -	if (host != guest_spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, host);
> +	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> +		/* SSBD controlled in MSR_SPEC_CTRL */
> +		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> +			host |= hostssbd;

to here? That is you are reading the ssbd_tif_to_spec_ctrl even if the
X86_FEATURE_MSR_SPEC_CTRL hasn't been enabled? (In theory, the compiler has
it probably optimized).
> +
> +		if (host != guest_spec_ctrl) {
> +			msr = guest ? guest_spec_ctrl : host;
> +			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
> +		}
> +	}
>  }
> -EXPORT_SYMBOL_GPL(x86_spec_ctrl_restore_host);
> +EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
>  
>  static void x86_amd_ssb_disable(void)
>  {
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 12/15] Hidden 12
  2018-05-13 14:01 ` [patch 12/15] Hidden 12 Thomas Gleixner
  2018-05-14 20:18   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:40   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:40 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:00PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 12/15] x86/bugs: Remove x86_spec_ctrl_set()
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrl_set() is only used in bugs.c and the extra mask checks there
> provide no real value as both call sites can just write x86_spec_ctrl_base
> to MSR_SPEC_CTRL. x86_spec_ctrl_base is valid and does not need any extra
> masking or checking.


Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 13/15] Hidden 13
  2018-05-13 14:01 ` [patch 13/15] Hidden 13 Thomas Gleixner
  2018-05-15  9:27   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:42   ` Konrad Rzeszutek Wilk
  2018-05-16  8:56     ` Thomas Gleixner
  1 sibling, 1 reply; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:42 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:01PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 13/15] x86/bugs: Rework spec_ctrl base and mask logic
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> x86_spec_ctrL_mask is intended to mask out bits from a MSR_SPEC_CTRL value
> which are not to be modified. Though the implementation is not really used
> and the bitmask is inverted for no real reason. Aside of that it is missing

This is due to:
 Subject: [patch 11/15] x86/bugs: Expose x86_spec_ctrl_base directly

Would it make sense to remove the inversion from that patch? Which naturally
would change the logic here..
> the STIBP bit if it is supported by the platform, so if the mask would be
> used in x86_virt_spec_ctrl() then it would prevent a guest from setting
> STIBP.

Ouch.
> 
> Add the STIBP bit if supported and use the mask in x86_spec_ctrl_set_guest()

What OS actually uses that bit?

> to sanitize the value which is supplied by the guest.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/bugs.c |   22 +++++++++++++++++-----
>  1 file changed, 17 insertions(+), 5 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -68,6 +68,10 @@ void __init check_bugs(void)
>  	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
>  		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  
> +	/* Allow STIBP in MSR_SPEC_CTRL if supported */
> +	if (boot_cpu_has(X86_FEATURE_STIBP))
> +		x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
> +
>  	/* Select the proper spectre mitigation before patching alternatives */
>  	spectre_v2_select_mitigation();
>  
> @@ -134,19 +138,27 @@ static enum spectre_v2_mitigation spectr
>  	SPECTRE_V2_NONE;
>  
>  void
> -x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
> +x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
>  {
>  	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> -	u64 msr, host = x86_spec_ctrl_base;
> +	u64 msr, guest, host = x86_spec_ctrl_base;
>  
>  	/* Is MSR_SPEC_CTRL implemented ? */
>  	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> +		/*
> +		 * Restrict guest_spec_ctrl to supported values. Clear the
> +		 * modifiable bits in the host base value and or the
> +		 * modifiable bits from the guest value.
> +		 */
> +		guest = host & ~x86_spec_ctrl_mask;
> +		guest |= guest_spec_ctrl & x86_spec_ctrl_mask;
> +
>  		/* SSBD controlled in MSR_SPEC_CTRL */
>  		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
>  			host |= hostssbd;
>  
> -		if (host != guest_spec_ctrl) {
> -			msr = guest ? guest_spec_ctrl : host;
> +		if (host != guest) {
> +			msr = setguest ? guest : host;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  		}
>  	}
> @@ -493,7 +505,7 @@ static enum ssb_mitigation __init __ssb_
>  		switch (boot_cpu_data.x86_vendor) {
>  		case X86_VENDOR_INTEL:
>  			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
> -			x86_spec_ctrl_mask &= ~SPEC_CTRL_SSBD;
> +			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  			break;
>  		case X86_VENDOR_AMD:
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 14/15] Hidden 14
  2018-05-13 14:01 ` [patch 14/15] Hidden 14 Thomas Gleixner
  2018-05-15 15:35   ` [MODERATED] " Borislav Petkov
@ 2018-05-16  3:44   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16  3:44 UTC (permalink / raw)
  To: speck

On Sun, May 13, 2018 at 04:01:02PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 14/15] x86/speculation, KVM: Implement support for VIRT_SPEC_CTRL/LS_CFG
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add the necessary logic for supporting the emulated VIRT_SPEC_CTRL MSR to
> x86_virt_spec_ctrl().  If either X86_FEATURE_LS_CFG_SSBD or
> X86_FEATURE_VIRT_SPEC_CTRL is set then use the new guest_virt_spec_ctrl
> argument to check whether the state must be modified on the host. The
> update reuses speculative_store_bypass_update() so the ZEN-specific sibling
> coordination can be reused.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/spec-ctrl.h |    6 ++++++
>  arch/x86/kernel/cpu/bugs.c       |   24 ++++++++++++++++++++++++
>  2 files changed, 30 insertions(+)
> 
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -56,6 +56,12 @@ static inline u64 ssbd_tif_to_spec_ctrl(
>  	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
> +{
> +	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
> +	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
> +}
> +
>  static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
>  {
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -162,6 +162,30 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  		}
>  	}
> +
> +	/*
> +	 * If SSBD is not handled in MSR_SPEC_CTRL on AMD update

s/AMD/AMD,/
> +	 * MSR_AMD64_L2_CFG or MSR_VIRT_SPEC_CTRL if supported.
> +	 */
> +	if (!static_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
> +	    !static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +		return;
> +
> +	/* If host has SSBD disabled via command line, force it */
> +	if (static_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
> +		hostssbd |= SPEC_CTRL_SSBD;
> +
> +	/* Sanitize the guest value */
> +	guest = guest_virt_spec_ctrl & SPEC_CTRL_SSBD;
> +
> +	if (hostssbd != guest) {
> +		unsigned long tif;
> +
> +		tif = setguest ? ssbd_spec_ctrl_to_tif(guest) :
> +				 ssbd_spec_ctrl_to_tif(hostssbd);
> +
> +		speculative_store_bypass_update(tif);
> +	}
>  }
>  EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
>  
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 01/15] Hidden 1
  2018-05-16  2:32   ` Konrad Rzeszutek Wilk
@ 2018-05-16  7:51     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  7:51 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:

> On Sun, May 13, 2018 at 04:00:49PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 01/15] KVM: SVM: Move spec control call after restore of GS
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > svm_vcpu_run() invokes x86_spec_ctrl_restore_host() after VMEXIT, but
> > before the host GS is restored. x86_spec_ctrl_restore_host() uses 'current'
> > to determine the host SSBD state of the thread. 'current' is GS based, but
> > host GS is not yet restored and the access causes a triple fault.
> > 
> > Move the call after the host GS restore.
> > 
> > Fixes: 5cf687548705 ("x86/bugs, KVM: Support the combination of guest and host IBRS")
> 
> But that commit didn't use 'current' at all. It had:
> 
> void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
> {
>        if (!boot_cpu_has(X86_FEATURE_IBRS))
>                return;
>        if (x86_spec_ctrl_base != guest_spec_ctrl)
>                wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> }
> EXPORT_SYMBOL_GPL(x86_spec_ctrl_restore_host);
> 
> I think you meant:
> 
> 885f82bfbc6f x86/process: Allow runtime control of Speculative Store Bypass

Indeed.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 03/15] Hidden 3
  2018-05-16  2:49   ` Konrad Rzeszutek Wilk
@ 2018-05-16  8:07     ` Thomas Gleixner
  2018-05-16  8:53       ` [MODERATED] Re: " Borislav Petkov
  0 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:07 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:00:51PM +0200, speck for Thomas Gleixner wrote:
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -64,7 +64,7 @@ void __init check_bugs(void)
> >  	 * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
> >  	 * init code as it is not enumerated and depends on the family.
> >  	 */
> > -	if (boot_cpu_has(X86_FEATURE_IBRS))
> > +	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> >  		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> >  
> >  	/* Select the proper spectre mitigation before patching alternatives */
> > @@ -145,7 +145,7 @@ u64 x86_spec_ctrl_get_default(void)
> >  {
> >  	u64 msrval = x86_spec_ctrl_base;
> >  
> > -	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> > +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL))
> 
> You are using the static_cpu_has which ends up doing alternative
> patching - neat.
> 
> But what if the machine at bootup has no SPEC_CTRL MSR support at all,
> and then we end up loading a new microcode with this and then suddenly
> SPEC_CTRL MSR is available?
> 
> Wouldn't the static_cpu_has in this scenario end up always returning false?
> 
> [And this may all be moot - perhaps we don't support this scenario?]

Allowing it after late micro code loading might work, but it just works
because x86_spec_ctrl_base is initialized to 0, which happens to be the
right default value. But it's not working by design.

Similarly, we don't reevaluate mitigation selection and command line
options after early boot, so just special casing it for virtualization does
not make much sense to me. 

We can change it back, but then then we need some way to check that
SPEC_CTRL_MSR reads 0 at detection time and audit everything else whether
it can run into weird inconsistent state as we have to deal with a
bunch of variants here.

I'm all for keeping it simple and just treat it like a CPU upgrade which
you can't do on a running system either.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 06/15] Hidden 6
  2018-05-16  3:15   ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-05-16  8:44     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:44 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:00:54PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 06/15] x86/speculation: Handle HT correctly on AMD
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means when
> > hyperthreading is enabled the SSBD bit toggle needs to take both cores into
> > account. Otherwise the following situation can happen:
> > 
> > CPU0		CPU1
> > 
> > disable SSB
> > 		disable SSB
> > 		enable  SSB <- Enables it for the Core, i.e. for CPU0 as well
> > 
> > So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
> > again.
> > 
> > On Intel the SSBD control is per core as well, but the synchronization
> > logic is implemented behind the per thread SPEC_CTRL MSR.
> 
> I am missing something here. You speak of hardware synchronization which
> would mean you would get the same exact behavior as AMD? That is whacking
> the MSR would synchronize the state on both siblings? That is if you
> enable memory disambiguation on one sibling it would enable it on the other?
> 
> has sibling level granularity? So if one sibling is running with SSBD
> ON and the other with OFF it has the brains to figure this out? Or would
> the brains be to keep it OFF for both siblings?
> 
> In which case why the 'On Intel the SSBD control is per core as well' ?
>
> Or.. oh, you are saying it keeps the state latched - so if one has memory
> disambiguation disabled then _both_ siblings have it so - even if the other
> tries to enable it back on. Gosh, I hope this is spelled out in the SDM
> when that comes out.

Yes, the logic here is:

  CORE_SPEC_CTRL = THREAD0_SPEC_CRTL | THREAD1_SPEC_CTRL

The documentation for IBRS is a bit vague on that:

  Enabling IBRS on one logical processor of a core with Intel
  Hyper-Threading Technology may affect branch prediction on other logical
  processors of the same core.

but as Intel folks confirmed on several occasions the above logic is
reality.

> Perhaps then:
> "s/synchronization/synchronization (keep it disabled even if another
> sibling enables - only enable it if both siblings set this)/" ?

Will rephrase.

> > +void speculative_store_bypass_ht_init(void)
> >  {
> > -	u64 msr;
> > +	struct ssb_state *st = this_cpu_ptr(&ssb_state);
> > +	unsigned int this_cpu = smp_processor_id();
> > +	unsigned int cpu;
> > +
> > +	st->local_state = 0;
> > +	if (st->shared_state)
> > +		return;
> > +
> > +	raw_spin_lock_init(&st->lock);
> 
> Should we also hold this lock in the CPU hotplug code? That is when
> you power off an CPU?
> > +
> > +	/* Go over HT siblings: */
> > +	for_each_cpu(cpu, topology_sibling_cpumask(this_cpu)) {
> 
> .. As could you (on a bad of course), access the sibling here - right
> when the sibling is powered-off?
> 
> > +		if (cpu == this_cpu)
> > +			continue;
> >  
> > -	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
> > -		msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
> > +		if (!per_cpu(ssb_state, cpu).shared_state)
> > +			continue;
> > +
> > +		/* Link it to the state of the sibling: */
> > +		st->shared_state = per_cpu(ssb_state, cpu).shared_state;
> 
> And then this would refer to a dead per-cpu area. Do we clear the per-cpu
> area when offlining? Aka is the per_cpu(.., cpu) where CPU is offline
> end up with a NULL pointer?

No. It's set up once and stays so.

> > +		return;
> > +	}
> > +	/* Link shared state of the first HT sibling to itself. */
> > +	st->shared_state = st;
> > +}
> > +
> > +/*
> > + * Logic is: first HT sibling enables SSBD for both siblings in the core and
> > + * last sibling to disable it, disables it for the whole core.
> 
> Would it make sense to say this follows how the Intel CPU has it implemented?

Sure. Will add.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 07/15] Hidden 7
  2018-05-16  3:22   ` Konrad Rzeszutek Wilk
@ 2018-05-16  8:46     ` Thomas Gleixner
  2018-05-16 12:15       ` Thomas Gleixner
  0 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:46 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:00:55PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 07/15] x86/bugs, KVM: Extend speculation control for VIRT_SPEC_CTRL
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
> > Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
> > about the bit position of the SSBD bit and thus facilitate migration.
> 
> Oh, that is news to me. Is there an MSR value they had in mind?
> 
> 
> > Also, the sibling coordination on Family 17H CPUs can only be done on
> > the host.
> > 
> > Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an
> > extra argument for the VIRT_SPEC_CTRL MSR.
> > 
> > Hand in 0 from VMX and in SVM add a new virt_spec_ctrl member to the CPU
> > data structure which is going to be used in later patches for the actual
> > implementation.
> 
> Why not expand it on VMX? That is couldn't this virtualized MSR be generic
> to do the appropiate mitigation on both AMD and Intel?
> 
> After all this is a software emulated MSR - so why make VMX ignore it?
> (And yes I know the counter-argument - on Intel it should whack the SPEC_CTRL
> MSR instead of fiddling with this - but then why have an 'virtualized' MSR
> that is suppose to be generic!).

Fine with me, but I leave that to the KVM wizards. Though I assume that
this is a temporary workaround until AMD supports the real SPEC_CTRL MSR as
well.

> ..snip..
> > +++ b/arch/x86/kvm/svm.c
> > @@ -213,6 +213,12 @@ struct vcpu_svm {
> >  	} host;
> >  
> >  	u64 spec_ctrl;
> > +	/*
> > +	 * Contains guest-controlled bits of VIRT_SPEC_CTRL, which
> > +	 * will be translated into the appropriate bits to perform
> 
> s/bits/bits on the host/ ?

sure.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 11/15] Hidden 11
  2018-05-16  3:35   ` Konrad Rzeszutek Wilk
@ 2018-05-16  8:50     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:50 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:00:59PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 11/15] x86/bugs: Expose x86_spec_ctrl_base directly
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > x86_spec_ctrl_base is the system wide default value for MSR_SPEC_CTRL.
> > x86_spec_ctrl_get_default() returns x86_spec_ctrl_base and was intended to
> > prevent modification to that variable. Though the variable is read only
> > after init and globaly visible already.
> > 
> > Remove the function and export the variable instead.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ..snip..
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -36,12 +36,13 @@ static void __init ssb_select_mitigation
> >   * writes to SPEC_CTRL contain whatever reserved bits have been set.
> >   */
> >  u64 __ro_after_init x86_spec_ctrl_base;
> > +EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
> >  
> >  /*
> >   * The vendor and possibly platform specific bits which can be modified in
> >   * x86_spec_ctrl_base.
> >   */
> > -static u64 __ro_after_init x86_spec_ctrl_mask = ~SPEC_CTRL_IBRS;
> > +static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
> 
> Why the flip? The commit description does not talk about this at all?

Bah, that want's to be in the next patch.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 10/15] Hidden 10
  2018-05-16  3:38   ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-05-16  8:51     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:51 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> > +void
> > +x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool guest)
> >  {
> > -	u64 host = x86_spec_ctrl_base;
> > +	u64 hostssbd = ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> 
> Would it make sense to move the:
> 	hostssbd = ssbd_tif_to..
> 
> > +	u64 msr, host = x86_spec_ctrl_base;
> >  
> >  	/* Is MSR_SPEC_CTRL implemented ? */
> > -	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> > -		return;
> > -
> > -	/* SSBD controlled in MSR_SPEC_CTRL */
> > -	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> > -		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> > -
> > -	if (host != guest_spec_ctrl)
> > -		wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
> > -}
> > -EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
> > -
> > -/**
> > - * x86_spec_ctrl_restore_host - Restore host speculation control registers
> > - * @guest_spec_ctrl:		The guest content of MSR_SPEC_CTRL
> > - * @guest_virt_spec_ctrl:	The guest controlled bits of MSR_VIRT_SPEC_CTRL
> > - *				(may get translated to MSR_AMD64_LS_CFG bits)
> > - *
> > - * Avoids writing to the MSR if the content/bits are the same
> > - */
> > -void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
> > -{
> > -	u64 host = x86_spec_ctrl_base;
> > -
> > -	/* Is MSR_SPEC_CTRL implemented ? */
> > -	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> > -		return;
> > -
> > -	/* SSBD controlled in MSR_SPEC_CTRL */
> > -	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> > -		host |= ssbd_tif_to_spec_ctrl(current_thread_info()->flags);
> > -
> > -	if (host != guest_spec_ctrl)
> > -		wrmsrl(MSR_IA32_SPEC_CTRL, host);
> > +	if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
> > +		/* SSBD controlled in MSR_SPEC_CTRL */
> > +		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
> > +			host |= hostssbd;
> 
> to here? That is you are reading the ssbd_tif_to_spec_ctrl even if the
> X86_FEATURE_MSR_SPEC_CTRL hasn't been enabled? (In theory, the compiler has
> it probably optimized).

Yes, I can move it. 

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: Re: [patch 03/15] Hidden 3
  2018-05-16  8:07     ` Thomas Gleixner
@ 2018-05-16  8:53       ` Borislav Petkov
  0 siblings, 0 replies; 57+ messages in thread
From: Borislav Petkov @ 2018-05-16  8:53 UTC (permalink / raw)
  To: speck

On Wed, May 16, 2018 at 10:07:55AM +0200, speck for Thomas Gleixner wrote:
> Similarly, we don't reevaluate mitigation selection and command line
> options after early boot, so just special casing it for virtualization does
> not make much sense to me.

Yes, we do:

void microcode_check(void)

	...

	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
        pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 13/15] Hidden 13
  2018-05-16  3:42   ` Konrad Rzeszutek Wilk
@ 2018-05-16  8:56     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  8:56 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:01:01PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 13/15] x86/bugs: Rework spec_ctrl base and mask logic
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > x86_spec_ctrL_mask is intended to mask out bits from a MSR_SPEC_CTRL value
> > which are not to be modified. Though the implementation is not really used
> > and the bitmask is inverted for no real reason. Aside of that it is missing
> 
> This is due to:
>  Subject: [patch 11/15] x86/bugs: Expose x86_spec_ctrl_base directly
> 
> Would it make sense to remove the inversion from that patch? Which naturally
> would change the logic here..

Yes. That hunk belongs into this one.

> > the STIBP bit if it is supported by the platform, so if the mask would be
> > used in x86_virt_spec_ctrl() then it would prevent a guest from setting
> > STIBP.
> 
> Ouch.
> > 
> > Add the STIBP bit if supported and use the mask in x86_spec_ctrl_set_guest()
> 
> What OS actually uses that bit?

I don't know, but we allow SPEC_CTRL for the guest and STIBP is in the
allowed values:

	if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))

so we better have it working.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 05/15] Hidden 5
  2018-05-16  3:24   ` Konrad Rzeszutek Wilk
@ 2018-05-16  9:09     ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16  9:09 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:

> On Sun, May 13, 2018 at 04:00:53PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 05/15] x86/cpufeatures: Add FEATURE_ZEN
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > Add a ZEN feature bit so family-dependent static_cpu_has() optimizations
> > can be built for ZEN.
> 
> .. In the future this may cover other architectures as well. Would it make
> sense to call this something more generic? Say
> 
> X86_FEATURE_SSBD_SYNC_SMT
> 
> ?

I don't think so. Next generation CPUs will either be not vulnerable or
have support for SSBD in MSR_SPEC_CTRL, so extending it beyond ZEN does not
make a lot of sense.

If Intel or AMD go there and make MSR_SPEC_CTRL core shared like the LS_CFG
one, then I'm going to buy a goat farm and get out of this business.

Wait... I should probably talk to a real estate agent now.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 07/15] Hidden 7
  2018-05-16  8:46     ` Thomas Gleixner
@ 2018-05-16 12:15       ` Thomas Gleixner
  0 siblings, 0 replies; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16 12:15 UTC (permalink / raw)
  To: speck

On Wed, 16 May 2018, speck for Thomas Gleixner wrote:
> On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> > On Sun, May 13, 2018 at 04:00:55PM +0200, speck for Thomas Gleixner wrote:
> > > Subject: [patch 07/15] x86/bugs, KVM: Extend speculation control for VIRT_SPEC_CTRL
> > > From: Thomas Gleixner <tglx@linutronix.de>
> > > 
> > > AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
> > > Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
> > > about the bit position of the SSBD bit and thus facilitate migration.
> > 
> > Oh, that is news to me. Is there an MSR value they had in mind?
> > 
> > 
> > > Also, the sibling coordination on Family 17H CPUs can only be done on
> > > the host.
> > > 
> > > Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with an
> > > extra argument for the VIRT_SPEC_CTRL MSR.
> > > 
> > > Hand in 0 from VMX and in SVM add a new virt_spec_ctrl member to the CPU
> > > data structure which is going to be used in later patches for the actual
> > > implementation.
> > 
> > Why not expand it on VMX? That is couldn't this virtualized MSR be generic
> > to do the appropiate mitigation on both AMD and Intel?
> > 
> > After all this is a software emulated MSR - so why make VMX ignore it?
> > (And yes I know the counter-argument - on Intel it should whack the SPEC_CTRL
> > MSR instead of fiddling with this - but then why have an 'virtualized' MSR
> > that is suppose to be generic!).
> 
> Fine with me, but I leave that to the KVM wizards. Though I assume that
> this is a temporary workaround until AMD supports the real SPEC_CTRL MSR as
> well.

Plus taking an VMEXIT for every modification of that virtual MSR is
painful, so why support it on VMX when you can actually use the real thing.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [patch 08/15] Hidden 8
  2018-05-16  3:31   ` Konrad Rzeszutek Wilk
@ 2018-05-16 12:22     ` Thomas Gleixner
  2018-05-16 13:48       ` [MODERATED] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 57+ messages in thread
From: Thomas Gleixner @ 2018-05-16 12:22 UTC (permalink / raw)
  To: speck

On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Sun, May 13, 2018 at 04:00:56PM +0200, speck for Thomas Gleixner wrote:
> >  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
> >  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
> > --- a/arch/x86/include/asm/msr-index.h
> > +++ b/arch/x86/include/asm/msr-index.h
> > @@ -347,6 +347,8 @@
> >  #define MSR_AMD64_SEV_ENABLED_BIT	0
> >  #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
> >  
> > +#define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
> 
> /me scratches his head. this looks off, but it is probably my editor.

What looks off?

> >  }
> >  #endif
> >  
> > +static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
> 
> s/amd_// ?

I really think this is AMD specific to deal with the fact that the SSBD bit
in LS_CFG MSR is on different bit positions depending on the CPU family.
which is a pain for guest migration accross host systems with different CPU
families.

Intel does not have that and they shall burn Inhell if they start to
implement that.

Let's not try to generalize stuff which should not exist in the first place.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [MODERATED] Re: [patch 08/15] Hidden 8
  2018-05-16 12:22     ` Thomas Gleixner
@ 2018-05-16 13:48       ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 57+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-16 13:48 UTC (permalink / raw)
  To: speck

On Wed, May 16, 2018 at 02:22:45PM +0200, speck for Thomas Gleixner wrote:
> On Tue, 15 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> > On Sun, May 13, 2018 at 04:00:56PM +0200, speck for Thomas Gleixner wrote:
> > >  /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
> > >  #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
> > > --- a/arch/x86/include/asm/msr-index.h
> > > +++ b/arch/x86/include/asm/msr-index.h
> > > @@ -347,6 +347,8 @@
> > >  #define MSR_AMD64_SEV_ENABLED_BIT	0
> > >  #define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
> > >  
> > > +#define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
> > 
> > /me scratches his head. this looks off, but it is probably my editor.
> 
> What looks off?

It looks like it has spaces instead of tabs.
> 
> > >  }
> > >  #endif
> > >  
> > > +static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
> > 
> > s/amd_// ?
> 
> I really think this is AMD specific to deal with the fact that the SSBD bit
> in LS_CFG MSR is on different bit positions depending on the CPU family.
> which is a pain for guest migration accross host systems with different CPU
> families.
> 
> Intel does not have that and they shall burn Inhell if they start to
> implement that.

<chuckles>
> 
> Let's not try to generalize stuff which should not exist in the first place.
> 
> Thanks,
> 
> 	tglx

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2018-05-16 13:48 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-13 14:00 [patch 00/15] Hidden 0 Thomas Gleixner
2018-05-13 14:00 ` [patch 01/15] Hidden 1 Thomas Gleixner
2018-05-13 22:17   ` [MODERATED] " Borislav Petkov
2018-05-15  9:30   ` Paolo Bonzini
2018-05-16  2:32   ` Konrad Rzeszutek Wilk
2018-05-16  7:51     ` Thomas Gleixner
2018-05-13 14:00 ` [patch 02/15] Hidden 2 Thomas Gleixner
2018-05-16  2:39   ` [MODERATED] " Konrad Rzeszutek Wilk
2018-05-13 14:00 ` [patch 03/15] Hidden 3 Thomas Gleixner
2018-05-14 10:02   ` [MODERATED] " Borislav Petkov
2018-05-16  2:49   ` Konrad Rzeszutek Wilk
2018-05-16  8:07     ` Thomas Gleixner
2018-05-16  8:53       ` [MODERATED] Re: " Borislav Petkov
2018-05-13 14:00 ` [patch 04/15] Hidden 4 Thomas Gleixner
2018-05-14 11:11   ` [MODERATED] " Borislav Petkov
2018-05-16  2:53   ` Konrad Rzeszutek Wilk
2018-05-13 14:00 ` [patch 05/15] Hidden 5 Thomas Gleixner
2018-05-14 11:18   ` [MODERATED] " Borislav Petkov
2018-05-16  3:24   ` Konrad Rzeszutek Wilk
2018-05-16  9:09     ` Thomas Gleixner
2018-05-13 14:00 ` [patch 06/15] Hidden 6 Thomas Gleixner
2018-05-14 12:01   ` [MODERATED] " Borislav Petkov
2018-05-14 12:09   ` Peter Zijlstra
2018-05-14 12:46     ` Thomas Gleixner
2018-05-16  3:15   ` [MODERATED] " Konrad Rzeszutek Wilk
2018-05-16  8:44     ` Thomas Gleixner
2018-05-13 14:00 ` [patch 07/15] Hidden 7 Thomas Gleixner
2018-05-14 17:07   ` [MODERATED] " Borislav Petkov
2018-05-16  3:22   ` Konrad Rzeszutek Wilk
2018-05-16  8:46     ` Thomas Gleixner
2018-05-16 12:15       ` Thomas Gleixner
2018-05-13 14:00 ` [patch 08/15] Hidden 8 Thomas Gleixner
2018-05-14 17:58   ` [MODERATED] " Borislav Petkov
2018-05-16  3:31   ` Konrad Rzeszutek Wilk
2018-05-16 12:22     ` Thomas Gleixner
2018-05-16 13:48       ` [MODERATED] " Konrad Rzeszutek Wilk
2018-05-13 14:00 ` [patch 09/15] Hidden 9 Thomas Gleixner
2018-05-14 19:49   ` [MODERATED] " Borislav Petkov
2018-05-13 14:00 ` [patch 10/15] Hidden 10 Thomas Gleixner
2018-05-16  3:38   ` [MODERATED] " Konrad Rzeszutek Wilk
2018-05-16  8:51     ` Thomas Gleixner
2018-05-13 14:00 ` [patch 11/15] Hidden 11 Thomas Gleixner
2018-05-14 20:02   ` [MODERATED] " Borislav Petkov
2018-05-16  3:35   ` Konrad Rzeszutek Wilk
2018-05-16  8:50     ` Thomas Gleixner
2018-05-13 14:01 ` [patch 12/15] Hidden 12 Thomas Gleixner
2018-05-14 20:18   ` [MODERATED] " Borislav Petkov
2018-05-16  3:40   ` Konrad Rzeszutek Wilk
2018-05-13 14:01 ` [patch 13/15] Hidden 13 Thomas Gleixner
2018-05-15  9:27   ` [MODERATED] " Borislav Petkov
2018-05-16  3:42   ` Konrad Rzeszutek Wilk
2018-05-16  8:56     ` Thomas Gleixner
2018-05-13 14:01 ` [patch 14/15] Hidden 14 Thomas Gleixner
2018-05-15 15:35   ` [MODERATED] " Borislav Petkov
2018-05-16  3:44   ` Konrad Rzeszutek Wilk
2018-05-13 14:01 ` [patch 15/15] Hidden 15 Thomas Gleixner
2018-05-13 14:22 ` [patch 00/15] Hidden 0 Thomas Gleixner

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.