* [PATCH 00/22] SRSO fixes/cleanups
@ 2023-08-21  1:18 Josh Poimboeuf
  2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
                   ` (21 more replies)
  0 siblings, 22 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:18 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Here are several SRSO fixes and cleanups, based on tip/x86/urgent.

One of the patches also adds KVM support, though a corresponding patch
is still needed in QEMU.  I have a working QEMU patch which I can post
to qemu-devel.

Josh Poimboeuf (22):
  x86/srso: Fix srso_show_state() side effect
  x86/srso: Set CPUID feature bits independently of bug or mitigation
    status
  KVM: x86: Support IBPB_BRTYPE and SBPB
  x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  x86/srso: Fix SBPB enablement for mitigations=off
  x86/srso: Print actual mitigation if requested mitigation isn't
    possible
  x86/srso: Remove default case in srso_select_mitigation()
  x86/srso: Downgrade retbleed IBPB warning to informational message
  x86/srso: Simplify exit paths
  x86/srso: Print mitigation for retbleed IBPB case
  x86/srso: Slight simplification
  x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  x86/srso: Fix vulnerability reporting for missing microcode
  x86/srso: Fix unret validation dependencies
  x86/alternatives: Remove faulty optimization
  x86/srso: Unexport untraining functions
  x86/srso: Disentangle rethunk-dependent options
  x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
  x86/srso: Improve i-cache locality for alias mitigation
  x86/retpoline: Remove .text..__x86.return_thunk section
  x86/nospec: Refactor UNTRAIN_RET[_*]
  x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()

 Documentation/admin-guide/hw-vuln/srso.rst |  22 ++-
 arch/x86/include/asm/nospec-branch.h       |  69 ++++-----
 arch/x86/include/asm/processor.h           |   2 -
 arch/x86/kernel/alternative.c              |   8 -
 arch/x86/kernel/cpu/amd.c                  |  28 ++--
 arch/x86/kernel/cpu/bugs.c                 |  87 +++++------
 arch/x86/kernel/vmlinux.lds.S              |  10 +-
 arch/x86/kvm/cpuid.c                       |   4 +
 arch/x86/kvm/x86.c                         |   9 +-
 arch/x86/lib/retpoline.S                   | 171 +++++++++++----------
 include/linux/objtool.h                    |   3 +-
 scripts/Makefile.vmlinux_o                 |   3 +-
 12 files changed, 199 insertions(+), 217 deletions(-)

-- 
2.41.0


* [PATCH 01/22] x86/srso: Fix srso_show_state() side effect
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
@ 2023-08-21  1:18 ` Josh Poimboeuf
  2023-08-21  5:42   ` Nikolay Borisov
  2023-08-21  6:04   ` Borislav Petkov
  2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
                   ` (20 subsequent siblings)
  21 siblings, 2 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:18 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
MSR write, and possibly even a (handled) exception if the microcode
hasn't been updated.

Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
gets set by srso_select_mitigation() if the updated microcode exists.
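
For illustration, the side effect comes from the Zen3/4 probe inside
cpu_has_ibpb_brtype_microcode(), which was re-run on every sysfs read
(a sketch; the probe itself is removed by the next patch):

  /*
   * A live MSR write, and a (handled) #GP if the microcode doesn't
   * implement the SBPB bit:
   */
  if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB))
          setup_force_cpu_cap(X86_FEATURE_SBPB);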

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f081d26616ac..bdd3e296f72b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
 
 	return sysfs_emit(buf, "%s%s\n",
 			  srso_strings[srso_mitigation],
-			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
+			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
 }
 
 static ssize_t gds_show_state(char *buf)
-- 
2.41.0


* [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
  2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
@ 2023-08-21  1:18 ` Josh Poimboeuf
  2023-08-21  5:42   ` Nikolay Borisov
                     ` (2 more replies)
  2023-08-21  1:19 ` [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB Josh Poimboeuf
                   ` (19 subsequent siblings)
  21 siblings, 3 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:18 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Booting with mitigations=off incorrectly prevents the
X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.

Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
type prediction flushing, in which case SBPB should be used instead of
IBPB.  The current code doesn't allow for that.

Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects,
and the setting of these feature bits really doesn't belong in the
mitigation code anyway.  Move it earlier, to early_init_amd().

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/processor.h |  2 --
 arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
 arch/x86/kernel/cpu/bugs.c       | 13 +------------
 3 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index fd750247ca89..9e26294e415c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
 #ifdef CONFIG_CPU_SUP_AMD
 extern u32 amd_get_nodes_per_socket(void);
 extern u32 amd_get_highest_perf(void);
-extern bool cpu_has_ibpb_brtype_microcode(void);
 extern void amd_clear_divider(void);
 #else
 static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
 static inline u32 amd_get_highest_perf(void)		{ return 0; }
-static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
 static inline void amd_clear_divider(void)		{ }
 #endif
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 7eca6a8abbb1..b08af929135d 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
+
+	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+			setup_force_cpu_cap(X86_FEATURE_SBPB);
+		}
+	}
 }
 
 static void init_amd_k8(struct cpuinfo_x86 *c)
@@ -1301,25 +1310,6 @@ void amd_check_microcode(void)
 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
 }
 
-bool cpu_has_ibpb_brtype_microcode(void)
-{
-	switch (boot_cpu_data.x86) {
-	/* Zen1/2 IBPB flushes branch type predictions too. */
-	case 0x17:
-		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
-	case 0x19:
-		/* Poke the MSR bit on Zen3/4 to check its presence. */
-		if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
-			setup_force_cpu_cap(X86_FEATURE_SBPB);
-			return true;
-		} else {
-			return false;
-		}
-	default:
-		return false;
-	}
-}
-
 /*
  * Issue a DIV 0/1 insn to clear any division data from previous DIV
  * operations.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bdd3e296f72b..b0ae985aa6a4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2404,26 +2404,15 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);
 
 static void __init srso_select_mitigation(void)
 {
-	bool has_microcode;
+	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	/*
-	 * The first check is for the kernel running as a guest in order
-	 * for guests to verify whether IBPB is a viable mitigation.
-	 */
-	has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
 	if (!has_microcode) {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 	} else {
-		/*
-		 * Enable the synthetic (even if in a real CPUID leaf)
-		 * flags for guests.
-		 */
-		setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
-
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
-- 
2.41.0


* [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
  2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
  2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  9:34   ` Andrew Cooper
  2023-08-21 16:49   ` Sean Christopherson
  2023-08-21  1:19 ` [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
                   ` (18 subsequent siblings)
  21 siblings, 2 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.

From the AMD SRSO whitepaper:

  "Hypervisor software should synthesize the value of both the
  IBPB_BRTYPE and SBPB CPUID bits on these platforms for use by guest
  software."

These bits are already set during kernel boot.  Manually propagate them
to the guest.

Also, propagate PRED_CMD_SBPB writes.
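
As a guest-side usage sketch (not part of this patch): once the guest
kernel sees the synthesized SBPB bit, its barrier write is handled by
the kvm_set_msr_common() hunk below:

  if (boot_cpu_has(X86_FEATURE_SBPB))
          wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);  /* intercepted by KVM */
  else
          wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);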

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kvm/cpuid.c | 4 ++++
 arch/x86/kvm/x86.c   | 9 +++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d3432687c9e6..cdf703eec42d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -729,6 +729,10 @@ void kvm_set_cpu_caps(void)
 		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
 	);
 
+	if (cpu_feature_enabled(X86_FEATURE_SBPB))
+		kvm_cpu_cap_set(X86_FEATURE_SBPB);
+	if (cpu_feature_enabled(X86_FEATURE_IBPB_BRTYPE))
+		kvm_cpu_cap_set(X86_FEATURE_IBPB_BRTYPE);
 	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
 		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c381770bcbf1..dd7472121142 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
 			return 1;
 
-		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
+		if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
+			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+		else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
+			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
+		else if (data)
 			return 1;
-		if (!data)
-			break;
 
-		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
 		break;
 	case MSR_IA32_FLUSH_CMD:
 		if (!msr_info->host_initiated &&
-- 
2.41.0


* [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (2 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21 14:16   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off Josh Poimboeuf
                   ` (17 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

If the user has requested no SRSO mitigation, other mitigations can use
the lighter-weight SBPB instead of IBPB.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b0ae985aa6a4..10499bcd4e39 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2433,7 +2433,7 @@ static void __init srso_select_mitigation(void)
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		return;
+		goto pred_cmd;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {
-- 
2.41.0


* [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (3 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-23  5:57   ` Borislav Petkov
  2023-08-23 23:02   ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
                   ` (16 subsequent siblings)
  21 siblings, 2 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

If the user has requested no mitigations with mitigations=off, use the
lighter-weight SBPB instead of IBPB for other mitigations.

Note that even with mitigations=off, IBPB/SBPB may still be used for
Spectre v2 user <-> user protection.  Whether that makes sense is a
question for another day.
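
For context, consumers pick up whichever command was selected here; a
simplified sketch of the barrier helper (the in-tree version issues the
write through an alternative keyed on X86_FEATURE_USE_IBPB):

  static inline void indirect_branch_prediction_barrier(void)
  {
          /* PRED_CMD_IBPB, or PRED_CMD_SBPB when that's sufficient */
          wrmsrl(MSR_IA32_PRED_CMD, x86_pred_cmd);
  }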

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..ff5bfe8f0ee9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2496,8 +2496,7 @@ static void __init srso_select_mitigation(void)
 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
 
 pred_cmd:
-	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
-	     boot_cpu_has(X86_FEATURE_SBPB))
+	if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
 		x86_pred_cmd = PRED_CMD_SBPB;
 }
 
-- 
2.41.0


* [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (4 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-23  6:06   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation() Josh Poimboeuf
                   ` (15 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

If the kernel wasn't compiled to support the requested option, print the
actual option that ends up getting used.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ff5bfe8f0ee9..579e06655613 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2461,7 +2461,6 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2473,7 +2472,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2485,7 +2483,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
                 }
 		break;
 
-- 
2.41.0


* [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation()
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (5 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-23  6:18   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message Josh Poimboeuf
                   ` (14 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Remove the default case so the compiler warns (-Wswitch) if one of the
enum values is ever left unhandled.
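
A minimal example of the effect (assuming a -Wall build, which enables
-Wswitch):

  enum srso_mitigation_cmd cmd = SRSO_CMD_OFF;

  switch (cmd) {                /* no default case */
  case SRSO_CMD_OFF:
          break;
  }
  /*
   * warning: enumeration value 'SRSO_CMD_MICROCODE' not handled in
   * switch [-Wswitch]   (and similar for the other values)
   */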

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 579e06655613..cda4b5e6a362 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2485,9 +2485,6 @@ static void __init srso_select_mitigation(void)
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
                 }
 		break;
-
-	default:
-		break;
 	}
 
 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
-- 
2.41.0


* [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (6 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation() Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-24  4:43   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 09/22] x86/srso: Simplify exit paths Josh Poimboeuf
                   ` (13 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

This warning is nothing to get excited over.  Downgrade to pr_info().

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cda4b5e6a362..e59e09babf8f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2425,7 +2425,7 @@ static void __init srso_select_mitigation(void)
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 		if (has_microcode) {
-			pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
+			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto pred_cmd;
 		}
-- 
2.41.0


* [PATCH 09/22] x86/srso: Simplify exit paths
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (7 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Send all function exit paths through the pred_cmd check to simplify the
control flow and make it more future-proof.

While at it, rename the 'pred_cmd' label to 'out' to make it clear that
it's the exit.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e59e09babf8f..da480c089739 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2407,7 +2407,7 @@ static void __init srso_select_mitigation(void)
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
-		goto pred_cmd;
+		goto out;
 
 	if (!has_microcode) {
 		pr_warn("IBPB-extending microcode not applied!\n");
@@ -2419,7 +2419,7 @@ static void __init srso_select_mitigation(void)
 		 */
 		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
-			return;
+			goto out;
 		}
 	}
 
@@ -2427,13 +2427,13 @@ static void __init srso_select_mitigation(void)
 		if (has_microcode) {
 			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto pred_cmd;
+			goto out;
 		}
 	}
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		goto pred_cmd;
+		goto out;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {
@@ -2489,7 +2489,7 @@ static void __init srso_select_mitigation(void)
 
 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
 
-pred_cmd:
+out:
 	if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
 		x86_pred_cmd = PRED_CMD_SBPB;
 }
-- 
2.41.0


* [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (8 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 09/22] x86/srso: Simplify exit paths Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-24  4:48   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 11/22] x86/srso: Slight simplification Josh Poimboeuf
                   ` (11 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

When overriding the requested mitigation with IBPB due to retbleed=ibpb,
print the actual mitigation.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index da480c089739..4e332707a343 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2427,7 +2427,7 @@ static void __init srso_select_mitigation(void)
 		if (has_microcode) {
 			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto out;
+			goto out_print;
 		}
 	}
 
@@ -2487,7 +2487,8 @@ static void __init srso_select_mitigation(void)
 		break;
 	}
 
-	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+out_print:
+	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
 
 out:
 	if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
-- 
2.41.0


* [PATCH 11/22] x86/srso: Slight simplification
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (9 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-24  4:55   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
                   ` (10 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Simplify the code flow a bit by moving the retbleed IBPB check into the
existing 'has_microcode' block.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4e332707a343..b27aeb86ed7a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2421,10 +2421,8 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			goto out;
 		}
-	}
 
-	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-		if (has_microcode) {
+		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out_print;
-- 
2.41.0


* [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (10 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 11/22] x86/srso: Slight simplification Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-25  7:09   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
                   ` (9 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The X86_FEATURE_ENTRY_IBPB check is redundant here due to the above
RETBLEED_MITIGATION_IBPB check.  RETBLEED_MITIGATION_IBPB already
implies X86_FEATURE_ENTRY_IBPB.  So if we got here and 'has_microcode'
is true, it means X86_FEATURE_ENTRY_IBPB is not set.
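
Spelled out, a condensed sketch of the flow in srso_select_mitigation()
after the preceding patches:

  if (has_microcode) {
          ...
          if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
                  srso_mitigation = SRSO_MITIGATION_IBPB;
                  goto out_print;   /* the only path with ENTRY_IBPB set */
          }
  }

  switch (srso_cmd) {
  ...
  case SRSO_CMD_IBPB_ON_VMEXIT:
          /* X86_FEATURE_ENTRY_IBPB is known to be clear here */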

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b27aeb86ed7a..aeddd5ce9f34 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2475,7 +2475,7 @@ static void __init srso_select_mitigation(void)
 
 	case SRSO_CMD_IBPB_ON_VMEXIT:
 		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
+			if (has_microcode) {
 				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
 				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 			}
-- 
2.41.0


* [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (11 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-25  7:25   ` Borislav Petkov
  2023-08-21  1:19 ` [PATCH 14/22] x86/srso: Fix unret validation dependencies Josh Poimboeuf
                   ` (8 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable, as it leaves the
kernel open to attacks.
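
For example, a kernel built with the default safe-ret mitigation but
booted on unpatched microcode would now report (illustrative output):

  $ cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
  Vulnerable: Safe RET, no microcode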

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 Documentation/admin-guide/hw-vuln/srso.rst | 22 +++++++++----
 arch/x86/kernel/cpu/bugs.c                 | 36 +++++++++++++---------
 2 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51cb0b4..4516719e00b5 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+ * 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe Ret" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aeddd5ce9f34..f24c0f7e3e8a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2406,13 +2410,10 @@ static void __init srso_select_mitigation(void)
 {
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
-	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off() || srso_cmd == SRSO_CMD_OFF)
 		goto out;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2427,6 +2428,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out_print;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2456,7 +2463,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2696,9 +2706,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)
-- 
2.41.0


* [PATCH 14/22] x86/srso: Fix unret validation dependencies
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (12 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 15/22] x86/alternatives: Remove faulty optimization Josh Poimboeuf
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

CONFIG_CPU_SRSO isn't dependent on CONFIG_CPU_UNRET_ENTRY (AMD
Retbleed), so the two features are independently configurable.  Fix
several issues for the (presumably rare) case where CONFIG_CPU_SRSO is
enabled but CONFIG_CPU_UNRET_ENTRY isn't.
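
One example: with CONFIG_CPU_SRSO=y and CONFIG_CPU_UNRET_ENTRY=n, the
CALL_UNTRAIN_RET macro expanded to an empty string, silently dropping
the SRSO untraining call (see the first hunk below):

  #ifdef CONFIG_CPU_UNRET_ENTRY
  #define CALL_UNTRAIN_RET      "call entry_untrain_ret"
  #else
  #define CALL_UNTRAIN_RET      ""    /* empty even when SRSO needs it */
  #endif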

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 4 ++--
 include/linux/objtool.h              | 3 ++-
 scripts/Makefile.vmlinux_o           | 3 ++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..197ff4f4d1ce 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -271,7 +271,7 @@
 .Lskip_rsb_\@:
 .endm
 
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 #define CALL_UNTRAIN_RET	"call entry_untrain_ret"
 #else
 #define CALL_UNTRAIN_RET	""
@@ -312,7 +312,7 @@
 
 .macro UNTRAIN_RET_FROM_CALL
 #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING)
+	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 03f82c2c2ebf..b5440e7da55b 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -130,7 +130,8 @@
  * it will be ignored.
  */
 .macro VALIDATE_UNRET_BEGIN
-#if defined(CONFIG_NOINSTR_VALIDATION) && defined(CONFIG_CPU_UNRET_ENTRY)
+#if defined(CONFIG_NOINSTR_VALIDATION) && \
+	(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
 .Lhere_\@:
 	.pushsection .discard.validate_unret
 	.long	.Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 0edfdb40364b..25b3b587d37c 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -37,7 +37,8 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
 
 vmlinux-objtool-args-$(delay-objtool)			+= $(objtool-args-y)
 vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL)		+= --no-unreachable
-vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr $(if $(CONFIG_CPU_UNRET_ENTRY), --unret)
+vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+							   $(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
 
 objtool-args = $(vmlinux-objtool-args-y) --link
 
-- 
2.41.0


* [PATCH 15/22] x86/alternatives: Remove faulty optimization
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (13 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 14/22] x86/srso: Fix unret validation dependencies Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 16/22] x86/srso: Unexport untraining functions Josh Poimboeuf
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The following commit

  095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

made '__x86_return_thunk' a placeholder value.  All code setting
X86_FEATURE_RETHUNK also changes the value of 'x86_return_thunk'.  So
the optimization at the beginning of apply_returns() is dead code.

Also, before the above-mentioned commit, the optimization actually had a
bug: it bypassed __static_call_fixup(), causing some raw returns to
remain unpatched in static call trampolines.  Thus the 'Fixes' tag.

Fixes: d2408e043e72 ("x86/alternative: Optimize returns patching")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/alternative.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d02a26..34be5fbaf41e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -720,14 +720,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
-	/*
-	 * Do not patch out the default return thunks if those needed are the
-	 * ones generated by the compiler.
-	 */
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
-	    (x86_return_thunk == __x86_return_thunk))
-		return;
-
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
 		struct insn insn;
-- 
2.41.0


* [PATCH 16/22] x86/srso: Unexport untraining functions
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (14 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 15/22] x86/alternatives: Remove faulty optimization Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  5:50   ` Nikolay Borisov
  2023-08-21  1:19 ` [PATCH 17/22] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
                   ` (5 subsequent siblings)
  21 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

These functions aren't called outside of retpoline.S.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 4 ----
 arch/x86/lib/retpoline.S             | 7 ++-----
 2 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 197ff4f4d1ce..6c14fd1f5912 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -352,10 +352,6 @@ extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
 
-extern void retbleed_untrain_ret(void);
-extern void srso_untrain_ret(void);
-extern void srso_alias_untrain_ret(void);
-
 extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);
 
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index cd86aeb5fdd3..5e3b016c6d3e 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -157,7 +157,6 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
-__EXPORT_THUNK(srso_alias_untrain_ret)
 
 	.section .text..__x86.rethunk_safe
 #else
@@ -216,7 +215,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:
@@ -264,7 +263,6 @@ SYM_CODE_END(retbleed_return_thunk)
 	jmp retbleed_return_thunk
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
-__EXPORT_THUNK(retbleed_untrain_ret)
 
 /*
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
@@ -278,7 +276,7 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -299,7 +297,6 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
 	ud2
 SYM_CODE_END(srso_safe_ret)
 SYM_FUNC_END(srso_untrain_ret)
-__EXPORT_THUNK(srso_untrain_ret)
 
 SYM_CODE_START(srso_return_thunk)
 	UNWIND_HINT_FUNC
-- 
2.41.0


* [PATCH 17/22] x86/srso: Disentangle rethunk-dependent options
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (15 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 16/22] x86/srso: Unexport untraining functions Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 18/22] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

CONFIG_RETHUNK, CONFIG_CPU_UNRET_ENTRY and CONFIG_CPU_SRSO are all
tangled up.  De-spaghettify the code a bit.

Some of the rethunk-related code has been shuffled around within the
'.text..__x86.return_thunk' section, but otherwise there are no
functional changes.  srso_alias_untrain_ret() and srso_alias_safe_ret()
(which are very address-sensitive) haven't moved.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h |  25 +++--
 arch/x86/kernel/cpu/bugs.c           |   5 +-
 arch/x86/kernel/vmlinux.lds.S        |   7 +-
 arch/x86/lib/retpoline.S             | 158 +++++++++++++++------------
 4 files changed, 109 insertions(+), 86 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 6c14fd1f5912..51e3f1a287d2 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -289,19 +289,17 @@
  * where we have a stack but before any RET instruction.
  */
 .macro UNTRAIN_RET
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
 .macro UNTRAIN_RET_VM
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -311,8 +309,7 @@
 .endm
 
 .macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -348,6 +345,20 @@ extern void __x86_return_thunk(void);
 static inline void __x86_return_thunk(void) {}
 #endif
 
+#ifdef CONFIG_CPU_UNRET_ENTRY
+extern void retbleed_return_thunk(void);
+#else
+static inline void retbleed_return_thunk(void) {}
+#endif
+
+#ifdef CONFIG_CPU_SRSO
+extern void srso_return_thunk(void);
+extern void srso_alias_return_thunk(void);
+#else
+static inline void srso_return_thunk(void) {}
+static inline void srso_alias_return_thunk(void) {}
+#endif
+
 extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f24c0f7e3e8a..73d10e54fc1f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -63,7 +63,7 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
-void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
 
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
 static void update_spec_ctrl(u64 val)
@@ -1042,8 +1042,7 @@ static void __init retbleed_select_mitigation(void)
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_UNRET);
 
-		if (IS_ENABLED(CONFIG_RETHUNK))
-			x86_return_thunk = retbleed_return_thunk;
+		x86_return_thunk = retbleed_return_thunk;
 
 		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
 		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 83d41c2601d7..9188834e56c9 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -139,10 +139,7 @@ SECTIONS
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
-#ifdef CONFIG_CPU_SRSO
 		*(.text..__x86.rethunk_untrain)
-#endif
-
 		ENTRY_TEXT
 
 #ifdef CONFIG_CPU_SRSO
@@ -520,12 +517,12 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_CPU_UNRET_ENTRY
 . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
-. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 #endif
 
 #ifdef CONFIG_CPU_SRSO
+. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 /*
  * GNU ld cannot do XOR until 2.41.
  * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 5e3b016c6d3e..e5be0ecf3ce0 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -126,12 +126,13 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #include <asm/GEN-for-each-reg.h>
 #undef GEN
 #endif
-/*
- * This function name is magical and is used by -mfunction-return=thunk-extern
- * for the compiler to generate JMPs to it.
- */
+
 #ifdef CONFIG_RETHUNK
 
+	.section .text..__x86.return_thunk
+
+#ifdef CONFIG_CPU_SRSO
+
 /*
  * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
  * special addresses:
@@ -147,9 +148,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  *
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
-#ifdef CONFIG_CPU_SRSO
-	.section .text..__x86.rethunk_untrain
-
+	.pushsection .text..__x86.rethunk_untrain
 SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
@@ -157,17 +156,9 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
+	.popsection
 
-	.section .text..__x86.rethunk_safe
-#else
-/* dummy definition for alternatives */
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_FUNC_END(srso_alias_untrain_ret)
-#endif
-
+	.pushsection .text..__x86.rethunk_safe
 SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
@@ -175,8 +166,7 @@ SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ret
 	int3
 SYM_FUNC_END(srso_alias_safe_ret)
-
-	.section .text..__x86.return_thunk
+	.popsection
 
 SYM_CODE_START(srso_alias_return_thunk)
 	UNWIND_HINT_FUNC
@@ -185,6 +175,56 @@ SYM_CODE_START(srso_alias_return_thunk)
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
 
+/*
+ * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
+ * above. On kernel entry, srso_untrain_ret() is executed which is a
+ *
+ * movabs $0xccccc30824648d48,%rax
+ *
+ * and when the return thunk executes the inner label srso_safe_ret()
+ * later, it is a stack manipulation and a RET which is mispredicted and
+ * thus a "safe" one to use.
+ */
+	.align 64
+	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+	ANNOTATE_NOENDBR
+	.byte 0x48, 0xb8
+
+/*
+ * This forces the function return instruction to speculate into a trap
+ * (UD2 in srso_return_thunk() below).  This RET will then mispredict
+ * and execution will continue at the return site read from the top of
+ * the stack.
+ */
+SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+	lea 8(%_ASM_SP), %_ASM_SP
+	ret
+	int3
+	int3
+	/* end of movabs */
+	lfence
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_safe_ret)
+SYM_FUNC_END(srso_untrain_ret)
+
+SYM_CODE_START(srso_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_return_thunk)
+
+#define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
+#else /* !CONFIG_CPU_SRSO */
+#define JMP_SRSO_UNTRAIN_RET "ud2"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_SRSO */
+
+#ifdef CONFIG_CPU_UNRET_ENTRY
+
 /*
  * Some generic notes on the untraining sequences:
  *
@@ -264,64 +304,21 @@ SYM_CODE_END(retbleed_return_thunk)
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
 
-/*
- * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
- * above. On kernel entry, srso_untrain_ret() is executed which is a
- *
- * movabs $0xccccc30824648d48,%rax
- *
- * and when the return thunk executes the inner label srso_safe_ret()
- * later, it is a stack manipulation and a RET which is mispredicted and
- * thus a "safe" one to use.
- */
-	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
-	ANNOTATE_NOENDBR
-	.byte 0x48, 0xb8
+#define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
+#else /* !CONFIG_CPU_UNRET_ENTRY */
+#define JMP_RETBLEED_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_UNRET_ENTRY */
 
-/*
- * This forces the function return instruction to speculate into a trap
- * (UD2 in srso_return_thunk() below).  This RET will then mispredict
- * and execution will continue at the return site read from the top of
- * the stack.
- */
-SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
-	lea 8(%_ASM_SP), %_ASM_SP
-	ret
-	int3
-	int3
-	/* end of movabs */
-	lfence
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_safe_ret)
-SYM_FUNC_END(srso_untrain_ret)
-
-SYM_CODE_START(srso_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_return_thunk)
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 
 SYM_FUNC_START(entry_untrain_ret)
-	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+	ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET,				\
+		      JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO,		\
+		      JMP_SRSO_ALIAS_UNTRAIN_RET, X86_FEATURE_SRSO_ALIAS
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)
 
-SYM_CODE_START(__x86_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_CODE_END(__x86_return_thunk)
-EXPORT_SYMBOL(__x86_return_thunk)
-
-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
@@ -356,3 +353,22 @@ SYM_FUNC_START(__x86_return_skl)
 SYM_FUNC_END(__x86_return_skl)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
+
+/*
+ * This function name is magical and is used by -mfunction-return=thunk-extern
+ * for the compiler to generate JMPs to it.
+ *
+ * This code is only used during kernel boot or module init.  All
+ * 'JMP __x86_return_thunk' sites are changed to something else by
+ * apply_returns().
+ */
+SYM_CODE_START(__x86_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_CODE_END(__x86_return_thunk)
+EXPORT_SYMBOL(__x86_return_thunk)
+
+#endif /* CONFIG_RETHUNK */
-- 
2.41.0


* [PATCH 18/22] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (16 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 17/22] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 19/22] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Macros already exist for unaligned code block symbols.  Use them.
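
For reference, these are thin wrappers around the open-coded SYM_START()
form being replaced (paraphrasing include/linux/linkage.h):

  #define SYM_CODE_START_NOALIGN(name)                  \
          SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)

  #define SYM_CODE_START_LOCAL_NOALIGN(name)            \
          SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)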

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/lib/retpoline.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index e5be0ecf3ce0..af3c1f0e4fb8 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -149,7 +149,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
 	.pushsection .text..__x86.rethunk_untrain
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_untrain_ret)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	ASM_NOP2
@@ -159,7 +159,7 @@ SYM_FUNC_END(srso_alias_untrain_ret)
 	.popsection
 
 	.pushsection .text..__x86.rethunk_safe
-SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_safe_ret)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
 	ANNOTATE_UNRET_SAFE
@@ -187,7 +187,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(srso_untrain_ret)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -255,7 +255,7 @@ SYM_CODE_END(srso_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(retbleed_untrain_ret)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 19/22] x86/srso: Improve i-cache locality for alias mitigation
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (17 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 18/22] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 20/22] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Move srso_alias_return_thunk() to the same section as
srso_alias_safe_ret() so they can share a cache line.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/lib/retpoline.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index af3c1f0e4fb8..415521dbe15e 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -166,14 +166,14 @@ SYM_CODE_START_NOALIGN(srso_alias_safe_ret)
 	ret
 	int3
 SYM_FUNC_END(srso_alias_safe_ret)
-	.popsection
 
-SYM_CODE_START(srso_alias_return_thunk)
+SYM_CODE_START_NOALIGN(srso_alias_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	call srso_alias_safe_ret
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
+	.popsection
 
 /*
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 20/22] x86/retpoline: Remove .text..__x86.return_thunk section
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (18 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 19/22] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 21/22] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 22/22] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The '.text..__x86.return_thunk' section has no purpose.  Remove it and
let the return thunk code live in '.text..__x86.indirect_thunk'.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/vmlinux.lds.S | 3 ---
 arch/x86/lib/retpoline.S      | 2 --
 2 files changed, 5 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 9188834e56c9..f1c3516d356d 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -132,10 +132,7 @@ SECTIONS
 		LOCK_TEXT
 		KPROBES_TEXT
 		SOFTIRQENTRY_TEXT
-#ifdef CONFIG_RETPOLINE
 		*(.text..__x86.indirect_thunk)
-		*(.text..__x86.return_thunk)
-#endif
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 415521dbe15e..49f2be7c7b35 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -129,8 +129,6 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 
 #ifdef CONFIG_RETHUNK
 
-	.section .text..__x86.return_thunk
-
 #ifdef CONFIG_CPU_SRSO
 
 /*
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 21/22] x86/nospec: Refactor UNTRAIN_RET[_*]
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (19 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 20/22] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  2023-08-21  1:19 ` [PATCH 22/22] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Factor out the UNTRAIN_RET[_*] common bits into a helper macro.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 31 +++++++++-------------------
 1 file changed, 10 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 51e3f1a287d2..dcc78477a38d 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -288,35 +288,24 @@
  * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
  * where we have a stack but before any RET instruction.
  */
-.macro UNTRAIN_RET
+.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
 #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		      "call entry_ibpb", \ibpb_feature,			\
+		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
-.macro UNTRAIN_RET_VM
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH)
 
-.macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET_VM \
+	__UNTRAIN_RET X86_FEATURE_IBPB_ON_VMEXIT, __stringify(RESET_CALL_DEPTH)
+
+#define UNTRAIN_RET_FROM_CALL \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH_FROM_CALL)
 
 
 .macro CALL_DEPTH_ACCOUNT
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 22/22] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()
  2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
                   ` (20 preceding siblings ...)
  2023-08-21  1:19 ` [PATCH 21/22] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
@ 2023-08-21  1:19 ` Josh Poimboeuf
  21 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21  1:19 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

For consistency with the other return thunks, rename __x86_return_skl()
to call_depth_return_thunk().

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 13 ++++---------
 arch/x86/kernel/cpu/bugs.c           |  3 ++-
 arch/x86/lib/retpoline.S             |  4 ++--
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index dcc78477a38d..14cd3cd5f85a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -358,12 +358,7 @@ extern void entry_ibpb(void);
 extern void (*x86_return_thunk)(void);
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
-extern void __x86_return_skl(void);
-
-static inline void x86_set_skl_return_thunk(void)
-{
-	x86_return_thunk = &__x86_return_skl;
-}
+extern void call_depth_return_thunk(void);
 
 #define CALL_DEPTH_ACCOUNT					\
 	ALTERNATIVE("",						\
@@ -376,12 +371,12 @@ DECLARE_PER_CPU(u64, __x86_ret_count);
 DECLARE_PER_CPU(u64, __x86_stuffs_count);
 DECLARE_PER_CPU(u64, __x86_ctxsw_count);
 #endif
-#else
-static inline void x86_set_skl_return_thunk(void) {}
+#else /* !CONFIG_CALL_DEPTH_TRACKING */
 
+static inline void call_depth_return_thunk(void) {}
 #define CALL_DEPTH_ACCOUNT ""
 
-#endif
+#endif /* CONFIG_CALL_DEPTH_TRACKING */
 
 #ifdef CONFIG_RETPOLINE
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 73d10e54fc1f..83eb3f77d911 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1060,7 +1060,8 @@ static void __init retbleed_select_mitigation(void)
 	case RETBLEED_MITIGATION_STUFF:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
-		x86_set_skl_return_thunk();
+
+		x86_return_thunk = call_depth_return_thunk;
 		break;
 
 	default:
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 49f2be7c7b35..6376d0164395 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -321,7 +321,7 @@ __EXPORT_THUNK(entry_untrain_ret)
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
 	.align 64
-SYM_FUNC_START(__x86_return_skl)
+SYM_FUNC_START(call_depth_return_thunk)
 	ANNOTATE_NOENDBR
 	/*
 	 * Keep the hotpath in a 16byte I-fetch for the non-debug
@@ -348,7 +348,7 @@ SYM_FUNC_START(__x86_return_skl)
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(__x86_return_skl)
+SYM_FUNC_END(call_depth_return_thunk)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
 
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
@ 2023-08-21  5:42   ` Nikolay Borisov
  2023-08-21  9:27   ` Andrew Cooper
  2023-08-21 13:59   ` Borislav Petkov
  2 siblings, 0 replies; 63+ messages in thread
From: Nikolay Borisov @ 2023-08-21  5:42 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner



On 21.08.23 at 4:18, Josh Poimboeuf wrote:
> Booting with mitigations=off incorrectly prevents the
> X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.
> 
> Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
> type prediction flushing, in which case SBPB should be used instead of
> IBPB.  The current code doesn't allow for that.
> 
> Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects
> and the setting of these feature bits really doesn't belong in the
> mitigation code anyway.  Move it to earlier.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>

LGTM and it's a lot clearer when IBPB_BRTYPE can actually be set.

Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>


> ---
>   arch/x86/include/asm/processor.h |  2 --
>   arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
>   arch/x86/kernel/cpu/bugs.c       | 13 +------------
>   3 files changed, 10 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index fd750247ca89..9e26294e415c 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
>   #ifdef CONFIG_CPU_SUP_AMD
>   extern u32 amd_get_nodes_per_socket(void);
>   extern u32 amd_get_highest_perf(void);
> -extern bool cpu_has_ibpb_brtype_microcode(void);
>   extern void amd_clear_divider(void);
>   #else
>   static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
>   static inline u32 amd_get_highest_perf(void)		{ return 0; }
> -static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
>   static inline void amd_clear_divider(void)		{ }
>   #endif
>   
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 7eca6a8abbb1..b08af929135d 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>   
>   	if (cpu_has(c, X86_FEATURE_TOPOEXT))
>   		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
> +
> +	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {

The only way this check can be true is if this kernel is running as a 
guest and KVM has synthesized this flag already, right?

> +		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
> +			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
> +		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
> +			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
> +			setup_force_cpu_cap(X86_FEATURE_SBPB);
> +		}
> +	}
>   }
>   
>   static void init_amd_k8(struct cpuinfo_x86 *c)

<snip>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 01/22] x86/srso: Fix srso_show_state() side effect
  2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
@ 2023-08-21  5:42   ` Nikolay Borisov
  2023-08-21  6:04   ` Borislav Petkov
  1 sibling, 0 replies; 63+ messages in thread
From: Nikolay Borisov @ 2023-08-21  5:42 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner



On 21.08.23 at 4:18, Josh Poimboeuf wrote:
> Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
> MSR write, and possibly even a (handled) exception if the microcode
> hasn't been updated.
> 
> Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
> gets set by srso_select_mitigation() if the updated microcode exists.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>   arch/x86/kernel/cpu/bugs.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index f081d26616ac..bdd3e296f72b 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
>   
>   	return sysfs_emit(buf, "%s%s\n",
>   			  srso_strings[srso_mitigation],
> -			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
> +			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
>   }
>   
>   static ssize_t gds_show_state(char *buf)


Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 16/22] x86/srso: Unexport untraining functions
  2023-08-21  1:19 ` [PATCH 16/22] x86/srso: Unexport untraining functions Josh Poimboeuf
@ 2023-08-21  5:50   ` Nikolay Borisov
  0 siblings, 0 replies; 63+ messages in thread
From: Nikolay Borisov @ 2023-08-21  5:50 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner



On 21.08.23 at 4:19, Josh Poimboeuf wrote:
> These functions aren't called outside of retpoline.S.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>   arch/x86/include/asm/nospec-branch.h | 4 ----
>   arch/x86/lib/retpoline.S             | 7 ++-----
>   2 files changed, 2 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 197ff4f4d1ce..6c14fd1f5912 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -352,10 +352,6 @@ extern void retbleed_return_thunk(void);
>   extern void srso_return_thunk(void);
>   extern void srso_alias_return_thunk(void);
>   
> -extern void retbleed_untrain_ret(void);
> -extern void srso_untrain_ret(void);
> -extern void srso_alias_untrain_ret(void);
> -
>   extern void entry_untrain_ret(void);
>   extern void entry_ibpb(void);
>   
> diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
> index cd86aeb5fdd3..5e3b016c6d3e 100644
> --- a/arch/x86/lib/retpoline.S
> +++ b/arch/x86/lib/retpoline.S
> @@ -157,7 +157,6 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
>   	lfence
>   	jmp srso_alias_return_thunk
>   SYM_FUNC_END(srso_alias_untrain_ret)
> -__EXPORT_THUNK(srso_alias_untrain_ret)
>   
>   	.section .text..__x86.rethunk_safe
>   #else
> @@ -216,7 +215,7 @@ SYM_CODE_END(srso_alias_return_thunk)
>    */
>   	.align 64
>   	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
> -SYM_START(retbleed_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
> +SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)

nit: There's already SYM_FUNC_START_LOCAL_NOALIGN


>   	ANNOTATE_NOENDBR
>   	/*
>   	 * As executed from retbleed_untrain_ret, this is:
> @@ -264,7 +263,6 @@ SYM_CODE_END(retbleed_return_thunk)
>   	jmp retbleed_return_thunk
>   	int3
>   SYM_FUNC_END(retbleed_untrain_ret)
> -__EXPORT_THUNK(retbleed_untrain_ret)
>   
>   /*
>    * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
> @@ -278,7 +276,7 @@ __EXPORT_THUNK(retbleed_untrain_ret)
>    */
>   	.align 64
>   	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
> -SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
> +SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
>   	ANNOTATE_NOENDBR
>   	.byte 0x48, 0xb8
>   
> @@ -299,7 +297,6 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
>   	ud2
>   SYM_CODE_END(srso_safe_ret)
>   SYM_FUNC_END(srso_untrain_ret)
> -__EXPORT_THUNK(srso_untrain_ret)
>   
>   SYM_CODE_START(srso_return_thunk)
>   	UNWIND_HINT_FUNC

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 01/22] x86/srso: Fix srso_show_state() side effect
  2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
  2023-08-21  5:42   ` Nikolay Borisov
@ 2023-08-21  6:04   ` Borislav Petkov
  2023-08-21 16:17     ` Josh Poimboeuf
  1 sibling, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-21  6:04 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:18:58PM -0700, Josh Poimboeuf wrote:
> Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
> MSR write, and possibly even a (handled) exception if the microcode
> hasn't been updated.
> 
> Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
> gets set by srso_select_mitigation() if the updated microcode exists.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index f081d26616ac..bdd3e296f72b 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
>

Please put here a comment - something along the lines of:

"X86_FEATURE_IBPB_BRTYPE gets set as a result of the presence of the
needed microcode so checking that is equivalent."

so that it is clear why it is ok to check this feature bit.
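
Something like this, for illustration (a sketch of the placement, not
the final wording):

	/*
	 * X86_FEATURE_IBPB_BRTYPE gets set as a result of the presence
	 * of the needed microcode, so checking the feature bit is
	 * equivalent to checking for the microcode itself.
	 */
	return sysfs_emit(buf, "%s%s\n",
			  srso_strings[srso_mitigation],
			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");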

>  	return sysfs_emit(buf, "%s%s\n",
>  			  srso_strings[srso_mitigation],
> -			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
> +			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
>  }
>  
>  static ssize_t gds_show_state(char *buf)
> -- 

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
  2023-08-21  5:42   ` Nikolay Borisov
@ 2023-08-21  9:27   ` Andrew Cooper
  2023-08-21 14:06     ` Borislav Petkov
  2023-08-21 13:59   ` Borislav Petkov
  2 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2023-08-21  9:27 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On 21/08/2023 2:18 am, Josh Poimboeuf wrote:
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 7eca6a8abbb1..b08af929135d 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>  
>  	if (cpu_has(c, X86_FEATURE_TOPOEXT))
>  		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
> +
> +	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {

This patch is necessary but not sufficient to fix the bugs.  There needs
to be a !cpu_has_hypervisor in here.

Linux must not probe microcode when virtualised.  What it may see
instantaneously on boot (owing to MSR_PRED_CMD being fully passed
through) is not accurate for the lifetime of the VM.

And yes, sucks for you if you're under an unaware hypervisor, but you've
already lost in that case anyway...
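
For illustration, the probe from this patch could be gated roughly like
so (a sketch; the kernel spells the virtualization check
X86_FEATURE_HYPERVISOR):

	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE) &&
	    !cpu_has(c, X86_FEATURE_HYPERVISOR)) {
		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
		else if (c->x86 >= 0x19 &&
			 !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
			setup_force_cpu_cap(X86_FEATURE_SBPB);
		}
	}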

~Andrew

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21  1:19 ` [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB Josh Poimboeuf
@ 2023-08-21  9:34   ` Andrew Cooper
  2023-08-21 16:23     ` Josh Poimboeuf
  2023-08-21 16:49   ` Sean Christopherson
  1 sibling, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2023-08-21  9:34 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
> The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.

"Current hardware".

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c381770bcbf1..dd7472121142 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
>  			return 1;
>  
> -		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> +		if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
> +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +		else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
> +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
> +		else if (data)
>  			return 1;

SBPB | IBPB is an explicitly permitted combination, but will be rejected
by this logic.

FWIW, my patch to Xen went something like:

---8<---
         if ( !cp->feat.ibrsb && !cp->extd.ibpb )
             goto gp_fault; /* MSR available? */
 
-        if ( val & ~PRED_CMD_IBPB )
+        rsvd = ~(PRED_CMD_IBPB |
+                 (cp->extd.sbpb ? PRED_CMD_SBPB : 0));
+
+        if ( val & rsvd )
             goto gp_fault; /* Rsvd bit set? */
 
         if ( v == curr )
---8<---

~Andrew

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
  2023-08-21  5:42   ` Nikolay Borisov
  2023-08-21  9:27   ` Andrew Cooper
@ 2023-08-21 13:59   ` Borislav Petkov
  2 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-21 13:59 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:18:59PM -0700, Josh Poimboeuf wrote:
> Booting with mitigations=off incorrectly prevents the
> X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.
> 
> Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
> type prediction flushing, in which case SBPB should be used instead of
> IBPB.  The current code doesn't allow for that.
> 
> Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects
> and the setting of these feature bits really doesn't belong in the
> mitigation code anyway.  Move it to earlier.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/include/asm/processor.h |  2 --
>  arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
>  arch/x86/kernel/cpu/bugs.c       | 13 +------------
>  3 files changed, 10 insertions(+), 33 deletions(-)

Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21  9:27   ` Andrew Cooper
@ 2023-08-21 14:06     ` Borislav Petkov
  2023-08-23  5:20       ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-21 14:06 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Josh Poimboeuf, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 10:27:50AM +0100, Andrew Cooper wrote:
> This patch is necessary but not sufficient to fix the bugs.  There needs
> to be a !cpu_has_hypervisor in here.

Yes, but in a separate patch.

And I still don't know what exactly we're going to support when Linux
runs as a guest. For example, live migration between Zen1/2 and Zen3/4
won't work due to the alternatives patching...

IBPB won't work either because we detect those feature bits only once
during boot, like every other feature bit...

Whatever it is, I'd like to see it written down first so that we can go and
look it up and point to it when someone's changing that code.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-21  1:19 ` [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
@ 2023-08-21 14:16   ` Borislav Petkov
  2023-08-21 16:36     ` Josh Poimboeuf
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-21 14:16 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:01PM -0700, Josh Poimboeuf wrote:
> If the user has requested no SRSO mitigation, other mitigations can use
> the lighter-weight SBPB instead of IBPB.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index b0ae985aa6a4..10499bcd4e39 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2433,7 +2433,7 @@ static void __init srso_select_mitigation(void)
>  
>  	switch (srso_cmd) {
>  	case SRSO_CMD_OFF:
> -		return;
> +		goto pred_cmd;

Can't do that - you need to synchronize it with retbleed. If retbleed
has selected IBPB mitigation you must not override it.

It's a whole different story whether it makes sense.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 01/22] x86/srso: Fix srso_show_state() side effect
  2023-08-21  6:04   ` Borislav Petkov
@ 2023-08-21 16:17     ` Josh Poimboeuf
  2023-08-22  5:23       ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21 16:17 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 08:04:16AM +0200, Borislav Petkov wrote:
> On Sun, Aug 20, 2023 at 06:18:58PM -0700, Josh Poimboeuf wrote:
> > Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
> > MSR write, and possibly even a (handled) exception if the microcode
> > hasn't been updated.
> > 
> > Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
> > gets set by srso_select_mitigation() if the updated microcode exists.
> > 
> > Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index f081d26616ac..bdd3e296f72b 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
> >
> 
> Please put here a comment - something along the lines of:
> 
> "X86_FEATURE_IBPB_BRTYPE gets set as a result of the presence of the
> needed microcode so checking that is equivalent."
> 
> so that it is clear why it is ok to check this feature bit.

I could do that, but this check ends up getting replaced by a later
patch anyway.

Would you want this comment in srso_select_mitigation()?  After the next
patch it has:

  bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);

Though that seems clear to me already.
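
If you do want it spelled out there, it could read something like this
(a sketch, not the final wording):

	/* The microcode sets X86_FEATURE_IBPB_BRTYPE, so this also probes for it: */
	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);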

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21  9:34   ` Andrew Cooper
@ 2023-08-21 16:23     ` Josh Poimboeuf
  2023-08-21 16:35       ` Sean Christopherson
  0 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21 16:23 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 10:34:38AM +0100, Andrew Cooper wrote:
> On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
> > The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
> 
> "Current hardware".
> 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index c381770bcbf1..dd7472121142 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
> >  			return 1;
> >  
> > -		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> > +		if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
> > +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> > +		else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
> > +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
> > +		else if (data)
> >  			return 1;
> 
> SBPB | IBPB is an explicitly permitted combination, but will be rejected
> by this logic.

Ah yes, I see that now:

  If software writes PRED_CMD with both bits 0 and 7 set to 1, the
  processor performs an IBPB operation.
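
So the check wants to be a reserved-bits test rather than an equality
test, roughly like this (a sketch; a fuller version is discussed later
in the thread):

	u64 valid_bits = PRED_CMD_IBPB | PRED_CMD_SBPB;

	if (data & ~valid_bits)
		return 1;

	/* Both bits set is permitted and behaves as a full IBPB. */
	if (data)
		wrmsrl(MSR_IA32_PRED_CMD, data);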

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21 16:23     ` Josh Poimboeuf
@ 2023-08-21 16:35       ` Sean Christopherson
  2023-08-21 16:46         ` Nikolay Borisov
  2023-08-21 17:05         ` Josh Poimboeuf
  0 siblings, 2 replies; 63+ messages in thread
From: Sean Christopherson @ 2023-08-21 16:35 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andrew Cooper, x86, linux-kernel, Borislav Petkov,
	Peter Zijlstra, Babu Moger, Paolo Bonzini, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023, Josh Poimboeuf wrote:
> On Mon, Aug 21, 2023 at 10:34:38AM +0100, Andrew Cooper wrote:
> > On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
> > > The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
> > 
> > "Current hardware".
> > 
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index c381770bcbf1..dd7472121142 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > >  		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
> > >  			return 1;
> > >  
> > > -		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> > > +		if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
> > > +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> > > +		else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
> > > +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
> > > +		else if (data)
> > >  			return 1;
> > 
> > SBPB | IBPB is an explicitly permitted combination, but will be rejected
> > by this logic.
> 
> Ah yes, I see that now:
> 
>   If software writes PRED_CMD with both bits 0 and 7 set to 1, the
>   processor performs an IBPB operation.

The KVM code being a bit funky isn't doing you any favors.  This is the least
awful approach I could come up with:

	case MSR_IA32_PRED_CMD: {
		u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);

		if (!msr_info->host_initiated) {
			if (!guest_has_pred_cmd_msr(vcpu))
				return 1;

			if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
				reserved_bits |= PRED_CMD_SBPB;
		}

		if (!boot_cpu_has(X86_FEATURE_IBPB))
			reserved_bits |= PRED_CMD_IBPB;

		if (!boot_cpu_has(X86_FEATURE_SBPB))
			reserved_bits |= PRED_CMD_SBPB;

		if (!data)
			break;

		wrmsrl(MSR_IA32_PRED_CMD, data);
		break;
	}

There are more wrinkles though.  KVM passes through MSR_IA32_PRED_CMD based on
IBPB.  If hardware supports both IBPB and SBPB, but userspace does NOT expose
SBPB to the guest, then KVM will create a virtualization hole where the guest can
write SBPB against userspace's wishes.  I haven't read up on SBPB enough to know
whether or not that's problematic.

And conversely, if userspace exposes SBPB but not IBPB, then KVM will intercept
writes to MSR_IA32_PRED_CMD and probably tank guest performance.  Again, I haven't
paid attention enough to know if this is a reasonable configuration, i.e. whether
or not it's worth caring about in KVM.

If the virtualization holes are deemed safe, then the easiest thing would be to
treat MSR_IA32_PRED_CMD as existing if either IBPB or SBPB exists.  E.g.

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index b1658c0de847..e4db844a58fe 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -174,7 +174,8 @@ static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
 static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu)
 {
        return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
-               guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB));
+               guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) ||
+               guest_cpuid_has(vcpu, X86_FEATURE_SBPB));
 }
 
 static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 12688754c556..aa4620fb43f8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3656,17 +3656,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                vcpu->arch.perf_capabilities = data;
                kvm_pmu_refresh(vcpu);
                break;
-       case MSR_IA32_PRED_CMD:
-               if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
-                       return 1;
+       case MSR_IA32_PRED_CMD: {
+               u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
+
+               if (!msr_info->host_initiated) {
+                       if (!guest_has_pred_cmd_msr(vcpu))
+                               return 1;
+
+                       if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+                           !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
+                               reserved_bits |= PRED_CMD_IBPB;
+
+                       if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
+                               reserved_bits |= PRED_CMD_SBPB;
+               }
+
+               if (!boot_cpu_has(X86_FEATURE_IBPB))
+                       reserved_bits |= PRED_CMD_IBPB;
+
+               if (!boot_cpu_has(X86_FEATURE_SBPB))
+                       reserved_bits |= PRED_CMD_SBPB;
 
-               if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
-                       return 1;
                if (!data)
                        break;
 
-               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+               wrmsrl(MSR_IA32_PRED_CMD, data);
                break;
+       }
        case MSR_IA32_FLUSH_CMD:
                if (!msr_info->host_initiated &&
                    !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-21 14:16   ` Borislav Petkov
@ 2023-08-21 16:36     ` Josh Poimboeuf
  2023-08-22  5:54       ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21 16:36 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 04:16:19PM +0200, Borislav Petkov wrote:
> On Sun, Aug 20, 2023 at 06:19:01PM -0700, Josh Poimboeuf wrote:
> > If the user has requested no SRSO mitigation, other mitigations can use
> > the lighter-weight SBPB instead of IBPB.
> > 
> > Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index b0ae985aa6a4..10499bcd4e39 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -2433,7 +2433,7 @@ static void __init srso_select_mitigation(void)
> >  
> >  	switch (srso_cmd) {
> >  	case SRSO_CMD_OFF:
> > -		return;
> > +		goto pred_cmd;
> 
> Can't do that - you need to synchronize it with retbleed. If retbleed
> has selected IBPB mitigation you must not override it.

Hm?  How exactly is this overriding the retbleed IBPB mitigation?

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21 16:35       ` Sean Christopherson
@ 2023-08-21 16:46         ` Nikolay Borisov
  2023-08-21 16:50           ` Sean Christopherson
  2023-08-21 17:05         ` Josh Poimboeuf
  1 sibling, 1 reply; 63+ messages in thread
From: Nikolay Borisov @ 2023-08-21 16:46 UTC (permalink / raw)
  To: Sean Christopherson, Josh Poimboeuf
  Cc: Andrew Cooper, x86, linux-kernel, Borislav Petkov,
	Peter Zijlstra, Babu Moger, Paolo Bonzini, David.Kaplan, gregkh,
	Thomas Gleixner



On 21.08.23 at 19:35, Sean Christopherson wrote:
> On Mon, Aug 21, 2023, Josh Poimboeuf wrote:
>> On Mon, Aug 21, 2023 at 10:34:38AM +0100, Andrew Cooper wrote:
>>> On 21/08/2023 2:19 am, Josh Poimboeuf wrote:
>>>> The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
>>>
>>> "Current hardware".
>>>
>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>> index c381770bcbf1..dd7472121142 100644
>>>> --- a/arch/x86/kvm/x86.c
>>>> +++ b/arch/x86/kvm/x86.c
>>>> @@ -3676,12 +3676,13 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>>   		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
>>>>   			return 1;
>>>>   
>>>> -		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
>>>> +		if (boot_cpu_has(X86_FEATURE_IBPB) && data == PRED_CMD_IBPB)
>>>> +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
>>>> +		else if (boot_cpu_has(X86_FEATURE_SBPB) && data == PRED_CMD_SBPB)
>>>> +			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_SBPB);
>>>> +		else if (data)
>>>>   			return 1;
>>>
>>> SBPB | IBPB is an explicitly permitted combination, but will be rejected
>>> by this logic.
>>
>> Ah yes, I see that now:
>>
>>    If software writes PRED_CMD with both bits 0 and 7 set to 1, the
>>    processor performs an IBPB operation.
> 
> The KVM code being a bit funky isn't doing you any favors.  This is the least
> awful approach I could come up with:
> 
> 	case MSR_IA32_PRED_CMD: {
> 		u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
> 
> 		if (!msr_info->host_initiated) {
> 			if (!guest_has_pred_cmd_msr(vcpu))
> 				return 1;
> 
> 			if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
> 				reserved_bits |= PRED_CMD_SBPB;
> 		}
> 
> 		if (!boot_cpu_has(X86_FEATURE_IBPB))
> 			reserved_bits |= PRED_CMD_IBPB;
> 
> 		if (!boot_cpu_has(X86_FEATURE_SBPB))
> 			reserved_bits |= PRED_CMD_SBPB;
> 
> 		if (!data)
> 			break;
> 
> 		wrmsrl(MSR_IA32_PRED_CMD, data);
> 		break;
> 	}
> 
> There are more wrinkles though.  KVM passes through MSR_IA32_PRED_CMD based on
> IBPB.  If hardware supports both IBPB and SBPB, but userspace does NOT expose
> SBPB to the guest, then KVM will create a virtualization hole where the guest can
> write SBPB against userspace's wishes.  I haven't read up on SBPB enough to know
> whether or not that's problematic.
> 
> And conversely, if userspace exposes SBPB but not IBPB, then KVM will intercept
> writes to MSR_IA32_PRED_CMD and probably tank guest performance.  Again, I haven't
> paid attention enough to know if this is a reasonable configuration, i.e. whether
> or not it's worth caring about in KVM.
> 
> If the virtualization holes are deemed safe, then the easiest thing would be to
> treat MSR_IA32_PRED_CMD as existing if either IBPB or SBPB exists.  E.g.
> 
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index b1658c0de847..e4db844a58fe 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -174,7 +174,8 @@ static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
>   static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu)
>   {
>          return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
> -               guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB));
> +               guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) ||
> +               guest_cpuid_has(vcpu, X86_FEATURE_SBPB));
>   }
>   
>   static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 12688754c556..aa4620fb43f8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3656,17 +3656,33 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>                  vcpu->arch.perf_capabilities = data;
>                  kvm_pmu_refresh(vcpu);
>                  break;
> -       case MSR_IA32_PRED_CMD:
> -               if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
> -                       return 1;
> +       case MSR_IA32_PRED_CMD: {
> +               u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
> +
> +               if (!msr_info->host_initiated) {
> +                       if (!guest_has_pred_cmd_msr(vcpu))
> +                               return 1;
> +
> +                       if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
> +                           !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
> +                               reserved_bits |= PRED_CMD_IBPB;
> +
> +                       if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
> +                               reserved_bits |= PRED_CMD_SBPB;
> +               }
> +
> +               if (!boot_cpu_has(X86_FEATURE_IBPB))
> +                       reserved_bits |= PRED_CMD_IBPB;
> +
> +               if (!boot_cpu_has(X86_FEATURE_SBPB))
> +                       reserved_bits |= PRED_CMD_SBPB;
>   
> -               if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> -                       return 1;


Surely data must be sanitized against reserved_bits before this if is
executed?
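
i.e. something along these lines before the !data check (sketch):

	if (data & reserved_bits)
		return 1;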

>                  if (!data)
>                          break;
>   
> -               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +               wrmsrl(MSR_IA32_PRED_CMD, data);
>                  break;
> +       }
>          case MSR_IA32_FLUSH_CMD:
>                  if (!msr_info->host_initiated &&
>                      !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21  1:19 ` [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB Josh Poimboeuf
  2023-08-21  9:34   ` Andrew Cooper
@ 2023-08-21 16:49   ` Sean Christopherson
  1 sibling, 0 replies; 63+ messages in thread
From: Sean Christopherson @ 2023-08-21 16:49 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Sun, Aug 20, 2023, Josh Poimboeuf wrote:
> The IBPB_BRTYPE and SBPB CPUID bits aren't set by HW.
> 
> From the AMD SRSO whitepaper:
> 
>   "Hypervisor software should synthesize the value of both the
>   IBPB_BRTYPE and SBPB CPUID bits on these platforms for use by guest
>   software."
> 
> These bits are already set during kernel boot.  Manually propagate them
> to the guest.

Setting the bits in kvm_cpu_caps just advertises them to userspace, i.e. it doesn't
propagate them to the guest; that's up to userspace.

> Also, propagate PRED_CMD_SBPB writes.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kvm/cpuid.c | 4 ++++
>  arch/x86/kvm/x86.c   | 9 +++++----
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index d3432687c9e6..cdf703eec42d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -729,6 +729,10 @@ void kvm_set_cpu_caps(void)
>  		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
>  	);
>  
> +	if (cpu_feature_enabled(X86_FEATURE_SBPB))
> +		kvm_cpu_cap_set(X86_FEATURE_SBPB);

This can simply be:

	kvm_cpu_cap_check_and_set(X86_FEATURE_SBPB);

If there's a strong desire to use cpu_feature_enabled() instead of boot_cpu_has(),
then I would rather make that change in kvm_cpu_cap_check_and_set() for all features.

> +	if (cpu_feature_enabled(X86_FEATURE_IBPB_BRTYPE))
> +		kvm_cpu_cap_set(X86_FEATURE_IBPB_BRTYPE);

Assuming IBPB_BRTYPE doesn't require any extra support, it's probably best to add
that one in a separate patch, as SBPB support is likely going to be a bit more
involved.

>  	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
>  		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);

Ah, this snuck in without going through the normal review channels.  This too
can use kvm_cpu_cap_check_and_set().
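
i.e. the three feature bits would collapse to something like (sketch):

	kvm_cpu_cap_check_and_set(X86_FEATURE_SBPB);
	kvm_cpu_cap_check_and_set(X86_FEATURE_IBPB_BRTYPE);
	kvm_cpu_cap_check_and_set(X86_FEATURE_SRSO_NO);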

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21 16:46         ` Nikolay Borisov
@ 2023-08-21 16:50           ` Sean Christopherson
  0 siblings, 0 replies; 63+ messages in thread
From: Sean Christopherson @ 2023-08-21 16:50 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: Josh Poimboeuf, Andrew Cooper, x86, linux-kernel,
	Borislav Petkov, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	David.Kaplan, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023, Nikolay Borisov wrote:
> 
> On 21.08.23 at 19:35, Sean Christopherson wrote:
> > On Mon, Aug 21, 2023, Josh Poimboeuf wrote:
> > +               if (!boot_cpu_has(X86_FEATURE_IBPB))
> > +                       reserved_bits |= PRED_CMD_IBPB;
> > +
> > +               if (!boot_cpu_has(X86_FEATURE_SBPB))
> > +                       reserved_bits |= PRED_CMD_SBPB;
> > -               if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
> > -                       return 1;
> 
> Surely data must be sanitized against reserved_bit before this if is
> executed?

Heh, yeah, I missed that minor detail in my quick write-up.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21 16:35       ` Sean Christopherson
  2023-08-21 16:46         ` Nikolay Borisov
@ 2023-08-21 17:05         ` Josh Poimboeuf
  2023-08-24 16:39           ` Sean Christopherson
  1 sibling, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-21 17:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrew Cooper, x86, linux-kernel, Borislav Petkov,
	Peter Zijlstra, Babu Moger, Paolo Bonzini, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 04:35:41PM +0000, Sean Christopherson wrote:
> There are more wrinkles though.  KVM passes through MSR_IA32_PRED_CMD based on
> IBPB.  If hardware supports both IBPB and SBPB, but userspace does NOT exposes
> SBPB to the guest, then KVM will create a virtualization hole where the guest can
> write SBPB against userspace's wishes.  I haven't read up on SBPB enought o know
> whether or not that's problematic.
> 
> And conversely, if userspace expoes SBPB but not IBPB, then KVM will intercept
> writes to MSR_IA32_PRED_CMD and probably tank guest performance.  Again, I haven't
> paid attention enough to know if this is a reasonable configuration, i.e. whether
> or not it's worth caring about in KVM.
> 
> If the virtualization holes are deemed safe, then the easiest thing would be to
> treat MSR_IA32_PRED_CMD as existing if either IBPB or SBPB exists.  E.g.

I can't think of a reason why the holes wouldn't be safe, i.e. AFAICT
there's no harm in letting the guest do whatever type of barrier it
wants even if it's not technically supported by their configuration.

Question: if we're just always passing PRED_CMD through, what's the
point of having any PRED_CMD code in kvm_set_msr_common at all?

Also, since you're clearly more qualified to write this patch than me,
can I nominate you to do so? :-)

FWIW, the below is my qemu patch which worked for me in testing:

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 97ad229d8b..4b17f0152b 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1054,8 +1054,8 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
             NULL, NULL, NULL, NULL,
             NULL, NULL, NULL, NULL,
             NULL, NULL, NULL, NULL,
-            NULL, NULL, NULL, NULL,
-            NULL, NULL, NULL, NULL,
+            NULL, NULL, NULL, "sbpb",
+            "ibpb-brtype", "srso-no", NULL, NULL,
         },
         .cpuid = { .eax = 0x80000021, .reg = R_EAX, },
         .tcg_features = 0,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index e0771a1043..ff3c714214 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -969,6 +969,12 @@ uint64_t x86_cpu_get_supported_feature_word(FeatureWord w,
 #define CPUID_8000_0021_EAX_NULL_SEL_CLR_BASE    (1U << 6)
 /* Automatic IBRS */
 #define CPUID_8000_0021_EAX_AUTO_IBRS   (1U << 8)
+/* Selective Branch Prediction Barrier */
+#define CPUID_8000_0021_EAX_SBPB        (1U << 27)
+/* MSR_PRED_CMD[IBPB] flushes all branch type predictions */
+#define CPUID_8000_0021_EAX_IBPB_BRTYPE (1U << 28)
+/* CPU is not affected by SRSO */
+#define CPUID_8000_0021_EAX_SRSO_NO     (1U << 29)
 
 #define CPUID_XSAVE_XSAVEOPT   (1U << 0)
 #define CPUID_XSAVE_XSAVEC     (1U << 1)

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 01/22] x86/srso: Fix srso_show_state() side effect
  2023-08-21 16:17     ` Josh Poimboeuf
@ 2023-08-22  5:23       ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-22  5:23 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 09:17:06AM -0700, Josh Poimboeuf wrote:
> I could do that, but this check ends up getting replaced by a later
> patch anyway.
> 
> Would you want this comment in srso_select_mitigation()?  After the next
> patch it has:
> 
>   bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
> 
> Though that seems clear to me already.

Ok, good enough.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-21 16:36     ` Josh Poimboeuf
@ 2023-08-22  5:54       ` Borislav Petkov
  2023-08-22  6:07         ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-22  5:54 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 09:36:49AM -0700, Josh Poimboeuf wrote:
> Hm?  How exactly is this overriding the retbleed IBPB mitigation?

Sorry, clearly -ETOOMANYMITIGATIONS.

I meant the spectre_v2_user thing which does
indirect_branch_prediction_barrier() based on X86_FEATURE_USE_IBPB.

indirect_branch_prediction_barrier() uses x86_pred_cmd to select which
MSR bits to set and it is initialized by default to PRED_CMD_IBPB.
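
For reference, the pieces in question look roughly like this
(paraphrased from bugs.c and nospec-branch.h, not a verbatim quote):

	/* bugs.c */
	u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;

	/* nospec-branch.h */
	static inline void indirect_branch_prediction_barrier(void)
	{
		alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd,
				      X86_FEATURE_USE_IBPB);
	}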

If you goto pred_cmd, you will overwrite it with PRED_CMD_SBPB here.

I think it should not overwrite it and simply return like before.
Meaning: if the SRSO mitigation is off but spectre_v2_user isn't, you
get what you want.

If you do mitigations=off - which is what most use cases do when they
don't care about mitigations - then it'll work too.

I don't see a sensible use case where user->user spectre_v2 is enabled
but SRSO is off. Maybe there is...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-22  5:54       ` Borislav Petkov
@ 2023-08-22  6:07         ` Borislav Petkov
  2023-08-22 21:59           ` Josh Poimboeuf
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-22  6:07 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Tue, Aug 22, 2023 at 07:54:52AM +0200, Borislav Petkov wrote:
> If you goto pred_cmd, you will overwrite it with PRED_CMD_SBPB here.

Looking at this more:

"If SRSO mitigation is not required or is disabled, software may use
SBPB on context/virtual machine switch to help protect against
vulnerabilities like Spectre v2."

I think we actually want this overwrite to happen.

But then if retbleed=ibpb, entry_ibpb() will do bit 0 unconditionally...

Hmm, lemme talk to people.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-22  6:07         ` Borislav Petkov
@ 2023-08-22 21:59           ` Josh Poimboeuf
  2023-08-23  1:27             ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-22 21:59 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Tue, Aug 22, 2023 at 08:07:06AM +0200, Borislav Petkov wrote:
> On Tue, Aug 22, 2023 at 07:54:52AM +0200, Borislav Petkov wrote:
> > If you goto pred_cmd, you will overwrite it with PRED_CMD_SBPB here.
> 
> Looking at this more:
> 
> "If SRSO mitigation is not required or is disabled, software may use
> SBPB on context/virtual machine switch to help protect against
> vulnerabilities like Spectre v2."
> 
> I think we actually want this overwrite to happen.

Yeah, I had seen that.  The combination of spectre_v2_user=on with
srso=off doesn't make a whole lot of sense, but... give the user what
they want and all.  Which would presumably be IBPB *without* the SRSO
mitigation (aka SBPB).

> But then if retbleed=ibpb, entry_ibpb() will do bit 0 unconditionally...
> 
> Hmm, lemme talk to people.

I don't think we need to worry about that, SBPB is >= fam19 but retbleed
is <= fam17.  So either way (0x17 or 0x19) entry_ibpb() should do IBPB.
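
For context, the detection in early_init_amd() encodes exactly that
family split (simplified sketch, comments mine):

        /* Fam 0x17 (affected by retbleed): no SBPB, but IBPB also
         * flushes branch type predictions. */
        if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
                setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
        /* Fam 0x19+ (not affected by retbleed): probe for SBPB by
         * attempting the MSR write. */
        else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
                setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
                setup_force_cpu_cap(X86_FEATURE_SBPB);
        }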

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-22 21:59           ` Josh Poimboeuf
@ 2023-08-23  1:27             ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-23  1:27 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Tue, Aug 22, 2023 at 02:59:01PM -0700, Josh Poimboeuf wrote:
> Yeah, I had seen that.  The combination of spectre_v2_user=on with
> srso=off doesn't make a whole lot of sense, but... give the user what
> they want and all.  Which would presumably be IBPB *without* the SRSO
> mitigation (aka SBPB).

Right.

> I don't think we need to worry about that, SBPB is >= fam19 but retbleed
> is <= fam17.  So either way (0x17 or 0x19) entry_ibpb() should do IBPB.

Right, and SBPB is possible only on family >= 0x19 anyway, which is not
affected by retbleed.

I was worried that we might open some mitigation hole with the overwrite
but it seems we're fine:

family 0x17: retbleed=ibpb, srso=off - is fine, can't do SBPB anyway
family 0x19: retbleed=ibpb, srso=off - fine too, not affected by retbleed, can do SBPB

Unless I'm missing something again, which is very likely by now. ;-\

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-21 14:06     ` Borislav Petkov
@ 2023-08-23  5:20       ` Borislav Petkov
  2023-08-23 12:22         ` Andrew Cooper
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-23  5:20 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Josh Poimboeuf, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023 at 04:06:19PM +0200, Borislav Petkov wrote:
> And I still don't know what exactly we're going to support when Linux
> runs as a guest. For example, live migration between Zen1/2 and Zen3/4
> won't work due to the alternatives patching, for example...
> 
> IBPB won't work either because we detect those feature bits only once
> during boot, like every other feature bit...

The lowest common denominator of features exposed to the guests should
work, as I'm being told. As in, a migration pool spanning Zen2 and Zen3
should hide the SBPB bit from the guests, for example.

Anything else, like kernel code patching based on early detection of
features, won't fly. But that has never flown anyway unless the set of
features stays unchanged.

I'm thinking, if anyone cares really deeply about live migration, they
should say so and then we can see what cases we can support upstream. My
guess is that those who do already have enough engineers to patch their
kernel the way they want it...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off
  2023-08-21  1:19 ` [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off Josh Poimboeuf
@ 2023-08-23  5:57   ` Borislav Petkov
  2023-08-23 20:55     ` Josh Poimboeuf
  2023-08-23 23:02   ` Josh Poimboeuf
  1 sibling, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-23  5:57 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:02PM -0700, Josh Poimboeuf wrote:
> If the user has requested no mitigations with mitigations=off, use the
> lighter-weight SBPB instead of IBPB for other mitigations.
> 
> Note that even with mitigations=off, IBPB/SBPB may still be used for
> Spectre v2 user <-> user protection.  Whether that makes sense is a
> question for another day.

Well, with my user hat on, off means off.

IINM, spectre_v2_parse_cmdline() will give SPECTRE_V2_CMD_NONE to
spectre_v2_select_mitigation() when mitigations=off.

spectre_v2_user_select_mitigation() will use
spectre_v2_select_mitigation()'s result, which turns into
SPECTRE_V2_USER_CMD_NONE and then doesn't enable *BPB either.

So even if we set x86_pred_cmd to SBPB here, it won't do anything
because X86_FEATURE_USE_IBPB won't be set and
indirect_branch_prediction_barrier() will be a NOP.

IOW, I think we should separate the check:

        if (cpu_mitigations_off())
                return;

at the beginning of srso_select_mitigation() so that it is crystal
clear. Maybe even slap a comment over it.
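
I.e., something along these lines at the top of the function (a sketch,
exact comment wording up to you):

        static void __init srso_select_mitigation(void)
        {
                /*
                 * mitigations=off means off: select nothing and leave
                 * x86_pred_cmd alone too.
                 */
                if (cpu_mitigations_off())
                        return;

                /* ... rest of the mitigation selection ... */
        }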

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible
  2023-08-21  1:19 ` [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
@ 2023-08-23  6:06   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-23  6:06 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:03PM -0700, Josh Poimboeuf wrote:
> If the kernel wasn't compiled to support the requested option, print the
> actual option that ends up getting used.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 3 ---
>  1 file changed, 3 deletions(-)

Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation()
  2023-08-21  1:19 ` [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation() Josh Poimboeuf
@ 2023-08-23  6:18   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-23  6:18 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:04PM -0700, Josh Poimboeuf wrote:
> Remove the default case so a compiler warning gets printed

There are other similar cases in this file. We should make it an
explicit rule that this file - being special and security-sensitive and
all - should handle all switch cases explicitly so that it is obvious
what gets selected and that people think about every possible option
when doing a switch-case.

> if we forget one of the enums.

No "we" please - make that passive voice.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-23  5:20       ` Borislav Petkov
@ 2023-08-23 12:22         ` Andrew Cooper
  2023-08-24  4:24           ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Andrew Cooper @ 2023-08-23 12:22 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Josh Poimboeuf, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On 23/08/2023 6:20 am, Borislav Petkov wrote:
> On Mon, Aug 21, 2023 at 04:06:19PM +0200, Borislav Petkov wrote:
>> And I still don't know what exactly we're going to support when Linux
>> runs as a guest. For example, live migration between Zen1/2 and Zen3/4
>> won't work due to the alternatives patching, for example...
>>
>> IBPB won't work either because we detect those feature bits only once
>> during boot, like every other feature bit...
> The lowest common denominator of features exposed to the guests,

Correct.  This is what a hypervisor will do for the SBPB *CPUID* bit.

> should
> work, as I'm being told. As in, Zen2 and Zen3 should hide the SBPB bit
> from the guests, for example.

In my previous reply, I explained why this goes wrong when Linux ignores
the CPUID bit provided by the hypervisor and decides to probe manually.

> I'm thinking if anyone cares really deeply about live migration, anyone
> should say so and then we can see what cases we can support upstream. My
> guess is those who do, have enough engineers to patch their kernel the
> way they want it...

No.

You don't get to take my code, break it when integrating it into Linux,
then dismiss the bug as something hypothetical that you don't want to fix.

It's not *me* needing to patch *my* kernel when this goes wrong.  It's
me (or VMware, or HyperV or one of many KVM vendors) getting a bug
report saying "my VM crashed on migrate", and then having to persuade
Debian and Ubuntu and RH and Oracle and all the other distros to take an
out-of-tree fix into their patchqueue, then release another kernel, and
then come back to this thread and repeat this damn argument.

I'm just trying to cut out the middle misery here.

~Andrew

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off
  2023-08-23  5:57   ` Borislav Petkov
@ 2023-08-23 20:55     ` Josh Poimboeuf
  0 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-23 20:55 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Wed, Aug 23, 2023 at 07:57:20AM +0200, Borislav Petkov wrote:
> On Sun, Aug 20, 2023 at 06:19:02PM -0700, Josh Poimboeuf wrote:
> > If the user has requested no mitigations with mitigations=off, use the
> > lighter-weight SBPB instead of IBPB for other mitigations.
> > 
> > Note that even with mitigations=off, IBPB/SBPB may still be used for
> > Spectre v2 user <-> user protection.  Whether that makes sense is a
> > question for another day.
> 
> Well, with my user hat on, off means off.
> 
> IINM, spectre_v2_parse_cmdline() will give SPECTRE_V2_CMD_NONE to
> spectre_v2_select_mitigation() when mitigations=off.
> 
> spectre_v2_user_select_mitigation() will use the
> spectre_v2_select_mitigation()'s result, which turn into
> SPECTRE_V2_USER_CMD_NONE and then not enable *BPB either.

Ah, right.  I missed how spectre_v2_parse_user_cmdline() checks
spectre_v2_cmd.  That is quite the maze.

> So even if we set x86_pred_cmd to SBPB here, it won't do anything
> because X86_FEATURE_USE_IBPB won't be set and
> indirect_branch_prediction_barrier() will be a NOP.

Right.

> IOW, I think we should separate the check:
> 
>         if (cpu_mitigations_off())
>                 return;
> 
> at the beginning of srso_select_mitigation() so that it is crystal
> clear. Maybe even slap a comment over it.

Yeah, that's fine.  I can drop this patch and add a new patch to do
that.

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off
  2023-08-21  1:19 ` [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off Josh Poimboeuf
  2023-08-23  5:57   ` Borislav Petkov
@ 2023-08-23 23:02   ` Josh Poimboeuf
  1 sibling, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-23 23:02 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:02PM -0700, Josh Poimboeuf wrote:
> If the user has requested no mitigations with mitigations=off, use the
> lighter-weight SBPB instead of IBPB for other mitigations.
> 
> Note that even with mitigations=off, IBPB/SBPB may still be used for
> Spectre v2 user <-> user protection.  Whether that makes sense is a
> question for another day.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 10499bcd4e39..ff5bfe8f0ee9 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2496,8 +2496,7 @@ static void __init srso_select_mitigation(void)
>  	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
>  
>  pred_cmd:
> -	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
> -	     boot_cpu_has(X86_FEATURE_SBPB))
> +	if (boot_cpu_has(X86_FEATURE_SBPB) && srso_mitigation == SRSO_MITIGATION_NONE)
>  		x86_pred_cmd = PRED_CMD_SBPB;

Actually, I remembered this patch had another purpose.  On future HW, if
SRSO_NO is not set by the HW (which Boris said might be the case), and
the SRSO bug bit is not set, then SBPB needs to be set.

I may just get rid of this label altogether and hard-code the setting
of x86_pred_cmd in the two places where it's needed.
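
Something like this (untested sketch; the same assignment would also go
on the path where the CPU doesn't have the SRSO bug bit):

        if (srso_cmd == SRSO_CMD_OFF) {
                if (boot_cpu_has(X86_FEATURE_SBPB))
                        x86_pred_cmd = PRED_CMD_SBPB;
                return;
        }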

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-23 12:22         ` Andrew Cooper
@ 2023-08-24  4:24           ` Borislav Petkov
  2023-08-24 22:04             ` Josh Poimboeuf
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-24  4:24 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Josh Poimboeuf, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Wed, Aug 23, 2023 at 01:22:34PM +0100, Andrew Cooper wrote:
> In my previous reply, I explained why this goes wrong when Linux ignores
> the CPUID bit provided by the hypervisor and decides to probe manually.

Send a patch and explain in its commit message *why* this is needed.

> No.

Hell yeah!

How do you expect us to support use cases we don't know about?!

> You don't get to take my code, break it when integrating it into Linux,
> then dismiss the bug as something hypothetical that you don't want to fix.

I have no clue what you're talking about but it sounds like
a misunderstanding. All I'm saying is that the live migration use cases
the kernel should support should be documented first. If there's no
documentation for them, *then* they're hypothetical.

So patches explaining what we're supporting are welcome.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message
  2023-08-21  1:19 ` [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message Josh Poimboeuf
@ 2023-08-24  4:43   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-24  4:43 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:05PM -0700, Josh Poimboeuf wrote:
> This warning is nothing to get excited over.  Downgrade to pr_info().
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index cda4b5e6a362..e59e09babf8f 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2425,7 +2425,7 @@ static void __init srso_select_mitigation(void)
>  
>  	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
>  		if (has_microcode) {
> -			pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
> +			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
>  			srso_mitigation = SRSO_MITIGATION_IBPB;
>  			goto pred_cmd;
>  		}
> -- 

Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case
  2023-08-21  1:19 ` [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
@ 2023-08-24  4:48   ` Borislav Petkov
  2023-08-24 21:40     ` Josh Poimboeuf
  0 siblings, 1 reply; 63+ messages in thread
From: Borislav Petkov @ 2023-08-24  4:48 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:07PM -0700, Josh Poimboeuf wrote:
> When overriding the requested mitigation with IBPB due to retbleed=ibpb,
> print the actual mitigation.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index da480c089739..4e332707a343 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2427,7 +2427,7 @@ static void __init srso_select_mitigation(void)
>  		if (has_microcode) {
>  			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");

This print was supposed to do that. Now you have two for the IBPB case.
If you want to print it using the usual format, then whack the above.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 11/22] x86/srso: Slight simplification
  2023-08-21  1:19 ` [PATCH 11/22] x86/srso: Slight simplification Josh Poimboeuf
@ 2023-08-24  4:55   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-24  4:55 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:08PM -0700, Josh Poimboeuf wrote:
> Subject: Re: [PATCH 11/22] x86/srso: Slight simplification

The commit title needs a verb and a more descriptive summary.

Strictly speaking, we should use the IBPB selected as the retbleed
mitigation for SRSO only when the SRSO microcode has been applied, so
moving this inside the "has_microcode" branch makes sense.

But even if it is outside that branch, we will still say "no microcode",
so basically it boils down to the same thing.

But sure, ok, slight simplification. :)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-21 17:05         ` Josh Poimboeuf
@ 2023-08-24 16:39           ` Sean Christopherson
  2023-08-24 17:07             ` Josh Poimboeuf
  0 siblings, 1 reply; 63+ messages in thread
From: Sean Christopherson @ 2023-08-24 16:39 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andrew Cooper, x86, linux-kernel, Borislav Petkov,
	Peter Zijlstra, Babu Moger, Paolo Bonzini, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Mon, Aug 21, 2023, Josh Poimboeuf wrote:
> On Mon, Aug 21, 2023 at 04:35:41PM +0000, Sean Christopherson wrote:
> > There are more wrinkles though.  KVM passes through MSR_IA32_PRED_CMD based on
> > IBPB.  If hardware supports both IBPB and SBPB, but userspace does NOT expose
> > SBPB to the guest, then KVM will create a virtualization hole where the guest can
> > write SBPB against userspace's wishes.  I haven't read up on SBPB enough to know
> > whether or not that's problematic.
> > 
> > And conversely, if userspace exposes SBPB but not IBPB, then KVM will intercept
> > writes to MSR_IA32_PRED_CMD and probably tank guest performance.  Again, I haven't
> > paid attention enough to know if this is a reasonable configuration, i.e. whether
> > or not it's worth caring about in KVM.
> > 
> > If the virtualization holes are deemed safe, then the easiest thing would be to
> > treat MSR_IA32_PRED_CMD as existing if either IBPB or SBPB exists.  E.g.
> 
> I can't think of a reason why the holes wouldn't be safe, i.e. AFAICT
> there's no harm in letting the guest do whatever type of barrier it
> wants even if it's not technically supported by their configuration.
> 
> Question: if we're just always passing PRED_CMD through, what's the
> point of having any PRED_CMD code in kvm_set_msr_common at all?

Emulation :-(  KVM's emulator supports WRMSR, and on Intel without unrestricted
guest, it's unfortunately necessary for KVM to emulate large swaths of guest code.
Emulating WRMSR on other hardware setups is far less likely, but still plausible.

KVM's ABI is also that userspace is allowed to write guest MSRs that KVM says exist,
so KVM needs to at least not reject KVM_SET_MSRS.

Whether or not it makes sense for KVM to forward the WRMSR to the hardware is
definitely debatable, especially for writes from host userspace.  But IIUC, at
worst the WRMSR from KVM could be a noisy neighbor for an SMT sibling, so IMO
it's not worth the brain power needed to determine whether or not KVM can safely
omit the WRMSR.
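
For the record, what I have in mind is roughly the below (completely
untested sketch, helper names as in current KVM; the reserved-bits
handling may need tweaking):

        case MSR_IA32_PRED_CMD: {
                u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);

                if (!msr_info->host_initiated) {
                        if (!guest_has_pred_cmd_msr(vcpu))
                                return 1;

                        /* Bits not advertised to the guest are reserved. */
                        if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
                                reserved_bits |= PRED_CMD_SBPB;
                }

                if (data & reserved_bits)
                        return 1;

                if (!data)
                        break;

                wrmsrl(MSR_IA32_PRED_CMD, data);
                break;
        }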

> Also, since you're clearly more qualified to write this patch than me,
> can I nominate you to do so? :-)

Sorry, didn't mean to ghost you.  I can write the patch, but I won't get to it
before next week some time.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB
  2023-08-24 16:39           ` Sean Christopherson
@ 2023-08-24 17:07             ` Josh Poimboeuf
  0 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-24 17:07 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Andrew Cooper, x86, linux-kernel, Borislav Petkov,
	Peter Zijlstra, Babu Moger, Paolo Bonzini, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Thu, Aug 24, 2023 at 09:39:03AM -0700, Sean Christopherson wrote:
> > Also, since you're clearly more qualified to write this patch than me,
> > can I nominate you to do so? :-)
> 
> Sorry, didn't mean to ghost you.  I can write the patch, but I won't get to it
> before next week some time.

No worries, I'll pull in your code (with a reserved_bit fix) for my v2.

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case
  2023-08-24  4:48   ` Borislav Petkov
@ 2023-08-24 21:40     ` Josh Poimboeuf
  0 siblings, 0 replies; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-24 21:40 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Thu, Aug 24, 2023 at 06:48:18AM +0200, Borislav Petkov wrote:
> On Sun, Aug 20, 2023 at 06:19:07PM -0700, Josh Poimboeuf wrote:
> > When overriding the requested mitigation with IBPB due to retbleed=ibpb,
> > print the actual mitigation.
> > 
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index da480c089739..4e332707a343 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -2427,7 +2427,7 @@ static void __init srso_select_mitigation(void)
> >  		if (has_microcode) {
> >  			pr_info("Retbleed IBPB mitigation enabled, using same for SRSO\n");
> 
> This print was supposed to do that. Now you have two for the IBPB case.
> If you want to print it using the usual format, then whack the above.

Sure, and in that case I'll just drop the other one which changes the
above from pr_err() to pr_info().

-- 
Josh

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-24  4:24           ` Borislav Petkov
@ 2023-08-24 22:04             ` Josh Poimboeuf
  2023-08-25  6:42               ` Borislav Petkov
  0 siblings, 1 reply; 63+ messages in thread
From: Josh Poimboeuf @ 2023-08-24 22:04 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Andrew Cooper, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Thu, Aug 24, 2023 at 06:24:20AM +0200, Borislav Petkov wrote:
> On Wed, Aug 23, 2023 at 01:22:34PM +0100, Andrew Cooper wrote:
> > In my previous reply, I explained why this goes wrong when Linux ignores
> > the CPUID bit provided by the hypervisor and decides to probe manually.
> 
> Send a patch and explain in its commit message *why* this is needed.
> 
> > No.
> 
> Hell yeah!
> 
> How do you expect us to support use cases we don't know about?!
> 
> > You don't get to take my code, break it when integrating it into Linux,
> > then dismiss the bug as something hypothetical that you don't want to fix.
> 
> I have no clue what you're talking about but it sounds like
> a misunderstanding. All I'm saying is, the live migration use cases the
> kernel should support, should be documented first. If there's no
> documentation for them, *then* you have hypothetical.
> 
> So patches explaining what we're supporting are welcome.

Something like this?

From: Josh Poimboeuf <jpoimboe@kernel.org>
Subject: [PATCH] x86/srso: Don't probe microcode in a guest

To support live migration, the hypervisor sets the "lowest common
denominator" of features.  Probing the microcode isn't allowed because
any detected features might go away after a migration.

As Andy Cooper states:

  "Linux must not probe microcode when virtualised.  What it may see
  instantaneously on boot (owing to MSR_PRED_CMD being fully passed
  through) is not accurate for the lifetime of the VM."

Rely on the hypervisor to set the needed IBPB_BRTYPE and SBPB bits.

Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/amd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index b08af929135d..28e77c5d6484 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -767,7 +767,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
 
-	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
 		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
 		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-24 22:04             ` Josh Poimboeuf
@ 2023-08-25  6:42               ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-25  6:42 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Andrew Cooper, x86, linux-kernel, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Thu, Aug 24, 2023 at 03:04:40PM -0700, Josh Poimboeuf wrote:
> Something like this?

Yeah, that's solving the immediate issue but what I mean is, I'd prefer
to have a statement saying:

"This is the use cases Foo and Bar related to live migration which we
want to support because of reasons X and Y."

So that we can know what we're actually supporting. I keep hearing of
cloud vendors doing live migration but nothing about what the kernel
running as a guest needs to support. And whether migration across
generations should be supported. At all. And whether the kernel needs to
even support anything.

And if we don't know the use cases we can't even commit to supporting
them. Or not break them in the future.

And above you can replace "live migration" with any other feature that
is requested. It helps immensely if we know how the kernel is used as
most of us tip maintainers, IMNSVHO, are blissfully unaware of live
migration.

I hope that makes more sense.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  2023-08-21  1:19 ` [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
@ 2023-08-25  7:09   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-25  7:09 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:09PM -0700, Josh Poimboeuf wrote:
> The X86_FEATURE_ENTRY_IBPB check is redundant here due to the above
> RETBLEED_MITIGATION_IBPB check.  RETBLEED_MITIGATION_IBPB already
> implies X86_FEATURE_ENTRY_IBPB.  So if we got here and 'has_microcode'
> is true, it means X86_FEATURE_ENTRY_IBPB is not set.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index b27aeb86ed7a..aeddd5ce9f34 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2475,7 +2475,7 @@ static void __init srso_select_mitigation(void)
>  
>  	case SRSO_CMD_IBPB_ON_VMEXIT:
>  		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
> -			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
> +			if (has_microcode) {
>  				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
>  				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
>  			}

Well, frankly, I'd prefer to keep this check explicit as it also
documents the situation. It also protects against potential future
mistakes made while refactoring. And the condition is not so complex
that it stands in the way or makes the code unreadable, while removing
it makes things a bit too subtle considering the amazing maze we're in.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode
  2023-08-21  1:19 ` [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
@ 2023-08-25  7:25   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2023-08-25  7:25 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Sun, Aug 20, 2023 at 06:19:10PM -0700, Josh Poimboeuf wrote:
> + * 'Vulnerable: Safe RET, no microcode':
> +
> +   The "Safe Ret" mitigation (see below) has been applied to protect the

s/Ret/RET/

> @@ -2456,7 +2463,10 @@ static void __init srso_select_mitigation(void)
>  				setup_force_cpu_cap(X86_FEATURE_SRSO);
>  				x86_return_thunk = srso_return_thunk;
>  			}
> -			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
> +			if (has_microcode)
> +				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
> +			else
> +				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
>  		} else {
>  			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
>  		}

You missed one "no microcode" here at out_print:

[    0.553950] Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode, no microcode

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2023-08-25  7:26 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-21  1:18 [PATCH 00/22] SRSO fixes/cleanups Josh Poimboeuf
2023-08-21  1:18 ` [PATCH 01/22] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
2023-08-21  5:42   ` Nikolay Borisov
2023-08-21  6:04   ` Borislav Petkov
2023-08-21 16:17     ` Josh Poimboeuf
2023-08-22  5:23       ` Borislav Petkov
2023-08-21  1:18 ` [PATCH 02/22] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
2023-08-21  5:42   ` Nikolay Borisov
2023-08-21  9:27   ` Andrew Cooper
2023-08-21 14:06     ` Borislav Petkov
2023-08-23  5:20       ` Borislav Petkov
2023-08-23 12:22         ` Andrew Cooper
2023-08-24  4:24           ` Borislav Petkov
2023-08-24 22:04             ` Josh Poimboeuf
2023-08-25  6:42               ` Borislav Petkov
2023-08-21 13:59   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 03/22] KVM: x86: Support IBPB_BRTYPE and SBPB Josh Poimboeuf
2023-08-21  9:34   ` Andrew Cooper
2023-08-21 16:23     ` Josh Poimboeuf
2023-08-21 16:35       ` Sean Christopherson
2023-08-21 16:46         ` Nikolay Borisov
2023-08-21 16:50           ` Sean Christopherson
2023-08-21 17:05         ` Josh Poimboeuf
2023-08-24 16:39           ` Sean Christopherson
2023-08-24 17:07             ` Josh Poimboeuf
2023-08-21 16:49   ` Sean Christopherson
2023-08-21  1:19 ` [PATCH 04/22] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
2023-08-21 14:16   ` Borislav Petkov
2023-08-21 16:36     ` Josh Poimboeuf
2023-08-22  5:54       ` Borislav Petkov
2023-08-22  6:07         ` Borislav Petkov
2023-08-22 21:59           ` Josh Poimboeuf
2023-08-23  1:27             ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 05/22] x86/srso: Fix SBPB enablement for mitigations=off Josh Poimboeuf
2023-08-23  5:57   ` Borislav Petkov
2023-08-23 20:55     ` Josh Poimboeuf
2023-08-23 23:02   ` Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 06/22] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
2023-08-23  6:06   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 07/22] x86/srso: Remove default case in srso_select_mitigation() Josh Poimboeuf
2023-08-23  6:18   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 08/22] x86/srso: Downgrade retbleed IBPB warning to informational message Josh Poimboeuf
2023-08-24  4:43   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 09/22] x86/srso: Simplify exit paths Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 10/22] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
2023-08-24  4:48   ` Borislav Petkov
2023-08-24 21:40     ` Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 11/22] x86/srso: Slight simplification Josh Poimboeuf
2023-08-24  4:55   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 12/22] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
2023-08-25  7:09   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 13/22] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
2023-08-25  7:25   ` Borislav Petkov
2023-08-21  1:19 ` [PATCH 14/22] x86/srso: Fix unret validation dependencies Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 15/22] x86/alternatives: Remove faulty optimization Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 16/22] x86/srso: Unexport untraining functions Josh Poimboeuf
2023-08-21  5:50   ` Nikolay Borisov
2023-08-21  1:19 ` [PATCH 17/22] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 18/22] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 19/22] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 20/22] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 21/22] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
2023-08-21  1:19 ` [PATCH 22/22] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
