linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC 0/7] CPU feature evaluation after microcode late loading
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

This RFC patch set aims to provide the ability to re-evaluate all CPU
features and put the proper bug mitigations in place after microcode
late loading.

This was debated last year and this patch set implements a subset of
point #2 from Thomas Gleixner's idea:
https://lore.kernel.org/lkml/alpine.DEB.2.21.1909062237580.1902@nanos.tec.linutronix.de/

Point #1 was sent as an RFC some time ago
(https://lkml.org/lkml/2020/4/27/214), but after a discussion with CPU
vendors (Intel), it turned out that the metadata file is not easily
buildable at this moment, so we could not advance it any further. I
know that without #1 this feature re-evaluation is unlikely to be
embraced.

Patches 1 to 4 bring in changes to functions/variables so that they
can be used at runtime.

Patch 5 re-evaluates the CPU features, patch 6 re-probes the bugs and
patch 7 deals with the speculation blacklist of CPUs/microcode
versions.
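
As a usage note, late loading is triggered through the existing sysfs
reload interface (reload_store() in
arch/x86/kernel/cpu/microcode/core.c), so with this series applied the
feature and bug re-evaluation runs as part of the same request:

  # echo 1 > /sys/devices/system/cpu/microcode/reload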

Thank you,
Mihai Carabas

Mihai Carabas (7):
  x86: cpu: bugs.c: remove init attribute from functions and variables
  x86: cpu: modify boot_command_line to saved_command_line
  x86: kernel: cpu: bugs.c: modify static_cpu_has to boot_cpu_has
  x86: cpu: bugs.c: update cpu_smt_disable to be callable at runtime
  x86: microcode: late loading feature and bug evaluation
  x86: cpu: bugs.c: reprobe bugs at runtime
  x86: cpu: update blacklist spec features for late loading

 arch/x86/include/asm/microcode.h       |   3 +
 arch/x86/include/asm/microcode_intel.h |   1 +
 arch/x86/kernel/cpu/bugs.c             | 142 +++++++++++++++++++--------------
 arch/x86/kernel/cpu/common.c           |  32 +++++++-
 arch/x86/kernel/cpu/cpu.h              |   4 +-
 arch/x86/kernel/cpu/intel.c            |  28 +++++++
 arch/x86/kernel/cpu/microcode/core.c   |  28 +++++++
 arch/x86/kernel/cpu/microcode/intel.c  |   5 +-
 arch/x86/kernel/cpu/tsx.c              |   8 +-
 arch/x86/kernel/process.c              |   8 +-
 arch/x86/kvm/vmx/vmx.c                 |   2 +-
 kernel/cpu.c                           |  18 ++++-
 12 files changed, 201 insertions(+), 78 deletions(-)

-- 
1.8.3.1

* [PATCH RFC 1/7] x86: cpu: bugs.c: remove init attribute from functions and variables
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

Remove the __init/__initconst/__ro_after_init attributes from these
functions and variables in order to be able to call and modify them
after the system has booted.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/kernel/cpu/bugs.c   | 76 ++++++++++++++++++++++----------------------
 arch/x86/kernel/cpu/common.c |  4 +--
 arch/x86/kernel/cpu/cpu.h    |  4 +--
 arch/x86/kernel/cpu/tsx.c    |  6 ++--
 kernel/cpu.c                 |  2 +-
 5 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0b71970..7091947 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,14 +34,14 @@
 
 #include "cpu.h"
 
-static void __init spectre_v1_select_mitigation(void);
-static void __init spectre_v2_select_mitigation(void);
-static void __init ssb_select_mitigation(void);
-static void __init l1tf_select_mitigation(void);
-static void __init mds_select_mitigation(void);
-static void __init mds_print_mitigation(void);
-static void __init taa_select_mitigation(void);
-static void __init srbds_select_mitigation(void);
+static void spectre_v1_select_mitigation(void);
+static void spectre_v2_select_mitigation(void);
+static void ssb_select_mitigation(void);
+static void l1tf_select_mitigation(void);
+static void mds_select_mitigation(void);
+static void mds_print_mitigation(void);
+static void taa_select_mitigation(void);
+static void srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -52,14 +52,14 @@
  * The vendor and possibly platform specific bits which can be modified in
  * x86_spec_ctrl_base.
  */
-static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
+static u64 x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
 
 /*
  * AMD specific MSR info for Speculative Store Bypass control.
  * x86_amd_ls_cfg_ssbd_mask is initialized in identify_boot_cpu().
  */
-u64 __ro_after_init x86_amd_ls_cfg_base;
-u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
+u64 x86_amd_ls_cfg_base;
+u64 x86_amd_ls_cfg_ssbd_mask;
 
 /* Control conditional STIBP in switch_to() */
 DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
@@ -75,7 +75,7 @@
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
 
-void __init check_bugs(void)
+void __ref check_bugs(void)
 {
 	identify_boot_cpu();
 
@@ -228,7 +228,7 @@ static void x86_amd_ssb_disable(void)
 #define pr_fmt(fmt)	"MDS: " fmt
 
 /* Default mitigation for MDS-affected CPUs */
-static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
+static enum mds_mitigations mds_mitigation = MDS_MITIGATION_FULL;
 static bool mds_nosmt __ro_after_init = false;
 
 static const char * const mds_strings[] = {
@@ -237,7 +237,7 @@ static void x86_amd_ssb_disable(void)
 	[MDS_MITIGATION_VMWERV]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
 };
 
-static void __init mds_select_mitigation(void)
+static void mds_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
 		mds_mitigation = MDS_MITIGATION_OFF;
@@ -256,7 +256,7 @@ static void __init mds_select_mitigation(void)
 	}
 }
 
-static void __init mds_print_mitigation(void)
+static void mds_print_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
 		return;
@@ -296,7 +296,7 @@ enum taa_mitigations {
 };
 
 /* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -306,7 +306,7 @@ enum taa_mitigations {
 	[TAA_MITIGATION_TSX_DISABLED]	= "Mitigation: TSX disabled",
 };
 
-static void __init taa_select_mitigation(void)
+static void taa_select_mitigation(void)
 {
 	u64 ia32_cap;
 
@@ -410,7 +410,7 @@ enum srbds_mitigations {
 	SRBDS_MITIGATION_HYPERVISOR,
 };
 
-static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+static enum srbds_mitigations srbds_mitigation = SRBDS_MITIGATION_FULL;
 
 static const char * const srbds_strings[] = {
 	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
@@ -452,7 +452,7 @@ void update_srbds_msr(void)
 	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
 }
 
-static void __init srbds_select_mitigation(void)
+static void srbds_select_mitigation(void)
 {
 	u64 ia32_cap;
 
@@ -498,7 +498,7 @@ enum spectre_v1_mitigation {
 	SPECTRE_V1_MITIGATION_AUTO,
 };
 
-static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init =
+static enum spectre_v1_mitigation spectre_v1_mitigation =
 	SPECTRE_V1_MITIGATION_AUTO;
 
 static const char * const spectre_v1_strings[] = {
@@ -527,7 +527,7 @@ static bool smap_works_speculatively(void)
 	return true;
 }
 
-static void __init spectre_v1_select_mitigation(void)
+static void spectre_v1_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
@@ -585,12 +585,12 @@ static int __init nospectre_v1_cmdline(char *str)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+static enum spectre_v2_mitigation spectre_v2_enabled =
 	SPECTRE_V2_NONE;
 
-static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init =
+static enum spectre_v2_user_mitigation spectre_v2_user_stibp =
 	SPECTRE_V2_USER_NONE;
-static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init =
+static enum spectre_v2_user_mitigation spectre_v2_user_ibpb =
 	SPECTRE_V2_USER_NONE;
 
 #ifdef CONFIG_RETPOLINE
@@ -653,7 +653,7 @@ enum spectre_v2_user_cmd {
 	const char			*option;
 	enum spectre_v2_user_cmd	cmd;
 	bool				secure;
-} v2_user_options[] __initconst = {
+} v2_user_options[] = {
 	{ "auto",		SPECTRE_V2_USER_CMD_AUTO,		false },
 	{ "off",		SPECTRE_V2_USER_CMD_NONE,		false },
 	{ "on",			SPECTRE_V2_USER_CMD_FORCE,		true  },
@@ -663,13 +663,13 @@ enum spectre_v2_user_cmd {
 	{ "seccomp,ibpb",	SPECTRE_V2_USER_CMD_SECCOMP_IBPB,	false },
 };
 
-static void __init spec_v2_user_print_cond(const char *reason, bool secure)
+static void spec_v2_user_print_cond(const char *reason, bool secure)
 {
 	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
 		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
 }
 
-static enum spectre_v2_user_cmd __init
+static enum spectre_v2_user_cmd
 spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
 {
 	char arg[20];
@@ -701,7 +701,7 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
 	return SPECTRE_V2_USER_CMD_AUTO;
 }
 
-static void __init
+static void
 spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 {
 	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
@@ -801,7 +801,7 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
-} mitigation_options[] __initconst = {
+} mitigation_options[] = {
 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
@@ -810,13 +810,13 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
-static void __init spec_v2_print_cond(const char *reason, bool secure)
+static void spec_v2_print_cond(const char *reason, bool secure)
 {
 	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
 		pr_info("%s selected on command line.\n", reason);
 }
 
-static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+static enum spectre_v2_mitigation_cmd spectre_v2_parse_cmdline(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 	char arg[20];
@@ -862,7 +862,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 	return cmd;
 }
 
-static void __init spectre_v2_select_mitigation(void)
+static void spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
@@ -1061,7 +1061,7 @@ void cpu_bugs_smt_update(void)
 #undef pr_fmt
 #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
 
-static enum ssb_mitigation ssb_mode __ro_after_init = SPEC_STORE_BYPASS_NONE;
+static enum ssb_mitigation ssb_mode = SPEC_STORE_BYPASS_NONE;
 
 /* The kernel command line selection */
 enum ssb_mitigation_cmd {
@@ -1082,7 +1082,7 @@ enum ssb_mitigation_cmd {
 static const struct {
 	const char *option;
 	enum ssb_mitigation_cmd cmd;
-} ssb_mitigation_options[]  __initconst = {
+} ssb_mitigation_options[] = {
 	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },    /* Platform decides */
 	{ "on",		SPEC_STORE_BYPASS_CMD_ON },      /* Disable Speculative Store Bypass */
 	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },    /* Don't touch Speculative Store Bypass */
@@ -1090,7 +1090,7 @@ enum ssb_mitigation_cmd {
 	{ "seccomp",	SPEC_STORE_BYPASS_CMD_SECCOMP }, /* Disable Speculative Store Bypass via prctl and seccomp */
 };
 
-static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
+static enum ssb_mitigation_cmd ssb_parse_cmdline(void)
 {
 	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
 	char arg[20];
@@ -1122,7 +1122,7 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
 	return cmd;
 }
 
-static enum ssb_mitigation __init __ssb_select_mitigation(void)
+static enum ssb_mitigation __ssb_select_mitigation(void)
 {
 	enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
 	enum ssb_mitigation_cmd cmd;
@@ -1402,7 +1402,7 @@ void x86_spec_ctrl_setup_ap(void)
 #define pr_fmt(fmt)	"L1TF: " fmt
 
 /* Default mitigation for L1TF-affected CPUs */
-enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+enum l1tf_mitigations l1tf_mitigation = L1TF_MITIGATION_FLUSH;
 #if IS_ENABLED(CONFIG_KVM_INTEL)
 EXPORT_SYMBOL_GPL(l1tf_mitigation);
 #endif
@@ -1448,7 +1448,7 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
 	}
 }
 
-static void __init l1tf_select_mitigation(void)
+static void l1tf_select_mitigation(void)
 {
 	u64 half_pa;
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 95c090a..c11daa6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1018,7 +1018,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
 #define VULNWL_HYGON(family, whitelist)		\
 	VULNWL(HYGON, family, X86_MODEL_ANY, whitelist)
 
-static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+static const struct x86_cpu_id cpu_vuln_whitelist[] = {
 	VULNWL(ANY,	4, X86_MODEL_ANY,	NO_SPECULATION),
 	VULNWL(CENTAUR,	5, X86_MODEL_ANY,	NO_SPECULATION),
 	VULNWL(INTEL,	5, X86_MODEL_ANY,	NO_SPECULATION),
@@ -1094,7 +1094,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
 	{}
 };
 
-static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
+static bool cpu_matches(const struct x86_cpu_id *table, unsigned long which)
 {
 	const struct x86_cpu_id *m = x86_match_cpu(table);
 
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 9d03369..bf025b8 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -51,9 +51,9 @@ enum tsx_ctrl_states {
 	TSX_CTRL_NOT_SUPPORTED,
 };
 
-extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+extern enum tsx_ctrl_states tsx_ctrl_state;
 
-extern void __init tsx_init(void);
+extern void tsx_init(void);
 extern void tsx_enable(void);
 extern void tsx_disable(void);
 #else
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e2ad30e..7c46581 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -17,7 +17,7 @@
 #undef pr_fmt
 #define pr_fmt(fmt) "tsx: " fmt
 
-enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
 
 void tsx_disable(void)
 {
@@ -58,7 +58,7 @@ void tsx_enable(void)
 	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
 }
 
-static bool __init tsx_ctrl_is_supported(void)
+static bool tsx_ctrl_is_supported(void)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
@@ -84,7 +84,7 @@ static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
 	return TSX_CTRL_ENABLE;
 }
 
-void __init tsx_init(void)
+void tsx_init(void)
 {
 	char arg[5] = {};
 	int ret;
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6ff2578..fe67a01 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -391,7 +391,7 @@ void __weak arch_smt_update(void) { }
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 
-void __init cpu_smt_disable(bool force)
+void cpu_smt_disable(bool force)
 {
 	if (!cpu_smt_possible())
 		return;
-- 
1.8.3.1

* [PATCH RFC 2/7] x86: cpu: modify boot_command_line to saved_command_line
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

Switch from boot_command_line to saved_command_line, as
boot_command_line has the __initdata attribute and cannot be used
after the kernel has booted. The command line needs to be evaluated
during microcode late loading in order to enforce the proper
mitigations for the different CPU bugs.
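
For context, the relevant declarations (reproduced approximately from
include/linux/init.h) show why only saved_command_line is safe to use
at runtime:

/* Lives in .init.data; the memory is freed once booting finishes. */
extern char __initdata boot_command_line[];
/* A persistent copy made early in start_kernel(); valid at runtime. */
extern char *saved_command_line;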

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++-----
 arch/x86/kernel/cpu/tsx.c  |  2 +-
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7091947..1760598 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -684,7 +684,7 @@ static void spec_v2_user_print_cond(const char *reason, bool secure)
 		break;
 	}
 
-	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
+	ret = cmdline_find_option(saved_command_line, "spectre_v2_user",
 				  arg, sizeof(arg));
 	if (ret < 0)
 		return SPECTRE_V2_USER_CMD_AUTO;
@@ -822,11 +822,12 @@ static enum spectre_v2_mitigation_cmd spectre_v2_parse_cmdline(void)
 	char arg[20];
 	int ret, i;
 
-	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
+	if (cmdline_find_option_bool(saved_command_line, "nospectre_v2") ||
 	    cpu_mitigations_off())
 		return SPECTRE_V2_CMD_NONE;
 
-	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	ret = cmdline_find_option(saved_command_line, "spectre_v2", arg,
+	    sizeof(arg));
 	if (ret < 0)
 		return SPECTRE_V2_CMD_AUTO;
 
@@ -1096,11 +1097,11 @@ static enum ssb_mitigation_cmd ssb_parse_cmdline(void)
 	char arg[20];
 	int ret, i;
 
-	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable") ||
+	if (cmdline_find_option_bool(saved_command_line, "nospec_store_bypass_disable") ||
 	    cpu_mitigations_off()) {
 		return SPEC_STORE_BYPASS_CMD_NONE;
 	} else {
-		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
+		ret = cmdline_find_option(saved_command_line, "spec_store_bypass_disable",
 					  arg, sizeof(arg));
 		if (ret < 0)
 			return SPEC_STORE_BYPASS_CMD_AUTO;
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index 7c46581..436fa93 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -92,7 +92,7 @@ void tsx_init(void)
 	if (!tsx_ctrl_is_supported())
 		return;
 
-	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+	ret = cmdline_find_option(saved_command_line, "tsx", arg, sizeof(arg));
 	if (ret >= 0) {
 		if (!strcmp(arg, "on")) {
 			tsx_ctrl_state = TSX_CTRL_ENABLE;
-- 
1.8.3.1

* [PATCH RFC 3/7] x86: kernel: cpu: bugs.c: modify static_cpu_has to boot_cpu_has
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

The usage of static_cpu_has() in the bugs.c file is counter-productive,
since the code is executed only once but there is extra effort to patch
it and to keep the alternatives in a special section, so there is both
a space and a time cost.

Quote from _static_cpu_has definition:
/*
 * Static testing of CPU features. Used the same as boot_cpu_has(). It
 * statically patches the target code for additional performance. Use
 * static_cpu_has() only in fast paths, where every cycle counts. Which
 * means that the boot_cpu_has() variant is already fast enough for the
 * majority of cases and you should stick to using it as it is generally
 * only two instructions: a RIP-relative MOV and a TEST.
 */

There are two other places where static_cpu_has() is used that might
be considered critical paths: __speculation_ctrl_update() and
vmx_l1d_flush().

Given these facts, change static_cpu_has() to boot_cpu_has() in order
to bypass the alternative instructions, which cannot be updated at
runtime for now.
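
For context, here is a simplified, non-authoritative sketch of what
boot_cpu_has() boils down to: a plain bit test against boot_cpu_data,
re-evaluated on every call, which is what lets it observe capability
bits that change after a late microcode load. static_cpu_has(), by
contrast, is patched once by the alternatives machinery and is never
re-evaluated afterwards.

/*
 * Rough equivalent of boot_cpu_has(bit); the real macro goes through
 * cpu_has()/test_cpu_cap() with additional compile-time checking.
 */
static inline bool boot_cpu_has_sketch(unsigned int bit)
{
	return test_bit(bit, (unsigned long *)boot_cpu_data.x86_capability);
}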

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/kernel/cpu/bugs.c | 18 +++++++++---------
 arch/x86/kernel/process.c  |  8 ++++----
 arch/x86/kvm/vmx/vmx.c     |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1760598..21b9df3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -168,8 +168,8 @@ void __ref check_bugs(void)
 		guestval |= guest_spec_ctrl & x86_spec_ctrl_mask;
 
 		/* SSBD controlled in MSR_SPEC_CTRL */
-		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
-		    static_cpu_has(X86_FEATURE_AMD_SSBD))
+		if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+		    boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
 		/* Conditional STIBP enabled? */
@@ -186,8 +186,8 @@ void __ref check_bugs(void)
 	 * If SSBD is not handled in MSR_SPEC_CTRL on AMD, update
 	 * MSR_AMD64_LS_CFG or MSR_VIRT_SPEC_CTRL if supported.
 	 */
-	if (!static_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
-	    !static_cpu_has(X86_FEATURE_VIRT_SSBD))
+	if (!boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+	    !boot_cpu_has(X86_FEATURE_VIRT_SSBD))
 		return;
 
 	/*
@@ -195,7 +195,7 @@ void __ref check_bugs(void)
 	 * virtual MSR value. If its not permanently enabled, evaluate
 	 * current's TIF_SSBD thread flag.
 	 */
-	if (static_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
+	if (boot_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
 		hostval = SPEC_CTRL_SSBD;
 	else
 		hostval = ssbd_tif_to_spec_ctrl(ti->flags);
@@ -1164,8 +1164,8 @@ static enum ssb_mitigation __ssb_select_mitigation(void)
 	 * bit in the mask to allow guests to use the mitigation even in the
 	 * case where the host does not enable it.
 	 */
-	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
-	    static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
 		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 	}
 
@@ -1181,8 +1181,8 @@ static enum ssb_mitigation __ssb_select_mitigation(void)
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
 		 * use a completely different MSR and bit dependent on family.
 		 */
-		if (!static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) &&
-		    !static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+		if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) &&
+		    !boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
 			x86_amd_ssb_disable();
 		} else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index f362ce0..6362e0c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -546,14 +546,14 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 	lockdep_assert_irqs_disabled();
 
 	/* Handle change of TIF_SSBD depending on the mitigation method. */
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
+	if (boot_cpu_has(X86_FEATURE_VIRT_SSBD)) {
 		if (tif_diff & _TIF_SSBD)
 			amd_set_ssb_virt_state(tifn);
-	} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
+	} else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
 		if (tif_diff & _TIF_SSBD)
 			amd_set_core_ssb_state(tifn);
-	} else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
-		   static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+	} else if (boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+		   boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
 		updmsr |= !!(tif_diff & _TIF_SSBD);
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 	}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cb22f33..f08ef38 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6145,7 +6145,7 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 
 	vcpu->stat.l1d_flush++;
 
-	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
+	if (boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
 		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
 		return;
 	}
-- 
1.8.3.1

* [PATCH RFC 4/7] x86: cpu: bugs.c: update cpu_smt_disable to be callable at runtime
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

If the microcode late loading and bug mitigation logic needs to turn
off SMT, it must use the CPU hotplug infrastructure, not the boot-time
call cpu_smt_disable().

Update cpu_smt_disable() to use the hotplug infrastructure to turn off
SMT when the system is in the running state.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 kernel/cpu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index fe67a01..719670f 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -393,9 +393,25 @@ void __weak arch_smt_update(void) { }
 
 void cpu_smt_disable(bool force)
 {
+	int ret;
+
 	if (!cpu_smt_possible())
 		return;
 
+	if (system_state == SYSTEM_RUNNING) {
+		if (force)
+			ret = cpuhp_smt_disable(CPU_SMT_FORCE_DISABLED);
+		else
+			ret = cpuhp_smt_disable(CPU_SMT_DISABLED);
+
+		/* If SMT disable did not succeed, print a warning */
+		if (ret)
+			pr_warn("SMT: not disabled %d\n", ret);
+		else
+			pr_info("SMT:%s disabled\n", force ? " Force" : "");
+		return;
+	}
+
 	if (force) {
 		pr_info("SMT: Force disabled\n");
 		cpu_smt_control = CPU_SMT_FORCE_DISABLED;
-- 
1.8.3.1

* [PATCH RFC 5/7] x86: microcode: late loading feature and bug evaluation
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

During microcode late loading, all the CPU features need to be probed
again after the new microcode has been loaded. Before probing the CPU
features and bugs, the current bug bits need to be cleared. The new
function, cpu_clear_bug_bits(), clears all the bug bits so that they
can be set again from scratch.

The logic is as follows (a condensed sketch follows the list):

- for the boot CPU, call cpu_clear_bug_bits(), get_cpu_cap() and then
cpu_set_bug_bits()

- meanwhile, all the other cores are waiting because they need
information from the boot CPU about the forced caps

- in the last step, every CPU calls cpu_clear_bug_bits() and the bug
bits are set by get_cpu_cap() through apply_forced_caps()

- also, when the microcode feature for disabling TSX is not available
at boot time, taa_select_mitigation() cannot disable TSX to ensure a
proper mitigation for TAA, so call tsx_init() on each CPU after the
new microcode has been loaded
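
As a rough, non-authoritative sketch (condensed from the diff below),
the resulting flow in __reload_late() is:

	/* Boot CPU, after its microcode update succeeded: */
	cpu_clear_bug_bits(c);	/* drop the stale bug bits */
	tsx_init();		/* MSR_IA32_TSX_CTRL may exist now */
	get_cpu_cap(c);		/* re-read CPUID into c */
	memcpy(&boot_cpu_data, c, sizeof(boot_cpu_data));
	cpu_set_bug_bits(c);	/* re-derive the bug bits */

	/* All other CPUs, once the boot CPU has published the forced caps: */
	cpu_clear_bug_bits(c);
	tsx_init();
	get_cpu_cap(c);		/* bug bits come back via apply_forced_caps() */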

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/include/asm/microcode.h     |  3 +++
 arch/x86/kernel/cpu/common.c         | 28 +++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/microcode/core.c | 26 ++++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
index 2b7cc53..7a6a5aa 100644
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -142,4 +142,7 @@ static inline void reload_early_microcode(void)			{ }
 get_builtin_firmware(struct cpio_data *cd, const char *name)	{ return false; }
 #endif
 
+void cpu_set_bug_bits(struct cpuinfo_x86 *c);
+void cpu_clear_bug_bits(struct cpuinfo_x86 *c);
+
 #endif /* _ASM_X86_MICROCODE_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c11daa6..f722c1e 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1101,6 +1101,32 @@ static bool cpu_matches(const struct x86_cpu_id *table, unsigned long which)
 	return m && !!(m->driver_data & which);
 }
 
+void cpu_clear_bug_bits(struct cpuinfo_x86 *c)
+{
+	int i;
+	unsigned int bugs[] = {
+		X86_BUG_SPECTRE_V1,
+		X86_BUG_SPECTRE_V2,
+		X86_BUG_SPEC_STORE_BYPASS,
+		X86_FEATURE_IBRS_ENHANCED,
+		X86_BUG_MDS,
+		X86_BUG_MSBDS_ONLY,
+		X86_BUG_SWAPGS,
+		X86_BUG_TAA,
+		X86_BUG_SRBDS,
+		X86_BUG_CPU_MELTDOWN,
+		X86_BUG_L1TF
+	};
+
+	for (i = 0; i < ARRAY_SIZE(bugs); i++)
+		clear_cpu_cap(c, bugs[i]);
+
+	if (c->cpu_index == boot_cpu_data.cpu_index) {
+		for (i = 0; i < ARRAY_SIZE(bugs); i++)
+			setup_clear_cpu_cap(bugs[i]);
+	}
+}
+
 u64 x86_read_arch_cap_msr(void)
 {
 	u64 ia32_cap = 0;
@@ -1111,7 +1137,7 @@ u64 x86_read_arch_cap_msr(void)
 	return ia32_cap;
 }
 
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+void cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index baec68b..2cd983a 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -40,6 +40,8 @@
 #include <asm/cmdline.h>
 #include <asm/setup.h>
 
+#include "../cpu.h"
+
 #define DRIVER_VERSION	"2.2"
 
 static struct microcode_ops	*microcode_ops;
@@ -542,6 +544,20 @@ static int __wait_for_cpus(atomic_t *t, long long timeout)
 	return 0;
 }
 
+static void update_cpu_caps(struct cpuinfo_x86 *c)
+{
+	cpu_clear_bug_bits(c);
+
+	/*
+	 * If we are at late loading, we need to re-initialize tsx because
+	 * MSR_IA32_TSX_CTRL might be available as a result of the microcode
+	 * update.
+	 */
+	tsx_init();
+
+	get_cpu_cap(c);
+}
+
 /*
  * Returns:
  * < 0 - on error
@@ -550,6 +566,7 @@ static int __wait_for_cpus(atomic_t *t, long long timeout)
 static int __reload_late(void *info)
 {
 	int cpu = smp_processor_id();
+	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	enum ucode_state err;
 	int ret = 0;
 
@@ -579,6 +596,12 @@ static int __reload_late(void *info)
 		ret = -1;
 	}
 
+	if (ret == 0 && c->cpu_index == boot_cpu_data.cpu_index) {
+		update_cpu_caps(c);
+		memcpy(&boot_cpu_data, c, sizeof(boot_cpu_data));
+		cpu_set_bug_bits(c);
+	}
+
 wait_for_siblings:
 	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
 		panic("Timeout during microcode update!\n");
@@ -592,6 +615,9 @@ static int __reload_late(void *info)
 	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
 		apply_microcode_local(&err);
 
+	if (ret == 0 && c->cpu_index != boot_cpu_data.cpu_index)
+		update_cpu_caps(c);
+
 	return ret;
 }
 
-- 
1.8.3.1

* [PATCH RFC 6/7] x86: cpu: bugs.c: reprobe bugs at runtime
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

Adapt check_bugs() to be callable at runtime, after microcode late
loading has been done.

Also update the SRBDS logic to reset the default value of
srbds_mitigation and to call update_srbds_msr() on all CPUs.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/kernel/cpu/bugs.c           | 37 ++++++++++++++++++++++++++----------
 arch/x86/kernel/cpu/microcode/core.c |  2 ++
 2 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 21b9df3..c4084d7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -77,17 +77,19 @@
 
 void __ref check_bugs(void)
 {
-	identify_boot_cpu();
+	if (system_state != SYSTEM_RUNNING) {
+		identify_boot_cpu();
 
-	/*
-	 * identify_boot_cpu() initialized SMT support information, let the
-	 * core code know.
-	 */
-	cpu_smt_check_topology();
+		/*
+		 * identify_boot_cpu() initialized SMT support information,
+		 * let the core code know.
+		 */
+		cpu_smt_check_topology();
 
-	if (!IS_ENABLED(CONFIG_SMP)) {
-		pr_info("CPU: ");
-		print_cpu_info(&boot_cpu_data);
+		if (!IS_ENABLED(CONFIG_SMP)) {
+			pr_info("CPU: ");
+			print_cpu_info(&boot_cpu_data);
+		}
 	}
 
 	/*
@@ -112,6 +114,13 @@ void __ref check_bugs(void)
 	srbds_select_mitigation();
 
 	/*
+	 * If we are late loading the microcode, the code below should
+	 * not be executed; it is only needed during boot.
+	 */
+	if (system_state == SYSTEM_RUNNING)
+		return;
+
+	/*
 	 * As MDS and TAA mitigations are inter-related, print MDS
 	 * mitigation until after TAA mitigation selection is done.
 	 */
@@ -452,10 +461,17 @@ void update_srbds_msr(void)
 	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
 }
 
+static void _update_srbds_msr(void *p)
+{
+	update_srbds_msr();
+}
+
 static void srbds_select_mitigation(void)
 {
 	u64 ia32_cap;
 
+	srbds_mitigation = SRBDS_MITIGATION_FULL;
+
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
 
@@ -473,7 +489,8 @@ static void srbds_select_mitigation(void)
 	else if (cpu_mitigations_off() || srbds_off)
 		srbds_mitigation = SRBDS_MITIGATION_OFF;
 
-	update_srbds_msr();
+	on_each_cpu(_update_srbds_msr, NULL, 1);
+
 	pr_info("%s\n", srbds_strings[srbds_mitigation]);
 }
 
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index 2cd983a..6d327a0 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -31,6 +31,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 
+#include <asm/bugs.h>
 #include <asm/microcode_intel.h>
 #include <asm/cpu_device_id.h>
 #include <asm/microcode_amd.h>
@@ -669,6 +670,7 @@ static ssize_t reload_store(struct device *dev,
 
 	mutex_lock(&microcode_mutex);
 	ret = microcode_reload_late();
+	check_bugs();
 	mutex_unlock(&microcode_mutex);
 
 put:
-- 
1.8.3.1

* [PATCH RFC 7/7] x86: cpu: update blacklist spec features for late loading
From: Mihai Carabas @ 2020-07-02 15:18 UTC
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

If a broken microcode was loaded at boot time, all the speculation
features will have been blacklisted. Create a new function for Intel
CPUs that verifies whether a broken microcode is loaded and
whitelists/blacklists the speculation features as needed.

This has to be done before get_cpu_cap() because it uses these
black/white lists.

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
---
 arch/x86/include/asm/microcode_intel.h |  1 +
 arch/x86/kernel/cpu/intel.c            | 28 ++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/microcode/intel.c  |  5 ++++-
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/microcode_intel.h b/arch/x86/include/asm/microcode_intel.h
index d85a07d..74c87cc 100644
--- a/arch/x86/include/asm/microcode_intel.h
+++ b/arch/x86/include/asm/microcode_intel.h
@@ -74,6 +74,7 @@ static inline u32 intel_get_microcode_revision(void)
 extern void show_ucode_info_early(void);
 extern int __init save_microcode_in_initrd_intel(void);
 void reload_ucode_intel(void);
+void check_intel_bad_spectre_microcode(struct cpuinfo_x86 *c);
 #else
 static inline __init void load_ucode_intel_bsp(void) {}
 static inline void load_ucode_intel_ap(void) {}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c25a67a..286168e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -170,6 +170,34 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
 	return false;
 }
 
+/*
+ * check_intel_bad_spectre_microcode() verifies whether a valid microcode is
+ * loaded and whitelists/blacklists the features related to speculation control.
+ */
+void check_intel_bad_spectre_microcode(struct cpuinfo_x86 *c)
+{
+	int i;
+	unsigned int features[] = {
+		X86_FEATURE_IBRS,
+		X86_FEATURE_IBPB,
+		X86_FEATURE_STIBP,
+		X86_FEATURE_SPEC_CTRL,
+		X86_FEATURE_MSR_SPEC_CTRL,
+		X86_FEATURE_INTEL_STIBP,
+		X86_FEATURE_SSBD,
+		X86_FEATURE_SPEC_CTRL_SSBD
+	};
+
+	if (bad_spectre_microcode(c)) {
+		for (i = 0; i < ARRAY_SIZE(features); i++)
+			set_bit(features[i], (unsigned long *)cpu_caps_cleared);
+	} else {
+		for (i = 0; i < ARRAY_SIZE(features); i++)
+			clear_bit(features[i],
+				  (unsigned long *)cpu_caps_cleared);
+	}
+}
+
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
 	u64 misc_enable;
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index 2ef4338..73a5a52 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -854,8 +854,11 @@ static enum ucode_state apply_microcode_intel(int cpu)
 	c->microcode	 = rev;
 
 	/* Update boot_cpu_data's revision too, if we're on the BSP: */
-	if (bsp)
+	if (bsp) {
 		boot_cpu_data.microcode = rev;
+		/* Whitelist/blacklist speculation control features. */
+		check_intel_bad_spectre_microcode(c);
+	}
 
 	return ret;
 }
-- 
1.8.3.1

* Re: [PATCH RFC 0/7] CPU feature evaluation after microcode late loading
From: Sean Christopherson @ 2020-07-02 18:42 UTC
  To: Mihai Carabas
  Cc: linux-kernel, tglx, mingo, bp, x86, boris.ostrovsky, konrad.wilk

On Thu, Jul 02, 2020 at 06:18:20PM +0300, Mihai Carabas wrote:
> This RFC patch set aims to provide the ability to re-evaluate all CPU
> features and put the proper bug mitigations in place after microcode
> late loading.
> 
> This was debated last year and this patch set implements a subset of
> point #2 from Thomas Gleixner's idea:
> https://lore.kernel.org/lkml/alpine.DEB.2.21.1909062237580.1902@nanos.tec.linutronix.de/
> 
> Point #1 was sent as an RFC some time ago
> (https://lkml.org/lkml/2020/4/27/214), but after a discussion with CPU
> vendors (Intel), it turned out that the metadata file is not easily
> buildable at this moment, so we could not advance it any further. I
> know that without #1 this feature re-evaluation is unlikely to be
> embraced.
> 
> Patches 1 to 4 bring in changes to functions/variables so that they
> can be used at runtime.
> 
> Patch 5 re-evaluates the CPU features, patch 6 re-probes the bugs and
> patch 7 deals with the speculation blacklist of CPUs/microcode
> versions.

This misses critical functionality in KVM.  KVM snapshots boot_cpu_data at
module load time (and modifies it further) for ongoing reuse in filtering
what features are advertised to the userspace VMM.  See kvm_set_cpu_caps()
for details.

Even if you found a way to reference kvm_cpu_caps, that still leaves the
problem of existing guests having been created with stale data.  Oh, and
KVM also needs to properly handle MSR_IA32_TSX_CTRL.

Rather than forcefully tearing down guests, what about adding a way to block
updates, e.g. KVM would block updates on module load and unblock on module
exit?  That puts the onus of coordinating updates on the orchestration layer,
where it belongs.

KVM aside, it wouldn't surprise me in the least if there is other code in the
kernel that captures bug state locally.  This series feels like it needs a
fair bit of infrastructure to either detect conflicting usage at build time
or actively prevent consuming stale state at runtime.

There's also the problem of the flags being exposed to userspace via
/proc/cpuinfo, though I suppose that's userspace's problem to not shoot
itself in the foot.

* Re: [PATCH RFC 3/7] x86: kernel: cpu: bugs.c: modify static_cpu_has to boot_cpu_has
From: Thomas Gleixner @ 2020-07-06 14:46 UTC
  To: Mihai Carabas, linux-kernel
  Cc: mingo, bp, x86, boris.ostrovsky, konrad.wilk, Mihai Carabas

Mihai Carabas <mihai.carabas@oracle.com> writes:

> The usage of static_cpu_has() in the bugs.c file is
> counter-productive, since the code is executed only once but there is
> extra effort to patch it and to keep the alternatives in a special
> section, so there is both a space and a time cost.
>
> Quote from _static_cpu_has definition:
> /*
>  * Static testing of CPU features. Used the same as boot_cpu_has(). It
>  * statically patches the target code for additional performance. Use
>  * static_cpu_has() only in fast paths, where every cycle counts. Which
>  * means that the boot_cpu_has() variant is already fast enough for the
>  * majority of cases and you should stick to using it as it is generally
>  * only two instructions: a RIP-relative MOV and a TEST.
>  */
>
> There are two other places where static_cpu_has() is used that might
> be considered critical paths: __speculation_ctrl_update() and
> vmx_l1d_flush().
>
> Given these facts, change static_cpu_has() to boot_cpu_has() in order
> to bypass the alternative instructions, which cannot be updated at
> runtime for now.

Not going to happen. We are not adding 4 conditionals to the context
switch path just to support these late loading horrors. There are
better ways to do that.

Thanks,

        tglx

* Re: [PATCH RFC 0/7] CPU feature evaluation after microcode late loading
From: Thomas Gleixner @ 2020-07-06 15:15 UTC
  To: Sean Christopherson, Mihai Carabas
  Cc: linux-kernel, mingo, bp, x86, boris.ostrovsky, konrad.wilk

Sean Christopherson <sean.j.christopherson@intel.com> writes:
> On Thu, Jul 02, 2020 at 06:18:20PM +0300, Mihai Carabas wrote:
>> This RFC patch set aims to provide the ability to re-evaluate all CPU
>> features and put the proper bug mitigations in place after microcode
>> late loading.
>> 
>> This was debated last year and this patch set implements a subset of
>> point #2 from Thomas Gleixner's idea:
>> https://lore.kernel.org/lkml/alpine.DEB.2.21.1909062237580.1902@nanos.tec.linutronix.de/

An incomplete and dangerous subset.

> KVM aside, it wouldn't surprise me in the least if there is other code in the
> kernel that captures bug state locally.  This series feels like it needs a
> fair bit of infrastructure to either detect conflicting usage at build time
> or actively prevent consuming stale state at runtime.
>
> There's also the problem of the flags being exposed to userspace via
> /proc/cpuinfo, though I suppose that's userspace's problem to not shoot
> itself in the foot.

User space is the least of my worries. Inconsistent kernel state as I
described in my mail referenced above is the far more dangerous issue.

And just reevaluating some bits is not covering the whole
problem. Microcode can bring substantial changes, and as long as we
don't have any clear advice and documentation from the vendors it's
all wishful thinking.

Thanks,

        tglx