* [MODERATED] [PATCH v5 07/11] [PATCH v5 07/10] Linux Patch #7
@ 2018-04-26 2:04 konrad.wilk
2018-04-26 10:35 ` [MODERATED] " Borislav Petkov
2018-04-26 12:50 ` Borislav Petkov
0 siblings, 2 replies; 3+ messages in thread
From: konrad.wilk @ 2018-04-26 2:04 UTC (permalink / raw)
To: speck
Intel CPUs expose methods to:
- detect whether the Reduced Data Speculation capability is available via
CPUID.7.0.EDX[31],
- enable Reduced Data Speculation by setting bit 2 of the SPEC_CTRL MSR (0x48),
- learn via bit 4 of MSR_IA32_ARCH_CAPABILITIES that there is no need to enable
Reduced Data Speculation.
With that in mind, if spec_store_bypass_disable=[auto,on] is selected, we set
the SPEC_CTRL MSR at boot time to enable Reduced Data Speculation if the
platform requires it.
Note that this does not fix the KVM case, where SPEC_CTRL is exposed to
guests, who can muck with it; see the patch titled:
KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest IBRS and ours.
And for the firmware case (IBRS to be set), see the patch titled:
x86/spectre_v2: Read SPEC_CTRL MSR during boot and re-use reserved bits
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v1.2: Expand on the commit description
s/md_v4/mdd/
s/spec_ctrl_msr_on/spec_ctrl_priv/
s/spec_ctrl_msr_off/spec_ctrl_unpriv/
v1.3:
- Add comment about privilege level changes.
v1.4: Simplify and incorporate various suggestions from Jon Masters
- Export a single x86_spec_ctrl_base value with initial bits
v2: Rip out the c_fix_cpu.
Depend on synthetic CPU flag
v3: Move the generic_identify to be done _after_ we figure out whether
we can do the mitigation.
v4: s/MDD/RDS/
s/Memory Disambiguation Disable/Reduced Data Speculation/
Tweak the various 'disable', enabled now that it is called RDS.
Set the x86_spec_ctrl with SPEC_CTRL_RDS if RDS is detected
Fixup x86_set_spec_ctrl to deal with two Bitfields.
v5: s/X86_FEATURE_DISABLE_SSB/X86_FEATURE_SPEC_STORE_BYPASS_DISABLE/
Also check MSR_IA32_ARCH_CAPABILITIES for Bit(4)
Add documentation on what those three flags mean
Add docs on why we set x86_spec_ctrl only on Intel
Add an extra check in ssb_parse_cmdline that RDS is available
In init_intel drop the check for RDS, as X86_FEATURE_SPEC_STORE_BYPASS_DISABLE
is implicitly set only if RDS was detected in ssb_parse_cmdline.
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 43 ++++++++++++++++++++++++++++++--------
arch/x86/kernel/cpu/common.c | 9 ++++----
arch/x86/kernel/cpu/intel.c | 6 ++++++
5 files changed, 48 insertions(+), 13 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index fc5a8378652e..914391814ba7 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -334,6 +334,7 @@
#define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
#define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+#define X86_FEATURE_RDS (18*32+31) /* Reduced Data Speculation */
/*
* BUG word(s)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index c9084dedfcfa..e4a6fb5f0888 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,6 +42,7 @@
#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
#define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */
#define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_RDS (1 << 2) /* Reduced Data Speculation */
#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
#define PRED_CMD_IBPB (1 << 0) /* Indirect Branch Prediction Barrier */
@@ -68,6 +69,7 @@
#define MSR_IA32_ARCH_CAPABILITIES 0x0000010a
#define ARCH_CAP_RDCL_NO (1 << 0) /* Not susceptible to Meltdown */
#define ARCH_CAP_IBRS_ALL (1 << 1) /* Enhanced IBRS support */
+#define ARCH_CAP_RDS_NO (1 << 4) /* Not susceptible to speculative store bypass */
#define MSR_IA32_BBL_CR_CTL 0x00000119
#define MSR_IA32_BBL_CR_CTL3 0x0000011e
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index dff706bba3aa..99cdd44c20bc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -37,8 +37,6 @@ u64 __read_mostly x86_spec_ctrl_base;
void __init check_bugs(void)
{
- identify_boot_cpu();
-
/*
* Read the SPEC_CTRL MSR to account for reserved bits which may have
* unknown values.
@@ -46,11 +44,6 @@ void __init check_bugs(void)
if (boot_cpu_has(X86_FEATURE_IBRS))
rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
- if (!IS_ENABLED(CONFIG_SMP)) {
- pr_info("CPU: ");
- print_cpu_info(&boot_cpu_data);
- }
-
/* Select the proper spectre mitigation before patching alternatives */
spectre_v2_select_mitigation();
@@ -60,6 +53,18 @@ void __init check_bugs(void)
*/
ssb_select_mitigation();
+ /*
+ * Need to do this _after_ spec_store_bypass_select_mitigation to
+ * figure if any mitigation should be latched.
+ */
+ identify_boot_cpu();
+
+ if (!IS_ENABLED(CONFIG_SMP)) {
+ pr_info("CPU: ");
+ print_cpu_info(&boot_cpu_data);
+ }
+
+
#ifdef CONFIG_X86_32
/*
* Check whether we are able to run this kernel safely on SMP.
@@ -117,7 +122,7 @@ static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
void x86_set_spec_ctrl(u64 val)
{
- if (val & ~(SPEC_CTRL_IBRS))
+ if (val & ~(SPEC_CTRL_IBRS | SPEC_CTRL_RDS))
WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
else
wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
@@ -399,6 +404,13 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS))
return SPEC_STORE_BYPASS_CMD_NONE;
+ /*
+ * On Intel it is exposed in CPUID as Reduced Data Speculation
+ * bit.
+ */
+ if (!boot_cpu_has(X86_FEATURE_RDS))
+ return SPEC_STORE_BYPASS_CMD_NONE;
+
if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable"))
return SPEC_STORE_BYPASS_CMD_NONE;
else {
@@ -446,8 +458,21 @@ static void __init ssb_select_mitigation(void)
ssb_mode = mode;
pr_info("%s\n", ssb_strings[mode]);
- if (mode != SPEC_STORE_BYPASS_NONE)
+ /*
+ * We have three CPU feature flags that are in play here:
+ * - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
+ * - X86_FEATURE_RDS - CPU is able to turn off speculative store bypass
+ * - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
+ */
+ if (mode != SPEC_STORE_BYPASS_NONE) {
setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
+ /*
+ * Intel uses the SPEC CTRL MSR Bit(2) for this.
+ */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+ x86_spec_ctrl_base |= SPEC_CTRL_RDS;
+
+ }
}
#undef pr_fmt
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7ae1f2e6caf7..5d55423cce74 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -939,7 +939,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
{
u64 ia32_cap = 0;
- if (!x86_match_cpu(cpu_no_spec_store_bypass))
+ if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+ if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
+ !(ia32_cap & ARCH_CAP_RDS_NO))
setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
if (x86_match_cpu(cpu_no_speculation))
@@ -951,9 +955,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
if (x86_match_cpu(cpu_no_meltdown))
return;
- if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
- rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
/* Rogue Data Cache Load? No! */
if (ia32_cap & ARCH_CAP_RDCL_NO)
return;
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c3af167d0a70..74fdc0fec2e0 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -17,6 +17,7 @@
#include <asm/cpu.h>
#include <asm/intel-family.h>
#include <asm/microcode_intel.h>
+#include <asm/nospec-branch.h>
#include <asm/hwcap2.h>
#include <asm/elf.h>
@@ -189,6 +190,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
setup_clear_cpu_cap(X86_FEATURE_STIBP);
setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
+ setup_clear_cpu_cap(X86_FEATURE_RDS);
}
/*
@@ -682,8 +684,12 @@ static void init_intel(struct cpuinfo_x86 *c)
init_intel_energy_perf(c);
init_intel_misc_features(c);
+
+ if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
+ x86_set_spec_ctrl(SPEC_CTRL_RDS);
}
+
#ifdef CONFIG_X86_32
static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
{
--
2.14.3
* [MODERATED] Re: [PATCH v5 07/11] [PATCH v5 07/10] Linux Patch #7
2018-04-26 2:04 [MODERATED] [PATCH v5 07/11] [PATCH v5 07/10] Linux Patch #7 konrad.wilk
@ 2018-04-26 10:35 ` Borislav Petkov
2018-04-26 12:50 ` Borislav Petkov
1 sibling, 0 replies; 3+ messages in thread
From: Borislav Petkov @ 2018-04-26 10:35 UTC (permalink / raw)
To: speck
On Wed, Apr 25, 2018 at 10:04:22PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> x86/bugs/Intel: set proper CPU features and latch mitigation for RDS
...
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index dff706bba3aa..99cdd44c20bc 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -37,8 +37,6 @@ u64 __read_mostly x86_spec_ctrl_base;
>
> void __init check_bugs(void)
> {
> - identify_boot_cpu();
> -
> /*
> * Read the SPEC_CTRL MSR to account for reserved bits which may have
> * unknown values.
> @@ -46,11 +44,6 @@ void __init check_bugs(void)
> if (boot_cpu_has(X86_FEATURE_IBRS))
> rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>
> - if (!IS_ENABLED(CONFIG_SMP)) {
> - pr_info("CPU: ");
> - print_cpu_info(&boot_cpu_data);
> - }
> -
> /* Select the proper spectre mitigation before patching alternatives */
> spectre_v2_select_mitigation();
>
> @@ -60,6 +53,18 @@ void __init check_bugs(void)
> */
> ssb_select_mitigation();
>
> + /*
> + * Need to do this _after_ spec_store_bypass_select_mitigation to
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As mentioned yesterday already:
s/spec_store_bypass_select_mitigation/ssb_select_mitigation/
in the comment above.
With that:
Reviewed-by: Borislav Petkov <bp@suse.de>
--
Regards/Gruss,
Boris.
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
* [MODERATED] Re: [PATCH v5 07/11] [PATCH v5 07/10] Linux Patch #7
2018-04-26 2:04 [MODERATED] [PATCH v5 07/11] [PATCH v5 07/10] Linux Patch #7 konrad.wilk
2018-04-26 10:35 ` [MODERATED] " Borislav Petkov
@ 2018-04-26 12:50 ` Borislav Petkov
1 sibling, 0 replies; 3+ messages in thread
From: Borislav Petkov @ 2018-04-26 12:50 UTC (permalink / raw)
To: speck
On Wed, Apr 25, 2018 at 10:04:22PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index dff706bba3aa..99cdd44c20bc 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -37,8 +37,6 @@ u64 __read_mostly x86_spec_ctrl_base;
>
> void __init check_bugs(void)
> {
> - identify_boot_cpu();
> -
> /*
> * Read the SPEC_CTRL MSR to account for reserved bits which may have
> * unknown values.
This hunk breaks on AMD because after it I see in dmesg:
[ 0.038706] Spectre V2 : Spectre mitigation: LFENCE not serializing, switching to generic retpoline
instead of
[ 0.049074] Spectre V2 : Mitigation: Full AMD retpoline
because the making LFENCE serializable bit is done in init_amd().
IOW, you need to have the following calling order:
ssb_select_mitigation
identify_boot_cpu
spectre_v2_select_mitigation
Hunk ontop:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 289eefd6c67e..7f105f3d195a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -50,9 +50,6 @@ void __init check_bugs(void)
if (boot_cpu_has(X86_FEATURE_IBRS))
rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
- /* Select the proper spectre mitigation before patching alternatives */
- spectre_v2_select_mitigation();
-
/*
* Select proper mitigation for any exposure to the Speculative Store
* Bypass vulnerability.
@@ -60,11 +57,14 @@ void __init check_bugs(void)
ssb_select_mitigation();
/*
- * Need to do this _after_ spec_store_bypass_select_mitigation to
- * figure if any mitigation should be latched.
+ * Need to do this _after_ ssb_select_mitigation() to figure if any
+ * mitigation should be latched.
*/
identify_boot_cpu();
+ /* Select the proper spectre mitigation before patching alternatives */
+ spectre_v2_select_mitigation();
+
if (!IS_ENABLED(CONFIG_SMP)) {
pr_info("CPU: ");
print_cpu_info(&boot_cpu_data);
--
Regards/Gruss,
Boris.
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)