* [MODERATED] [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: konrad.wilk @ 2018-04-24  3:16 UTC
  To: speck

x86/spec_store_bypass_disable: Provide boot parameters for the mitigation

Contemporary high performance processors use a common industry-wide
optimization known as "Speculative Store Bypass" in which loads from
addresses to which a recent store has occurred may (speculatively)
see an older value. Intel refers to this feature as "Reduced Data
Speculation", which is part of their "Smart Memory Access"
capability in Nehalem and later generation processors.

Some processors have an implementation bug that enables a cache
side-channel attack against such speculatively read values. An
attacker can create exploit code that allows them to read memory
outside of a sandbox environment (for example, malicious JavaScript
in a web page), or to perform more complex attacks against code
running within the same privilege level, e.g. via the stack.
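
For illustration, a minimal sketch of the code pattern such an attack
targets (a hypothetical gadget, not part of this patch):

        unsigned char probe[256 * 512];

        void victim(unsigned char **slot, unsigned char *safe, unsigned long x)
        {
                unsigned char v;

                *slot = safe;           /* recent store ... */
                v = (*slot)[x];         /* ... the load may speculatively bypass
                                           it and read via the stale pointer
                                           the attacker planted earlier */
                (void)probe[v * 512];   /* transient cache footprint */
        }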

We provide two command line control knobs:

 nospec_store_bypass_disable
 spec_store_bypass_disable=[off,auto,on]

By default affected x86 processors will power on with Speculative
Store Bypass enabled. Hence the provided kernel parameters are written
from the point of view of whether to enable a mitigation or not.
The parameters are as follows:

 - auto - kernel detects whether your CPU model contains a
	  vulnerable implementation of Speculative Store
	  Bypass and picks the most appropriate mitigation

	  Some CPUs may have a Speculative Store Bypass
	  implementation which is not vulnerable to the described attacks.
	  For those, the 'auto' (default) option will pick the right choice.

 - on   - disable Speculative Store Bypass
 - off  - enable Speculative Store Bypass
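
For example, forcing the mitigation on regardless of CPU detection would
look like this (hypothetical boot entry):

        linux /boot/vmlinuz-4.17 root=/dev/sda1 ro spec_store_bypass_disable=on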

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
v1.3:
 - Wrong array was used when figuring mdd_v4= arguments
 - If broken microcode found, also disable MD feature.
 - s/mdd_v4/mdd/
 - s/kernel/userspace/
 - Fix up the commit description
v1.4:
 - Incorporate various feedback and suggestions from Jon Masters
 - Change mdd parameters to allow only big hammer on/off for now
 - Introduce "ssbd" industry standard term for cross-architecture
 - Use mdd_ and MDD_ prefixes to clearly spell out that "enable"
   means we are actually disabling a CPU processor feature (MD)
v2:
 - s/mdd/ssb/ in the x86 code to match with the 'ssb' part
v3:
 - rip out mdd.
 - remove 'Haswell' and such and replace with 'contemporary'
 - s/ssb/spec_store_bypass/ and various other cleanups (Jon Masters)
v4: Rip out 'Memory Disambiguation Disable'
 - Fix up documentation.
 - Rip out the X86_FEATURE_STBUF_BYPASS
 - s/X86_FEATURE_STBUF_MITIGATE/X86_FEATURE_STBUF_BYPASS_MITIGATE/
 - On the functions change spec_store_bypass_ to ssb_

one patch earlier
---
 Documentation/admin-guide/kernel-parameters.txt | 32 +++++++++
 arch/x86/include/asm/cpufeatures.h              |  1 +
 arch/x86/include/asm/nospec-branch.h            |  6 ++
 arch/x86/kernel/cpu/bugs.c                      | 96 +++++++++++++++++++++++++
 4 files changed, 135 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 1d1d53f85ddd..93633f2ac69a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2647,6 +2647,9 @@
 			allow data leaks with this option, which is equivalent
 			to spectre_v2=off.
 
+	nospec_store_bypass_disable
+			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
+
 	noxsave		[BUGS=X86] Disables x86 extended register state save
 			and restore using xsave. The kernel will fallback to
 			enabling legacy floating-point and sse state.
@@ -3997,6 +4000,35 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spec_store_bypass_disable=
+			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
+			(Speculative Store Bypass vulnerability)
+			Certain CPUs are vulnerable to an exploit against
+			a common industry-wide performance optimization known
+			as "Speculative Store Bypass" in which recent stores
+			to the same memory location may not be observed by
+			later loads during speculative execution. The idea
+			is that such stores are unlikely and that they can
+			be detected prior to instruction retirement at the
+			end of a particular speculative execution window.
+
+			In vulnerable processors, the speculatively forwarded
+			store can be used in a cache side channel attack, for
+			example to read memory to which the attacker does not
+			directly have access (e.g. inside sandboxed code).
+
+			This parameter controls whether the Speculative Store
+			Bypass optimization is used.
+
+			on - unconditionally disable Speculative Store Bypass
+			off - unconditionally enable Speculative Store Bypass
+			auto - kernel detects whether your CPU model contains a
+			       vulnerable implementation of Speculative Store
+			       Bypass and picks the most appropriate mitigation
+
+			Not specifying this option is equivalent to
+			spec_store_bypass_disable=auto.
+
 	spia_io_base=	[HW,MTD]
 	spia_fio_base=
 	spia_pedr=
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c70b9a5d5045..3d6c684f98b4 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -214,6 +214,7 @@
 
 #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
+#define X86_FEATURE_DISABLE_SSB		( 7*32+23) /* "" Disable Speculative Store Bypass. */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index f5fb6783b26c..6f3d22458a8a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -238,6 +238,12 @@ extern u64 x86_get_spec_ctrl(void);
 extern void x86_set_guest_spec_ctrl(u64);
 extern void x86_restore_host_spec_ctrl(u64);
 
+/* The Speculative Store Bypass disable variants */
+enum ssb_mitigation {
+	SPEC_STORE_BYPASS_NONE,
+	SPEC_STORE_BYPASS_DISABLE,
+};
+
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 96081fbe37f1..8e79a06290ea 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -27,6 +27,7 @@
 #include <asm/intel-family.h>
 
 static void __init spectre_v2_select_mitigation(void);
+static void __init ssb_select_mitigation(void);
 static void __init spec_ctrl_save_msr(void);
 
 void __init check_bugs(void)
@@ -43,6 +44,12 @@ void __init check_bugs(void)
 	/* Select the proper spectre mitigation before patching alternatives */
 	spectre_v2_select_mitigation();
 
+	/*
+	 * Select proper mitigation for any exposure to the Speculative Store
+	 * Bypass vulnerability.
+	 */
+	ssb_select_mitigation();
+
 #ifdef CONFIG_X86_32
 	/*
 	 * Check whether we are able to run this kernel safely on SMP.
@@ -359,6 +366,92 @@ static void __init spectre_v2_select_mitigation(void)
 	}
 }
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
+
+static enum ssb_mitigation ssb_mode = SPEC_STORE_BYPASS_NONE;
+
+/* The kernel command line selection */
+enum ssb_mitigation_cmd {
+	SPEC_STORE_BYPASS_CMD_NONE,
+	SPEC_STORE_BYPASS_CMD_AUTO,
+	SPEC_STORE_BYPASS_CMD_ON,
+};
+
+static const char *ssb_strings[] = {
+	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
+	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled"
+};
+
+static const struct {
+	const char *option;
+	enum ssb_mitigation_cmd cmd;
+} ssb_mitigation_options[] = {
+	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
+	{ "on",		SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
+	{ "off",	SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */
+};
+
+static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
+{
+	char arg[20];
+	int ret, i;
+	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
+
+	if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS))
+		return SPEC_STORE_BYPASS_CMD_NONE;
+
+	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable"))
+		return SPEC_STORE_BYPASS_CMD_NONE;
+	else {
+		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
+					  arg, sizeof(arg));
+		if (ret < 0)
+			return SPEC_STORE_BYPASS_CMD_AUTO;
+
+		for (i = 0; i < ARRAY_SIZE(ssb_mitigation_options); i++) {
+			if (!match_option(arg, ret, ssb_mitigation_options[i].option))
+				continue;
+
+			cmd = ssb_mitigation_options[i].cmd;
+			break;
+		}
+
+		if (i >= ARRAY_SIZE(ssb_mitigation_options)) {
+			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+			return SPEC_STORE_BYPASS_CMD_AUTO;
+		}
+	}
+
+	return cmd;
+}
+
+static void __init ssb_select_mitigation(void)
+{
+	enum ssb_mitigation_cmd cmd = ssb_parse_cmdline();
+	enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
+
+	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS) &&
+	    (cmd == SPEC_STORE_BYPASS_CMD_NONE ||
+	     cmd == SPEC_STORE_BYPASS_CMD_AUTO))
+		return;
+
+	switch (cmd) {
+	case SPEC_STORE_BYPASS_CMD_AUTO:
+	case SPEC_STORE_BYPASS_CMD_ON:
+		mode = SPEC_STORE_BYPASS_DISABLE;
+		break;
+	case SPEC_STORE_BYPASS_CMD_NONE:
+		break;
+	}
+
+	ssb_mode = mode;
+	pr_info("%s\n", ssb_strings[mode]);
+
+	if (mode != SPEC_STORE_BYPASS_NONE)
+		setup_force_cpu_cap(X86_FEATURE_DISABLE_SSB);
+}
+
 #undef pr_fmt
 
 #ifdef CONFIG_SYSFS
@@ -385,6 +478,9 @@ ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       spectre_v2_module_string());
 
+	case X86_BUG_SPEC_STORE_BYPASS:
+		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+
 	default:
 		break;
 	}
-- 
2.14.3


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-24 13:21 UTC
  To: speck

On Mon, Apr 23, 2018 at 11:16:07PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> x86/spec_store_bypass_disable: Provide boot parameters for the mitigation

"x86/bugs: ..."

> 
> Contemporary high performance processors use a common industry-wide
> optimization known as "Speculative Store Bypass" in which loads from
> addresses to which a recent store has occurred may (speculatively)
> see an older value. Intel refers to this feature as "Reduced Data
> Speculation", which is part of their "Smart Memory Access"
> capability in Nehalem and later generation processors.

...

> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index c70b9a5d5045..3d6c684f98b4 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -214,6 +214,7 @@
>  
>  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
>  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> +#define X86_FEATURE_DISABLE_SSB		( 7*32+23) /* "" Disable Speculative Store Bypass. */

I think Linus explicitly didn't want the "SSB" abbreviation. I.e., this
should be:

X86_FEATURE_SPEC_STORE_BYPASS_DISABLE

or so.

> +#undef pr_fmt
> +#define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
> +
> +static enum ssb_mitigation ssb_mode = SPEC_STORE_BYPASS_NONE;
> +
> +/* The kernel command line selection */
> +enum ssb_mitigation_cmd {
> +	SPEC_STORE_BYPASS_CMD_NONE,
> +	SPEC_STORE_BYPASS_CMD_AUTO,
> +	SPEC_STORE_BYPASS_CMD_ON,
> +};
> +
> +static const char *ssb_strings[] = {
> +	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
> +	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled"
> +};
> +
> +static const struct {
> +	const char *option;
> +	enum ssb_mitigation_cmd cmd;
> +} ssb_mitigation_options[] = {
> +	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
> +	{ "on",		SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
> +	{ "off",	SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */
> +};
> +
> +static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
> +{
> +	char arg[20];
> +	int ret, i;
> +	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
> +
> +	if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS))
> +		return SPEC_STORE_BYPASS_CMD_NONE;

And the proper logic with the X86_BUG/X86_FEATURE bits came to me in my
sleep this morning. :-)

And the logic is the same as with the other bugs, we need three flags:

X86_BUG_SPEC_STORE_BYPASS		- CPU is affected by bug
X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	- CPU can disable store bypassing
X86_FEATURE_SPEC_STORE_BYPASS_IN_USE	- CPU has disabled store bypassing

Then, the checks in this ssb_parse_cmdline() function should be:

static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
{
	char arg[20];
	int ret, i;
	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;

	/* CPU not affected? */
	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
		return SPEC_STORE_BYPASS_CMD_NONE;

	/* CPU can even enable the mitigation? */
	if (!boot_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
		return SPEC_STORE_BYPASS_CMD_NONE;

Now, to the vendors:

Intel: there we set X86_FEATURE_RDS which means, we can use either
X86_FEATURE_RDS or X86_FEATURE_SPEC_STORE_BYPASS_DISABLE as those two
mean one thing: the CPU can disable the bypassing.

Then, early_identify_cpu() will have that feature bit set on Intel so
that you can query it in ssb_parse_cmdline().
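
For reference, a sketch of where that bit would come from on Intel -
assuming it is enumerated in CPUID.7.EDX[31] (as discussed further down
the thread) and captured together with the rest of that leaf in common.c:

        if (c->cpuid_level >= 0x00000007) {
                cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
                c->x86_capability[CPUID_7_EDX] = edx;   /* bit 31 -> 'rds' */
        }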

AMD: There we have to do something like:

---
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 12bc0a1139da..1d225935724e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -673,6 +673,9 @@ static void early_init_amd(struct cpuinfo_x86 *c)
                set_cpu_bug(c, X86_BUG_AMD_E400);
 
        early_detect_mem_encrypt(c);
+
+       if (c->x86 >= 0x15 && c->x86 <= 0x17)
+               set_cpu_cap(c, X86_FEATURE_RDS);
 }
 
 static void init_amd_k8(struct cpuinfo_x86 *c)
---

with that done, you do at the end of ssb_select_mitigation():

        if (mode != SPEC_STORE_BYPASS_NONE)
                setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_IN_USE);

because you've done all the necessary checks earlier.

From now on you test X86_FEATURE_SPEC_STORE_BYPASS_IN_USE everywhere else.

Which means, the AMD side in init_amd() will have to look like this:

	if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_IN_USE))
		msr_set_bit(MSR_AMD64_LS_CFG, 54);

The Intel side in init_intel() respectively:

	if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_IN_USE)
		x86_set_spec_ctrl(SPEC_CTRL_RDS);

and from now on, the big hammer is enabled.

Yap, this finally makes sense!

;-)

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Linus Torvalds @ 2018-04-24 15:13 UTC
  To: speck



On Tue, 24 Apr 2018, speck for Borislav Petkov wrote:
> 
> I think Linus explicitly didn't want the "SSB" abbreviation.

Well, it wasn't just SSB. Some of the other abbreviations have been 
equally bad.

For the specific case of the x86 cpuid flags (ie the _architected_ ones, 
not the extra bug-disable flags that we then add for our own use), I 
suspect we should use whatever name Intel ends up using (which is 
apparently not SSB, and no longer MD either).

But that's purely for the actual Intel enumeration (and the listing in 
/proc/cpuinfo that matches it).

Anywhere else, I'd really like it to be written out. That's _particularly_ 
true for anything user-facing (ie things like

  /sys/devices/system/cpu/vulnerabilities/name-goes-here

but also the kernel command line). But it's true even for kernel code that 
only developers see, particularly any kernel code that then tests for this 
feature in odd places, so any test for "static_cpu_has(..)" etc should be 
something that actually talks about "SPEC_STORE_BYPASS" (or whatever 
people agreed on - I don't want to force any particular naming, I just 
want to enforce it being _legible_ not some random acronym that you have 
to be intimately familiar with to recognize)

To make a long post short: I agree with everything Borislav said, I just 
wanted to clarify that it wasn't SSB in _particular_ I hated, it was any 
random TLA model.

               Linus


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-24 15:46 UTC
  To: speck

On Tue, Apr 24, 2018 at 03:21:57PM +0200, speck for Borislav Petkov wrote:
> On Mon, Apr 23, 2018 at 11:16:07PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> > x86/spec_store_bypass_disable: Provide boot parameters for the mitigation
> 
> "x86/bugs: ..."
> 
> > 
> > Contemporary high performance processors use a common industry-wide
> > optimization known as "Speculative Store Bypass" in which loads from
> > addresses to which a recent store has occurred may (speculatively)
> > see an older value. Intel refers to this feature as "Reduced Data
> > Speculation", which is part of their "Smart Memory Access"
> > capability in Nehalem and later generation processors.
> 
> ...
> 
> > diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> > index c70b9a5d5045..3d6c684f98b4 100644
> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -214,6 +214,7 @@
> >  
> >  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> >  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > +#define X86_FEATURE_DISABLE_SSB		( 7*32+23) /* "" Disable Speculative Store Bypass. */
> 
> I think Linus explicitly didn't want the "SSB" abbreviation. I.e., this
> should be:
> 
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE

You OK with it breaking the nice alignment of the '( 7*32+..' column?
It will look like this:

#define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
#define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */

> 
> or so.
> 
> > +#undef pr_fmt
> > +#define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
> > +
> > +static enum ssb_mitigation ssb_mode = SPEC_STORE_BYPASS_NONE;
> > +
> > +/* The kernel command line selection */
> > +enum ssb_mitigation_cmd {
> > +	SPEC_STORE_BYPASS_CMD_NONE,
> > +	SPEC_STORE_BYPASS_CMD_AUTO,
> > +	SPEC_STORE_BYPASS_CMD_ON,
> > +};
> > +
> > +static const char *ssb_strings[] = {
> > +	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
> > +	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled"
> > +};
> > +
> > +static const struct {
> > +	const char *option;
> > +	enum ssb_mitigation_cmd cmd;
> > +} ssb_mitigation_options[] = {
> > +	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
> > +	{ "on",		SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
> > +	{ "off",	SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */
> > +};
> > +
> > +static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
> > +{
> > +	char arg[20];
> > +	int ret, i;
> > +	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
> > +
> > +	if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS))
> > +		return SPEC_STORE_BYPASS_CMD_NONE;
> 
> And the proper logic with the X86_BUG/X86_FEATURE bits came to me in my
> sleep this morning. :-)
> 
> And the logic is the same as with the other bugs, we need three flags:
> 
> X86_BUG_SPEC_STORE_BYPASS		- CPU is affected by bug
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	- CPU can disable store bypassing

Right; on Intel it can also be guarded by X86_FEATURE_RDS, and on AMD
it is implicit from the families (15h->17h). (The v4 patch #7 has that
logic.) The v2 (or v1.7) of the posting had the logic you just described,
but with different names.

> X86_FEATURE_SPEC_STORE_BYPASS_IN_USE	- CPU has disabled store bypassing
> 
> Then, the checks in this ssb_parse_cmdline() function should be:
> 
> static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
> {
> 	char arg[20];
> 	int ret, i;
> 	enum ssb_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
> 
> 	/* CPU not affected? */
> 	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> 		return SPEC_STORE_BYPASS_CMD_NONE;
> 
> 	/* CPU can even enable the mitigation? */
> 	if (!boot_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
> 		return SPEC_STORE_BYPASS_CMD_NONE;
> 
> Now, to the vendors:
> 
> Intel: there we set X86_FEATURE_RDS which means, we can use either
> X86_FEATURE_RDS or X86_FEATURE_SPEC_STORE_BYPASS_DISABLE as those two
> mean one thing: the CPU can disable the bypassing.
> 
> Then, early_identify_cpu() will have that feature bit set on Intel so
> that you can query it in ssb_parse_cmdline().
> 
> AMD: There we have to do something like:
> 
> ---
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 12bc0a1139da..1d225935724e 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -673,6 +673,9 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>                 set_cpu_bug(c, X86_BUG_AMD_E400);
>  
>         early_detect_mem_encrypt(c);
> +
> +       if (c->x86 >= 0x15 && c->x86 <= 0x17)
> +               set_cpu_cap(c, X86_FEATURE_RDS);

Surely you meant
	set_cpu_cap(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE); ?
>  }
>  
>  static void init_amd_k8(struct cpuinfo_x86 *c)
> ---
> 
> with that done, you do at the end of ssb_select_mitigation():
> 
>         if (mode != SPEC_STORE_BYPASS_NONE)
>                 setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_IN_USE);
> 
> because you've done all the necessary checks earlier.

And you also need to update the global x86_spec_ctrl_base to allow the
SPEC_CTRL_RDS bit. Otherwise the VMEXIT/VMENTER paths will overwrite it
with the original (boot-up) values instead of the ones we want.

Meaning you either add:

                if (boot_cpu_has(X86_FEATURE_RDS))
                        x86_spec_ctrl_base |= SPEC_CTRL_RDS;

Or intel/cpu.c has to have a way to reach out and modify
x86_spec_ctrl_base. So we could have another accessor:
x86_spec_ctrl_update(u64)

or so? I think having the modification in bugs.c makes it easier.
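
A sketch of what such an accessor could look like (x86_spec_ctrl_update
is just the name proposed above, nothing final):

        void x86_spec_ctrl_update(u64 mask)
        {
                /* record bits the VMEXIT path must restore into SPEC_CTRL */
                x86_spec_ctrl_base |= mask;
        }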

> 
> From now on you test X86_FEATURE_SPEC_STORE_BYPASS_IN_USE everywhere else.
> 
> Which means, the AMD side in init_amd() will have to look like this:
> 
> 	if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_IN_USE))
> 		msr_set_bit(MSR_AMD64_LS_CFG, 54);
> 
> The Intel side in init_intel() respectively:
> 
> 	if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_IN_USE)
> 		x86_set_spec_ctrl(SPEC_CTRL_RDS);
> 
> and from now on, the big hammer is enabled.
> 
> Yap, this finally makes sense!
> 
> ;-)
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> -- 


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-24 16:36 UTC
  To: speck

On Tue, Apr 24, 2018 at 11:46:53AM -0400, speck for Konrad Rzeszutek Wilk wrote:
> You OK with it breaking the nice alignment of the '( 7*32+..' column?
> It will look like this:
> 
> #define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> #define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */

Yeah, Ingo will have to realign them again :-)

> Right, which on Intel can also be guarded by X86_FEATURE_RDS and on AMD
> it was implicit by the families (15h->17h). (The v4, patch #7 has that
> logic). The v2 (or v1.7) of the posting had the logic you just described but
> with different names.

So I'd like for us to use a single one - either X86_FEATURE_RDS or
X86_FEATURE_SPEC_STORE_BYPASS_DISABLE globally. No need to cause
unnecessary confusion with two things meaning roughly the same thing.

> Surely you meant
> 	set_cpu_cap(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE); ?

Right, as said, we can choose which one we take as both mean roughly the
same thing. We can do X86_FEATURE_RDS, which is the architectural one, or
do our own - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE.

I'm leaning more towards RDS as it is architectural and will be visible
in cpuinfo.

Oh, and it won't break the alignment in cpufeatures.h :-)

> And you also need to update the global x86_spec_ctrl_base to allow the
> SPEC_CTRL_RDS bit. Otherwise the VMEXIT/VMENTER paths will overwrite it
> with the original (boot-up) values instead of the ones we want.
> 
> Meaning you either add:
> 
>                 if (boot_cpu_has(X86_FEATURE_RDS))
>                         x86_spec_ctrl_base |= SPEC_CTRL_RDS;
> 
> Or intel/cpu.c has to have a way to reach out and modify
> x86_spec_ctrl_base. So we could have another accessor:
> x86_spec_ctrl_update(u64)
> 
> or so? I think having the modification in bugs.c makes it easier.

Right.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-24 18:07 UTC
  To: speck

On Tue, Apr 24, 2018 at 06:36:56PM +0200, speck for Borislav Petkov wrote:
> On Tue, Apr 24, 2018 at 11:46:53AM -0400, speck for Konrad Rzeszutek Wilk wrote:
> > You OK with it breaking the nice alignment of the '( 7*32+..' column?
> > It will look like this:
> > 
> > #define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> > #define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
> 
> Yeah, Ingo will have to realign them again :-)

He is going to be mad at me. I will make a note in the patch
saying you wanted this name.
> 
> > Right, which on Intel can also be guarded by X86_FEATURE_RDS and on AMD
> > it was implicit by the families (15h->17h). (The v4, patch #7 has that
> > logic). The v2 (or v1.7) of the posting had the logic you just described but
> > with different names.
> 
> So I'd like for us to use a single one - either X86_FEATURE_RDS or
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE globally. No need to cause
> unnecessary confusion with two things meaning roughly the same thing.
> 
> > Surely you meant
> > 	set_cpu_cap(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE); ?
> 
> Right, as said, we can choose which one we take as both mean roughly the
> same thing. We can do X86_FEATURE_RDS which is the architectural or do
> our own - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE.
> 
> I'm leaning more towards RDS as it is architectural and will be visible
> in cpuinfo.

And it nicely slurps it up from the CPUID.7.EDX[31].

Which means patch #7 is good - just needs a better name for the
X86_FEATURE_SPEC_STORE_BYPASS_DISABLE_BECAUSE_I_LIKE_LONG_NAMES

:-)
> 
> Oh, and it won't break the alignment in cpufeatures.h :-)
> 
> > And you also need to update the global x86_spec_ctrl_base to allow the
> > SPEC_CTRL_RDS bit. Otherwise the VMEXIT/VMENTER paths will overwrite it
> > with the original (boot-up) values instead of the ones we want.
> > 
> > Meaning you either add:
> > 
> >                 if (boot_cpu_has(X86_FEATURE_RDS))
> >                         x86_spec_ctrl_base |= SPEC_CTRL_RDS;
> > 
> > Or intel/cpu.c has to have a way to reach out and modify
> > x86_spec_ctrl_base. So we could have another accessor:
> > x86_spec_ctrl_update(u64)
> > 
> > or so? I think having the modification in bugs.c makes it easier.
> 
> Right.
> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> -- 


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-24 18:46 UTC
  To: speck

On Tue, Apr 24, 2018 at 02:07:50PM -0400, speck for Konrad Rzeszutek Wilk wrote:
> And it nicely slurps it up from the CPUID.7.EDX[31].

Yap.

> Which means patch #7 is good - just needs a better name for the
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE_BECAUSE_I_LIKE_LONG_NAMES
> 

Huh?

You either use X86_FEATURE_RDS

XOR

X86_FEATURE_SPEC_STORE_BYPASS_DISABLE

No need for both.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-25 15:32 UTC
  To: speck

On Tue, Apr 24, 2018 at 08:46:46PM +0200, speck for Borislav Petkov wrote:
> On Tue, Apr 24, 2018 at 02:07:50PM -0400, speck for Konrad Rzeszutek Wilk wrote:
> > And it nicely slurps it up from the CPUID.7.EDX[31].
> 
> Yap.
> 
> > Which means patch #7 is good - just needs a better name for the
> > X86_FEATURE_SPEC_STORE_BYPASS_DISABLE_BECAUSE_I_LIKE_LONG_NAMES
> > 
> 
> Huh?
> 
> You either use X86_FEATURE_RDS
> 
> XOR
> 
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE
> 
> No need for both.

We need two CPU bit flags:

 1). I am able to mitigate. This will be via X86_FEATURE_RDS for both
     AMD and Intel.

 2). I _want_ to mitigate. This will be via
     X86_FEATURE_SPEC_STORE_BYPASS_DISABLE, to tell init_amd_bd,
     init_amd_zn, init_amd_jg, and init_intel to do their magic (see
     the sketch below).
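
A minimal sketch of how the two flags would be consumed (illustrative
only, reusing fragments from earlier in this thread):

        /* 1) capability check, e.g. in ssb_parse_cmdline(): */
        if (!boot_cpu_has(X86_FEATURE_RDS))
                return SPEC_STORE_BYPASS_CMD_NONE;

        /* 2) policy check, e.g. in the AMD init paths: */
        if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
                msr_set_bit(MSR_AMD64_LS_CFG, 54);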


> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> -- 


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-25 16:12 UTC
  To: speck

On Tue, Apr 24, 2018 at 06:36:56PM +0200, speck for Borislav Petkov wrote:
> On Tue, Apr 24, 2018 at 11:46:53AM -0400, speck for Konrad Rzeszutek Wilk wrote:
> > You OK with it breaking the nice alignment of the '( 7*32+..' column?
> > It will look like this:
> > 
> > #define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> > #define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
> 
> Yeah, Ingo will have to realign them again :-)
> 
> > Right, which on Intel can also be guarded by X86_FEATURE_RDS and on AMD
> > it was implicit by the families (15h->17h). (The v4, patch #7 has that
> > logic). The v2 (or v1.7) of the posting had the logic you just described but
> > with different names.
> 
> So I'd like for us to use a single one - either X86_FEATURE_RDS or
> X86_FEATURE_SPEC_STORE_BYPASS_DISABLE globally. No need to cause
> unnecessary confusion with two things meaning roughly the same thing.

This is the part that won't work (sorry about not spotting this
earlier).

That is, X86_FEATURE_RDS gets set on Intel when we slurp up
CPUID_7_EDX:

        /* Additional Intel-defined flags: level 0x00000007 */
        if (c->cpuid_level >= 0x00000007) {
                cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
                c->x86_capability[CPUID_7_0_EBX] = ebx;
                c->x86_capability[CPUID_7_ECX] = ecx;
                c->x86_capability[CPUID_7_EDX] = edx;
        }

From there on Intel we have the 'rds' flag already set.

Which means that the logic in Intel code, as you suggested:

        if (cpu_has(c, X86_FEATURE_RDS))
                x86_set_spec_ctrl(SPEC_CTRL_RDS);

will always do its thing.

Unless in bugs.c we end up clearing the RDS flag:

        if (mode == SPEC_STORE_BYPASS_NONE)
                setup_clear_cpu_cap(X86_FEATURE_RDS);
        else {
                if (boot_cpu_has(X86_FEATURE_RDS))
                        x86_spec_ctrl_base |= SPEC_CTRL_RDS;
        }

?


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-25 16:19 UTC
  To: speck

On Wed, Apr 25, 2018 at 12:12:14PM -0400, speck for Konrad Rzeszutek Wilk wrote:
> On Tue, Apr 24, 2018 at 06:36:56PM +0200, speck for Borislav Petkov wrote:
> > On Tue, Apr 24, 2018 at 11:46:53AM -0400, speck for Konrad Rzeszutek Wilk wrote:
> > > You OK with it breaking the nice alignment of the '( 7*32+..' column?
> > > It will look like this:
> > > 
> > > #define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> > > #define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > > #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
> > 
> > Yeah, Ingo will have to realign them again :-)
> > 
> > > Right, which on Intel can also be guarded by X86_FEATURE_RDS and on AMD
> > > it was implicit by the families (15h->17h). (The v4, patch #7 has that
> > > logic). The v2 (or v1.7) of the posting had the logic you just described but
> > > with different names.
> > 
> > So I'd like for us to use a single one - either X86_FEATURE_RDS or
> > X86_FEATURE_SPEC_STORE_BYPASS_DISABLE globally. No need to cause
> > unnecessary confusion with two things meaning roughly the same thing.
> 
> This is the part that won't work (sorry about not spotting this
> earlier).
> 
> That is, X86_FEATURE_RDS gets set on Intel when we slurp up
> CPUID_7_EDX:
> 
>         /* Additional Intel-defined flags: level 0x00000007 */
>         if (c->cpuid_level >= 0x00000007) {
>                 cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
>                 c->x86_capability[CPUID_7_0_EBX] = ebx;
>                 c->x86_capability[CPUID_7_ECX] = ecx;
>                 c->x86_capability[CPUID_7_EDX] = edx;
>         }
> 
> From there on Intel we have the 'rds' flag already set.

I would like to add that I think we should display the 'rds' flag in
/proc/cpuinfo regardless of whether we use it or not.

That follows the same logic as what 'ibrs' and 'stibp' do.

> 
> Which means that the logic in Intel code, as you suggested:
> 
>         if (cpu_has(c, X86_FEATURE_RDS))
>                 x86_set_spec_ctrl(SPEC_CTRL_RDS);
> 
> will always do its thing.

Unless we want to use the 'x86_get_default_spec_ctrl' value and see
whether the SPEC_CTRL_RDS bit is set. Shudders.

> 
> Unless in bugs.c we end up clearing the RDS flag:
> 
> 459         if (mode == SPEC_STORE_BYPASS_NONE)
> 460                 setup_clear_cpu_cap(X86_FEATURE_RDS);
> 461         else {
> 462                 if (boot_cpu_has(X86_FEATURE_RDS))
> 463                         x86_spec_ctrl_base |= SPEC_CTRL_RDS;
> 464
> 465         }
> 
> ?


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Tim Chen @ 2018-04-25 16:51 UTC
  To: speck


Future Intel CPUs that do not have the speculative store bypass
vulnerability will set bit 4 in MSR 0x10a (MSR_IA32_ARCH_CAPABILITIES).

So we should add this check.

Suggested code changes below.

Tim

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 96ae1e7..c28af75 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -69,6 +69,7 @@
 #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
 #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
 #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
+#define ARCH_CAP_SSB_NO			(1 << 4)   /* Not susceptible to speculative store bypass */
 
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 581eb04..de7fb75 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -944,8 +944,13 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = 0;
 
-	if (!x86_match_cpu(cpu_no_spec_store_bypass))
+	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO)) {
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+	}
 
 	if (x86_match_cpu(cpu_no_speculation))
 		return;
@@ -956,9 +961,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (x86_match_cpu(cpu_no_meltdown))
 		return;
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	/* Rogue Data Cache Load? No! */
 	if (ia32_cap & ARCH_CAP_RDCL_NO)
 		return;
-- 
2.7.4



* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-25 17:25 UTC
  To: speck

On Wed, Apr 25, 2018 at 11:32:23AM -0400, speck for Konrad Rzeszutek Wilk wrote:
> We need two CPU bit flags. First to say:
> 
>  1). I am able to mitigate. This will be via X86_FEATURE_RDS for both
>     AMD and Intel.
> 
>  2). I _want_ to mitigate. This will be via X86_FEATURE_SPEC_STORE_BYPASS_DISABLE
>    to tell the init_amd_bd,init_amd_zn,init_amd_jg, and init_intel
>    to do its magic.

Lemme paste from my earlier mail:

"
X86_BUG_SPEC_STORE_BYPASS               - CPU is affected by bug
X86_FEATURE_SPEC_STORE_BYPASS_DISABLE   - CPU can disable store bypassing
X86_FEATURE_SPEC_STORE_BYPASS_IN_USE    - CPU has disabled store bypassing
"

So I meant to use X86_FEATURE_SPEC_STORE_BYPASS_DISABLE for "I am able
to mitigate".

You want to use it to denote "IN_USE". I think "IN_USE" is more telling:

	if (... X86_FEATURE_SPEC_STORE_BYPASS_IN_USE)
		disable MD

So we can do

X86_BUG_SPEC_STORE_BYPASS
X86_FEATURE_RDS
X86_FEATURE_SPEC_STORE_BYPASS_IN_USE

or

X86_BUG_SPEC_STORE_BYPASS
X86_FEATURE_RDS
X86_FEATURE_SPEC_STORE_BYPASS_DISABLE

as you want it.

Whatever - I'm all out of energy for bikeshedding so pick one and
document what it means.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-25 17:26 UTC
  To: speck

On Wed, Apr 25, 2018 at 12:12:14PM -0400, speck for Konrad Rzeszutek Wilk wrote:
> Unless in bugs.c we end up clearing the RDS flag:
> 
> 459         if (mode == SPEC_STORE_BYPASS_NONE)
> 460                 setup_clear_cpu_cap(X86_FEATURE_RDS);

Well, you don't clear the RDS flag - that's the architectural feature.
You clear the IN_USE flag - see the email I just sent.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Borislav Petkov @ 2018-04-25 17:29 UTC
  To: speck

On Wed, Apr 25, 2018 at 12:19:18PM -0400, speck for Konrad Rzeszutek Wilk wrote:
> I would like to add that I think we should display the 'rds'
> in the /proc/cpuinfo regardless of whether we use it or not.

Yes, we use three flags - the first is the bug flag which denotes that
the CPU is affected, the second denotes the ability of the CPU to disable
the MD feature, and the third is the one we clear if
spec_store_bypass_disable=off. The third flag is what the rest of the
code uses to query whether to enable/disable MD.
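
Sketched out (illustrative only):

        /* 1) is this CPU affected at all? */
        if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
                return;

        /* 2) can this CPU disable the MD feature? */
        if (!boot_cpu_has(X86_FEATURE_RDS))
                return;

        /* 3) policy: spec_store_bypass_disable=off clears the third flag */
        if (cmd == SPEC_STORE_BYPASS_CMD_NONE)
                setup_clear_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);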

Makes sense?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Konrad Rzeszutek Wilk @ 2018-04-25 17:38 UTC
  To: speck

On Wed, Apr 25, 2018 at 09:51:52AM -0700, speck for Tim Chen wrote:
> 
> Future Intel CPUs that do not have speculative store bypass vulnerability will
> turn on bit 4 in MSR 0x10a (MSR_IA32_ARCH_CAPABILITIES).


Is it going to be called 'RDS' or 'SSB'?

I am going to assume ARCH_CAP_RDS_NO

> 
> So we should add this check.
> 
> Suggested code changes below.

Let me roll it in (I removed the {} in cpu_set_bug_bits).

> 
> Tim
> 
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 96ae1e7..c28af75 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -69,6 +69,7 @@
>  #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
>  #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
>  #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
> +#define ARCH_CAP_SSB_NO			(1 << 4)   /* Not susceptible to speculative store bypass */
>  
>  #define MSR_IA32_BBL_CR_CTL		0x00000119
>  #define MSR_IA32_BBL_CR_CTL3		0x0000011e
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 581eb04..de7fb75 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -944,8 +944,13 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  {
>  	u64 ia32_cap = 0;
>  
> -	if (!x86_match_cpu(cpu_no_spec_store_bypass))
> +	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> +		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> +
> +	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
> +	    !(ia32_cap & ARCH_CAP_SSB_NO)) {
>  		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
> +	}
>  
>  	if (x86_match_cpu(cpu_no_speculation))
>  		return;
> @@ -956,9 +961,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	if (x86_match_cpu(cpu_no_meltdown))
>  		return;
>  
> -	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> -		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> -
>  	/* Rogue Data Cache Load? No! */
>  	if (ia32_cap & ARCH_CAP_RDCL_NO)
>  		return;
> -- 
> 2.7.4
> 


* [MODERATED] Re: [PATCH v4 06/10] [PATCH v4 6/9] Linux Patch #6
From: Tim Chen @ 2018-04-25 20:25 UTC
  To: speck


On 04/25/2018 10:38 AM, speck for Konrad Rzeszutek Wilk wrote:
> On Wed, Apr 25, 2018 at 09:51:52AM -0700, speck for Tim Chen wrote:
>>
>> Future Intel CPUs that do not have speculative store bypass vulnerability will
>> turn on bit 4 in MSR 0x10a (MSR_IA32_ARCH_CAPABILITIES).
> 
> 
> Is it going to be called 'RDS' or SSB'?
> 
> I am going to assume ARCH_CAP_RDS_NO

Reduced Data Speculation (RDS) is the mitigation method, not the
vulnerability. The vulnerability type is Speculative Store Bypass (SSB),
so ARCH_CAP_SSB_NO makes more sense.

That said, the hardware guys are still bikeshedding over the name, and
we may yet see an updated name when the specs actually come out. So it
is okay to put in whichever name you want for now, until the hardware
guys make up their minds. :(

Tim

