* [MODERATED] [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
@ 2018-04-23 17:11 konrad.wilk
  2018-04-23 18:34 ` [MODERATED] " Borislav Petkov
  2018-04-23 18:57 ` Borislav Petkov
  0 siblings, 2 replies; 7+ messages in thread
From: konrad.wilk @ 2018-04-23 17:11 UTC (permalink / raw)
  To: speck

Contemporary high performance processors use a common industry-wide
optimization known as "Speculative Store Bypass" in which loads from
addresses to which a recent store has occurred may (speculatively)
see an older value. Intel refers to this feature as "Memory
Disambiguation", which is part of their "Smart Memory Access"
capability in Nehalem and later generation processors.

Some processors have an implementation bug that enables a cache
side-channel attack against such speculatively read values. An
attacker can create exploit code that allows them to read memory
outside of a sandbox environment (for example, malicious JavaScript
in a web page), or to perform more complex attacks against code
running within the same privilege level, e.g. via the stack.

We provide two command line control knobs:

 nospec_store_bypass_disable
 spec_store_bypass_disable=[off,auto,on]

By default affected x86 processors will power on with Speculative
Store Bypass (Memory Disambiguation) enabled. Hence the provided
kernel parameters are written from the point of view of whether
to enable a mitigation or not. The parameters are as follows:

 - auto - if possible will disable Speculative Store Bypass (Memory
	  Disambiguation) at boot if CPU is vulnerable to exploit
 - on   - disable Speculative Store Bypass (Memory Disambiguation)
 - off  - enable Speculative Store Bypass (Memory Disambiguation)

And as mentioned - some CPUs may have a Speculative Store Bypass
implementation which is not vulnerable to the described attacks.
For those, the 'auto' (default) option will pick the right choice.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
v1.3:
 - Wrong array was used when figuring mdd_v4= arguments
 - If broken microcode found, also disable MD feature.
 - s/mdd_v4/mdd/
 - s/kernel/userspace/
 - Fix up the commit description
v1.4:
 - Incorporate various feedback and suggestions from Jon Masters
 - Change mdd parameters to allow only big hammer on/off for now
 - Introduce "ssbd" industry standard term for cross-architecture
 - Use mdd_ and MDD_ prefixes to clearly spell out that "enable"
   means we are actually disabling a CPU processor feature (MD)
v2:
 - s/mdd/ssb/ in the x86 code to match with the 'ssb' part
v3:
 - rip out mdd.
 - remove 'Haswell' and such and replace with 'contemporary'
 - s/ssb/spec_store_bypass/ and various other cleanups (Jon Masters)
---
 Documentation/admin-guide/kernel-parameters.txt | 32 +++++++++
 arch/x86/include/asm/cpufeatures.h              |  2 +
 arch/x86/include/asm/nospec-branch.h            |  6 ++
 arch/x86/kernel/cpu/bugs.c                      | 96 +++++++++++++++++++++++++
 arch/x86/kernel/cpu/intel.c                     |  1 +
 5 files changed, 137 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 1d1d53f85ddd..340a84e05f9e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2647,6 +2647,9 @@
 			allow data leaks with this option, which is equivalent
 			to spectre_v2=off.
 
+	nospec_store_disable
+			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
+
 	noxsave		[BUGS=X86] Disables x86 extended register state save
 			and restore using xsave. The kernel will fallback to
 			enabling legacy floating-point and sse state.
@@ -3997,6 +4000,35 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spec_store_bypass_disable=
+			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
+			(Speculative Store Bypass vulnerability)
+			Certain CPUs are vulnerable to an exploit against
+			a common industry-wide performance optimization known
+			as "Speculative Store Bypass" in which recent stores
+			to the same memory location may not be observed by
+			later loads during speculative execution. The idea
+			is that such stores are unlikely and that they can
+			be detected prior to instruction retirement at the
+			end of a particular speculative execution window.
+
+			In vulnerable processors, the speculatively forwarded
+			store can be used in a cache side channel attack, for
+			example to read memory to which the attacker does not
+			directly have access (e.g. inside sandboxed code).
+
+			This parameter controls whether the Speculative Store
+			Bypass optimization is used.
+
+			on - unconditionally disable Speculative Store Bypass
+			off - unconditionally enable Speculative Store Bypass
+			auto - kernel detects whether your CPU model contains a
+			       vulnerable implementation of Speculative Store
+			       Bypass and picks the most appropriate mitigation
+
+			Not specifying this option is equivalent to
+			spec_store_bypass_disable=auto.
+
 	spia_io_base=	[HW,MTD]
 	spia_fio_base=
 	spia_pedr=
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 25c9406de6c9..1966b223e11e 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -214,6 +214,8 @@
 
 #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
+#define X86_FEATURE_STBUF_BYPASS	( 7*32+23) /* Speculative Store Bypass mitigation is feasible. */
+#define X86_FEATURE_STBUF_MITIGATE	( 7*32+24) /* "" Use Speculative Store Bypass. */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 3e4b66bdf25f..c4a99c970147 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -237,6 +237,12 @@ extern u64 x86_spec_ctrl_base;
 extern void x86_set_guest_spec_ctrl(u64);
 extern void x86_restore_host_spec_ctrl(u64);
 
+/* The Speculative Store Bypass disable variants */
+enum spec_store_bypass_mitigation {
+	SPEC_STORE_BYPASS_NONE,
+	SPEC_STORE_BYPASS_DISABLE,
+};
+
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0fc475a54aa1..b54336a42dca 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -27,6 +27,7 @@
 #include <asm/intel-family.h>
 
 static void __init spectre_v2_select_mitigation(void);
+static void __init spec_store_bypass_select_mitigation(void);
 static void __init spec_ctrl_save_msr(void);
 
 void __init check_bugs(void)
@@ -43,6 +44,12 @@ void __init check_bugs(void)
 	/* Select the proper spectre mitigation before patching alternatives */
 	spectre_v2_select_mitigation();
 
+	/*
+	 * Select proper mitigation for any exposure to the Speculative Store
+	 * Bypass vulnerability.
+	 */
+	spec_store_bypass_select_mitigation();
+
 #ifdef CONFIG_X86_32
 	/*
 	 * Check whether we are able to run this kernel safely on SMP.
@@ -344,6 +351,92 @@ static void __init spectre_v2_select_mitigation(void)
 	}
 }
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
+
+static enum spec_store_bypass_mitigation spec_store_bypass_mode = SPEC_STORE_BYPASS_NONE;
+
+/* The kernel command line selection */
+enum spec_store_bypass_mitigation_cmd {
+	SPEC_STORE_BYPASS_CMD_NONE,
+	SPEC_STORE_BYPASS_CMD_AUTO,
+	SPEC_STORE_BYPASS_CMD_ON,
+};
+
+static const char *spec_store_bypass_strings[] = {
+	[SPEC_STORE_BYPASS_NONE]			= "Vulnerable",
+	[SPEC_STORE_BYPASS_DISABLE]			= "Mitigation: Speculative Store Bypass disabled"
+};
+
+static const struct {
+	const char *option;
+	enum spec_store_bypass_mitigation_cmd cmd;
+} spec_store_bypass_mitigation_options[] = {
+	{ "auto",				SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
+	{ "on",					SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
+	{ "off",				SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */
+};
+
+static enum spec_store_bypass_mitigation_cmd __init spec_store_bypass_parse_cmdline(void)
+{
+	char arg[20];
+	int ret, i;
+	enum spec_store_bypass_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
+
+	if (!boot_cpu_has(X86_FEATURE_STBUF_BYPASS))
+		return SPEC_STORE_BYPASS_CMD_NONE;
+
+	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable"))
+		return SPEC_STORE_BYPASS_CMD_NONE;
+	else {
+		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
+					  arg, sizeof(arg));
+		if (ret < 0)
+			return SPEC_STORE_BYPASS_CMD_AUTO;
+
+		for (i = 0; i < ARRAY_SIZE(spec_store_bypass_mitigation_options); i++) {
+			if (!match_option(arg, ret, spec_store_bypass_mitigation_options[i].option))
+				continue;
+
+			cmd = spec_store_bypass_mitigation_options[i].cmd;
+			break;
+		}
+
+		if (i >= ARRAY_SIZE(spec_store_bypass_mitigation_options)) {
+			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+			return SPEC_STORE_BYPASS_CMD_AUTO;
+		}
+	}
+
+	return cmd;
+}
+
+static void __init spec_store_bypass_select_mitigation(void)
+{
+	enum spec_store_bypass_mitigation_cmd cmd = spec_store_bypass_parse_cmdline();
+	enum spec_store_bypass_mitigation mode = SPEC_STORE_BYPASS_NONE;
+
+	if (!boot_cpu_has_bug(X86_BUG_CPU_SPEC_STORE_BYPASS) &&
+	    (cmd == SPEC_STORE_BYPASS_CMD_NONE ||
+	     cmd == SPEC_STORE_BYPASS_CMD_AUTO))
+		return;
+
+	switch (cmd) {
+	case SPEC_STORE_BYPASS_CMD_AUTO:
+	case SPEC_STORE_BYPASS_CMD_ON:
+		mode = SPEC_STORE_BYPASS_DISABLE;
+		break;
+	case SPEC_STORE_BYPASS_CMD_NONE:
+		break;
+	}
+
+	spec_store_bypass_mode = mode;
+	pr_info("%s\n", spec_store_bypass_strings[mode]);
+
+	if (mode != SPEC_STORE_BYPASS_NONE)
+		setup_force_cpu_cap(X86_FEATURE_STBUF_MITIGATE);
+}
+
 #undef pr_fmt
 
 #ifdef CONFIG_SYSFS
@@ -370,6 +463,9 @@ ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       spectre_v2_module_string());
 
+	case X86_BUG_CPU_SPEC_STORE_BYPASS:
+		return sprintf(buf, "%s\n", spec_store_bypass_strings[spec_store_bypass_mode]);
+
 	default:
 		break;
 	}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c3af167d0a70..16ee38d8977a 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -189,6 +189,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 		setup_clear_cpu_cap(X86_FEATURE_STIBP);
 		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
 		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
+		setup_clear_cpu_cap(X86_FEATURE_MDD);
 	}
 
 	/*
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 17:11 [MODERATED] [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6 konrad.wilk
@ 2018-04-23 18:34 ` Borislav Petkov
  2018-04-23 20:45   ` Jiri Kosina
  2018-04-24  0:19   ` Konrad Rzeszutek Wilk
  2018-04-23 18:57 ` Borislav Petkov
  1 sibling, 2 replies; 7+ messages in thread
From: Borislav Petkov @ 2018-04-23 18:34 UTC (permalink / raw)
  To: speck

On Mon, Apr 23, 2018 at 01:11:30PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> x86/spec_store_bypass_disable: Provide boot parameters for the mitigation

"x86/bugs: ...

> Contemporary high performance processors use a common industry-wide
> optimization known as "Speculative Store Bypass" in which loads from
> addresses to which a recent store has occurred may (speculatively)
> see an older value. Intel refers to this feature as "Memory
> Disambiguation", which is part of their "Smart Memory Access"
> capability in Nehalem and later generation processors.
> 
> Some processors have an implementation bug that enables a cache
> side-channel attack against such speculatively read values. An
> attacker can create exploit code that allows them to read memory
> outside of a sandbox environment (for example, malicious JavaScript
> in a web page), or to perform more complex attacks against code
> running within the same privilege level, e.g. via the stack.
> 
> We provide two command line control knobs:
> 
>  nospec_store_bypass_disable
>  spec_store_bypass_disable=[off,auto,on]
> 
> By default affected x86 processors will power on with Speculative
> Store Bypass (Memory Disambiguation) enabled. Hence the provided
> kernel parameters are written from the point of view of whether
> to enable a mitigation or not. The parameters are as follows:
> 
>  - auto - if possible will disable Speculative Store Bypass (Memory
> 	  Disambiguation) at boot if CPU is vulnerable to exploit

This version of the "auto" explanation from below is the correct one:

+                     auto - kernel detects whether your CPU model contains a
+                            vulnerable implementation of Speculative Store
+                            Bypass and picks the most appropriate mitigation


>  - on   - disable Speculative Store Bypass (Memory Disambiguation)
>  - off  - enable Speculative Store Bypass (Memory Disambiguation)
> 
> And as mentioned - some CPUs may have a Speculative Store Bypass
> implementation which is not vulnerable to the described attacks.
> For those, the 'auto' (default) option will pick the right choice.

IOW, that paragraph should be after the "auto" bulletpoint above.

> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> v1.3:
>  - Wrong array was used when figuring mdd_v4= arguments
>  - If broken microcode found, also disable MD feature.
>  - s/mdd_v4/mdd/
>  - s/kernel/userspace/
>  - Fix up the commit description
> v1.4:
>  - Incorporate various feedback and suggestions from Jon Masters
>  - Change mdd parameters to allow only big hammer on/off for now
>  - Introduce "ssbd" industry standard term for cross-architecture
>  - Use mdd_ and MDD_ prefixes to clearly spell out that "enable"
>    means we are actually disabling a CPU processor feature (MD)
> v2:
>  - s/mdd/ssb/ in the x86 code to match with the 'ssb' part
> v3:
>  - rip out mdd.
>  - remove 'Haswell' and such and replace with 'contemporary'
>  - s/ssb/spec_store_bypass/ and various other cleanups (Jon Masters)
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 32 +++++++++
>  arch/x86/include/asm/cpufeatures.h              |  2 +
>  arch/x86/include/asm/nospec-branch.h            |  6 ++
>  arch/x86/kernel/cpu/bugs.c                      | 96 +++++++++++++++++++++++++
>  arch/x86/kernel/cpu/intel.c                     |  1 +
>  5 files changed, 137 insertions(+)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 1d1d53f85ddd..340a84e05f9e 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2647,6 +2647,9 @@
>  			allow data leaks with this option, which is equivalent
>  			to spectre_v2=off.
>  
> +	nospec_store_disable

nospec_store_bypass_disable

You guys need to make up your mind. :)

> +			[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
> +
>  	noxsave		[BUGS=X86] Disables x86 extended register state save
>  			and restore using xsave. The kernel will fallback to
>  			enabling legacy floating-point and sse state.
> @@ -3997,6 +4000,35 @@
>  			Not specifying this option is equivalent to
>  			spectre_v2=auto.
>  
> +	spec_store_bypass_disable=
> +			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
> +			(Speculative Store Bypass vulnerability)
> +			Certain CPUs are vulnerable to an exploit against
> +			a common industry-wide performance optimization known
> +			as "Speculative Store Bypass" in which recent stores
> +			to the same memory location may not be observed by
> +			later loads during speculative execution. The idea
> +			is that such stores are unlikely and that they can
> +			be detected prior to instruction retirement at the
> +			end of a particular speculative execution window.
> +
> +			In vulnerable processors, the speculatively forwarded
> +			store can be used in a cache side channel attack, for
> +			example to read memory to which the attacker does not
> +			directly have access (e.g. inside sandboxed code).
> +
> +			This parameter controls whether the Speculative Store
> +			Bypass optimization is used.
> +
> +			on - unconditionally disable Speculative Store Bypass
> +			off - unconditionally enable Speculative Store Bypass
> +			auto - kernel detects whether your CPU model contains a
> +			       vulnerable implementation of Speculative Store
> +			       Bypass and picks the most appropriate mitigation
> +
> +			Not specifying this option is equivalent to
> +			spec_store_bypass_disable=auto.
> +
>  	spia_io_base=	[HW,MTD]
>  	spia_fio_base=
>  	spia_pedr=
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 25c9406de6c9..1966b223e11e 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -214,6 +214,8 @@
>  
>  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
>  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> +#define X86_FEATURE_STBUF_BYPASS	( 7*32+23) /* Speculative Store Bypass mitigation is feasible. */

Needs "" in the comment too so that it doesn't appear in /proc/cpuinfo.

> +#define X86_FEATURE_STBUF_MITIGATE	( 7*32+24) /* "" Use Speculative Store Bypass. */

X86_FEATURE_STBUF_BYPASS_MITIGATE got too long or what?

>  /* Virtualization flags: Linux defined, word 8 */
>  #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 3e4b66bdf25f..c4a99c970147 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -237,6 +237,12 @@ extern u64 x86_spec_ctrl_base;
>  extern void x86_set_guest_spec_ctrl(u64);
>  extern void x86_restore_host_spec_ctrl(u64);
>  
> +/* The Speculative Store Bypass disable variants */
> +enum spec_store_bypass_mitigation {
> +	SPEC_STORE_BYPASS_NONE,
> +	SPEC_STORE_BYPASS_DISABLE,
> +};
> +
>  extern char __indirect_thunk_start[];
>  extern char __indirect_thunk_end[];
>  
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 0fc475a54aa1..b54336a42dca 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -27,6 +27,7 @@
>  #include <asm/intel-family.h>
>  
>  static void __init spectre_v2_select_mitigation(void);
> +static void __init spec_store_bypass_select_mitigation(void);
>  static void __init spec_ctrl_save_msr(void);
>  
>  void __init check_bugs(void)
> @@ -43,6 +44,12 @@ void __init check_bugs(void)
>  	/* Select the proper spectre mitigation before patching alternatives */
>  	spectre_v2_select_mitigation();
>  
> +	/*
> +	 * Select proper mitigation for any exposure to the Speculative Store
> +	 * Bypass vulnerability.
> +	 */
> +	spec_store_bypass_select_mitigation();
> +
>  #ifdef CONFIG_X86_32
>  	/*
>  	 * Check whether we are able to run this kernel safely on SMP.
> @@ -344,6 +351,92 @@ static void __init spectre_v2_select_mitigation(void)
>  	}
>  }
>  
> +#undef pr_fmt
> +#define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
> +
> +static enum spec_store_bypass_mitigation spec_store_bypass_mode = SPEC_STORE_BYPASS_NONE;
> +
> +/* The kernel command line selection */
> +enum spec_store_bypass_mitigation_cmd {

Yeah, that enum name is clearly too long and it is noticeable in the code below.

> +	SPEC_STORE_BYPASS_CMD_NONE,
> +	SPEC_STORE_BYPASS_CMD_AUTO,
> +	SPEC_STORE_BYPASS_CMD_ON,
> +};
> +
> +static const char *spec_store_bypass_strings[] = {
> +	[SPEC_STORE_BYPASS_NONE]			= "Vulnerable",
> +	[SPEC_STORE_BYPASS_DISABLE]			= "Mitigation: Speculative Store Bypass disabled"

That's clearly too much whitespace between the element and the string.

> +};
> +
> +static const struct {
> +	const char *option;
> +	enum spec_store_bypass_mitigation_cmd cmd;
> +} spec_store_bypass_mitigation_options[] = {
> +	{ "auto",				SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
> +	{ "on",					SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
> +	{ "off",				SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */

Ditto. What's wrong with:

	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
	{ "on",		SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
	{ "off",	SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */

?

> +};
> +
> +static enum spec_store_bypass_mitigation_cmd __init spec_store_bypass_parse_cmdline(void)

static enum spec_store_bypass_cmd __init spec_store_bypass_parse_cmdline(void)

looks a bit better to me. Btw, that "spec_store_bypass" string is kinda
everywhere and screaming at me now. :)

I'm wondering if for all the functions and variables we could really
use the "ssb_" prefix for shorter but keep the visible, cmdline options
human readable...?

> +{
> +	char arg[20];
> +	int ret, i;
> +	enum spec_store_bypass_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;
> +
> +	if (!boot_cpu_has(X86_FEATURE_STBUF_BYPASS))
> +		return SPEC_STORE_BYPASS_CMD_NONE;
> +
> +	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable"))
> +		return SPEC_STORE_BYPASS_CMD_NONE;
> +	else {
> +		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
> +					  arg, sizeof(arg));
> +		if (ret < 0)
> +			return SPEC_STORE_BYPASS_CMD_AUTO;
> +
> +		for (i = 0; i < ARRAY_SIZE(spec_store_bypass_mitigation_options); i++) {
> +			if (!match_option(arg, ret, spec_store_bypass_mitigation_options[i].option))
> +				continue;
> +
> +			cmd = spec_store_bypass_mitigation_options[i].cmd;
> +			break;
> +		}
> +
> +		if (i >= ARRAY_SIZE(spec_store_bypass_mitigation_options)) {
> +			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
> +			return SPEC_STORE_BYPASS_CMD_AUTO;
> +		}
> +	}
> +
> +	return cmd;
> +}
> +
> +static void __init spec_store_bypass_select_mitigation(void)
> +{
> +	enum spec_store_bypass_mitigation_cmd cmd = spec_store_bypass_parse_cmdline();
> +	enum spec_store_bypass_mitigation mode = SPEC_STORE_BYPASS_NONE;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_CPU_SPEC_STORE_BYPASS) &&
> +	    (cmd == SPEC_STORE_BYPASS_CMD_NONE ||
> +	     cmd == SPEC_STORE_BYPASS_CMD_AUTO))
> +		return;
> +
> +	switch (cmd) {
> +	case SPEC_STORE_BYPASS_CMD_AUTO:
> +	case SPEC_STORE_BYPASS_CMD_ON:
> +		mode = SPEC_STORE_BYPASS_DISABLE;
> +		break;
> +	case SPEC_STORE_BYPASS_CMD_NONE:
> +		break;
> +	}
> +
> +	spec_store_bypass_mode = mode;
> +	pr_info("%s\n", spec_store_bypass_strings[mode]);
> +
> +	if (mode != SPEC_STORE_BYPASS_NONE)
> +		setup_force_cpu_cap(X86_FEATURE_STBUF_MITIGATE);
> +}
> +
>  #undef pr_fmt
>  
>  #ifdef CONFIG_SYSFS
> @@ -370,6 +463,9 @@ ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
>  			       spectre_v2_module_string());
>  
> +	case X86_BUG_CPU_SPEC_STORE_BYPASS:
> +		return sprintf(buf, "%s\n", spec_store_bypass_strings[spec_store_bypass_mode]);
> +
>  	default:
>  		break;
>  	}
> diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
> index c3af167d0a70..16ee38d8977a 100644
> --- a/arch/x86/kernel/cpu/intel.c
> +++ b/arch/x86/kernel/cpu/intel.c
> @@ -189,6 +189,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
>  		setup_clear_cpu_cap(X86_FEATURE_STIBP);
>  		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
>  		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
> +		setup_clear_cpu_cap(X86_FEATURE_MDD);

Forgotten hunk:

arch/x86/kernel/cpu/intel.c: In function ‘early_init_intel’:
arch/x86/kernel/cpu/intel.c:192:23: error: ‘X86_FEATURE_MDD’ undeclared (first use in this function); did you mean ‘X86_FEATURE_MBA’?
   setup_clear_cpu_cap(X86_FEATURE_MDD);
                       ^~~~~~~~~~~~~~~
                       X86_FEATURE_MBA
arch/x86/kernel/cpu/intel.c:192:23: note: each undeclared identifier is reported only once for each function it appears in
make[3]: *** [arch/x86/kernel/cpu/intel.o] Error 1
make[2]: *** [arch/x86/kernel/cpu] Error 2
make[1]: *** [arch/x86/kernel] Error 2
make: *** [arch/x86] Error 2
make: *** Waiting for unfinished jobs....

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 17:11 [MODERATED] [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6 konrad.wilk
  2018-04-23 18:34 ` [MODERATED] " Borislav Petkov
@ 2018-04-23 18:57 ` Borislav Petkov
  2018-04-23 19:08   ` Borislav Petkov
  1 sibling, 1 reply; 7+ messages in thread
From: Borislav Petkov @ 2018-04-23 18:57 UTC (permalink / raw)
  To: speck

On Mon, Apr 23, 2018 at 01:11:30PM -0400, speck for konrad.wilk_at_oracle.com wrote:
> x86/spec_store_bypass_disable: Provide boot parameters for the mitigation
> 
> Contemporary high performance processors use a common industry-wide
> optimization known as "Speculative Store Bypass" in which loads from
> addresses to which a recent store has occurred may (speculatively)
> see an older value. Intel refers to this feature as "Memory
> Disambiguation", which is part of their "Smart Memory Access"
> capability in Nehalem and later generation processors.
> 
> Some processors have an implementation bug that enables a cache
> side-channel attack against such speculatively read values. An
> attacker can create exploit code that allows them to read memory
> outside of a sandbox environment (for example, malicious JavaScript
> in a web page), or to perform more complex attacks against code
> running within the same privilege level, e.g. via the stack.
> 
> We provide two command line control knobs:
> 
>  nospec_store_bypass_disable
>  spec_store_bypass_disable=[off,auto,on]

Something's still fishy with this patch. I'm booting a guest with

spec_store_bypass_disable=on

but dmesg still says:

[    0.044001] Speculative Store Bypass: Vulnerable

and sysfs:

$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/meltdown:Not affected
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Vulnerable

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 18:57 ` Borislav Petkov
@ 2018-04-23 19:08   ` Borislav Petkov
  0 siblings, 0 replies; 7+ messages in thread
From: Borislav Petkov @ 2018-04-23 19:08 UTC (permalink / raw)
  To: speck

On Mon, Apr 23, 2018 at 08:57:55PM +0200, Borislav Petkov wrote:
> Something's still fishy with this patch. I'm booting a guest with
> 
> spec_store_bypass_disable=on
> 
> but dmesg still says:
> 
> [    0.044001] Speculative Store Bypass: Vulnerable

Ok, there it is:

static enum spec_store_bypass_mitigation_cmd __init spec_store_bypass_parse_cmdline(void)
{
        char arg[20];
        int ret, i;
        enum spec_store_bypass_mitigation_cmd cmd = SPEC_STORE_BYPASS_CMD_AUTO;

        if (!boot_cpu_has_bug(X86_BUG_CPU_SPEC_STORE_BYPASS)) {
                return SPEC_STORE_BYPASS_CMD_NONE;

It needs to be

	if (!boot_cpu_has_bug(X86_BUG_CPU_SPEC_STORE_BYPASS))

not

	if (!boot_cpu_has(X86_FEATURE_STBUF_BYPASS))

which we set earlier in cpu_set_bug_bits().

Which makes me wonder why we even need X86_FEATURE_STBUF_BYPASS?

Or why we need X86_BUG_CPU_SPEC_STORE_BYPASS?

One of the two is superfluous AFAICT.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 18:34 ` [MODERATED] " Borislav Petkov
@ 2018-04-23 20:45   ` Jiri Kosina
  2018-04-23 21:06     ` Borislav Petkov
  2018-04-24  0:19   ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 7+ messages in thread
From: Jiri Kosina @ 2018-04-23 20:45 UTC (permalink / raw)
  To: speck

On Mon, 23 Apr 2018, speck for Borislav Petkov wrote:

> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -214,6 +214,8 @@
> >  
> >  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> >  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > +#define X86_FEATURE_STBUF_BYPASS	( 7*32+23) /* Speculative Store Bypass mitigation is feasible. */
> 
> Needs "" in the comment too so that it doesn't appear in /proc/cpuinfo.

Don't we really want it to appear in /proc/cpuinfo though?

At the end of the day, it really is about toggling an internal CPU feature 
behavior, so I'd say it belongs there.

We even have PTI there, and this is way more actually really internal to 
the CPU.

Thanks,

-- 
Jiri Kosina
SUSE Labs


* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 20:45   ` Jiri Kosina
@ 2018-04-23 21:06     ` Borislav Petkov
  0 siblings, 0 replies; 7+ messages in thread
From: Borislav Petkov @ 2018-04-23 21:06 UTC (permalink / raw)
  To: speck

On Mon, Apr 23, 2018 at 10:45:10PM +0200, speck for Jiri Kosina wrote:
> Don't we really want it to appear in /proc/cpuinfo though?
> 
> At the end of the day, it really is about toggling an internal CPU feature 
> behavior, so I'd say it belongs there.
> 
> We even have PTI there, and this is way more actually really internal to 
> the CPU.

Well, the thing is that other CPUs have store bypassing too but they
don't necessarily have the disabling bit.

So if anything, we should use the X86_BUG_CPU_SPEC_STORE_BYPASS
bit so that it appears in the bugs: line.

I mean, one of the two: X86_FEATURE_STBUF_BYPASS or
X86_BUG_CPU_SPEC_STORE_BYPASS is redundant AFAICT so we can stick with
the X86_BUG one.

Also, we have the state in /sys/devices/system/cpu/vulnerabilities/ too,
in case something needs it.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 


* [MODERATED] Re: [PATCH v3 06/10] [PATCH v3 6/9] Linux Patch #6
  2018-04-23 18:34 ` [MODERATED] " Borislav Petkov
  2018-04-23 20:45   ` Jiri Kosina
@ 2018-04-24  0:19   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 7+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-04-24  0:19 UTC (permalink / raw)
  To: speck

> > diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> > index 25c9406de6c9..1966b223e11e 100644
> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -214,6 +214,8 @@
> >  
> >  #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
> >  #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
> > +#define X86_FEATURE_STBUF_BYPASS	( 7*32+23) /* Speculative Store Bypass mitigation is feasible. */
> 
> Needs "" in the comment too so that it doesn't appear in /proc/cpuinfo.

Per your other comments I dropped that.
> 
> > +#define X86_FEATURE_STBUF_MITIGATE	( 7*32+24) /* "" Use Speculative Store Bypass. */
> 
> X86_FEATURE_STBUF_BYPASS_MITIGATE got too long or what?

Yes, it is too long:

#define X86_FEATURE_USE_IBPB            ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
#define X86_FEATURE_USE_IBRS_FW         ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
#define X86_FEATURE_STBUF_BYPASS_MITIGATE( 7*32+24) /* "" Use Speculative Store Bypass. */

We could go back to

#define X86_FEATURE_DISABLE_SSB

.. as we are also fixing up:

<..giant snip..>

> > +static enum spec_store_bypass_mitigation_cmd __init spec_store_bypass_parse_cmdline(void)
> 
> static enum spec_store_bypass_cmd __init spec_store_bypass_parse_cmdline(void)
> 
> looks a bit better to me. Btw, that "spec_store_bypass" string is kinda
> everywhere and screaming at me now. :)
> 
> I'm wondering if for all the functions and variables we could really
> use the "ssb_" prefix for shorter but keep the visible, cmdline options
> human readable...?

.. The s/spec_store_bypass/ssb/ in the functions/enums/..

Let me do what I suggested here and lets see how it looks.

