* [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead
@ 2018-11-25 18:33 Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
                   ` (30 more replies)
  0 siblings, 31 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

That's hopefully the final version of this. Changes since V1:

  - Renamed the command line option and related code to spectre_v2_user= as
    suggested by Josh.

  - Thought more about the back-to-back optimization and finally left the
    IBPB code in switch_mm().

    It still removes the ptrace check for the always-IBPB case. That's
    substantial overhead of dubious value now that the default is
    conditional (prctl/seccomp) IBPB.

  - Added two options which allow conditional STIBP and IBPB always mode.

  - Addressed the review comments

Documentation is still a work in progress. Thanks to Andi for providing
the first draft of it.

Still based on tip.git x86/pti, since removing the minimal RETPOLINE
bandaid from stable kernels has been discussed as well.

It's available from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/pti
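
If it helps, a standard way to fetch and inspect the branch (the remote
name and local branch name below are arbitrary):

   git remote add tip git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
   git fetch tip WIP.x86/pti
   git checkout -b wip-x86-pti FETCH_HEAD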


Thanks,

	tglx

8<------------------------

 Documentation/admin-guide/kernel-parameters.txt |   56 ++
 Documentation/userspace-api/spec_ctrl.rst       |    9 
 arch/x86/Kconfig                                |    8 
 arch/x86/include/asm/msr-index.h                |    5 
 arch/x86/include/asm/nospec-branch.h            |   14 
 arch/x86/include/asm/spec-ctrl.h                |   18 
 arch/x86/include/asm/switch_to.h                |    3 
 arch/x86/include/asm/thread_info.h              |   18 
 arch/x86/include/asm/tlbflush.h                 |    8 
 arch/x86/kernel/cpu/bugs.c                      |  520 ++++++++++++++++++------
 arch/x86/kernel/process.c                       |   79 ++-
 arch/x86/kernel/process.h                       |   39 +
 arch/x86/kernel/process_32.c                    |   10 
 arch/x86/kernel/process_64.c                    |   10 
 arch/x86/mm/tlb.c                               |  109 +++--
 include/linux/ptrace.h                          |   17 
 include/linux/sched.h                           |    9 
 include/linux/sched/smt.h                       |   20 
 include/uapi/linux/prctl.h                      |    1 
 kernel/cpu.c                                    |   15 
 kernel/ptrace.c                                 |   10 
 kernel/sched/core.c                             |   19 
 kernel/sched/sched.h                            |    4 
 tools/include/uapi/linux/prctl.h                |    1 
 24 files changed, 745 insertions(+), 257 deletions(-)



* [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-29 14:27   ` [patch V2 01/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
                   ` (29 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Update-the-TIF-SSBD-comment.patch --]
[-- Type: text/plain, Size: 919 bytes --]

"Reduced Data Speculation" is an obsolete term. The correct new name is
"Speculative store bypass disable" - which is abbreviated into SSBD.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/thread_info.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,7 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD			5	/* Reduced data speculation */
+#define TIF_SSBD		5	/* Speculative store bypass disable */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */




* [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline()
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-29 14:28   ` [patch V2 02/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
                   ` (28 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Clean-up-spectre-v2-parse-cmdline-.patch --]
[-- Type: text/plain, Size: 1541 bytes --]

Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline()
to save an indentation level.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |   27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -276,22 +276,21 @@ static enum spectre_v2_mitigation_cmd __
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
-	else {
-		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
-		if (ret < 0)
-			return SPECTRE_V2_CMD_AUTO;
 
-		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
-			if (!match_option(arg, ret, mitigation_options[i].option))
-				continue;
-			cmd = mitigation_options[i].cmd;
-			break;
-		}
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
 
-		if (i >= ARRAY_SIZE(mitigation_options)) {
-			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
-			return SPECTRE_V2_CMD_AUTO;
-		}
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_CMD_AUTO;
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||




* [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:21   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-29 14:28   ` [patch V2 03/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
                   ` (27 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Remove-unnecessary-ret-variable-in-cpu-show-common-.patch --]
[-- Type: text/plain, Size: 1281 bytes --]

The 'ret' variable is not needed; return the sprintf() result directly.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -847,8 +847,6 @@ static ssize_t l1tf_show_state(char *buf
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
-	int ret;
-
 	if (!boot_cpu_has_bug(bug))
 		return sprintf(buf, "Not affected\n");
 
@@ -866,13 +864,12 @@ static ssize_t cpu_show_common(struct de
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
-		return ret;
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);




* [patch V2 04/28] x86/speculation: Reorganize cpu_show_common()
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (2 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26 15:08   ` Borislav Petkov
                     ` (2 more replies)
  2018-11-25 18:33 ` [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
                   ` (26 subsequent siblings)
  30 siblings, 3 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Reorganize-cpu-show-common-.patch --]
[-- Type: text/plain, Size: 1642 bytes --]

The Spectre V2 printout in cpu_show_common() handles conditionals for the
various mitigation methods directly in the sprintf() argument list. That's
hard to read and will become unreadable if more complex decisions need to
be made for a particular method.

Move the conditionals for STIBP and IBPB string selection into helper
functions, so they can be extended later on.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -844,6 +844,22 @@ static ssize_t l1tf_show_state(char *buf
 }
 #endif
 
+static char *stibp_state(void)
+{
+	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
+		return ", STIBP";
+	else
+		return "";
+}
+
+static char *ibpb_state(void)
+{
+	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
+		return ", IBPB";
+	else
+		return "";
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -865,9 +881,9 @@ static ssize_t cpu_show_common(struct de
 
 	case X86_BUG_SPECTRE_V2:
 		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+			       ibpb_state(),
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       stibp_state(),
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 




* [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (3 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:22   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-29 14:35   ` [patch V2 05/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
                   ` (25 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Disable-STIBP-when-enhanced-IBRS-is-in-use.patch --]
[-- Type: text/plain, Size: 959 bytes --]

If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
user space exploits from a hyperthread sibling.

Disable STIBP when enhanced IBRS is used.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -321,6 +321,10 @@ static bool stibp_needed(void)
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf
 
 static char *stibp_state(void)
 {
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return "";
+
 	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
 		return ", STIBP";
 	else




* [patch V2 06/28] x86/speculation: Rename SSBD update functions
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (4 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26 15:24   ` Borislav Petkov
                     ` (2 more replies)
  2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
                   ` (24 subsequent siblings)
  30 siblings, 3 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Rename-SSBD-update-functions.patch --]
[-- Type: text/plain, Size: 3326 bytes --]

During context switch, the SSBD bit in the SPEC_CTRL MSR is updated
according to changes of the TIF_SSBD flag between the current and the next
running task.

Currently, only the bit controlling speculative store bypass disable in the
SPEC_CTRL MSR is updated, and the related update functions all have
"speculative_store" or "ssb" in their names.

For enhanced mitigation control, other bits in the SPEC_CTRL MSR need to be
updated as well, which makes the SSB-centric names inadequate.

Rename the "speculative_store*" functions to a more generic name.
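
For quick reference, the renames map as follows (taken from the diff
below):

   speculative_store_bypass_update()          -> speculation_ctrl_update()
   speculative_store_bypass_update_current()  -> speculation_ctrl_update_current()
   __speculative_store_bypass_update()        -> __speculation_ctrl_update()
   intel_set_ssb_state()                      -> spec_ctrl_update_msr()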

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


---
 arch/x86/include/asm/spec-ctrl.h |    6 +++---
 arch/x86/kernel/cpu/bugs.c       |    4 ++--
 arch/x86/kernel/process.c        |   12 ++++++------
 3 files changed, 11 insertions(+), 11 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -70,11 +70,11 @@ extern void speculative_store_bypass_ht_
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(unsigned long tif);
+extern void speculation_ctrl_update(unsigned long tif);
 
-static inline void speculative_store_bypass_update_current(void)
+static inline void speculation_ctrl_update_current(void)
 {
-	speculative_store_bypass_update(current_thread_info()->flags);
+	speculation_ctrl_update(current_thread_info()->flags);
 }
 
 #endif
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -200,7 +200,7 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
 		tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) :
 				 ssbd_spec_ctrl_to_tif(hostval);
 
-		speculative_store_bypass_update(tif);
+		speculation_ctrl_update(tif);
 	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
@@ -632,7 +632,7 @@ static int ssb_prctl_set(struct task_str
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update_current();
+		speculation_ctrl_update_current();
 
 	return 0;
 }
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,27 @@ static __always_inline void amd_set_ssb_
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifn)
 {
 	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 		amd_set_ssb_virt_state(tifn);
 	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
-		intel_set_ssb_state(tifn);
+		spec_ctrl_update_msr(tifn);
 }
 
-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(tif);
 	preempt_enable();
 }
 
@@ -452,7 +452,7 @@ void __switch_to_xtra(struct task_struct
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+		__speculation_ctrl_update(tifn);
 }
 
 /*




* [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (5 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26 15:47   ` Borislav Petkov
                     ` (2 more replies)
  2018-11-25 18:33 ` [patch V2 08/28] sched/smt: Make sched_smt_present track topology Thomas Gleixner
                   ` (23 subsequent siblings)
  30 siblings, 3 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Reorganize-speculation-control-MSRs-update.patch --]
[-- Type: text/plain, Size: 2678 bytes --]

The logic to detect whether the previous and next tasks' flags relevant to
the speculation control MSRs have changed is spread out across multiple
functions.

Consolidate all checks needed for updating the speculation control MSRs
into the new __speculation_ctrl_update() helper function.

This makes it easy to pick the right speculation control MSR and the bits
in that MSR that need updating based on TIF flag changes.

Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
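
As a side note on the forced-update trick visible in the diff: passing the
complemented flag word as the "previous" flags makes every per-flag test in
__speculation_ctrl_update() see a change. A standalone illustration (plain
userspace C, not kernel code; the _TIF_SSBD bit position is illustrative):

	#include <stdio.h>

	#define _TIF_SSBD	(1UL << 5)	/* illustrative bit position */

	int main(void)
	{
		unsigned long tifn = 0;		/* next task: SSBD clear */
		unsigned long tifp = ~tifn;	/* forced update: every bit differs */

		/* (tifp ^ tifn) == ~0UL, so any flag test fires. */
		printf("SSBD update needed: %d\n", !!((tifp ^ tifn) & _TIF_SSBD));
		return 0;
	}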

---
 arch/x86/kernel/process.c |   42 ++++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -397,25 +397,48 @@ static __always_inline void amd_set_ssb_
 
 static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+	u64 msr = x86_spec_ctrl_base;
+
+	/*
+	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
+	 * touched.
+	 */
+	if (static_cpu_has(X86_FEATURE_SSBD))
+		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
+/*
+ * Update the MSRs managing speculation control, during context switch.
+ *
+ * tifp: Previous task's thread flags
+ * tifn: Next task's thread flags
+ */
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
 {
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
+	bool updmsr = false;
+
+	/* If TIF_SSBD is different, select the proper mitigation method */
+	if ((tifp ^ tifn) & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+			amd_set_ssb_virt_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			amd_set_core_ssb_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_SSBD))
+			updmsr  = true;
+	}
+
+	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }
 
 void speculation_ctrl_update(unsigned long tif)
 {
+	/* Forced update. Make sure all relevant TIF flags are different */
 	preempt_disable();
-	__speculation_ctrl_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -451,8 +474,7 @@ void __switch_to_xtra(struct task_struct
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculation_ctrl_update(tifn);
+	__speculation_ctrl_update(tifp, tifn);
 }
 
 /*




* [patch V2 08/28] sched/smt: Make sched_smt_present track topology
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (6 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Peter Zijlstra (Intel)
  2018-11-29 14:42   ` [patch V2 08/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
                   ` (22 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: sched-smt-Make-sched-smt-present-track-topology.patch --]
[-- Type: text/plain, Size: 1871 bytes --]

Currently the 'sched_smt_present' static key is enabled when SMT topology
is observed at CPU bringup, but it is never disabled. However, there is
demand to also disable the key when the topology changes such that no SMT
siblings are present anymore.

Implement this by making the key count the number of cores that have SMT
enabled.

In particular, the SMT topology bits are set before interrupts are enabled
and, similarly, are cleared after interrupts are disabled for the last time
and the CPU dies.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
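
A standalone illustration (plain userspace C, not kernel code) of the
counting scheme: a core takes one reference on the key when its second
sibling comes online and drops it when the sibling count is about to fall
below two. The helpers stand in for static_branch_inc/dec_cpuslocked():

	#include <stdio.h>

	static int smt_key;	/* stands in for sched_smt_present */

	static void sibling_online(int *online)
	{
		if (++*online == 2)
			smt_key++;	/* static_branch_inc_cpuslocked() */
	}

	static void sibling_offline(int *online)
	{
		if ((*online)-- == 2)
			smt_key--;	/* static_branch_dec_cpuslocked() */
	}

	int main(void)
	{
		int core0 = 0, core1 = 0;

		sibling_online(&core0);		/* one sibling: key stays 0 */
		sibling_online(&core0);		/* core0 is SMT: key -> 1 */
		sibling_online(&core1);
		sibling_online(&core1);		/* core1 is SMT: key -> 2 */
		sibling_offline(&core0);	/* key -> 1, SMT still present */
		printf("sched_smt_present: %s\n", smt_key ? "on" : "off");
		return 0;
	}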

---
 kernel/sched/core.c |   19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
 
 #ifdef CONFIG_SCHED_SMT
 	/*
-	 * The sched_smt_present static key needs to be evaluated on every
-	 * hotplug event because at boot time SMT might be disabled when
-	 * the number of booted CPUs is limited.
-	 *
-	 * If then later a sibling gets hotplugged, then the key would stay
-	 * off and SMT scheduling would never be functional.
+	 * When going up, increment the number of cores with SMT present.
 	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-		static_branch_enable_cpuslocked(&sched_smt_present);
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
 	set_cpu_active(cpu, true);
 
@@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
 	 */
 	synchronize_rcu_mult(call_rcu, call_rcu_sched);
 
+#ifdef CONFIG_SCHED_SMT
+	/*
+	 * When going down, decrement the number of cores with SMT present.
+	 */
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_dec_cpuslocked(&sched_smt_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 




* [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (7 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 08/28] sched/smt: Make sched_smt_present track topology Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-29 14:44   ` [patch V2 09/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 10/28] sched/smt: Expose sched_smt_present static key Thomas Gleixner
                   ` (21 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-Kconfig-Select-SCHED-SMT-if-SMP-enabled.patch --]
[-- Type: text/plain, Size: 1132 bytes --]

CONFIG_SCHED_SMT is enabled by all distros, so there is no real point in
keeping it configurable. The runtime overhead in the core scheduler code is
minimal because the actual SMT scheduling parts are conditional on a static
key.

This allows the scheduler's SMT state static key to be exposed to the
speculation control code. Alternatively, the scheduler's static key could
be made always available when CONFIG_SMP is enabled, but that would just
add an unused static key to every other architecture for nothing.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/Kconfig |    8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1001,13 +1001,7 @@ config NR_CPUS
 	  to the kernel image.
 
 config SCHED_SMT
-	bool "SMT (Hyperthreading) scheduler support"
-	depends on SMP
-	---help---
-	  SMT scheduler support improves the CPU scheduler's decision making
-	  when dealing with Intel Pentium 4 chips with HyperThreading at a
-	  cost of slightly increased overhead in some places. If unsure say
-	  N here.
+	def_bool y if SMP
 
 config SCHED_MC
 	def_bool y




* [patch V2 10/28] sched/smt: Expose sched_smt_present static key
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (8 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:25   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-29 14:44   ` [patch V2 10/28] " Konrad Rzeszutek Wilk
  2018-11-25 18:33 ` [patch V2 11/28] x86/speculation: Rework SMT state change Thomas Gleixner
                   ` (20 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: sched-smt-Expose-sched-smt-present-static-key.patch --]
[-- Type: text/plain, Size: 1467 bytes --]

Make the scheduler's 'sched_smt_present' static key globally available, so
it can be used in the x86 speculation control code.

Provide a query function and a stub for the CONFIG_SMP=n case.
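
A minimal caller sketch (the function name is hypothetical;
sched_smt_active() is the query added below):

	#include <linux/sched/smt.h>

	static bool stibp_update_needed(void)
	{
		/* Resolves to a static branch with CONFIG_SCHED_SMT=y,
		 * and to a compile-time 'false' otherwise. */
		return sched_smt_active();
	}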

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---

v1 -> v2: Move SMT stuff to separate header. Unbreaks ia64 build

---
 include/linux/sched/smt.h |   18 ++++++++++++++++++
 kernel/sched/sched.h      |    4 +---
 2 files changed, 19 insertions(+), 3 deletions(-)

--- /dev/null
+++ b/include/linux/sched/smt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_SMT_H
+#define _LINUX_SCHED_SMT_H
+
+#include <linux/static_key.h>
+
+#ifdef CONFIG_SCHED_SMT
+extern struct static_key_false sched_smt_present;
+
+static __always_inline bool sched_smt_active(void)
+{
+	return static_branch_likely(&sched_smt_present);
+}
+#else
+static inline bool sched_smt_active(void) { return false; }
+#endif
+
+#endif
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -23,6 +23,7 @@
 #include <linux/sched/prio.h>
 #include <linux/sched/rt.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/smt.h>
 #include <linux/sched/stat.h>
 #include <linux/sched/sysctl.h>
 #include <linux/sched/task.h>
@@ -936,9 +937,6 @@ static inline int cpu_of(struct rq *rq)
 
 
 #ifdef CONFIG_SCHED_SMT
-
-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)




* [patch V2 11/28] x86/speculation: Rework SMT state change
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (9 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 10/28] sched/smt: Expose sched_smt_present static key Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:26   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 12/28] x86/l1tf: Show actual SMT state Thomas Gleixner
                   ` (19 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Rework-SMT-state-change.patch --]
[-- Type: text/plain, Size: 3266 bytes --]

arch_smt_update() is only called when the sysfs SMT control knob is
changed. This means that when SMT is enabled in the sysfs control knob,
the system is considered to have SMT active even if all siblings are
offline.

To allow fine-grained control of the speculation mitigations, the actual
SMT state is more interesting than the fact that siblings could be enabled.

Rework the code so that arch_smt_update() is invoked from each individual
CPU hotplug function, and simplify the update function while at it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
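
For clarity, this relies on the common weak-default pattern (both
definitions appear in the diff below):

	/* kernel/cpu.c: empty default for architectures without SMT errata */
	void __weak arch_smt_update(void) { }

	/* arch/x86/kernel/cpu/bugs.c: strong definition overrides the default */
	void arch_smt_update(void) { /* update STIBP accordingly */ }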

---

v1 -> v2: Adapt to new header sched/smt.h

---

 arch/x86/kernel/cpu/bugs.c |   11 +++++------
 include/linux/sched/smt.h  |    2 ++
 kernel/cpu.c               |   15 +++++++++------
 3 files changed, 16 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/smt.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -344,16 +345,14 @@ void arch_smt_update(void)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
-	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
 		mask |= SPEC_CTRL_STIBP;
-	else
-		mask &= ~SPEC_CTRL_STIBP;
 
 	if (mask != x86_spec_ctrl_base) {
 		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-				cpu_smt_control == CPU_SMT_ENABLED ?
-				"Enabling" : "Disabling");
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
 		x86_spec_ctrl_base = mask;
 		on_each_cpu(update_stibp_msr, NULL, 1);
 	}
--- a/include/linux/sched/smt.h
+++ b/include/linux/sched/smt.h
@@ -15,4 +15,6 @@ static __always_inline bool sched_smt_ac
 static inline bool sched_smt_active(void) { return false; }
 #endif
 
+void arch_smt_update(void);
+
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -10,6 +10,7 @@
 #include <linux/sched/signal.h>
 #include <linux/sched/hotplug.h>
 #include <linux/sched/task.h>
+#include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
 #include <linux/oom.h>
@@ -367,6 +368,12 @@ static void lockdep_release_cpus_lock(vo
 
 #endif	/* CONFIG_HOTPLUG_CPU */
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override this.
+ */
+void __weak arch_smt_update(void) { }
+
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 EXPORT_SYMBOL_GPL(cpu_smt_control);
@@ -1011,6 +1018,7 @@ static int __ref _cpu_down(unsigned int
 	 * concurrent CPU hotplug via cpu_add_remove_lock.
 	 */
 	lockup_detector_cleanup();
+	arch_smt_update();
 	return ret;
 }
 
@@ -1139,6 +1147,7 @@ static int _cpu_up(unsigned int cpu, int
 	ret = cpuhp_up_callbacks(cpu, st, target);
 out:
 	cpus_write_unlock();
+	arch_smt_update();
 	return ret;
 }
 
@@ -2055,12 +2064,6 @@ static void cpuhp_online_cpu_device(unsi
 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
-/*
- * Architectures that need SMT-specific errata handling during SMT hotplug
- * should override this.
- */
-void __weak arch_smt_update(void) { };
-
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
 	int cpu, ret = 0;




* [patch V2 12/28] x86/l1tf: Show actual SMT state
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (10 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 11/28] x86/speculation: Rework SMT state change Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:26   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 13/28] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
                   ` (18 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-l1tf-Show-actual-SMT-state.patch --]
[-- Type: text/plain, Size: 1225 bytes --]

Use the now exposed real SMT state, not the SMT sysfs control knob
state. This reflects the state of the system when the mitigation status is
queried.

This does not change the warning in the VMX launch code. There the
dependency on the control knob makes sense because siblings could be
brought online anytime after launching the VM.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -832,13 +832,14 @@ static ssize_t l1tf_show_state(char *buf
 
 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
 	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
-	     cpu_smt_control == CPU_SMT_ENABLED))
+	     sched_smt_active())) {
 		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
 			       l1tf_vmx_states[l1tf_vmx_mitigation]);
+	}
 
 	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
 		       l1tf_vmx_states[l1tf_vmx_mitigation],
-		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
+		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 #else
 static ssize_t l1tf_show_state(char *buf)




* [patch V2 13/28] x86/speculation: Reorder the spec_v2 code
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (11 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 12/28] x86/l1tf: Show actual SMT state Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26 22:21   ` Borislav Petkov
  2018-11-28 14:27   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 14/28] x86/speculation: Mark string arrays const correctly Thomas Gleixner
                   ` (17 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Reorder-the-spec-v2-code.patch --]
[-- Type: text/plain, Size: 6410 bytes --]

Reorder the code so it is better grouped.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |  168 ++++++++++++++++++++++-----------------------
 1 file changed, 84 insertions(+), 84 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -124,29 +124,6 @@ void __init check_bugs(void)
 #endif
 }
 
-/* The kernel command line selection */
-enum spectre_v2_mitigation_cmd {
-	SPECTRE_V2_CMD_NONE,
-	SPECTRE_V2_CMD_AUTO,
-	SPECTRE_V2_CMD_FORCE,
-	SPECTRE_V2_CMD_RETPOLINE,
-	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
-	SPECTRE_V2_CMD_RETPOLINE_AMD,
-};
-
-static const char *spectre_v2_strings[] = {
-	[SPECTRE_V2_NONE]			= "Vulnerable",
-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
-};
-
-#undef pr_fmt
-#define pr_fmt(fmt)     "Spectre V2 : " fmt
-
-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
-	SPECTRE_V2_NONE;
-
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -216,6 +193,12 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
+#undef pr_fmt
+#define pr_fmt(fmt)     "Spectre V2 : " fmt
+
+static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+	SPECTRE_V2_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,18 +220,6 @@ static inline const char *spectre_v2_mod
 static inline const char *spectre_v2_module_string(void) { return ""; }
 #endif
 
-static void __init spec2_print_if_insecure(const char *reason)
-{
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
 static inline bool match_option(const char *arg, int arglen, const char *opt)
 {
 	int len = strlen(opt);
@@ -256,24 +227,53 @@ static inline bool match_option(const ch
 	return len == arglen && !strncmp(arg, opt, len);
 }
 
+/* The kernel command line selection for spectre v2 */
+enum spectre_v2_mitigation_cmd {
+	SPECTRE_V2_CMD_NONE,
+	SPECTRE_V2_CMD_AUTO,
+	SPECTRE_V2_CMD_FORCE,
+	SPECTRE_V2_CMD_RETPOLINE,
+	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
+	SPECTRE_V2_CMD_RETPOLINE_AMD,
+};
+
+static const char *spectre_v2_strings[] = {
+	[SPECTRE_V2_NONE]			= "Vulnerable",
+	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+};
+
 static const struct {
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
 } mitigation_options[] = {
-	{ "off",               SPECTRE_V2_CMD_NONE,              false },
-	{ "on",                SPECTRE_V2_CMD_FORCE,             true },
-	{ "retpoline",         SPECTRE_V2_CMD_RETPOLINE,         false },
-	{ "retpoline,amd",     SPECTRE_V2_CMD_RETPOLINE_AMD,     false },
-	{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
-	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
+	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
+	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
+	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
+static void __init spec2_print_if_insecure(const char *reason)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
+static void __init spec2_print_if_secure(const char *reason)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
+	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 	char arg[20];
 	int ret, i;
-	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
@@ -317,48 +317,6 @@ static enum spectre_v2_mitigation_cmd __
 	return cmd;
 }
 
-static bool stibp_needed(void)
-{
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
-}
-
-static void update_stibp_msr(void *info)
-{
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-}
-
-void arch_smt_update(void)
-{
-	u64 mask;
-
-	if (!stibp_needed())
-		return;
-
-	mutex_lock(&spec_ctrl_mutex);
-
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
-	}
-	mutex_unlock(&spec_ctrl_mutex);
-}
-
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -462,6 +420,48 @@ static void __init spectre_v2_select_mit
 	arch_smt_update();
 }
 
+static bool stibp_needed(void)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_NONE)
+		return false;
+
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
+	if (!boot_cpu_has(X86_FEATURE_STIBP))
+		return false;
+
+	return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+	u64 mask;
+
+	if (!stibp_needed())
+		return;
+
+	mutex_lock(&spec_ctrl_mutex);
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask != x86_spec_ctrl_base) {
+		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
+		x86_spec_ctrl_base = mask;
+		on_each_cpu(update_stibp_msr, NULL, 1);
+	}
+	mutex_unlock(&spec_ctrl_mutex);
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
 




* [patch V2 14/28] x86/speculation: Mark string arrays const correctly
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (12 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 13/28] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:27   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 15/28] x86/speculataion: Mark command line parser data __initdata Thomas Gleixner
                   ` (16 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Mark-string-arrays-const-correctly.patch --]
[-- Type: text/plain, Size: 1504 bytes --]

checkpatch.pl muttered when reshuffling the code:
 WARNING: static const char * array should probably be static const char * const

Fix up all the string arrays.
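
For reference, the distinction the warning is about:

	static const char *a[];		/* const strings, writable pointers  */
	static const char * const a[];	/* const strings, read-only pointers */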

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -237,7 +237,7 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
-static const char *spectre_v2_strings[] = {
+static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
@@ -476,7 +476,7 @@ enum ssb_mitigation_cmd {
 	SPEC_STORE_BYPASS_CMD_SECCOMP,
 };
 
-static const char *ssb_strings[] = {
+static const char * const ssb_strings[] = {
 	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
 	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled",
 	[SPEC_STORE_BYPASS_PRCTL]	= "Mitigation: Speculative Store Bypass disabled via prctl",
@@ -816,7 +816,7 @@ early_param("l1tf", l1tf_cmdline);
 #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
 
 #if IS_ENABLED(CONFIG_KVM_INTEL)
-static const char *l1tf_vmx_states[] = {
+static const char * const l1tf_vmx_states[] = {
 	[VMENTER_L1D_FLUSH_AUTO]		= "auto",
 	[VMENTER_L1D_FLUSH_NEVER]		= "vulnerable",
 	[VMENTER_L1D_FLUSH_COND]		= "conditional cache flushes",




* [patch V2 15/28] x86/speculataion: Mark command line parser data __initdata
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (13 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 14/28] x86/speculation: Mark string arrays const correctly Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:28   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 16/28] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
                   ` (15 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculataion--Mark-command-line-parser-data-__initdata.patch --]
[-- Type: text/plain, Size: 1025 bytes --]

No point in keeping this around after init; the parser data is only used
during boot.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -248,7 +248,7 @@ static const struct {
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
-} mitigation_options[] = {
+} mitigation_options[] __initdata = {
 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
@@ -486,7 +486,7 @@ static const char * const ssb_strings[]
 static const struct {
 	const char *option;
 	enum ssb_mitigation_cmd cmd;
-} ssb_mitigation_options[] = {
+} ssb_mitigation_options[]  __initdata = {
 	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },    /* Platform decides */
 	{ "on",		SPEC_STORE_BYPASS_CMD_ON },      /* Disable Speculative Store Bypass */
 	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },    /* Don't touch Speculative Store Bypass */




* [patch V2 16/28] x86/speculation: Unify conditional spectre v2 print functions
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (14 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 15/28] x86/speculataion: Mark command line parser data __initdata Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:29   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 17/28] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
                   ` (14 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Unify-conditional-spectre-v2-print-functions.patch --]
[-- Type: text/plain, Size: 1234 bytes --]

There is no point in having two functions and a conditional at the call
site.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
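
The unified check, boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure, covers
exactly what the old pair did. As a truth table:

	bug present | option 'secure' | printed
	------------+-----------------+-----------------------------------
	yes         | false           | yes  (was spec2_print_if_insecure)
	yes         | true            | no
	no          | false           | no
	no          | true            | yes  (was spec2_print_if_secure)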

---
 arch/x86/kernel/cpu/bugs.c |   17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -257,15 +257,9 @@ static const struct {
 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
-static void __init spec2_print_if_insecure(const char *reason)
+static void __init spec_v2_print_cond(const char *reason, bool secure)
 {
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
 		pr_info("%s selected on command line.\n", reason);
 }
 
@@ -309,11 +303,8 @@ static enum spectre_v2_mitigation_cmd __
 		return SPECTRE_V2_CMD_AUTO;
 	}
 
-	if (mitigation_options[i].secure)
-		spec2_print_if_secure(mitigation_options[i].option);
-	else
-		spec2_print_if_insecure(mitigation_options[i].option);
-
+	spec_v2_print_cond(mitigation_options[i].option,
+			   mitigation_options[i].secure);
 	return cmd;
 }
 




* [patch V2 17/28] x86/speculation: Add command line control for indirect branch speculation
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (15 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 16/28] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:29   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
                   ` (13 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Add-command-line-control-for-indirect-branch-speculation.patch --]
[-- Type: text/plain, Size: 8993 bytes --]

Add command line control for user space indirect branch speculation
mitigations. The new option is: spectre_v2_user=

The initial options are:

    - on:    Unconditionally enabled
    - off:   Unconditionally disabled
    - auto:  Kernel selects the mitigation (default off for now)

When the spectre_v2= command line argument is either 'on' or 'off', the
application-to-application control follows that state even if a
contradicting spectre_v2_user= argument is supplied.
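
Illustrative boot command line fragments (values as documented in the
patch below):

	spectre_v2=on                        # implies spectre_v2_user=on
	spectre_v2=auto spectre_v2_user=off  # kernel mitigation auto-selected,
	                                     # task-to-task mitigation off
	spectre_v2_user=auto                 # same as omitting the option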

Originally-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V1 -> V2: Change the option to spectre_v2_user=
---
 Documentation/admin-guide/kernel-parameters.txt |   32 +++++
 arch/x86/include/asm/nospec-branch.h            |   10 +
 arch/x86/kernel/cpu/bugs.c                      |  133 ++++++++++++++++++++----
 3 files changed, 156 insertions(+), 19 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4194,9 +4194,13 @@
 
 	spectre_v2=	[X86] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
+			The default operation protects the kernel from
+			user space attacks.
 
-			on   - unconditionally enable
-			off  - unconditionally disable
+			on   - unconditionally enable, implies
+			       spectre_v2_user=on
+			off  - unconditionally disable, implies
+			       spectre_v2_user=off
 			auto - kernel detects whether your CPU model is
 			       vulnerable
 
@@ -4206,6 +4210,12 @@
 			CONFIG_RETPOLINE configuration option, and the
 			compiler with which the kernel was built.
 
+			Selecting 'on' will also enable the mitigation
+			against user space to user space task attacks.
+
+			Selecting 'off' will disable both the kernel and
+			the user space protections.
+
 			Specific mitigations can also be selected manually:
 
 			retpoline	  - replace indirect branches
@@ -4215,6 +4225,24 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_user=
+			[X86] Control mitigation of Spectre variant 2
+			(indirect branch speculation) vulnerability between
+			user space tasks
+
+			on	- Unconditionally enable mitigations. Is
+				  enforced by spectre_v2=on
+
+			off     - Unconditionally disable mitigations. Is
+				  enforced by spectre_v2=off
+
+			auto    - Kernel selects the mitigation depending on
+				  the available CPU features and vulnerability.
+				  Default is off.
+
+			Not specifying this option is equivalent to
+			spectre_v2_user=auto.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,8 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
+
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -226,6 +228,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+/* The indirect branch speculation control variants */
+enum spectre_v2_user_mitigation {
+	SPECTRE_V2_USER_NONE,
+	SPECTRE_V2_USER_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -303,6 +311,8 @@ do {									\
 	preempt_enable();						\
 } while (0)
 
+DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 #endif /* __ASSEMBLY__ */
 
 /*
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,9 @@ static u64 __ro_after_init x86_spec_ctrl
 u64 __ro_after_init x86_amd_ls_cfg_base;
 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
 
+/* Control conditional STIBP in switch_to() */
+DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 void __init check_bugs(void)
 {
 	identify_boot_cpu();
@@ -199,6 +202,9 @@ static void x86_amd_ssb_disable(void)
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =
+	SPECTRE_V2_USER_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,6 +243,104 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_user_cmd {
+	SPECTRE_V2_USER_CMD_NONE,
+	SPECTRE_V2_USER_CMD_AUTO,
+	SPECTRE_V2_USER_CMD_FORCE,
+};
+
+static const char * const spectre_v2_user_strings[] = {
+	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
+	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
+};
+
+static const struct {
+	const char			*option;
+	enum spectre_v2_user_cmd	cmd;
+	bool				secure;
+} v2_user_options[] __initdata = {
+	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
+	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
+	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
+};
+
+static void __init spec_v2_user_print_cond(const char *reason, bool secure)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
+		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
+}
+
+static enum spectre_v2_user_cmd __init
+spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	char arg[20];
+	int ret, i;
+
+	switch (v2_cmd) {
+	case SPECTRE_V2_CMD_NONE:
+		return SPECTRE_V2_USER_CMD_NONE;
+	case SPECTRE_V2_CMD_FORCE:
+		return SPECTRE_V2_USER_CMD_FORCE;
+	default:
+		break;
+	}
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
+				  arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_USER_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
+		if (match_option(arg, ret, v2_user_options[i].option)) {
+			spec_v2_user_print_cond(v2_user_options[i].option,
+						v2_user_options[i].secure);
+			return v2_user_options[i].cmd;
+		}
+	}
+
+	pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg);
+	return SPECTRE_V2_USER_CMD_AUTO;
+}
+
+static void __init
+spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
+	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+
+	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+		return;
+
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		smt_possible = false;
+
+	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_NONE:
+		goto set_mode;
+	case SPECTRE_V2_USER_CMD_FORCE:
+		mode = SPECTRE_V2_USER_STRICT;
+		break;
+	}
+
+	/* Initialize Indirect Branch Prediction Barrier */
+	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+	}
+
+	/* If enhanced IBRS is enabled no STIBP required */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return;
+
+set_mode:
+	spectre_v2_user = mode;
+	/* Only print the STIBP mode when SMT possible */
+	if (smt_possible)
+		pr_info("%s\n", spectre_v2_user_strings[mode]);
+}
+
 static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
@@ -385,12 +489,6 @@ static void __init spectre_v2_select_mit
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
-	}
-
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
 	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
@@ -407,23 +505,21 @@ static void __init spectre_v2_select_mit
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
 
+	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+	spectre_v2_user_select_mitigation(cmd);
+
 	/* Enable STIBP if appropriate */
 	arch_smt_update();
 }
 
 static bool stibp_needed(void)
 {
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
 	/* Enhanced IBRS makes using STIBP unnecessary. */
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return false;
 
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
+	/* Check for strict user mitigation mode */
+	return spectre_v2_user == SPECTRE_V2_USER_STRICT;
 }
 
 static void update_stibp_msr(void *info)
@@ -844,10 +940,13 @@ static char *stibp_state(void)
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return "";
 
-	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
-		return ", STIBP";
-	else
-		return "";
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		return ", STIBP: disabled";
+	case SPECTRE_V2_USER_STRICT:
+		return ", STIBP: forced";
+	}
+	return "";
 }
 
 static char *ibpb_state(void)



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (16 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 17/28] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-27 17:25   ` Lendacky, Thomas
  2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-25 18:33 ` [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code Thomas Gleixner
                   ` (12 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Prepare-for-per-task-indirect-branch-speculation-control.patch --]
[-- Type: text/plain, Size: 6403 bytes --]

To avoid the overhead of STIBP always on, it's necessary to allow per task
control of STIBP.

Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
SMT is active and flag evaluation is enabled by the speculation control
code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
guest/host switch works properly.

This has no effect because TIF_SPEC_IB cannot be set yet and the static key
which controls evaluation is off. Preparatory patch for adding the control
code.

[ tglx: Simplify the context switch logic and make the TIF evaluation
  	depend on SMP=y and on the static key controlling the conditional
  	update. Rename it to TIF_SPEC_IB because it controls both STIBP and
  	IBPB ]
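
To make the bit arithmetic concrete, a minimal stand-alone sketch of the
TIF -> SPEC_CTRL translation the new helpers perform (constants copied from
the hunks below; the program itself is illustrative only, not part of the
patch):

	#include <stdio.h>

	#define SPEC_CTRL_STIBP_SHIFT	1
	#define SPEC_CTRL_STIBP		(1UL << SPEC_CTRL_STIBP_SHIFT)
	#define TIF_SPEC_IB		9
	#define _TIF_SPEC_IB		(1UL << TIF_SPEC_IB)

	/* Same shift trick as stibp_tif_to_spec_ctrl(): TIF bit 9 -> MSR bit 1 */
	static unsigned long tif_to_spec_ctrl(unsigned long tifn)
	{
		return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
	}

	int main(void)
	{
		printf("%#lx\n", tif_to_spec_ctrl(_TIF_SPEC_IB));	/* 0x2 == SPEC_CTRL_STIBP */
		printf("%#lx\n", tif_to_spec_ctrl(0));			/* 0 */
		return 0;
	}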

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---

v1 -> v2: Remove pointless include. Use consistent comments.

---
 arch/x86/include/asm/msr-index.h   |    5 +++--
 arch/x86/include/asm/spec-ctrl.h   |   12 ++++++++++++
 arch/x86/include/asm/thread_info.h |    5 ++++-
 arch/x86/kernel/cpu/bugs.c         |    4 ++++
 arch/x86/kernel/process.c          |   23 +++++++++++++++++++++--
 5 files changed, 44 insertions(+), 5 deletions(-)

--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,9 +41,10 @@
 
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
+#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
-#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
+#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
 	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
+#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -110,6 +111,7 @@ struct thread_info {
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -146,7 +148,8 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
+	 _TIF_SSBD|_TIF_SPEC_IB)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
 		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
+		/* Conditional STIBP enabled? */
+		if (static_branch_unlikely(&switch_to_cond_stibp))
+			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
 		if (hostval != guestval) {
 			msrval = setguest ? guestval : hostval;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
 	if (static_cpu_has(X86_FEATURE_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
+	/* Only evaluate if conditional STIBP is enabled */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp))
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
@@ -418,10 +423,16 @@ static __always_inline void spec_ctrl_up
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
+	unsigned long tif_diff = tifp ^ tifn;
 	bool updmsr = false;
 
-	/* If TIF_SSBD is different, select the proper mitigation method */
-	if ((tifp ^ tifn) & _TIF_SSBD) {
+	/*
+	 * If TIF_SSBD is different, select the proper mitigation
+	 * method. Note that if SSBD mitigation is disabled or permanently
+	 * enabled this branch can't be taken because nothing can set
+	 * TIF_SSBD.
+	 */
+	if (tif_diff & _TIF_SSBD) {
 		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 			amd_set_ssb_virt_state(tifn);
 		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
@@ -430,6 +441,14 @@ static __always_inline void __speculatio
 			updmsr  = true;
 	}
 
+	/*
+	 * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
+	 * otherwise avoid the MSR write.
+	 */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp))
+		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+
 	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (17 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26 18:30   ` Borislav Petkov
  2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 20/28] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
                   ` (11 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-process--Consolidate-and-simplify-switch_to_xtra---code.patch --]
[-- Type: text/plain, Size: 5635 bytes --]

Move the conditional invocation of __switch_to_xtra() into an inline
function so the logic can be shared between 32 and 64 bit.

Remove the pass-through of the TSS pointer and retrieve the pointer directly
in the bitmap handling function. Use this_cpu_ptr() instead of the
per_cpu() indirection.

This is a preparatory change so the integration of the conditional indirect
branch speculation optimization happens in only one place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/switch_to.h |    3 ---
 arch/x86/kernel/process.c        |   12 +++++++-----
 arch/x86/kernel/process.h        |   24 ++++++++++++++++++++++++
 arch/x86/kernel/process_32.c     |   10 +++-------
 arch/x86/kernel/process_64.c     |   10 +++-------
 5 files changed, 37 insertions(+), 22 deletions(-)

--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -11,9 +11,6 @@ struct task_struct *__switch_to_asm(stru
 
 __visible struct task_struct *__switch_to(struct task_struct *prev,
 					  struct task_struct *next);
-struct tss_struct;
-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
-		      struct tss_struct *tss);
 
 /* This runs on the previous thread's stack. */
 static inline void prepare_switch_to(struct task_struct *next)
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -40,6 +40,8 @@
 #include <asm/prctl.h>
 #include <asm/spec-ctrl.h>
 
+#include "process.h"
+
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
  * no more per-task TSS's. The TSS size is kept cacheline-aligned
@@ -252,11 +254,12 @@ void arch_setup_new_exec(void)
 		enable_cpuid();
 }
 
-static inline void switch_to_bitmap(struct tss_struct *tss,
-				    struct thread_struct *prev,
+static inline void switch_to_bitmap(struct thread_struct *prev,
 				    struct thread_struct *next,
 				    unsigned long tifp, unsigned long tifn)
 {
+	struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);
+
 	if (tifn & _TIF_IO_BITMAP) {
 		/*
 		 * Copy the relevant range of the IO bitmap.
@@ -461,8 +464,7 @@ void speculation_ctrl_update(unsigned lo
 	preempt_enable();
 }
 
-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
-		      struct tss_struct *tss)
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 {
 	struct thread_struct *prev, *next;
 	unsigned long tifp, tifn;
@@ -472,7 +474,7 @@ void __switch_to_xtra(struct task_struct
 
 	tifn = READ_ONCE(task_thread_info(next_p)->flags);
 	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
-	switch_to_bitmap(tss, prev, next, tifp, tifn);
+	switch_to_bitmap(prev, next, tifp, tifn);
 
 	propagate_user_return_notify(prev_p, next_p);
 
--- /dev/null
+++ b/arch/x86/kernel/process.h
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Code shared between 32 and 64 bit
+
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
+
+/*
+ * This needs to be inline to optimize for the common case where no extra
+ * work needs to be done.
+ */
+static inline void switch_to_extra(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	unsigned long next_tif = task_thread_info(next)->flags;
+	unsigned long prev_tif = task_thread_info(prev)->flags;
+
+	/*
+	 * __switch_to_xtra() handles debug registers, i/o bitmaps,
+	 * speculation mitigations etc.
+	 */
+	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
+		     prev_tif & _TIF_WORK_CTXSW_PREV))
+		__switch_to_xtra(prev, next);
+}
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -59,6 +59,8 @@
 #include <asm/intel_rdt_sched.h>
 #include <asm/proto.h>
 
+#include "process.h"
+
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
@@ -232,7 +234,6 @@ EXPORT_SYMBOL_GPL(start_thread);
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
@@ -264,12 +265,7 @@ EXPORT_SYMBOL_GPL(start_thread);
 	if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
 		set_iopl_mask(next->iopl);
 
-	/*
-	 * Now maybe handle debug registers and/or IO bitmaps
-	 */
-	if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
-		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);
 
 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -60,6 +60,8 @@
 #include <asm/unistd_32_ia32.h>
 #endif
 
+#include "process.h"
+
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
@@ -553,7 +555,6 @@ void compat_start_thread(struct pt_regs
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
 		     this_cpu_read(irq_count) != -1);
@@ -617,12 +618,7 @@ void compat_start_thread(struct pt_regs
 	/* Reload sp0. */
 	update_task_stack(next_p);
 
-	/*
-	 * Now maybe reload the debug registers and handle I/O bitmaps
-	 */
-	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
-		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);
 
 #ifdef CONFIG_XEN_PV
 	/*



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 20/28] x86/speculation: Avoid __switch_to_xtra() calls
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (18 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:31   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
                   ` (10 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Avoid-switch-to-xtra-calls.patch --]
[-- Type: text/plain, Size: 2337 bytes --]

The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke
__switch_to_xtra() when:

 - CONFIG_SMP is disabled

 - The conditional STIBP mode is disabled

The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask
checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the
only set bit in the work masks.

Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at
run time when the static key controlling the conditional STIBP mode is
disabled.
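
A plain C stand-in for the resulting check (the static key is modeled as a
boolean and the other work bits are omitted; only the masking logic mirrors
the hunks below):

	#include <stdbool.h>
	#include <stdio.h>

	#define _TIF_SPEC_IB	(1UL << 9)
	#define _TIF_WORK_CTXSW	(_TIF_SPEC_IB)	/* other work bits omitted */

	static bool cond_stibp_enabled;		/* stand-in for switch_to_cond_stibp */

	static bool needs_switch_to_xtra(unsigned long prev_tif, unsigned long next_tif)
	{
		if (!cond_stibp_enabled) {
			prev_tif &= ~_TIF_SPEC_IB;
			next_tif &= ~_TIF_SPEC_IB;
		}
		return ((prev_tif | next_tif) & _TIF_WORK_CTXSW) != 0;
	}

	int main(void)
	{
		printf("%d\n", needs_switch_to_xtra(_TIF_SPEC_IB, 0));	/* 0: call avoided */
		cond_stibp_enabled = true;
		printf("%d\n", needs_switch_to_xtra(_TIF_SPEC_IB, 0));	/* 1: call taken */
		return 0;
	}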

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/thread_info.h |   13 +++++++++++--
 arch/x86/kernel/process.h          |   15 +++++++++++++++
 2 files changed, 26 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -147,9 +147,18 @@ struct thread_info {
 	 _TIF_FSCHECK)
 
 /* flags to check in __switch_to() */
-#define _TIF_WORK_CTXSW							\
+#define _TIF_WORK_CTXSW_BASE						\
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
-	 _TIF_SSBD|_TIF_SPEC_IB)
+	 _TIF_SSBD)
+
+/*
+ * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
+ */
+#ifdef CONFIG_SMP
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+#else
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
+#endif
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
--- a/arch/x86/kernel/process.h
+++ b/arch/x86/kernel/process.h
@@ -2,6 +2,8 @@
 //
 // Code shared between 32 and 64 bit
 
+#include <asm/spec-ctrl.h>
+
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
 
 /*
@@ -14,6 +16,19 @@ static inline void switch_to_extra(struc
 	unsigned long next_tif = task_thread_info(next)->flags;
 	unsigned long prev_tif = task_thread_info(prev)->flags;
 
+	if (IS_ENABLED(CONFIG_SMP)) {
+		/*
+		 * Avoid __switch_to_xtra() invocation when conditional
+		 * STIBP is disabled and the only different bit is
+		 * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not
+		 * in the TIF_WORK_CTXSW masks.
+		 */
+		if (!static_branch_likely(&switch_to_cond_stibp)) {
+			prev_tif &= ~_TIF_SPEC_IB;
+			next_tif &= ~_TIF_SPEC_IB;
+		}
+	}
+
 	/*
 	 * __switch_to_xtra() handles debug registers, i/o bitmaps,
 	 * speculation mitigations etc.



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (19 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 20/28] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-25 19:11   ` Thomas Gleixner
                     ` (2 more replies)
  2018-11-25 18:33 ` [patch V2 22/28] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
                   ` (9 subsequent siblings)
  30 siblings, 3 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Prepare-for-conditional-IBPB-in-switch_mm--.patch --]
[-- Type: text/plain, Size: 10124 bytes --]

The IBPB speculation barrier is issued from switch_mm() when the kernel
switches to a user space task with a different mm than the user space task
which ran last on the same CPU.

An additional optimization is to avoid IBPB when the incoming task can be
ptraced by the outgoing task. This optimization only works when switching
directly between two user space tasks. When switching from a kernel task to
a user space task the optimization fails because the previous task cannot
be accessed anymore. So in quite a few scenarios the optimization just adds
overhead.

The upcoming conditional IBPB support will issue IBPB only for user space
tasks which have the TIF_SPEC_IB bit set. This requires to handle the
following cases:

  1) Switch from a user space task (potential attacker) which has
     TIF_SPEC_IB set to a user space task (potential victim) which has
     TIF_SPEC_IB not set.

  2) Switch from a user space task (potential attacker) which has
     TIF_SPEC_IB not set to a user space task (potential victim) which has
     TIF_SPEC_IB set.

This needs to be optimized for the case where the IBPB can be avoided when
only kernel threads ran in between user space tasks which belong to the
same process.

The current check whether two tasks belong to the same context is using the
tasks context id. While correct, it's simpler to use the mm pointer because
it allows to mangle the TIF_SPEC_IB bit into it. The context id based
mechanism requires extra storage, which creates worse code.

When a task is scheduled out its TIF_SPEC_IB bit is mangled as bit 0 into
the per CPU storage which is used to track the last user space mm which was
running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of
the incoming task to make the decision whether IBPB needs to be issued or
not to cover the two cases above.

As conditional IBPB is going to be the default, remove the dubious ptrace
check for the IBPB always case and simply issue IBPB always when the
process changes.

Move the storage to a different place in the struct as the original one
created a hole.
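
A worked user-space illustration of the mangling scheme (it relies only on
mm_struct pointers being at least 2-byte aligned so bit 0 is free;
LAST_USER_MM_IBPB matches the define added to tlb.c, the rest is a sketch):

	#include <stdio.h>

	#define LAST_USER_MM_IBPB	0x1UL

	/* Fold the task's TIF_SPEC_IB state into bit 0 of its mm pointer */
	static unsigned long mm_mangle(void *mm, int spec_ib_set)
	{
		return (unsigned long)mm | (spec_ib_set ? LAST_USER_MM_IBPB : 0);
	}

	int main(void)
	{
		static int mm_a, mm_b;	/* stand-ins for two mm_structs */
		unsigned long prev, next;

		/* Same mm, but the IB bit differs: the check fires, IBPB issued */
		prev = mm_mangle(&mm_a, 1);
		next = mm_mangle(&mm_a, 0);
		printf("%d\n", prev != next && ((prev | next) & LAST_USER_MM_IBPB) != 0);

		/* Same mm, neither side restricted: no IBPB */
		prev = mm_mangle(&mm_b, 0);
		next = mm_mangle(&mm_b, 0);
		printf("%d\n", prev != next && ((prev | next) & LAST_USER_MM_IBPB) != 0);
		return 0;
	}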

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/nospec-branch.h |    2 
 arch/x86/include/asm/tlbflush.h      |    8 +-
 arch/x86/kernel/cpu/bugs.c           |   29 +++++++--
 arch/x86/mm/tlb.c                    |  109 +++++++++++++++++++++++++----------
 4 files changed, 110 insertions(+), 38 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -312,6 +312,8 @@ do {									\
 } while (0)
 
 DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 #endif /* __ASSEMBLY__ */
 
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -169,10 +169,14 @@ struct tlb_state {
 
 #define LOADED_MM_SWITCHING ((struct mm_struct *)1)
 
+	/* Last user mm for optimizing IBPB */
+	union {
+		struct mm_struct	*last_user_mm;
+		unsigned long		last_user_mm_ibpb;
+	};
+
 	u16 loaded_mm_asid;
 	u16 next_asid;
-	/* last user mm's ctx id */
-	u64 last_ctx_id;
 
 	/*
 	 * We can be in one of several states:
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,10 @@ u64 __ro_after_init x86_amd_ls_cfg_ssbd_
 
 /* Control conditional STIBP in switch_to() */
 DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+/* Control conditional IBPB in switch_mm() */
+DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+/* Control unconditional IBPB in switch_mm() */
+DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 void __init check_bugs(void)
 {
@@ -331,7 +335,17 @@ spectre_v2_user_select_mitigation(enum s
 	/* Initialize Indirect Branch Prediction Barrier */
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+
+		switch (mode) {
+		case SPECTRE_V2_USER_STRICT:
+			static_branch_enable(&switch_mm_always_ibpb);
+			break;
+		default:
+			break;
+		}
+
+		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+			mode == SPECTRE_V2_USER_STRICT ? "always-on" : "conditional");
 	}
 
 	/* If enhanced IBRS is enabled no STIBP required */
@@ -955,10 +969,15 @@ static char *stibp_state(void)
 
 static char *ibpb_state(void)
 {
-	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
-		return ", IBPB";
-	else
-		return "";
+	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+		switch (spectre_v2_user) {
+		case SPECTRE_V2_USER_NONE:
+			return ", IBPB: disabled";
+		case SPECTRE_V2_USER_STRICT:
+			return ", IBPB: always-on";
+		}
+	}
+	return "";
 }
 
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,7 +7,6 @@
 #include <linux/export.h>
 #include <linux/cpu.h>
 #include <linux/debugfs.h>
-#include <linux/ptrace.h>
 
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -31,6 +30,12 @@
  */
 
 /*
+ * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * stored in cpu_tlbstate.last_user_mm_ibpb.
+ */
+#define LAST_USER_MM_IBPB	0x1UL
+
+/*
  * We get here when we do something requiring a TLB invalidation
  * but could not go invalidate all of the contexts.  We do the
  * necessary invalidation by clearing out the 'ctx_id' which
@@ -181,17 +186,77 @@ static void sync_current_stack_to_mm(str
 	}
 }
 
-static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
 {
-	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
-	 * issue a IBPB to stop user->user Spectre-v2 attacks.
-	 *
-	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
-	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+	unsigned long next_tif = task_thread_info(next)->flags;
+	unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
+
+	return (unsigned long)next->mm | ibpb;
+}
+
+static void cond_ibpb(struct task_struct *next)
+{
+	if (!next || !next->mm)
+		return;
+
+	if (static_branch_likely(&switch_mm_cond_ibpb)) {
+		unsigned long prev_mm, next_mm;
+
+		/*
+		 * This is a bit more complex than the always mode because
+		 * it has to handle two cases:
+		 *
+		 * 1) Switch from a user space task (potential attacker)
+		 *    which has TIF_SPEC_IB set to a user space task
+		 *    (potential victim) which has TIF_SPEC_IB not set.
+		 *
+		 * 2) Switch from a user space task (potential attacker)
+		 *    which has TIF_SPEC_IB not set to a user space task
+		 *    (potential victim) which has TIF_SPEC_IB set.
+		 *
+		 * This could be done by unconditionally issuing IBPB when
+		 * a task which has TIF_SPEC_IB set is either scheduled in
+		 * or out. Though that results in two flushes when:
+		 *
+		 * - the same user space task is scheduled out and later
+		 *   scheduled in again and only a kernel thread ran in
+		 *   between.
+		 *
+		 * - a user space task belonging to the same process is
+		 *   scheduled in after a kernel thread ran in between
+		 *
+		 * - a user space task belonging to the same process is
+		 *   scheduled in immediately.
+		 *
+		 * Optimize this with reasonably small overhead for the
+		 * above cases. Mangle the TIF_SPEC_IB bit into the mm
+		 * pointer of the incoming task which is stored in
+		 * cpu_tlbstate.last_user_mm_ibpb for comparison.
+		 */
+		next_mm = mm_mangle_tif_spec_ib(next);
+		prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
+
+		/*
+		 * Issue IBPB only if the mm's are different and one or
+		 * both have the IBPB bit set.
+		 */
+		if (next_mm != prev_mm && (next_mm | prev_mm) & LAST_USER_MM_IBPB)
+			indirect_branch_prediction_barrier();
+
+		this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
+	}
+
+	if (static_branch_unlikely(&switch_mm_always_ibpb)) {
+		/*
+		 * Only flush when switching to a user space task with a
+		 * different context than the user space task which ran
+		 * last on this CPU.
+		 */
+		if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {
+			indirect_branch_prediction_barrier();
+			this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);
+		}
+	}
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
@@ -292,22 +357,12 @@ void switch_mm_irqs_off(struct mm_struct
 		new_asid = prev_asid;
 		need_flush = true;
 	} else {
-		u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
-
 		/*
 		 * Avoid user/user BTB poisoning by flushing the branch
 		 * predictor when switching between processes. This stops
 		 * one process from doing Spectre-v2 attacks on another.
-		 *
-		 * As an optimization, flush indirect branches only when
-		 * switching into a processes that can't be ptrace by the
-		 * current one (as in such case, attacker has much more
-		 * convenient way how to tamper with the next process than
-		 * branch buffer poisoning).
 		 */
-		if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
-				ibpb_needed(tsk, last_ctx_id))
-			indirect_branch_prediction_barrier();
+		cond_ibpb(tsk);
 
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
@@ -365,14 +420,6 @@ void switch_mm_irqs_off(struct mm_struct
 		trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
 	}
 
-	/*
-	 * Record last user mm's context id, so we can avoid
-	 * flushing branch buffer with IBPB if we switch back
-	 * to the same user.
-	 */
-	if (next != &init_mm)
-		this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
-
 	/* Make sure we write CR3 before loaded_mm. */
 	barrier();
 
@@ -441,7 +488,7 @@ void initialize_tlbstate_and_flush(void)
 	write_cr3(build_cr3(mm->pgd, 0));
 
 	/* Reinitialize tlbstate. */
-	this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);
+	this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 22/28] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (20 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:32   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 23/28] x86/speculation: Split out TIF update Thomas Gleixner
                   ` (8 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: ptrace-Remove-unused-ptrace-may-access-sched-and-MODE-IBRS.patch --]
[-- Type: text/plain, Size: 2684 bytes --]

The x86 IBPB control code no longer uses ptrace_may_access_sched() and the
PTRACE_MODE_SCHED/PTRACE_MODE_IBPB modes. Remove the functionality which was
introduced solely for this purpose.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 include/linux/ptrace.h |   17 -----------------
 kernel/ptrace.c        |   10 ----------
 2 files changed, 27 deletions(-)

--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,15 +64,12 @@ extern void exit_ptrace(struct task_stru
 #define PTRACE_MODE_NOAUDIT	0x04
 #define PTRACE_MODE_FSCREDS	0x08
 #define PTRACE_MODE_REALCREDS	0x10
-#define PTRACE_MODE_SCHED	0x20
-#define PTRACE_MODE_IBPB	0x40
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)
-#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -90,20 +87,6 @@ extern void exit_ptrace(struct task_stru
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
 
-/**
- * ptrace_may_access - check whether the caller is permitted to access
- * a target task.
- * @task: target task
- * @mode: selects type of access and caller credentials
- *
- * Returns true on success, false on denial.
- *
- * Similar to ptrace_may_access(). Only to be called from context switch
- * code. Does not call into audit and the regular LSM hooks due to locking
- * constraints.
- */
-extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode);
-
 static inline int ptrace_reparented(struct task_struct *child)
 {
 	return !same_thread_group(child->real_parent, child->parent);
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -261,9 +261,6 @@ static int ptrace_check_attach(struct ta
 
 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 {
-	if (mode & PTRACE_MODE_SCHED)
-		return false;
-
 	if (mode & PTRACE_MODE_NOAUDIT)
 		return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);
 	else
@@ -331,16 +328,9 @@ static int __ptrace_may_access(struct ta
 	     !ptrace_has_cap(mm->user_ns, mode)))
 	    return -EPERM;
 
-	if (mode & PTRACE_MODE_SCHED)
-		return 0;
 	return security_ptrace_access_check(task, mode);
 }
 
-bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode)
-{
-	return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED);
-}
-
 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
 {
 	int err;



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 23/28] x86/speculation: Split out TIF update
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (21 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 22/28] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:33   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
                   ` (7 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Split-out-TIF-update.patch --]
[-- Type: text/plain, Size: 2508 bytes --]

The update of the TIF_SSBD flag and the conditional update of the
speculation control MSR are done directly in ssb_prctl_set(). The upcoming
prctl support for controlling indirect branch speculation via STIBP needs
the same mechanism.

Split the code out and make it reusable. Reword the comment about updates
for other tasks.
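
A condensed sketch of the helper's semantics (plain C; the thread flag word
and the MSR update are modeled with ordinary variables, names hypothetical):

	#include <stdbool.h>
	#include <stdio.h>

	static unsigned long tsk_flags;

	/* Mimics task_update_spec_tif(): update only when the flag flips */
	static bool update_spec_tif(unsigned long bit, bool on)
	{
		unsigned long old = tsk_flags;

		if (on)
			tsk_flags |= bit;
		else
			tsk_flags &= ~bit;

		return tsk_flags != old;	/* true -> MSR update is due */
	}

	int main(void)
	{
		printf("%d\n", update_spec_tif(1UL << 5, true));	/* 1: flag flipped */
		printf("%d\n", update_spec_tif(1UL << 5, true));	/* 0: already set */
		return 0;
	}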

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V1 -> V2: Update comment.
---
 arch/x86/kernel/cpu/bugs.c |   35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -702,10 +702,29 @@ static void ssb_select_mitigation(void)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Speculation prctl: " fmt
 
-static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 {
 	bool update;
 
+	if (on)
+		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+
+	/*
+	 * Immediately update the speculation control MSRs for the current
+	 * task, but for a non-current task delay setting the CPU
+	 * mitigation until it is scheduled next.
+	 *
+	 * This can only happen for SECCOMP mitigation. For PRCTL it's
+	 * always the current task.
+	 */
+	if (tsk == current && update)
+		speculation_ctrl_update_current();
+}
+
+static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
 	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL &&
 	    ssb_mode != SPEC_STORE_BYPASS_SECCOMP)
 		return -ENXIO;
@@ -716,28 +735,20 @@ static int ssb_prctl_set(struct task_str
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		update = test_and_clear_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, false);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	default:
 		return -ERANGE;
 	}
-
-	/*
-	 * If being set on non-current task, delay setting the CPU
-	 * mitigation until it is next scheduled.
-	 */
-	if (task == current && update)
-		speculation_ctrl_update_current();
-
 	return 0;
 }
 



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (22 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 23/28] x86/speculation: Split out TIF update Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-27 20:18   ` Lendacky, Thomas
  2018-11-28 14:34   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 25/28] x86/speculation: Add prctl() control for indirect branch speculation Thomas Gleixner
                   ` (6 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Prepare-arch-smt-update-for-PRCTL-mode.patch --]
[-- Type: text/plain, Size: 2306 bytes --]

The upcoming fine grained per task STIBP control needs to be updated on CPU
hotplug as well.

Split out the code which controls the strict mode so the prctl control code
can be added later. Mark the SMP function call argument __unused while at it.
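
A condensed model of the strict-mode recomputation this split enables
(sched_smt_active() and the MSR broadcast are replaced by stand-ins; the
mask logic mirrors the hunk below):

	#include <stdbool.h>
	#include <stdio.h>

	#define SPEC_CTRL_STIBP	(1UL << 1)

	static unsigned long spec_ctrl_base;

	static void update_stibp_strict(bool smt_active)
	{
		unsigned long mask = spec_ctrl_base & ~SPEC_CTRL_STIBP;

		if (smt_active)
			mask |= SPEC_CTRL_STIBP;

		if (mask == spec_ctrl_base)
			return;		/* no change, skip the CPU broadcast */

		spec_ctrl_base = mask;
		printf("STIBP %s\n", smt_active ? "always-on" : "off");
	}

	int main(void)
	{
		update_stibp_strict(true);	/* sibling onlined: STIBP always-on */
		update_stibp_strict(true);	/* no change: silent */
		update_stibp_strict(false);	/* last sibling offlined: STIBP off */
		return 0;
	}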

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---

v1 -> v2: s/app2app/user/. Mark smp function argument __unused

---
 arch/x86/kernel/cpu/bugs.c |   46 ++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -530,40 +530,44 @@ static void __init spectre_v2_select_mit
 	arch_smt_update();
 }
 
-static bool stibp_needed(void)
+static void update_stibp_msr(void * __unused)
 {
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	/* Check for strict user mitigation mode */
-	return spectre_v2_user == SPECTRE_V2_USER_STRICT;
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 }
 
-static void update_stibp_msr(void *info)
+/* Update x86_spec_ctrl_base in case SMT state changed. */
+static void update_stibp_strict(void)
 {
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask == x86_spec_ctrl_base)
+		return;
+
+	pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
+		mask & SPEC_CTRL_STIBP ? "always-on" : "off");
+	x86_spec_ctrl_base = mask;
+	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
 void arch_smt_update(void)
 {
-	u64 mask;
-
-	if (!stibp_needed())
+	/* Enhanced IBRS implies STIBP. No update required. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
 
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		break;
+	case SPECTRE_V2_USER_STRICT:
+		update_stibp_strict();
+		break;
 	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 25/28] x86/speculation: Add prctl() control for indirect branch speculation
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (23 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:34   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user Thomas Gleixner
                   ` (5 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Create-PRCTL-interface-to-restrict-indirect-branch-speculation.patch --]
[-- Type: text/plain, Size: 7496 bytes --]

Add the PR_SPEC_INDIRECT_BRANCH option for the PR_GET_SPECULATION_CTRL and
PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
indirect branch speculation via STIBP and IBPB.

Invocations:
 Check indirect branch speculation status with
 - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);

 Enable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);

 Disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);

 Force disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);

See Documentation/userspace-api/spec_ctrl.rst.
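
A minimal user-space consumer, for illustration (the fallback defines are
for older uapi headers; on kernels without this patch the calls simply fail
with ENODEV):

	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_GET_SPECULATION_CTRL
	# define PR_GET_SPECULATION_CTRL	52
	# define PR_SET_SPECULATION_CTRL	53
	#endif
	#ifndef PR_SPEC_INDIRECT_BRANCH
	# define PR_SPEC_INDIRECT_BRANCH	1
	#endif
	#ifndef PR_SPEC_DISABLE
	# define PR_SPEC_DISABLE		(1UL << 2)
	#endif

	int main(void)
	{
		printf("before: %#x\n",
		       prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));

		/* Restrict indirect branch speculation for this task */
		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
			  PR_SPEC_DISABLE, 0, 0))
			perror("PR_SET_SPECULATION_CTRL");

		printf("after:  %#x\n",
		       prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));
		return 0;
	}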

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---

V1 -> V2: s/INDIR_/INDIRECT_/ in ABI
          s/INDIR_BRANCH/IB/ for internal functions and defines
          s/app2app/user/
          Merge the DISABLE cases
---
 Documentation/userspace-api/spec_ctrl.rst |    9 ++++
 arch/x86/include/asm/nospec-branch.h      |    1 
 arch/x86/kernel/cpu/bugs.c                |   67 ++++++++++++++++++++++++++++++
 include/linux/sched.h                     |    9 ++++
 include/uapi/linux/prctl.h                |    1 
 tools/include/uapi/linux/prctl.h          |    1 
 6 files changed, 88 insertions(+)

--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,12 @@ Speculation misfeature controls
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_SPEC_INDIRECT_BRANCH: Indirect Branch Speculation in User Processes
+                           (Mitigate Spectre V2 style attacks against user processes)
+
+  Invocations:
+   * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -232,6 +232,7 @@ enum spectre_v2_mitigation {
 enum spectre_v2_user_mitigation {
 	SPECTRE_V2_USER_NONE,
 	SPECTRE_V2_USER_STRICT,
+	SPECTRE_V2_USER_PRCTL,
 };
 
 /* The Speculative Store Bypass disable variants */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -566,6 +566,8 @@ void arch_smt_update(void)
 	case SPECTRE_V2_USER_STRICT:
 		update_stibp_strict();
 		break;
+	case SPECTRE_V2_USER_PRCTL:
+		break;
 	}
 
 	mutex_unlock(&spec_ctrl_mutex);
@@ -756,12 +758,50 @@ static int ssb_prctl_set(struct task_str
 	return 0;
 }
 
+static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+			return 0;
+		/*
+		 * Indirect branch speculation is always disabled in strict
+		 * mode.
+		 */
+		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+			return -EPERM;
+		task_clear_spec_ib_disable(task);
+		task_update_spec_tif(task, TIF_SPEC_IB, false);
+		break;
+	case PR_SPEC_DISABLE:
+	case PR_SPEC_FORCE_DISABLE:
+		/*
+		 * Indirect branch speculation is always allowed when
+		 * mitigation is force disabled.
+		 */
+		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+			return -EPERM;
+		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+			return 0;
+		task_set_spec_ib_disable(task);
+		if (ctrl == PR_SPEC_FORCE_DISABLE)
+			task_set_spec_ib_force_disable(task);
+		task_update_spec_tif(task, TIF_SPEC_IB, true);
+		break;
+	default:
+		return -ERANGE;
+	}
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
+	case PR_SPEC_INDIRECT_BRANCH:
+		return ib_prctl_set(task, ctrl);
 	default:
 		return -ENODEV;
 	}
@@ -794,11 +834,34 @@ static int ssb_prctl_get(struct task_str
 	}
 }
 
+static int ib_prctl_get(struct task_struct *task)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		return PR_SPEC_NOT_AFFECTED;
+
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		return PR_SPEC_ENABLE;
+	case SPECTRE_V2_USER_PRCTL:
+		if (task_spec_ib_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (test_tsk_thread_flag(task, TIF_SPEC_IB))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case SPECTRE_V2_USER_STRICT:
+		return PR_SPEC_DISABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_get(task);
+	case PR_SPEC_INDIRECT_BRANCH:
+		return ib_prctl_get(task);
 	default:
 		return -ENODEV;
 	}
@@ -978,6 +1041,8 @@ static char *stibp_state(void)
 		return ", STIBP: disabled";
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
+	case SPECTRE_V2_USER_PRCTL:
+		return "";
 	}
 	return "";
 }
@@ -990,6 +1055,8 @@ static char *ibpb_state(void)
 			return ", IBPB: disabled";
 		case SPECTRE_V2_USER_STRICT:
 			return ", IBPB: always-on";
+		case SPECTRE_V2_USER_PRCTL:
+			return "";
 		}
 	}
 	return "";
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_IB_DISABLE		5	/* Indirect branch speculation restricted */
+#define PFA_SPEC_IB_FORCE_DISABLE	6	/* Indirect branch speculation permanently restricted */
 
 #define TASK_PFA_TEST(name, func)					\
 	static inline bool task_##func(struct task_struct *p)		\
@@ -1484,6 +1486,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ss
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 
+TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable)
+
+TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
+TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (24 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 25/28] x86/speculation: Add prctl() control for indirect branch speculation Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-26  7:56   ` Dominik Brodowski
  2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
                   ` (4 subsequent siblings)
  30 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Enable-PRCTL-mode-for-spectre-v2-app2app.patch --]
[-- Type: text/plain, Size: 4813 bytes --]

Now that all prerequisites are in place:

 - Add the prctl command line option

 - Default the 'auto' mode to 'prctl'

 - When SMT state changes, update the static key which controls the
   conditional STIBP evaluation on context switch.

 - At init update the static key which controls the conditional IBPB
   evaluation on context switch.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---

V1 -> V2: Fix comments
      	  s/app2app/user/
	  Make the IBPB printout depend on the static keys as it can be
	  independent of the mode, e.g. when SMT is not supported.
	  Remove the CONFIG_SMP conditional
---
 Documentation/admin-guide/kernel-parameters.txt |    7 +++-
 arch/x86/kernel/cpu/bugs.c                      |   41 ++++++++++++++++++------
 2 files changed, 38 insertions(+), 10 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4236,9 +4236,14 @@
 			off     - Unconditionally disable mitigations. Is
 				  enforced by spectre_v2=off
 
+			prctl   - Indirect branch speculation is enabled,
+				  but mitigation can be enabled via prctl
+				  per thread.  The mitigation control state
+				  is inherited on fork.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is off.
+				  Default is prctl.
 
 			Not specifying this option is equivalent to
 			spectre_v2_user=auto.
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -255,11 +255,13 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_NONE,
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
+	SPECTRE_V2_USER_CMD_PRCTL,
 };
 
 static const char * const spectre_v2_user_strings[] = {
 	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
 	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
+	[SPECTRE_V2_USER_PRCTL]		= "User space: Mitigation: STIBP via prctl",
 };
 
 static const struct {
@@ -270,6 +272,7 @@ static const struct {
 	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
 	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
+	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -324,12 +327,15 @@ spectre_v2_user_select_mitigation(enum s
 		smt_possible = false;
 
 	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
-	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_NONE:
 		goto set_mode;
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_PRCTL:
+		mode = SPECTRE_V2_USER_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -340,6 +346,9 @@ spectre_v2_user_select_mitigation(enum s
 		case SPECTRE_V2_USER_STRICT:
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
+		case SPECTRE_V2_USER_PRCTL:
+			static_branch_enable(&switch_mm_cond_ibpb);
+			break;
 		default:
 			break;
 		}
@@ -352,6 +361,12 @@ spectre_v2_user_select_mitigation(enum s
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
+	/*
+	 * If SMT is not possible or STIBP is not available, clear the STIBP
+	 * mode.
+	 */
+	if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
+		mode = SPECTRE_V2_USER_NONE;
 set_mode:
 	spectre_v2_user = mode;
 	/* Only print the STIBP mode when SMT possible */
@@ -552,6 +567,15 @@ static void update_stibp_strict(void)
 	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
+/* Update the static key controlling the evaluation of TIF_SPEC_IB */
+static void update_indir_branch_cond(void)
+{
+	if (sched_smt_active())
+		static_branch_enable(&switch_to_cond_stibp);
+	else
+		static_branch_disable(&switch_to_cond_stibp);
+}
+
 void arch_smt_update(void)
 {
 	/* Enhanced IBRS implies STIBP. No update required. */
@@ -567,6 +591,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_USER_PRCTL:
+		update_indir_branch_cond();
 		break;
 	}
 
@@ -1042,7 +1067,8 @@ static char *stibp_state(void)
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_USER_PRCTL:
-		return "";
+		if (static_key_enabled(&switch_to_cond_stibp))
+			return ", STIBP: conditional";
 	}
 	return "";
 }
@@ -1050,14 +1076,11 @@ static char *stibp_state(void)
 static char *ibpb_state(void)
 {
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		switch (spectre_v2_user) {
-		case SPECTRE_V2_USER_NONE:
-			return ", IBPB: disabled";
-		case SPECTRE_V2_USER_STRICT:
+		if (static_key_enabled(&switch_mm_always_ibpb))
 			return ", IBPB: always-on";
-		case SPECTRE_V2_USER_PRCTL:
-			return "";
-		}
+		if (static_key_enabled(&switch_mm_cond_ibpb))
+			return ", IBPB: conditional";
+		return ", IBPB: disabled";
 	}
 	return "";
 }
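
The combined result is visible in sysfs. A small reader for
illustration; the full mitigation string depends on CPU features and
the command line:

  #include <stdio.h>

  int main(void)
  {
          char buf[256];
          FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2",
                          "r");

          if (!f)
                  return 1;
          /* With prctl mode and SMT enabled, expect ", STIBP: conditional"
             and ", IBPB: conditional" in the output */
          if (fgets(buf, sizeof(buf), f))
                  fputs(buf, stdout);
          fclose(f);
          return 0;
  }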



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (25 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-25 19:35   ` Randy Dunlap
                     ` (3 more replies)
  2018-11-25 18:33 ` [patch V2 28/28] x86/speculation: Provide IBPB always command line options Thomas Gleixner
                   ` (3 subsequent siblings)
  30 siblings, 4 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Add-seccomp-Spectre-v2-app-to-app-protection-mode.patch --]
[-- Type: text/plain, Size: 5077 bytes --]

If 'prctl' mode of user space protection from spectre v2 is selected
on the kernel command-line, STIBP and IBPB are applied on tasks which
restrict their indirect branch speculation via prctl.

SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
makes sense to prevent spectre v2 user space to user space attacks as
well.

The mitigation guide documents how STIBP works:
    
   Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
   prevents the predicted targets of indirect branches on any logical
   processor of that core from being controlled by software that executes
   (or executed previously) on another logical processor of the same core.
    
Ergo setting STIBP protects the task itself from being attacked by a task
running on a different hyper-thread and protects the tasks running on
different hyper-threads from being attacked.
    
IBPB is issued when the task switches out, so malicious sandbox code cannot
mistrain the branch predictor for the next user space task on the same
logical processor.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 Documentation/admin-guide/kernel-parameters.txt |    9 ++++++++-
 arch/x86/include/asm/nospec-branch.h            |    1 +
 arch/x86/kernel/cpu/bugs.c                      |   17 ++++++++++++++++-
 3 files changed, 25 insertions(+), 2 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4241,9 +4241,16 @@
 				  per thread.  The mitigation control state
 				  is inherited on fork.
 
+			seccomp
+				- Same as "prctl" above, but all seccomp
+				  threads will enable the mitigation unless
+				  they explicitly opt out.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is prctl.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"
 
 			Not specifying this option is equivalent to
 			spectre_v2_user=auto.
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -233,6 +233,7 @@ enum spectre_v2_user_mitigation {
 	SPECTRE_V2_USER_NONE,
 	SPECTRE_V2_USER_STRICT,
 	SPECTRE_V2_USER_PRCTL,
+	SPECTRE_V2_USER_SECCOMP,
 };
 
 /* The Speculative Store Bypass disable variants */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -256,12 +256,14 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
 	SPECTRE_V2_USER_CMD_PRCTL,
+	SPECTRE_V2_USER_CMD_SECCOMP,
 };
 
 static const char * const spectre_v2_user_strings[] = {
 	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
 	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
 	[SPECTRE_V2_USER_PRCTL]		= "User space: Mitigation: STIBP via prctl",
+	[SPECTRE_V2_USER_SECCOMP]	= "User space: Mitigation: STIBP via seccomp and prctl",
 };
 
 static const struct {
@@ -273,6 +275,7 @@ static const struct {
 	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
 	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
+	{ "seccomp",	SPECTRE_V2_USER_CMD_SECCOMP,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -332,10 +335,16 @@ spectre_v2_user_select_mitigation(enum s
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
-	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_PRCTL:
 		mode = SPECTRE_V2_USER_PRCTL;
 		break;
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_SECCOMP:
+		if (IS_ENABLED(CONFIG_SECCOMP))
+			mode = SPECTRE_V2_USER_SECCOMP;
+		else
+			mode = SPECTRE_V2_USER_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -347,6 +356,7 @@ spectre_v2_user_select_mitigation(enum s
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
 		case SPECTRE_V2_USER_PRCTL:
+		case SPECTRE_V2_USER_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
 		default:
@@ -591,6 +601,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		update_indir_branch_cond();
 		break;
 	}
@@ -837,6 +848,8 @@ void arch_seccomp_spec_mitigate(struct t
 {
 	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
 		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+	if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
+		ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);
 }
 #endif
 
@@ -868,6 +881,7 @@ static int ib_prctl_get(struct task_stru
 	case SPECTRE_V2_USER_NONE:
 		return PR_SPEC_ENABLE;
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		if (task_spec_ib_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (test_tsk_thread_flag(task, TIF_SPEC_IB))
@@ -1067,6 +1081,7 @@ static char *stibp_state(void)
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		if (static_key_enabled(&switch_to_cond_stibp))
 			return ", STIBP: conditional";
 	}
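
A sandboxed task can observe the forced opt-in through the prctl
interface. A sketch, not part of the patch, using a minimal
allow-everything filter (error handling elided):

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <linux/prctl.h>
  #include <linux/seccomp.h>
  #include <linux/filter.h>

  #ifndef PR_SET_SPECULATION_CTRL
  # define PR_GET_SPECULATION_CTRL        52
  # define PR_SET_SPECULATION_CTRL        53
  #endif
  #ifndef PR_SPEC_INDIRECT_BRANCH
  # define PR_SPEC_INDIRECT_BRANCH        1
  #endif

  int main(void)
  {
          /* Minimal allow-everything filter, just to enter seccomp mode */
          struct sock_filter insns[] = {
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          };
          struct sock_fprog prog = { .len = 1, .filter = insns };
          long state;

          prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
          prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

          /* With spectre_v2_user=seccomp this reports
             PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE */
          state = prctl(PR_GET_SPECULATION_CTRL,
                        PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
          printf("state: 0x%lx\n", state);
          return 0;
  }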



^ permalink raw reply	[flat|nested] 112+ messages in thread

* [patch V2 28/28] x86/speculation: Provide IBPB always command line options
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (26 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
@ 2018-11-25 18:33 ` Thomas Gleixner
  2018-11-28 14:36   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-26 13:37 ` [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Ingo Molnar
                   ` (2 subsequent siblings)
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 18:33 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Provide-IBPB-always-option.patch --]
[-- Type: text/plain, Size: 4493 bytes --]

Provide the possibility to enable IBPB always in combination with 'prctl'
and 'seccomp'.

Add the extra command line options and rework the IBPB selection to
evaluate the command instead of the mode selected by the STIBP switch case.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |   12 ++++++++
 arch/x86/kernel/cpu/bugs.c                      |   34 ++++++++++++++++--------
 2 files changed, 35 insertions(+), 11 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4241,11 +4241,23 @@
 				  per thread.  The mitigation control state
 				  is inherited on fork.
 
+			prctl,ibpb
+				- Like "prctl" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different user
+				  space processes.
+
 			seccomp
 				- Same as "prctl" above, but all seccomp
 				  threads will enable the mitigation unless
 				  they explicitly opt out.
 
+			seccomp,ibpb
+				- Like "seccomp" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different
+				  user space processes.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
 
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -256,7 +256,9 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
 	SPECTRE_V2_USER_CMD_PRCTL,
+	SPECTRE_V2_USER_CMD_PRCTL_IBPB,
 	SPECTRE_V2_USER_CMD_SECCOMP,
+	SPECTRE_V2_USER_CMD_SECCOMP_IBPB,
 };
 
 static const char * const spectre_v2_user_strings[] = {
@@ -271,11 +273,13 @@ static const struct {
 	enum spectre_v2_user_cmd	cmd;
 	bool				secure;
 } v2_user_options[] __initdata = {
-	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
-	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
-	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
-	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
-	{ "seccomp",	SPECTRE_V2_USER_CMD_SECCOMP,	false },
+	{ "auto",		SPECTRE_V2_USER_CMD_AUTO,		false },
+	{ "off",		SPECTRE_V2_USER_CMD_NONE,		false },
+	{ "on",			SPECTRE_V2_USER_CMD_FORCE,		true  },
+	{ "prctl",		SPECTRE_V2_USER_CMD_PRCTL,		false },
+	{ "prctl,ibpb",		SPECTRE_V2_USER_CMD_PRCTL_IBPB,		false },
+	{ "seccomp",		SPECTRE_V2_USER_CMD_SECCOMP,		false },
+	{ "seccomp,ibpb",	SPECTRE_V2_USER_CMD_SECCOMP_IBPB,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -321,6 +325,7 @@ spectre_v2_user_select_mitigation(enum s
 {
 	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
 	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+	enum spectre_v2_user_cmd cmd;
 
 	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
 		return;
@@ -329,17 +334,20 @@ spectre_v2_user_select_mitigation(enum s
 	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
 		smt_possible = false;
 
-	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
+	cmd = spectre_v2_parse_user_cmdline(v2_cmd);
+	switch (cmd) {
 	case SPECTRE_V2_USER_CMD_NONE:
 		goto set_mode;
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_PRCTL:
+	case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
 		mode = SPECTRE_V2_USER_PRCTL;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_SECCOMP:
+	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 		if (IS_ENABLED(CONFIG_SECCOMP))
 			mode = SPECTRE_V2_USER_SECCOMP;
 		else
@@ -351,12 +359,15 @@ spectre_v2_user_select_mitigation(enum s
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
 
-		switch (mode) {
-		case SPECTRE_V2_USER_STRICT:
+		switch (cmd) {
+		case SPECTRE_V2_USER_CMD_FORCE:
+		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
+		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
-		case SPECTRE_V2_USER_PRCTL:
-		case SPECTRE_V2_USER_SECCOMP:
+		case SPECTRE_V2_USER_CMD_PRCTL:
+		case SPECTRE_V2_USER_CMD_AUTO:
+		case SPECTRE_V2_USER_CMD_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
 		default:
@@ -364,7 +375,8 @@ spectre_v2_user_select_mitigation(enum s
 		}
 
 		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
-			mode == SPECTRE_V2_USER_STRICT ? "always-on" : "conditional");
+			static_key_enabled(&switch_mm_always_ibpb) ?
+			"always-on" : "conditional");
 	}
 
 	/* If enhanced IBRS is enabled no STIBP required */



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
@ 2018-11-25 19:11   ` Thomas Gleixner
  2018-11-25 20:53   ` Andi Kleen
  2018-11-28 14:31   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2 siblings, 0 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 19:11 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, 25 Nov 2018, Thomas Gleixner wrote:
>  /*
> + * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
> + * stored in cpu_tlb_state.last_user_mm_ibpb.
> + */
> +#define LAST_USER_MM_IBPB	0x1UL
> +
> +/*
> +	unsigned long next_tif = task_thread_info(next)->flags;
> +	unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USR_MM_IBPB;

That wants to be LAST_USER_ of course. That's what you get for last minute
changes...
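
The mangling itself is straightforward; a standalone sketch of the
scheme (the TIF bit position is illustrative, not taken from the
kernel headers):

  #include <stdio.h>
  #include <stdlib.h>

  #define LAST_USER_MM_IBPB       0x1UL
  #define TIF_SPEC_IB             9       /* illustrative bit position */

  /* Mangle the TIF_SPEC_IB state into the mm pointer, as stored in
     cpu_tlb_state.last_user_mm_ibpb */
  static unsigned long mm_mangle_tif_spec_ib(unsigned long tif, void *mm)
  {
          unsigned long ibpb = (tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;

          return (unsigned long)mm | ibpb;
  }

  int main(void)
  {
          void *mm = malloc(64);  /* stands in for an mm_struct */
          unsigned long last = mm_mangle_tif_spec_ib(0, mm);
          unsigned long next = mm_mangle_tif_spec_ib(1UL << TIF_SPEC_IB, mm);

          /* Same mm, IB flag flipped: the values differ, so the switch
             path would issue an IBPB */
          printf("ibpb needed: %d\n", last != next);
          free(mm);
          return 0;
  }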


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
@ 2018-11-25 19:35   ` Randy Dunlap
  2018-11-25 20:40   ` Linus Torvalds
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 112+ messages in thread
From: Randy Dunlap @ 2018-11-25 19:35 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

Hi,

Can you alter this without publishing a v3?
(see below)

On 11/25/18 10:33 AM, Thomas Gleixner wrote:
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4241,9 +4241,16 @@
>  				  per thread.  The mitigation control state
>  				  is inherited on fork.
>  
> +			seccomp
> +				- Same as "prctl" above, but all seccomp
> +				  threads will enable the mitigation unless
> +				  they explicitly opt out.
> +
>  			auto    - Kernel selects the mitigation depending on
>  				  the available CPU features and vulnerability.
> -				  Default is prctl.
> +
> +			Default mitigation:
> +			If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"

			If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl".

>  
>  			Not specifying this option is equivalent to
>  			spectre_v2_user=auto.

g'day.
-- 
~Randy

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
  2018-11-25 19:35   ` Randy Dunlap
@ 2018-11-25 20:40   ` Linus Torvalds
  2018-11-25 20:52     ` Jiri Kosina
                       ` (2 more replies)
  2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-12-04 18:45   ` [patch V2 27/28] " Dave Hansen
  3 siblings, 3 replies; 112+ messages in thread
From: Linus Torvalds @ 2018-11-25 20:40 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Tim Chen,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook

[ You forgot to fix your quilt setup.. ]

On Sun, 25 Nov 2018, Thomas Gleixner wrote:
>
> The mitigation guide documents how STIBP works:
>
>    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
>    prevents the predicted targets of indirect branches on any logical
>    processor of that core from being controlled by software that executes
>    (or executed previously) on another logical processor of the same core.

Can we please just fix this stupid lie?

Yes, Intel calls it "STIBP" and tries to make it out to be about the
indirect branch predictor being per-SMT thread.

But the reason it is unacceptable is apparently because in reality it just
disables indirect branch prediction entirely. So yes, *technically* it's
true that that limits indirect branch prediction to just a single SMT
core, but in reality it is just a "go really slow" mode.

If STIBP had actually just keyed off the logical SMT thread, we wouldn't
need to have worried about it in the first place.

So let's document reality rather than Intel's Pollyanna world-view.

Reality matters. It's why we had to go through all this. Lying about things
and making it appear like it's not a big deal was why the original
patch made it through without people noticing.

           Linus

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 20:40   ` Linus Torvalds
@ 2018-11-25 20:52     ` Jiri Kosina
  2018-11-25 22:28     ` Thomas Gleixner
  2018-12-04  1:38     ` Tim Chen
  2 siblings, 0 replies; 112+ messages in thread
From: Jiri Kosina @ 2018-11-25 20:52 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, dave.hansen,
	Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan, jcm,
	longman9394, Greg KH, david.c.stewart, Kees Cook

On Sun, 25 Nov 2018, Linus Torvalds wrote:

> > The mitigation guide documents how STIBP works:
> >
> >    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
> >    prevents the predicted targets of indirect branches on any logical
> >    processor of that core from being controlled by software that executes
> >    (or executed previously) on another logical processor of the same core.
> 
> Can we please just fix this stupid lie?
> 
> Yes, Intel calls it "STIBP" and tries to make it out to be about the
> indirect branch predictor being per-SMT thread.
> 
> But the reason it is unacceptable is apparently because in reality it just
> disables indirect branch prediction entirely. So yes, *technically* it's
> true that that limits indirect branch prediction to just a single SMT
> core, but in reality it is just a "go really slow" mode.
> 
> If STIBP had actually just keyed off the logical SMT thread, we wouldn't
> need to have worried about it in the first place.
> 
> So let's document reality rather than Intel's Pollyanna world-view.
> 
> Reality matters. It's why we had to go through all this. Lying about things
> and making it appear like it's not a big deal was why the original
> patch made it through without people noticing.

Yeah, exactly; the documentation doesn't discourage STIBP use (well, the 
AMD one now actually does).

I am all in favor of documenting the truth rather than the documented 
behavior, but I guess without having a word from CPU folks, explaining how 
exactly this is implemented in reality, we can just guess based on 
observed symptoms (which is what we'll do anyway I guess if we don't get 
any better / more accurate wording).

Arjan, Tim, would you have a wording handy that would be guaranteed to 
describe the reality for the sake of changelog?

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
  2018-11-25 19:11   ` Thomas Gleixner
@ 2018-11-25 20:53   ` Andi Kleen
  2018-11-25 22:20     ` Thomas Gleixner
  2018-11-28 14:31   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2 siblings, 1 reply; 112+ messages in thread
From: Andi Kleen @ 2018-11-25 20:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

> The current check whether two tasks belong to the same context is using the
> tasks' context id. While correct, it's simpler to use the mm pointer because
> it allows mangling the TIF_SPEC_IB bit into it. The context id based
> mechanism requires extra storage, which creates worse code.

[We tried similar in some really early versions, but it was replaced
with the context id later.]

One issue with using the pointer is that the pointer can be reused
when the original mm_struct is freed, and then gets reallocated
immediately to an attacker. Then the attacker may avoid the IBPB.

Granted, it's probably hard to generate any reasonable leak bandwidth with
such a complex scenario, but it still seemed better to close the hole.

Because of those concerns, the counter ID was used instead.

The ID can wrap too, but since it's 64-bit, that takes very long: even at
10^9 context switches per second, 2^64 increments take more than 500 years.

-Andi


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 20:53   ` Andi Kleen
@ 2018-11-25 22:20     ` Thomas Gleixner
  2018-11-25 23:04       ` Andy Lutomirski
  2018-11-26  3:07       ` Andi Kleen
  0 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 22:20 UTC (permalink / raw)
  To: Andi Kleen
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Sun, 25 Nov 2018, Andi Kleen wrote:

> > The current check whether two tasks belong to the same context is using the
> > tasks' context id. While correct, it's simpler to use the mm pointer because
> > it allows mangling the TIF_SPEC_IB bit into it. The context id based
> > mechanism requires extra storage, which creates worse code.
> 
> [We tried similar in some really early versions, but it was replaced
> with the context id later.]
> 
> One issue with using the pointer is that the pointer can be reused
> when the original mm_struct is freed, and then gets reallocated
> immediately to an attacker. Then the attacker may avoid the IBPB.
> 
> Granted, it's probably hard to generate any reasonable leak bandwidth with
> such a complex scenario, but it still seemed better to close the hole.

Sorry, but that's really a purely academic exercise. 

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 20:40   ` Linus Torvalds
  2018-11-25 20:52     ` Jiri Kosina
@ 2018-11-25 22:28     ` Thomas Gleixner
  2018-11-26 13:30       ` Ingo Molnar
  2018-11-26 20:48       ` Andrea Arcangeli
  2018-12-04  1:38     ` Tim Chen
  2 siblings, 2 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-25 22:28 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Tim Chen,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook

On Sun, 25 Nov 2018, Linus Torvalds wrote:

> [ You forgot to fix your quilt setup.. ]

Duh. Should have pinned that package.

> On Sun, 25 Nov 2018, Thomas Gleixner wrote:
> >
> > The mitigation guide documents how STIBP works:
> >
> >    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
> >    prevents the predicted targets of indirect branches on any logical
> >    processor of that core from being controlled by software that executes
> >    (or executed previously) on another logical processor of the same core.
> 
> Can we please just fix this stupid lie?

Well, it's not a lie. The above is correct, it just does not tell WHY this
works.

> Yes, Intel calls it "STIBP" and tries to make it out to be about the
> indirect branch predictor being per-SMT thread.
> 
> But the reason it is unacceptable is apparently because in reality it just
> disables indirect branch prediction entirely. So yes, *technically* it's
> true that that limits indirect branch prediction to just a single SMT
> core, but in reality it is just a "go really slow" mode.

Indeed. Just checked the documentation again, it's also not clear whether
IBPB is required if STIBP is in use.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 22:20     ` Thomas Gleixner
@ 2018-11-25 23:04       ` Andy Lutomirski
  2018-11-26  7:10         ` Thomas Gleixner
  2018-11-26  3:07       ` Andi Kleen
  1 sibling, 1 reply; 112+ messages in thread
From: Andy Lutomirski @ 2018-11-25 23:04 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Andi Kleen, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Tim Chen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook



> On Nov 25, 2018, at 2:20 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> On Sun, 25 Nov 2018, Andi Kleen wrote:
> 
>>> The current check whether two tasks belong to the same context is using the
>>> tasks' context id. While correct, it's simpler to use the mm pointer because
>>> it allows mangling the TIF_SPEC_IB bit into it. The context id based
>>> mechanism requires extra storage, which creates worse code.
>> 
>> [We tried similar in some really early versions, but it was replaced
>> with the context id later.]
>> 
>> One issue with using the pointer is that the pointer can be reused
>> when the original mm_struct is freed, and then gets reallocated
>> immediately to an attacker. Then the attacker may avoid the IBPB.
>> 
>> Granted, it's probably hard to generate any reasonable leak bandwidth with
>> such a complex scenario, but it still seemed better to close the hole.
> 
> Sorry, but that's really a purely academic exercise. 
> 
> 

I would guess that it’s actually very easy to force mm_struct* reuse.  Don’t the various allocators try to allocate hot memory?  There’s nothing hotter than a just-freed allocation of the same size.

Can someone explain the actual problem with ctx_id?  If you just need an extra bit, how about:

2*ctx_id vs 2*ctx_id+1

Or any of the many variants of approximately the same thing?
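
A standalone sketch of that encoding (names are illustrative):

  #include <stdint.h>
  #include <stdio.h>

  /* Fold the IB state into the low bit of the 64-bit context id
     instead of the mm pointer */
  static uint64_t ibpb_key(uint64_t ctx_id, int spec_ib)
  {
          return (ctx_id << 1) | (spec_ib & 1);   /* 2*ctx_id or 2*ctx_id+1 */
  }

  int main(void)
  {
          /* Same context id, different IB state -> the keys differ */
          printf("%d\n", ibpb_key(42, 0) != ibpb_key(42, 1));
          return 0;
  }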

—Andy

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 22:20     ` Thomas Gleixner
  2018-11-25 23:04       ` Andy Lutomirski
@ 2018-11-26  3:07       ` Andi Kleen
  2018-11-26  6:50         ` Thomas Gleixner
  1 sibling, 1 reply; 112+ messages in thread
From: Andi Kleen @ 2018-11-26  3:07 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 11:20:50PM +0100, Thomas Gleixner wrote:
> On Sun, 25 Nov 2018, Andi Kleen wrote:
> 
> > > The current check whether two tasks belong to the same context is using the
> > > tasks' context id. While correct, it's simpler to use the mm pointer because
> > > it allows mangling the TIF_SPEC_IB bit into it. The context id based
> > > mechanism requires extra storage, which creates worse code.
> > 
> > [We tried similar in some really early versions, but it was replaced
> > with the context id later.]
> > 
> > One issue with using the pointer is that the pointer can be reused
> > when the original mm_struct is freed, and then gets reallocated
> > immediately to an attacker. Then the attacker may avoid the IBPB.
> > 
> > Granted, it's probably hard to generate any reasonable leak bandwidth with
> > such a complex scenario, but it still seemed better to close the hole.
> 
> Sorry, but that's really a purely academic exercise. 

Ok fair enough. I guess it's acceptable if you add a comment explaining it.

-Andi

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-26  3:07       ` Andi Kleen
@ 2018-11-26  6:50         ` Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-26  6:50 UTC (permalink / raw)
  To: Andi Kleen
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Sun, 25 Nov 2018, Andi Kleen wrote:
> On Sun, Nov 25, 2018 at 11:20:50PM +0100, Thomas Gleixner wrote:
> > On Sun, 25 Nov 2018, Andi Kleen wrote:
> > 
> > > > The current check whether two tasks belong to the same context is using the
> > > > tasks' context id. While correct, it's simpler to use the mm pointer because
> > > > it allows mangling the TIF_SPEC_IB bit into it. The context id based
> > > > mechanism requires extra storage, which creates worse code.
> > > 
> > > [We tried similar in some really early versions, but it was replaced
> > > with the context id later.]
> > > 
> > > One issue with using the pointer is that the pointer can be reused
> > > when the original mm_struct is freed, and then gets reallocated
> > > immediately to an attacker. Then the attacker may avoid the IBPB.
> > > 
> > > Granted, it's probably hard to generate any reasonable leak bandwidth with
> > > such a complex scenario, but it still seemed better to close the hole.
> > 
> > Sorry, but that's really a purely academic exercise. 
> 
> Ok fair enough. I guess it's acceptable if you add a comment explaining it.

Will do.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 23:04       ` Andy Lutomirski
@ 2018-11-26  7:10         ` Thomas Gleixner
  2018-11-26 13:36           ` Ingo Molnar
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-26  7:10 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Andi Kleen, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Tim Chen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: Type: text/plain, Size: 2064 bytes --]

On Sun, 25 Nov 2018, Andy Lutomirski wrote:
> > On Nov 25, 2018, at 2:20 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > On Sun, 25 Nov 2018, Andi Kleen wrote:
> > 
> >>> The current check whether two tasks belong to the same context is using the
> >>> tasks' context id. While correct, it's simpler to use the mm pointer because
> >>> it allows mangling the TIF_SPEC_IB bit into it. The context id based
> >>> mechanism requires extra storage, which creates worse code.
> >> 
> >> [We tried similar in some really early versions, but it was replaced
> >> with the context id later.]
> >> 
> >> One issue with using the pointer is that the pointer can be reused
> >> when the original mm_struct is freed, and then gets reallocated
> >> immediately to an attacker. Then the attacker may avoid the IBPB.
> >> 
> >> Granted, it's probably hard to generate any reasonable leak bandwidth with
> >> such a complex scenario, but it still seemed better to close the hole.
> > 
> > Sorry, but that's really a purely academic exercise. 
> 
> I would guess that it’s actually very easy to force mm_struct* reuse.
> Don’t the various allocators try to allocate hot memory?  There’s nothing
> hotter than a just-freed allocation of the same size.

Sure, but this is about an indirect branch predictor attack against
something which reuses the mm.

So you'd need to pull off:

   P1 poisons branch predictor
   P1 exit

   P2 starts and reuses mm(P1) and uses the poisoned branch predictor

the only thing between P1 and P2 is either idle or some other kernel
thread, but no other user task. If that happens then the code would not
issue IBPB as it assumes it is switching back to the same process.

Even if you can pull that off, the speculation would hit the startup code of
P2, which is hardly a source of secret information. Creating a valuable
attack based on mm reuse is really less probable than a lottery jackpot.

So using mm is really good enough and results in better assembly code which
is surely more valuable than addressing some hypothetical hole.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user
  2018-11-25 18:33 ` [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user Thomas Gleixner
@ 2018-11-26  7:56   ` Dominik Brodowski
  2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: Dominik Brodowski @ 2018-11-26  7:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:54PM +0100, Thomas Gleixner wrote:
> +			prctl   - Indirect branch speculation is enabled,
> +				  but mitigation can be enabled via prctl

s/can be/is only/ or "must be".

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 22:28     ` Thomas Gleixner
@ 2018-11-26 13:30       ` Ingo Molnar
  2018-11-26 20:48       ` Andrea Arcangeli
  1 sibling, 0 replies; 112+ messages in thread
From: Ingo Molnar @ 2018-11-26 13:30 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, dave.hansen,
	Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan, jcm,
	longman9394, Greg KH, david.c.stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> On Sun, 25 Nov 2018, Linus Torvalds wrote:
> 
> > [ You forgot to fix your quilt setup.. ]
> 
> Duh. Should have pinned that package.
> 
> > On Sun, 25 Nov 2018, Thomas Gleixner wrote:
> > >
> > > The mitigation guide documents how STIBP works:
> > >
> > >    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
> > >    prevents the predicted targets of indirect branches on any logical
> > >    processor of that core from being controlled by software that executes
> > >    (or executed previously) on another logical processor of the same core.
> > 
> > Can we please just fix this stupid lie?
> 
> Well, it's not a lie. The above is correct, it just does not tell WHY this
> works.

Well, it's a "technically correct but misleading" phrase, which has much
more the effect of an actual "lie" than that of a true description.

I.e. in terms of what effects it's likely going to have on readers not 
aware of the underlying mechanics, it's much more correct to call it a 
"lie" than to call it "truth" - which I think is at the core of Linus's 
argument.

> > Yes, Intel calls it "STIBP" and tries to make it out to be about the 
> > indirect branch predictor being per-SMT thread.
> > 
> > But the reason it is unacceptable is apparently because in reality it 
> > just disables indirect branch prediction entirely. So yes, 
> > *technically* it's true that that limits indirect branch prediction 
> > to just a single SMT core, but in reality it is just a "go really 
> > slow" mode.
> 
> Indeed. Just checked the documentation again, it's also not clear 
> whether IBPB is required if STIBP is in use.

So I think we should clarify all this.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-26  7:10         ` Thomas Gleixner
@ 2018-11-26 13:36           ` Ingo Molnar
  0 siblings, 0 replies; 112+ messages in thread
From: Ingo Molnar @ 2018-11-26 13:36 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Andy Lutomirski, Andi Kleen, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Jiri Kosina, Tom Lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Tim Chen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, Waiman Long, Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> On Sun, 25 Nov 2018, Andy Lutomirski wrote:
> > > On Nov 25, 2018, at 2:20 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > > On Sun, 25 Nov 2018, Andi Kleen wrote:
> > > 
> > >>> The current check whether two tasks belong to the same context is using the
> > >>> tasks' context id. While correct, it's simpler to use the mm pointer because
> > >>> it allows mangling the TIF_SPEC_IB bit into it. The context id based
> > >>> mechanism requires extra storage, which creates worse code.
> > >> 
> > >> [We tried similar in some really early versions, but it was replaced
> > >> with the context id later.]
> > >> 
> > >> One issue with using the pointer is that the pointer can be reused
> > >> when the original mm_struct is freed, and then gets reallocated
> > >> immediately to an attacker. Then the attacker may avoid the IBPB.
> > >> 
> > >> Granted, it's probably hard to generate any reasonable leak bandwidth with
> > >> such a complex scenario, but it still seemed better to close the hole.
> > > 
> > > Sorry, but that's really a purely academic exercise. 
> > 
> > I would guess that it’s actually very easy to force mm_struct* reuse.
> > Don’t the various allocators try to allocate hot memory?  There’s nothing
> > hotter than a just-freed allocation of the same size.
> 
> Sure, but this is about an indirect branch predictor attack against
> something which reuses the mm.
> 
> So you'd need to pull off:
> 
>    P1 poisons branch predictor
>    P1 exit
> 
>    P2 starts and reuses mm(P1) and uses the poisoned branch predictor
> 
> the only thing between P1 and P2 is either idle or some other kernel
> thread, but no other user task. If that happens then the code would not
> issue IBPB as it assumes it is switching back to the same process.
> 
> Even if you can pull that off, the speculation would hit the startup code of
> P2, which is hardly a source of secret information. Creating a valuable
> attack based on mm reuse is really less probable than a lottery jackpot.
> 
> So using mm is really good enough and results in better assembly code which
> is surely more valuable than addressing some hypothetical hole.

OTOH we could probably close even this with very little cost if we added 
an IBPB to non-threaded fork() and vfork()+exec() paths? Those are really 
slow paths compared to all the context switch paths we are trying to 
optimize here.

Alternatively we could IBPB on the post-exit() final task struct freeing, 
which too is a relative slow path compared to the context switch paths.

But no strong opinion.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (27 preceding siblings ...)
  2018-11-25 18:33 ` [patch V2 28/28] x86/speculation: Provide IBPB always command line options Thomas Gleixner
@ 2018-11-26 13:37 ` Ingo Molnar
  2018-11-28 14:24 ` Thomas Gleixner
  2018-12-10 23:43 ` Pavel Machek
  30 siblings, 0 replies; 112+ messages in thread
From: Ingo Molnar @ 2018-11-26 13:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> Thats hopefully the final version of this. Changes since V1:
> 
>   - Renamed the command line option and related code to spectre_v2_user= as
>     suggested by Josh.
> 
>   - Thought more about the back to back optimization and finally left the
>     IBPB code in switch_mm().
> 
>     It still removes the ptrace check for the always IBPB case. That's
>     substantial overhead for dubious value now that the default is
>     conditional (prctl/seccomp) IBPB.
> 
>   - Added two options which allow conditional STIBP and IBPB always mode.
> 
>   - Addressed the review comments

With the typo review feedback outlined in the discussions:

Reviewed-by: Ingo Molnar <mingo@kernel.org>

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 04/28] x86/speculation: Reorganize cpu_show_common()
  2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
@ 2018-11-26 15:08   ` Borislav Petkov
  2018-11-28 14:22   ` [tip:x86/pti] x86/speculation: Move STIBP/IBPB string conditionals out of cpu_show_common() tip-bot for Tim Chen
  2018-11-29 14:29   ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Borislav Petkov @ 2018-11-26 15:08 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:32PM +0100, Thomas Gleixner wrote:
> The Spectre V2 printout in cpu_show_common() handles conditionals for the
> various mitigation methods directly in the sprintf() argument list. That's
> hard to read and will become unreadable if more complex decisions need to
> be made for a particular method.
> 
> Move the conditionals for STIBP and IBPB string selection into helper
> functions, so they can be extended later on.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |   20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)

Just a nitpick:

That subject should probably be more like:

"x86/speculation: Move cpu_show_common() string generation in separate functions"

or so, as it is not really reorganizing - just carving out functionality.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 06/28] x86/speculation: Rename SSBD update functions
  2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
@ 2018-11-26 15:24   ` Borislav Petkov
  2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  2018-11-29 14:37   ` [patch V2 06/28] " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Borislav Petkov @ 2018-11-26 15:24 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:34PM +0100, Thomas Gleixner wrote:
> During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
> to changes of the TIF_SSBD flag in the current and next running task.
> 
> Currently, only the bit controlling speculative store bypass disable in
> SPEC_CTRL MSR is updated and the related update functions all have
> "speculative_store" or "ssb" in their names.
> 
> For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
> updated as well, which makes the SSB names inadequate.
> 
> Rename the "speculative_store*" functions to a more generic name.
> 

And add:

"No functional changes."

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update
  2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
@ 2018-11-26 15:47   ` Borislav Petkov
  2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Tim Chen
  2018-11-29 14:41   ` [patch V2 07/28] " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Borislav Petkov @ 2018-11-26 15:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:35PM +0100, Thomas Gleixner wrote:
> The logic to detect whether there's a change in the previous and next
> task's flag relevant to update speculation control MSRs are spread out

s/are/is/

> across multiple functions.
> 
> Consolidate all checks needed for updating speculation control MSRs into
> the new __speculation_ctrl_update() helper function.
> 
> This makes it easy to pick the right speculation control MSR and the bits
> in the MSR that needs updating based on TIF flags changes.

s/needs/need/

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code
  2018-11-25 18:33 ` [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code Thomas Gleixner
@ 2018-11-26 18:30   ` Borislav Petkov
  2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: Borislav Petkov @ 2018-11-26 18:30 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:47PM +0100, Thomas Gleixner wrote:
> Move the conditional invocation of __switch_to_xtra() into an inline
> function so the logic can be shared between 32 and 64 bit.
> 
> Remove the handthrough of the TSS pointer and retrieve the pointer directly

s/handthrough/passing/

I guess.

> in the bitmap handling function. Use this_cpu_ptr() instead of the
> per_cpu() indirection.
>
> This is a preparatory change so integration of conditional indirect branch
> speculation optimization happens only in one place.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/switch_to.h |    3 ---
>  arch/x86/kernel/process.c        |   12 +++++++-----
>  arch/x86/kernel/process.h        |   24 ++++++++++++++++++++++++
>  arch/x86/kernel/process_32.c     |   10 +++-------
>  arch/x86/kernel/process_64.c     |   10 +++-------
>  5 files changed, 37 insertions(+), 22 deletions(-)

...

> --- /dev/null
> +++ b/arch/x86/kernel/process.h
> @@ -0,0 +1,24 @@
> +// SPDX-License-Identifier: GPL-2.0
> +//
> +// Code shared between 32 and 64 bit

checkpatch mutters here too:

WARNING: Missing or malformed SPDX-License-Identifier tag in line 1
#105: FILE: arch/x86/kernel/process.h:1:
+// SPDX-License-Identifier: GPL-2.0

I guess you want /* comments.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 22:28     ` Thomas Gleixner
  2018-11-26 13:30       ` Ingo Molnar
@ 2018-11-26 20:48       ` Andrea Arcangeli
  2018-11-26 20:58         ` Thomas Gleixner
  1 sibling, 1 reply; 112+ messages in thread
From: Andrea Arcangeli @ 2018-11-26 20:48 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, David Woodhouse,
	Tim Chen, Andi Kleen, dave.hansen, Casey Schaufler, Mallick,
	Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook

Hello,

On Sun, Nov 25, 2018 at 11:28:59PM +0100, Thomas Gleixner wrote:
> Indeed. Just checked the documentation again, it's also not clear whether
> IBPB is required if STIBP is in use.

I tried to ask this question too earlier:

https://lkml.kernel.org/r/20181119234528.GJ29258@redhat.com

If the BTB mistraining in SECCOMP context with STIBP set in SPEC_CTRL
can still influence the hyperthreading sibling after STIBP is cleared,
IBPB is needed before clearing STIBP. Otherwise it's not. Unless told
otherwise, it'd be safe to assume IBPB is needed in such case.

The SPEC_CTRL MSR specs seem to be a catch-all lowest common
denominator, so intuition or measurement of the exact behavior on one
CPU model doesn't necessarily give a result that can be applied to all
microcodes out there.

Thanks,
Andrea

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-26 20:48       ` Andrea Arcangeli
@ 2018-11-26 20:58         ` Thomas Gleixner
  2018-11-26 21:52           ` Lendacky, Thomas
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-26 20:58 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, David Woodhouse,
	Tim Chen, Andi Kleen, dave.hansen, Casey Schaufler, Mallick,
	Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook

On Mon, 26 Nov 2018, Andrea Arcangeli wrote:

> Hello,
> 
> On Sun, Nov 25, 2018 at 11:28:59PM +0100, Thomas Gleixner wrote:
> > Indeed. Just checked the documentation again, it's also not clear whether
> > IBPB is required if STIBP is in use.
> 
> I tried to ask this question too earlier:
> 
> https://lkml.kernel.org/r/20181119234528.GJ29258@redhat.com
> 
> If BTB mistraining done in SECCOMP context with STIBP set in SPEC_CTRL
> can still influence the hyperthreading sibling after STIBP is cleared,
> IBPB is needed before clearing STIBP. Otherwise it's not. Unless told
> otherwise, it'd be safe to assume IBPB is needed in such a case.

IBPB is still issued. I won't change that before we have clarification.

But I doubt it's necessary. STIBP seems to be a rather big hammer.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-26 20:58         ` Thomas Gleixner
@ 2018-11-26 21:52           ` Lendacky, Thomas
  2018-11-27  0:37             ` Tim Chen
  0 siblings, 1 reply; 112+ messages in thread
From: Lendacky, Thomas @ 2018-11-26 21:52 UTC (permalink / raw)
  To: Thomas Gleixner, Andrea Arcangeli
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, Josh Poimboeuf, David Woodhouse, Tim Chen,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook

On 11/26/2018 02:58 PM, Thomas Gleixner wrote:
> On Mon, 26 Nov 2018, Andrea Arcangeli wrote:
> 
>> Hello,
>>
>> On Sun, Nov 25, 2018 at 11:28:59PM +0100, Thomas Gleixner wrote:
>>> Indeed. Just checked the documentation again, it's also not clear whether
>>> IBPB is required if STIBP is in use.
>>
>> I tried to ask this question too earlier:
>>
>> https://lkml.kernel.org/r/20181119234528.GJ29258@redhat.com
>>
>> If BTB mistraining done in SECCOMP context with STIBP set in SPEC_CTRL
>> can still influence the hyperthreading sibling after STIBP is cleared,
>> IBPB is needed before clearing STIBP. Otherwise it's not. Unless told
>> otherwise, it'd be safe to assume IBPB is needed in such a case.
> 
> IBPB is still issued. I won't change that before we have clarification.

From an AMD standpoint, we recommend still issuing the IBPB.

> 
> But I doubt it's necessary. STIBP seems to be a rather big hammer.

For AMD parts that support STIBP, you will likely see differing levels
of performance impact.  AMD also has a CPUID bit (0x8000_0008_EBX[17])
that indicates STIBP always-on mode is preferred [1] over toggling the
MSR.
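
For reference, a quick user space check for that bit could look like this
(standalone sketch, not part of the series):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		return 1;

	/* EBX[17]: STIBP always-on mode preferred */
	printf("STIBP always-on preferred: %s\n",
	       (ebx & (1u << 17)) ? "yes" : "no");
	return 0;
}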

I was planning to do a follow-on patch set for that support after this
series is accepted, rather than ask that it be added to this series at
this time (unless folks would prefer that it be done now?).

Thanks,
Tom

[1] https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf

> 
> Thanks,
> 
> 	tglx
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 13/28] x86/speculation: Reorder the spec_v2 code
  2018-11-25 18:33 ` [patch V2 13/28] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
@ 2018-11-26 22:21   ` Borislav Petkov
  2018-11-28 14:27   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: Borislav Petkov @ 2018-11-26 22:21 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:41PM +0100, Thomas Gleixner wrote:
> Reorder the code so it is better grouped.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |  168 ++++++++++++++++++++++-----------------------
>  1 file changed, 84 insertions(+), 84 deletions(-)

...

>  static const struct {
>  	const char *option;
>  	enum spectre_v2_mitigation_cmd cmd;
>  	bool secure;
>  } mitigation_options[] = {

Yeah, now that we also have ssb_mitigation_options and v2_user_options,
this one should probably be called "spectre_v2_options" or something more
specific, to make it clearer which options the code is using...


> -	{ "off",               SPECTRE_V2_CMD_NONE,              false },
> -	{ "on",                SPECTRE_V2_CMD_FORCE,             true },
> -	{ "retpoline",         SPECTRE_V2_CMD_RETPOLINE,         false },
> -	{ "retpoline,amd",     SPECTRE_V2_CMD_RETPOLINE_AMD,     false },
> -	{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
> -	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
> +	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
> +	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
> +	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
> +	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
> +	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
> +	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
>  };
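
For illustration, the suggested rename would look like this (the array
name follows Boris' suggestion; the posted series keeps
mitigation_options):

static const struct {
	const char *option;
	enum spectre_v2_mitigation_cmd cmd;
	bool secure;
} spectre_v2_options[] = {
	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
};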

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-26 21:52           ` Lendacky, Thomas
@ 2018-11-27  0:37             ` Tim Chen
  0 siblings, 0 replies; 112+ messages in thread
From: Tim Chen @ 2018-11-27  0:37 UTC (permalink / raw)
  To: Lendacky, Thomas, Thomas Gleixner, Andrea Arcangeli
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, Josh Poimboeuf, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook

On 11/26/2018 01:52 PM, Lendacky, Thomas wrote:
> On 11/26/2018 02:58 PM, Thomas Gleixner wrote:
>> On Mon, 26 Nov 2018, Andrea Arcangeli wrote:
>>
>>> Hello,
>>>
>>> On Sun, Nov 25, 2018 at 11:28:59PM +0100, Thomas Gleixner wrote:
>>>> Indeed. Just checked the documentation again, it's also not clear whether
>>>> IBPB is required if STIBP is in use.
>>>
>>> I tried to ask this question too earlier:
>>>
>>> https://lkml.kernel.org/r/20181119234528.GJ29258@redhat.com
>>>
>>> If BTB mistraining done in SECCOMP context with STIBP set in SPEC_CTRL
>>> can still influence the hyperthreading sibling after STIBP is cleared,
>>> IBPB is needed before clearing STIBP. Otherwise it's not. Unless told
>>> otherwise, it'd be safe to assume IBPB is needed in such a case.
>>
>> IBPB is still issued. I won't change that before we have clarification.
> 
> From an AMD standpoint, we recommend still issuing the IBPB.
> 

Yes, our Intel HW architect also recommends still issuing the IBPB. We're
now getting approval for some additional explanations of STIBP, which
should help clarify things.

Thanks.

Tim



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-25 18:33 ` [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
@ 2018-11-27 17:25   ` Lendacky, Thomas
  2018-11-27 19:51     ` Tim Chen
  2018-11-27 20:39     ` Thomas Gleixner
  2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Tim Chen
  1 sibling, 2 replies; 112+ messages in thread
From: Lendacky, Thomas @ 2018-11-27 17:25 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> To avoid the overhead of STIBP always on, it's necessary to allow per task
> control of STIBP.
> 
> Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
> SMT is active and flag evaluation is enabled by the speculation control
> code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
> guest/host switch works properly.
> 
> This has no effect because TIF_SPEC_IB cannot be set yet and the static key
> which controls evaluation is off. Preparatory patch for adding the control
> code.
> 
> [ tglx: Simplify the context switch logic and make the TIF evaluation
>   	depend on SMP=y and on the static key controlling the conditional
>   	update. Rename it to TIF_SPEC_IB because it controls both STIBP and
>   	IBPB ]
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
> 
> v1 -> v2: Remove pointless include. Use consistent comments.
> 
> ---
>  arch/x86/include/asm/msr-index.h   |    5 +++--
>  arch/x86/include/asm/spec-ctrl.h   |   12 ++++++++++++
>  arch/x86/include/asm/thread_info.h |    5 ++++-
>  arch/x86/kernel/cpu/bugs.c         |    4 ++++
>  arch/x86/kernel/process.c          |   23 +++++++++++++++++++++--
>  5 files changed, 44 insertions(+), 5 deletions(-)
> 
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -41,9 +41,10 @@
>  
>  #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
>  #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
> -#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
> +#define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
> +#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
>  #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
> -#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
> +#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
>  
>  #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
>  #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(
>  	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
> +{
> +	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
> +	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
> +}
> +
>  static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
>  {
>  	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
>  	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
> +{
> +	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
> +	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
> +}
> +
>  static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
>  {
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -83,6 +83,7 @@ struct thread_info {
>  #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
>  #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
>  #define TIF_SECCOMP		8	/* secure computing */
> +#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
>  #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
>  #define TIF_UPROBE		12	/* breakpointed or singlestepping */
>  #define TIF_PATCH_PENDING	13	/* pending live patching update */
> @@ -110,6 +111,7 @@ struct thread_info {
>  #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
>  #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
>  #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
> +#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
>  #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
>  #define _TIF_UPROBE		(1 << TIF_UPROBE)
>  #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
> @@ -146,7 +148,8 @@ struct thread_info {
>  
>  /* flags to check in __switch_to() */
>  #define _TIF_WORK_CTXSW							\
> -	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
> +	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
> +	 _TIF_SSBD|_TIF_SPEC_IB)
>  
>  #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
>  #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
>  		    static_cpu_has(X86_FEATURE_AMD_SSBD))
>  			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
>  
> +		/* Conditional STIBP enabled? */
> +		if (static_branch_unlikely(&switch_to_cond_stibp))
> +			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
> +
>  		if (hostval != guestval) {
>  			msrval = setguest ? guestval : hostval;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
>  	if (static_cpu_has(X86_FEATURE_SSBD))
>  		msr |= ssbd_tif_to_spec_ctrl(tifn);

I did some quick testing and found my original logic was flawed. Since
spec_ctrl_update_msr() can now be called for STIBP, an additional check
is needed to set the SSBD MSR bit.

Both X86_FEATURE_VIRT_SSBD and X86_FEATURE_LS_CFG_SSBD cause
X86_FEATURE_SSBD to be set. Before this patch, spec_ctrl_update_msr() was
only called if X86_FEATURE_SSBD was set and neither of the other SSBD
features was set. But now, STIBP can cause spec_ctrl_update_msr() to get
called and cause the SSBD MSR bit to be set when it shouldn't (could
result in a GP fault).
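
Roughly, the failure mode looks like this (editorial sketch of the code
path, not a patch):

	u64 msr = x86_spec_ctrl_base;

	/*
	 * X86_FEATURE_SSBD is also set on parts that only have
	 * VIRT_SSBD or LS_CFG_SSBD, i.e. no SSBD bit in SPEC_CTRL.
	 * Once STIBP alone triggers this path, the check below ORs
	 * in a bit the MSR does not implement ...
	 */
	if (static_cpu_has(X86_FEATURE_SSBD))
		msr |= ssbd_tif_to_spec_ctrl(tifn);

	/* ... and the write can #GP on such parts. */
	wrmsrl(MSR_IA32_SPEC_CTRL, msr);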

Thanks,
Tom

>  
> +	/* Only evaluate if conditional STIBP is enabled */
> +	if (IS_ENABLED(CONFIG_SMP) &&
> +	    static_branch_unlikely(&switch_to_cond_stibp))
> +		msr |= stibp_tif_to_spec_ctrl(tifn);
> +
>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  }
>  
> @@ -418,10 +423,16 @@ static __always_inline void spec_ctrl_up
>  static __always_inline void __speculation_ctrl_update(unsigned long tifp,
>  						      unsigned long tifn)
>  {
> +	unsigned long tif_diff = tifp ^ tifn;
>  	bool updmsr = false;
>  
> -	/* If TIF_SSBD is different, select the proper mitigation method */
> -	if ((tifp ^ tifn) & _TIF_SSBD) {
> +	/*
> +	 * If TIF_SSBD is different, select the proper mitigation
> +	 * method. Note that if SSBD mitigation is disabled or permanently
> +	 * enabled this branch can't be taken because nothing can set
> +	 * TIF_SSBD.
> +	 */
> +	if (tif_diff & _TIF_SSBD) {
>  		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>  			amd_set_ssb_virt_state(tifn);
>  		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> @@ -430,6 +441,14 @@ static __always_inline void __speculatio
>  			updmsr  = true;
>  	}
>  
> +	/*
> +	 * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
> +	 * otherwise avoid the MSR write.
> +	 */
> +	if (IS_ENABLED(CONFIG_SMP) &&
> +	    static_branch_unlikely(&switch_to_cond_stibp))
> +		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
> +
>  	if (updmsr)
>  		spec_ctrl_update_msr(tifn);
>  }
> 
> 
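
As a standalone worked example of the bit arithmetic in the quoted
helpers (editorial; the values are taken from the patch):

#include <assert.h>

#define TIF_SPEC_IB		9
#define _TIF_SPEC_IB		(1UL << TIF_SPEC_IB)
#define SPEC_CTRL_STIBP_SHIFT	1
#define SPEC_CTRL_STIBP		(1UL << SPEC_CTRL_STIBP_SHIFT)

int main(void)
{
	unsigned long tifp = 0;			/* previous task: STIBP off */
	unsigned long tifn = _TIF_SPEC_IB;	/* next task: STIBP on */

	/* stibp_tif_to_spec_ctrl(): TIF bit 9 shifts down to bit 1 */
	assert(((tifn & _TIF_SPEC_IB) >>
		(TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT)) == SPEC_CTRL_STIBP);

	/* tif_diff = tifp ^ tifn exposes exactly the changed flags */
	assert(((tifp ^ tifn) & _TIF_SPEC_IB) != 0);
	return 0;
}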

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-27 17:25   ` Lendacky, Thomas
@ 2018-11-27 19:51     ` Tim Chen
  2018-11-28  9:39       ` Thomas Gleixner
  2018-11-27 20:39     ` Thomas Gleixner
  1 sibling, 1 reply; 112+ messages in thread
From: Tim Chen @ 2018-11-27 19:51 UTC (permalink / raw)
  To: Lendacky, Thomas, Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On 11/27/2018 09:25 AM, Lendacky, Thomas wrote:
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
>>  		    static_cpu_has(X86_FEATURE_AMD_SSBD))
>>  			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
>>  
>> +		/* Conditional STIBP enabled? */
>> +		if (static_branch_unlikely(&switch_to_cond_stibp))
>> +			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
>> +
>>  		if (hostval != guestval) {
>>  			msrval = setguest ? guestval : hostval;
>>  			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
>> --- a/arch/x86/kernel/process.c
>> +++ b/arch/x86/kernel/process.c
>> @@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
>>  	if (static_cpu_has(X86_FEATURE_SSBD))
>>  		msr |= ssbd_tif_to_spec_ctrl(tifn);
> 
> I did some quick testing and found my original logic was flawed. Since
> spec_ctrl_update_msr() can now be called for STIBP, an additional check
> is needed to set the SSBD MSR bit.
> 
> Both X86_FEATURE_VIRT_SSBD and X86_FEATURE_LS_CFG_SSBD cause
> X86_FEATURE_SSBD to be set. Before this patch, spec_ctrl_update_msr() was
> only called if X86_FEATURE_SSBD was set and neither of the other SSBD
> features was set. But now, STIBP can cause spec_ctrl_update_msr() to get
> called and cause the SSBD MSR bit to be set when it shouldn't (could
> result in a GP fault).
> 

I think it will be cleaner just to fold the msr update into
__speculation_ctrl_update to fix this issue.

Something like this perhaps.

Thanks.

Tim

---

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3f5e351..614ec51 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -398,25 +398,6 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
-{
-	u64 msr = x86_spec_ctrl_base;
-
-	/*
-	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
-	 * touched.
-	 */
-	if (static_cpu_has(X86_FEATURE_SSBD))
-		msr |= ssbd_tif_to_spec_ctrl(tifn);
-
-	/* Only evaluate if conditional STIBP is enabled */
-	if (IS_ENABLED(CONFIG_SMP) &&
-	    static_branch_unlikely(&switch_to_cond_stibp))
-		msr |= stibp_tif_to_spec_ctrl(tifn);
-
-	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
-}
-
 /*
  * Update the MSRs managing speculation control, during context switch.
  *
@@ -428,6 +409,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 {
 	unsigned long tif_diff = tifp ^ tifn;
 	bool updmsr = false;
+	u64 msr = x86_spec_ctrl_base;
 
 	/*
 	 * If TIF_SSBD is different, select the proper mitigation
@@ -440,8 +422,10 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 			amd_set_ssb_virt_state(tifn);
 		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 			amd_set_core_ssb_state(tifn);
-		else if (static_cpu_has(X86_FEATURE_SSBD))
+		else if (static_cpu_has(X86_FEATURE_SSBD)) {
 			updmsr  = true;
+			msr |= ssbd_tif_to_spec_ctrl(tifn);
+		}
 	}
 
 	/*
@@ -449,11 +433,13 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 	 * otherwise avoid the MSR write.
 	 */
 	if (IS_ENABLED(CONFIG_SMP) &&
-	    static_branch_unlikely(&switch_to_cond_stibp))
+	    static_branch_unlikely(&switch_to_cond_stibp)) {
 		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+	}
 
 	if (updmsr)
-		spec_ctrl_update_msr(tifn);
+		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
 void speculation_ctrl_update(unsigned long tif)

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-25 18:33 ` [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
@ 2018-11-27 20:18   ` Lendacky, Thomas
  2018-11-27 20:30     ` Thomas Gleixner
  2018-11-28 14:34   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 112+ messages in thread
From: Lendacky, Thomas @ 2018-11-27 20:18 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> The upcoming fine grained per task STIBP control needs to be updated on CPU
> hotplug as well.
> 
> Split out the code which controls the strict mode so the prctl control code
> can be added later. Mark the SMP function call argument __unused while at it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
> 
> v1 -> v2: s/app2app/user/. Mark smp function argument __unused
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |   46 ++++++++++++++++++++++++---------------------
>  1 file changed, 25 insertions(+), 21 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -530,40 +530,44 @@ static void __init spectre_v2_select_mit
>  	arch_smt_update();
>  }
>  
> -static bool stibp_needed(void)
> +static void update_stibp_msr(void * __unused)
>  {
> -	/* Enhanced IBRS makes using STIBP unnecessary. */
> -	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> -		return false;
> -
> -	/* Check for strict user mitigation mode */
> -	return spectre_v2_user == SPECTRE_V2_USER_STRICT;
> +	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  }
>  
> -static void update_stibp_msr(void *info)
> +/* Update x86_spec_ctrl_base in case SMT state changed. */
> +static void update_stibp_strict(void)
>  {
> -	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> +
> +	if (sched_smt_active())
> +		mask |= SPEC_CTRL_STIBP;
> +
> +	if (mask == x86_spec_ctrl_base)
> +		return;
> +
> +	pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
> +		mask & SPEC_CTRL_STIBP ? "always-on" : "off");
> +	x86_spec_ctrl_base = mask;
> +	on_each_cpu(update_stibp_msr, NULL, 1);

I did some more testing using spectre_v2_user=on and found that during
boot, once the first SMT thread is encountered, no more STIBP MSR updates
are done for any CPUs brought up after that. The first SMT thread causes
mask != x86_spec_ctrl_base, but then x86_spec_ctrl_base is set to mask and
the check always causes a return for subsequent CPUs that are brought up.

I talked to our HW folks, and they recommend that it be set on all
threads, so I'm not sure what the right approach would be for this.

Also, I've seen some BIOSes set up the cores/threads such that a core and
its thread are enumerated before the next core and its thread, etc. If
that were the case, I think this would result in only the first core and
its thread having STIBP set, right?

Thanks,
Tom

>  }
>  
>  void arch_smt_update(void)
>  {
> -	u64 mask;
> -
> -	if (!stibp_needed())
> +	/* Enhanced IBRS implies STIBP. No update required. */
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
>  		return;
>  
>  	mutex_lock(&spec_ctrl_mutex);
>  
> -	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> -	if (sched_smt_active())
> -		mask |= SPEC_CTRL_STIBP;
> -
> -	if (mask != x86_spec_ctrl_base) {
> -		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
> -			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
> -		x86_spec_ctrl_base = mask;
> -		on_each_cpu(update_stibp_msr, NULL, 1);
> +	switch (spectre_v2_user) {
> +	case SPECTRE_V2_USER_NONE:
> +		break;
> +	case SPECTRE_V2_USER_STRICT:
> +		update_stibp_strict();
> +		break;
>  	}
> +
>  	mutex_unlock(&spec_ctrl_mutex);
>  }
>  
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-27 20:18   ` Lendacky, Thomas
@ 2018-11-27 20:30     ` Thomas Gleixner
  2018-11-27 21:20       ` Lendacky, Thomas
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-27 20:30 UTC (permalink / raw)
  To: Lendacky, Thomas
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
> On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> > +/* Update x86_spec_ctrl_base in case SMT state changed. */
> > +static void update_stibp_strict(void)
> >  {
> > -	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> > +	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> > +
> > +	if (sched_smt_active())
> > +		mask |= SPEC_CTRL_STIBP;
> > +
> > +	if (mask == x86_spec_ctrl_base)
> > +		return;
> > +
> > +	pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
> > +		mask & SPEC_CTRL_STIBP ? "always-on" : "off");
> > +	x86_spec_ctrl_base = mask;
> > +	on_each_cpu(update_stibp_msr, NULL, 1);
> 
> I did some more testing using spectre_v2_user=on and found that during
> boot, once the first SMT thread is encountered, no more STIBP MSR updates
> are done for any CPUs brought up after that. The first SMT thread causes
> mask != x86_spec_ctrl_base, but then x86_spec_ctrl_base is set to mask and
> the check always causes a return for subsequent CPUs that are brought up.

The above code merely handles the switch between SMT and non-SMT mode,
because in that case all other online CPUs need to be updated. After that,
each upcoming CPU calls x86_spec_ctrl_setup_ap(), which writes the MSR. So
it's all good.
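
Schematically, the two update paths are (editorial sketch; the function
name is the real one, the body here is simplified):

/*
 * Path 1: SMT state change. arch_smt_update() recomputes
 * x86_spec_ctrl_base and pushes it to all currently online CPUs.
 *
 * Path 2: CPU bringup. Each newly onlined CPU writes the already
 * updated base value itself and does not depend on path 1:
 */
void x86_spec_ctrl_setup_ap(void)	/* runs on the incoming CPU */
{
	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}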

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-27 17:25   ` Lendacky, Thomas
  2018-11-27 19:51     ` Tim Chen
@ 2018-11-27 20:39     ` Thomas Gleixner
  2018-11-27 20:42       ` Thomas Gleixner
  1 sibling, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-27 20:39 UTC (permalink / raw)
  To: Lendacky, Thomas
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
> On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> > --- a/arch/x86/kernel/process.c
> > +++ b/arch/x86/kernel/process.c
> > @@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
> >  	if (static_cpu_has(X86_FEATURE_SSBD))
> >  		msr |= ssbd_tif_to_spec_ctrl(tifn);
> 
> I did some quick testing and found my original logic was flawed. Since
> spec_ctrl_update_msr() can now be called for STIBP, an additional check
> is needed to set the SSBD MSR bit.
> 
> Both X86_FEATURE_VIRT_SSBD and X86_FEATURE_LS_CFG_SSBD cause
> X86_FEATURE_SSBD to be set. Before this patch, spec_ctrl_update_msr() was
> only called if X86_FEATURE_SSBD was set and neither of the other SSBD
> features was set. But now, STIBP can cause spec_ctrl_update_msr() to get
> called and cause the SSBD MSR bit to be set when it shouldn't (could
> result in a GP fault).

The below should fix that. We have the same logic in x86_virt_spec_ctrl()

Thanks,

	tglx

8<---------------
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -403,10 +403,11 @@ static __always_inline void spec_ctrl_up
 	u64 msr = x86_spec_ctrl_base;
 
 	/*
-	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
-	 * touched.
+	 * If SSBD is not controlled in MSR_SPEC_CTRL, the SSBD bit has not
+	 * to be touched.
 	 */
-	if (static_cpu_has(X86_FEATURE_SSBD))
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    static_cpu_has(X86_FEATURE_AMD_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	/* Only evaluate if conditional STIBP is enabled */

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-27 20:39     ` Thomas Gleixner
@ 2018-11-27 20:42       ` Thomas Gleixner
  2018-11-27 21:52         ` Lendacky, Thomas
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-27 20:42 UTC (permalink / raw)
  To: Lendacky, Thomas
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Thomas Gleixner wrote:
> On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
> > On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> > > --- a/arch/x86/kernel/process.c
> > > +++ b/arch/x86/kernel/process.c
> > > @@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
> > >  	if (static_cpu_has(X86_FEATURE_SSBD))
> > >  		msr |= ssbd_tif_to_spec_ctrl(tifn);
> > 
> > I did some quick testing and found my original logic was flawed. Since
> > spec_ctrl_update_msr() can now be called for STIBP, an additional check
> > is needed to set the SSBD MSR bit.
> > 
> > Both X86_FEATURE_VIRT_SSBD and X86_FEATURE_LS_CFG_SSBD cause
> > X86_FEATURE_SSBD to be set. Before this patch, spec_ctrl_update_msr() was
> > only called if X86_FEATURE_SSBD was set and neither of the other SSBD
> > features was set. But now, STIBP can cause spec_ctrl_update_msr() to get
> > called and cause the SSBD MSR bit to be set when it shouldn't (could
> > result in a GP fault).
> 
> The below should fix that. We have the same logic in x86_virt_spec_ctrl()

Actually it's incomplete. Full version below.

Thanks,

	tglx

8<-----------------
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -403,10 +403,11 @@ static __always_inline void spec_ctrl_up
 	u64 msr = x86_spec_ctrl_base;
 
 	/*
-	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
-	 * touched.
+	 * If SSBD is not controlled in MSR_SPEC_CTRL, the SSBD bit has not
+	 * to be touched.
 	 */
-	if (static_cpu_has(X86_FEATURE_SSBD))
+	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+	    static_cpu_has(X86_FEATURE_AMD_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	/* Only evaluate if conditional STIBP is enabled */
@@ -440,7 +441,8 @@ static __always_inline void __speculatio
 			amd_set_ssb_virt_state(tifn);
 		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 			amd_set_core_ssb_state(tifn);
-		else if (static_cpu_has(X86_FEATURE_SSBD))
+		else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+			 static_cpu_has(X86_FEATURE_AMD_SSBD))
 			updmsr  = true;
 	}
 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-27 20:30     ` Thomas Gleixner
@ 2018-11-27 21:20       ` Lendacky, Thomas
  0 siblings, 0 replies; 112+ messages in thread
From: Lendacky, Thomas @ 2018-11-27 21:20 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On 11/27/2018 02:30 PM, Thomas Gleixner wrote:
> On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
>> On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
>>> +/* Update x86_spec_ctrl_base in case SMT state changed. */
>>> +static void update_stibp_strict(void)
>>>  {
>>> -	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>>> +	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
>>> +
>>> +	if (sched_smt_active())
>>> +		mask |= SPEC_CTRL_STIBP;
>>> +
>>> +	if (mask == x86_spec_ctrl_base)
>>> +		return;
>>> +
>>> +	pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
>>> +		mask & SPEC_CTRL_STIBP ? "always-on" : "off");
>>> +	x86_spec_ctrl_base = mask;
>>> +	on_each_cpu(update_stibp_msr, NULL, 1);
>>
>> I did some more testing using spectre_v2_user=on and found that during
>> boot, once the first SMT thread is encountered, no more STIBP MSR updates
>> are done for any CPUs brought up after that. The first SMT thread causes
>> mask != x86_spec_ctrl_base, but then x86_spec_ctrl_base is set to mask and
>> the check always causes a return for subsequent CPUs that are brought up.
> 
> The above code merely handles the switch between SMT and non-SMT mode,
> because in that case all other online CPUs need to be updated. After that,
> each upcoming CPU calls x86_spec_ctrl_setup_ap(), which writes the MSR. So
> it's all good.

Yup, sorry for the noise. I was trying to test different scenarios using
some code hacks and put them in the wrong place, so I wasn't triggering
the WRMSR in x86_spec_ctrl_setup_ap().

Thanks,
Tom

> 
> Thanks,
> 
> 	tglx
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-27 20:42       ` Thomas Gleixner
@ 2018-11-27 21:52         ` Lendacky, Thomas
  0 siblings, 0 replies; 112+ messages in thread
From: Lendacky, Thomas @ 2018-11-27 21:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Tim Chen, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On 11/27/2018 02:42 PM, Thomas Gleixner wrote:
> On Tue, 27 Nov 2018, Thomas Gleixner wrote:
>> On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
>>> On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
>>>> --- a/arch/x86/kernel/process.c
>>>> +++ b/arch/x86/kernel/process.c
>>>> @@ -406,6 +406,11 @@ static __always_inline void spec_ctrl_up
>>>>  	if (static_cpu_has(X86_FEATURE_SSBD))
>>>>  		msr |= ssbd_tif_to_spec_ctrl(tifn);
>>>
>>> I did some quick testing and found my original logic was flawed. Since
>>> spec_ctrl_update_msr() can now be called for STIBP, an additional check
>>> is needed to set the SSBD MSR bit.
>>>
>>> Both X86_FEATURE_VIRT_SSBD and X86_FEATURE_LS_CFG_SSBD cause
>>> X86_FEATURE_SSBD to be set. Before this patch, spec_ctrl_update_msr() was
>>> only called if X86_FEATURE_SSBD was set and neither of the other SSBD
>>> features was set. But now, STIBP can cause spec_ctrl_update_msr() to get
>>> called and cause the SSBD MSR bit to be set when it shouldn't (could
>>> result in a GP fault).
>>
>> The below should fix that. We have the same logic in x86_virt_spec_ctrl()
> 
> Actually it's incomplete. Full version below.

Just one little nit on the comment below, otherwise works nicely.

Thanks,
Tom

> 
> Thanks,
> 
> 	tglx
> 
> 8<-----------------
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -403,10 +403,11 @@ static __always_inline void spec_ctrl_up
>  	u64 msr = x86_spec_ctrl_base;
>  
>  	/*
> -	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
> -	 * touched.
> +	 * If SSBD is not controlled in MSR_SPEC_CTRL, the SSBD bit has not

s/has not/is not/

> +	 * to be touched.
>  	 */
> -	if (static_cpu_has(X86_FEATURE_SSBD))
> +	if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
> +	    static_cpu_has(X86_FEATURE_AMD_SSBD))
>  		msr |= ssbd_tif_to_spec_ctrl(tifn);
>  
>  	/* Only evaluate if conditional STIBP is enabled */
> @@ -440,7 +441,8 @@ static __always_inline void __speculatio
>  			amd_set_ssb_virt_state(tifn);
>  		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>  			amd_set_core_ssb_state(tifn);
> -		else if (static_cpu_has(X86_FEATURE_SSBD))
> +		else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
> +			 static_cpu_has(X86_FEATURE_AMD_SSBD))
>  			updmsr  = true;
>  	}
>  
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-27 19:51     ` Tim Chen
@ 2018-11-28  9:39       ` Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-28  9:39 UTC (permalink / raw)
  To: Tim Chen
  Cc: Lendacky, Thomas, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Tim Chen wrote:
> I think it will be cleaner just to fold the msr update into
> __speculation_ctrl_update to fix this issue.

Yes, that looks nicer and avoids a couple of extra static_cpu_has()
evaluations. I'll fold it into the proper places.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Update the TIF_SSBD comment
  2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
@ 2018-11-28 14:20   ` tip-bot for Tim Chen
  2018-11-29 14:27   ` [patch V2 01/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:20 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: ak, tim.c.chen, dave.hansen, gregkh, keescook, longman9394,
	peterz, aarcange, jkosina, arjan, david.c.stewart,
	casey.schaufler, torvalds, hpa, jpoimboe, linux-kernel, jcm,
	dwmw, tglx, mingo, asit.k.mallick, luto, thomas.lendacky

Commit-ID:  8eb729b77faf83ac4c1f363a9ad68d042415f24c
Gitweb:     https://git.kernel.org/tip/8eb729b77faf83ac4c1f363a9ad68d042415f24c
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:29 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:04 +0100

x86/speculation: Update the TIF_SSBD comment

"Reduced Data Speculation" is an obsolete term. The correct new name is
"Speculative store bypass disable" - which is abbreviated into SSBD.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185003.593893901@linutronix.de


---
 arch/x86/include/asm/thread_info.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30a264f..523c69efc38a 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,7 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD			5	/* Reduced data speculation */
+#define TIF_SSBD		5	/* Speculative store bypass disable */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Clean up spectre_v2_parse_cmdline()
  2018-11-25 18:33 ` [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
@ 2018-11-28 14:20   ` tip-bot for Tim Chen
  2018-11-29 14:28   ` [patch V2 02/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:20 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: asit.k.mallick, dave.hansen, dwmw, tglx, jkosina, tim.c.chen,
	mingo, jcm, torvalds, ak, peterz, arjan, thomas.lendacky, gregkh,
	aarcange, linux-kernel, casey.schaufler, david.c.stewart,
	jpoimboe, longman9394, keescook, luto, hpa

Commit-ID:  24848509aa55eac39d524b587b051f4e86df3c12
Gitweb:     https://git.kernel.org/tip/24848509aa55eac39d524b587b051f4e86df3c12
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:30 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:04 +0100

x86/speculation: Clean up spectre_v2_parse_cmdline()

Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline()
to save an indentation level.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185003.688010903@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7f6d8159398e..839ab4103e89 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -276,22 +276,21 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
-	else {
-		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
-		if (ret < 0)
-			return SPECTRE_V2_CMD_AUTO;
 
-		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
-			if (!match_option(arg, ret, mitigation_options[i].option))
-				continue;
-			cmd = mitigation_options[i].cmd;
-			break;
-		}
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
 
-		if (i >= ARRAY_SIZE(mitigation_options)) {
-			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
-			return SPECTRE_V2_CMD_AUTO;
-		}
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_CMD_AUTO;
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  2018-11-25 18:33 ` [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
@ 2018-11-28 14:21   ` tip-bot for Tim Chen
  2018-11-29 14:28   ` [patch V2 03/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:21 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dave.hansen, torvalds, peterz, keescook, luto, casey.schaufler,
	jpoimboe, gregkh, mingo, longman9394, david.c.stewart,
	asit.k.mallick, jcm, thomas.lendacky, ak, tim.c.chen, jkosina,
	aarcange, dwmw, linux-kernel, hpa, arjan, tglx

Commit-ID:  b86bda0426853bfe8a3506c7d2a5b332760ae46b
Gitweb:     https://git.kernel.org/tip/b86bda0426853bfe8a3506c7d2a5b332760ae46b
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:31 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:05 +0100

x86/speculation: Remove unnecessary ret variable in cpu_show_common()

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185003.783903657@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 839ab4103e89..b52a48966e01 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -847,8 +847,6 @@ static ssize_t l1tf_show_state(char *buf)
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
-	int ret;
-
 	if (!boot_cpu_has_bug(bug))
 		return sprintf(buf, "Not affected\n");
 
@@ -866,13 +864,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
-		return ret;
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common()
  2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
  2018-11-26 15:08   ` Borislav Petkov
@ 2018-11-28 14:22   ` tip-bot for Tim Chen
  2018-11-29 14:29   ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:22 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: thomas.lendacky, dwmw, linux-kernel, mingo, david.c.stewart,
	luto, dave.hansen, gregkh, longman9394, jpoimboe,
	casey.schaufler, peterz, arjan, asit.k.mallick, torvalds, tglx,
	hpa, jkosina, aarcange, jcm, tim.c.chen, keescook, ak

Commit-ID:  a8f76ae41cd633ac00be1b3019b1eb4741be3828
Gitweb:     https://git.kernel.org/tip/a8f76ae41cd633ac00be1b3019b1eb4741be3828
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:32 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:05 +0100

x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common()

The Spectre V2 printout in cpu_show_common() handles conditionals for the
various mitigation methods directly in the sprintf() argument list. That's
hard to read and will become unreadable if more complex decisions need to
be made for a particular method.

Move the conditionals for STIBP and IBPB string selection into helper
functions, so they can be extended later on.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185003.874479208@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b52a48966e01..a1502bce9eb8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -844,6 +844,22 @@ static ssize_t l1tf_show_state(char *buf)
 }
 #endif
 
+static char *stibp_state(void)
+{
+	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
+		return ", STIBP";
+	else
+		return "";
+}
+
+static char *ibpb_state(void)
+{
+	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
+		return ", IBPB";
+	else
+		return "";
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -865,9 +881,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 
 	case X86_BUG_SPECTRE_V2:
 		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+			       ibpb_state(),
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       stibp_state(),
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-25 18:33 ` [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
@ 2018-11-28 14:22   ` tip-bot for Tim Chen
  2018-11-29 14:35   ` [patch V2 05/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:22 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, linux-kernel, thomas.lendacky, aarcange, hpa,
	asit.k.mallick, tglx, casey.schaufler, dwmw, longman9394, jcm,
	arjan, jkosina, gregkh, tim.c.chen, dave.hansen, jpoimboe, luto,
	peterz, ak, david.c.stewart, torvalds, keescook

Commit-ID:  34bce7c9690b1d897686aac89604ba7adc365556
Gitweb:     https://git.kernel.org/tip/34bce7c9690b1d897686aac89604ba7adc365556
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:33 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:05 +0100

x86/speculation: Disable STIBP when enhanced IBRS is in use

If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
user space exploits from the hyperthread sibling.

Disable STIBP when enhanced IBRS is used.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185003.966801480@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a1502bce9eb8..924cd06dd43b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -321,6 +321,10 @@ static bool stibp_needed(void)
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf)
 
 static char *stibp_state(void)
 {
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return "";
+
 	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
 		return ", STIBP";
 	else

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Rename SSBD update functions
  2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
  2018-11-26 15:24   ` Borislav Petkov
@ 2018-11-28 14:23   ` tip-bot for Thomas Gleixner
  2018-11-29 14:37   ` [patch V2 06/28] " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, casey.schaufler, arjan, longman9394, thomas.lendacky,
	dave.hansen, gregkh, jkosina, david.c.stewart, tim.c.chen,
	keescook, jpoimboe, jcm, asit.k.mallick, linux-kernel, aarcange,
	luto, dwmw, torvalds, ak, tglx, peterz, mingo

Commit-ID:  26c4d75b234040c11728a8acb796b3a85ba7507c
Gitweb:     https://git.kernel.org/tip/26c4d75b234040c11728a8acb796b3a85ba7507c
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:34 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:06 +0100

x86/speculation: Rename SSBD update functions

During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
to changes of the TIF_SSBD flag in the current and next running task.

Currently, only the bit controlling speculative store bypass disable in
SPEC_CTRL MSR is updated and the related update functions all have
"speculative_store" or "ssb" in their names.

For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
updated as well, which makes the SSB names inadequate.

Rename the "speculative_store*" functions to a more generic name. No
functional change.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.058866968@linutronix.de



---
 arch/x86/include/asm/spec-ctrl.h |  6 +++---
 arch/x86/kernel/cpu/bugs.c       |  4 ++--
 arch/x86/kernel/process.c        | 12 ++++++------
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index ae7c2c5cd7f0..8e2f8411c7a7 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -70,11 +70,11 @@ extern void speculative_store_bypass_ht_init(void);
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(unsigned long tif);
+extern void speculation_ctrl_update(unsigned long tif);
 
-static inline void speculative_store_bypass_update_current(void)
+static inline void speculation_ctrl_update_current(void)
 {
-	speculative_store_bypass_update(current_thread_info()->flags);
+	speculation_ctrl_update(current_thread_info()->flags);
 }
 
 #endif
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 924cd06dd43b..a723af0c4400 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -200,7 +200,7 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) :
 				 ssbd_spec_ctrl_to_tif(hostval);
 
-		speculative_store_bypass_update(tif);
+		speculation_ctrl_update(tif);
 	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
@@ -632,7 +632,7 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update_current();
+		speculation_ctrl_update_current();
 
 	return 0;
 }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index c93fcfdf1673..8aa49604f9ae 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,27 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifn)
 {
 	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 		amd_set_ssb_virt_state(tifn);
 	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
-		intel_set_ssb_state(tifn);
+		spec_ctrl_update_msr(tifn);
 }
 
-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(tif);
 	preempt_enable();
 }
 
@@ -452,7 +452,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+		__speculation_ctrl_update(tifn);
 }
 
 /*

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Reorganize speculation control MSRs update
  2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
  2018-11-26 15:47   ` Borislav Petkov
@ 2018-11-28 14:23   ` tip-bot for Tim Chen
  2018-11-29 14:41   ` [patch V2 07/28] " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, arjan, asit.k.mallick, keescook, ak, casey.schaufler,
	tim.c.chen, aarcange, longman9394, Thomas.Lendacky,
	thomas.lendacky, jpoimboe, luto, hpa, linux-kernel, dave.hansen,
	dwmw, torvalds, jkosina, tglx, jcm, gregkh, mingo,
	david.c.stewart

Commit-ID:  01daf56875ee0cd50ed496a09b20eb369b45dfa5
Gitweb:     https://git.kernel.org/tip/01daf56875ee0cd50ed496a09b20eb369b45dfa5
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:35 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:06 +0100

x86/speculation: Reorganize speculation control MSRs update

The logic to detect whether there's a change in the previous and next
task's flag relevant to update speculation control MSRs is spread out
across multiple functions.

Consolidate all checks needed for updating speculation control MSRs into
the new __speculation_ctrl_update() helper function.

This makes it easy to pick the right speculation control MSR and the bits
in MSR_IA32_SPEC_CTRL that need updating based on TIF flags changes.
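
The consolidated helper keys everything off (tifp ^ tifn): a bit set in
the XOR of the two flag words means that TIF bit differs between the
previous and the next task. A standalone sketch of the idiom, using an
illustrative bit position:

    #include <stdio.h>

    #define _TIF_SSBD (1UL << 5)            /* illustrative bit */

    int main(void)
    {
            unsigned long tifp = _TIF_SSBD; /* previous task: SSBD set */
            unsigned long tifn = 0;         /* next task: SSBD clear   */

            /* A set bit in the XOR means the flag changed across the
             * switch, so the mitigation MSR needs an update. */
            if ((tifp ^ tifn) & _TIF_SSBD)
                    printf("SSBD differs: update SPEC_CTRL\n");
            return 0;
    }

The same idiom explains the forced update in speculation_ctrl_update()
below: passing ~tif as the previous flags makes every bit differ, so all
relevant checks fire.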

Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.151077005@linutronix.de


---
 arch/x86/kernel/process.c | 46 +++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 8aa49604f9ae..70e9832379e1 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,40 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
-{
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
-
-	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
-}
+/*
+ * Update the MSRs managing speculation control, during context switch.
+ *
+ * tifp: Previous task's thread flags
+ * tifn: Next task's thread flags
+ */
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
+{
+	u64 msr = x86_spec_ctrl_base;
+	bool updmsr = false;
+
+	/* If TIF_SSBD is different, select the proper mitigation method */
+	if ((tifp ^ tifn) & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
+			amd_set_ssb_virt_state(tifn);
+		} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
+			amd_set_core_ssb_state(tifn);
+		} else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+			   static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+			msr |= ssbd_tif_to_spec_ctrl(tifn);
+			updmsr  = true;
+		}
+	}
 
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
-{
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
-		spec_ctrl_update_msr(tifn);
+	if (updmsr)
+		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
 void speculation_ctrl_update(unsigned long tif)
 {
+	/* Forced update. Make sure all relevant TIF flags are different */
 	preempt_disable();
-	__speculation_ctrl_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -451,8 +464,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculation_ctrl_update(tifn);
+	__speculation_ctrl_update(tifp, tifn);
 }
 
 /*

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] sched/smt: Make sched_smt_present track topology
  2018-11-25 18:33 ` [patch V2 08/28] sched/smt: Make sched_smt_present track topology Thomas Gleixner
@ 2018-11-28 14:24   ` tip-bot for Peter Zijlstra (Intel)
  2018-11-29 14:42   ` [patch V2 08/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Peter Zijlstra (Intel) @ 2018-11-28 14:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: asit.k.mallick, dwmw, keescook, gregkh, dave.hansen, luto,
	thomas.lendacky, tim.c.chen, hpa, jkosina, jpoimboe, mingo, ak,
	david.c.stewart, jcm, tglx, torvalds, peterz, longman9394, arjan,
	aarcange, linux-kernel, casey.schaufler

Commit-ID:  c5511d03ec090980732e929c318a7a6374b5550e
Gitweb:     https://git.kernel.org/tip/c5511d03ec090980732e929c318a7a6374b5550e
Author:     Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate: Sun, 25 Nov 2018 19:33:36 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:06 +0100

sched/smt: Make sched_smt_present track topology

Currently the 'sched_smt_present' static key is enabled when SMT topology is
observed at CPU bringup, but it is never disabled. However, there is demand
to also disable the key when the topology changes such that there is no SMT
present anymore.

Implement this by making the key count the number of cores that have SMT
enabled.

In particular, the SMT topology bits are set before interrupts are enabled
and similarly, are cleared after interrupts are disabled for the last time
and the CPU dies.
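
static_branch_inc()/static_branch_dec() turn the static key into a
reference count: the branch stays enabled while the count is non-zero.
A rough model in plain C, with a simple integer standing in for the
jump-label machinery:

    /* Sketch: sched_smt_present modelled as a counter. In the kernel
     * this is a static key patched into the instruction stream, not a
     * variable. */
    static int sched_smt_count;

    static void on_cpu_up(int siblings_online)
    {
            /* The second sibling coming up makes this core SMT-active. */
            if (siblings_online == 2)
                    sched_smt_count++;  /* static_branch_inc_cpuslocked() */
    }

    static void on_cpu_down(int siblings_online)
    {
            /* Dropping from two siblings to one ends SMT on this core. */
            if (siblings_online == 2)
                    sched_smt_count--;  /* static_branch_dec_cpuslocked() */
    }

    static int smt_active(void)
    {
            return sched_smt_count > 0;
    }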

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.246110444@linutronix.de


---
 kernel/sched/core.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 091e089063be..6fedf3a98581 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
 
 #ifdef CONFIG_SCHED_SMT
 	/*
-	 * The sched_smt_present static key needs to be evaluated on every
-	 * hotplug event because at boot time SMT might be disabled when
-	 * the number of booted CPUs is limited.
-	 *
-	 * If then later a sibling gets hotplugged, then the key would stay
-	 * off and SMT scheduling would never be functional.
+	 * When going up, increment the number of cores with SMT present.
 	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-		static_branch_enable_cpuslocked(&sched_smt_present);
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
 	set_cpu_active(cpu, true);
 
@@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cpu)
 	 */
 	synchronize_rcu_mult(call_rcu, call_rcu_sched);
 
+#ifdef CONFIG_SCHED_SMT
+	/*
+	 * When going down, decrement the number of cores with SMT present.
+	 */
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_dec_cpuslocked(&sched_smt_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (28 preceding siblings ...)
  2018-11-26 13:37 ` [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Ingo Molnar
@ 2018-11-28 14:24 ` Thomas Gleixner
  2018-11-29 19:02   ` Tim Chen
  2018-12-10 23:43 ` Pavel Machek
  30 siblings, 1 reply; 112+ messages in thread
From: Thomas Gleixner @ 2018-11-28 14:24 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, 25 Nov 2018, Thomas Gleixner wrote:

> Thats hopefully the final version of this. Changes since V1:
> 
>   - Renamed the command line option and related code to spectre_v2_user= as
>     suggested by Josh.
> 
>   - Thought more about the back to back optimization and finally left the
>     IBPB code in switch_mm().
> 
>     It still removes the ptrace check for the always IBPB case. That's
>     substantial overhead for dubious value now that the default is
>     conditional (prctl/seccomp) IBPB.
> 
>   - Added two options which allow conditional STIBP and IBPB always mode.
> 
>   - Addressed the review comments
> 
> Documentation is still work in progress. Thanks Andi for providing the
> first draft for it.
> 
> Still based on tip.git x86/pti as it has been discussed to remove the
> minimal RETPOLINE bandaid from stable kernels as well.

I've integrated the latest review feedback and the change which plugs the
TIF async update issue and pushed all of it out to:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/pti

For the stable 4.14.y and 4.19.y trees, I've collected the missing bits and
pieces and uploaded tarballs which contain everything ready for consumption:

   https://tglx.de/~tglx/patches-spec-4.14.y.tar.xz

      sha256 of patches-spec-4.14.y.tar:
      3d2976ef06ab5556c1c6cba975b0c9390eb57f43c506fb7f8834bb484feb9b17

   https://tglx.de/~tglx/patches-spec-4.19.y.tar.xz

      sha256 of patches-spec-4.19.y.tar:
      b7666cf378ad63810a17e98a471aae81a49738c552dbe912aea49de83f8145cc

Thanks everyone for review, discussion, testing ... !

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/Kconfig: Select SCHED_SMT if SMP enabled
  2018-11-25 18:33 ` [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
@ 2018-11-28 14:24   ` tip-bot for Thomas Gleixner
  2018-11-29 14:44   ` [patch V2 09/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: keescook, ak, tglx, linux-kernel, casey.schaufler,
	asit.k.mallick, jcm, mingo, dave.hansen, gregkh, luto, aarcange,
	jkosina, torvalds, longman9394, dwmw, hpa, david.c.stewart,
	arjan, jpoimboe, peterz, thomas.lendacky, tim.c.chen

Commit-ID:  dbe733642e01dd108f71436aaea7b328cb28fd87
Gitweb:     https://git.kernel.org/tip/dbe733642e01dd108f71436aaea7b328cb28fd87
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:37 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:07 +0100

x86/Kconfig: Select SCHED_SMT if SMP enabled

CONFIG_SCHED_SMT is enabled by all distros, so there is no real point in
making it configurable. The runtime overhead in the core scheduler code is
minimal because the actual SMT scheduling parts are conditional on a static
key.

This allows exposing the scheduler's SMT state static key to the
speculation control code. Alternatively the scheduler's static key could be
made always available when CONFIG_SMP is enabled, but that's just adding an
unused static key to every other architecture for nothing.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.337452245@linutronix.de


---
 arch/x86/Kconfig | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b5286ad2a982..8689e794a43c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1000,13 +1000,7 @@ config NR_CPUS
 	  to the kernel image.
 
 config SCHED_SMT
-	bool "SMT (Hyperthreading) scheduler support"
-	depends on SMP
-	---help---
-	  SMT scheduler support improves the CPU scheduler's decision making
-	  when dealing with Intel Pentium 4 chips with HyperThreading at a
-	  cost of slightly increased overhead in some places. If unsure say
-	  N here.
+	def_bool y if SMP
 
 config SCHED_MC
 	def_bool y

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] sched/smt: Expose sched_smt_present static key
  2018-11-25 18:33 ` [patch V2 10/28] sched/smt: Expose sched_smt_present static key Thomas Gleixner
@ 2018-11-28 14:25   ` tip-bot for Thomas Gleixner
  2018-11-29 14:44   ` [patch V2 10/28] " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, keescook, torvalds, asit.k.mallick, david.c.stewart,
	dave.hansen, ak, dwmw, gregkh, thomas.lendacky, tim.c.chen, luto,
	longman9394, casey.schaufler, linux-kernel, jpoimboe, peterz,
	jcm, jkosina, aarcange, arjan, hpa, mingo

Commit-ID:  321a874a7ef85655e93b3206d0f36b4a6097f948
Gitweb:     https://git.kernel.org/tip/321a874a7ef85655e93b3206d0f36b4a6097f948
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:38 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:07 +0100

sched/smt: Expose sched_smt_present static key

Make the scheduler's 'sched_smt_present' static key globally available, so
it can be used in the x86 speculation control code.

Provide a query function and a stub for the CONFIG_SMP=n case.
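
Typical usage then looks like the sketch below; the real call sites land
in later patches of this series, and mitigation_on()/mitigation_off()
are hypothetical placeholders:

    #include <linux/sched/smt.h>

    /* Hypothetical helpers, for illustration only. */
    static void mitigation_on(void)  { }
    static void mitigation_off(void) { }

    static void example_smt_dependent_update(void)
    {
            /* With CONFIG_SCHED_SMT=n the stub returns false, so the
             * compiler can drop the SMT branch entirely. */
            if (sched_smt_active())
                    mitigation_on();
            else
                    mitigation_off();
    }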

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.430168326@linutronix.de

---
 include/linux/sched/smt.h | 18 ++++++++++++++++++
 kernel/sched/sched.h      |  4 +---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
new file mode 100644
index 000000000000..c9e0be514110
--- /dev/null
+++ b/include/linux/sched/smt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_SMT_H
+#define _LINUX_SCHED_SMT_H
+
+#include <linux/static_key.h>
+
+#ifdef CONFIG_SCHED_SMT
+extern struct static_key_false sched_smt_present;
+
+static __always_inline bool sched_smt_active(void)
+{
+	return static_branch_likely(&sched_smt_present);
+}
+#else
+static inline bool sched_smt_active(void) { return false; }
+#endif
+
+#endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 618577fc9aa8..4e524ab589c9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -23,6 +23,7 @@
 #include <linux/sched/prio.h>
 #include <linux/sched/rt.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/smt.h>
 #include <linux/sched/stat.h>
 #include <linux/sched/sysctl.h>
 #include <linux/sched/task.h>
@@ -936,9 +937,6 @@ static inline int cpu_of(struct rq *rq)
 
 
 #ifdef CONFIG_SCHED_SMT
-
-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Rework SMT state change
  2018-11-25 18:33 ` [patch V2 11/28] x86/speculation: Rework SMT state change Thomas Gleixner
@ 2018-11-28 14:26   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, casey.schaufler, luto, dwmw, jpoimboe, torvalds, gregkh,
	tglx, asit.k.mallick, david.c.stewart, hpa, mingo, keescook,
	aarcange, dave.hansen, thomas.lendacky, tim.c.chen, arjan,
	linux-kernel, jkosina, longman9394, jcm, ak

Commit-ID:  a74cfffb03b73d41e08f84c2e5c87dec0ce3db9f
Gitweb:     https://git.kernel.org/tip/a74cfffb03b73d41e08f84c2e5c87dec0ce3db9f
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:39 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:07 +0100

x86/speculation: Rework SMT state change

arch_smt_update() is only called when the sysfs SMT control knob is
changed. This means that when SMT is enabled in the sysfs control knob, the
system is considered to have SMT active even if all siblings are offline.

To allow fine-grained control of the speculation mitigations, the actual SMT
state is more interesting than the fact that siblings could be enabled.

Rework the code, so arch_smt_update() is invoked from each individual CPU
hotplug function, and simplify the update function while at it.
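
arch_smt_update() relies on the __weak linkage pattern: generic code
provides an empty default and an architecture overrides it simply by
defining a strong symbol of the same name. A minimal sketch of the
pattern across two translation units:

    /* kernel/cpu.c (generic): weak default, does nothing. */
    void __weak arch_smt_update(void) { }

    /* arch/x86/kernel/cpu/bugs.c (x86): the strong definition wins at
     * link time; no registration or function pointer is needed. */
    void arch_smt_update(void)
    {
            /* re-evaluate STIBP based on sched_smt_active() ... */
    }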

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.521974984@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 11 +++++------
 include/linux/sched/smt.h  |  2 ++
 kernel/cpu.c               | 15 +++++++++------
 3 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a723af0c4400..5625b323ff32 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/smt.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -344,16 +345,14 @@ void arch_smt_update(void)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
-	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
 		mask |= SPEC_CTRL_STIBP;
-	else
-		mask &= ~SPEC_CTRL_STIBP;
 
 	if (mask != x86_spec_ctrl_base) {
 		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-				cpu_smt_control == CPU_SMT_ENABLED ?
-				"Enabling" : "Disabling");
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
 		x86_spec_ctrl_base = mask;
 		on_each_cpu(update_stibp_msr, NULL, 1);
 	}
diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
index c9e0be514110..59d3736c454c 100644
--- a/include/linux/sched/smt.h
+++ b/include/linux/sched/smt.h
@@ -15,4 +15,6 @@ static __always_inline bool sched_smt_active(void)
 static inline bool sched_smt_active(void) { return false; }
 #endif
 
+void arch_smt_update(void);
+
 #endif
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 3c7f3b4c453c..91d5c38eb7e5 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -10,6 +10,7 @@
 #include <linux/sched/signal.h>
 #include <linux/sched/hotplug.h>
 #include <linux/sched/task.h>
+#include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
 #include <linux/oom.h>
@@ -367,6 +368,12 @@ static void lockdep_release_cpus_lock(void)
 
 #endif	/* CONFIG_HOTPLUG_CPU */
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override this.
+ */
+void __weak arch_smt_update(void) { }
+
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 EXPORT_SYMBOL_GPL(cpu_smt_control);
@@ -1011,6 +1018,7 @@ out:
 	 * concurrent CPU hotplug via cpu_add_remove_lock.
 	 */
 	lockup_detector_cleanup();
+	arch_smt_update();
 	return ret;
 }
 
@@ -1139,6 +1147,7 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
 	ret = cpuhp_up_callbacks(cpu, st, target);
 out:
 	cpus_write_unlock();
+	arch_smt_update();
 	return ret;
 }
 
@@ -2055,12 +2064,6 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
-/*
- * Architectures that need SMT-specific errata handling during SMT hotplug
- * should override this.
- */
-void __weak arch_smt_update(void) { };
-
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
 	int cpu, ret = 0;

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/l1tf: Show actual SMT state
  2018-11-25 18:33 ` [patch V2 12/28] x86/l1tf: Show actual SMT state Thomas Gleixner
@ 2018-11-28 14:26   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: david.c.stewart, longman9394, casey.schaufler, jkosina, torvalds,
	tglx, peterz, dwmw, mingo, gregkh, hpa, asit.k.mallick, ak,
	arjan, keescook, luto, linux-kernel, aarcange, jpoimboe,
	dave.hansen, tim.c.chen, jcm, thomas.lendacky

Commit-ID:  130d6f946f6f2a972ee3ec8540b7243ab99abe97
Gitweb:     https://git.kernel.org/tip/130d6f946f6f2a972ee3ec8540b7243ab99abe97
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:40 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:08 +0100

x86/l1tf: Show actual SMT state

Use the now exposed real SMT state, not the SMT sysfs control knob
state. This reflects the state of the system when the mitigation status is
queried.

This does not change the warning in the VMX launch code. There the
dependency on the control knob makes sense because siblings could be
brought online anytime after launching the VM.
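
The two predicates answer different questions, which is why the launch
warning and the reporting code diverge. A compact sketch (hypothetical
helper name; the logic matches the diff below):

    /* cpu_smt_control == CPU_SMT_ENABLED : is SMT allowed by sysfs?
     * sched_smt_active()                 : is SMT in use right now?
     * Mitigation reporting wants the live state: */
    static const char *l1tf_smt_string(void)
    {
            return sched_smt_active() ? "vulnerable" : "disabled";
    }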

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.613357354@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5625b323ff32..2dc4ee2bedcb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -832,13 +832,14 @@ static ssize_t l1tf_show_state(char *buf)
 
 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
 	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
-	     cpu_smt_control == CPU_SMT_ENABLED))
+	     sched_smt_active())) {
 		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
 			       l1tf_vmx_states[l1tf_vmx_mitigation]);
+	}
 
 	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
 		       l1tf_vmx_states[l1tf_vmx_mitigation],
-		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
+		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 #else
 static ssize_t l1tf_show_state(char *buf)

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Reorder the spec_v2 code
  2018-11-25 18:33 ` [patch V2 13/28] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
  2018-11-26 22:21   ` Borislav Petkov
@ 2018-11-28 14:27   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: david.c.stewart, asit.k.mallick, tglx, ak, jkosina, jpoimboe,
	mingo, luto, tim.c.chen, casey.schaufler, jcm, aarcange,
	longman9394, dave.hansen, arjan, gregkh, dwmw, thomas.lendacky,
	linux-kernel, hpa, peterz, torvalds, keescook

Commit-ID:  15d6b7aab0793b2de8a05d8a828777dd24db424e
Gitweb:     https://git.kernel.org/tip/15d6b7aab0793b2de8a05d8a828777dd24db424e
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:41 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:08 +0100

x86/speculation: Reorder the spec_v2 code

Reorder the code so it is better grouped. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.707122879@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 168 ++++++++++++++++++++++-----------------------
 1 file changed, 84 insertions(+), 84 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2dc4ee2bedcb..c9542b9fb329 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -124,29 +124,6 @@ void __init check_bugs(void)
 #endif
 }
 
-/* The kernel command line selection */
-enum spectre_v2_mitigation_cmd {
-	SPECTRE_V2_CMD_NONE,
-	SPECTRE_V2_CMD_AUTO,
-	SPECTRE_V2_CMD_FORCE,
-	SPECTRE_V2_CMD_RETPOLINE,
-	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
-	SPECTRE_V2_CMD_RETPOLINE_AMD,
-};
-
-static const char *spectre_v2_strings[] = {
-	[SPECTRE_V2_NONE]			= "Vulnerable",
-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
-};
-
-#undef pr_fmt
-#define pr_fmt(fmt)     "Spectre V2 : " fmt
-
-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
-	SPECTRE_V2_NONE;
-
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -216,6 +193,12 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
+#undef pr_fmt
+#define pr_fmt(fmt)     "Spectre V2 : " fmt
+
+static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+	SPECTRE_V2_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,18 +220,6 @@ static inline const char *spectre_v2_module_string(void)
 static inline const char *spectre_v2_module_string(void) { return ""; }
 #endif
 
-static void __init spec2_print_if_insecure(const char *reason)
-{
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
 static inline bool match_option(const char *arg, int arglen, const char *opt)
 {
 	int len = strlen(opt);
@@ -256,24 +227,53 @@ static inline bool match_option(const char *arg, int arglen, const char *opt)
 	return len == arglen && !strncmp(arg, opt, len);
 }
 
+/* The kernel command line selection for spectre v2 */
+enum spectre_v2_mitigation_cmd {
+	SPECTRE_V2_CMD_NONE,
+	SPECTRE_V2_CMD_AUTO,
+	SPECTRE_V2_CMD_FORCE,
+	SPECTRE_V2_CMD_RETPOLINE,
+	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
+	SPECTRE_V2_CMD_RETPOLINE_AMD,
+};
+
+static const char *spectre_v2_strings[] = {
+	[SPECTRE_V2_NONE]			= "Vulnerable",
+	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+};
+
 static const struct {
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
 } mitigation_options[] = {
-	{ "off",               SPECTRE_V2_CMD_NONE,              false },
-	{ "on",                SPECTRE_V2_CMD_FORCE,             true },
-	{ "retpoline",         SPECTRE_V2_CMD_RETPOLINE,         false },
-	{ "retpoline,amd",     SPECTRE_V2_CMD_RETPOLINE_AMD,     false },
-	{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
-	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
+	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
+	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
+	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
+static void __init spec2_print_if_insecure(const char *reason)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
+static void __init spec2_print_if_secure(const char *reason)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
+	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 	char arg[20];
 	int ret, i;
-	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
@@ -317,48 +317,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 	return cmd;
 }
 
-static bool stibp_needed(void)
-{
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
-}
-
-static void update_stibp_msr(void *info)
-{
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-}
-
-void arch_smt_update(void)
-{
-	u64 mask;
-
-	if (!stibp_needed())
-		return;
-
-	mutex_lock(&spec_ctrl_mutex);
-
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
-	}
-	mutex_unlock(&spec_ctrl_mutex);
-}
-
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -462,6 +420,48 @@ specv2_set_mode:
 	arch_smt_update();
 }
 
+static bool stibp_needed(void)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_NONE)
+		return false;
+
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
+	if (!boot_cpu_has(X86_FEATURE_STIBP))
+		return false;
+
+	return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+	u64 mask;
+
+	if (!stibp_needed())
+		return;
+
+	mutex_lock(&spec_ctrl_mutex);
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask != x86_spec_ctrl_base) {
+		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
+		x86_spec_ctrl_base = mask;
+		on_each_cpu(update_stibp_msr, NULL, 1);
+	}
+	mutex_unlock(&spec_ctrl_mutex);
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Mark string arrays const correctly
  2018-11-25 18:33 ` [patch V2 14/28] x86/speculation: Mark string arrays const correctly Thomas Gleixner
@ 2018-11-28 14:27   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: longman9394, tglx, hpa, luto, jkosina, mingo, ak,
	casey.schaufler, keescook, peterz, gregkh, dave.hansen, arjan,
	dwmw, torvalds, thomas.lendacky, david.c.stewart, jpoimboe, jcm,
	linux-kernel, tim.c.chen, asit.k.mallick, aarcange

Commit-ID:  8770709f411763884535662744a3786a1806afd3
Gitweb:     https://git.kernel.org/tip/8770709f411763884535662744a3786a1806afd3
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:42 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:09 +0100

x86/speculation: Mark string arrays const correctly

checkpatch.pl muttered when reshuffling the code:
 WARNING: static const char * array should probably be static const char * const

Fix up all the string arrays.
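
The warning is about the second const: without it, the array slots
remain writable pointers even though the strings they reference are
read-only. A standalone illustration:

    static const char *mutable_ptrs[] = { "a", "b" };
    static const char * const fixed_ptrs[] = { "a", "b" };

    void demo(void)
    {
            /* Compiles: the pointer slots themselves are writable. */
            mutable_ptrs[0] = "oops";

            /* Does not compile: assignment of read-only location.
             * fixed_ptrs[0] = "oops";
             */
    }

With both consts the whole array can be placed in read-only memory.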

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.800018931@linutronix.de

---
 arch/x86/kernel/cpu/bugs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c9542b9fb329..4fcbccb16200 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -237,7 +237,7 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
-static const char *spectre_v2_strings[] = {
+static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
@@ -476,7 +476,7 @@ enum ssb_mitigation_cmd {
 	SPEC_STORE_BYPASS_CMD_SECCOMP,
 };
 
-static const char *ssb_strings[] = {
+static const char * const ssb_strings[] = {
 	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
 	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled",
 	[SPEC_STORE_BYPASS_PRCTL]	= "Mitigation: Speculative Store Bypass disabled via prctl",
@@ -816,7 +816,7 @@ early_param("l1tf", l1tf_cmdline);
 #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
 
 #if IS_ENABLED(CONFIG_KVM_INTEL)
-static const char *l1tf_vmx_states[] = {
+static const char * const l1tf_vmx_states[] = {
 	[VMENTER_L1D_FLUSH_AUTO]		= "auto",
 	[VMENTER_L1D_FLUSH_NEVER]		= "vulnerable",
 	[VMENTER_L1D_FLUSH_COND]		= "conditional cache flushes",

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Mark command line parser data __initdata
  2018-11-25 18:33 ` [patch V2 15/28] x86/speculation: Mark command line parser data __initdata Thomas Gleixner
@ 2018-11-28 14:28   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:28 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: aarcange, gregkh, peterz, tim.c.chen, asit.k.mallick,
	dave.hansen, luto, jkosina, tglx, arjan, longman9394,
	casey.schaufler, mingo, hpa, thomas.lendacky, jpoimboe, ak,
	david.c.stewart, torvalds, jcm, linux-kernel, keescook, dwmw

Commit-ID:  30ba72a990f5096ae08f284de17986461efcc408
Gitweb:     https://git.kernel.org/tip/30ba72a990f5096ae08f284de17986461efcc408
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:43 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:09 +0100

x86/speculation: Mark command line parser data __initdata

No point in keeping that around after init.
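
__initdata puts the tables into the .init.data section, which the
kernel frees once boot is complete; the command line is only parsed at
boot, so the tables are dead weight afterwards. A sketch of the idiom
with hypothetical names:

    #include <linux/init.h>

    /* Parser-only table: the memory backing .init.data is released
     * after boot, so nothing that runs later may reference it. */
    static const char * const example_options[] __initdata = {
            "auto", "on", "off",
    };

    static int __init example_parse(char *arg)
    {
            /* ... match arg against example_options[] at boot ... */
            return 0;
    }
    early_param("example", example_parse);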

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.893886356@linutronix.de

---
 arch/x86/kernel/cpu/bugs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4fcbccb16200..9279cbabe16e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -248,7 +248,7 @@ static const struct {
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
-} mitigation_options[] = {
+} mitigation_options[] __initdata = {
 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
@@ -486,7 +486,7 @@ static const char * const ssb_strings[] = {
 static const struct {
 	const char *option;
 	enum ssb_mitigation_cmd cmd;
-} ssb_mitigation_options[] = {
+} ssb_mitigation_options[]  __initdata = {
 	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },    /* Platform decides */
 	{ "on",		SPEC_STORE_BYPASS_CMD_ON },      /* Disable Speculative Store Bypass */
 	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },    /* Don't touch Speculative Store Bypass */

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Unify conditional spectre v2 print functions
  2018-11-25 18:33 ` [patch V2 16/28] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
@ 2018-11-28 14:29   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:29 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dave.hansen, linux-kernel, luto, longman9394, jpoimboe, peterz,
	keescook, david.c.stewart, hpa, torvalds, gregkh,
	thomas.lendacky, arjan, dwmw, ak, jkosina, mingo, jcm, tglx,
	aarcange, asit.k.mallick, tim.c.chen, casey.schaufler

Commit-ID:  495d470e9828500e0155027f230449ac5e29c025
Gitweb:     https://git.kernel.org/tip/495d470e9828500e0155027f230449ac5e29c025
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:44 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:09 +0100

x86/speculation: Unify conditional spectre v2 print functions

There is no point in having two functions and a conditional at the call
site.
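
The unified helper folds both cases into one comparison: the message is
printed exactly when the CPU's bug status disagrees with what the
selected option claims. A standalone sketch of the condition:

    #include <stdbool.h>
    #include <stdio.h>

    /* "secure" tags the chosen command line option. */
    static void print_cond(const char *reason, bool has_bug, bool secure)
    {
            if (has_bug != secure)
                    printf("%s selected on command line.\n", reason);
    }

    int main(void)
    {
            print_cond("off", true,  false); /* vulnerable, insecure option: prints */
            print_cond("on",  false, true);  /* fixed CPU, secure option: prints    */
            print_cond("on",  true,  true);  /* vulnerable, secure option: silent   */
            return 0;
    }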

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.986890749@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9279cbabe16e..4f5a6319dca6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -257,15 +257,9 @@ static const struct {
 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
-static void __init spec2_print_if_insecure(const char *reason)
+static void __init spec_v2_print_cond(const char *reason, bool secure)
 {
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
 		pr_info("%s selected on command line.\n", reason);
 }
 
@@ -309,11 +303,8 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 		return SPECTRE_V2_CMD_AUTO;
 	}
 
-	if (mitigation_options[i].secure)
-		spec2_print_if_secure(mitigation_options[i].option);
-	else
-		spec2_print_if_insecure(mitigation_options[i].option);
-
+	spec_v2_print_cond(mitigation_options[i].option,
+			   mitigation_options[i].secure);
 	return cmd;
 }
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Add command line control for indirect branch speculation
  2018-11-25 18:33 ` [patch V2 17/28] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
@ 2018-11-28 14:29   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:29 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: aarcange, longman9394, david.c.stewart, tim.c.chen,
	asit.k.mallick, arjan, hpa, gregkh, ak, casey.schaufler, dwmw,
	tglx, jpoimboe, keescook, mingo, jcm, peterz, torvalds, jkosina,
	luto, linux-kernel, thomas.lendacky, dave.hansen

Commit-ID:  fa1202ef224391b6f5b26cdd44cc50495e8fab54
Gitweb:     https://git.kernel.org/tip/fa1202ef224391b6f5b26cdd44cc50495e8fab54
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:45 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:10 +0100

x86/speculation: Add command line control for indirect branch speculation

Add command line control for user space indirect branch speculation
mitigations. The new option is: spectre_v2_user=

The initial options are:

    - on:    Unconditionally enabled
    - off:   Unconditionally disabled
    - auto:  Kernel selects mitigation (default off for now)

When the spectre_v2= command line argument is either 'on' or 'off', the
application-to-application control follows that state even if a
contradicting spectre_v2_user= argument is supplied.
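
The precedence is handled up front in the user option parser. A
condensed sketch of that logic (wrapper name invented for the sketch;
the full version appears in the diff below):

    static enum spectre_v2_user_cmd
    spectre_v2_user_policy(enum spectre_v2_mitigation_cmd v2_cmd)
    {
            /* The global spectre_v2= choice wins for the unconditional
             * cases; anything else falls back to spectre_v2_user=. */
            switch (v2_cmd) {
            case SPECTRE_V2_CMD_NONE:
                    return SPECTRE_V2_USER_CMD_NONE;
            case SPECTRE_V2_CMD_FORCE:
                    return SPECTRE_V2_USER_CMD_FORCE;
            default:
                    return SPECTRE_V2_USER_CMD_AUTO; /* parse cmdline */
            }
    }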

Originally-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.082720373@linutronix.de

---
 Documentation/admin-guide/kernel-parameters.txt |  32 +++++-
 arch/x86/include/asm/nospec-branch.h            |  10 ++
 arch/x86/kernel/cpu/bugs.c                      | 133 +++++++++++++++++++++---
 3 files changed, 156 insertions(+), 19 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 19f4423e70d9..b6e5b33b9d75 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4194,9 +4194,13 @@
 
 	spectre_v2=	[X86] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
+			The default operation protects the kernel from
+			user space attacks.
 
-			on   - unconditionally enable
-			off  - unconditionally disable
+			on   - unconditionally enable, implies
+			       spectre_v2_user=on
+			off  - unconditionally disable, implies
+			       spectre_v2_user=off
 			auto - kernel detects whether your CPU model is
 			       vulnerable
 
@@ -4206,6 +4210,12 @@
 			CONFIG_RETPOLINE configuration option, and the
 			compiler with which the kernel was built.
 
+			Selecting 'on' will also enable the mitigation
+			against user space to user space task attacks.
+
+			Selecting 'off' will disable both the kernel and
+			the user space protections.
+
 			Specific mitigations can also be selected manually:
 
 			retpoline	  - replace indirect branches
@@ -4215,6 +4225,24 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_user=
+			[X86] Control mitigation of Spectre variant 2
+		        (indirect branch speculation) vulnerability between
+		        user space tasks
+
+			on	- Unconditionally enable mitigations. Is
+				  enforced by spectre_v2=on
+
+			off     - Unconditionally disable mitigations. Is
+				  enforced by spectre_v2=off
+
+			auto    - Kernel selects the mitigation depending on
+				  the available CPU features and vulnerability.
+				  Default is off.
+
+			Not specifying this option is equivalent to
+			spectre_v2_user=auto.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c202a64edd95..be0b0aa780e2 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,8 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
+
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -226,6 +228,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+/* The indirect branch speculation control variants */
+enum spectre_v2_user_mitigation {
+	SPECTRE_V2_USER_NONE,
+	SPECTRE_V2_USER_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -303,6 +311,8 @@ do {									\
 	preempt_enable();						\
 } while (0)
 
+DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 #endif /* __ASSEMBLY__ */
 
 /*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4f5a6319dca6..3a223cce1fac 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,9 @@ static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
 u64 __ro_after_init x86_amd_ls_cfg_base;
 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
 
+/* Control conditional STIBP in switch_to() */
+DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 void __init check_bugs(void)
 {
 	identify_boot_cpu();
@@ -199,6 +202,9 @@ static void x86_amd_ssb_disable(void)
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =
+	SPECTRE_V2_USER_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,6 +243,104 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_user_cmd {
+	SPECTRE_V2_USER_CMD_NONE,
+	SPECTRE_V2_USER_CMD_AUTO,
+	SPECTRE_V2_USER_CMD_FORCE,
+};
+
+static const char * const spectre_v2_user_strings[] = {
+	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
+	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
+};
+
+static const struct {
+	const char			*option;
+	enum spectre_v2_user_cmd	cmd;
+	bool				secure;
+} v2_user_options[] __initdata = {
+	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
+	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
+	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
+};
+
+static void __init spec_v2_user_print_cond(const char *reason, bool secure)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
+		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
+}
+
+static enum spectre_v2_user_cmd __init
+spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	char arg[20];
+	int ret, i;
+
+	switch (v2_cmd) {
+	case SPECTRE_V2_CMD_NONE:
+		return SPECTRE_V2_USER_CMD_NONE;
+	case SPECTRE_V2_CMD_FORCE:
+		return SPECTRE_V2_USER_CMD_FORCE;
+	default:
+		break;
+	}
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
+				  arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_USER_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
+		if (match_option(arg, ret, v2_user_options[i].option)) {
+			spec_v2_user_print_cond(v2_user_options[i].option,
+						v2_user_options[i].secure);
+			return v2_user_options[i].cmd;
+		}
+	}
+
+	pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg);
+	return SPECTRE_V2_USER_CMD_AUTO;
+}
+
+static void __init
+spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
+	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+
+	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+		return;
+
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		smt_possible = false;
+
+	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_NONE:
+		goto set_mode;
+	case SPECTRE_V2_USER_CMD_FORCE:
+		mode = SPECTRE_V2_USER_STRICT;
+		break;
+	}
+
+	/* Initialize Indirect Branch Prediction Barrier */
+	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+	}
+
+	/* If enhanced IBRS is enabled, no STIBP is required */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return;
+
+set_mode:
+	spectre_v2_user = mode;
+	/* Only print the STIBP mode when SMT possible */
+	if (smt_possible)
+		pr_info("%s\n", spectre_v2_user_strings[mode]);
+}
+
 static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
@@ -385,12 +489,6 @@ specv2_set_mode:
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
-	}
-
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
 	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
@@ -407,23 +505,21 @@ specv2_set_mode:
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
 
+	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+	spectre_v2_user_select_mitigation(cmd);
+
 	/* Enable STIBP if appropriate */
 	arch_smt_update();
 }
 
 static bool stibp_needed(void)
 {
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
 	/* Enhanced IBRS makes using STIBP unnecessary. */
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return false;
 
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
+	/* Check for strict user mitigation mode */
+	return spectre_v2_user == SPECTRE_V2_USER_STRICT;
 }
 
 static void update_stibp_msr(void *info)
@@ -844,10 +940,13 @@ static char *stibp_state(void)
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return "";
 
-	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
-		return ", STIBP";
-	else
-		return "";
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		return ", STIBP: disabled";
+	case SPECTRE_V2_USER_STRICT:
+		return ", STIBP: forced";
+	}
+	return "";
 }
 
 static char *ibpb_state(void)

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-25 18:33 ` [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
  2018-11-27 17:25   ` Lendacky, Thomas
@ 2018-11-28 14:30   ` tip-bot for Tim Chen
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Tim Chen @ 2018-11-28 14:30 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tim.c.chen, jcm, tglx, mingo, longman9394, luto, ak,
	linux-kernel, dwmw, arjan, casey.schaufler, jpoimboe, torvalds,
	peterz, david.c.stewart, thomas.lendacky, jkosina, hpa,
	dave.hansen, gregkh, asit.k.mallick, keescook, aarcange

Commit-ID:  5bfbe3ad5840d941b89bcac54b821ba14f50a0ba
Gitweb:     https://git.kernel.org/tip/5bfbe3ad5840d941b89bcac54b821ba14f50a0ba
Author:     Tim Chen <tim.c.chen@linux.intel.com>
AuthorDate: Sun, 25 Nov 2018 19:33:46 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:10 +0100

x86/speculation: Prepare for per task indirect branch speculation control

To avoid the overhead of STIBP always on, it's necessary to allow per task
control of STIBP.

Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
SMT is active and flag evaluation is enabled by the speculation control
code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
guest/host switch works properly.

This has no effect yet because TIF_SPEC_IB cannot be set and the static key
which controls evaluation is off. This is a preparatory patch for adding the
control code.

[ tglx: Simplify the context switch logic and make the TIF evaluation
  	depend on SMP=y and on the static key controlling the conditional
  	update. Rename it to TIF_SPEC_IB because it controls both STIBP and
  	IBPB ]
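
For illustration, a stand-alone user space model of the new TIF to
SPEC_CTRL conversion, assuming the constants from this patch
(TIF_SPEC_IB = 9, SPEC_CTRL_STIBP_SHIFT = 1). It is not kernel code; it
only shows that the helper is a branchless shift:

  #include <stdio.h>

  /* Constants as introduced by this patch (assumed values) */
  #define SPEC_CTRL_STIBP_SHIFT  1
  #define SPEC_CTRL_STIBP        (1UL << SPEC_CTRL_STIBP_SHIFT)
  #define TIF_SPEC_IB            9
  #define _TIF_SPEC_IB           (1UL << TIF_SPEC_IB)

  /* Model of stibp_tif_to_spec_ctrl(): move TIF bit 9 down to bit 1 */
  static unsigned long stibp_tif_to_spec_ctrl(unsigned long tifn)
  {
          return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
  }

  int main(void)
  {
          /* A task with TIF_SPEC_IB set yields SPEC_CTRL_STIBP (0x2) */
          printf("0x%lx\n", stibp_tif_to_spec_ctrl(_TIF_SPEC_IB));
          return 0;
  }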

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.176917199@linutronix.de


---
 arch/x86/include/asm/msr-index.h   |  5 +++--
 arch/x86/include/asm/spec-ctrl.h   | 12 ++++++++++++
 arch/x86/include/asm/thread_info.h |  5 ++++-
 arch/x86/kernel/cpu/bugs.c         |  4 ++++
 arch/x86/kernel/process.c          | 20 ++++++++++++++++++--
 5 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 80f4a4f38c79..c8f73efb4ece 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,9 +41,10 @@
 
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
+#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
-#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
+#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 8e2f8411c7a7..27b0bce3933b 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(u64 tifn)
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
 	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 523c69efc38a..fa583ec99e3e 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
+#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -110,6 +111,7 @@ struct thread_info {
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -146,7 +148,8 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
+	 _TIF_SSBD|_TIF_SPEC_IB)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3a223cce1fac..1e13dbfc0919 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
+		/* Conditional STIBP enabled? */
+		if (static_branch_unlikely(&switch_to_cond_stibp))
+			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
 		if (hostval != guestval) {
 			msrval = setguest ? guestval : hostval;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 70e9832379e1..574b144d2b53 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -404,11 +404,17 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
+	unsigned long tif_diff = tifp ^ tifn;
 	u64 msr = x86_spec_ctrl_base;
 	bool updmsr = false;
 
-	/* If TIF_SSBD is different, select the proper mitigation method */
-	if ((tifp ^ tifn) & _TIF_SSBD) {
+	/*
+	 * If TIF_SSBD is different, select the proper mitigation
+	 * method. Note that if SSBD mitigation is disabled or permanently
+	 * enabled, this branch can't be taken because nothing can set
+	 * TIF_SSBD.
+	 */
+	if (tif_diff & _TIF_SSBD) {
 		if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
 			amd_set_ssb_virt_state(tifn);
 		} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
@@ -420,6 +426,16 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 		}
 	}
 
+	/*
+	 * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
+	 * otherwise avoid the MSR write.
+	 */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp)) {
+		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+	}
+
 	if (updmsr)
 		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/process: Consolidate and simplify switch_to_xtra() code
  2018-11-25 18:33 ` [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code Thomas Gleixner
  2018-11-26 18:30   ` Borislav Petkov
@ 2018-11-28 14:30   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:30 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: thomas.lendacky, asit.k.mallick, dwmw, jcm, dave.hansen, mingo,
	hpa, gregkh, casey.schaufler, jpoimboe, aarcange, arjan, luto,
	longman9394, torvalds, peterz, ak, jkosina, tim.c.chen,
	linux-kernel, keescook, tglx, david.c.stewart

Commit-ID:  ff16701a29cba3aafa0bd1656d766813b2d0a811
Gitweb:     https://git.kernel.org/tip/ff16701a29cba3aafa0bd1656d766813b2d0a811
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:47 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:11 +0100

x86/process: Consolidate and simplify switch_to_xtra() code

Move the conditional invocation of __switch_to_xtra() into an inline
function so the logic can be shared between 32 and 64 bit.

Remove the pass-through of the TSS pointer and retrieve the pointer directly
in the bitmap handling function. Use this_cpu_ptr() instead of the
per_cpu() indirection.

This is a preparatory change so integration of conditional indirect branch
speculation optimization happens only in one place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.280855518@linutronix.de

---
 arch/x86/include/asm/switch_to.h |  3 ---
 arch/x86/kernel/process.c        | 12 +++++++-----
 arch/x86/kernel/process.h        | 24 ++++++++++++++++++++++++
 arch/x86/kernel/process_32.c     | 10 +++-------
 arch/x86/kernel/process_64.c     | 10 +++-------
 5 files changed, 37 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
index 36bd243843d6..7cf1a270d891 100644
--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -11,9 +11,6 @@ struct task_struct *__switch_to_asm(struct task_struct *prev,
 
 __visible struct task_struct *__switch_to(struct task_struct *prev,
 					  struct task_struct *next);
-struct tss_struct;
-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
-		      struct tss_struct *tss);
 
 /* This runs on the previous thread's stack. */
 static inline void prepare_switch_to(struct task_struct *next)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 574b144d2b53..cdf8e6694f71 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -40,6 +40,8 @@
 #include <asm/prctl.h>
 #include <asm/spec-ctrl.h>
 
+#include "process.h"
+
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
  * no more per-task TSS's. The TSS size is kept cacheline-aligned
@@ -252,11 +254,12 @@ void arch_setup_new_exec(void)
 		enable_cpuid();
 }
 
-static inline void switch_to_bitmap(struct tss_struct *tss,
-				    struct thread_struct *prev,
+static inline void switch_to_bitmap(struct thread_struct *prev,
 				    struct thread_struct *next,
 				    unsigned long tifp, unsigned long tifn)
 {
+	struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);
+
 	if (tifn & _TIF_IO_BITMAP) {
 		/*
 		 * Copy the relevant range of the IO bitmap.
@@ -448,8 +451,7 @@ void speculation_ctrl_update(unsigned long tif)
 	preempt_enable();
 }
 
-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
-		      struct tss_struct *tss)
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 {
 	struct thread_struct *prev, *next;
 	unsigned long tifp, tifn;
@@ -459,7 +461,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 
 	tifn = READ_ONCE(task_thread_info(next_p)->flags);
 	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
-	switch_to_bitmap(tss, prev, next, tifp, tifn);
+	switch_to_bitmap(prev, next, tifp, tifn);
 
 	propagate_user_return_notify(prev_p, next_p);
 
diff --git a/arch/x86/kernel/process.h b/arch/x86/kernel/process.h
new file mode 100644
index 000000000000..020fbfac3a27
--- /dev/null
+++ b/arch/x86/kernel/process.h
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Code shared between 32 and 64 bit
+
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
+
+/*
+ * This needs to be inline to optimize for the common case where no extra
+ * work needs to be done.
+ */
+static inline void switch_to_extra(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	unsigned long next_tif = task_thread_info(next)->flags;
+	unsigned long prev_tif = task_thread_info(prev)->flags;
+
+	/*
+	 * __switch_to_xtra() handles debug registers, i/o bitmaps,
+	 * speculation mitigations etc.
+	 */
+	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
+		     prev_tif & _TIF_WORK_CTXSW_PREV))
+		__switch_to_xtra(prev, next);
+}
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 5046a3c9dec2..d3e593eb189f 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -59,6 +59,8 @@
 #include <asm/intel_rdt_sched.h>
 #include <asm/proto.h>
 
+#include "process.h"
+
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
@@ -232,7 +234,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
@@ -264,12 +265,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
 		set_iopl_mask(next->iopl);
 
-	/*
-	 * Now maybe handle debug registers and/or IO bitmaps
-	 */
-	if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
-		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);
 
 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 0e0b4288a4b2..bbfbf017065c 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -60,6 +60,8 @@
 #include <asm/unistd_32_ia32.h>
 #endif
 
+#include "process.h"
+
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
@@ -553,7 +555,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
 		     this_cpu_read(irq_count) != -1);
@@ -617,12 +618,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	/* Reload sp0. */
 	update_task_stack(next_p);
 
-	/*
-	 * Now maybe reload the debug registers and handle I/O bitmaps
-	 */
-	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
-		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);
 
 #ifdef CONFIG_XEN_PV
 	/*

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Avoid __switch_to_xtra() calls
  2018-11-25 18:33 ` [patch V2 20/28] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
@ 2018-11-28 14:31   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:31 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: aarcange, longman9394, mingo, torvalds, dave.hansen, peterz,
	tglx, ak, luto, thomas.lendacky, jcm, jpoimboe, hpa, arjan,
	asit.k.mallick, casey.schaufler, keescook, jkosina, linux-kernel,
	gregkh, dwmw, tim.c.chen, david.c.stewart

Commit-ID:  5635d99953f04b550738f6f4c1c532667c3fd872
Gitweb:     https://git.kernel.org/tip/5635d99953f04b550738f6f4c1c532667c3fd872
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:48 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:11 +0100

x86/speculation: Avoid __switch_to_xtra() calls

The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke
__switch_to_xtra() when:

 - CONFIG_SMP is disabled

 - The conditional STIBP mode is disabled

The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask
checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the
only set bit in the work masks.

Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at
run time when the static key controlling the conditional STIBP mode is
disabled.
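
A minimal user space sketch of that masking decision, with the static key
modeled as a plain boolean and all bit values assumed (TIF_SPEC_IB = 9,
other work bits omitted); it only demonstrates why clearing the bit
avoids the slow path:

  #include <stdbool.h>
  #include <stdio.h>

  #define _TIF_SPEC_IB    (1UL << 9)
  #define _TIF_WORK_CTXSW (_TIF_SPEC_IB) /* other work bits omitted */

  static bool cond_stibp; /* models the switch_to_cond_stibp static key */

  static bool needs_switch_to_xtra(unsigned long prev_tif,
                                   unsigned long next_tif)
  {
          /* With conditional STIBP off, TIF_SPEC_IB alone must not
             force the slow path, so mask it out before the check. */
          if (!cond_stibp) {
                  prev_tif &= ~_TIF_SPEC_IB;
                  next_tif &= ~_TIF_SPEC_IB;
          }
          return (prev_tif | next_tif) & _TIF_WORK_CTXSW;
  }

  int main(void)
  {
          printf("%d\n", needs_switch_to_xtra(_TIF_SPEC_IB, 0)); /* 0 */
          cond_stibp = true;
          printf("%d\n", needs_switch_to_xtra(_TIF_SPEC_IB, 0)); /* 1 */
          return 0;
  }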

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.374062201@linutronix.de


---
 arch/x86/include/asm/thread_info.h | 13 +++++++++++--
 arch/x86/kernel/process.h          | 15 +++++++++++++++
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index fa583ec99e3e..6d201699c651 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -147,9 +147,18 @@ struct thread_info {
 	 _TIF_FSCHECK)
 
 /* flags to check in __switch_to() */
-#define _TIF_WORK_CTXSW							\
+#define _TIF_WORK_CTXSW_BASE						\
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
-	 _TIF_SSBD|_TIF_SPEC_IB)
+	 _TIF_SSBD)
+
+/*
+ * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
+ */
+#ifdef CONFIG_SMP
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+#else
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
+#endif
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/process.h b/arch/x86/kernel/process.h
index 020fbfac3a27..898e97cf6629 100644
--- a/arch/x86/kernel/process.h
+++ b/arch/x86/kernel/process.h
@@ -2,6 +2,8 @@
 //
 // Code shared between 32 and 64 bit
 
+#include <asm/spec-ctrl.h>
+
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
 
 /*
@@ -14,6 +16,19 @@ static inline void switch_to_extra(struct task_struct *prev,
 	unsigned long next_tif = task_thread_info(next)->flags;
 	unsigned long prev_tif = task_thread_info(prev)->flags;
 
+	if (IS_ENABLED(CONFIG_SMP)) {
+		/*
+		 * Avoid __switch_to_xtra() invocation when conditional
+		 * STIBP is disabled and the only different bit is
+		 * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not
+		 * in the TIF_WORK_CTXSW masks.
+		 */
+		if (!static_branch_likely(&switch_to_cond_stibp)) {
+			prev_tif &= ~_TIF_SPEC_IB;
+			next_tif &= ~_TIF_SPEC_IB;
+		}
+	}
+
 	/*
 	 * __switch_to_xtra() handles debug registers, i/o bitmaps,
 	 * speculation mitigations etc.

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Prepare for conditional IBPB in switch_mm()
  2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
  2018-11-25 19:11   ` Thomas Gleixner
  2018-11-25 20:53   ` Andi Kleen
@ 2018-11-28 14:31   ` tip-bot for Thomas Gleixner
  2 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:31 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: casey.schaufler, jpoimboe, peterz, linux-kernel, thomas.lendacky,
	keescook, hpa, jkosina, david.c.stewart, arjan, torvalds, mingo,
	asit.k.mallick, ak, jcm, dwmw, luto, tglx, gregkh, dave.hansen,
	tim.c.chen, longman9394, aarcange

Commit-ID:  4c71a2b6fd7e42814aa68a6dec88abf3b42ea573
Gitweb:     https://git.kernel.org/tip/4c71a2b6fd7e42814aa68a6dec88abf3b42ea573
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:49 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:11 +0100

x86/speculation: Prepare for conditional IBPB in switch_mm()

The IBPB speculation barrier is issued from switch_mm() when the kernel
switches to a user space task with a different mm than the user space task
which ran last on the same CPU.

An additional optimization is to avoid IBPB when the incoming task can be
ptraced by the outgoing task. This optimization only works when switching
directly between two user space tasks. When switching from a kernel task to
a user space task the optimization fails because the previous task cannot
be accessed anymore. So in quite a few scenarios the optimization just
adds overhead.

The upcoming conditional IBPB support will issue IBPB only for user space
tasks which have the TIF_SPEC_IB bit set. This requires handling the
following cases:

  1) Switch from a user space task (potential attacker) which has
     TIF_SPEC_IB set to a user space task (potential victim) which has
     TIF_SPEC_IB not set.

  2) Switch from a user space task (potential attacker) which has
     TIF_SPEC_IB not set to a user space task (potential victim) which has
     TIF_SPEC_IB set.

This needs to be optimized for the case where the IBPB can be avoided when
only kernel threads ran in between user space tasks which belong to the
same process.

The current check whether two tasks belong to the same context uses the
tasks' context id. While correct, it's simpler to use the mm pointer because
it allows mangling the TIF_SPEC_IB bit into it. The context id based
mechanism requires extra storage, which creates worse code.

When a task is scheduled out its TIF_SPEC_IB bit is mangled as bit 0 into
the per CPU storage which is used to track the last user space mm which was
running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of
the incoming task to make the decision whether IBPB needs to be issued or
not to cover the two cases above.
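
The following stand-alone sketch models the mangling and the resulting
IBPB decision described above; the pointer values are fabricated and the
bit positions are assumptions taken from this patch, not kernel ABI:

  #include <stdio.h>

  #define TIF_SPEC_IB       9
  #define _TIF_SPEC_IB      (1UL << TIF_SPEC_IB)
  #define LAST_USER_MM_IBPB 0x1UL /* bit 0 is free in an aligned pointer */

  /* Model of mm_mangle_tif_spec_ib(): fold the TIF bit into the mm */
  static unsigned long mangle(unsigned long mm, unsigned long tif)
  {
          return mm | ((tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB);
  }

  int main(void)
  {
          unsigned long mm_a = 0x1000, mm_b = 0x2000; /* fake pointers */
          unsigned long prev = mangle(mm_a, _TIF_SPEC_IB); /* IB set */
          unsigned long next = mangle(mm_b, 0);            /* IB clear */

          /* IBPB is due when the mms differ and either bit is set */
          if (next != prev && ((next | prev) & LAST_USER_MM_IBPB))
                  printf("IBPB\n");
          return 0;
  }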

As conditional IBPB is going to be the default, remove the dubious ptrace
check for the always-IBPB case and simply issue IBPB whenever the
process changes.

Move the storage to a different place in the struct as the original one
created a hole.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.466447057@linutronix.de

---
 arch/x86/include/asm/nospec-branch.h |   2 +
 arch/x86/include/asm/tlbflush.h      |   8 ++-
 arch/x86/kernel/cpu/bugs.c           |  29 +++++++--
 arch/x86/mm/tlb.c                    | 115 ++++++++++++++++++++++++++---------
 4 files changed, 118 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index be0b0aa780e2..d4d35baf0430 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -312,6 +312,8 @@ do {									\
 } while (0)
 
 DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index d760611cfc35..f4204bf377fc 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -169,10 +169,14 @@ struct tlb_state {
 
 #define LOADED_MM_SWITCHING ((struct mm_struct *)1)
 
+	/* Last user mm for optimizing IBPB */
+	union {
+		struct mm_struct	*last_user_mm;
+		unsigned long		last_user_mm_ibpb;
+	};
+
 	u16 loaded_mm_asid;
 	u16 next_asid;
-	/* last user mm's ctx id */
-	u64 last_ctx_id;
 
 	/*
 	 * We can be in one of several states:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1e13dbfc0919..7c946a9af947 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,10 @@ u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
 
 /* Control conditional STIBP in switch_to() */
 DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+/* Control conditional IBPB in switch_mm() */
+DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+/* Control unconditional IBPB in switch_mm() */
+DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 void __init check_bugs(void)
 {
@@ -331,7 +335,17 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	/* Initialize Indirect Branch Prediction Barrier */
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+
+		switch (mode) {
+		case SPECTRE_V2_USER_STRICT:
+			static_branch_enable(&switch_mm_always_ibpb);
+			break;
+		default:
+			break;
+		}
+
+		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+			mode == SPECTRE_V2_USER_STRICT ? "always-on" : "conditional");
 	}
 
 	/* If enhanced IBRS is enabled, no STIBP is required */
@@ -955,10 +969,15 @@ static char *stibp_state(void)
 
 static char *ibpb_state(void)
 {
-	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
-		return ", IBPB";
-	else
-		return "";
+	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+		switch (spectre_v2_user) {
+		case SPECTRE_V2_USER_NONE:
+			return ", IBPB: disabled";
+		case SPECTRE_V2_USER_STRICT:
+			return ", IBPB: always-on";
+		}
+	}
+	return "";
 }
 
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bddd6b3cee1d..03b6b4c2238d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,7 +7,6 @@
 #include <linux/export.h>
 #include <linux/cpu.h>
 #include <linux/debugfs.h>
-#include <linux/ptrace.h>
 
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -30,6 +29,12 @@
  *	Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi
  */
 
+/*
+ * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * stored in cpu_tlb_state.last_user_mm_ibpb.
+ */
+#define LAST_USER_MM_IBPB	0x1UL
+
 /*
  * We get here when we do something requiring a TLB invalidation
  * but could not go invalidate all of the contexts.  We do the
@@ -181,17 +186,87 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 	}
 }
 
-static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
+{
+	unsigned long next_tif = task_thread_info(next)->flags;
+	unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
+
+	return (unsigned long)next->mm | ibpb;
+}
+
+static void cond_ibpb(struct task_struct *next)
 {
+	if (!next || !next->mm)
+		return;
+
 	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
-	 * issue a IBPB to stop user->user Spectre-v2 attacks.
-	 *
-	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
+	 * Both the conditional and the always-on IBPB mode use the mm
+	 * pointer to avoid the IBPB when switching between tasks of the
+	 * same process. Using the mm pointer instead of mm->context.ctx_id
+	 * opens a hypothetical hole vs. mm_struct reuse, which is more or
+	 * less impossible for an attacker to control. Aside from that, it
+	 * would only affect the first schedule, so the theoretically
+	 * exposed data is not really interesting.
 	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+	if (static_branch_likely(&switch_mm_cond_ibpb)) {
+		unsigned long prev_mm, next_mm;
+
+		/*
+		 * This is a bit more complex than the always mode because
+		 * it has to handle two cases:
+		 *
+		 * 1) Switch from a user space task (potential attacker)
+		 *    which has TIF_SPEC_IB set to a user space task
+		 *    (potential victim) which has TIF_SPEC_IB not set.
+		 *
+		 * 2) Switch from a user space task (potential attacker)
+		 *    which has TIF_SPEC_IB not set to a user space task
+		 *    (potential victim) which has TIF_SPEC_IB set.
+		 *
+		 * This could be done by unconditionally issuing IBPB when
+		 * a task which has TIF_SPEC_IB set is either scheduled in
+		 * or out. Though that results in two flushes when:
+		 *
+		 * - the same user space task is scheduled out and later
+		 *   scheduled in again and only a kernel thread ran in
+		 *   between.
+		 *
+		 * - a user space task belonging to the same process is
+		 *   scheduled in after a kernel thread ran in between
+		 *
+		 * - a user space task belonging to the same process is
+		 *   scheduled in immediately.
+		 *
+		 * Optimize this with reasonably small overhead for the
+		 * above cases. Mangle the TIF_SPEC_IB bit into the mm
+		 * pointer of the incoming task which is stored in
+		 * cpu_tlbstate.last_user_mm_ibpb for comparison.
+		 */
+		next_mm = mm_mangle_tif_spec_ib(next);
+		prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
+
+		/*
+		 * Issue IBPB only if the mm's are different and one or
+		 * both have the IBPB bit set.
+		 */
+		if (next_mm != prev_mm &&
+		    (next_mm | prev_mm) & LAST_USER_MM_IBPB)
+			indirect_branch_prediction_barrier();
+
+		this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
+	}
+
+	if (static_branch_unlikely(&switch_mm_always_ibpb)) {
+		/*
+		 * Only flush when switching to a user space task with a
+		 * different context than the user space task which ran
+		 * last on this CPU.
+		 */
+		if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {
+			indirect_branch_prediction_barrier();
+			this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);
+		}
+	}
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
@@ -292,22 +367,12 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		new_asid = prev_asid;
 		need_flush = true;
 	} else {
-		u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
-
 		/*
 		 * Avoid user/user BTB poisoning by flushing the branch
 		 * predictor when switching between processes. This stops
 		 * one process from doing Spectre-v2 attacks on another.
-		 *
-		 * As an optimization, flush indirect branches only when
-		 * switching into a processes that can't be ptrace by the
-		 * current one (as in such case, attacker has much more
-		 * convenient way how to tamper with the next process than
-		 * branch buffer poisoning).
 		 */
-		if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
-				ibpb_needed(tsk, last_ctx_id))
-			indirect_branch_prediction_barrier();
+		cond_ibpb(tsk);
 
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
@@ -365,14 +430,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
 	}
 
-	/*
-	 * Record last user mm's context id, so we can avoid
-	 * flushing branch buffer with IBPB if we switch back
-	 * to the same user.
-	 */
-	if (next != &init_mm)
-		this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
-
 	/* Make sure we write CR3 before loaded_mm. */
 	barrier();
 
@@ -441,7 +498,7 @@ void initialize_tlbstate_and_flush(void)
 	write_cr3(build_cr3(mm->pgd, 0));
 
 	/* Reinitialize tlbstate. */
-	this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);
+	this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS
  2018-11-25 18:33 ` [patch V2 22/28] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
@ 2018-11-28 14:32   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:32 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: ak, jkosina, linux-kernel, arjan, asit.k.mallick, jcm, hpa,
	thomas.lendacky, keescook, jpoimboe, longman9394, torvalds,
	gregkh, luto, david.c.stewart, dwmw, tglx, aarcange, tim.c.chen,
	mingo, peterz, casey.schaufler, dave.hansen

Commit-ID:  46f7ecb1e7359f183f5bbd1e08b90e10e52164f9
Gitweb:     https://git.kernel.org/tip/46f7ecb1e7359f183f5bbd1e08b90e10e52164f9
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:50 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:11 +0100

ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS

The IBPB control code in x86 removed the usage. Remove the functionality
which was introduced for this.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.559149393@linutronix.de


---
 include/linux/ptrace.h | 17 -----------------
 kernel/ptrace.c        | 10 ----------
 2 files changed, 27 deletions(-)

diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 6c2ffed907f5..de20ede2c5c8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,15 +64,12 @@ extern void exit_ptrace(struct task_struct *tracer, struct list_head *dead);
 #define PTRACE_MODE_NOAUDIT	0x04
 #define PTRACE_MODE_FSCREDS	0x08
 #define PTRACE_MODE_REALCREDS	0x10
-#define PTRACE_MODE_SCHED	0x20
-#define PTRACE_MODE_IBPB	0x40
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)
-#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -90,20 +87,6 @@ extern void exit_ptrace(struct task_struct *tracer, struct list_head *dead);
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
 
-/**
- * ptrace_may_access - check whether the caller is permitted to access
- * a target task.
- * @task: target task
- * @mode: selects type of access and caller credentials
- *
- * Returns true on success, false on denial.
- *
- * Similar to ptrace_may_access(). Only to be called from context switch
- * code. Does not call into audit and the regular LSM hooks due to locking
- * constraints.
- */
-extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode);
-
 static inline int ptrace_reparented(struct task_struct *child)
 {
 	return !same_thread_group(child->real_parent, child->parent);
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 80b34dffdfb9..c2cee9db5204 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -261,9 +261,6 @@ static int ptrace_check_attach(struct task_struct *child, bool ignore_state)
 
 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 {
-	if (mode & PTRACE_MODE_SCHED)
-		return false;
-
 	if (mode & PTRACE_MODE_NOAUDIT)
 		return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);
 	else
@@ -331,16 +328,9 @@ ok:
 	     !ptrace_has_cap(mm->user_ns, mode)))
 	    return -EPERM;
 
-	if (mode & PTRACE_MODE_SCHED)
-		return 0;
 	return security_ptrace_access_check(task, mode);
 }
 
-bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode)
-{
-	return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED);
-}
-
 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
 {
 	int err;

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Split out TIF update
  2018-11-25 18:33 ` [patch V2 23/28] x86/speculation: Split out TIF update Thomas Gleixner
@ 2018-11-28 14:33   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:33 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: aarcange, asit.k.mallick, luto, dwmw, tglx, david.c.stewart,
	peterz, ak, dave.hansen, keescook, torvalds, hpa, arjan,
	longman9394, jpoimboe, mingo, jcm, gregkh, tim.c.chen, jkosina,
	thomas.lendacky, linux-kernel, casey.schaufler

Commit-ID:  e6da8bb6f9abb2628381904b24163c770e630bac
Gitweb:     https://git.kernel.org/tip/e6da8bb6f9abb2628381904b24163c770e630bac
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:51 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:12 +0100

x86/speculation: Split out TIF update

The update of the TIF_SSBD flag and the conditional speculation control MSR
update is done in the ssb_prctl_set() function directly. The upcoming prctl
support for controlling indirect branch speculation via STIBP needs the
same mechanism.

Split the code out and make it reusable. Reword the comment about updates
for other tasks.
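
A user space model of the split-out helper's decision, with the
test_and_{set,clear}_tsk_thread_flag() primitives reduced to plain bit
operations; names and semantics are a simplified sketch of the patch
below, not kernel code:

  #include <stdbool.h>
  #include <stdio.h>

  static unsigned long tif_flags; /* models the task's thread_info flags */

  static void task_update_spec_tif(int tifbit, bool on, bool is_current)
  {
          unsigned long mask = 1UL << tifbit;
          bool was_set = tif_flags & mask;
          bool update = on ? !was_set : was_set; /* did the bit change? */

          if (on)
                  tif_flags |= mask;
          else
                  tif_flags &= ~mask;

          /* MSRs are written immediately only for the current task; a
             non-current task picks the change up when scheduled next. */
          if (is_current && update)
                  printf("speculation_ctrl_update_current()\n");
  }

  int main(void)
  {
          task_update_spec_tif(9, true, true); /* bit flips: MSR update */
          task_update_spec_tif(9, true, true); /* no change: no write */
          return 0;
  }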

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.652305076@linutronix.de

---
 arch/x86/kernel/cpu/bugs.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7c946a9af947..3b65a53d2c33 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -702,10 +702,29 @@ static void ssb_select_mitigation(void)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Speculation prctl: " fmt
 
-static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 {
 	bool update;
 
+	if (on)
+		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+
+	/*
+	 * Immediately update the speculation control MSRs for the current
+	 * task, but for a non-current task delay setting the CPU
+	 * mitigation until it is scheduled next.
+	 *
+	 * This can only happen for SECCOMP mitigation. For PRCTL it's
+	 * always the current task.
+	 */
+	if (tsk == current && update)
+		speculation_ctrl_update_current();
+}
+
+static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
 	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL &&
 	    ssb_mode != SPEC_STORE_BYPASS_SECCOMP)
 		return -ENXIO;
@@ -716,28 +735,20 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		update = test_and_clear_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, false);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	default:
 		return -ERANGE;
 	}
-
-	/*
-	 * If being set on non-current task, delay setting the CPU
-	 * mitigation until it is next scheduled.
-	 */
-	if (task == current && update)
-		speculation_ctrl_update_current();
-
 	return 0;
 }
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-25 18:33 ` [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
  2018-11-27 20:18   ` Lendacky, Thomas
@ 2018-11-28 14:34   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jkosina, tglx, hpa, linux-kernel, longman9394, dwmw, luto, ak,
	gregkh, thomas.lendacky, david.c.stewart, torvalds, dave.hansen,
	arjan, jpoimboe, mingo, peterz, asit.k.mallick, keescook,
	casey.schaufler, tim.c.chen, aarcange, jcm

Commit-ID:  6893a959d7fdebbab5f5aa112c277d5a44435ba1
Gitweb:     https://git.kernel.org/tip/6893a959d7fdebbab5f5aa112c277d5a44435ba1
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:52 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:13 +0100

x86/speculation: Prepare arch_smt_update() for PRCTL mode

The upcoming fine-grained per-task STIBP control needs to be updated on CPU
hotplug as well.

Split out the code which controls the strict mode so the prctl control code
can be added later. Mark the SMP function call argument __unused while at it.
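
A stand-alone model of the mask computation in the update_stibp_strict()
helper added below; the SPEC_CTRL bit values are assumed from
msr-index.h and the on_each_cpu() MSR broadcast is reduced to a printf:

  #include <stdbool.h>
  #include <stdio.h>

  #define SPEC_CTRL_IBRS  (1UL << 0)
  #define SPEC_CTRL_STIBP (1UL << 1)

  static unsigned long x86_spec_ctrl_base = SPEC_CTRL_IBRS; /* example */

  static void update_stibp_strict(bool smt_active)
  {
          unsigned long mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;

          if (smt_active)
                  mask |= SPEC_CTRL_STIBP;

          if (mask == x86_spec_ctrl_base)
                  return; /* nothing changed: skip the MSR writes */

          x86_spec_ctrl_base = mask;
          /* the kernel broadcasts the new base via on_each_cpu() here */
          printf("STIBP %s\n", smt_active ? "always-on" : "off");
  }

  int main(void)
  {
          update_stibp_strict(true);  /* SMT online: STIBP always-on */
          update_stibp_strict(true);  /* no change: silent */
          update_stibp_strict(false); /* SMT offline: STIBP off */
          return 0;
  }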

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.759457117@linutronix.de


---
 arch/x86/kernel/cpu/bugs.c | 46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 29f40a92f5a8..9cab538e10f1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -530,40 +530,44 @@ specv2_set_mode:
 	arch_smt_update();
 }
 
-static bool stibp_needed(void)
+static void update_stibp_msr(void * __unused)
 {
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	/* Check for strict user mitigation mode */
-	return spectre_v2_user == SPECTRE_V2_USER_STRICT;
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 }
 
-static void update_stibp_msr(void *info)
+/* Update x86_spec_ctrl_base in case SMT state changed. */
+static void update_stibp_strict(void)
 {
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask == x86_spec_ctrl_base)
+		return;
+
+	pr_info("Update user space SMT mitigation: STIBP %s\n",
+		mask & SPEC_CTRL_STIBP ? "always-on" : "off");
+	x86_spec_ctrl_base = mask;
+	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
 void arch_smt_update(void)
 {
-	u64 mask;
-
-	if (!stibp_needed())
+	/* Enhanced IBRS implies STIBP. No update required. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
 
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		break;
+	case SPECTRE_V2_USER_STRICT:
+		update_stibp_strict();
+		break;
 	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Add prctl() control for indirect branch speculation
  2018-11-25 18:33 ` [patch V2 25/28] x86/speculation: Add prctl() control for indirect branch speculation Thomas Gleixner
@ 2018-11-28 14:34   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, tglx, david.c.stewart, asit.k.mallick, tim.c.chen,
	keescook, dwmw, mingo, thomas.lendacky, peterz, dave.hansen,
	jpoimboe, torvalds, hpa, jcm, gregkh, casey.schaufler, arjan,
	longman9394, jkosina, luto, ak, aarcange

Commit-ID:  9137bb27e60e554dab694eafa4cca241fa3a694f
Gitweb:     https://git.kernel.org/tip/9137bb27e60e554dab694eafa4cca241fa3a694f
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:53 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:13 +0100

x86/speculation: Add prctl() control for indirect branch speculation

Add the PR_SPEC_INDIRECT_BRANCH option for the PR_GET_SPECULATION_CTRL and
PR_SET_SPECULATION_CTRL prctls to allow fine-grained per-task control of
indirect branch speculation via STIBP and IBPB.

Invocations:
 Check indirect branch speculation status with
 - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);

 Enable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);

 Disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);

 Force disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);

See Documentation/userspace-api/spec_ctrl.rst.
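
A complete, compilable user space example of these invocations; the
fallback PR_* values are copied from include/uapi/linux/prctl.h as of
this series and only matter when building against older headers:

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_GET_SPECULATION_CTRL
  # define PR_GET_SPECULATION_CTRL 52
  # define PR_SET_SPECULATION_CTRL 53
  #endif
  #ifndef PR_SPEC_INDIRECT_BRANCH
  # define PR_SPEC_INDIRECT_BRANCH 1
  #endif
  #ifndef PR_SPEC_DISABLE
  # define PR_SPEC_DISABLE (1UL << 2)
  #endif

  int main(void)
  {
          /* Query the indirect branch speculation state of this task */
          int state = prctl(PR_GET_SPECULATION_CTRL,
                            PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
          if (state < 0)
                  perror("PR_GET_SPECULATION_CTRL");
          else
                  printf("state: 0x%x\n", state);

          /* Opt this task out of indirect branch speculation */
          if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                    PR_SPEC_DISABLE, 0, 0) < 0)
                  perror("PR_SET_SPECULATION_CTRL");
          return 0;
  }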

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.866780996@linutronix.de

---
 Documentation/userspace-api/spec_ctrl.rst |  9 +++++
 arch/x86/include/asm/nospec-branch.h      |  1 +
 arch/x86/kernel/cpu/bugs.c                | 67 +++++++++++++++++++++++++++++++
 arch/x86/kernel/process.c                 |  5 +++
 include/linux/sched.h                     |  9 +++++
 include/uapi/linux/prctl.h                |  1 +
 tools/include/uapi/linux/prctl.h          |  1 +
 7 files changed, 93 insertions(+)

diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
index 32f3d55c54b7..c4dbe6f7cdae 100644
--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,12 @@ Speculation misfeature controls
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_SPEC_INDIRECT_BRANCH: Indirect Branch Speculation in User Processes
+                           (Mitigate Spectre V2 style attacks against user processes)
+
+  Invocations:
+   * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index d4d35baf0430..2adbe7b047fa 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -232,6 +232,7 @@ enum spectre_v2_mitigation {
 enum spectre_v2_user_mitigation {
 	SPECTRE_V2_USER_NONE,
 	SPECTRE_V2_USER_STRICT,
+	SPECTRE_V2_USER_PRCTL,
 };
 
 /* The Speculative Store Bypass disable variants */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9cab538e10f1..74359fff87fd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -566,6 +566,8 @@ void arch_smt_update(void)
 	case SPECTRE_V2_USER_STRICT:
 		update_stibp_strict();
 		break;
+	case SPECTRE_V2_USER_PRCTL:
+		break;
 	}
 
 	mutex_unlock(&spec_ctrl_mutex);
@@ -752,12 +754,50 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+			return 0;
+		/*
+		 * Indirect branch speculation is always disabled in strict
+		 * mode.
+		 */
+		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+			return -EPERM;
+		task_clear_spec_ib_disable(task);
+		task_update_spec_tif(task);
+		break;
+	case PR_SPEC_DISABLE:
+	case PR_SPEC_FORCE_DISABLE:
+		/*
+		 * Indirect branch speculation is always allowed when
+		 * mitigation is force disabled.
+		 */
+		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+			return -EPERM;
+		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+			return 0;
+		task_set_spec_ib_disable(task);
+		if (ctrl == PR_SPEC_FORCE_DISABLE)
+			task_set_spec_ib_force_disable(task);
+		task_update_spec_tif(task);
+		break;
+	default:
+		return -ERANGE;
+	}
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
+	case PR_SPEC_INDIRECT_BRANCH:
+		return ib_prctl_set(task, ctrl);
 	default:
 		return -ENODEV;
 	}
@@ -790,11 +830,34 @@ static int ssb_prctl_get(struct task_struct *task)
 	}
 }
 
+static int ib_prctl_get(struct task_struct *task)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		return PR_SPEC_NOT_AFFECTED;
+
+	switch (spectre_v2_user) {
+	case SPECTRE_V2_USER_NONE:
+		return PR_SPEC_ENABLE;
+	case SPECTRE_V2_USER_PRCTL:
+		if (task_spec_ib_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (task_spec_ib_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case SPECTRE_V2_USER_STRICT:
+		return PR_SPEC_DISABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_get(task);
+	case PR_SPEC_INDIRECT_BRANCH:
+		return ib_prctl_get(task);
 	default:
 		return -ENODEV;
 	}
@@ -974,6 +1037,8 @@ static char *stibp_state(void)
 		return ", STIBP: disabled";
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
+	case SPECTRE_V2_USER_PRCTL:
+		return "";
 	}
 	return "";
 }
@@ -986,6 +1051,8 @@ static char *ibpb_state(void)
 			return ", IBPB: disabled";
 		case SPECTRE_V2_USER_STRICT:
 			return ", IBPB: always-on";
+		case SPECTRE_V2_USER_PRCTL:
+			return "";
 		}
 	}
 	return "";
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index afbe2eb4a1c6..7d31192296a8 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -450,6 +450,11 @@ static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
 			set_tsk_thread_flag(tsk, TIF_SSBD);
 		else
 			clear_tsk_thread_flag(tsk, TIF_SSBD);
+
+		if (task_spec_ib_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SPEC_IB);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SPEC_IB);
 	}
 	/* Return the updated threadinfo flags*/
 	return task_thread_info(tsk)->flags;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a51c13c2b1a0..d607db5fcc6a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void)
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_IB_DISABLE		5	/* Indirect branch speculation restricted */
+#define PFA_SPEC_IB_FORCE_DISABLE	6	/* Indirect branch speculation permanently restricted */
 
 #define TASK_PFA_TEST(name, func)					\
 	static inline bool task_##func(struct task_struct *p)		\
@@ -1484,6 +1486,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ssb_disable)
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 
+TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable)
+
+TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
+TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index c0d7ea0bf5b6..b17201edfa09 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index c0d7ea0bf5b6..b17201edfa09 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Enable prctl mode for spectre_v2_user
  2018-11-25 18:33 ` [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user Thomas Gleixner
  2018-11-26  7:56   ` Dominik Brodowski
@ 2018-11-28 14:35   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:35 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, peterz, david.c.stewart, longman9394, jkosina, ak,
	casey.schaufler, gregkh, arjan, jcm, tglx, thomas.lendacky,
	dave.hansen, luto, hpa, asit.k.mallick, jpoimboe, linux-kernel,
	torvalds, dwmw, aarcange, keescook, tim.c.chen

Commit-ID:  7cc765a67d8e04ef7d772425ca5a2a1e2b894c15
Gitweb:     https://git.kernel.org/tip/7cc765a67d8e04ef7d772425ca5a2a1e2b894c15
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:54 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:13 +0100

x86/speculation: Enable prctl mode for spectre_v2_user

Now that all prerequisites are in place:

 - Add the prctl command line option

 - Default the 'auto' mode to 'prctl'

 - When SMT state changes, update the static key which controls the
   conditional STIBP evaluation on context switch.

 - At init update the static key which controls the conditional IBPB
   evaluation on context switch.
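
As a quick way to observe the result (illustration only, not part of the
patch): the selected user space mitigation mode is reflected in sysfs, so
a trivial reader like the sketch below should show ", STIBP: conditional"
while SMT is active and ", IBPB: conditional" once the static keys are
enabled:

  #include <stdio.h>

  int main(void)
  {
  	char buf[256];
  	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2",
  			"r");

  	if (!f) {
  		perror("spectre_v2");
  		return 1;
  	}
  	if (fgets(buf, sizeof(buf), f))
  		fputs(buf, stdout);	/* single line, newline included */
  	fclose(f);
  	return 0;
  }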

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185005.958421388@linutronix.de


---
 Documentation/admin-guide/kernel-parameters.txt |  7 ++++-
 arch/x86/kernel/cpu/bugs.c                      | 41 +++++++++++++++++++------
 2 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b6e5b33b9d75..a9b98a4e8789 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4236,9 +4236,14 @@
 			off     - Unconditionally disable mitigations. Is
 				  enforced by spectre_v2=off
 
+			prctl   - Indirect branch speculation is enabled,
+				  but mitigation can be enabled via prctl
+				  per thread.  The mitigation control state
+				  is inherited on fork.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is off.
+				  Default is prctl.
 
 			Not specifying this option is equivalent to
 			spectre_v2_user=auto.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 74359fff87fd..d0137d10f9a6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -255,11 +255,13 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_NONE,
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
+	SPECTRE_V2_USER_CMD_PRCTL,
 };
 
 static const char * const spectre_v2_user_strings[] = {
 	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
 	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
+	[SPECTRE_V2_USER_PRCTL]		= "User space: Mitigation: STIBP via prctl",
 };
 
 static const struct {
@@ -270,6 +272,7 @@ static const struct {
 	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
 	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
+	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -324,12 +327,15 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 		smt_possible = false;
 
 	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
-	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_NONE:
 		goto set_mode;
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_PRCTL:
+		mode = SPECTRE_V2_USER_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -340,6 +346,9 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 		case SPECTRE_V2_USER_STRICT:
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
+		case SPECTRE_V2_USER_PRCTL:
+			static_branch_enable(&switch_mm_cond_ibpb);
+			break;
 		default:
 			break;
 		}
@@ -352,6 +361,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
+	/*
+	 * If SMT is not possible or STIBP is not available, clear the STIBP
+	 * mode.
+	 */
+	if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
+		mode = SPECTRE_V2_USER_NONE;
 set_mode:
 	spectre_v2_user = mode;
 	/* Only print the STIBP mode when SMT possible */
@@ -552,6 +567,15 @@ static void update_stibp_strict(void)
 	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
+/* Update the static key controlling the evaluation of TIF_SPEC_IB */
+static void update_indir_branch_cond(void)
+{
+	if (sched_smt_active())
+		static_branch_enable(&switch_to_cond_stibp);
+	else
+		static_branch_disable(&switch_to_cond_stibp);
+}
+
 void arch_smt_update(void)
 {
 	/* Enhanced IBRS implies STIBP. No update required. */
@@ -567,6 +591,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_USER_PRCTL:
+		update_indir_branch_cond();
 		break;
 	}
 
@@ -1038,7 +1063,8 @@ static char *stibp_state(void)
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_USER_PRCTL:
-		return "";
+		if (static_key_enabled(&switch_to_cond_stibp))
+			return ", STIBP: conditional";
 	}
 	return "";
 }
@@ -1046,14 +1072,11 @@ static char *stibp_state(void)
 static char *ibpb_state(void)
 {
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		switch (spectre_v2_user) {
-		case SPECTRE_V2_USER_NONE:
-			return ", IBPB: disabled";
-		case SPECTRE_V2_USER_STRICT:
+		if (static_key_enabled(&switch_mm_always_ibpb))
 			return ", IBPB: always-on";
-		case SPECTRE_V2_USER_PRCTL:
-			return "";
-		}
+		if (static_key_enabled(&switch_mm_cond_ibpb))
+			return ", IBPB: conditional";
+		return ", IBPB: disabled";
 	}
 	return "";
 }

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
  2018-11-25 19:35   ` Randy Dunlap
  2018-11-25 20:40   ` Linus Torvalds
@ 2018-11-28 14:35   ` tip-bot for Thomas Gleixner
  2018-12-04 18:45   ` [patch V2 27/28] " Dave Hansen
  3 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:35 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, tglx, linux-kernel, arjan, thomas.lendacky, jkosina,
	aarcange, jpoimboe, keescook, mingo, ak, david.c.stewart,
	longman9394, gregkh, jcm, dave.hansen, dwmw, tim.c.chen, hpa,
	asit.k.mallick, casey.schaufler, luto, torvalds

Commit-ID:  6b3e64c237c072797a9ec918654a60e3a46488e2
Gitweb:     https://git.kernel.org/tip/6b3e64c237c072797a9ec918654a60e3a46488e2
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:55 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:14 +0100

x86/speculation: Add seccomp Spectre v2 user space protection mode

If 'prctl' mode of user space protection from Spectre v2 is selected
on the kernel command line, STIBP and IBPB are applied to tasks which
restrict their indirect branch speculation via prctl.

SECCOMP already enables the SSBD mitigation for sandboxed tasks, so it
makes sense to prevent Spectre v2 user-space-to-user-space attacks as
well.

The Intel mitigation guide documents how STIBP works:

   Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
   prevents the predicted targets of indirect branches on any logical
   processor of that core from being controlled by software that executes
   (or executed previously) on another logical processor of the same core.

Ergo setting STIBP protects the task itself from being attacked by a task
running on a different hyper-thread, and protects the tasks running on
different hyper-threads from being attacked by it.

While the document suggests that the branch predictors are shielded between
the logical processors, the observed performance regressions suggest that
STIBP simply disables the branch predictor more or less completely. Of
course the document wording is vague, but the fact that there is also no
requirement for issuing IBPB when STIBP is used points clearly in that
direction. The kernel still issues IBPB even when STIBP is used until Intel
clarifies the whole mechanism.

IBPB is issued when the task switches out, so malicious sandbox code cannot
mistrain the branch predictor for the next user space task on the same
logical processor.
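
To make the user visible effect concrete (an assumed test, not part of
the patch): after a task installs any seccomp filter without
SECCOMP_FILTER_FLAG_SPEC_ALLOW, the speculation prctl should report the
force disabled state for the indirect branch control:

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>

  #ifndef PR_GET_SPECULATION_CTRL
  # define PR_GET_SPECULATION_CTRL	52
  #endif

  int main(void)
  {
  	/* Minimal allow-all filter; only the side effect matters here */
  	struct sock_filter insn[] = {
  		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
  	};
  	struct sock_fprog prog = { .len = 1, .filter = insn };
  	long state;

  	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
  	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
  		perror("PR_SET_SECCOMP");
  		return 1;
  	}
  	/* Expect PR_SPEC_FORCE_DISABLE (1 << 3) to be set now */
  	state = prctl(PR_GET_SPECULATION_CTRL,
  		      1 /* PR_SPEC_INDIRECT_BRANCH */, 0, 0, 0);
  	printf("ib state: 0x%lx\n", state);
  	return 0;
  }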

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185006.051663132@linutronix.de


---
 Documentation/admin-guide/kernel-parameters.txt |  9 ++++++++-
 arch/x86/include/asm/nospec-branch.h            |  1 +
 arch/x86/kernel/cpu/bugs.c                      | 17 ++++++++++++++++-
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a9b98a4e8789..f405281bb202 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4241,9 +4241,16 @@
 				  per thread.  The mitigation control state
 				  is inherited on fork.
 
+			seccomp
+				- Same as "prctl" above, but all seccomp
+				  threads will enable the mitigation unless
+				  they explicitly opt out.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is prctl.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
 
 			Not specifying this option is equivalent to
 			spectre_v2_user=auto.
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 2adbe7b047fa..032b6009baab 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -233,6 +233,7 @@ enum spectre_v2_user_mitigation {
 	SPECTRE_V2_USER_NONE,
 	SPECTRE_V2_USER_STRICT,
 	SPECTRE_V2_USER_PRCTL,
+	SPECTRE_V2_USER_SECCOMP,
 };
 
 /* The Speculative Store Bypass disable variants */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d0137d10f9a6..c9e304960534 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -256,12 +256,14 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
 	SPECTRE_V2_USER_CMD_PRCTL,
+	SPECTRE_V2_USER_CMD_SECCOMP,
 };
 
 static const char * const spectre_v2_user_strings[] = {
 	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
 	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
 	[SPECTRE_V2_USER_PRCTL]		= "User space: Mitigation: STIBP via prctl",
+	[SPECTRE_V2_USER_SECCOMP]	= "User space: Mitigation: STIBP via seccomp and prctl",
 };
 
 static const struct {
@@ -273,6 +275,7 @@ static const struct {
 	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
 	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
+	{ "seccomp",	SPECTRE_V2_USER_CMD_SECCOMP,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -332,10 +335,16 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
-	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_PRCTL:
 		mode = SPECTRE_V2_USER_PRCTL;
 		break;
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_SECCOMP:
+		if (IS_ENABLED(CONFIG_SECCOMP))
+			mode = SPECTRE_V2_USER_SECCOMP;
+		else
+			mode = SPECTRE_V2_USER_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -347,6 +356,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
 		case SPECTRE_V2_USER_PRCTL:
+		case SPECTRE_V2_USER_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
 		default:
@@ -591,6 +601,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		update_indir_branch_cond();
 		break;
 	}
@@ -833,6 +844,8 @@ void arch_seccomp_spec_mitigate(struct task_struct *task)
 {
 	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
 		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+	if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
+		ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);
 }
 #endif
 
@@ -864,6 +877,7 @@ static int ib_prctl_get(struct task_struct *task)
 	case SPECTRE_V2_USER_NONE:
 		return PR_SPEC_ENABLE;
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		if (task_spec_ib_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (task_spec_ib_disable(task))
@@ -1063,6 +1077,7 @@ static char *stibp_state(void)
 	case SPECTRE_V2_USER_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_USER_PRCTL:
+	case SPECTRE_V2_USER_SECCOMP:
 		if (static_key_enabled(&switch_to_cond_stibp))
 			return ", STIBP: conditional";
 	}

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [tip:x86/pti] x86/speculation: Provide IBPB always command line options
  2018-11-25 18:33 ` [patch V2 28/28] x86/speculation: Provide IBPB always command line options Thomas Gleixner
@ 2018-11-28 14:36   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 112+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jpoimboe, david.c.stewart, aarcange, peterz, tim.c.chen, dwmw,
	casey.schaufler, gregkh, linux-kernel, torvalds, keescook, arjan,
	longman9394, thomas.lendacky, tglx, luto, hpa, jcm,
	asit.k.mallick, mingo, dave.hansen, ak, jkosina

Commit-ID:  55a974021ec952ee460dc31ca08722158639de72
Gitweb:     https://git.kernel.org/tip/55a974021ec952ee460dc31ca08722158639de72
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Sun, 25 Nov 2018 19:33:56 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:14 +0100

x86/speculation: Provide IBPB always command line options

Provide the option to always enable IBPB in combination with 'prctl' and
'seccomp'.

Add the extra command line options and rework the IBPB selection to
evaluate the command instead of the mode selected by the STIBP switch case.
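
For example, booting with

   spectre_v2_user=seccomp,ibpb

keeps STIBP under conditional (seccomp/prctl) control while an IBPB is
issued on every switch between different user space processes.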

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185006.144047038@linutronix.de

---
 Documentation/admin-guide/kernel-parameters.txt | 12 +++++++++
 arch/x86/kernel/cpu/bugs.c                      | 34 +++++++++++++++++--------
 2 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f405281bb202..05a252e5178d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4241,11 +4241,23 @@
 				  per thread.  The mitigation control state
 				  is inherited on fork.
 
+			prctl,ibpb
+				- Like "prctl" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different user
+				  space processes.
+
 			seccomp
 				- Same as "prctl" above, but all seccomp
 				  threads will enable the mitigation unless
 				  they explicitly opt out.
 
+			seccomp,ibpb
+				- Like "seccomp" above, but only STIBP is
+				  controlled per thread. IBPB is issued
+				  always when switching between different
+				  user space processes.
+
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c9e304960534..500278f5308e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -256,7 +256,9 @@ enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_AUTO,
 	SPECTRE_V2_USER_CMD_FORCE,
 	SPECTRE_V2_USER_CMD_PRCTL,
+	SPECTRE_V2_USER_CMD_PRCTL_IBPB,
 	SPECTRE_V2_USER_CMD_SECCOMP,
+	SPECTRE_V2_USER_CMD_SECCOMP_IBPB,
 };
 
 static const char * const spectre_v2_user_strings[] = {
@@ -271,11 +273,13 @@ static const struct {
 	enum spectre_v2_user_cmd	cmd;
 	bool				secure;
 } v2_user_options[] __initdata = {
-	{ "auto",	SPECTRE_V2_USER_CMD_AUTO,	false },
-	{ "off",	SPECTRE_V2_USER_CMD_NONE,	false },
-	{ "on",		SPECTRE_V2_USER_CMD_FORCE,	true  },
-	{ "prctl",	SPECTRE_V2_USER_CMD_PRCTL,	false },
-	{ "seccomp",	SPECTRE_V2_USER_CMD_SECCOMP,	false },
+	{ "auto",		SPECTRE_V2_USER_CMD_AUTO,		false },
+	{ "off",		SPECTRE_V2_USER_CMD_NONE,		false },
+	{ "on",			SPECTRE_V2_USER_CMD_FORCE,		true  },
+	{ "prctl",		SPECTRE_V2_USER_CMD_PRCTL,		false },
+	{ "prctl,ibpb",		SPECTRE_V2_USER_CMD_PRCTL_IBPB,		false },
+	{ "seccomp",		SPECTRE_V2_USER_CMD_SECCOMP,		false },
+	{ "seccomp,ibpb",	SPECTRE_V2_USER_CMD_SECCOMP_IBPB,	false },
 };
 
 static void __init spec_v2_user_print_cond(const char *reason, bool secure)
@@ -321,6 +325,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 {
 	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
 	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+	enum spectre_v2_user_cmd cmd;
 
 	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
 		return;
@@ -329,17 +334,20 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
 		smt_possible = false;
 
-	switch (spectre_v2_parse_user_cmdline(v2_cmd)) {
+	cmd = spectre_v2_parse_user_cmdline(v2_cmd);
+	switch (cmd) {
 	case SPECTRE_V2_USER_CMD_NONE:
 		goto set_mode;
 	case SPECTRE_V2_USER_CMD_FORCE:
 		mode = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_PRCTL:
+	case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
 		mode = SPECTRE_V2_USER_PRCTL;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_SECCOMP:
+	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 		if (IS_ENABLED(CONFIG_SECCOMP))
 			mode = SPECTRE_V2_USER_SECCOMP;
 		else
@@ -351,12 +359,15 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
 
-		switch (mode) {
-		case SPECTRE_V2_USER_STRICT:
+		switch (cmd) {
+		case SPECTRE_V2_USER_CMD_FORCE:
+		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
+		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 			static_branch_enable(&switch_mm_always_ibpb);
 			break;
-		case SPECTRE_V2_USER_PRCTL:
-		case SPECTRE_V2_USER_SECCOMP:
+		case SPECTRE_V2_USER_CMD_PRCTL:
+		case SPECTRE_V2_USER_CMD_AUTO:
+		case SPECTRE_V2_USER_CMD_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
 		default:
@@ -364,7 +375,8 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 		}
 
 		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
-			mode == SPECTRE_V2_USER_STRICT ? "always-on" : "conditional");
+			static_key_enabled(&switch_mm_always_ibpb) ?
+			"always-on" : "conditional");
 	}
 
 	/* If enhanced IBRS is enabled no STIBP required */

^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment
  2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
  2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
@ 2018-11-29 14:27   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:27 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:29PM +0100, Thomas Gleixner wrote:
> "Reduced Data Speculation" is an obsolete term. The correct new name is
> "Speculative store bypass disable" - which is abbreviated into SSBD.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> 
> ---
>  arch/x86/include/asm/thread_info.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -79,7 +79,7 @@ struct thread_info {
>  #define TIF_SIGPENDING		2	/* signal pending */
>  #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
>  #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
> -#define TIF_SSBD			5	/* Reduced data speculation */
> +#define TIF_SSBD		5	/* Speculative store bypass disable */
>  #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
>  #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
>  #define TIF_SECCOMP		8	/* secure computing */
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline()
  2018-11-25 18:33 ` [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
  2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
@ 2018-11-29 14:28   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:30PM +0100, Thomas Gleixner wrote:
> Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline()
> to save an indentation level.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |   27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -276,22 +276,21 @@ static enum spectre_v2_mitigation_cmd __
>  
>  	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
>  		return SPECTRE_V2_CMD_NONE;
> -	else {
> -		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
> -		if (ret < 0)
> -			return SPECTRE_V2_CMD_AUTO;
>  
> -		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
> -			if (!match_option(arg, ret, mitigation_options[i].option))
> -				continue;
> -			cmd = mitigation_options[i].cmd;
> -			break;
> -		}
> +	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
> +	if (ret < 0)
> +		return SPECTRE_V2_CMD_AUTO;
>  
> -		if (i >= ARRAY_SIZE(mitigation_options)) {
> -			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
> -			return SPECTRE_V2_CMD_AUTO;
> -		}
> +	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
> +		if (!match_option(arg, ret, mitigation_options[i].option))
> +			continue;
> +		cmd = mitigation_options[i].cmd;
> +		break;
> +	}
> +
> +	if (i >= ARRAY_SIZE(mitigation_options)) {
> +		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
> +		return SPECTRE_V2_CMD_AUTO;
>  	}
>  
>  	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  2018-11-25 18:33 ` [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
  2018-11-28 14:21   ` [tip:x86/pti] " tip-bot for Tim Chen
@ 2018-11-29 14:28   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:31PM +0100, Thomas Gleixner wrote:
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |    5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -847,8 +847,6 @@ static ssize_t l1tf_show_state(char *buf
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> -	int ret;
> -
>  	if (!boot_cpu_has_bug(bug))
>  		return sprintf(buf, "Not affected\n");
>  
> @@ -866,13 +864,12 @@ static ssize_t cpu_show_common(struct de
>  		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  
>  	case X86_BUG_SPECTRE_V2:
> -		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> +		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
>  			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
>  			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
>  			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
>  			       spectre_v2_module_string());
> -		return ret;
>  
>  	case X86_BUG_SPEC_STORE_BYPASS:
>  		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 04/28] x86/speculation: Reorganize cpu_show_common()
  2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
  2018-11-26 15:08   ` Borislav Petkov
  2018-11-28 14:22   ` [tip:x86/pti] x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common() tip-bot for Tim Chen
@ 2018-11-29 14:29   ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:29 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:32PM +0100, Thomas Gleixner wrote:
> The Spectre V2 printout in cpu_show_common() handles conditionals for the
> various mitigation methods directly in the sprintf() argument list. That's
> hard to read and will become unreadable if more complex decisions need to
> be made for a particular method.
> 
> Move the conditionals for STIBP and IBPB string selection into helper
> functions, so they can be extended later on.
> 

Yeeey!


Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |   20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -844,6 +844,22 @@ static ssize_t l1tf_show_state(char *buf
>  }
>  #endif
>  
> +static char *stibp_state(void)
> +{
> +	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
> +		return ", STIBP";
> +	else
> +		return "";
> +}
> +
> +static char *ibpb_state(void)
> +{
> +	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
> +		return ", IBPB";
> +	else
> +		return "";
> +}
> +
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> @@ -865,9 +881,9 @@ static ssize_t cpu_show_common(struct de
>  
>  	case X86_BUG_SPECTRE_V2:
>  		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> -			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
> +			       ibpb_state(),
>  			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> -			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> +			       stibp_state(),
>  			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
>  			       spectre_v2_module_string());
>  
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-25 18:33 ` [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
  2018-11-28 14:22   ` [tip:x86/pti] " tip-bot for Tim Chen
@ 2018-11-29 14:35   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:33PM +0100, Thomas Gleixner wrote:
> If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
> user space exploits from the hyperthread sibling.
> 
> Disable STIBP when enhanced IBRS is used.
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |    7 +++++++
>  1 file changed, 7 insertions(+)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -321,6 +321,10 @@ static bool stibp_needed(void)
>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>  		return false;
>  
> +	/* Enhanced IBRS makes using STIBP unnecessary. */
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> +		return false;
> +
>  	if (!boot_cpu_has(X86_FEATURE_STIBP))
>  		return false;
>  
> @@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf
>  
>  static char *stibp_state(void)
>  {
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> +		return "";
> +
>  	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
>  		return ", STIBP";
>  	else
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 06/28] x86/speculation: Rename SSBD update functions
  2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
  2018-11-26 15:24   ` Borislav Petkov
  2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
@ 2018-11-29 14:37   ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:34PM +0100, Thomas Gleixner wrote:
> During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
> to changes of the TIF_SSBD flag in the current and next running task.
> 
> Currently, only the bit controlling speculative store bypass disable in
> SPEC_CTRL MSR is updated and the related update functions all have
> "speculative_store" or "ssb" in their names.
> 
> For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
> updated as well, which makes the SSB names inadequate.
> 
> Rename the "speculative_store*" functions to a more generic name.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update
  2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
  2018-11-26 15:47   ` Borislav Petkov
  2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Tim Chen
@ 2018-11-29 14:41   ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:41 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:35PM +0100, Thomas Gleixner wrote:
> The logic to detect whether the previous and next task's flags relevant
> to the speculation control MSRs have changed is spread out across
> multiple functions.
> 
> Consolidate all checks needed for updating speculation control MSRs into
> the new __speculation_ctrl_update() helper function.
> 
> This makes it easy to pick the right speculation control MSR and the bits
> in the MSR that need updating based on TIF flag changes.
> 
> Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

.. and I also have two tiny comments below - feel free to
incorporate or not them in.
> 
> ---
>  arch/x86/kernel/process.c |   42 ++++++++++++++++++++++++++++++++----------
>  1 file changed, 32 insertions(+), 10 deletions(-)
> 
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -397,25 +397,48 @@ static __always_inline void amd_set_ssb_
>  
>  static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>  {
> -	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> +	u64 msr = x86_spec_ctrl_base;
> +
> +	/*
> +	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
> +	 * touched.
> +	 */

I had a bit of a hard time parsing that. Could it perhaps be changed to:

"If X86_FEATURE_SSBD is off (not set), we MUST leave the SSBD bit alone"
> +	if (static_cpu_has(X86_FEATURE_SSBD))
> +		msr |= ssbd_tif_to_spec_ctrl(tifn);
>  
>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  }
>  
> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
> +/*
> + * Update the MSRs managing speculation control, during context switch.
> + *
> + * tifp: Previous task's thread flags
> + * tifn: Next task's thread flags
> + */
> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
> +						      unsigned long tifn)
>  {
> -	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> -		amd_set_ssb_virt_state(tifn);
> -	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> -		amd_set_core_ssb_state(tifn);
> -	else
> +	bool updmsr = false;
> +
> +	/* If TIF_SSBD is different, select the proper mitigation method */
> +	if ((tifp ^ tifn) & _TIF_SSBD) {
> +		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +			amd_set_ssb_virt_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> +			amd_set_core_ssb_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_SSBD))
> +			updmsr  = true;
                              ^
Nothing really big, but you have an extra space here.
> +	}
> +
> +	if (updmsr)
>  		spec_ctrl_update_msr(tifn);
>  }
>  
>  void speculation_ctrl_update(unsigned long tif)
>  {
> +	/* Forced update. Make sure all relevant TIF flags are different */
>  	preempt_disable();
> -	__speculation_ctrl_update(tif);
> +	__speculation_ctrl_update(~tif, tif);
>  	preempt_enable();
>  }
>  
> @@ -451,8 +474,7 @@ void __switch_to_xtra(struct task_struct
>  	if ((tifp ^ tifn) & _TIF_NOCPUID)
>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>  
> -	if ((tifp ^ tifn) & _TIF_SSBD)
> -		__speculation_ctrl_update(tifn);
> +	__speculation_ctrl_update(tifp, tifn);
>  }
>  
>  /*
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 08/28] sched/smt: Make sched_smt_present track topology
  2018-11-25 18:33 ` [patch V2 08/28] sched/smt: Make sched_smt_present track topology Thomas Gleixner
  2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Peter Zijlstra (Intel)
@ 2018-11-29 14:42   ` Konrad Rzeszutek Wilk
  2018-11-29 14:50     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:42 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> Currently the 'sched_smt_present' static key is enabled when at CPU bringup
> SMT topology is observed, but it is never disabled. However there is demand
> to also disable the key when the topology changes such that there is no SMT
> present anymore.
> 
> Implement this by making the key count the number of cores that have SMT
> enabled.
> 
> In particular, the SMT topology bits are set before interrupts are enabled
> and similarly, are cleared after interrupts are disabled for the last time
> and the CPU dies.

I see that the number you used is '2', but I thought that there are some
CPUs out there (Knights Landing?) that could have four threads?

Would it be better to have a generic function that would provide the
amount of threads the platform does expose - and use that instead
of a constant value? 

> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  kernel/sched/core.c |   19 +++++++++++--------
>  1 file changed, 11 insertions(+), 8 deletions(-)
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
>  
>  #ifdef CONFIG_SCHED_SMT
>  	/*
> -	 * The sched_smt_present static key needs to be evaluated on every
> -	 * hotplug event because at boot time SMT might be disabled when
> -	 * the number of booted CPUs is limited.
> -	 *
> -	 * If then later a sibling gets hotplugged, then the key would stay
> -	 * off and SMT scheduling would never be functional.
> +	 * When going up, increment the number of cores with SMT present.
>  	 */
> -	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
> -		static_branch_enable_cpuslocked(&sched_smt_present);
> +	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> +		static_branch_inc_cpuslocked(&sched_smt_present);
>  #endif
>  	set_cpu_active(cpu, true);
>  
> @@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
>  	 */
>  	synchronize_rcu_mult(call_rcu, call_rcu_sched);
>  
> +#ifdef CONFIG_SCHED_SMT
> +	/*
> +	 * When going down, decrement the number of cores with SMT present.
> +	 */
> +	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> +		static_branch_dec_cpuslocked(&sched_smt_present);
> +#endif
> +
>  	if (!sched_smp_initialized)
>  		return 0;
>  
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled
  2018-11-25 18:33 ` [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
  2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
@ 2018-11-29 14:44   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:44 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:37PM +0100, Thomas Gleixner wrote:
> CONFIG_SCHED_SMT is enabled by all distros, so there is no real point in
> making it configurable. The runtime overhead in the core scheduler code is
> minimal because the actual SMT scheduling parts are conditional on a static
> key.
> 
> This allows exposing the scheduler's SMT state static key to the
> speculation control code. Alternatively the scheduler's static key could be
> made always available when CONFIG_SMP is enabled, but that's just adding an
> unused static key to every other architecture for nothing.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


> 
> ---
>  arch/x86/Kconfig |    8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1001,13 +1001,7 @@ config NR_CPUS
>  	  to the kernel image.
>  
>  config SCHED_SMT
> -	bool "SMT (Hyperthreading) scheduler support"
> -	depends on SMP
> -	---help---
> -	  SMT scheduler support improves the CPU scheduler's decision making
> -	  when dealing with Intel Pentium 4 chips with HyperThreading at a
> -	  cost of slightly increased overhead in some places. If unsure say
> -	  N here.
> +	def_bool y if SMP
>  
>  config SCHED_MC
>  	def_bool y
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 10/28] sched/smt: Expose sched_smt_present static key
  2018-11-25 18:33 ` [patch V2 10/28] sched/smt: Expose sched_smt_present static key Thomas Gleixner
  2018-11-28 14:25   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
@ 2018-11-29 14:44   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:44 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Sun, Nov 25, 2018 at 07:33:38PM +0100, Thomas Gleixner wrote:
> Make the scheduler's 'sched_smt_present' static key globally available, so
> it can be used in the x86 speculation control code.
> 
> Provide a query function and a stub for the CONFIG_SMP=n case.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
> 
> v1 -> v2: Move SMT stuff to separate header. Unbreaks ia64 build
> 
> ---
>  include/linux/sched/smt.h |   18 ++++++++++++++++++
>  kernel/sched/sched.h      |    4 +---
>  2 files changed, 19 insertions(+), 3 deletions(-)
> 
> --- /dev/null
> +++ b/include/linux/sched/smt.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_SCHED_SMT_H
> +#define _LINUX_SCHED_SMT_H
> +
> +#include <linux/static_key.h>
> +
> +#ifdef CONFIG_SCHED_SMT
> +extern struct static_key_false sched_smt_present;
> +
> +static __always_inline bool sched_smt_active(void)
> +{
> +	return static_branch_likely(&sched_smt_present);
> +}
> +#else
> +static inline bool sched_smt_active(void) { return false; }
> +#endif
> +
> +#endif
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -23,6 +23,7 @@
>  #include <linux/sched/prio.h>
>  #include <linux/sched/rt.h>
>  #include <linux/sched/signal.h>
> +#include <linux/sched/smt.h>
>  #include <linux/sched/stat.h>
>  #include <linux/sched/sysctl.h>
>  #include <linux/sched/task.h>
> @@ -936,9 +937,6 @@ static inline int cpu_of(struct rq *rq)
>  
>  
>  #ifdef CONFIG_SCHED_SMT
> -
> -extern struct static_key_false sched_smt_present;
> -
>  extern void __update_idle_core(struct rq *rq);
>  
>  static inline void update_idle_core(struct rq *rq)
> 
> 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 08/28] sched/smt: Make sched_smt_present track topology
  2018-11-29 14:42   ` [patch V2 08/28] " Konrad Rzeszutek Wilk
@ 2018-11-29 14:50     ` Konrad Rzeszutek Wilk
  2018-11-29 15:48       ` Peter Zijlstra
  0 siblings, 1 reply; 112+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-11-29 14:50 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Thu, Nov 29, 2018 at 09:42:56AM -0500, Konrad Rzeszutek Wilk wrote:
> On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> > Currently the 'sched_smt_present' static key is enabled when at CPU bringup
> > SMT topology is observed, but it is never disabled. However there is demand
> > to also disable the key when the topology changes such that there is no SMT
> > present anymore.
> > 
> > Implement this by making the key count the number of cores that have SMT
> > enabled.
> > 
> > In particular, the SMT topology bits are set before interrupts are enabled
> > and similarly, are cleared after interrupts are disabled for the last time
> > and the CPU dies.
> 
> I see that the number you used is '2', but I thought that there are some
> CPUs out there (Knights Landing?) that could have four threads?
> 
> Would it be better to have a generic function that would provide the
> amount of threads the platform does expose - and use that instead
> of a constant value? 

Nevermind - this would work even with 4 threads as we would hit the
number '2' before '4' and the key would be turned on/off properly.

Sorry for the noise.

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thank you!
> 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > 
> > ---
> >  kernel/sched/core.c |   19 +++++++++++--------
> >  1 file changed, 11 insertions(+), 8 deletions(-)
> > 
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
> >  
> >  #ifdef CONFIG_SCHED_SMT
> >  	/*
> > -	 * The sched_smt_present static key needs to be evaluated on every
> > -	 * hotplug event because at boot time SMT might be disabled when
> > -	 * the number of booted CPUs is limited.
> > -	 *
> > -	 * If then later a sibling gets hotplugged, then the key would stay
> > -	 * off and SMT scheduling would never be functional.
> > +	 * When going up, increment the number of cores with SMT present.
> >  	 */
> > -	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
> > -		static_branch_enable_cpuslocked(&sched_smt_present);
> > +	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> > +		static_branch_inc_cpuslocked(&sched_smt_present);
> >  #endif
> >  	set_cpu_active(cpu, true);
> >  
> > @@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
> >  	 */
> >  	synchronize_rcu_mult(call_rcu, call_rcu_sched);
> >  
> > +#ifdef CONFIG_SCHED_SMT
> > +	/*
> > +	 * When going down, decrement the number of cores with SMT present.
> > +	 */
> > +	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> > +		static_branch_dec_cpuslocked(&sched_smt_present);
> > +#endif
> > +
> >  	if (!sched_smp_initialized)
> >  		return 0;
> >  
> > 
> > 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 08/28] sched/smt: Make sched_smt_present track topology
  2018-11-29 14:50     ` Konrad Rzeszutek Wilk
@ 2018-11-29 15:48       ` Peter Zijlstra
  0 siblings, 0 replies; 112+ messages in thread
From: Peter Zijlstra @ 2018-11-29 15:48 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Thu, Nov 29, 2018 at 09:50:13AM -0500, Konrad Rzeszutek Wilk wrote:
> On Thu, Nov 29, 2018 at 09:42:56AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> > > Currently the 'sched_smt_present' static key is enabled when SMT topology
> > > is observed at CPU bringup, but it is never disabled. However, there is
> > > demand to also disable the key when the topology changes such that no SMT
> > > is present anymore.
> > > 
> > > Implement this by making the key count the number of cores that have SMT
> > > enabled.
> > > 
> > > In particular, the SMT topology bits are set before interrupts are enabled
> > > and similarly, are cleared after interrupts are disabled for the last time
> > > and the CPU dies.
> > 
> > I see that the number you used is '2', but I thought that there are some
> > CPUs out there (Knights Landing?) that could have four threads?

This code is generic, Sparc, Power and I think MIPS all have parts with
SMT8.

> > Would it be better to have a generic function that would provide the
> > number of threads the platform exposes - and use that instead
> > of a constant value?
> 
> Never mind - this would work even with 4 threads, as we would hit the
> number '2' before '4' and the key would be turned on/off properly.

Indeed so, 2 is the point where we either have more than 1 sibling, or
will go back to 1.
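
To make that concrete, here is a minimal user-space model of the counting
scheme (a standalone C sketch with made-up names, not the kernel code; the
kernel does this with static_branch_inc/dec_cpuslocked() on
sched_smt_present). The refcount changes exactly when a core's online
sibling count crosses 2, in either direction, so SMT4 and SMT8 parts
balance out the same way as SMT2:

	#include <assert.h>
	#include <stdio.h>

	static int smt_key;	/* models the sched_smt_present refcount */

	/* 'weight' is the core's online sibling count including this CPU. */
	static void cpu_activate(int weight)
	{
		if (weight == 2)	/* core just gained a second thread */
			smt_key++;
	}

	static void cpu_deactivate(int weight)
	{
		if (weight == 2)	/* core about to drop back to one thread */
			smt_key--;
	}

	int main(void)
	{
		int w;

		/* Bring up one SMT4 core thread by thread: weights 1,2,3,4. */
		for (w = 1; w <= 4; w++)
			cpu_activate(w);
		assert(smt_key == 1);	/* enabled once, at weight == 2 */

		/* Take it down again: weights 4,3,2,1 on the way out. */
		for (w = 4; w >= 1; w--)
			cpu_deactivate(w);
		assert(smt_key == 0);	/* disabled once, at weight == 2 */

		printf("smt_key = %d\n", smt_key);
		return 0;
	}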

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-28 14:24 ` Thomas Gleixner
@ 2018-11-29 19:02   ` Tim Chen
  0 siblings, 0 replies; 112+ messages in thread
From: Tim Chen @ 2018-11-29 19:02 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On 11/28/2018 06:24 AM, Thomas Gleixner wrote:

> 
> I've integrated the latest review feedback and the change which plugs the
> TIF async update issue and pushed all of it out to:
> 
>    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/pti
> 
> For the stable 4.14.y and 4.19.y trees, I've collected the missing bits and
> pieces and uploaded tarballs which contain everything ready for consumption:
> 
>    https://tglx.de/~tglx/patches-spec-4.14.y.tar.xz
> 
>       sha256 of patches-spec-4.14.y.tar:
>       3d2976ef06ab5556c1c6cba975b0c9390eb57f43c506fb7f8834bb484feb9b17
> 
>    https://tglx.de/~tglx/patches-spec-4.19.y.tar.xz
> 
>       sha256 of patches-spec-4.19.y.tar:
>       b7666cf378ad63810a17e98a471aae81a49738c552dbe912aea49de83f8145cc
> 
> Thanks everyone for review, discussion, testing ... !

Big thanks to Thomas for getting all these changes merged.

Tim

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 20:40   ` Linus Torvalds
  2018-11-25 20:52     ` Jiri Kosina
  2018-11-25 22:28     ` Thomas Gleixner
@ 2018-12-04  1:38     ` Tim Chen
  2018-12-04  8:39       ` Jiri Kosina
  2018-12-04 17:20       ` Linus Torvalds
  2 siblings, 2 replies; 112+ messages in thread
From: Tim Chen @ 2018-12-04  1:38 UTC (permalink / raw)
  To: Linus Torvalds, Thomas Gleixner
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook,
	Jason Brandt

On 11/25/2018 12:40 PM, Linus Torvalds wrote:
> [ You forgot to fix your quilt setup.. ]
> 
> On Sun, 25 Nov 2018, Thomas Gleixner wrote:
>>
>> The mitigation guide documents how STIBP works:
>>
>>    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
>>    prevents the predicted targets of indirect branches on any logical
>>    processor of that core from being controlled by software that executes
>>    (or executed previously) on another logical processor of the same core.
> 
> Can we please just fix this stupid lie?
> 
> Yes, Intel calls it "STIBP" and tries to make it out to be about the
> indirect branch predictor being per-SMT thread.
> 
> But the reason it is unacceptable is apparently because in reality it just
> disables indirect branch prediction entirely. So yes, *technically* it's
> true that that limits indirect branch prediction to just a single SMT
> core, but in reality it is just a "go really slow" mode.
> 
> If STIBP had actually just keyed off the logical SMT thread, we wouldn't
> need to have worried about it in the first place.
> 
> So let's document reality rather than Intel's Pollyanna world-view.
> 
> Reality matters. It's why we had to go through all this. Lying about things
> and making it appear like it's not a big deal was why the original
> patch made it through without people noticing.
> 


To make the usage of STIBP and its working principles clear,
here are some additional explanations of STIBP from our Intel
HW architects.  This should also help answer some of the questions
from Thomas and others on STIBP's usage with IBPB and IBRS.

Thanks.

Tim

---

STIBP
^^^^^
Implementations of STIBP on existing Core-family processors (where STIBP
functionality was added through a microcode update) work by disabling
branch predictors that both:

    1. Contain indirect branch predictions for both hardware threads, and
    2. Do not contain a dedicated thread ID bit 

Unlike IBRS and IBPB, STIBP does not affect all branch predictors
that contain indirect branch predictions. STIBP only affects those
branch predictors where software on one hardware thread can create a
prediction that can then be used by the other hardware thread. This is
part of what makes STIBP have lower performance overhead than IBRS on
current implementations.

IBRS is a superset of STIBP functionality; thus, setting both STIBP and
IBRS is redundant. On processors without enhanced IBRS, we recommend
using retpoline or setting IBRS only during ring 0 and VMM modes. IBPB
should be used when switching to a different process/guest that does
not trust the last process/guest that ran on a particular hardware
thread. For performance reasons, IBRS should not be left set during
application execution.

Processes that are particularly security-sensitive may set STIBP when
they execute to prevent their indirect branch predictions from being
controlled by another hardware thread on the same physical core. On
existing Core-family processors, this comes at significant performance
cost to both hardware threads due to disabling some indirect branch
predictors (as described earlier). Because of this, we do not recommend
setting STIBP during all application execution.

STIBP is architecturally defined to apply to all hardware threads on
the physical core on which it is set. Because of this, STIBP can be set
when running an untrusted process to ensure that the untrusted process
does not control the indirect branch predictions of software running
on other hardware threads (for example, threads that do not have STIBP
or IBRS set) while STIBP is still set. Before running with both STIBP
and IBRS cleared, an IBPB can be executed to ensure that any indirect
branch predictions that were installed by the untrusted process while
STIBP was set are not used by the other hardware thread once STIBP and
IBRS are cleared. Regardless of the usage model, STIBP should be used
judiciously due to its impact on performance.
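
As a rough sketch of that sequence in kernel-style C (this is not the code
from this patch series, which drives the MSR updates through TIF flags; the
MSR and bit names are the real asm/msr-index.h constants, and preserving
other SPEC_CTRL bits such as SSBD is elided for brevity):

	#include <asm/msr-index.h>
	#include <asm/msr.h>

	/* Entering an untrusted task: keep its indirect branch predictions
	 * from controlling, or being controlled by, the sibling thread. */
	static void enter_untrusted(void)
	{
		wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_STIBP);
	}

	/* Leaving it: flush the predictions it installed while STIBP was
	 * set *before* clearing the bit, so the sibling never consumes
	 * them once STIBP (and IBRS) are clear. */
	static void leave_untrusted(void)
	{
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
		wrmsrl(MSR_IA32_SPEC_CTRL, 0);
	}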

Enhanced IBRS is a feature that also provides a superset of STIBP
functionality; therefore it is redundant to set both STIBP and enhanced
IBRS. Processors with enhanced IBRS add a thread ID bit to the needed
indirect branch predictors and use that bit to ensure that indirect
branch predictions are only used by the thread that created them.

On processors with enhanced IBRS support, we recommend setting IBRS to 1
and leaving it set. The traditional IBRS model of setting IBRS only during ring
0 execution is just as secure on parts with enhanced IBRS support as it is
on parts with vanilla IBRS, but the WRMSRs on ring transitions and/or VM
exit/entry will cost performance compared to just leaving IBRS set. Again,
there is no need to use STIBP when IBRS is set. However, IBPB should
still be used when switching to a different application/guest that does
not trust the last application/guest that ran on a particular hardware
thread. Guests in a VM migration pool that includes hardware without
enhanced IBRS may not have IA32_ARCH_CAPABILITIES.IBRS_ALL (enhanced IBRS)
enumerated to them and thus may use the traditional IBRS usage model of
setting IBRS only in ring 0. For performance reasons, once a guest has
been shown to frequently write IA32_SPEC_CTRL, we do not recommend that
the VMM cause a VM exit on such WRMSRs. The VMM running on processors
that support enhanced IBRS should allow the IA32_SPEC_CTRL-writing guest
to control guest IA32_SPEC_CTRL. The VMM should thus set IBRS after VM
exits from such guests to protect itself (or use alternative techniques
like retpoline, secret removal, or indirect branch removal).
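
A VMM-side sketch of that recommendation, with hypothetical helper names
(disable_spec_ctrl_intercept() stands in for whatever mechanism the VMM
uses to stop trapping the MSR; this is not KVM's actual code):

	#include <linux/types.h>
	#include <asm/msr-index.h>
	#include <asm/msr.h>

	void disable_spec_ctrl_intercept(void);	/* hypothetical helper */

	static bool guest_owns_spec_ctrl;

	/* First trapped WRMSR to IA32_SPEC_CTRL from the guest: hand the
	 * MSR over rather than taking a VM exit on every write. */
	static void handle_wrmsr_spec_ctrl(u64 val)
	{
		guest_owns_spec_ctrl = true;
		disable_spec_ctrl_intercept();
		wrmsrl(MSR_IA32_SPEC_CTRL, val);
	}

	/* After a VM exit on an enhanced IBRS part: the guest may have left
	 * IBRS clear, so re-establish it (or rely on retpolines) before
	 * running host code. */
	static void on_vmexit(void)
	{
		if (guest_owns_spec_ctrl)
			wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
	}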

 

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-12-04  1:38     ` Tim Chen
@ 2018-12-04  8:39       ` Jiri Kosina
  2018-12-04  9:43         ` Arjan van de Ven
  2018-12-04  9:46         ` Arjan van de Ven
  2018-12-04 17:20       ` Linus Torvalds
  1 sibling, 2 replies; 112+ messages in thread
From: Jiri Kosina @ 2018-12-04  8:39 UTC (permalink / raw)
  To: Tim Chen
  Cc: Linus Torvalds, Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook, Jason Brandt

On Mon, 3 Dec 2018, Tim Chen wrote:

> > Can we please just fix this stupid lie?
> > 
> > Yes, Intel calls it "STIBP" and tries to make it out to be about the
> > indirect branch predictor being per-SMT thread.
> > 
> > But the reason it is unacceptable is apparently because in reality it just
> > disables indirect branch prediction entirely. So yes, *technically* it's
> > true that that limits indirect branch prediction to just a single SMT
> > core, but in reality it is just a "go really slow" mode.
> > 
> > If STIBP had actually just keyed off the logical SMT thread, we wouldn't
> > need to have worried about it in the first place.
> > 
> > So let's document reality rather than Intel's Pollyanna world-view.
> > 
> > Reality matters. It's why we had to go through all this. Lying about things
> > and making it appear like it's not a big deal was why the original
> > patch made it through without people noticing.
> > 
> 
> 
> To make the usage of STIBP and its working principles clear,
> here are some additional explanations of STIBP from our Intel
> HW architects.  This should also help answer some of the questions
> from Thomas and others on STIBP's usage with IBPB and IBRS.

Thanks a lot, this indeed does shed some light.

I have one question though:

[ ... snip ... ]
> On processors with enhanced IBRS support, we recommend setting IBRS to 1
> and leaving it set.

Then why doesn't a CPU with EIBRS support actually *default* to '1', with
an opt-out possibility for the OS?

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-12-04  8:39       ` Jiri Kosina
@ 2018-12-04  9:43         ` Arjan van de Ven
  2018-12-04  9:46         ` Arjan van de Ven
  1 sibling, 0 replies; 112+ messages in thread
From: Arjan van de Ven @ 2018-12-04  9:43 UTC (permalink / raw)
  To: Jiri Kosina, Tim Chen
  Cc: Linus Torvalds, Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Jason Brandt

>> On processors with enhanced IBRS support, we recommend setting IBRS to 1
>> and leaving it set.
> 
> Then why doesn't a CPU with EIBRS support actually *default* to '1', with
> an opt-out possibility for the OS?

The BIOSes could indeed set this up this way.

Do you want to trust the BIOS to get it right?

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-12-04  8:39       ` Jiri Kosina
  2018-12-04  9:43         ` Arjan van de Ven
@ 2018-12-04  9:46         ` Arjan van de Ven
  1 sibling, 0 replies; 112+ messages in thread
From: Arjan van de Ven @ 2018-12-04  9:46 UTC (permalink / raw)
  To: Jiri Kosina, Tim Chen
  Cc: Linus Torvalds, Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Jason Brandt

>> On processors with enhanced IBRS support, we recommend setting IBRS to 1
>> and leaving it set.
> 
> Then why doesn't a CPU with EIBRS support actually *default* to '1', with
> an opt-out possibility for the OS?

(slightly longer answer)

You can pretty much assume that on these CPUs, IBRS doesn't actually do
anything (e.g. it is just a scratch bit).

We could debate (and did :-)) for some time what the default value should be
at boot, but it is one of those minor issues that should not hold up getting
things out.

It could well be that the CPUs that do this will ship with 1 as the default,
but that is hard to guarantee across many products and different CPU vendors
when time was tight.


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-12-04  1:38     ` Tim Chen
  2018-12-04  8:39       ` Jiri Kosina
@ 2018-12-04 17:20       ` Linus Torvalds
  2018-12-04 18:58         ` Tim Chen
  1 sibling, 1 reply; 112+ messages in thread
From: Linus Torvalds @ 2018-12-04 17:20 UTC (permalink / raw)
  To: Tim Chen
  Cc: Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook, jason.w.brandt

On Mon, Dec 3, 2018 at 5:38 PM Tim Chen <tim.c.chen@linux.intel.com> wrote:
>
> To make the usage of STIBP and its working principles clear,
> here are some additional explanations of STIBP from our Intel
> HW architects.  This should also help answer some of the questions
> from Thomas and others on STIBP's usage with IBPB and IBRS.
>
> Thanks.
>
> Tim
>
> ---
>
> STIBP
> ^^^^^
> Implementations of STIBP on existing Core-family processors (where STIBP
> functionality was added through a microcode update) work by disabling
> branch predictors that both:
>
>     1. Contain indirect branch predictions for both hardware threads, and
>     2. Do not contain a dedicated thread ID bit

Honestly, it still feels entirely misguided to me.

The above is not STIBP. It's just "disable IB". There's nothing "ST" about it.

So on processors where there is no thread ID bit (or per-thread
predictors), Intel simply SHOULD NOT EXPOSE this at all.

As it is, I refuse to call this shit "STIBP", because on current CPUs
that's simply a lie.

Being "technically correct" is not an excuse. It's just lying. I would
really hope that we restrict the lying to politicians, and not do it
in technical documentation.

               Linus

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
                     ` (2 preceding siblings ...)
  2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
@ 2018-12-04 18:45   ` Dave Hansen
  3 siblings, 0 replies; 112+ messages in thread
From: Dave Hansen @ 2018-12-04 18:45 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

>  static const char * const spectre_v2_user_strings[] = {
>  	[SPECTRE_V2_USER_NONE]		= "User space: Vulnerable",
>  	[SPECTRE_V2_USER_STRICT]	= "User space: Mitigation: STIBP protection",
>  	[SPECTRE_V2_USER_PRCTL]		= "User space: Mitigation: STIBP via prctl",
> +	[SPECTRE_V2_USER_SECCOMP]	= "User space: Mitigation: STIBP via seccomp and prctl",
>  };

Since there's some heartburn about the STIBP naming, should we make this
more generic?  Maybe something like "SMT hardening", so it says:

	"User space: Mitigation: SMT hardening via prctl"

or,

 	"User space: Mitigation: maybe go slow on indirect branches via prctl"

if we're trying to be more precise about the effects. :)

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode
  2018-12-04 17:20       ` Linus Torvalds
@ 2018-12-04 18:58         ` Tim Chen
  0 siblings, 0 replies; 112+ messages in thread
From: Tim Chen @ 2018-12-04 18:58 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook, jason.w.brandt

On 12/04/2018 09:20 AM, Linus Torvalds wrote:

>> STIBP
>> ^^^^^
>> Implementations of STIBP on existing Core-family processors (where STIBP
>> functionality was added through a microcode update) work by disabling
>> branch predictors that both:
>>
>>     1. Contain indirect branch predictions for both hardware threads, and
>>     2. Do not contain a dedicated thread ID bit
> 
> Honestly, it still feels entirely misguided to me.
> 
> The above is not STIBP. It's just "disable IB". There's nothing "ST" about it.
> 
> So on processors where there is no thread ID bit (or per-thread
> predictors), Intel simply SHOULD NOT EXPOSE this at all.
> 
> As it is, I refuse to call this shit "STIBP", because on current CPUs
> that's simply a lie.
> 
> Being "technically correct" is not an excuse. It's just lying. I would
> really hope that we restrict the lying to politicians, and not do it
> in technical documentation.
> 

Linus,

I consulted our HW architects to get their thinking behind the STIBP name
and why it is exposed on CPUs without a thread ID bit.

1) Why expose STIBP even when it is just a scratch bit?

VM migration pools prefer that bits which guests have direct access to
(as we recommend for IA32_SPEC_CTRL) do not cause #GP when the guest is
migrated to different processors, in order to prevent guests from crashing
or from requiring restricted VM migration targets. That is why we decided
to allow the STIBP bit to be set and to return the last value written
even on parts where it has no other effect (e.g. Atom parts without
multithreading). There was also discomfort with allowing a bit to be
set, to return the last value written, and to meet the architecturally
documented behavior, but to not enumerate that it is supported. Not
enumerating STIBP would make it more difficult for software to understand
things like how the CPU does reserved bit checks. That is why we enumerate
and support STIBP even when it affects no branch predictors.
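
In other words, even on a part where the bit changes no predictor state, the
architecturally guaranteed behavior is that this kernel-style sequence
neither faults nor loses the written value (a sketch, not code from this
series):

	u64 val;

	wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_STIBP);	/* no #GP */
	rdmsrl(MSR_IA32_SPEC_CTRL, val);		/* bit 1 reads back set */
	/* On e.g. a non-SMT Atom this was a pure scratch write, but a
	 * guest migrated there keeps running instead of crashing. */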

2) Why did we not call STIBP "disable IB":

 * It does not disable all indirect branch predictors (on current Core), just a subset
 * It does not disable any indirect branch predictors (on future Core)
 * It does not disable any indirect branch predictors (on Atom parts with no SMT)
 * It does more than disable some indirect branch predictors (on some non-Core)

3) Philosophy for naming architectural bits.

The microcode updates and future hardware changes implement a variety
of different micro-architectural behaviors in order to achieve the
goals behind each MSR bit. This means that the names aren't the most
descriptive for each individual project. Had we instead exposed the
different functionality/behavior to software for each CPU then it would
have been a model-specific feature, likely needing different software
behavior for different CPUID family/model/stepping. This would have been
painful for the OS, and even more painful for VMMs and VM migration
pools. This is why we made these bits architectural and used a common
name and definition across the projects.

We don't object to the Linux community using an alternative name for
STIBP (but not Disable IB), so long as it is accurate across our products.
Changing our MSR name in the SDM seems like it would
cause unneeded confusion and work.

Thanks.

Tim


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (29 preceding siblings ...)
  2018-11-28 14:24 ` Thomas Gleixner
@ 2018-12-10 23:43 ` Pavel Machek
  30 siblings, 0 replies; 112+ messages in thread
From: Pavel Machek @ 2018-12-10 23:43 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Tim Chen, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

Hi!

>  Documentation/admin-guide/kernel-parameters.txt |   56 ++
>  Documentation/userspace-api/spec_ctrl.rst       |    9 

Could we name this speculation.rst instead? _ is inconsistent, and
spec could be shorthand for other stuff, too...
								Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 112+ messages in thread

end of thread

Thread overview: 112+ messages
2018-11-25 18:33 [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
2018-11-25 18:33 ` [patch V2 01/28] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-29 14:27   ` [patch V2 01/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 02/28] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
2018-11-28 14:20   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-29 14:28   ` [patch V2 02/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 03/28] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
2018-11-28 14:21   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-29 14:28   ` [patch V2 03/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
2018-11-26 15:08   ` Borislav Petkov
2018-11-28 14:22   ` [tip:x86/pti] x86/speculation: Move STIPB/IBPB string conditionals out of cpu_show_common() tip-bot for Tim Chen
2018-11-29 14:29   ` [patch V2 04/28] x86/speculation: Reorganize cpu_show_common() Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 05/28] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
2018-11-28 14:22   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-29 14:35   ` [patch V2 05/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 06/28] x86/speculation: Rename SSBD update functions Thomas Gleixner
2018-11-26 15:24   ` Borislav Petkov
2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-29 14:37   ` [patch V2 06/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 07/28] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
2018-11-26 15:47   ` Borislav Petkov
2018-11-28 14:23   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-29 14:41   ` [patch V2 07/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 08/28] sched/smt: Make sched_smt_present track topology Thomas Gleixner
2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Peter Zijlstra (Intel)
2018-11-29 14:42   ` [patch V2 08/28] " Konrad Rzeszutek Wilk
2018-11-29 14:50     ` Konrad Rzeszutek Wilk
2018-11-29 15:48       ` Peter Zijlstra
2018-11-25 18:33 ` [patch V2 09/28] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
2018-11-28 14:24   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-29 14:44   ` [patch V2 09/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 10/28] sched/smt: Expose sched_smt_present static key Thomas Gleixner
2018-11-28 14:25   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-29 14:44   ` [patch V2 10/28] " Konrad Rzeszutek Wilk
2018-11-25 18:33 ` [patch V2 11/28] x86/speculation: Rework SMT state change Thomas Gleixner
2018-11-28 14:26   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 12/28] x86/l1tf: Show actual SMT state Thomas Gleixner
2018-11-28 14:26   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 13/28] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
2018-11-26 22:21   ` Borislav Petkov
2018-11-28 14:27   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 14/28] x86/speculation: Mark string arrays const correctly Thomas Gleixner
2018-11-28 14:27   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 15/28] x86/speculataion: Mark command line parser data __initdata Thomas Gleixner
2018-11-28 14:28   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 16/28] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
2018-11-28 14:29   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 17/28] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
2018-11-28 14:29   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 18/28] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
2018-11-27 17:25   ` Lendacky, Thomas
2018-11-27 19:51     ` Tim Chen
2018-11-28  9:39       ` Thomas Gleixner
2018-11-27 20:39     ` Thomas Gleixner
2018-11-27 20:42       ` Thomas Gleixner
2018-11-27 21:52         ` Lendacky, Thomas
2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Tim Chen
2018-11-25 18:33 ` [patch V2 19/28] x86/process: Consolidate and simplify switch_to_xtra() code Thomas Gleixner
2018-11-26 18:30   ` Borislav Petkov
2018-11-28 14:30   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 20/28] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
2018-11-28 14:31   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 21/28] x86/speculation: Prepare for conditional IBPB in switch_mm() Thomas Gleixner
2018-11-25 19:11   ` Thomas Gleixner
2018-11-25 20:53   ` Andi Kleen
2018-11-25 22:20     ` Thomas Gleixner
2018-11-25 23:04       ` Andy Lutomirski
2018-11-26  7:10         ` Thomas Gleixner
2018-11-26 13:36           ` Ingo Molnar
2018-11-26  3:07       ` Andi Kleen
2018-11-26  6:50         ` Thomas Gleixner
2018-11-28 14:31   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 22/28] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
2018-11-28 14:32   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 23/28] x86/speculation: Split out TIF update Thomas Gleixner
2018-11-28 14:33   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
2018-11-27 20:18   ` Lendacky, Thomas
2018-11-27 20:30     ` Thomas Gleixner
2018-11-27 21:20       ` Lendacky, Thomas
2018-11-28 14:34   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 25/28] x86/speculation: Add prctl() control for indirect branch speculation Thomas Gleixner
2018-11-28 14:34   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 26/28] x86/speculation: Enable prctl mode for spectre_v2_user Thomas Gleixner
2018-11-26  7:56   ` Dominik Brodowski
2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-25 18:33 ` [patch V2 27/28] x86/speculation: Add seccomp Spectre v2 user space protection mode Thomas Gleixner
2018-11-25 19:35   ` Randy Dunlap
2018-11-25 20:40   ` Linus Torvalds
2018-11-25 20:52     ` Jiri Kosina
2018-11-25 22:28     ` Thomas Gleixner
2018-11-26 13:30       ` Ingo Molnar
2018-11-26 20:48       ` Andrea Arcangeli
2018-11-26 20:58         ` Thomas Gleixner
2018-11-26 21:52           ` Lendacky, Thomas
2018-11-27  0:37             ` Tim Chen
2018-12-04  1:38     ` Tim Chen
2018-12-04  8:39       ` Jiri Kosina
2018-12-04  9:43         ` Arjan van de Ven
2018-12-04  9:46         ` Arjan van de Ven
2018-12-04 17:20       ` Linus Torvalds
2018-12-04 18:58         ` Tim Chen
2018-11-28 14:35   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-12-04 18:45   ` [patch V2 27/28] " Dave Hansen
2018-11-25 18:33 ` [patch V2 28/28] x86/speculation: Provide IBPB always command line options Thomas Gleixner
2018-11-28 14:36   ` [tip:x86/pti] " tip-bot for Thomas Gleixner
2018-11-26 13:37 ` [patch V2 00/28] x86/speculation: Remedy the STIBP/IBPB overhead Ingo Molnar
2018-11-28 14:24 ` Thomas Gleixner
2018-11-29 19:02   ` Tim Chen
2018-12-10 23:43 ` Pavel Machek
