linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead
@ 2018-11-21 20:14 Thomas Gleixner
  2018-11-21 20:14 ` [patch 01/24] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
                   ` (25 more replies)
  0 siblings, 26 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

This is based on Tim Chen's V5 patch series. The following changes have
been made:

   - Control STIPB evaluation with a single static key

   - Move IBPB out from switch_mm() into switch_to() and control the
     always and the conditional mode with static keys.

     The mainline implementation is wrong in a few aspects, e.g. it fails
     to protect tasks within the same process, which breaks
     sandboxing. That same process optimization was the sole reason to have
     it in switch_mm().

     The new always mode is just issuing the barrier unconditionally when
     switching to a user task, but that also leaves STIPB always on. So
     really paranoid people get the highest possible protection and the
     highest overhead.

     The conditional mode issues the barrier when a task which is mitigated
     is scheduling out or scheduling in. That is required to support proper
     sandboxing.

   - Remove the ptrace_may_access_sched() code as it's unused now. It was
     ugly anyway and would have given people ideas how to slow down
     switch_mm() some more.

   - Rename TIF_STIPB to TIF_SPEC_IB because it controls both STIBP and
     IBPB.

   - Fix all the corner cases vs. UP and SMT disabled.

   - Limit the overhead when conditional STIPB is not enabled so
     switch_to_xtra() is not invoked for nothing when the TIF bit would
     trigger the entry and nothing else is to do. That can happen when SMT
     is off and a task has the TIF bit set. On UP STIPB is never enabled.

   - Dropped the dumpable part

TODO: Write documentation

It's avaiable from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/pti

It's based on the x86/pti branch unfortunately, which contains the removal
of the minimal asm retpoline hackery. I noticed too late. If the minimal
asm stuff should not be backported it's trivial to rebase that series on
Linus tree.

Thanks,

	tglx




^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:28   ` Linus Torvalds
  2018-11-21 20:14 ` [patch 02/24] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Update-comment-on-TIF-SSBD.patch --]
[-- Type: text/plain, Size: 964 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

"Reduced Data Speculation" is an obsolete term. The correct new name is
"Speculative store bypass disable" - which is abbreviated into SSBD.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/thread_info.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,7 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD			5	/* Reduced data speculation */
+#define TIF_SSBD		5	/* Speculative store bypass disable */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 02/24] x86/speculation: Clean up spectre_v2_parse_cmdline()
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
  2018-11-21 20:14 ` [patch 01/24] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 03/24] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Clean-up-spectre-v2-parse-cmdline-.patch --]
[-- Type: text/plain, Size: 1586 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

Remove the unnecessary 'else' statement in spectre_v2_parse_cmdline()
to save an indentation level.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |   27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -276,22 +276,21 @@ static enum spectre_v2_mitigation_cmd __
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
-	else {
-		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
-		if (ret < 0)
-			return SPECTRE_V2_CMD_AUTO;
 
-		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
-			if (!match_option(arg, ret, mitigation_options[i].option))
-				continue;
-			cmd = mitigation_options[i].cmd;
-			break;
-		}
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
 
-		if (i >= ARRAY_SIZE(mitigation_options)) {
-			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
-			return SPECTRE_V2_CMD_AUTO;
-		}
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_CMD_AUTO;
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 03/24] x86/speculation: Remove unnecessary ret variable in cpu_show_common()
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
  2018-11-21 20:14 ` [patch 01/24] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
  2018-11-21 20:14 ` [patch 02/24] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 04/24] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Remove-unnecessary-ret-variable-in-cpu-show-common-.patch --]
[-- Type: text/plain, Size: 1326 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -847,8 +847,6 @@ static ssize_t l1tf_show_state(char *buf
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
-	int ret;
-
 	if (!boot_cpu_has_bug(bug))
 		return sprintf(buf, "Not affected\n");
 
@@ -866,13 +864,12 @@ static ssize_t cpu_show_common(struct de
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
-		return ret;
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 04/24] x86/speculation: Reorganize cpu_show_common()
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (2 preceding siblings ...)
  2018-11-21 20:14 ` [patch 03/24] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Reorganize-cpu-show-common-.patch --]
[-- Type: text/plain, Size: 1687 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

The Spectre V2 printout in cpu_show_common() handles conditionals for the
various mitigation methods directly in the sprintf() argument list. That's
hard to read and will become unreadable if more complex decisions need to
be made for a particular method.

Move the conditionals for STIBP and IBPB string selection into helper
functions, so they can be extended later on.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -844,6 +844,22 @@ static ssize_t l1tf_show_state(char *buf
 }
 #endif
 
+static char *stibp_state(void)
+{
+	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
+		return ", STIBP";
+	else
+		return "";
+}
+
+static char *ibpb_state(void)
+{
+	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
+		return ", IBPB";
+	else
+		return "";
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -865,9 +881,9 @@ static ssize_t cpu_show_common(struct de
 
 	case X86_BUG_SPECTRE_V2:
 		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+			       ibpb_state(),
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       stibp_state(),
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (3 preceding siblings ...)
  2018-11-21 20:14 ` [patch 04/24] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:33   ` Borislav Petkov
  2018-11-21 20:14 ` [patch 06/24] x86/speculation: Rename SSBD update functions Thomas Gleixner
                   ` (20 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Disable-STIBP-when-enhanced-IBRS-is-in-use.patch --]
[-- Type: text/plain, Size: 1004 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
user space exploits from hyperthread sibling.

Disable STIBP when enhanced IBRS is used.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/bugs.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -321,6 +321,10 @@ static bool stibp_needed(void)
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf
 
 static char *stibp_state(void)
 {
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return "";
+
 	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
 		return ", STIBP";
 	else



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 06/24] x86/speculation: Rename SSBD update functions
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (4 preceding siblings ...)
  2018-11-21 20:14 ` [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 07/24] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Rename-SSBD-update-functions.patch --]
[-- Type: text/plain, Size: 3370 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
to changes of the TIF_SSBD flag in the current and next running task.

Currently, only the bit controlling speculative store bypass disable in
SPEC_CTRL MSR is updated and the related update functions all have
"speculative_store" or "ssb" in their names.

For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
updated as well, which makes the SSB names inadequate.

Rename the "speculative_store*" functions to a more generic name.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/spec-ctrl.h |    6 +++---
 arch/x86/kernel/cpu/bugs.c       |    4 ++--
 arch/x86/kernel/process.c        |   12 ++++++------
 3 files changed, 11 insertions(+), 11 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -70,11 +70,11 @@ extern void speculative_store_bypass_ht_
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(unsigned long tif);
+extern void speculation_ctrl_update(unsigned long tif);
 
-static inline void speculative_store_bypass_update_current(void)
+static inline void speculation_ctrl_update_current(void)
 {
-	speculative_store_bypass_update(current_thread_info()->flags);
+	speculation_ctrl_update(current_thread_info()->flags);
 }
 
 #endif
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -200,7 +200,7 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
 		tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) :
 				 ssbd_spec_ctrl_to_tif(hostval);
 
-		speculative_store_bypass_update(tif);
+		speculation_ctrl_update(tif);
 	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
@@ -632,7 +632,7 @@ static int ssb_prctl_set(struct task_str
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update_current();
+		speculation_ctrl_update_current();
 
 	return 0;
 }
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,27 @@ static __always_inline void amd_set_ssb_
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifn)
 {
 	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 		amd_set_ssb_virt_state(tifn);
 	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
-		intel_set_ssb_state(tifn);
+		spec_ctrl_update_msr(tifn);
 }
 
-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(tif);
 	preempt_enable();
 }
 
@@ -452,7 +452,7 @@ void __switch_to_xtra(struct task_struct
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+		__speculation_ctrl_update(tifn);
 }
 
 /*



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 07/24] x86/speculation: Reorganize speculation control MSRs update
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (5 preceding siblings ...)
  2018-11-21 20:14 ` [patch 06/24] x86/speculation: Rename SSBD update functions Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 08/24] sched/smt: Make sched_smt_present track topology Thomas Gleixner
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Reorganize-speculation-control-MSRs-update.patch --]
[-- Type: text/plain, Size: 2723 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

The logic to detect whether there's a change in the previous and next
task's flag relevant to update speculation control MSRs are spread out
across multiple functions.

Consolidate all checks needed for updating speculation control MSRs into
the new __speculation_ctrl_update() helper function.

This makes it easy to pick the right speculation control MSR and the bits
in the MSR that needs updating based on TIF flags changes.

Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/process.c |   42 ++++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -397,25 +397,48 @@ static __always_inline void amd_set_ssb_
 
 static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+	u64 msr = x86_spec_ctrl_base;
+
+	/*
+	 * If X86_FEATURE_SSBD is not set, the SSBD bit is not to be
+	 * touched.
+	 */
+	if (static_cpu_has(X86_FEATURE_SSBD))
+		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
+/*
+ * Update the MSRs managing speculation control, during context switch.
+ *
+ * tifp: Previous task's thread flags
+ * tifn: Next task's thread flags
+ */
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
 {
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
+	bool updmsr = false;
+
+	/* If TIF_SSBD is different, select the proper mitigation method */
+	if ((tifp ^ tifn) & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+			amd_set_ssb_virt_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			amd_set_core_ssb_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_SSBD))
+			updmsr  = true;
+	}
+
+	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }
 
 void speculation_ctrl_update(unsigned long tif)
 {
+	/* Forced update. Make sure all relevant TIF flags are different */
 	preempt_disable();
-	__speculation_ctrl_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -451,8 +474,7 @@ void __switch_to_xtra(struct task_struct
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculation_ctrl_update(tifn);
+	__speculation_ctrl_update(tifp, tifn);
 }
 
 /*



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 08/24] sched/smt: Make sched_smt_present track topology
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (6 preceding siblings ...)
  2018-11-21 20:14 ` [patch 07/24] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 09/24] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: Re-smt-Create-cpu-smt-enabled-static-key-for-SMT-specific-code.patch --]
[-- Type: text/plain, Size: 1916 bytes --]

From: Peter Zijlstra <peterz@infradead.org>

Currently the 'sched_smt_present' static key is enabled when at CPU bringup
SMT topology is observed, but it is never disabled. However there is demand
to also disable the key when the topology changes such that there is no SMT
present anymore.

Implement this by making the key count the number of cores that have SMT
enabled.

In particular, the SMT topology bits are set before interrrupts are enabled
and similarly, are cleared after interrupts are disabled for the last time
and the CPU dies.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 kernel/sched/core.c |   19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
 
 #ifdef CONFIG_SCHED_SMT
 	/*
-	 * The sched_smt_present static key needs to be evaluated on every
-	 * hotplug event because at boot time SMT might be disabled when
-	 * the number of booted CPUs is limited.
-	 *
-	 * If then later a sibling gets hotplugged, then the key would stay
-	 * off and SMT scheduling would never be functional.
+	 * When going up, increment the number of cores with SMT present.
 	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-		static_branch_enable_cpuslocked(&sched_smt_present);
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
 	set_cpu_active(cpu, true);
 
@@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
 	 */
 	synchronize_rcu_mult(call_rcu, call_rcu_sched);
 
+#ifdef CONFIG_SCHED_SMT
+	/*
+	 * When going down, decrement the number of cores with SMT present.
+	 */
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_dec_cpuslocked(&sched_smt_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 09/24] x86/Kconfig: Select SCHED_SMT if SMP enabled
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (7 preceding siblings ...)
  2018-11-21 20:14 ` [patch 08/24] sched/smt: Make sched_smt_present track topology Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 10/24] sched/smt: Expose sched_smt_present static key Thomas Gleixner
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-Kconfig--Select-SCHED_SMT-if-SMP-enabled.patch --]
[-- Type: text/plain, Size: 1131 bytes --]

CONFIG_SCHED_SMT is enabled by all distros, so there is not a real point to
have it configurable. The runtime overhead in the core scheduler code is
minimal because the actual SMT scheduling parts are conditional on a static
key.

This allows to expose the scheduler's SMT state static key to the
speculation control code. Alternatively the scheduler's static key could be
made always available when CONFIG_SMP is enabled, but that's just adding an
unused static key to every other architecture for nothing.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/Kconfig |    8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1000,13 +1000,7 @@ config NR_CPUS
 	  to the kernel image.
 
 config SCHED_SMT
-	bool "SMT (Hyperthreading) scheduler support"
-	depends on SMP
-	---help---
-	  SMT scheduler support improves the CPU scheduler's decision making
-	  when dealing with Intel Pentium 4 chips with HyperThreading at a
-	  cost of slightly increased overhead in some places. If unsure say
-	  N here.
+	def_bool y if SMP
 
 config SCHED_MC
 	def_bool y



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 10/24] sched/smt: Expose sched_smt_present static key
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (8 preceding siblings ...)
  2018-11-21 20:14 ` [patch 09/24] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:41   ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 11/24] x86/speculation: Rework SMT state change Thomas Gleixner
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: sched-smt--Expose-sched_smt_present-static-key.patch --]
[-- Type: text/plain, Size: 1233 bytes --]

Make the scheduler's 'sched_smt_present' static key globaly available, so
it can be used in the x86 speculation control code.

Provide a query function and a stub for the CONFIG_SMP=n case.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/sched/topology.h |    9 +++++++++
 kernel/sched/sched.h           |    3 ---
 2 files changed, 9 insertions(+), 3 deletions(-)

--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -34,10 +34,19 @@
 #define SD_NUMA			0x4000	/* cross-node balancing */
 
 #ifdef CONFIG_SCHED_SMT
+extern struct static_key_false sched_smt_present;
+
+static __always_inline bool sched_smt_active(void)
+{
+	return static_branch_likely(&sched_smt_present);
+}
+
 static inline int cpu_smt_flags(void)
 {
 	return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
 }
+#else
+static inline bool sched_smt_active(void) { return false; }
 #endif
 
 #ifdef CONFIG_SCHED_MC
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -941,9 +941,6 @@ static inline int cpu_of(struct rq *rq)
 
 
 #ifdef CONFIG_SCHED_SMT
-
-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 11/24] x86/speculation: Rework SMT state change
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (9 preceding siblings ...)
  2018-11-21 20:14 ` [patch 10/24] sched/smt: Expose sched_smt_present static key Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 12/24] x86/l1tf: Show actual SMT state Thomas Gleixner
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Rework-SMT-state-change.patch --]
[-- Type: text/plain, Size: 3153 bytes --]

arch_smt_update() is only called when the sysfs SMT control knob is
changed. This means that when SMT is enabled in the sysfs control knob the
system is considered to have SMT active even if all siblings are offline.

To allow finegrained control of the speculation mitigations, the actual SMT
state is more interesting than the fact that siblings could be enabled.

Rework the code, so arch_smt_update() is invoked from each individual CPU
hotplug function, and simplify the update function while at it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c     |   11 +++++------
 include/linux/sched/topology.h |    4 ++++
 kernel/cpu.c                   |   14 ++++++++------
 3 files changed, 17 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/topology.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -344,16 +345,14 @@ void arch_smt_update(void)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
-	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
 		mask |= SPEC_CTRL_STIBP;
-	else
-		mask &= ~SPEC_CTRL_STIBP;
 
 	if (mask != x86_spec_ctrl_base) {
 		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-				cpu_smt_control == CPU_SMT_ENABLED ?
-				"Enabling" : "Disabling");
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
 		x86_spec_ctrl_base = mask;
 		on_each_cpu(update_stibp_msr, NULL, 1);
 	}
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -226,8 +226,12 @@ static inline bool cpus_share_cache(int
 	return true;
 }
 
+static inline bool sched_smt_active(void) { return false; }
+
 #endif	/* !CONFIG_SMP */
 
+void arch_smt_update(void);
+
 static inline int task_node(const struct task_struct *p)
 {
 	return cpu_to_node(task_cpu(p));
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -367,6 +367,12 @@ static void lockdep_release_cpus_lock(vo
 
 #endif	/* CONFIG_HOTPLUG_CPU */
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override this.
+ */
+void __weak arch_smt_update(void) { }
+
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 EXPORT_SYMBOL_GPL(cpu_smt_control);
@@ -1011,6 +1017,7 @@ static int __ref _cpu_down(unsigned int
 	 * concurrent CPU hotplug via cpu_add_remove_lock.
 	 */
 	lockup_detector_cleanup();
+	arch_smt_update();
 	return ret;
 }
 
@@ -1139,6 +1146,7 @@ static int _cpu_up(unsigned int cpu, int
 	ret = cpuhp_up_callbacks(cpu, st, target);
 out:
 	cpus_write_unlock();
+	arch_smt_update();
 	return ret;
 }
 
@@ -2055,12 +2063,6 @@ static void cpuhp_online_cpu_device(unsi
 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
-/*
- * Architectures that need SMT-specific errata handling during SMT hotplug
- * should override this.
- */
-void __weak arch_smt_update(void) { };
-
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
 	int cpu, ret = 0;



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 12/24] x86/l1tf: Show actual SMT state
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (10 preceding siblings ...)
  2018-11-21 20:14 ` [patch 11/24] x86/speculation: Rework SMT state change Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 13/24] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-l1tf--Show-actual-SMT-state.patch --]
[-- Type: text/plain, Size: 1224 bytes --]

Use the now exposed real SMT state, not the SMT sysfs control knob
state. This reflects the state of the system when the mitigation status is
queried.

This does not change the warning in the VMX launch code. There the
dependency on the control knob makes sense because siblings could be
brought online anytime after launching the VM.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -832,13 +832,14 @@ static ssize_t l1tf_show_state(char *buf
 
 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
 	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
-	     cpu_smt_control == CPU_SMT_ENABLED))
+	     sched_smt_active())) {
 		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
 			       l1tf_vmx_states[l1tf_vmx_mitigation]);
+	}
 
 	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
 		       l1tf_vmx_states[l1tf_vmx_mitigation],
-		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
+		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 #else
 static ssize_t l1tf_show_state(char *buf)



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 13/24] x86/speculation: Reorder the spec_v2 code
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (11 preceding siblings ...)
  2018-11-21 20:14 ` [patch 12/24] x86/l1tf: Show actual SMT state Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Sanitize-the-spec_v2-code.patch --]
[-- Type: text/plain, Size: 6409 bytes --]

Reorder the code so it is better grouped.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |  168 ++++++++++++++++++++++-----------------------
 1 file changed, 84 insertions(+), 84 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -124,29 +124,6 @@ void __init check_bugs(void)
 #endif
 }
 
-/* The kernel command line selection */
-enum spectre_v2_mitigation_cmd {
-	SPECTRE_V2_CMD_NONE,
-	SPECTRE_V2_CMD_AUTO,
-	SPECTRE_V2_CMD_FORCE,
-	SPECTRE_V2_CMD_RETPOLINE,
-	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
-	SPECTRE_V2_CMD_RETPOLINE_AMD,
-};
-
-static const char *spectre_v2_strings[] = {
-	[SPECTRE_V2_NONE]			= "Vulnerable",
-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
-};
-
-#undef pr_fmt
-#define pr_fmt(fmt)     "Spectre V2 : " fmt
-
-static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
-	SPECTRE_V2_NONE;
-
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -216,6 +193,12 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
+#undef pr_fmt
+#define pr_fmt(fmt)     "Spectre V2 : " fmt
+
+static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+	SPECTRE_V2_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,18 +220,6 @@ static inline const char *spectre_v2_mod
 static inline const char *spectre_v2_module_string(void) { return ""; }
 #endif
 
-static void __init spec2_print_if_insecure(const char *reason)
-{
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
 static inline bool match_option(const char *arg, int arglen, const char *opt)
 {
 	int len = strlen(opt);
@@ -256,24 +227,53 @@ static inline bool match_option(const ch
 	return len == arglen && !strncmp(arg, opt, len);
 }
 
+/* The kernel command line selection for spectre v2 */
+enum spectre_v2_mitigation_cmd {
+	SPECTRE_V2_CMD_NONE,
+	SPECTRE_V2_CMD_AUTO,
+	SPECTRE_V2_CMD_FORCE,
+	SPECTRE_V2_CMD_RETPOLINE,
+	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
+	SPECTRE_V2_CMD_RETPOLINE_AMD,
+};
+
+static const char *spectre_v2_strings[] = {
+	[SPECTRE_V2_NONE]			= "Vulnerable",
+	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+};
+
 static const struct {
 	const char *option;
 	enum spectre_v2_mitigation_cmd cmd;
 	bool secure;
 } mitigation_options[] = {
-	{ "off",               SPECTRE_V2_CMD_NONE,              false },
-	{ "on",                SPECTRE_V2_CMD_FORCE,             true },
-	{ "retpoline",         SPECTRE_V2_CMD_RETPOLINE,         false },
-	{ "retpoline,amd",     SPECTRE_V2_CMD_RETPOLINE_AMD,     false },
-	{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
-	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
+	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
+	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
+	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
+static void __init spec2_print_if_insecure(const char *reason)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
+static void __init spec2_print_if_secure(const char *reason)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		pr_info("%s selected on command line.\n", reason);
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
+	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 	char arg[20];
 	int ret, i;
-	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
@@ -317,48 +317,6 @@ static enum spectre_v2_mitigation_cmd __
 	return cmd;
 }
 
-static bool stibp_needed(void)
-{
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
-}
-
-static void update_stibp_msr(void *info)
-{
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-}
-
-void arch_smt_update(void)
-{
-	u64 mask;
-
-	if (!stibp_needed())
-		return;
-
-	mutex_lock(&spec_ctrl_mutex);
-
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
-	}
-	mutex_unlock(&spec_ctrl_mutex);
-}
-
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -462,6 +420,48 @@ static void __init spectre_v2_select_mit
 	arch_smt_update();
 }
 
+static bool stibp_needed(void)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_NONE)
+		return false;
+
+	/* Enhanced IBRS makes using STIBP unnecessary. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return false;
+
+	if (!boot_cpu_has(X86_FEATURE_STIBP))
+		return false;
+
+	return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+	u64 mask;
+
+	if (!stibp_needed())
+		return;
+
+	mutex_lock(&spec_ctrl_mutex);
+
+	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask != x86_spec_ctrl_base) {
+		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
+		x86_spec_ctrl_base = mask;
+		on_each_cpu(update_stibp_msr, NULL, 1);
+	}
+	mutex_unlock(&spec_ctrl_mutex);
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (12 preceding siblings ...)
  2018-11-21 20:14 ` [patch 13/24] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  7:59   ` Ingo Molnar
  2018-11-21 20:14 ` [patch 15/24] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
                   ` (11 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Simplify-conditional-printout.patch --]
[-- Type: text/plain, Size: 1233 bytes --]

There is no point in having two functions and a conditional at the call
site.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |   17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -257,15 +257,9 @@ static const struct {
 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
-static void __init spec2_print_if_insecure(const char *reason)
+static void __init spec_v2_print_cond(const char *reason, bool secure)
 {
-	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
-		pr_info("%s selected on command line.\n", reason);
-}
-
-static void __init spec2_print_if_secure(const char *reason)
-{
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
 		pr_info("%s selected on command line.\n", reason);
 }
 
@@ -309,11 +303,8 @@ static enum spectre_v2_mitigation_cmd __
 		return SPECTRE_V2_CMD_AUTO;
 	}
 
-	if (mitigation_options[i].secure)
-		spec2_print_if_secure(mitigation_options[i].option);
-	else
-		spec2_print_if_insecure(mitigation_options[i].option);
-
+	spec_v2_print_cond(mitigation_options[i].option,
+			   mitigation_options[i].secure);
 	return cmd;
 }
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (13 preceding siblings ...)
  2018-11-21 20:14 ` [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 23:43   ` Borislav Petkov
  2018-11-21 20:14 ` [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Add-command-line-control-for-indirect-branch-speculation.patch --]
[-- Type: text/plain, Size: 8543 bytes --]

Add command line control for application to application indirect branch
speculation mitigations.

The initial options are:

    -  on:   Unconditionally enabled
    - off:   Unconditionally disabled
    -auto:   Kernel selects mitigation (default off for now)

When the spectre_v2= command line argument is either 'on' or 'off' this
implies that the application to application control follows that state even
if when a contradicting spectre_v2_app2app= argument is supplied.

Originally-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |   22 +++
 arch/x86/include/asm/nospec-branch.h            |   10 +
 arch/x86/kernel/cpu/bugs.c                      |  133 ++++++++++++++++++++----
 3 files changed, 146 insertions(+), 19 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4213,8 +4213,10 @@
 	spectre_v2=	[X86] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
 
-			on   - unconditionally enable
-			off  - unconditionally disable
+			on   - unconditionally enable, implies
+			       spectre_v2_app2app=on
+			off  - unconditionally disable, implies
+			       spectre_v2_app2app=off
 			auto - kernel detects whether your CPU model is
 			       vulnerable
 
@@ -4233,6 +4235,22 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_app2app=
+			[X86] Control mitigation of Spectre variant 2
+		        application to application (indirect branch speculation)
+			vulnerability.
+
+			on      - Unconditionally enable mitigations. Is enforced
+				  by spectre_v2=on
+			off     - Unconditionally disable mitigations. Is enforced
+				  by spectre_v2=off
+			auto    - Kernel selects the mitigation depending on
+				  the available CPU features and vulnerability.
+				  Default is off.
+
+			Not specifying this option is equivalent to
+			spectre_v2_app2app=auto.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,8 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
+
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -226,6 +228,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+/* The indirect branch speculation control variants */
+enum spectre_v2_app2app_mitigation {
+	SPECTRE_V2_APP2APP_NONE,
+	SPECTRE_V2_APP2APP_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -303,6 +311,8 @@ do {									\
 	preempt_enable();						\
 } while (0)
 
+DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 #endif /* __ASSEMBLY__ */
 
 /*
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,9 @@ static u64 __ro_after_init x86_spec_ctrl
 u64 __ro_after_init x86_amd_ls_cfg_base;
 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
 
+/* Control conditional STIPB in switch_to() */
+DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+
 void __init check_bugs(void)
 {
 	identify_boot_cpu();
@@ -199,6 +202,9 @@ static void x86_amd_ssb_disable(void)
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_app2app_mitigation spectre_v2_app2app __ro_after_init =
+	SPECTRE_V2_APP2APP_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -237,6 +243,104 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_app2app_cmd {
+	SPECTRE_V2_APP2APP_CMD_NONE,
+	SPECTRE_V2_APP2APP_CMD_AUTO,
+	SPECTRE_V2_APP2APP_CMD_FORCE,
+};
+
+static const char *spectre_v2_app2app_strings[] = {
+	[SPECTRE_V2_APP2APP_NONE]	= "App-App Vulnerable",
+	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: STIBP protection",
+};
+
+static const struct {
+	const char			*option;
+	enum spectre_v2_app2app_cmd	cmd;
+	bool				secure;
+} app2app_options[] = {
+	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,	false },
+	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,	false },
+	{ "on",		SPECTRE_V2_APP2APP_CMD_FORCE,	true  },
+};
+
+static void __init spec_v2_app_print_cond(const char *reason, bool secure)
+{
+	if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure)
+		pr_info("app2app %s selected on command line.\n", reason);
+}
+
+static enum spectre_v2_app2app_cmd __init
+spectre_v2_parse_app2app_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	char arg[20];
+	int ret, i;
+
+	switch (v2_cmd) {
+	case SPECTRE_V2_CMD_NONE:
+		return SPECTRE_V2_APP2APP_CMD_NONE;
+	case SPECTRE_V2_CMD_FORCE:
+		return SPECTRE_V2_APP2APP_CMD_FORCE;
+	default:
+		break;
+	}
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app",
+				  arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(app2app_options); i++) {
+		if (match_option(arg, ret, app2app_options[i].option)) {
+			spec_v2_app_print_cond(app2app_options[i].option,
+					       app2app_options[i].secure);
+			return app2app_options[i].cmd;
+		}
+	}
+
+	pr_err("Unknown app to app protection option (%s). Switching to AUTO select\n", arg);
+	return SPECTRE_V2_APP2APP_CMD_AUTO;
+}
+
+static void __init
+spectre_v2_app2app_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	enum spectre_v2_app2app_mitigation mode = SPECTRE_V2_APP2APP_NONE;
+	bool smt_possible = IS_ENABLED(CONFIG_SMP);
+
+	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+		return;
+
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		smt_possible = false;
+
+	switch (spectre_v2_parse_app2app_cmdline(v2_cmd)) {
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+	case SPECTRE_V2_APP2APP_CMD_NONE:
+		goto set_mode;
+	case SPECTRE_V2_APP2APP_CMD_FORCE:
+	       mode = SPECTRE_V2_APP2APP_STRICT;
+	       break;
+	}
+
+	/* Initialize Indirect Branch Prediction Barrier */
+	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+	}
+
+	/* If enhanced IBRS is enabled no STIPB required */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+		return;
+
+set_mode:
+	spectre_v2_app2app = mode;
+	/* Only print the STIBP mode when SMT possible */
+	if (smt_possible)
+		pr_info("%s\n", spectre_v2_app2app_strings[mode]);
+}
+
 static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
@@ -385,12 +489,6 @@ static void __init spectre_v2_select_mit
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
-	}
-
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
 	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
@@ -407,23 +505,21 @@ static void __init spectre_v2_select_mit
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
 
+	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+	spectre_v2_app2app_select_mitigation(cmd);
+
 	/* Enable STIBP if appropriate */
 	arch_smt_update();
 }
 
 static bool stibp_needed(void)
 {
-	if (spectre_v2_enabled == SPECTRE_V2_NONE)
-		return false;
-
 	/* Enhanced IBRS makes using STIBP unnecessary. */
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return false;
 
-	if (!boot_cpu_has(X86_FEATURE_STIBP))
-		return false;
-
-	return true;
+	/* Check for strict app2app mitigation mode */
+	return spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT;
 }
 
 static void update_stibp_msr(void *info)
@@ -844,10 +940,13 @@ static char *stibp_state(void)
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return "";
 
-	if (x86_spec_ctrl_base & SPEC_CTRL_STIBP)
-		return ", STIBP";
-	else
-		return "";
+	switch (spectre_v2_app2app) {
+	case SPECTRE_V2_APP2APP_NONE:
+		return ", STIBP: disabled";
+	case SPECTRE_V2_APP2APP_STRICT:
+		return ", STIBP: forced";
+	}
+	return "";
 }
 
 static char *ibpb_state(void)



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (14 preceding siblings ...)
  2018-11-21 20:14 ` [patch 15/24] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  7:57   ` Ingo Molnar
  2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Turn-on-or-off-STIBP-according-to-a-task-s-TIF-STIBP.patch --]
[-- Type: text/plain, Size: 6636 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

To avoid the overhead of STIBP always on, it's necessary to allow per task
control of STIBP.

Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
SMT is active and flag evaluation is enabled by the speculation control
code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
guest/host switch works properly.

This has no effect because TIF_SPEC_IB cannot be set yet and the static key
which controls evaluation is off. Preparatory patch for adding the control
code.

[ tglx: Simplify the context switch logic and make the TIF evaluation
  	depend on SMP=y and on the static key controlling the conditional
  	update. Rename it to TIF_SPEC_IB because it controls both STIBP and
  	IBPB ]

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/msr-index.h   |    5 +++--
 arch/x86/include/asm/spec-ctrl.h   |   12 ++++++++++++
 arch/x86/include/asm/thread_info.h |    5 ++++-
 arch/x86/kernel/cpu/bugs.c         |    4 ++++
 arch/x86/kernel/process.c          |   24 ++++++++++++++++++++++--
 5 files changed, 45 insertions(+), 5 deletions(-)

--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,9 +41,10 @@
 
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
+#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
-#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
+#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
 	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
+#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -110,6 +111,7 @@ struct thread_info {
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -146,7 +148,8 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
+	 _TIF_SSBD|_TIF_SPEC_IB)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
 		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
+		/* Check whether dynamic indirect branch control is on */
+		if (static_branch_unlikely(&switch_to_cond_stibp))
+			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
 		if (hostval != guestval) {
 			msrval = setguest ? guestval : hostval;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -12,6 +12,7 @@
 #include <linux/sched/debug.h>
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
+#include <linux/sched/topology.h>
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/pm.h>
@@ -406,6 +407,11 @@ static __always_inline void spec_ctrl_up
 	if (static_cpu_has(X86_FEATURE_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
+	/* Only evaluate STIBP if dynamic control is enabled */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp))
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
@@ -418,10 +424,16 @@ static __always_inline void spec_ctrl_up
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
+	unsigned long tif_diff = tifp ^ tifn;
 	bool updmsr = false;
 
-	/* If TIF_SSBD is different, select the proper mitigation method */
-	if ((tifp ^ tifn) & _TIF_SSBD) {
+	/*
+	 * If TIF_SSBD is different, select the proper mitigation
+	 * method. Note that if SSBD mitigation is disabled or permanentely
+	 * enabled this branch can't be taken because nothing can set
+	 * TIF_SSBD.
+	 */
+	if (tif_diff & _TIF_SSBD) {
 		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 			amd_set_ssb_virt_state(tifn);
 		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
@@ -430,6 +442,14 @@ static __always_inline void __speculatio
 			updmsr  = true;
 	}
 
+	/*
+	 * Only evaluate TIF_SPEC_IB if dynamic control is
+	 * enabled, otherwise avoid the MSR write
+	 */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp))
+		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+
 	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (15 preceding siblings ...)
  2018-11-21 20:14 ` [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  0:01   ` Andi Kleen
                     ` (2 more replies)
  2018-11-21 20:14 ` [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
                   ` (8 subsequent siblings)
  25 siblings, 3 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Move-IBPB-control-out-of-switch_mm--.patch --]
[-- Type: text/plain, Size: 10219 bytes --]

IBPB control is currently in switch_mm() to avoid issuing IBPB when
switching between tasks of the same process.

But that's not covering the case of sandboxed tasks which get the
TIF_SPEC_IB flag set via seccomp. There the barrier is required when the
potentially malicious task is switched out because the task which is
switched in might have it not set and would still be attackable.

For tasks which mark themself with TIF_SPEC_IB via the prctl, the barrier
needs to be when the tasks switches in because the previous one might be an
attacker.

Move the code out of switch_mm() and evaluate the TIF bit in
switch_to(). Make it an inline function so it can be used both in 32bit and
64bit code.

This loses the optimization of switching back to the same process, but
that's wrong in the context of seccomp anyway as it does not protect tasks
of the same process against each other.

This could be optimized by keeping track of the last user task per cpu and
avoiding the barrier when the task is immediately scheduled back and the
thread inbetween was a kernel thread. It's dubious whether that'd be worth
the extra load/store and conditional operations. Keep it optimized for the
common case where the TIF bit is not set.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/nospec-branch.h |    2 +
 arch/x86/include/asm/spec-ctrl.h     |   46 +++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/tlbflush.h      |    2 -
 arch/x86/kernel/cpu/bugs.c           |   16 +++++++++++-
 arch/x86/kernel/process_32.c         |   11 ++++++--
 arch/x86/kernel/process_64.c         |   11 ++++++--
 arch/x86/mm/tlb.c                    |   39 -----------------------------
 7 files changed, 81 insertions(+), 46 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -312,6 +312,8 @@ do {									\
 } while (0)
 
 DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+DECLARE_STATIC_KEY_FALSE(switch_to_cond_ibpb);
+DECLARE_STATIC_KEY_FALSE(switch_to_always_ibpb);
 
 #endif /* __ASSEMBLY__ */
 
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -76,6 +76,52 @@ static inline u64 ssbd_tif_to_amd_ls_cfg
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
 }
 
+/**
+ * switch_to_ibpb - Issue IBPB on task switch
+ * @next:	Pointer to the next task
+ * @prev_tif:	Threadinfo flags of the previous task
+ * @next_tif:	Threadinfo flags of the next task
+ *
+ * IBPB flushes the branch predictor, which stops Spectre-v2 attacks
+ * between user space tasks. Depending on the mode the flush is made
+ * conditional.
+ */
+static inline void switch_to_ibpb(struct task_struct *next,
+				  unsigned long prev_tif,
+				  unsigned long next_tif)
+{
+	if (static_branch_unlikely(&switch_to_always_ibpb)) {
+		/* Only flush when switching to a user task. */
+		if (next->mm)
+			indirect_branch_prediction_barrier();
+	}
+
+	if (static_branch_unlikely(&switch_to_cond_ibpb)) {
+		/*
+		 * Both tasks' threadinfo flags are checked for TIF_SPEC_IB.
+		 *
+		 * For an outgoing sandboxed task which has TIF_SPEC_IB set
+		 * via seccomp this is needed because it might be malicious
+		 * and the next user task switching in might not have it
+		 * set.
+		 *
+		 * For an incoming task which has set TIF_SPEC_IB itself
+		 * via prctl() this is needed because the previous user
+		 * task might be malicious and have the flag unset.
+		 *
+		 * This could be optimized by keeping track of the last
+		 * user task per cpu and avoiding the barrier when the task
+		 * is immediately scheduled back and the thread inbetween
+		 * was a kernel thread. It's dubious whether that'd be
+		 * worth the extra load/store and conditional operations.
+		 * Keep it optimized for the common case where the TIF bit
+		 * is not set.
+		 */
+		if ((prev_tif | next_tif) & _TIF_SPEC_IB)
+			indirect_branch_prediction_barrier();
+	}
+}
+
 #ifdef CONFIG_SMP
 extern void speculative_store_bypass_ht_init(void);
 #else
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -171,8 +171,6 @@ struct tlb_state {
 
 	u16 loaded_mm_asid;
 	u16 next_asid;
-	/* last user mm's ctx id */
-	u64 last_ctx_id;
 
 	/*
 	 * We can be in one of several states:
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,10 @@ u64 __ro_after_init x86_amd_ls_cfg_ssbd_
 
 /* Control conditional STIPB in switch_to() */
 DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+/* Control conditional IBPB in switch_to() */
+DEFINE_STATIC_KEY_FALSE(switch_to_cond_ibpb);
+/* Control unconditional IBPB in switch_to() */
+DEFINE_STATIC_KEY_FALSE(switch_to_always_ibpb);
 
 void __init check_bugs(void)
 {
@@ -331,7 +335,17 @@ spectre_v2_app2app_select_mitigation(enu
 	/* Initialize Indirect Branch Prediction Barrier */
 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+
+		switch (mode) {
+		case SPECTRE_V2_APP2APP_STRICT:
+			static_branch_enable(&switch_to_always_ibpb);
+			break;
+		default:
+			break;
+		}
+
+		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+			mode == SPECTRE_V2_APP2APP_STRICT ? "forced" : "conditional");
 	}
 
 	/* If enhanced IBRS is enabled no STIPB required */
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -58,6 +58,7 @@
 #include <asm/vm86.h>
 #include <asm/intel_rdt_sched.h>
 #include <asm/proto.h>
+#include <asm/spec-ctrl.h>
 
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
@@ -231,6 +232,7 @@ EXPORT_SYMBOL_GPL(start_thread);
 			     *next = &next_p->thread;
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
+	unsigned long prev_tif, next_tif;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
@@ -264,11 +266,16 @@ EXPORT_SYMBOL_GPL(start_thread);
 	if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
 		set_iopl_mask(next->iopl);
 
+	prev_tif = task_thread_info(prev_p)->flags;
+	next_tif = task_thread_info(next_p)->flags;
+	/* Indirect branch prediction barrier control */
+	switch_to_ibpb(next_p, prev_tif, next_tif);
+
 	/*
 	 * Now maybe handle debug registers and/or IO bitmaps
 	 */
-	if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
-		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
+	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
+		     prev_tif & _TIF_WORK_CTXSW_PREV))
 		__switch_to_xtra(prev_p, next_p, tss);
 
 	/*
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -55,6 +55,7 @@
 #include <asm/intel_rdt_sched.h>
 #include <asm/unistd.h>
 #include <asm/fsgsbase.h>
+#include <asm/spec-ctrl.h>
 #ifdef CONFIG_IA32_EMULATION
 /* Not included via unistd.h */
 #include <asm/unistd_32_ia32.h>
@@ -552,6 +553,7 @@ void compat_start_thread(struct pt_regs
 	struct thread_struct *next = &next_p->thread;
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
+	unsigned long prev_tif, next_tif;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);
 
@@ -617,11 +619,16 @@ void compat_start_thread(struct pt_regs
 	/* Reload sp0. */
 	update_task_stack(next_p);
 
+	prev_tif = task_thread_info(prev_p)->flags;
+	next_tif = task_thread_info(next_p)->flags;
+	/* Indirect branch prediction barrier control */
+	switch_to_ibpb(next_p, prev_tif, next_tif);
+
 	/*
 	 * Now maybe reload the debug registers and handle I/O bitmaps
 	 */
-	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
-		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
+	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
+		     prev_tif & _TIF_WORK_CTXSW_PREV))
 		__switch_to_xtra(prev_p, next_p, tss);
 
 #ifdef CONFIG_XEN_PV
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -181,19 +181,6 @@ static void sync_current_stack_to_mm(str
 	}
 }
 
-static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
-{
-	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
-	 * issue a IBPB to stop user->user Spectre-v2 attacks.
-	 *
-	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
-	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
-}
-
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			struct task_struct *tsk)
 {
@@ -292,23 +279,6 @@ void switch_mm_irqs_off(struct mm_struct
 		new_asid = prev_asid;
 		need_flush = true;
 	} else {
-		u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
-
-		/*
-		 * Avoid user/user BTB poisoning by flushing the branch
-		 * predictor when switching between processes. This stops
-		 * one process from doing Spectre-v2 attacks on another.
-		 *
-		 * As an optimization, flush indirect branches only when
-		 * switching into a processes that can't be ptrace by the
-		 * current one (as in such case, attacker has much more
-		 * convenient way how to tamper with the next process than
-		 * branch buffer poisoning).
-		 */
-		if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
-				ibpb_needed(tsk, last_ctx_id))
-			indirect_branch_prediction_barrier();
-
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
 			 * If our current stack is in vmalloc space and isn't
@@ -365,14 +335,6 @@ void switch_mm_irqs_off(struct mm_struct
 		trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
 	}
 
-	/*
-	 * Record last user mm's context id, so we can avoid
-	 * flushing branch buffer with IBPB if we switch back
-	 * to the same user.
-	 */
-	if (next != &init_mm)
-		this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
-
 	/* Make sure we write CR3 before loaded_mm. */
 	barrier();
 
@@ -441,7 +403,6 @@ void initialize_tlbstate_and_flush(void)
 	write_cr3(build_cr3(mm->pgd, 0));
 
 	/* Reinitialize tlbstate. */
-	this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (16 preceding siblings ...)
  2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  1:23   ` Tim Chen
  2018-11-21 20:14 ` [patch 19/24] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Avoid-switch_to_xtra---calls.patch --]
[-- Type: text/plain, Size: 2900 bytes --]

The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke
__switch_to_xtra() when:

 - CONFIG_SMP is disabled

 - The conditional STIPB mode is disabled

The TIF_SPEC_IB bit still controls IBPB in both cases.

Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at
run time when the static key controlling the conditional STIBP mode is
disabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/spec-ctrl.h   |    5 +++++
 arch/x86/include/asm/thread_info.h |   13 +++++++++++--
 arch/x86/kernel/process_32.c       |    9 +++++++++
 arch/x86/kernel/process_64.c       |    9 +++++++++
 4 files changed, 34 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -115,8 +115,13 @@ static inline void switch_to_ibpb(struct
 }
 
 #ifdef CONFIG_SMP
+static __always_inline bool switch_to_xtra_needs_spec_ib(void)
+{
+	return static_branch_likely(&switch_to_cond_stibp);
+}
 extern void speculative_store_bypass_ht_init(void);
 #else
+static inline bool switch_to_xtra_needs_spec_ib(void) { return false; }
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -147,9 +147,18 @@ struct thread_info {
 	 _TIF_FSCHECK)
 
 /* flags to check in __switch_to() */
-#define _TIF_WORK_CTXSW							\
+#define _TIF_WORK_CTXSW_BASE						\
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
-	 _TIF_SSBD|_TIF_SPEC_IB)
+	 _TIF_SSBD)
+
+/*
+ * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
+ */
+#ifdef CONFIG_SMP
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+#else
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
+#endif
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -272,6 +272,15 @@ EXPORT_SYMBOL_GPL(start_thread);
 	switch_to_ibpb(next_p, prev_tif, next_tif);
 
 	/*
+	 * Avoid __switch_to_xtra() invocation when conditional stpib is
+	 * disabled.
+	 */
+	if (!switch_to_xtra_needs_spec_ib()) {
+		prev_tif &= ~_TIF_SPEC_IB;
+		next_tif &= ~_TIF_SPEC_IB;
+	}
+
+	/*
 	 * Now maybe handle debug registers and/or IO bitmaps
 	 */
 	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -625,6 +625,15 @@ void compat_start_thread(struct pt_regs
 	switch_to_ibpb(next_p, prev_tif, next_tif);
 
 	/*
+	 * Avoid __switch_to_xtra() invocation when conditional stpib is
+	 * disabled.
+	 */
+	if (!switch_to_xtra_needs_spec_ib()) {
+		prev_tif &= ~_TIF_SPEC_IB;
+		next_tif &= ~_TIF_SPEC_IB;
+	}
+
+	/*
 	 * Now maybe reload the debug registers and handle I/O bitmaps
 	 */
 	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 19/24] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (17 preceding siblings ...)
  2018-11-21 20:14 ` [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-21 20:14 ` [patch 20/24] x86/speculation: Split out TIF update Thomas Gleixner
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: ptrace--Remove-unused-ptrace_may_access_sched--.patch --]
[-- Type: text/plain, Size: 2683 bytes --]

The IBPB control code in x86 removed the usage. Remove the functionality
which was introduced for this.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/ptrace.h |   17 -----------------
 kernel/ptrace.c        |   10 ----------
 2 files changed, 27 deletions(-)

--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,15 +64,12 @@ extern void exit_ptrace(struct task_stru
 #define PTRACE_MODE_NOAUDIT	0x04
 #define PTRACE_MODE_FSCREDS	0x08
 #define PTRACE_MODE_REALCREDS	0x10
-#define PTRACE_MODE_SCHED	0x20
-#define PTRACE_MODE_IBPB	0x40
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)
-#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -90,20 +87,6 @@ extern void exit_ptrace(struct task_stru
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
 
-/**
- * ptrace_may_access - check whether the caller is permitted to access
- * a target task.
- * @task: target task
- * @mode: selects type of access and caller credentials
- *
- * Returns true on success, false on denial.
- *
- * Similar to ptrace_may_access(). Only to be called from context switch
- * code. Does not call into audit and the regular LSM hooks due to locking
- * constraints.
- */
-extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode);
-
 static inline int ptrace_reparented(struct task_struct *child)
 {
 	return !same_thread_group(child->real_parent, child->parent);
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -261,9 +261,6 @@ static int ptrace_check_attach(struct ta
 
 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 {
-	if (mode & PTRACE_MODE_SCHED)
-		return false;
-
 	if (mode & PTRACE_MODE_NOAUDIT)
 		return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);
 	else
@@ -331,16 +328,9 @@ static int __ptrace_may_access(struct ta
 	     !ptrace_has_cap(mm->user_ns, mode)))
 	    return -EPERM;
 
-	if (mode & PTRACE_MODE_SCHED)
-		return 0;
 	return security_ptrace_access_check(task, mode);
 }
 
-bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode)
-{
-	return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED);
-}
-
 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
 {
 	int err;



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 20/24] x86/speculation: Split out TIF update
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (18 preceding siblings ...)
  2018-11-21 20:14 ` [patch 19/24] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  2:13   ` Tim Chen
  2018-11-22  7:43   ` [patch 20/24] x86/speculation: Split out TIF update Ingo Molnar
  2018-11-21 20:14 ` [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
                   ` (5 subsequent siblings)
  25 siblings, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Split-out-TIF-update.patch --]
[-- Type: text/plain, Size: 2256 bytes --]

The update of the TIF_SSBD flag and the conditional speculation control MSR
update is done in the ssb_prctl_set() function directly. The upcoming prctl
support for controlling indirect branch speculation via STIBP needs the
same mechanism.

Split the code out and make it reusable.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |   31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -703,10 +703,25 @@ static void ssb_select_mitigation(void)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Speculation prctl: " fmt
 
-static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 {
 	bool update;
 
+	if (on)
+		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+
+	/*
+	 * If being set on non-current task, delay setting the CPU
+	 * mitigation until it is scheduled next.
+	 */
+	if (tsk == current && update)
+		speculation_ctrl_update_current();
+}
+
+static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
 	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL &&
 	    ssb_mode != SPEC_STORE_BYPASS_SECCOMP)
 		return -ENXIO;
@@ -717,28 +732,20 @@ static int ssb_prctl_set(struct task_str
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		update = test_and_clear_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, false);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
+		task_update_spec_tif(task, TIF_SSBD, true);
 		break;
 	default:
 		return -ERANGE;
 	}
-
-	/*
-	 * If being set on non-current task, delay setting the CPU
-	 * mitigation until it is next scheduled.
-	 */
-	if (task == current && update)
-		speculation_ctrl_update_current();
-
 	return 0;
 }
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (19 preceding siblings ...)
  2018-11-21 20:14 ` [patch 20/24] x86/speculation: Split out TIF update Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  7:34   ` Ingo Molnar
  2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Prepare-arch_smt_update---for-PRCTL-mode.patch --]
[-- Type: text/plain, Size: 2215 bytes --]

The upcoming fine grained per task STIBP control needs to be updated on CPU
hotplug as well.

Split out the code which controls the strict mode so the prctl control code
can be added later.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |   46 ++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -531,40 +531,44 @@ static void __init spectre_v2_select_mit
 	arch_smt_update();
 }
 
-static bool stibp_needed(void)
+static void update_stibp_msr(void *info)
 {
-	/* Enhanced IBRS makes using STIBP unnecessary. */
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
-		return false;
-
-	/* Check for strict app2app mitigation mode */
-	return spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT;
+	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 }
 
-static void update_stibp_msr(void *info)
+/* Update x86_spec_ctrl_base in case SMT state changed. */
+static void update_stibp_strict(void)
 {
-	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
+
+	if (sched_smt_active())
+		mask |= SPEC_CTRL_STIBP;
+
+	if (mask == x86_spec_ctrl_base)
+		return;
+
+	pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+		mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
+	x86_spec_ctrl_base = mask;
+	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
 void arch_smt_update(void)
 {
-	u64 mask;
-
-	if (!stibp_needed())
+	/* Enhanced IBRS makes using STIBP unnecessary. No update required. */
+	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
 	mutex_lock(&spec_ctrl_mutex);
 
-	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
-	if (sched_smt_active())
-		mask |= SPEC_CTRL_STIBP;
-
-	if (mask != x86_spec_ctrl_base) {
-		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
-			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
-		x86_spec_ctrl_base = mask;
-		on_each_cpu(update_stibp_msr, NULL, 1);
+	switch (spectre_v2_app2app) {
+	case SPECTRE_V2_APP2APP_NONE:
+		break;
+	case SPECTRE_V2_APP2APP_STRICT:
+		update_stibp_strict();
+		break;
 	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (20 preceding siblings ...)
  2018-11-21 20:14 ` [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  7:10   ` Ingo Molnar
                     ` (2 more replies)
  2018-11-21 20:14 ` [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app Thomas Gleixner
                   ` (3 subsequent siblings)
  25 siblings, 3 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

[-- Attachment #1: x86-speculation-Create-PRCTL-interface-to-restrict-indirect-branch-speculation.patch --]
[-- Type: text/plain, Size: 7598 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>

Add the PR_SPEC_INDIR_BRANCH option for the PR_GET_SPECULATION_CTRL and
PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
indirect branch speculation via STIBP.

Invocations:
 Check indirect branch speculation status with
 - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);

 Enable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);

 Disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);

 Force disable indirect branch speculation with
 - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);

See Documentation/userspace-api/spec_ctrl.rst.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 Documentation/userspace-api/spec_ctrl.rst |    9 +++
 arch/x86/include/asm/nospec-branch.h      |    1 
 arch/x86/kernel/cpu/bugs.c                |   71 ++++++++++++++++++++++++++++++
 include/linux/sched.h                     |    9 +++
 include/uapi/linux/prctl.h                |    1 
 tools/include/uapi/linux/prctl.h          |    1 
 6 files changed, 92 insertions(+)

--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,12 @@ Speculation misfeature controls
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_SPEC_INDIR_BRANCH: Indirect Branch Speculation in User Processes
+                        (Mitigate Spectre V2 style attacks against user processes)
+
+  Invocations:
+   * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -232,6 +232,7 @@ enum spectre_v2_mitigation {
 enum spectre_v2_app2app_mitigation {
 	SPECTRE_V2_APP2APP_NONE,
 	SPECTRE_V2_APP2APP_STRICT,
+	SPECTRE_V2_APP2APP_PRCTL,
 };
 
 /* The Speculative Store Bypass disable variants */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -567,6 +567,8 @@ void arch_smt_update(void)
 	case SPECTRE_V2_APP2APP_STRICT:
 		update_stibp_strict();
 		break;
+	case SPECTRE_V2_APP2APP_PRCTL:
+		break;
 	}
 
 	mutex_unlock(&spec_ctrl_mutex);
@@ -753,12 +755,56 @@ static int ssb_prctl_set(struct task_str
 	return 0;
 }
 
+static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
+			return 0;
+		/*
+		 * Indirect branch speculation is always disabled in strict
+		 * mode.
+		 */
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
+			return -EPERM;
+		task_clear_spec_indir_branch_disable(task);
+		task_update_spec_tif(task, TIF_SPEC_IB, false);
+		break;
+	case PR_SPEC_DISABLE:
+		/*
+		 * Indirect branch speculation is always allowed when
+		 * mitigation is force disabled.
+		 */
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
+			return -EPERM;
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		task_update_spec_tif(task, TIF_SPEC_IB, true);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
+			return -EPERM;
+		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		task_set_spec_indir_branch_force_disable(task);
+		task_update_spec_tif(task, TIF_SPEC_IB, true);
+		break;
+	default:
+		return -ERANGE;
+	}
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
+	case PR_SPEC_INDIR_BRANCH:
+		return indir_branch_prctl_set(task, ctrl);
 	default:
 		return -ENODEV;
 	}
@@ -791,11 +837,34 @@ static int ssb_prctl_get(struct task_str
 	}
 }
 
+static int indir_branch_prctl_get(struct task_struct *task)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		return PR_SPEC_NOT_AFFECTED;
+
+	switch (spectre_v2_app2app) {
+	case SPECTRE_V2_APP2APP_NONE:
+		return PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_PRCTL:
+		if (task_spec_indir_branch_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (test_tsk_thread_flag(task, TIF_SPEC_IB))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_STRICT:
+		return PR_SPEC_DISABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_get(task);
+	case PR_SPEC_INDIR_BRANCH:
+		return indir_branch_prctl_get(task);
 	default:
 		return -ENODEV;
 	}
@@ -975,6 +1044,8 @@ static char *stibp_state(void)
 		return ", STIBP: disabled";
 	case SPECTRE_V2_APP2APP_STRICT:
 		return ", STIBP: forced";
+	case SPECTRE_V2_APP2APP_PRCTL:
+		return "";
 	}
 	return "";
 }
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation restricted */
+#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation permanentely restricted */
 
 #define TASK_PFA_TEST(name, func)					\
 	static inline bool task_##func(struct task_struct *p)		\
@@ -1484,6 +1486,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ss
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (21 preceding siblings ...)
  2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  7:17   ` Ingo Molnar
  2018-11-21 20:14 ` [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode Thomas Gleixner
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation--Enable-PRCTL-mode-for-spectre_v2_app2app.patch --]
[-- Type: text/plain, Size: 4565 bytes --]

Now that all prerequisites are in place:

 - Add the prctl command line option

 - Default the 'auto' mode to 'prctl'

 - When SMT state changes, update the static key which controls the
   conditional STIBP evaluation on context switch.

 - At init update the static key which controls the conditional IBPB
   evaluation on context switch.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |    5 ++
 arch/x86/kernel/cpu/bugs.c                      |   46 +++++++++++++++++++++---
 2 files changed, 45 insertions(+), 6 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4246,7 +4246,10 @@
 				  by spectre_v2=off
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is off.
+				  Default is prctl.
+			prctl   - Indirect branch speculation is enabled, but
+				  mitigation can be enabled via prctl per thread.
+				  The mitigation control state is inherited on fork.
 
 			Not specifying this option is equivalent to
 			spectre_v2_app2app=auto.
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -255,11 +255,13 @@ enum spectre_v2_app2app_cmd {
 	SPECTRE_V2_APP2APP_CMD_NONE,
 	SPECTRE_V2_APP2APP_CMD_AUTO,
 	SPECTRE_V2_APP2APP_CMD_FORCE,
+	SPECTRE_V2_APP2APP_CMD_PRCTL,
 };
 
 static const char *spectre_v2_app2app_strings[] = {
 	[SPECTRE_V2_APP2APP_NONE]	= "App-App Vulnerable",
 	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: STIBP protection",
+	[SPECTRE_V2_APP2APP_PRCTL]	= "App-App Mitigation: STIBP via prctl",
 };
 
 static const struct {
@@ -270,6 +272,7 @@ static const struct {
 	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,	false },
 	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_APP2APP_CMD_FORCE,	true  },
+	{ "prctl",	SPECTRE_V2_APP2APP_CMD_PRCTL,	false },
 };
 
 static void __init spec_v2_app_print_cond(const char *reason, bool secure)
@@ -324,12 +327,15 @@ spectre_v2_app2app_select_mitigation(enu
 		smt_possible = false;
 
 	switch (spectre_v2_parse_app2app_cmdline(v2_cmd)) {
-	case SPECTRE_V2_APP2APP_CMD_AUTO:
 	case SPECTRE_V2_APP2APP_CMD_NONE:
 		goto set_mode;
 	case SPECTRE_V2_APP2APP_CMD_FORCE:
 	       mode = SPECTRE_V2_APP2APP_STRICT;
 	       break;
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+	case SPECTRE_V2_APP2APP_CMD_PRCTL:
+		mode = SPECTRE_V2_APP2APP_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -340,6 +346,9 @@ spectre_v2_app2app_select_mitigation(enu
 		case SPECTRE_V2_APP2APP_STRICT:
 			static_branch_enable(&switch_to_always_ibpb);
 			break;
+		case SPECTRE_V2_APP2APP_PRCTL:
+			static_branch_enable(&switch_to_cond_ibpb);
+			break;
 		default:
 			break;
 		}
@@ -352,6 +361,12 @@ spectre_v2_app2app_select_mitigation(enu
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
 		return;
 
+	/*
+	 * If STIBP is not available or SMT is not possible clear the STIPB
+	 * mode.
+	 */
+	if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
+		mode = SPECTRE_V2_APP2APP_NONE;
 set_mode:
 	spectre_v2_app2app = mode;
 	/* Only print the STIBP mode when SMT possible */
@@ -552,6 +567,18 @@ static void update_stibp_strict(void)
 	on_each_cpu(update_stibp_msr, NULL, 1);
 }
 
+/* Update the static key controlling the evaluation of TIF_SPEC_IB */
+static void update_indir_branch_cond(void)
+{
+	if (!IS_ENABLED(CONFIG_SMP))
+		return;
+
+	if (sched_smt_active())
+		static_branch_enable(&switch_to_cond_stibp);
+	else
+		static_branch_disable(&switch_to_cond_stibp);
+}
+
 void arch_smt_update(void)
 {
 	/* Enhanced IBRS makes using STIBP unnecessary. No update required. */
@@ -567,6 +594,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_APP2APP_PRCTL:
+		update_indir_branch_cond();
 		break;
 	}
 
@@ -1044,17 +1072,25 @@ static char *stibp_state(void)
 	case SPECTRE_V2_APP2APP_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_APP2APP_PRCTL:
-		return "";
+		return ", STIBP: opt-in";
 	}
 	return "";
 }
 
 static char *ibpb_state(void)
 {
-	if (boot_cpu_has(X86_FEATURE_USE_IBPB))
-		return ", IBPB";
-	else
+	if (!boot_cpu_has(X86_FEATURE_IBPB))
 		return "";
+
+	switch (spectre_v2_app2app) {
+	case SPECTRE_V2_APP2APP_NONE:
+		return ", IBPB: disabled";
+	case SPECTRE_V2_APP2APP_STRICT:
+		return ", IBPB: forced";
+	case SPECTRE_V2_APP2APP_PRCTL:
+		return ", IBBP: opt-in";
+	}
+	return "";
 }
 
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,



^ permalink raw reply	[flat|nested] 95+ messages in thread

* [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (22 preceding siblings ...)
  2018-11-21 20:14 ` [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app Thomas Gleixner
@ 2018-11-21 20:14 ` Thomas Gleixner
  2018-11-22  2:24   ` Tim Chen
  2018-11-22  7:26   ` Ingo Molnar
  2018-11-21 23:48 ` [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Tim Chen
  2018-11-22  9:45 ` Peter Zijlstra
  25 siblings, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:14 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

[-- Attachment #1: x86-speculation-Add-seccomp-Spectre-v2-app-to-app-protection-mode.patch --]
[-- Type: text/plain, Size: 5513 bytes --]

From: Jiri Kosina <jkosina@suse.cz>

If 'prctl' mode of app2app protection from spectre v2 is selected on the
kernel command-line, STIBP and IBPB are applied on tasks which restrict
their indirect branch speculation via prctl.

SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
makes sense to prevent spectre v2 application to application attacks as
well.

The mitigation guide documents how STIPB works:
    
   Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
   prevents the predicted targets of indirect branches on any logical
   processor of that core from being controlled by software that executes
   (or executed previously) on another logical processor of the same core.
    
Ergo setting STIBP protects the task itself from being attacked from a task
running on a different hyper-thread and protects the tasks running on
different hyper-threads from being attacked.
    
IBPB is issued when the task switches out, so malicious sandbox code cannot
mistrain the branch predictor for the next user space task on the same
logical processor.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 Documentation/admin-guide/kernel-parameters.txt |    7 +++++-
 arch/x86/include/asm/nospec-branch.h            |    1 
 arch/x86/kernel/cpu/bugs.c                      |   27 +++++++++++++++++++-----
 3 files changed, 29 insertions(+), 6 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4228,10 +4228,15 @@
 				  by spectre_v2=off
 			auto    - Kernel selects the mitigation depending on
 				  the available CPU features and vulnerability.
-				  Default is prctl.
 			prctl   - Indirect branch speculation is enabled, but
 				  mitigation can be enabled via prctl per thread.
 				  The mitigation control state is inherited on fork.
+			seccomp - Same as "prctl" above, but all seccomp threads
+				  will enable the mitigation unless they explicitly
+				  opt out.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"
 
 			Not specifying this option is equivalent to
 			spectre_v2_app2app=auto.
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -233,6 +233,7 @@ enum spectre_v2_app2app_mitigation {
 	SPECTRE_V2_APP2APP_NONE,
 	SPECTRE_V2_APP2APP_STRICT,
 	SPECTRE_V2_APP2APP_PRCTL,
+	SPECTRE_V2_APP2APP_SECCOMP,
 };
 
 /* The Speculative Store Bypass disable variants */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -256,12 +256,14 @@ enum spectre_v2_app2app_cmd {
 	SPECTRE_V2_APP2APP_CMD_AUTO,
 	SPECTRE_V2_APP2APP_CMD_FORCE,
 	SPECTRE_V2_APP2APP_CMD_PRCTL,
+	SPECTRE_V2_APP2APP_CMD_SECCOMP,
 };
 
 static const char *spectre_v2_app2app_strings[] = {
 	[SPECTRE_V2_APP2APP_NONE]	= "App-App Vulnerable",
-	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: STIBP protection",
-	[SPECTRE_V2_APP2APP_PRCTL]	= "App-App Mitigation: STIBP via prctl",
+	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: forced protection",
+	[SPECTRE_V2_APP2APP_PRCTL]	= "App-App Mitigation: prctl opt-in",
+	[SPECTRE_V2_APP2APP_SECCOMP]	= "App-App Mitigation: seccomp and prctl opt-in",
 };
 
 static const struct {
@@ -332,10 +334,16 @@ spectre_v2_app2app_select_mitigation(enu
 	case SPECTRE_V2_APP2APP_CMD_FORCE:
 	       mode = SPECTRE_V2_APP2APP_STRICT;
 	       break;
-	case SPECTRE_V2_APP2APP_CMD_AUTO:
 	case SPECTRE_V2_APP2APP_CMD_PRCTL:
 		mode = SPECTRE_V2_APP2APP_PRCTL;
 		break;
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+	case SPECTRE_V2_APP2APP_CMD_SECCOMP:
+		if (IS_ENABLED(CONFIG_SECCOMP))
+			mode = SPECTRE_V2_APP2APP_SECCOMP;
+		else
+			mode = SPECTRE_V2_APP2APP_PRCTL;
+		break;
 	}
 
 	/* Initialize Indirect Branch Prediction Barrier */
@@ -347,6 +355,7 @@ spectre_v2_app2app_select_mitigation(enu
 			static_branch_enable(&switch_to_always_ibpb);
 			break;
 		case SPECTRE_V2_APP2APP_PRCTL:
+		case SPECTRE_V2_APP2APP_SECCOMP:
 			static_branch_enable(&switch_to_cond_ibpb);
 			break;
 		default:
@@ -594,6 +603,7 @@ void arch_smt_update(void)
 		update_stibp_strict();
 		break;
 	case SPECTRE_V2_APP2APP_PRCTL:
+	case SPECTRE_V2_APP2APP_SECCOMP:
 		update_indir_branch_cond();
 		break;
 	}
@@ -842,6 +852,8 @@ void arch_seccomp_spec_mitigate(struct t
 {
 	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
 		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+	if (spectre_v2_app2app == SPECTRE_V2_APP2APP_SECCOMP)
+		indir_branch_prctl_set(task, PR_SPEC_FORCE_DISABLE);
 }
 #endif
 
@@ -873,6 +885,7 @@ static int indir_branch_prctl_get(struct
 	case SPECTRE_V2_APP2APP_NONE:
 		return PR_SPEC_ENABLE;
 	case SPECTRE_V2_APP2APP_PRCTL:
+	case SPECTRE_V2_APP2APP_SECCOMP:
 		if (task_spec_indir_branch_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (test_tsk_thread_flag(task, TIF_SPEC_IB))
@@ -1072,7 +1085,9 @@ static char *stibp_state(void)
 	case SPECTRE_V2_APP2APP_STRICT:
 		return ", STIBP: forced";
 	case SPECTRE_V2_APP2APP_PRCTL:
-		return ", STIBP: opt-in";
+		return ", STIBP: prctl opt-in";
+	case SPECTRE_V2_APP2APP_SECCOMP:
+		return ", STIBP: seccomp and prctl opt-in";
 	}
 	return "";
 }
@@ -1088,7 +1103,9 @@ static char *ibpb_state(void)
 	case SPECTRE_V2_APP2APP_STRICT:
 		return ", IBPB: forced";
 	case SPECTRE_V2_APP2APP_PRCTL:
-		return ", IBBP: opt-in";
+		return ", IBBP: prctl opt-in";
+	case SPECTRE_V2_APP2APP_SECCOMP:
+		return ", IBPB: seccomp and prctl opt-in";
 	}
 	return "";
 }



^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 20:14 ` [patch 01/24] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
@ 2018-11-21 20:28   ` Linus Torvalds
  2018-11-21 20:30     ` Thomas Gleixner
  2018-11-21 20:33     ` Linus Torvalds
  0 siblings, 2 replies; 95+ messages in thread
From: Linus Torvalds @ 2018-11-21 20:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 12:18 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> From: Tim Chen "Reduced Data Speculation" is an obsolete term.

Ugh. Now you're using the broken quilt thing that makes a mush of emails for me.

                  Linus

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 20:28   ` Linus Torvalds
@ 2018-11-21 20:30     ` Thomas Gleixner
  2018-11-21 20:33     ` Linus Torvalds
  1 sibling, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:30 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook, Tim Chen

On Wed, 21 Nov 2018, Linus Torvalds wrote:

> On Wed, Nov 21, 2018 at 12:18 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > From: Tim Chen "Reduced Data Speculation" is an obsolete term.
> 
> Ugh. Now you're using the broken quilt thing that makes a mush of emails for me.

Gah. Dammit, I forgot to disable this file inline thingy. Sorry about that.

/me goes and fixes

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 20:28   ` Linus Torvalds
  2018-11-21 20:30     ` Thomas Gleixner
@ 2018-11-21 20:33     ` Linus Torvalds
  2018-11-21 22:48       ` Thomas Gleixner
  1 sibling, 1 reply; 95+ messages in thread
From: Linus Torvalds @ 2018-11-21 20:33 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 12:28 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> Ugh. Now you're using the broken quilt thing that makes a mush of emails for me.

Reading the series in alpine makes it look fine. No testing, but each
patch seems sensible.

And yes, triggering on seccomp makes more sense than dumpable to me.

               Linus

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-21 20:14 ` [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
@ 2018-11-21 20:33   ` Borislav Petkov
  2018-11-21 20:36     ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 20:33 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 09:14:35PM +0100, Thomas Gleixner wrote:
> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
> user space exploits from hyperthread sibling.
> 
> Disable STIBP when enhanced IBRS is used.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/kernel/cpu/bugs.c |    7 +++++++
>  1 file changed, 7 insertions(+)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -321,6 +321,10 @@ static bool stibp_needed(void)
>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>  		return false;
>  
> +	/* Enhanced IBRS makes using STIBP unnecessary. */
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> +		return false;
> +
>  	if (!boot_cpu_has(X86_FEATURE_STIBP))
>  		return false;
>  
> @@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf
>  
>  static char *stibp_state(void)
>  {
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> +		return "";

If
	spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED

then SPEC_CTRL_STIBP should not be set in x86_spec_ctrl_base
(stibp_needed() prevents the setting in arch_smt_update()) so the above
check should not be needed.

I *think*.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-21 20:33   ` Borislav Petkov
@ 2018-11-21 20:36     ` Thomas Gleixner
  2018-11-21 22:01       ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:36 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

On Wed, 21 Nov 2018, Borislav Petkov wrote:
> On Wed, Nov 21, 2018 at 09:14:35PM +0100, Thomas Gleixner wrote:
> > From: Tim Chen <tim.c.chen@linux.intel.com>
> > 
> > If enhanced IBRS is active, STIBP is redundant for mitigating Spectre v2
> > user space exploits from hyperthread sibling.
> > 
> > Disable STIBP when enhanced IBRS is used.
> > 
> > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > 
> > ---
> >  arch/x86/kernel/cpu/bugs.c |    7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -321,6 +321,10 @@ static bool stibp_needed(void)
> >  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
> >  		return false;
> >  
> > +	/* Enhanced IBRS makes using STIBP unnecessary. */
> > +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> > +		return false;
> > +
> >  	if (!boot_cpu_has(X86_FEATURE_STIBP))
> >  		return false;
> >  
> > @@ -846,6 +850,9 @@ static ssize_t l1tf_show_state(char *buf
> >  
> >  static char *stibp_state(void)
> >  {
> > +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> > +		return "";
> 
> If
> 	spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED
> 
> then SPEC_CTRL_STIBP should not be set in x86_spec_ctrl_base
> (stibp_needed() prevents the setting in arch_smt_update()) so the above
> check should not be needed.
> 
> I *think*.

Yes, makes sense.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 10/24] sched/smt: Expose sched_smt_present static key
  2018-11-21 20:14 ` [patch 10/24] sched/smt: Expose sched_smt_present static key Thomas Gleixner
@ 2018-11-21 20:41   ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 20:41 UTC (permalink / raw)
  To: LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Wed, 21 Nov 2018, Thomas Gleixner wrote:

> Make the scheduler's 'sched_smt_present' static key globaly available, so
> it can be used in the x86 speculation control code.
> 
> Provide a query function and a stub for the CONFIG_SMP=n case.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  include/linux/sched/topology.h |    9 +++++++++
>  kernel/sched/sched.h           |    3 ---
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -34,10 +34,19 @@
>  #define SD_NUMA			0x4000	/* cross-node balancing */
>  
>  #ifdef CONFIG_SCHED_SMT
> +extern struct static_key_false sched_smt_present;
> +
> +static __always_inline bool sched_smt_active(void)
> +{
> +	return static_branch_likely(&sched_smt_present);

0day just told me that this breaks ia64.

/me goes to fix



^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-11-21 20:36     ` Thomas Gleixner
@ 2018-11-21 22:01       ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 22:01 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

On Wed, 21 Nov 2018, Thomas Gleixner wrote:
> On Wed, 21 Nov 2018, Borislav Petkov wrote:
> > >  static char *stibp_state(void)
> > >  {
> > > +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> > > +		return "";
> > 
> > If
> > 	spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED
> > 
> > then SPEC_CTRL_STIBP should not be set in x86_spec_ctrl_base
> > (stibp_needed() prevents the setting in arch_smt_update()) so the above
> > check should not be needed.
> > 
> > I *think*.
> 
> Yes, makes sense.

For this patch, but for the final thing it does not.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 20:33     ` Linus Torvalds
@ 2018-11-21 22:48       ` Thomas Gleixner
  2018-11-21 22:53         ` Borislav Petkov
  2018-11-21 23:04         ` Josh Poimboeuf
  0 siblings, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 22:48 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux List Kernel Mailing, the arch/x86 maintainers,
	Peter Zijlstra, Andrew Lutomirski, Jiri Kosina, thomas.lendacky,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	dave.hansen, Casey Schaufler, Mallick, Asit K, Van De Ven, Arjan,
	jcm, longman9394, Greg KH, david.c.stewart, Kees Cook, Tim Chen

On Wed, 21 Nov 2018, Linus Torvalds wrote:

> On Wed, Nov 21, 2018 at 12:28 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > Ugh. Now you're using the broken quilt thing that makes a mush of emails for me.
> 
> Reading the series in alpine makes it look fine. No testing, but each
> patch seems sensible.
> 
> And yes, triggering on seccomp makes more sense than dumpable to me.

That's what we ended up with SSBD as well. We had the same discussion before.

Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
but that's kinda horrible as well. But then, all of this is horrible.

Any better ideas?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:48       ` Thomas Gleixner
@ 2018-11-21 22:53         ` Borislav Petkov
  2018-11-21 22:55           ` Thomas Gleixner
  2018-11-21 22:55           ` Arjan van de Ven
  2018-11-21 23:04         ` Josh Poimboeuf
  1 sibling, 2 replies; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 22:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 11:48:41PM +0100, Thomas Gleixner wrote:
> Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
> but that's kinda horrible as well. But then, all of this is horrible.
> 
> Any better ideas?

It needs to have "task isolation" in there somewhere as this is what it
does, practically. But it needs to be more precise as in "isolates the
tasks from influence due to shared hardware." :)

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:53         ` Borislav Petkov
@ 2018-11-21 22:55           ` Thomas Gleixner
  2018-11-21 22:55           ` Arjan van de Ven
  1 sibling, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-21 22:55 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, Van De Ven, Arjan, jcm, longman9394, Greg KH,
	david.c.stewart, Kees Cook, Tim Chen

On Wed, 21 Nov 2018, Borislav Petkov wrote:

> On Wed, Nov 21, 2018 at 11:48:41PM +0100, Thomas Gleixner wrote:
> > Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
> > but that's kinda horrible as well. But then, all of this is horrible.
> > 
> > Any better ideas?
> 
> It needs to have "task isolation" in there somewhere as this is what it
> does, practically. But it needs to be more precise as in "isolates the
> tasks from influence due to shared hardware." :)

Not only shared hardware. IBPB is about tasks running back to back on the
same CPU.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:53         ` Borislav Petkov
  2018-11-21 22:55           ` Thomas Gleixner
@ 2018-11-21 22:55           ` Arjan van de Ven
  2018-11-21 22:56             ` Borislav Petkov
  1 sibling, 1 reply; 95+ messages in thread
From: Arjan van de Ven @ 2018-11-21 22:55 UTC (permalink / raw)
  To: Borislav Petkov, Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On 11/21/2018 2:53 PM, Borislav Petkov wrote:
> On Wed, Nov 21, 2018 at 11:48:41PM +0100, Thomas Gleixner wrote:
>> Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
>> but that's kinda horrible as well. But then, all of this is horrible.
>>
>> Any better ideas?
> 
> It needs to have "task isolation" in there somewhere as this is what it
> does, practically. But it needs to be more precise as in "isolates the
> tasks from influence due to shared hardware." :)
> 

part of the problem is that "sharing" has multiple dimensions: time and space (e.g. hyperthreading)
which makes it hard to find a nice term for it other than describing who attacks whom


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:55           ` Arjan van de Ven
@ 2018-11-21 22:56             ` Borislav Petkov
  2018-11-21 23:07               ` Borislav Petkov
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 22:56 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Thomas Gleixner, Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 02:55:20PM -0800, Arjan van de Ven wrote:
> part of the problem is that "sharing" has multiple dimensions: time
> and space (e.g. hyperthreading) which makes it hard to find a nice
> term for it other than describing who attacks whom

Shared Hardware Isolation of Tasks ?

:-)))

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:48       ` Thomas Gleixner
  2018-11-21 22:53         ` Borislav Petkov
@ 2018-11-21 23:04         ` Josh Poimboeuf
  2018-11-21 23:08           ` Borislav Petkov
  1 sibling, 1 reply; 95+ messages in thread
From: Josh Poimboeuf @ 2018-11-21 23:04 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 11:48:41PM +0100, Thomas Gleixner wrote:
> On Wed, 21 Nov 2018, Linus Torvalds wrote:
> 
> > On Wed, Nov 21, 2018 at 12:28 PM Linus Torvalds
> > <torvalds@linux-foundation.org> wrote:
> > >
> > > Ugh. Now you're using the broken quilt thing that makes a mush of emails for me.
> > 
> > Reading the series in alpine makes it look fine. No testing, but each
> > patch seems sensible.
> > 
> > And yes, triggering on seccomp makes more sense than dumpable to me.
> 
> That's what we ended up with SSBD as well. We had the same discussion before.
> 
> Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
> but that's kinda horrible as well. But then, all of this is horrible.

Why not just 'user'?  Like SPECTRE_V2_USER_*.

-- 
Josh

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 22:56             ` Borislav Petkov
@ 2018-11-21 23:07               ` Borislav Petkov
  0 siblings, 0 replies; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 23:07 UTC (permalink / raw)
  To: Arjan van de Ven, Thomas Gleixner
  Cc: Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, dave.hansen, Casey Schaufler,
	Mallick, Asit K, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 11:56:46PM +0100, Borislav Petkov wrote:
> On Wed, Nov 21, 2018 at 02:55:20PM -0800, Arjan van de Ven wrote:
> > part of the problem is that "sharing" has multiple dimensions: time
> > and space (e.g. hyperthreading) which makes it hard to find a nice
> > term for it other than describing who attacks whom
> 
> Shared Hardware Isolation of Tasks ?

Ok, srsly:

spectre_v2_task_isol=

to mean, task isolation and it not being an abbreviation.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 23:04         ` Josh Poimboeuf
@ 2018-11-21 23:08           ` Borislav Petkov
  2018-11-22 17:30             ` Josh Poimboeuf
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 23:08 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Thomas Gleixner, Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 05:04:50PM -0600, Josh Poimboeuf wrote:
> Why not just 'user'?  Like SPECTRE_V2_USER_*.

Sure, a bit better except that it doesn't explain what it does, I'd say.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-21 20:14 ` [patch 15/24] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
@ 2018-11-21 23:43   ` Borislav Petkov
  2018-11-22  8:14     ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-21 23:43 UTC (permalink / raw)
  To: Thomas Gleixner, Tom Lendacky
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Wed, Nov 21, 2018 at 09:14:45PM +0100, Thomas Gleixner wrote:
> Add command line control for application to application indirect branch
> speculation mitigations.
> 
> The initial options are:
> 
>     -  on:   Unconditionally enabled
>     - off:   Unconditionally disabled
>     -auto:   Kernel selects mitigation (default off for now)
> 
> When the spectre_v2= command line argument is either 'on' or 'off' this
> implies that the application to application control follows that state even
> if when a contradicting spectre_v2_app2app= argument is supplied.
> 
> Originally-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |   22 +++
>  arch/x86/include/asm/nospec-branch.h            |   10 +
>  arch/x86/kernel/cpu/bugs.c                      |  133 ++++++++++++++++++++----
>  3 files changed, 146 insertions(+), 19 deletions(-)

...

> +static void __init
> +spectre_v2_app2app_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
> +{
> +	enum spectre_v2_app2app_mitigation mode = SPECTRE_V2_APP2APP_NONE;
> +	bool smt_possible = IS_ENABLED(CONFIG_SMP);
> +
> +	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
> +		return;
> +
> +	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
> +	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
> +		smt_possible = false;
> +
> +	switch (spectre_v2_parse_app2app_cmdline(v2_cmd)) {
> +	case SPECTRE_V2_APP2APP_CMD_AUTO:
> +	case SPECTRE_V2_APP2APP_CMD_NONE:
> +		goto set_mode;
> +	case SPECTRE_V2_APP2APP_CMD_FORCE:
> +	       mode = SPECTRE_V2_APP2APP_STRICT;
> +	       break;
> +	}
> +
> +	/* Initialize Indirect Branch Prediction Barrier */
> +	if (boot_cpu_has(X86_FEATURE_IBPB)) {
> +		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> +		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
> +	}

So AFAICT, if coming in here with AUTO, we won't enable IBPB and I
*think* AMD wants IBPB enabled. At least the whitepaper says:

"IBPB combined with Reptoline software support is the AMD recommended
setting for Linux mitigation of Google Project Zero Variant 2
(Spectre)."

from https://www.amd.com/en/corporate/security-updates

Tom, am I completely off base here?

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (23 preceding siblings ...)
  2018-11-21 20:14 ` [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode Thomas Gleixner
@ 2018-11-21 23:48 ` Tim Chen
  2018-11-22  9:55   ` Thomas Gleixner
  2018-11-22  9:45 ` Peter Zijlstra
  25 siblings, 1 reply; 95+ messages in thread
From: Tim Chen @ 2018-11-21 23:48 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Andi Kleen

On 11/21/2018 12:14 PM, Thomas Gleixner wrote:
> This is based on Tim Chen's V5 patch series. The following changes have
> been made:
> 
...
> 
> TODO: Write documentation
> 

Andi took a crack at the document.  I made some 
modifications on top.  It can be used as a
starting point for the final document.

Thanks.

Tim

---

From bc69984e744192fa0e0b4850ecc4b25ec667b3a8 Mon Sep 17 00:00:00 2001
From: Tim Chen <tim.c.chen@linux.intel.com>
Date: Wed, 21 Nov 2018 15:36:19 -0800
Subject: [PATCH] x86/speculation: Add document to describe Spectre and its
 mitigations

From: Andi Kleen <ak@linux.intel.com>

There are no document in admin guides describing
Spectre v1 and v2 side channels and their mitigations
in Linux.

Create a document to describe Spectre and the mitigation
methods used in the kernel.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 Documentation/admin-guide/spectre.rst | 401 ++++++++++++++++++++++++++++++++++
 1 file changed, 401 insertions(+)
 create mode 100644 Documentation/admin-guide/spectre.rst

diff --git a/Documentation/admin-guide/spectre.rst b/Documentation/admin-guide/spectre.rst
new file mode 100644
index 0000000..67db151
--- /dev/null
+++ b/Documentation/admin-guide/spectre.rst
@@ -0,0 +1,401 @@
+Spectre side channels
+=====================
+
+Spectre is a class of side channel attacks against modern CPUs that
+use branch prediction and speculative execution, which allows to read malicious
+local software to read memory it does not have access to. It does not
+modify any memory.
+
+This document covers Spectre variant 1 and 2.
+
+Affected processors
+-------------------
+
+The vulnerability affects a wide range of modern high performance processors, since
+most modern high speed processors use branch prediction and speculative execution.
+
+The following CPUs are vulnerable:
+
+    - Intel Core, Atom, Pentium, Xeon CPUs
+    - AMD CPUs like Phenom, EPYC, Zen.
+    - IBM processors like POWER and zSeries
+    - Higher end ARM processors
+    - Apple CPUs
+    - Higher end MIPS CPUs
+    - Likely most other high performance CPUs. Contact your CPU vendor for details.
+
+This document documents the mitigations on Intel CPUs.
+      
+Related CVEs
+------------
+
+The following CVE entries describe Spectre variants:
+
+   =============   =======================  ==========
+   CVE-2017-5753   Bounds check bypass      Spectre-V1   
+   CVE-2017-5715   Branch target injection  Spectre-V2
+      
+Problem
+-------
+
+CPUs have shared caches, such as buffers for branch prediction,
+which are later used to guide speculative execution. These
+buffers are not flushed over context switches. Malicious local
+software might influence these buffers and trigger specific
+speculative execution in the kernel or different user processes.
+This speculative execution can then be used to read data in memory
+and cause a side effect in a data cache. The side effect can then
+later be measured by the malicious software, after it gets
+executed again, and used to determine the memory values read
+speculatively.
+
+Spectre attacks allow tricking other software to disclose
+values in their memory.
+
+For Spectre variant 1 the attacker passes an parameter to a
+victim. The victim boundary checks the parameter and rejects illegal
+values. However due to speculation over branch prediction the code
+path for correct values might be speculatively executed, then
+reference memory controlled by the input parameter and leave
+measurable side effects in the caches.  The attacker could then
+measure these side effects after it gains control again and determine
+the leaked value.
+
+There are some extensions of Spectre variant 1 attacks for reading
+data over the network, see [2]. However the attacks are very
+difficult, low bandwidth and fragile and considered low risk.
+
+For Spectre variant 2 the attacker poisons the indirect branch
+predictors of the CPU. Then control is passed to the victim, which
+executes indirect branches. Due to the poisoned branch predictor data
+the CPU can speculatively execute arbitrary code in the victim's
+address space, such as a code sequence ("disclosure gadget") that
+reads arbitrary data on some input parameter and causes a measurable
+cache side effect based on the value. The attacker can then measure
+this side effect after gaining control again and determine the value.
+
+For fully usable gadgets there needs to be an input parameter
+so that the memory read can be controlled. It might be possible
+to do attacks without input parameter, however in this case the attacker
+has little control over what memory can be read and the risk
+of actual secret disclosure is low.
+
+Attack scenarios
+----------------
+
+1. Local User process attacking kernel
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The kernel can read all memory. An malicious user program can trigger
+an kernel entry. For variant 1 it would need to pass a parameter, so
+only system calls (but not interrupts or exceptions) are vulnerable.
+
+For variant 2 the attacker needs to poison the CPU branch buffers
+first, and then enter the kernel and trick it into jumping to a disclosure
+gadget through an indirect branch. If it wants to control the address that
+the gadget can read it would also need to pass a parameter to the gadget,
+either through a register or through a known address in memory. Finally
+it needs to gain execution again to measure the side effect.
+
+Requirements: malicious local process passing parameters to kernel with
+kernel or other process on same machine having secrets.
+
+2. User process attacking another user process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In this scenario an malicious user process wants to attack another
+user process through a context switch.
+
+For variant 1 this generally requires passing some parameter between
+the processes, which needs a data passing relationship, such a remote
+procedure calls (RPC).
+
+For variant 2 the poisoning can happen through a context switch, or
+on CPUs with simultaneous multi-threading (SMT) potentially on the
+thread sibling executing in parallel on the same core.  In any case to
+control the memory read by the disclosure gadget also requires a data
+passing relationship to the victim process, otherwise while it may
+observe values through side effects, it won't know which memory
+addresses they relate to.
+
+Requirements: malicious local process attacking
+containing secrets running on same core.
+
+3. User sandbox attacking runtime in process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A process, such as a web browser, might be running interpreted or JITed
+untrusted code, such as java script code downloaded from a website.
+It uses restrictions in the JIT code generator and checks in a run time
+to prevent the untrusted code from attacking the hosting process.
+
+The untrusted code might either use variant 1 or 2 to trick
+a disclosure gadget in the run time to read memory inside the process.
+
+Requirements: sandbox in process running untrusted code 
+with runtime in same process containing secrets.
+
+4. Kernel sandbox attacking kernel
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The kernel has support for eBPF to execute JITed untrusted byte code inside
+the kernel. eBPF is used for manipulating and examining network
+packets, examining system call parameters for sand boxes and other uses.
+
+A malicious local process could upload and trigger an malicious
+eBPF script to the kernel, with the script attacking the kernel
+using variant 1 or 2 and reading memory.
+
+Requirements: Malicious local process, EBPF enabled for
+unprivileged users, attacking kernel with secrets on the same
+machine.
+
+5. Virtualization guest attacking host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An untrusted guest might attack the host through a hyper call
+or other virtualization exit.
+
+Requirements: untrusted guest attacking host with secrets
+on local machine.
+
+For variant 1 VM exits use appropiate mitigations
+("bounds clipping") to prevent speculation leaking data
+in kernel code. For variant 2 the kernel flushes the branch buffer.
+
+6. Virtualization guest attacking other guest
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An untrusted guest attacking another guest containing
+secrets. Mitigations are similar to the case above.
+
+Runtime vulnerability information
+---------------------------------
+
+The kernel reports the vulnerability and mitigation status in
+/sys/devices/system/cpu/vulnerabilities/
+
+The summary can be displayed with
+grep . /sys/devices/system/cpu/vulnerabilities/*
+
+The spectre_v1 file describes the always enabled variant 1
+mitigation.
+
+The spectre_v2 the kernel mitigation status is reported,
+which includes if the kernel has been compiled with a retpoline
+aware compiler, if the CPU has hardware mitigation, and if has
+microcode support for additional process specific mitigations.
+
+Full mitigations might require an microcode update from the CPU
+vendor. When the necessary microcode is not available the kernel
+will report vulnerability.
+
+Kernel mitigation
+-----------------
+
+The kernel has default on mitigations for Variant 1 and Variant 2
+against attacks from user programs or guests. For variant 1 it
+annotates vulnerable kernel code (as determined by the sparse code
+scanning tool and code audits) to use "bounds clipping" to avoid any
+usable disclosure gadgets.
+
+For variant 2 the kernel employs "retpoline" with compiler help to
+secure the indirect branches inside the kernel, when CONFIG_RETPOLINE
+is enabled and the compiler supports retpoline. On Intel Skylake systems
+the mitigation covers most, but not all, cases, see [1] for more details.
+
+On CPUs with hardware mitigations for variant 2 retpoline is automatically
+disabled at runtime. 
+
+Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
+and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration)
+makes attacks on the kernel generally more difficult.
+
+Host mitigation
+---------------
+
+The Linux kernel uses retpoline to eliminate attacks on indirect
+branches. It also flushes the Return Branch Stack on every VM exit to
+prevent guests from attacking the host kernel when retpoline is
+enabled.
+
+Variant 1 attacks are mitigated unconditionally.
+
+The kernel also allows guests to use any microcode based mitigations
+they chose to use (such as IBRS, IBPB or STIBP), assuming the
+host has an updated microcode and reports IBPB in
+/sys/devices/system/cpu/vulnerabilities/spectre_v2.
+
+Mitigation control at kernel build time
+---------------------------------------
+
+When the CONFIG_RETPOLINE option is enabled the kernel uses special
+code sequences to avoid attacks on indirect branches through
+Variant 2 attacks.
+
+The compiler also needs to support retpoline and support the
+-mindirect-branch=thunk-extern -mindirect-branch-register options
+for gcc, or -mretpoline-external-thunk option for clang.
+When the compiler doesn't support these options the
+
+Variant 1 mitigations and other side channel related user APIs are
+enabled unconditionally.
+
+Hardware mitigation
+-------------------
+
+Some CPUs have hardware mitigations for Spectre variant 2.  The 4.19
+kernel has support for detecting this capability and automatically
+disable any unnecessary workarounds at runtime.
+
+User mitigation
+---------------
+
+For variant 1 user programs can use LFENCE or bounds clipping. For more
+details see [3].
+
+For variant 2 user programs can be compiled with retpoline.
+
+User programs should use address space randomization
+(/proc/sys/kernel/randomize_va_space = 1) to make any attacks
+more difficult.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+Spectre v2 mitigations can be disabled and force enabled at the kernel
+command line.
+
+	nospectre_v2	[X86] Disable all mitigations for the Spectre variant 2
+			(indirect branch prediction) vulnerability. System may
+			allow data leaks with this option, which is equivalent
+			to spectre_v2=off.
+
+	spectre_v2=	[X86] Control mitigation of Spectre variant 2
+			(indirect branch speculation) vulnerability.
+
+			on   - unconditionally enable
+			off  - unconditionally disable
+			auto - kernel detects whether your CPU model is
+			       vulnerable
+
+			Selecting 'on' will, and 'auto' may, choose a
+			mitigation method at run time according to the
+			CPU, the available microcode, the setting of the
+			CONFIG_RETPOLINE configuration option, and the
+			compiler with which the kernel was built.
+
+			Specific mitigations can also be selected manually:
+
+			retpoline	  - replace indirect branches
+			retpoline,generic - google's original retpoline
+			retpoline,amd     - AMD-specific minimal thunk
+
+			Not specifying this option is equivalent to
+			spectre_v2=auto.			
+
+For user space mitigation: 
+
+	spectre_v2_app2app=
+			[X86] Control mitigation of Spectre variant 2
+		        application to application (indirect branch speculation)
+			vulnerability.
+
+			on      - Unconditionally enable mitigations. Is enforced
+				  by spectre_v2=on
+			off     - Unconditionally disable mitigations. Is enforced
+				  by spectre_v2=off
+			auto    - Kernel selects the mitigation depending on
+				  the available CPU features and vulnerability.
+			prctl   - Indirect branch speculation is enabled, but
+				  mitigation can be enabled via prctl per thread.
+				  The mitigation control state is inherited on fork.
+			seccomp - Same as "prctl" above, but all seccomp threads
+				  will enable the mitigation unless they explicitly
+				  opt out.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"
+
+			Not specifying this option is equivalent to
+			spectre_v2_app2app=auto.
+
+In general the kernel by default selects reasonable mitigations for
+the current CPU. To disable Spectre v2 mitigations boot with
+spectre_v2=off. Spectre v1 mitigations cannot be disabled.
+
+APIs for mitigation control per process
+---------------------------------------
+
+Under the "prctl" option for spectre_v2_app2app, issuing
+prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE,
+0, 0) on a process restricts indirect branch speculation on a process.
+
+Processes containing secrets, such as cryptographic keys, can invoke this
+prctl for extra protection against Spectre v2. 
+
+Before running untrusted processes, this prctl should be issued to prevent
+such processes from launching Spectre v2 attacks.
+
+The kernel automatically flushes the branch buffers when context switching
+in or out of the process, which prevents any branch buffer poisoning
+from the sandboxed code. IBPB will be issued on context switching into
+and out of such processes to clear branch target buffers.  This protects the
+process from any external influence on its indirect branch predictions or
+influencing others on the same thread.
+
+On systems with Simultaneous Multi Threading (SMT)
+it may be possible for a process to affect the indirect branches on a
+process running on a thread sibling on the same core.
+
+Using prctl to restrict indirect branch speculation prevents
+either untrusted code in the current process affect anything else,
+or code running in SMT affect the current process.  This is
+done using STIBP.
+
+This should be only deployed as needed, as it has performance
+impact on both the current process, and any process running
+on the thread sibling.
+
+Under the "seccomp" option, the processes running with SECCOMP
+will be restrained similarly. 
+
+References
+----------
+
+Intel white papers and documents on Spectre:
+
+https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf
+
+[1]
+https://software.intel.com/security-software-guidance/api-app/sites/default/files/Retpoline-A-Branch-Target-Injection-Mitigation.pdf
+
+https://www.intel.com/content/www/us/en/architecture-and-technology/facts-about-side-channel-analysis-and-intel-products.html
+
+[3] https://software.intel.com/security-software-guidance/
+
+AMD white papers:
+
+https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf
+
+https://www.amd.com/en/corporate/security-updates
+
+ARM white papers:
+
+https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper
+
+https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update
+
+MIPS:
+
+https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/
+
+Academic papers:
+
+https://spectreattack.com/spectre.pdf [original spectre paper]
+
+https://arxiv.org/abs/1807.07940 [Spectre RSB, a variant of Spectre v2]
+
+[2] https://arxiv.org/abs/1807.10535 [NetSpectre]
+
+https://arxiv.org/abs/1811.05441 [generalization of Spectre]
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
@ 2018-11-22  0:01   ` Andi Kleen
  2018-11-22  7:42     ` Jiri Kosina
  2018-11-22  1:40   ` Tim Chen
  2018-11-22  7:52   ` Ingo Molnar
  2 siblings, 1 reply; 95+ messages in thread
From: Andi Kleen @ 2018-11-22  0:01 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

> +		 * This could be optimized by keeping track of the last
> +		 * user task per cpu and avoiding the barrier when the task
> +		 * is immediately scheduled back and the thread inbetween
> +		 * was a kernel thread. It's dubious whether that'd be
> +		 * worth the extra load/store and conditional operations.
> +		 * Keep it optimized for the common case where the TIF bit
> +		 * is not set.
> +		 */

The optimization was there before and you removed it?

It's quite important for switching to idle and back. With your variant short IOs
that do short idle waits will be badly impacted. 

-Andi


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls
  2018-11-21 20:14 ` [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
@ 2018-11-22  1:23   ` Tim Chen
  2018-11-22  7:44     ` Ingo Molnar
  0 siblings, 1 reply; 95+ messages in thread
From: Tim Chen @ 2018-11-22  1:23 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On 11/21/2018 12:14 PM, Thomas Gleixner wrote:

> +	 * Avoid __switch_to_xtra() invocation when conditional stpib is

s/stpib/stibp

Tim

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
  2018-11-22  0:01   ` Andi Kleen
@ 2018-11-22  1:40   ` Tim Chen
  2018-11-22  7:52   ` Ingo Molnar
  2 siblings, 0 replies; 95+ messages in thread
From: Tim Chen @ 2018-11-22  1:40 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On 11/21/2018 12:14 PM, Thomas Gleixner wrote:

> +		 * is immediately scheduled back and the thread inbetween

s/inbetween/in between

Tim

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-21 20:14 ` [patch 20/24] x86/speculation: Split out TIF update Thomas Gleixner
@ 2018-11-22  2:13   ` Tim Chen
  2018-11-22 23:00     ` Thomas Gleixner
  2018-11-22  7:43   ` [patch 20/24] x86/speculation: Split out TIF update Ingo Molnar
  1 sibling, 1 reply; 95+ messages in thread
From: Tim Chen @ 2018-11-22  2:13 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Wed, Nov 21, 2018 at 09:14:50PM +0100, Thomas Gleixner wrote:
> +static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
>  {
>       bool update;
>
> +     if (on)
> +             update = !test_and_set_tsk_thread_flag(tsk, tifbit);
> +     else
> +             update = test_and_clear_tsk_thread_flag(tsk, tifbit);
> +
> +     /*
> +      * If being set on non-current task, delay setting the CPU
> +	 * mitigation until it is scheduled next.
> +	 */
> +     if (tsk == current && update)
> +             speculation_ctrl_update_current();

I think all the call paths from prctl and seccomp coming here
has tsk == current.

But if task_update_spec_tif gets used in the future
where tsk is running on a remote CPU, this could lead to the MSR
getting out of sync with the running task's TIF flag. This will break
either performance or security.

Should we add a
        WARN_ON(smp_processor_id() != task_cpu(tsk));

in case the assumption breaks that task is
on local CPU, or document this assumption?

Thanks.

Tim


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode
  2018-11-21 20:14 ` [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode Thomas Gleixner
@ 2018-11-22  2:24   ` Tim Chen
  2018-11-22  7:26   ` Ingo Molnar
  1 sibling, 0 replies; 95+ messages in thread
From: Tim Chen @ 2018-11-22  2:24 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Wed, Nov 21, 2018 at 09:14:54PM +0100, Thomas Gleixner wrote:
> From: Jiri Kosina <jkosina@suse.cz>
> 
> If 'prctl' mode of app2app protection from spectre v2 is selected on the
> kernel command-line, STIBP and IBPB are applied on tasks which restrict
> their indirect branch speculation via prctl.
> 
> SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
> makes sense to prevent spectre v2 application to application attacks as
> well.

Will need to add this chunk.

Thanks.

Tim

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c4d010d..d070e84 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -275,6 +275,7 @@ enum spectre_v2_app2app_cmd {
 	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,	false },
 	{ "on",		SPECTRE_V2_APP2APP_CMD_FORCE,	true  },
 	{ "prctl",	SPECTRE_V2_APP2APP_CMD_PRCTL,	false },
+	{ "seccomp",	SPECTRE_V2_APP2APP_CMD_SECCOMP,	false },
 };
 
 static void __init spec_v2_app_print_cond(const char *reason, bool secure)


^ permalink raw reply related	[flat|nested] 95+ messages in thread

* Re: [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
@ 2018-11-22  7:10   ` Ingo Molnar
  2018-11-22  9:03   ` Peter Zijlstra
  2018-11-22 12:26   ` Borislav Petkov
  2 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:10 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen


* Thomas Gleixner <tglx@linutronix.de> wrote:

> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> Add the PR_SPEC_INDIR_BRANCH option for the PR_GET_SPECULATION_CTRL and
> PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
> indirect branch speculation via STIBP.
> 
> Invocations:
>  Check indirect branch speculation status with
>  - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);
> 
>  Enable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
> 
>  Disable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
> 
>  Force disable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
> 
> See Documentation/userspace-api/spec_ctrl.rst.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Please either de-capitalize the title to 'prctl()', or use PR_SPEC_PRCTL 
or PR_SET/GET_SPECULATION_CTRL - there's no such thing as 'PRCTL' 
interface - the interface is called prctl() and the speculation control 
ABIs have their own names.

This applies to the next patch as well.

> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -212,6 +212,7 @@ struct prctl_mm_map {
>  #define PR_SET_SPECULATION_CTRL		53
>  /* Speculation control variants */
>  # define PR_SPEC_STORE_BYPASS		0
> +# define PR_SPEC_INDIR_BRANCH		1
>  /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
>  # define PR_SPEC_NOT_AFFECTED		0
>  # define PR_SPEC_PRCTL			(1UL << 0)
> --- a/tools/include/uapi/linux/prctl.h
> +++ b/tools/include/uapi/linux/prctl.h
> @@ -212,6 +212,7 @@ struct prctl_mm_map {
>  #define PR_SET_SPECULATION_CTRL		53
>  /* Speculation control variants */
>  # define PR_SPEC_STORE_BYPASS		0
> +# define PR_SPEC_INDIR_BRANCH		1
>  /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
>  # define PR_SPEC_NOT_AFFECTED		0
>  # define PR_SPEC_PRCTL			(1UL << 0)

Please de-abbreviate the new ABI: PR_SPEC_INDIRECT_BRANCH?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app
  2018-11-21 20:14 ` [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app Thomas Gleixner
@ 2018-11-22  7:17   ` Ingo Molnar
  0 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:17 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> Now that all prerequisites are in place:
> 
>  - Add the prctl command line option
> 
>  - Default the 'auto' mode to 'prctl'
> 
>  - When SMT state changes, update the static key which controls the
>    conditional STIBP evaluation on context switch.
> 
>  - At init update the static key which controls the conditional IBPB
>    evaluation on context switch.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |    5 ++
>  arch/x86/kernel/cpu/bugs.c                      |   46 +++++++++++++++++++++---
>  2 files changed, 45 insertions(+), 6 deletions(-)
> 
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4246,7 +4246,10 @@
>  				  by spectre_v2=off
>  			auto    - Kernel selects the mitigation depending on
>  				  the available CPU features and vulnerability.
> -				  Default is off.
> +				  Default is prctl.
> +			prctl   - Indirect branch speculation is enabled, but
> +				  mitigation can be enabled via prctl per thread.
> +				  The mitigation control state is inherited on fork.

Please change the order of the last two entries, i.e. make 'auto' the 
last one. This is the pattern we use in other places plus we refer to 
'prctl' before we document it.

>  static const struct {
> @@ -270,6 +272,7 @@ static const struct {
>  	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,	false },
>  	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,	false },
>  	{ "on",		SPECTRE_V2_APP2APP_CMD_FORCE,	true  },
> +	{ "prctl",	SPECTRE_V2_APP2APP_CMD_PRCTL,	false },

Might make sense to order them in a consistent fashion as well.

> +	/*
> +	 * If STIBP is not available or SMT is not possible clear the STIPB
> +	 * mode.
> +	 */
> +	if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
> +		mode = SPECTRE_V2_APP2APP_NONE;

Another nit: please match order of the comments to how the condition is 
written in the code.

> +/* Update the static key controlling the evaluation of TIF_SPEC_IB */
> +static void update_indir_branch_cond(void)
> +{
> +	if (!IS_ENABLED(CONFIG_SMP))
> +		return;
> +
> +	if (sched_smt_active())
> +		static_branch_enable(&switch_to_cond_stibp);
> +	else
> +		static_branch_disable(&switch_to_cond_stibp);

So in the !SMP case sched_smt_active() is already doing the right thing:

  static inline bool sched_smt_active(void) { return false; }

I.e. couldn't we just remove the extra CONFIG_SMP condition?
This would simplify the code with some very minor expense on !SMP.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode
  2018-11-21 20:14 ` [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode Thomas Gleixner
  2018-11-22  2:24   ` Tim Chen
@ 2018-11-22  7:26   ` Ingo Molnar
  2018-11-22 23:45     ` Thomas Gleixner
  1 sibling, 1 reply; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:26 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> From: Jiri Kosina <jkosina@suse.cz>
> 
> If 'prctl' mode of app2app protection from spectre v2 is selected on the
> kernel command-line, STIBP and IBPB are applied on tasks which restrict
> their indirect branch speculation via prctl.
> 
> SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
> makes sense to prevent spectre v2 application to application attacks as
> well.
> 
> The mitigation guide documents how STIPB works:
>     
>    Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
>    prevents the predicted targets of indirect branches on any logical
>    processor of that core from being controlled by software that executes
>    (or executed previously) on another logical processor of the same core.
>     
> Ergo setting STIBP protects the task itself from being attacked from a task
> running on a different hyper-thread and protects the tasks running on
> different hyper-threads from being attacked.
>     
> IBPB is issued when the task switches out, so malicious sandbox code cannot
> mistrain the branch predictor for the next user space task on the same
> logical processor.
> 
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  Documentation/admin-guide/kernel-parameters.txt |    7 +++++-
>  arch/x86/include/asm/nospec-branch.h            |    1 
>  arch/x86/kernel/cpu/bugs.c                      |   27 +++++++++++++++++++-----
>  3 files changed, 29 insertions(+), 6 deletions(-)
> 
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4228,10 +4228,15 @@
>  				  by spectre_v2=off
>  			auto    - Kernel selects the mitigation depending on
>  				  the available CPU features and vulnerability.
> -				  Default is prctl.
>  			prctl   - Indirect branch speculation is enabled, but
>  				  mitigation can be enabled via prctl per thread.
>  				  The mitigation control state is inherited on fork.
> +			seccomp - Same as "prctl" above, but all seccomp threads
> +				  will enable the mitigation unless they explicitly
> +				  opt out.
> +
> +			Default mitigation:
> +			If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"
>  
>  			Not specifying this option is equivalent to
>  			spectre_v2_app2app=auto.
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -233,6 +233,7 @@ enum spectre_v2_app2app_mitigation {
>  	SPECTRE_V2_APP2APP_NONE,
>  	SPECTRE_V2_APP2APP_STRICT,
>  	SPECTRE_V2_APP2APP_PRCTL,
> +	SPECTRE_V2_APP2APP_SECCOMP,
>  };
>  
>  /* The Speculative Store Bypass disable variants */
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -256,12 +256,14 @@ enum spectre_v2_app2app_cmd {
>  	SPECTRE_V2_APP2APP_CMD_AUTO,
>  	SPECTRE_V2_APP2APP_CMD_FORCE,
>  	SPECTRE_V2_APP2APP_CMD_PRCTL,
> +	SPECTRE_V2_APP2APP_CMD_SECCOMP,
>  };
>  
>  static const char *spectre_v2_app2app_strings[] = {
>  	[SPECTRE_V2_APP2APP_NONE]	= "App-App Vulnerable",
> -	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: STIBP protection",
> -	[SPECTRE_V2_APP2APP_PRCTL]	= "App-App Mitigation: STIBP via prctl",
> +	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: forced protection",
> +	[SPECTRE_V2_APP2APP_PRCTL]	= "App-App Mitigation: prctl opt-in",
> +	[SPECTRE_V2_APP2APP_SECCOMP]	= "App-App Mitigation: seccomp and prctl opt-in",

This description is not accurate: it's not a 'seccomp and prctl opt-in', 
the seccomp functionality is opt-out, the prctl is opt-in.

So something like:

> +	[SPECTRE_V2_APP2APP_SECCOMP]	= "App-App Mitigation: seccomp by default and prctl opt-in",

or so?

>  void arch_seccomp_spec_mitigate(struct task_struct *task)
>  {
>  	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
>  		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
> +	if (spectre_v2_app2app == SPECTRE_V2_APP2APP_SECCOMP)
> +		indir_branch_prctl_set(task, PR_SPEC_FORCE_DISABLE);
>  }
>  #endif

Hm, so isn't arch_seccomp_spec_mitigate() called right before untrusted 
seccomp code is executed? So why are we disabling the mitigation here?

> +	case SPECTRE_V2_APP2APP_SECCOMP:
> +		return ", STIBP: seccomp and prctl opt-in";
> +	case SPECTRE_V2_APP2APP_SECCOMP:
> +		return ", IBPB: seccomp and prctl opt-in";

Same feedback wrt. potentially confusing use of 'opt-in' here, while 
seccomp is more like an opt-out mechanism.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-21 20:14 ` [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
@ 2018-11-22  7:34   ` Ingo Molnar
  2018-11-22 23:17     ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:34 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> The upcoming fine grained per task STIBP control needs to be updated on CPU
> hotplug as well.
> 
> Split out the code which controls the strict mode so the prctl control code
> can be added later.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/bugs.c |   46 ++++++++++++++++++++++++---------------------
>  1 file changed, 25 insertions(+), 21 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -531,40 +531,44 @@ static void __init spectre_v2_select_mit
>  	arch_smt_update();
>  }
>  
> -static bool stibp_needed(void)
> +static void update_stibp_msr(void *info)
>  {
> -	/* Enhanced IBRS makes using STIBP unnecessary. */
> -	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> -		return false;
> -
> -	/* Check for strict app2app mitigation mode */
> -	return spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT;
> +	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  }


Does Sparse or other tooling warn about unused function parameters? If 
yes then it might make sense to mark it __used?

>  
> -static void update_stibp_msr(void *info)
> +/* Update x86_spec_ctrl_base in case SMT state changed. */
> +static void update_stibp_strict(void)
>  {
> -	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +	u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> +
> +	if (sched_smt_active())
> +		mask |= SPEC_CTRL_STIBP;
> +
> +	if (mask == x86_spec_ctrl_base)
> +		return;
> +
> +	pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
> +		mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
> +	x86_spec_ctrl_base = mask;
> +	on_each_cpu(update_stibp_msr, NULL, 1);
>  }
>  
>  void arch_smt_update(void)
>  {
> -	u64 mask;
> -
> -	if (!stibp_needed())
> +	/* Enhanced IBRS makes using STIBP unnecessary. No update required. */
> +	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
>  		return;
>  
>  	mutex_lock(&spec_ctrl_mutex);
>  
> -	mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> -	if (sched_smt_active())
> -		mask |= SPEC_CTRL_STIBP;
> -
> -	if (mask != x86_spec_ctrl_base) {
> -		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
> -			mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
> -		x86_spec_ctrl_base = mask;
> -		on_each_cpu(update_stibp_msr, NULL, 1);
> +	switch (spectre_v2_app2app) {
> +	case SPECTRE_V2_APP2APP_NONE:
> +		break;
> +	case SPECTRE_V2_APP2APP_STRICT:
> +		update_stibp_strict();
> +		break;
>  	}

So I'm wondering, shouldn't firmware_restrict_branch_speculation_start()/_end()
also enable/disable STIBP? It already enabled/disables IBRS.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-22  0:01   ` Andi Kleen
@ 2018-11-22  7:42     ` Jiri Kosina
  2018-11-22  9:18       ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Jiri Kosina @ 2018-11-22  7:42 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Thomas Gleixner, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Wed, 21 Nov 2018, Andi Kleen wrote:

> > +		 * This could be optimized by keeping track of the last
> > +		 * user task per cpu and avoiding the barrier when the task
> > +		 * is immediately scheduled back and the thread inbetween
> > +		 * was a kernel thread. It's dubious whether that'd be
> > +		 * worth the extra load/store and conditional operations.
> > +		 * Keep it optimized for the common case where the TIF bit
> > +		 * is not set.
> > +		 */
> 
> The optimization was there before and you removed it?
> 
> It's quite important for switching to idle and back. With your variant short IOs
> that do short idle waits will be badly impacted. 

The question is what scenario to optimize for.

Either you penalize everybody in the default prctl+seccomp setup 
(irrespective of it's TIF flag value), as you have the extra overhead on 
each and every switch_to() (to check exactly for this back-to-back 
scheduling), or you penalize only those tasks that are penalized anyway by 
the IBPB flush.

I think the latter (which is what this patch implements) makes more sense. 

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-21 20:14 ` [patch 20/24] x86/speculation: Split out TIF update Thomas Gleixner
  2018-11-22  2:13   ` Tim Chen
@ 2018-11-22  7:43   ` Ingo Molnar
  2018-11-22 23:04     ` Thomas Gleixner
  1 sibling, 1 reply; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:43 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> The update of the TIF_SSBD flag and the conditional speculation control MSR
> update is done in the ssb_prctl_set() function directly. The upcoming prctl
> support for controlling indirect branch speculation via STIBP needs the
> same mechanism.
> 
> Split the code out and make it reusable.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/kernel/cpu/bugs.c |   31 +++++++++++++++++++------------
>  1 file changed, 19 insertions(+), 12 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -703,10 +703,25 @@ static void ssb_select_mitigation(void)
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Speculation prctl: " fmt
>  
> -static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> +static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
>  {
>  	bool update;
>  
> +	if (on)
> +		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
> +	else
> +		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
> +
> +	/*
> +	 * If being set on non-current task, delay setting the CPU
> +	 * mitigation until it is scheduled next.
> +	 */
> +	if (tsk == current && update)
> +		speculation_ctrl_update_current();

Had to read this twice, because the comment and the code are both correct 
but deal with the inverse case. This might have helped:

	/*
	 * Immediately update the speculation MSRs on the current task,
	 * but for non-current tasks delay setting the CPU mitigation 
	 * until it is scheduled next.
	 */
	if (tsk == current && update)
		speculation_ctrl_update_current();

But can the target task ever be non-current here? I don't think so: the 
two callers are prctl and seccomp, and both are passing in the current 
task pointer.

If so then it would be nice to rename all these task variable names from 
'task' to 'curr' or such, to make this property apparent.

Then we can also remove the condition and the comment, and update 
unconditionally, and maybe add:

	WARN_ON_ONCE(curr != current);

... or so?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls
  2018-11-22  1:23   ` Tim Chen
@ 2018-11-22  7:44     ` Ingo Molnar
  0 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:44 UTC (permalink / raw)
  To: Tim Chen
  Cc: Thomas Gleixner, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook


* Tim Chen <tim.c.chen@linux.intel.com> wrote:

> On 11/21/2018 12:14 PM, Thomas Gleixner wrote:
> 
> > +	 * Avoid __switch_to_xtra() invocation when conditional stpib is
> 
> s/stpib/stibp

and:

  s/stibp/STIBP

to make it consistent throughout the patchset.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
  2018-11-22  0:01   ` Andi Kleen
  2018-11-22  1:40   ` Tim Chen
@ 2018-11-22  7:52   ` Ingo Molnar
  2018-11-22 22:29     ` Thomas Gleixner
  2 siblings, 1 reply; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> IBPB control is currently in switch_mm() to avoid issuing IBPB when
> switching between tasks of the same process.
> 
> But that's not covering the case of sandboxed tasks which get the
> TIF_SPEC_IB flag set via seccomp. There the barrier is required when the
> potentially malicious task is switched out because the task which is
> switched in might have it not set and would still be attackable.
> 
> For tasks which mark themself with TIF_SPEC_IB via the prctl, the barrier
> needs to be when the tasks switches in because the previous one might be an
> attacker.

s/themself
 /themselves
> 
> Move the code out of switch_mm() and evaluate the TIF bit in
> switch_to(). Make it an inline function so it can be used both in 32bit and
> 64bit code.

s/32bit
 /32-bit

s/64bit
 /64-bit

> 
> This loses the optimization of switching back to the same process, but
> that's wrong in the context of seccomp anyway as it does not protect tasks
> of the same process against each other.
> 
> This could be optimized by keeping track of the last user task per cpu and
> avoiding the barrier when the task is immediately scheduled back and the
> thread inbetween was a kernel thread. It's dubious whether that'd be worth
> the extra load/store and conditional operations. Keep it optimized for the
> common case where the TIF bit is not set.

s/cpu/CPU

> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/nospec-branch.h |    2 +
>  arch/x86/include/asm/spec-ctrl.h     |   46 +++++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/tlbflush.h      |    2 -
>  arch/x86/kernel/cpu/bugs.c           |   16 +++++++++++-
>  arch/x86/kernel/process_32.c         |   11 ++++++--
>  arch/x86/kernel/process_64.c         |   11 ++++++--
>  arch/x86/mm/tlb.c                    |   39 -----------------------------
>  7 files changed, 81 insertions(+), 46 deletions(-)
> 
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -312,6 +312,8 @@ do {									\
>  } while (0)
>  
>  DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> +DECLARE_STATIC_KEY_FALSE(switch_to_cond_ibpb);
> +DECLARE_STATIC_KEY_FALSE(switch_to_always_ibpb);
>  
>  #endif /* __ASSEMBLY__ */
>  
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -76,6 +76,52 @@ static inline u64 ssbd_tif_to_amd_ls_cfg
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
>  }
>  
> +/**
> + * switch_to_ibpb - Issue IBPB on task switch
> + * @next:	Pointer to the next task
> + * @prev_tif:	Threadinfo flags of the previous task
> + * @next_tif:	Threadinfo flags of the next task
> + *
> + * IBPB flushes the branch predictor, which stops Spectre-v2 attacks
> + * between user space tasks. Depending on the mode the flush is made
> + * conditional.
> + */
> +static inline void switch_to_ibpb(struct task_struct *next,
> +				  unsigned long prev_tif,
> +				  unsigned long next_tif)
> +{
> +	if (static_branch_unlikely(&switch_to_always_ibpb)) {
> +		/* Only flush when switching to a user task. */
> +		if (next->mm)
> +			indirect_branch_prediction_barrier();
> +	}
> +
> +	if (static_branch_unlikely(&switch_to_cond_ibpb)) {
> +		/*
> +		 * Both tasks' threadinfo flags are checked for TIF_SPEC_IB.
> +		 *
> +		 * For an outgoing sandboxed task which has TIF_SPEC_IB set
> +		 * via seccomp this is needed because it might be malicious
> +		 * and the next user task switching in might not have it
> +		 * set.
> +		 *
> +		 * For an incoming task which has set TIF_SPEC_IB itself
> +		 * via prctl() this is needed because the previous user
> +		 * task might be malicious and have the flag unset.
> +		 *
> +		 * This could be optimized by keeping track of the last
> +		 * user task per cpu and avoiding the barrier when the task
> +		 * is immediately scheduled back and the thread inbetween
> +		 * was a kernel thread. It's dubious whether that'd be
> +		 * worth the extra load/store and conditional operations.
> +		 * Keep it optimized for the common case where the TIF bit
> +		 * is not set.
> +		 */
> +		if ((prev_tif | next_tif) & _TIF_SPEC_IB)
> +			indirect_branch_prediction_barrier();

s/cpu/CPU

> +
> +		switch (mode) {
> +		case SPECTRE_V2_APP2APP_STRICT:
> +			static_branch_enable(&switch_to_always_ibpb);
> +			break;
> +		default:
> +			break;
> +		}
> +
> +		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
> +			mode == SPECTRE_V2_APP2APP_STRICT ? "forced" : "conditional");

Maybe s/forced/always-on, to better match the code?

> @@ -617,11 +619,16 @@ void compat_start_thread(struct pt_regs
>  	/* Reload sp0. */
>  	update_task_stack(next_p);
>  
> +	prev_tif = task_thread_info(prev_p)->flags;
> +	next_tif = task_thread_info(next_p)->flags;
> +	/* Indirect branch prediction barrier control */
> +	switch_to_ibpb(next_p, prev_tif, next_tif);
> +
>  	/*
>  	 * Now maybe reload the debug registers and handle I/O bitmaps
>  	 */
> -	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
> -		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
> +	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
> +		     prev_tif & _TIF_WORK_CTXSW_PREV))
>  		__switch_to_xtra(prev_p, next_p, tss);

Hm, the repetition between process_32.c and process_64.c is getting 
stronger - could some of this be unified into process.c? (in later 
patches)

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control
  2018-11-21 20:14 ` [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
@ 2018-11-22  7:57   ` Ingo Molnar
  0 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:57 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen


* Thomas Gleixner <tglx@linutronix.de> wrote:

> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> To avoid the overhead of STIBP always on, it's necessary to allow per task
> control of STIBP.
> 
> Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
> SMT is active and flag evaluation is enabled by the speculation control
> code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
> guest/host switch works properly.
> 
> This has no effect because TIF_SPEC_IB cannot be set yet and the static key
> which controls evaluation is off. Preparatory patch for adding the control
> code.
> 
> [ tglx: Simplify the context switch logic and make the TIF evaluation
>   	depend on SMP=y and on the static key controlling the conditional
>   	update. Rename it to TIF_SPEC_IB because it controls both STIBP and
>   	IBPB ]
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/include/asm/msr-index.h   |    5 +++--
>  arch/x86/include/asm/spec-ctrl.h   |   12 ++++++++++++
>  arch/x86/include/asm/thread_info.h |    5 ++++-
>  arch/x86/kernel/cpu/bugs.c         |    4 ++++
>  arch/x86/kernel/process.c          |   24 ++++++++++++++++++++++--
>  5 files changed, 45 insertions(+), 5 deletions(-)
> 
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -41,9 +41,10 @@
>  
>  #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
>  #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
> -#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
> +#define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
> +#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
>  #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
> -#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
> +#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
>  
>  #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
>  #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
> --- a/arch/x86/include/asm/spec-ctrl.h
> +++ b/arch/x86/include/asm/spec-ctrl.h
> @@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(
>  	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
> +{
> +	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
> +	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
> +}
> +
>  static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
>  {
>  	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
>  	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
>  }
>  
> +static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
> +{
> +	BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
> +	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
> +}
> +
>  static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
>  {
>  	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -83,6 +83,7 @@ struct thread_info {
>  #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
>  #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
>  #define TIF_SECCOMP		8	/* secure computing */
> +#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
>  #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
>  #define TIF_UPROBE		12	/* breakpointed or singlestepping */
>  #define TIF_PATCH_PENDING	13	/* pending live patching update */
> @@ -110,6 +111,7 @@ struct thread_info {
>  #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
>  #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
>  #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
> +#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
>  #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
>  #define _TIF_UPROBE		(1 << TIF_UPROBE)
>  #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
> @@ -146,7 +148,8 @@ struct thread_info {
>  
>  /* flags to check in __switch_to() */
>  #define _TIF_WORK_CTXSW							\
> -	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
> +	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
> +	 _TIF_SSBD|_TIF_SPEC_IB)
>  
>  #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
>  #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -148,6 +148,10 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl,
>  		    static_cpu_has(X86_FEATURE_AMD_SSBD))
>  			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
>  
> +		/* Check whether dynamic indirect branch control is on */
> +		if (static_branch_unlikely(&switch_to_cond_stibp))
> +			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
> +
>  		if (hostval != guestval) {
>  			msrval = setguest ? guestval : hostval;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -12,6 +12,7 @@
>  #include <linux/sched/debug.h>
>  #include <linux/sched/task.h>
>  #include <linux/sched/task_stack.h>
> +#include <linux/sched/topology.h>
>  #include <linux/init.h>
>  #include <linux/export.h>
>  #include <linux/pm.h>
> @@ -406,6 +407,11 @@ static __always_inline void spec_ctrl_up
>  	if (static_cpu_has(X86_FEATURE_SSBD))
>  		msr |= ssbd_tif_to_spec_ctrl(tifn);
>  
> +	/* Only evaluate STIBP if dynamic control is enabled */
> +	if (IS_ENABLED(CONFIG_SMP) &&
> +	    static_branch_unlikely(&switch_to_cond_stibp))
> +		msr |= stibp_tif_to_spec_ctrl(tifn);

> +	/*
> +	 * Only evaluate TIF_SPEC_IB if dynamic control is
> +	 * enabled, otherwise avoid the MSR write
> +	 */
> +	if (IS_ENABLED(CONFIG_SMP) &&
> +	    static_branch_unlikely(&switch_to_cond_stibp))
> +		updmsr |= !!(tif_diff & _TIF_SPEC_IB);

Small nit:

we use several terms here in an interchangeable fashion:

 - 'dynamic control'
 - 'conditional STIBP'

The in-code variable naming follows the second nomenclature, while we 
often mention 'dynamic control' - and the relationship is not always 
obvious immediately.

It might make sense to pick one of these - for example if we pick 
'conditional STIBP' then the second comment would become:

	/*
	 * Only evaluate TIF_SPEC_IB if conditional STIBP is
	 * enabled, otherwise avoid the MSR write
	 */

etc.

Thanks,

	ngo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions
  2018-11-21 20:14 ` [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
@ 2018-11-22  7:59   ` Ingo Molnar
  0 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-22  7:59 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> There is no point in having two functions and a conditional at the call
> site.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

patches 1-14:

  Reviewed-by: Ingo Molnar <mingo@kernel.org>

15-24 look good to me too, modulo the (mostly trivial) feedback I gave.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-21 23:43   ` Borislav Petkov
@ 2018-11-22  8:14     ` Thomas Gleixner
  2018-11-22  9:07       ` Thomas Gleixner
  2018-11-22  9:18       ` Peter Zijlstra
  0 siblings, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22  8:14 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Tom Lendacky, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Borislav Petkov wrote:
> > +
> > +	/* Initialize Indirect Branch Prediction Barrier */
> > +	if (boot_cpu_has(X86_FEATURE_IBPB)) {
> > +		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> > +		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
> > +	}
> 
> So AFAICT, if coming in here with AUTO, we won't enable IBPB and I
> *think* AMD wants IBPB enabled. At least the whitepaper says:
> 
> "IBPB combined with Reptoline software support is the AMD recommended
> setting for Linux mitigation of Google Project Zero Variant 2
> (Spectre)."

Ok. That's indeed a step backwards, because we don't do IBPB in KVM
anymore. I'll fix that tomorrow morning when brain is more awake.

IBPB on context switch is controlled separately anyway now, so that's a
nobrainer to sort out.

Though I wait for Toms answer whether we really want IBPB on context switch
for AMD by default.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
  2018-11-22  7:10   ` Ingo Molnar
@ 2018-11-22  9:03   ` Peter Zijlstra
  2018-11-22  9:08     ` Thomas Gleixner
  2018-11-22 12:26   ` Borislav Petkov
  2 siblings, 1 reply; 95+ messages in thread
From: Peter Zijlstra @ 2018-11-22  9:03 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Linus Torvalds, Jiri Kosina,
	Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 09:14:52PM +0100, Thomas Gleixner wrote:
> @@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void
>  #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
>  #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
>  #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
> +#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation restricted */
> +#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation permanentely restricted */
>  
>  #define TASK_PFA_TEST(name, func)					\
>  	static inline bool task_##func(struct task_struct *p)		\
> @@ -1484,6 +1486,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ss
>  TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
>  TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
>  
> +TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> +TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> +TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> +
> +TASK_PFA_TEST(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
> +TASK_PFA_SET(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
> +
>  static inline void
>  current_restore_flags(unsigned long orig_flags, unsigned long flags)
>  {

s/INDIR_BRANCH/IB/ would be consistent here with the SSB case.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-22  8:14     ` Thomas Gleixner
@ 2018-11-22  9:07       ` Thomas Gleixner
  2018-11-22  9:18       ` Peter Zijlstra
  1 sibling, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22  9:07 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Tom Lendacky, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Thomas Gleixner wrote:
> On Thu, 22 Nov 2018, Borislav Petkov wrote:
> > > +
> > > +	/* Initialize Indirect Branch Prediction Barrier */
> > > +	if (boot_cpu_has(X86_FEATURE_IBPB)) {
> > > +		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> > > +		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
> > > +	}
> > 
> > So AFAICT, if coming in here with AUTO, we won't enable IBPB and I
> > *think* AMD wants IBPB enabled. At least the whitepaper says:
> > 
> > "IBPB combined with Reptoline software support is the AMD recommended
> > setting for Linux mitigation of Google Project Zero Variant 2
> > (Spectre)."
> 
> Ok. That's indeed a step backwards, because we don't do IBPB in KVM
> anymore. I'll fix that tomorrow morning when brain is more awake.

OTOH, off means that all of it is disabled. Which was the case already
before this when spectre_v2=off is on the command line.

Now with the default to prctl/seccomp the IBPB in KVM is enabled. So no
change there.

> IBPB on context switch is controlled separately anyway now, so that's a
> nobrainer to sort out.
> 
> Though I wait for Toms answer whether we really want IBPB on context switch
> for AMD by default.

That still stands. But if we want to do that, then we need to optimize it a
bit. Isn't that hard, but ...

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-22  9:03   ` Peter Zijlstra
@ 2018-11-22  9:08     ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22  9:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, x86, Andy Lutomirski, Linus Torvalds, Jiri Kosina,
	Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook, Tim Chen

On Thu, 22 Nov 2018, Peter Zijlstra wrote:

> On Wed, Nov 21, 2018 at 09:14:52PM +0100, Thomas Gleixner wrote:
> > @@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void
> >  #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
> >  #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
> >  #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
> > +#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation restricted */
> > +#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation permanentely restricted */
> >  
> >  #define TASK_PFA_TEST(name, func)					\
> >  	static inline bool task_##func(struct task_struct *p)		\
> > @@ -1484,6 +1486,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ss
> >  TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
> >  TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
> >  
> > +TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> > +TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> > +TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
> > +
> > +TASK_PFA_TEST(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
> > +TASK_PFA_SET(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
> > +
> >  static inline void
> >  current_restore_flags(unsigned long orig_flags, unsigned long flags)
> >  {
> 
> s/INDIR_BRANCH/IB/ would be consistent here with the SSB case.

Good point.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-22  7:42     ` Jiri Kosina
@ 2018-11-22  9:18       ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22  9:18 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Andi Kleen, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Jiri Kosina wrote:
> On Wed, 21 Nov 2018, Andi Kleen wrote:
> 
> > > +		 * This could be optimized by keeping track of the last
> > > +		 * user task per cpu and avoiding the barrier when the task
> > > +		 * is immediately scheduled back and the thread inbetween
> > > +		 * was a kernel thread. It's dubious whether that'd be
> > > +		 * worth the extra load/store and conditional operations.
> > > +		 * Keep it optimized for the common case where the TIF bit
> > > +		 * is not set.
> > > +		 */
> > 
> > The optimization was there before and you removed it?
> > 
> > It's quite important for switching to idle and back. With your variant short IOs
> > that do short idle waits will be badly impacted. 
> 
> The question is what scenario to optimize for.
> 
> Either you penalize everybody in the default prctl+seccomp setup 
> (irrespective of it's TIF flag value), as you have the extra overhead on 
> each and every switch_to() (to check exactly for this back-to-back 
> scheduling), or you penalize only those tasks that are penalized anyway by 
> the IBPB flush.
> 
> I think the latter (which is what this patch implements) makes more sense. 

That was my rationale. The back to back thing makes sense for unconditional
mode, but not so much for the prctl/seccomp case which is default.

In unconditional mode we could add the extra overhead, but then performance
is down the drain already with STIBP.

Thanks,

	tglx




^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-22  8:14     ` Thomas Gleixner
  2018-11-22  9:07       ` Thomas Gleixner
@ 2018-11-22  9:18       ` Peter Zijlstra
  2018-11-22 10:10         ` Borislav Petkov
  1 sibling, 1 reply; 95+ messages in thread
From: Peter Zijlstra @ 2018-11-22  9:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Borislav Petkov, Tom Lendacky, LKML, x86, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, Nov 22, 2018 at 09:14:47AM +0100, Thomas Gleixner wrote:
> On Thu, 22 Nov 2018, Borislav Petkov wrote:
> > > +
> > > +	/* Initialize Indirect Branch Prediction Barrier */
> > > +	if (boot_cpu_has(X86_FEATURE_IBPB)) {
> > > +		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> > > +		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
> > > +	}
> > 
> > So AFAICT, if coming in here with AUTO, we won't enable IBPB and I
> > *think* AMD wants IBPB enabled. At least the whitepaper says:
> > 
> > "IBPB combined with Reptoline software support is the AMD recommended
> > setting for Linux mitigation of Google Project Zero Variant 2
> > (Spectre)."
> 
> Ok. That's indeed a step backwards, because we don't do IBPB in KVM
> anymore. I'll fix that tomorrow morning when brain is more awake.
> 
> IBPB on context switch is controlled separately anyway now, so that's a
> nobrainer to sort out.
> 
> Though I wait for Toms answer whether we really want IBPB on context switch
> for AMD by default.

Right; that retpoline + IBPB case is one that came up earlier when we
talked about this stuff. The IBPB also helps against app2app BTB ASLR
attacks. So even if you have userspace retpoline, you might still want
IBPB.

But yes, this should be relatively straight forward to allow/fix with
the proposed code.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
                   ` (24 preceding siblings ...)
  2018-11-21 23:48 ` [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Tim Chen
@ 2018-11-22  9:45 ` Peter Zijlstra
  25 siblings, 0 replies; 95+ messages in thread
From: Peter Zijlstra @ 2018-11-22  9:45 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Linus Torvalds, Jiri Kosina,
	Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick,
	Arjan van de Ven, Jon Masters, Waiman Long, Greg KH,
	Dave Stewart, Kees Cook

On Wed, Nov 21, 2018 at 09:14:30PM +0100, Thomas Gleixner wrote:
> It's based on the x86/pti branch unfortunately, which contains the removal
> of the minimal asm retpoline hackery. I noticed too late. If the minimal
> asm stuff should not be backported it's trivial to rebase that series on
> Linus tree.

I see no problem with backporting those patches; the rationale for them
also holds for the stable trees, people really should have a retpoline
enabled compiler available by now.

Also, that minimal thing provided about as much protection as thinking
happy thoughts.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead
  2018-11-21 23:48 ` [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Tim Chen
@ 2018-11-22  9:55   ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22  9:55 UTC (permalink / raw)
  To: Tim Chen
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Andi Kleen

On Wed, 21 Nov 2018, Tim Chen wrote:
> On 11/21/2018 12:14 PM, Thomas Gleixner wrote:
> > This is based on Tim Chen's V5 patch series. The following changes have
> > been made:
> > 
> ...
> > 
> > TODO: Write documentation
> > 
> 
> Andi took a crack at the document.  I made some 
> modifications on top.  It can be used as a
> starting point for the final document.

Nice. Thanks a lot for starting this. I'll go through it later.

      Thomas

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-22  9:18       ` Peter Zijlstra
@ 2018-11-22 10:10         ` Borislav Petkov
  2018-11-22 10:48           ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-22 10:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Tom Lendacky, LKML, x86, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, Nov 22, 2018 at 10:18:58AM +0100, Peter Zijlstra wrote:
> Right; that retpoline + IBPB case is one that came up earlier when we
> talked about this stuff. The IBPB also helps against app2app BTB ASLR
> attacks. So even if you have userspace retpoline, you might still want
> IBPB.
> 
> But yes, this should be relatively straight forward to allow/fix with
> the proposed code.

So I got some feedback from AMD that IBPB on context switch has a
small perf impact and they wouldn't mind it being enabled by default
considering that it provides protection against a lot of attack
scenarios. Basically, what the recommendation says.

But if we go and do opt-in, then they're fine with it being off by
default if we decide to do it so in the kernel.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 15/24] x86/speculation: Add command line control for indirect branch speculation
  2018-11-22 10:10         ` Borislav Petkov
@ 2018-11-22 10:48           ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 10:48 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Peter Zijlstra, Tom Lendacky, LKML, x86, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Borislav Petkov wrote:
> On Thu, Nov 22, 2018 at 10:18:58AM +0100, Peter Zijlstra wrote:
> > Right; that retpoline + IBPB case is one that came up earlier when we
> > talked about this stuff. The IBPB also helps against app2app BTB ASLR
> > attacks. So even if you have userspace retpoline, you might still want
> > IBPB.
> > 
> > But yes, this should be relatively straight forward to allow/fix with
> > the proposed code.
> 
> So I got some feedback from AMD that IBPB on context switch has a
> small perf impact and they wouldn't mind it being enabled by default
> considering that it provides protection against a lot of attack
> scenarios. Basically, what the recommendation says.
> 
> But if we go and do opt-in, then they're fine with it being off by
> default if we decide to do it so in the kernel.

So one way to do this would be to have additional options:

   prctl,ibpb and seccomp,ibpb

which then would keep the STIBP stuff as proposed and switch ibpb to always
mode. Adding the back to back optimization for the always ibpb mode is not
rocket science.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
  2018-11-22  7:10   ` Ingo Molnar
  2018-11-22  9:03   ` Peter Zijlstra
@ 2018-11-22 12:26   ` Borislav Petkov
  2018-11-22 12:33     ` Peter Zijlstra
  2 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-22 12:26 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

On Wed, Nov 21, 2018 at 09:14:52PM +0100, Thomas Gleixner wrote:
> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> Add the PR_SPEC_INDIR_BRANCH option for the PR_GET_SPECULATION_CTRL and
> PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
> indirect branch speculation via STIBP.
> 
> Invocations:
>  Check indirect branch speculation status with
>  - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, 0, 0, 0);
> 
>  Enable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
> 
>  Disable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
> 
>  Force disable indirect branch speculation with
>  - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
> 
> See Documentation/userspace-api/spec_ctrl.rst.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  Documentation/userspace-api/spec_ctrl.rst |    9 +++
>  arch/x86/include/asm/nospec-branch.h      |    1 
>  arch/x86/kernel/cpu/bugs.c                |   71 ++++++++++++++++++++++++++++++
>  include/linux/sched.h                     |    9 +++
>  include/uapi/linux/prctl.h                |    1 
>  tools/include/uapi/linux/prctl.h          |    1 
>  6 files changed, 92 insertions(+)

> @@ -753,12 +755,56 @@ static int ssb_prctl_set(struct task_str
>  	return 0;
>  }
>  
> +static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	switch (ctrl) {
> +	case PR_SPEC_ENABLE:
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
> +			return 0;
> +		/*
> +		 * Indirect branch speculation is always disabled in strict
> +		 * mode.
> +		 */
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
> +			return -EPERM;
> +		task_clear_spec_indir_branch_disable(task);
> +		task_update_spec_tif(task, TIF_SPEC_IB, false);
> +		break;
> +	case PR_SPEC_DISABLE:
> +		/*
> +		 * Indirect branch speculation is always allowed when
> +		 * mitigation is force disabled.
> +		 */
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
> +			return -EPERM;
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
> +			return 0;
> +		task_set_spec_indir_branch_disable(task);
> +		task_update_spec_tif(task, TIF_SPEC_IB, true);
> +		break;
> +	case PR_SPEC_FORCE_DISABLE:
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
> +			return -EPERM;
> +		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
> +			return 0;
> +		task_set_spec_indir_branch_disable(task);
> +		task_set_spec_indir_branch_force_disable(task);
> +		task_update_spec_tif(task, TIF_SPEC_IB, true);
> +		break;
> +	default:
> +		return -ERANGE;
> +	}
> +	return 0;
> +}

Perhaps merge the two DISABLE branches to make it obvious what the
difference between them is:

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6eac074e3935..28cece3a067b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -769,7 +769,9 @@ static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
 		task_clear_spec_indir_branch_disable(task);
 		task_update_spec_tif(task, TIF_SPEC_IB, false);
 		break;
+
 	case PR_SPEC_DISABLE:
+	case PR_SPEC_FORCE_DISABLE:
 		/*
 		 * Indirect branch speculation is always allowed when
 		 * mitigation is force disabled.
@@ -780,16 +782,11 @@ static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
 			return 0;
 		task_set_spec_indir_branch_disable(task);
 		task_update_spec_tif(task, TIF_SPEC_IB, true);
+
+		if (ctrl == PR_SPEC_FORCE_DISABLE)
+			task_set_spec_indir_branch_force_disable(task);
 		break;
-	case PR_SPEC_FORCE_DISABLE:
-		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
-			return -EPERM;
-		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
-			return 0;
-		task_set_spec_indir_branch_disable(task);
-		task_set_spec_indir_branch_force_disable(task);
-		task_update_spec_tif(task, TIF_SPEC_IB, true);
-		break;
+
 	default:
 		return -ERANGE;
 	}

> @@ -1453,6 +1453,8 @@ static inline bool is_percpu_thread(void
>  #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
>  #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
>  #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
> +#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation restricted */
> +#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation permanentely restricted */

permanently

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply related	[flat|nested] 95+ messages in thread

* Re: [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-11-22 12:26   ` Borislav Petkov
@ 2018-11-22 12:33     ` Peter Zijlstra
  0 siblings, 0 replies; 95+ messages in thread
From: Peter Zijlstra @ 2018-11-22 12:33 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook, Tim Chen

On Thu, Nov 22, 2018 at 01:26:38PM +0100, Borislav Petkov wrote:
> Perhaps merge the two DISABLE branches to make it obvious what the
> difference between them is:
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 6eac074e3935..28cece3a067b 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -769,7 +769,9 @@ static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
>  		task_clear_spec_indir_branch_disable(task);
>  		task_update_spec_tif(task, TIF_SPEC_IB, false);
>  		break;
> +
>  	case PR_SPEC_DISABLE:
> +	case PR_SPEC_FORCE_DISABLE:
>  		/*
>  		 * Indirect branch speculation is always allowed when
>  		 * mitigation is force disabled.
> @@ -780,16 +782,11 @@ static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
>  			return 0;
>  		task_set_spec_indir_branch_disable(task);
>  		task_update_spec_tif(task, TIF_SPEC_IB, true);
> +
> +		if (ctrl == PR_SPEC_FORCE_DISABLE)
> +			task_set_spec_indir_branch_force_disable(task);
>  		break;
> -	case PR_SPEC_FORCE_DISABLE:
> -		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_NONE)
> -			return -EPERM;
> -		if (spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT)
> -			return 0;
> -		task_set_spec_indir_branch_disable(task);
> -		task_set_spec_indir_branch_force_disable(task);
> -		task_update_spec_tif(task, TIF_SPEC_IB, true);
> -		break;
> +
>  	default:
>  		return -ERANGE;
>  	}

I like that; maybe also do the same to the ssb code, for symmetry.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-21 23:08           ` Borislav Petkov
@ 2018-11-22 17:30             ` Josh Poimboeuf
  2018-11-22 17:52               ` Borislav Petkov
  0 siblings, 1 reply; 95+ messages in thread
From: Josh Poimboeuf @ 2018-11-22 17:30 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Thu, Nov 22, 2018 at 12:08:54AM +0100, Borislav Petkov wrote:
> On Wed, Nov 21, 2018 at 05:04:50PM -0600, Josh Poimboeuf wrote:
> > Why not just 'user'?  Like SPECTRE_V2_USER_*.
> 
> Sure, a bit better except that it doesn't explain what it does, I'd say.

But it does describe its purpose, especially in relation to the
'spectre_v2=' option.

Previously 'spectre_v2=' might have been more appropriately named
'spectre_v2_kernel=' because it only protected the kernel from Spectre
v2 attacks.  Now with these new patches, 'spectre_v2=on' will protect
the entire system.

Whereas 'spectre_v2_user=' is a subset of that; it helps protect user
space from itself.  Appending "user" to the existing 'spectre_v2='
option helps to communicate that, IMO.

Now off to eat a giant turkey.

-- 
Josh

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-22 17:30             ` Josh Poimboeuf
@ 2018-11-22 17:52               ` Borislav Petkov
  2018-11-22 21:17                 ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Borislav Petkov @ 2018-11-22 17:52 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Thomas Gleixner, Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Thu, Nov 22, 2018 at 11:30:04AM -0600, Josh Poimboeuf wrote:
> But it does describe its purpose, especially in relation to the
> 'spectre_v2=' option.

Sure, but the thing I'm proposing

spectre_v2_task_isol=

describes it more precisely, IMHO. :)

I.e., "enable/disable spectre v2 task isolation".

> Previously 'spectre_v2=' might have been more appropriately named
> 'spectre_v2_kernel=' because it only protected the kernel from Spectre
> v2 attacks.  Now with these new patches, 'spectre_v2=on' will protect
> the entire system.

Hmmm, crazy idea: can we extend the options of spectre_v2= nstead?

spectre_v2=user_isolation,...
spectre_v2=kernel,...
spectre_v2=task_isolation,...

and so on?

This way we can do a couple of option switches in one go.

Hmmm?

> Now off to eat a giant turkey.

Try not to fall into a turkey coma. :-P

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 01/24] x86/speculation: Update the TIF_SSBD comment
  2018-11-22 17:52               ` Borislav Petkov
@ 2018-11-22 21:17                 ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 21:17 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Josh Poimboeuf, Linus Torvalds, Linux List Kernel Mailing,
	the arch/x86 maintainers, Peter Zijlstra, Andrew Lutomirski,
	Jiri Kosina, thomas.lendacky, Andrea Arcangeli, David Woodhouse,
	Andi Kleen, dave.hansen, Casey Schaufler, Mallick, Asit K,
	Van De Ven, Arjan, jcm, longman9394, Greg KH, david.c.stewart,
	Kees Cook, Tim Chen

On Thu, 22 Nov 2018, Borislav Petkov wrote:
> On Thu, Nov 22, 2018 at 11:30:04AM -0600, Josh Poimboeuf wrote:
> > But it does describe its purpose, especially in relation to the
> > 'spectre_v2=' option.
> 
> Sure, but the thing I'm proposing
> 
> spectre_v2_task_isol=
> 
> describes it more precisely, IMHO. :)
> 
> I.e., "enable/disable spectre v2 task isolation".
> 
> > Previously 'spectre_v2=' might have been more appropriately named
> > 'spectre_v2_kernel=' because it only protected the kernel from Spectre
> > v2 attacks.  Now with these new patches, 'spectre_v2=on' will protect
> > the entire system.
> 
> Hmmm, crazy idea: can we extend the options of spectre_v2= nstead?
> 
> spectre_v2=user_isolation,...
> spectre_v2=kernel,...
> spectre_v2=task_isolation,...
> 
> and so on?
> 
> This way we can do a couple of option switches in one go.

That's results in a huge parser state space and changes the existing
interface. We stay with the separate option.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 17/24] x86/speculation: Move IBPB control out of switch_mm()
  2018-11-22  7:52   ` Ingo Molnar
@ 2018-11-22 22:29     ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 22:29 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Ingo Molnar wrote:
> * Thomas Gleixner <tglx@linutronix.de> wrote:
> >  	/*
> >  	 * Now maybe reload the debug registers and handle I/O bitmaps
> >  	 */
> > -	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
> > -		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
> > +	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
> > +		     prev_tif & _TIF_WORK_CTXSW_PREV))
> >  		__switch_to_xtra(prev_p, next_p, tss);
> 
> Hm, the repetition between process_32.c and process_64.c is getting 
> stronger - could some of this be unified into process.c? (in later 
> patches)

Yes, for the price of an out of line call.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-22  2:13   ` Tim Chen
@ 2018-11-22 23:00     ` Thomas Gleixner
  2018-11-23  7:37       ` Ingo Molnar
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 23:00 UTC (permalink / raw)
  To: Tim Chen
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Wed, 21 Nov 2018, Tim Chen wrote:

> On Wed, Nov 21, 2018 at 09:14:50PM +0100, Thomas Gleixner wrote:
> > +static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
> >  {
> >       bool update;
> >
> > +     if (on)
> > +             update = !test_and_set_tsk_thread_flag(tsk, tifbit);
> > +     else
> > +             update = test_and_clear_tsk_thread_flag(tsk, tifbit);
> > +
> > +     /*
> > +      * If being set on non-current task, delay setting the CPU
> > +	 * mitigation until it is scheduled next.
> > +	 */
> > +     if (tsk == current && update)
> > +             speculation_ctrl_update_current();
> 
> I think all the call paths from prctl and seccomp coming here
> has tsk == current.

We had that discussion before with SSBD:

seccomp_set_mode_filter()
   seccomp_attach_filter()
      seccomp_sync_threads()
         for_each_thread(t)
	    if (t == current)
              continue;
	    seccomp_assign_mode(t)
	      arch_seccomp_spec_mitigate(t);

seccomp_assign_mode(current...)
  arch_seccomp_spec_mitigate();

> But if task_update_spec_tif gets used in the future where tsk is running
> on a remote CPU, this could lead to the MSR getting out of sync with the
> running task's TIF flag. This will break either performance or security.

We also had that discussion with SSBD and decided that we won't chase
threads and send IPIs around. Yes, it's not perfect, but not the end of the
world either. For PRCTL it's a non issue.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-22  7:43   ` [patch 20/24] x86/speculation: Split out TIF update Ingo Molnar
@ 2018-11-22 23:04     ` Thomas Gleixner
  2018-11-23  7:37       ` Ingo Molnar
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 23:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Ingo Molnar wrote:
> * Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> Had to read this twice, because the comment and the code are both correct 
> but deal with the inverse case. This might have helped:
> 
> 	/*
> 	 * Immediately update the speculation MSRs on the current task,
> 	 * but for non-current tasks delay setting the CPU mitigation 
> 	 * until it is scheduled next.
> 	 */
> 	if (tsk == current && update)
> 		speculation_ctrl_update_current();
> 
> But can the target task ever be non-current here? I don't think so: the 
> two callers are prctl and seccomp, and both are passing in the current 
> task pointer.

See te other mail. Yes, seccomp passes non current task pointers. Will
update the comment.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-22  7:34   ` Ingo Molnar
@ 2018-11-22 23:17     ` Thomas Gleixner
  2018-11-22 23:28       ` Jiri Kosina
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 23:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Ingo Molnar wrote:
> * Thomas Gleixner <tglx@linutronix.de> wrote:
> > +static void update_stibp_msr(void *info)
> >  {
> > -	/* Enhanced IBRS makes using STIBP unnecessary. */
> > -	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> > -		return false;
> > -
> > -	/* Check for strict app2app mitigation mode */
> > -	return spectre_v2_app2app == SPECTRE_V2_APP2APP_STRICT;
> > +	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> >  }
> 
> 
> Does Sparse or other tooling warn about unused function parameters? If 
> yes then it might make sense to mark it __used?

That would be __useless :)

> So I'm wondering, shouldn't firmware_restrict_branch_speculation_start()/_end()
> also enable/disable STIBP? It already enabled/disables IBRS.

IBRS includes STIBP. We don't use IBRS in the kernel otherwise because
you'd have to do more MSR writes on the protection boundaries.

We only use ENHANCED IBRS if it's available, which also makes STIBP
redundant.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode
  2018-11-22 23:17     ` Thomas Gleixner
@ 2018-11-22 23:28       ` Jiri Kosina
  0 siblings, 0 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-22 23:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Ingo Molnar, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Fri, 23 Nov 2018, Thomas Gleixner wrote:

> > So I'm wondering, shouldn't firmware_restrict_branch_speculation_start()/_end()
> > also enable/disable STIBP? It already enabled/disables IBRS.
> 
> IBRS includes STIBP. 

True.

> We don't use IBRS in the kernel otherwise because you'd have to do more 
> MSR writes on the protection boundaries.

Just for the record -- we do have an option for IBRS in our distro kernel 
on SKL+ systems.

There definitely is a measurable performance impact, but the MSR writes on 
protection boundaries are totally cheap compared to the actual IBRS 
runtime operation effect.

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode
  2018-11-22  7:26   ` Ingo Molnar
@ 2018-11-22 23:45     ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-22 23:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On Thu, 22 Nov 2018, Ingo Molnar wrote:
> > +	[SPECTRE_V2_APP2APP_SECCOMP]	= "App-App Mitigation: seccomp and prctl opt-in",
> 
> This description is not accurate: it's not a 'seccomp and prctl opt-in', 
> the seccomp functionality is opt-out, the prctl is opt-in.
> 
> So something like:
> 
> > +	[SPECTRE_V2_APP2APP_SECCOMP]	= "App-App Mitigation: seccomp by default and prctl opt-in",

Na. I just make it: "prctl" and "seccomp + prctl" 

> >  void arch_seccomp_spec_mitigate(struct task_struct *task)
> >  {
> >  	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
> >  		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
> > +	if (spectre_v2_app2app == SPECTRE_V2_APP2APP_SECCOMP)
> > +		indir_branch_prctl_set(task, PR_SPEC_FORCE_DISABLE);
> >  }
> >  #endif
> 
> Hm, so isn't arch_seccomp_spec_mitigate() called right before untrusted 
> seccomp code is executed? So why are we disabling the mitigation here?

It disables the CPU speculation misfeature not the mitigation. And no, we
are not going to change it because the constants are user space ABI today.

Thanks,

	tglx



^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-22 23:00     ` Thomas Gleixner
@ 2018-11-23  7:37       ` Ingo Molnar
  2018-11-26 18:35         ` Tim Chen
  0 siblings, 1 reply; 95+ messages in thread
From: Ingo Molnar @ 2018-11-23  7:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> On Wed, 21 Nov 2018, Tim Chen wrote:
> 
> > On Wed, Nov 21, 2018 at 09:14:50PM +0100, Thomas Gleixner wrote:
> > > +static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
> > >  {
> > >       bool update;
> > >
> > > +     if (on)
> > > +             update = !test_and_set_tsk_thread_flag(tsk, tifbit);
> > > +     else
> > > +             update = test_and_clear_tsk_thread_flag(tsk, tifbit);
> > > +
> > > +     /*
> > > +      * If being set on non-current task, delay setting the CPU
> > > +	 * mitigation until it is scheduled next.
> > > +	 */
> > > +     if (tsk == current && update)
> > > +             speculation_ctrl_update_current();
> > 
> > I think all the call paths from prctl and seccomp coming here
> > has tsk == current.
> 
> We had that discussion before with SSBD:
> 
> seccomp_set_mode_filter()
>    seccomp_attach_filter()
>       seccomp_sync_threads()
>          for_each_thread(t)
> 	    if (t == current)
>               continue;
> 	    seccomp_assign_mode(t)
> 	      arch_seccomp_spec_mitigate(t);
> 
> seccomp_assign_mode(current...)
>   arch_seccomp_spec_mitigate();
> 
> > But if task_update_spec_tif gets used in the future where tsk is running
> > on a remote CPU, this could lead to the MSR getting out of sync with the
> > running task's TIF flag. This will break either performance or security.
> 
> We also had that discussion with SSBD and decided that we won't chase
> threads and send IPIs around. Yes, it's not perfect, but not the end of the
> world either. For PRCTL it's a non issue.

Fair enough and agreed - but please add a comment for all this, as it's a 
non-trivial and rare call context and a non-trivial implementation 
trade-off as a result.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-22 23:04     ` Thomas Gleixner
@ 2018-11-23  7:37       ` Ingo Molnar
  0 siblings, 0 replies; 95+ messages in thread
From: Ingo Molnar @ 2018-11-23  7:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook


* Thomas Gleixner <tglx@linutronix.de> wrote:

> On Thu, 22 Nov 2018, Ingo Molnar wrote:
> > * Thomas Gleixner <tglx@linutronix.de> wrote:
> > 
> > Had to read this twice, because the comment and the code are both correct 
> > but deal with the inverse case. This might have helped:
> > 
> > 	/*
> > 	 * Immediately update the speculation MSRs on the current task,
> > 	 * but for non-current tasks delay setting the CPU mitigation 
> > 	 * until it is scheduled next.
> > 	 */
> > 	if (tsk == current && update)
> > 		speculation_ctrl_update_current();
> > 
> > But can the target task ever be non-current here? I don't think so: the 
> > two callers are prctl and seccomp, and both are passing in the current 
> > task pointer.
> 
> See te other mail. Yes, seccomp passes non current task pointers. Will
> update the comment.

Ignore my previous mail :-)

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-23  7:37       ` Ingo Molnar
@ 2018-11-26 18:35         ` Tim Chen
  2018-11-26 21:55           ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Tim Chen @ 2018-11-26 18:35 UTC (permalink / raw)
  To: Ingo Molnar, Thomas Gleixner
  Cc: LKML, x86, Peter Zijlstra, Andy Lutomirski, Linus Torvalds,
	Jiri Kosina, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On 11/22/2018 11:37 PM, Ingo Molnar wrote:

>>> I think all the call paths from prctl and seccomp coming here
>>> has tsk == current.
>>
>> We had that discussion before with SSBD:
>>
>> seccomp_set_mode_filter()
>>    seccomp_attach_filter()
>>       seccomp_sync_threads()
>>          for_each_thread(t)
>> 	    if (t == current)
>>               continue;
>> 	    seccomp_assign_mode(t)
>> 	      arch_seccomp_spec_mitigate(t);
>>
>> seccomp_assign_mode(current...)
>>   arch_seccomp_spec_mitigate();
>>
>>> But if task_update_spec_tif gets used in the future where tsk is running
>>> on a remote CPU, this could lead to the MSR getting out of sync with the
>>> running task's TIF flag. This will break either performance or security.
>>
>> We also had that discussion with SSBD and decided that we won't chase
>> threads and send IPIs around. Yes, it's not perfect, but not the end of the
>> world either. For PRCTL it's a non issue.


Looks like seccomp thread can be running on a remote CPU when its TIF_SPEC_IB flag
gets updated.
 
I wonder if this will cause STIBP to be always off in this scenario, when
two tasks with SPEC_IB flags running on a remote CPU have STIBP bit always
*off* in SPEC MSR.

Let's say we have tasks A and B running on a remote CPU:

task A: SPEC_IB flag is on
task B: SPEC_IB flag is off but is currently running on remote CPU, SPEC MSR's STIBP bit is off

Now arch_seccomp_spec_mitigation is called, setting SPEC_IB flag on task B.
SPEC MSR becomes out of sync with running task B's SPEC_IB flag.

Task B context switches to task A. Because both tasks have SPEC_IB flag set and the flag
status is unchanged, SPEC MSR's STIBP bit is not updated.
SPEC MSR STIBP bit remains off if tasks A and B are the only tasks running
on the CPU.

There is an equivalent scenario where the SPEC MSR's STIBP bit remains on even though both
running task A and B's SPEC_IB flags are turned off.

Wonder if I may be missing something so the above scenario is not of concern?

Thanks.

Tim


> 
> Fair enough and agreed - but please add a comment for all this, as it's a 
> non-trivial and rare call context and a non-trivial implementation 
> trade-off as a result.
> 


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-26 18:35         ` Tim Chen
@ 2018-11-26 21:55           ` Thomas Gleixner
  2018-11-27  7:05             ` Jiri Kosina
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-26 21:55 UTC (permalink / raw)
  To: Tim Chen
  Cc: Ingo Molnar, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Jiri Kosina, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Mon, 26 Nov 2018, Tim Chen wrote:
> On 11/22/2018 11:37 PM, Ingo Molnar wrote:
> >> We also had that discussion with SSBD and decided that we won't chase
> >> threads and send IPIs around. Yes, it's not perfect, but not the end of the
> >> world either. For PRCTL it's a non issue.
> 
> Looks like seccomp thread can be running on a remote CPU when its
> TIF_SPEC_IB flag gets updated.
>
> I wonder if this will cause STIBP to be always off in this scenario, when
> two tasks with SPEC_IB flags running on a remote CPU have STIBP bit always
> *off* in SPEC MSR.
> 
> Let's say we have tasks A and B running on a remote CPU:
> 
> task A: SPEC_IB flag is on
>
> task B: SPEC_IB flag is off but is currently running on remote CPU, SPEC
>         MSR's STIBP bit is off
>
> Now arch_seccomp_spec_mitigation is called, setting SPEC_IB flag on task B.
> SPEC MSR becomes out of sync with running task B's SPEC_IB flag.
> 
>
> Task B context switches to task A. Because both tasks have SPEC_IB flag
> set and the flag status is unchanged, SPEC MSR's STIBP bit is not
> updated.  SPEC MSR STIBP bit remains off if tasks A and B are the only
> tasks running on the CPU.
>
> There is an equivalent scenario where the SPEC MSR's STIBP bit remains on
> even though both running task A and B's SPEC_IB flags are turned off.
>
> Wonder if I may be missing something so the above scenario is not of
> concern?

The above is real. The question is whether we need to worry about it.

If so, then the right thing to do is to leave thread_info.flags alone and
flip the bits in a shadow storage, e.g. thread_info.spec_flags and after
updating that set something like TIF_SPEC_UPDATE and evaluate that bit on
context switch and if set update the TIF flags. Too tired to code that now,
but it's straight forward. I'll look at it on wednesday if nobody beats me
to it.

Thanks,

	tglx





^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-26 21:55           ` Thomas Gleixner
@ 2018-11-27  7:05             ` Jiri Kosina
  2018-11-27  7:13               ` Thomas Gleixner
  0 siblings, 1 reply; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27  7:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Mon, 26 Nov 2018, Thomas Gleixner wrote:

> > Looks like seccomp thread can be running on a remote CPU when its
> > TIF_SPEC_IB flag gets updated.
> >
> > I wonder if this will cause STIBP to be always off in this scenario, when
> > two tasks with SPEC_IB flags running on a remote CPU have STIBP bit always
> > *off* in SPEC MSR.
> > 
> > Let's say we have tasks A and B running on a remote CPU:
> > 
> > task A: SPEC_IB flag is on
> >
> > task B: SPEC_IB flag is off but is currently running on remote CPU, SPEC
> >         MSR's STIBP bit is off
> >
> > Now arch_seccomp_spec_mitigation is called, setting SPEC_IB flag on task B.
> > SPEC MSR becomes out of sync with running task B's SPEC_IB flag.
> > 
> >
> > Task B context switches to task A. Because both tasks have SPEC_IB flag
> > set and the flag status is unchanged, SPEC MSR's STIBP bit is not
> > updated.  SPEC MSR STIBP bit remains off if tasks A and B are the only
> > tasks running on the CPU.
> >
> > There is an equivalent scenario where the SPEC MSR's STIBP bit remains on
> > even though both running task A and B's SPEC_IB flags are turned off.
> >
> > Wonder if I may be missing something so the above scenario is not of
> > concern?
> 
> The above is real. 

Agreed.

> The question is whether we need to worry about it.

Well, update of seccomp filters (and therefore updating of the flags) 
might happen at any time, long after the seccomp process has been started, 
so it might be pretty spread across cores by that time. So I think it 
indeed is a real scenario, although probably even harder for explicitly 
target by an attacker.

> If so, then the right thing to do is to leave thread_info.flags alone 
> and flip the bits in a shadow storage, e.g. thread_info.spec_flags and 
> after updating that set something like TIF_SPEC_UPDATE and evaluate that 
> bit on context switch and if set update the TIF flags. Too tired to code 
> that now, but it's straight forward. I'll look at it on wednesday if 
> nobody beats me to it.

Hm, the we'd have to implement the same split for things like checking of 
the work masks etc. (because we'd have to be checking in both places), 
right? That doesn't look particularly nice.

How about the minimalistic aproach below? (only compile tested so far, 
applies on top of your latest WIP.x86/pti branch). The downside of course 
is wasting another TIF bit.

Thanks.



From: Jiri Kosina <jkosina@suse.cz>
Subject: [PATCH] x86/speculation: Always properly update SPEC_CTRL MSR for remote seccomp tasks

If seccomp task is setting (*) TIF_SPEC_IB of a task running on remote CPU, the
value of TIF_SPEC_IB becomes out-of-sync with the actual MSR value on that CPU.

This becomes a problem when such task then context switches to another task
that has TIF_SPEC_IB set, as in such case the value of SPEC_CTRL MSR is not
updated and the next task starts running with stale value of SPEC_CTRL,
potentially unprotected by STIBP.

Fix that by always unconditionally updating the MSR in case that

- next task's TIF_SPEC_IB has been remotely set by its another seccomp thread,
  and

- TIF_SPEC_IB value of next is equal to the one of prev, and therefore
  we are guaranteed to be in a situation where MSR update would be lost

(*) symmetrical situation happens with clearing of the flag

Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---
 arch/x86/include/asm/thread_info.h |  4 +++-
 arch/x86/kernel/cpu/bugs.c         |  8 ++++++++
 arch/x86/kernel/process.c          | 16 +++++++++++++++-
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 6d201699c651..278f9036ca45 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -84,6 +84,7 @@ struct thread_info {
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
 #define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
+#define TIF_SPEC_UPDATE		10	/* SPEC_CTRL MSR sync needed on CTXSW */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -112,6 +113,7 @@ struct thread_info {
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
+#define _TIF_SPEC_UPDATE	(1 << TIF_SPEC_UPDATE)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -155,7 +157,7 @@ struct thread_info {
  * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
  */
 #ifdef CONFIG_SMP
-# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB | _TIF_SPEC_UPDATE)
 #else
 # define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
 #endif
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b5d2b36618a5..20d7c67b3dda 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -772,9 +772,17 @@ static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 	 *
 	 * This can only happen for SECCOMP mitigation. For PRCTL it's
 	 * always the current task.
+	 *
+	 * If we are updating non-current task, set a flag for it to always
+	 * perform the MSR sync on a first context switch, to make sure
+	 * the TIF_SPEC_IB above is not out of sync with the MSR value during
+	 * task's runtime.
 	 */
 	if (tsk == current && update)
 		speculation_ctrl_update_current();
+	else
+		set_tsk_thread_flag(tsk, TIF_SPEC_UPDATE);
+
 }
 
 static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3f5e351bdd37..78208234e63e 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -449,8 +449,20 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 	 * otherwise avoid the MSR write.
 	 */
 	if (IS_ENABLED(CONFIG_SMP) &&
-	    static_branch_unlikely(&switch_to_cond_stibp))
+	    static_branch_unlikely(&switch_to_cond_stibp)) {
 		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+		/*
+		 * We need to update the MSR if remote task did set
+		 * TIF_SPEC_UPDATE on us, and therefore MSR value and
+		 * the TIF_SPEC_IB values might be out of sync.
+		 *
+		 * This can only happen if seccomp task has updated
+		 * one of its remote threads.
+		 */
+		if (IS_ENABLED(CONFIG_SECCOMP) && !updmsr &&
+				(tifn & TIF_SPEC_UPDATE))
+			updmsr = true;
+	}
 
 	if (updmsr)
 		spec_ctrl_update_msr(tifn);
@@ -496,6 +508,8 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	__speculation_ctrl_update(tifp, tifn);
+	if (IS_ENABLED(CONFIG_SECCOMP))
+		clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE);
 }
 
 /*


-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply related	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27  7:05             ` Jiri Kosina
@ 2018-11-27  7:13               ` Thomas Gleixner
  2018-11-27  7:30                 ` Jiri Kosina
  0 siblings, 1 reply; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-27  7:13 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:
> On Mon, 26 Nov 2018, Thomas Gleixner wrote:
> 
> How about the minimalistic aproach below? (only compile tested so far, 
> applies on top of your latest WIP.x86/pti branch). The downside of course 
> is wasting another TIF bit.

We need to waste another TIF bit in any case.

>  	 *
>  	 * This can only happen for SECCOMP mitigation. For PRCTL it's
>  	 * always the current task.
> +	 *
> +	 * If we are updating non-current task, set a flag for it to always
> +	 * perform the MSR sync on a first context switch, to make sure
> +	 * the TIF_SPEC_IB above is not out of sync with the MSR value during
> +	 * task's runtime.
>  	 */
>  	if (tsk == current && update)
>  		speculation_ctrl_update_current();
> +	else
> +		set_tsk_thread_flag(tsk, TIF_SPEC_UPDATE);
> +
>  }
>  
>  static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 3f5e351bdd37..78208234e63e 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -449,8 +449,20 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
>  	 * otherwise avoid the MSR write.
>  	 */
>  	if (IS_ENABLED(CONFIG_SMP) &&
> -	    static_branch_unlikely(&switch_to_cond_stibp))
> +	    static_branch_unlikely(&switch_to_cond_stibp)) {
>  		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
> +		/*
> +		 * We need to update the MSR if remote task did set
> +		 * TIF_SPEC_UPDATE on us, and therefore MSR value and
> +		 * the TIF_SPEC_IB values might be out of sync.
> +		 *
> +		 * This can only happen if seccomp task has updated
> +		 * one of its remote threads.
> +		 */
> +		if (IS_ENABLED(CONFIG_SECCOMP) && !updmsr &&
> +				(tifn & TIF_SPEC_UPDATE))
> +			updmsr = true;
> +	}
>  
>  	if (updmsr)
>  		spec_ctrl_update_msr(tifn);
> @@ -496,6 +508,8 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>  
>  	__speculation_ctrl_update(tifp, tifn);
> +	if (IS_ENABLED(CONFIG_SECCOMP))
> +		clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE);

That's racy and does not prevent the situation because the TIF flags are
updated befor the UPDATE bit is set. So __speculation_ctrl_update() might
see the new bits, but not TIF_SPEC_UPDATE. You really need shadow storage
to avoid that.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27  7:13               ` Thomas Gleixner
@ 2018-11-27  7:30                 ` Jiri Kosina
  2018-11-27 12:52                   ` Jiri Kosina
  0 siblings, 1 reply; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27  7:30 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Thomas Gleixner wrote:

> That's racy and does not prevent the situation because the TIF flags are
> updated befor the UPDATE bit is set. 

> So __speculation_ctrl_update() might see the new bits, but not 
> TIF_SPEC_UPDATE. 

Hm, right, scratch that. We'd need to do that before updating TIF_SPEC_IB 
in task_update_spec_tif(), but that has the opposite ordering issue.

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27  7:30                 ` Jiri Kosina
@ 2018-11-27 12:52                   ` Jiri Kosina
  2018-11-27 13:18                     ` Jiri Kosina
  2018-11-27 21:57                     ` Thomas Gleixner
  0 siblings, 2 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27 12:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:

> > That's racy and does not prevent the situation because the TIF flags are
> > updated befor the UPDATE bit is set. 
> 
> > So __speculation_ctrl_update() might see the new bits, but not 
> > TIF_SPEC_UPDATE. 
> 
> Hm, right, scratch that. We'd need to do that before updating TIF_SPEC_IB 
> in task_update_spec_tif(), but that has the opposite ordering issue.

I think this should do it (not yet fully tested).


From: Jiri Kosina <jkosina@suse.cz>
Subject: [PATCH] x86/speculation: Always properly update SPEC_CTRL MSR for remote SECCOMP tasks

If seccomp task is setting TIF_SPEC_IB for a task running on remote CPU, 
the value of TIF_SPEC_IB becomes out-of-sync with the actual MSR value on 
that CPU.

This becomes a problem when such task then context switches to another 
task that has TIF_SPEC_IB set, as in such case the value of SPEC_CTRL MSR 
is not updated and the next task starts running with stale value of 
SPEC_CTRL, potentially unprotected by STIBP.

Fix that by "queuing" the needed flags update in 'spec_ctrl' shadow 
variable, and populating proper TI flags with it on context switch to the 
task that has the TIF_SPEC_UPDATE flag (indicating that syncing is 
necessary) set.

Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---
 arch/x86/include/asm/thread_info.h |  5 ++++-
 arch/x86/kernel/cpu/bugs.c         | 13 +++++++++++--
 arch/x86/kernel/process.c          | 15 +++++++++++++++
 3 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 6d201699c651..001b053067d7 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -55,6 +55,7 @@ struct task_struct;
 
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
+	unsigned long		spec_flags;	/* spec flags to sync on ctxsw */
 	u32			status;		/* thread synchronous flags */
 };
 
@@ -84,6 +85,7 @@ struct thread_info {
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
 #define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
+#define TIF_SPEC_UPDATE		10	/* SPEC_CTRL MSR sync needed on CTXSW */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -112,6 +114,7 @@ struct thread_info {
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
+#define _TIF_SPEC_UPDATE	(1 << TIF_SPEC_UPDATE)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -155,7 +158,7 @@ struct thread_info {
  * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
  */
 #ifdef CONFIG_SMP
-# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB | _TIF_SPEC_UPDATE)
 #else
 # define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
 #endif
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b5d2b36618a5..679946135789 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -761,9 +761,11 @@ static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 	bool update;
 
 	if (on)
-		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
+		update = !test_and_set_bit(tifbit,
+				&task_thread_info(tsk)->spec_flags);
 	else
-		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+		update = test_and_clear_bit(tifbit,
+				&task_thread_info(tsk)->spec_flags);
 
 	/*
 	 * Immediately update the speculation control MSRs for the current
@@ -772,9 +774,16 @@ static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 	 *
 	 * This can only happen for SECCOMP mitigation. For PRCTL it's
 	 * always the current task.
+	 *
+	 * If we are updating non-current SECCOMP task, set a flag for it to
+	 * always perform the MSR sync on a first context switch to it, in order
+	 * to make sure the TIF_SPEC_IB above is not out of sync with the MSR value.
 	 */
 	if (tsk == current && update)
 		speculation_ctrl_update_current();
+	else
+		set_tsk_thread_flag(tsk, TIF_SPEC_UPDATE);
+
 }
 
 static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3f5e351bdd37..6c4fcef52b19 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 
 	tifn = READ_ONCE(task_thread_info(next_p)->flags);
 	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
+
+	/*
+	 * SECCOMP tasks might have had their spec_ctrl flags updated during
+	 * runtime from a different CPU.
+	 *
+	 * When switching to such a task, populate thread flags with the ones
+	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
+	 * in order to make sure MSR value is always kept up to date.
+	 *
+	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
+	 */
+	if (IS_ENABLED(CONFIG_SECCOMP) &&
+			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
+		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);
+
 	switch_to_bitmap(prev, next, tifp, tifn);
 
 	propagate_user_return_notify(prev_p, next_p);

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply related	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 12:52                   ` Jiri Kosina
@ 2018-11-27 13:18                     ` Jiri Kosina
  2018-11-27 21:57                     ` Thomas Gleixner
  1 sibling, 0 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27 13:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:


> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
>  
>  	tifn = READ_ONCE(task_thread_info(next_p)->flags);
>  	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
> +
> +	/*
> +	 * SECCOMP tasks might have had their spec_ctrl flags updated during
> +	 * runtime from a different CPU.
> +	 *
> +	 * When switching to such a task, populate thread flags with the ones
> +	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
> +	 * in order to make sure MSR value is always kept up to date.
> +	 *
> +	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
> +	 */
> +	if (IS_ENABLED(CONFIG_SECCOMP) &&
> +			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
> +		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);

This should be 'tifn' of course.

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 12:52                   ` Jiri Kosina
  2018-11-27 13:18                     ` Jiri Kosina
@ 2018-11-27 21:57                     ` Thomas Gleixner
  2018-11-27 22:07                       ` Jiri Kosina
  2018-11-28 14:33                       ` [tip:x86/pti] x86/speculation: Prevent stale SPEC_CTRL msr content tip-bot for Thomas Gleixner
  1 sibling, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-27 21:57 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:
>  struct thread_info {
>  	unsigned long		flags;		/* low level flags */
> +	unsigned long		spec_flags;	/* spec flags to sync on ctxsw */

The information is already available in task->atomic_flags, no need for new
storage.

>  static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 3f5e351bdd37..6c4fcef52b19 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
>  
>  	tifn = READ_ONCE(task_thread_info(next_p)->flags);
>  	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
> +
> +	/*
> +	 * SECCOMP tasks might have had their spec_ctrl flags updated during
> +	 * runtime from a different CPU.
> +	 *
> +	 * When switching to such a task, populate thread flags with the ones
> +	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
> +	 * in order to make sure MSR value is always kept up to date.
> +	 *
> +	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
> +	 */
> +	if (IS_ENABLED(CONFIG_SECCOMP) &&
> +			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
> +		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);

And how does that get folded into task_thread_info(next_p)->flags for the
next context switch? Also you really need to check both the incoming and
the outgoing in order to enforce consistent state.

The completely untested patch below should fix that.

Thanks,

	tglx

8<---------------
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -83,10 +83,6 @@ static inline void speculative_store_byp
 #endif
 
 extern void speculation_ctrl_update(unsigned long tif);
-
-static inline void speculation_ctrl_update_current(void)
-{
-	speculation_ctrl_update(current_thread_info()->flags);
-}
+extern void speculation_ctrl_update_current(void);
 
 #endif
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -84,6 +84,7 @@ struct thread_info {
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
 #define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
+#define TIF_SPEC_FORCE_UPDATE	10	/* Force speculation MSR update in context switch */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -112,6 +113,7 @@ struct thread_info {
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
+#define _TIF_SPEC_FORCE_UPDATE	(1 << TIF_SPEC_FORCE_UPDATE)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -149,7 +151,7 @@ struct thread_info {
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW_BASE						\
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
-	 _TIF_SSBD)
+	 _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE)
 
 /*
  * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -756,14 +756,10 @@ static void ssb_select_mitigation(void)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Speculation prctl: " fmt
 
-static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
+static void task_update_spec_tif(struct task_struct *tsk)
 {
-	bool update;
-
-	if (on)
-		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
-	else
-		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+	/* Force the update of the real TIF bits */
+	set_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE);
 
 	/*
 	 * Immediately update the speculation control MSRs for the current
@@ -773,7 +769,7 @@ static void task_update_spec_tif(struct
 	 * This can only happen for SECCOMP mitigation. For PRCTL it's
 	 * always the current task.
 	 */
-	if (tsk == current && update)
+	if (tsk == current)
 		speculation_ctrl_update_current();
 }
 
@@ -789,16 +785,16 @@ static int ssb_prctl_set(struct task_str
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, false);
+		task_update_spec_tif(task);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, true);
+		task_update_spec_tif(task);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, true);
+		task_update_spec_tif(task);
 		break;
 	default:
 		return -ERANGE;
@@ -819,7 +815,7 @@ static int ib_prctl_set(struct task_stru
 		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
 			return -EPERM;
 		task_clear_spec_ib_disable(task);
-		task_update_spec_tif(task, TIF_SPEC_IB, false);
+		task_update_spec_tif(task);
 		break;
 	case PR_SPEC_DISABLE:
 	case PR_SPEC_FORCE_DISABLE:
@@ -834,7 +830,7 @@ static int ib_prctl_set(struct task_stru
 		task_set_spec_ib_disable(task);
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
 			task_set_spec_ib_force_disable(task);
-		task_update_spec_tif(task, TIF_SPEC_IB, true);
+		task_update_spec_tif(task);
 		break;
 	default:
 		return -ERANGE;
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -458,6 +458,23 @@ static __always_inline void __speculatio
 		spec_ctrl_update_msr(tifn);
 }
 
+static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
+{
+	if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) {
+		if (task_spec_ssb_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SSBD);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SSBD);
+
+		if (task_spec_ib_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SPEC_IB);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SPEC_IB);
+	}
+	/* Return the updated threadinfo flags*/
+	return task_thread_info(tsk)->flags;
+}
+
 void speculation_ctrl_update(unsigned long tif)
 {
 	/* Forced update. Make sure all relevant TIF flags are different */
@@ -466,6 +483,11 @@ void speculation_ctrl_update(unsigned lo
 	preempt_enable();
 }
 
+void speculation_ctrl_update_current(void)
+{
+	speculation_ctrl_update(speculation_ctrl_update_tif(current));
+}
+
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 {
 	struct thread_struct *prev, *next;
@@ -497,6 +519,11 @@ void __switch_to_xtra(struct task_struct
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
+	if (unlikely((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE)) {
+		tifp = speculation_ctrl_update_tif(prev_p);
+		tifn = speculation_ctrl_update_tif(next_p);
+	}
+
 	__speculation_ctrl_update(tifp, tifn);
 }
 

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 21:57                     ` Thomas Gleixner
@ 2018-11-27 22:07                       ` Jiri Kosina
  2018-11-27 22:20                         ` Jiri Kosina
  2018-11-27 22:36                         ` Thomas Gleixner
  2018-11-28 14:33                       ` [tip:x86/pti] x86/speculation: Prevent stale SPEC_CTRL msr content tip-bot for Thomas Gleixner
  1 sibling, 2 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27 22:07 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Thomas Gleixner wrote:

> >  static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> > diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> > index 3f5e351bdd37..6c4fcef52b19 100644
> > --- a/arch/x86/kernel/process.c
> > +++ b/arch/x86/kernel/process.c
> > @@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
> >  
> >  	tifn = READ_ONCE(task_thread_info(next_p)->flags);
> >  	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
> > +
> > +	/*
> > +	 * SECCOMP tasks might have had their spec_ctrl flags updated during
> > +	 * runtime from a different CPU.
> > +	 *
> > +	 * When switching to such a task, populate thread flags with the ones
> > +	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
> > +	 * in order to make sure MSR value is always kept up to date.
> > +	 *
> > +	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_SECCOMP) &&
> > +			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
> > +		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);
> 
> And how does that get folded into task_thread_info(next_p)->flags for the
> next context switch? 

Does it really have to? 

We need this special handling only if the next task has TIF_SPEC_UPDATE 
set, which is one-off event globally (when seccomp marks all its threads 
so due to seccomp filter change), and once all the TIF_SPEC_UPDATE tasks 
schedule at least once, we're in a consistent state again and don't need 
this, as every running task will then have its TIF consistent with MSR 
value.

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 22:07                       ` Jiri Kosina
@ 2018-11-27 22:20                         ` Jiri Kosina
  2018-11-27 22:36                         ` Thomas Gleixner
  1 sibling, 0 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-27 22:20 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:

> > >  static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> > > diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> > > index 3f5e351bdd37..6c4fcef52b19 100644
> > > --- a/arch/x86/kernel/process.c
> > > +++ b/arch/x86/kernel/process.c
> > > @@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
> > >  
> > >  	tifn = READ_ONCE(task_thread_info(next_p)->flags);
> > >  	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
> > > +
> > > +	/*
> > > +	 * SECCOMP tasks might have had their spec_ctrl flags updated during
> > > +	 * runtime from a different CPU.
> > > +	 *
> > > +	 * When switching to such a task, populate thread flags with the ones
> > > +	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
> > > +	 * in order to make sure MSR value is always kept up to date.
> > > +	 *
> > > +	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
> > > +	 */
> > > +	if (IS_ENABLED(CONFIG_SECCOMP) &&
> > > +			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
> > > +		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);
> > 
> > And how does that get folded into task_thread_info(next_p)->flags for the
> > next context switch? 
> 
> Does it really have to? 

I guess I misunderstood the question, and the answer is that it actually 
should be 'tifn' there, as I wrote in a followup mail.

But in any case, I agree we need to handle both directions for full 
consistency, so your patch is a correct one.

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 22:07                       ` Jiri Kosina
  2018-11-27 22:20                         ` Jiri Kosina
@ 2018-11-27 22:36                         ` Thomas Gleixner
  2018-11-28  1:50                           ` Tim Chen
  2018-11-28  6:05                           ` Jiri Kosina
  1 sibling, 2 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-27 22:36 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Jiri Kosina wrote:

> On Tue, 27 Nov 2018, Thomas Gleixner wrote:
> 
> > >  static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
> > > diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> > > index 3f5e351bdd37..6c4fcef52b19 100644
> > > --- a/arch/x86/kernel/process.c
> > > +++ b/arch/x86/kernel/process.c
> > > @@ -474,6 +474,21 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
> > >  
> > >  	tifn = READ_ONCE(task_thread_info(next_p)->flags);
> > >  	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
> > > +
> > > +	/*
> > > +	 * SECCOMP tasks might have had their spec_ctrl flags updated during
> > > +	 * runtime from a different CPU.
> > > +	 *
> > > +	 * When switching to such a task, populate thread flags with the ones
> > > +	 * that have been temporarily saved in spec_flags by task_update_spec_tif()
> > > +	 * in order to make sure MSR value is always kept up to date.
> > > +	 *
> > > +	 * SECCOMP tasks never disable the mitigation for other threads, only enable.
> > > +	 */
> > > +	if (IS_ENABLED(CONFIG_SECCOMP) &&
> > > +			test_and_clear_tsk_thread_flag(next_p, TIF_SPEC_UPDATE))
> > > +		tifp |= READ_ONCE(task_thread_info(next_p)->spec_flags);
> > 
> > And how does that get folded into task_thread_info(next_p)->flags for the
> > next context switch? 
> 
> Does it really have to? 
> 
> We need this special handling only if the next task has TIF_SPEC_UPDATE 
> set, which is one-off event globally (when seccomp marks all its threads 
> so due to seccomp filter change), and once all the TIF_SPEC_UPDATE tasks 
> schedule at least once, we're in a consistent state again and don't need 
> this, as every running task will then have its TIF consistent with MSR 
> value.

And how so? You set the bits is spec_flags. And then you set the TIF_UPDATE
bit which is evaluated once.

Then you OR the bits into tifp which is a local variable and has nothing to
do with the TIF flags of the next task. So on the next context switch this
will evaluate the previous state of the TIF bits and you could have spared
the whole exercise :)

Thanks,

	tglx





^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 22:36                         ` Thomas Gleixner
@ 2018-11-28  1:50                           ` Tim Chen
  2018-11-28 10:43                             ` Thomas Gleixner
  2018-11-28  6:05                           ` Jiri Kosina
  1 sibling, 1 reply; 95+ messages in thread
From: Tim Chen @ 2018-11-28  1:50 UTC (permalink / raw)
  To: Thomas Gleixner, Jiri Kosina
  Cc: Ingo Molnar, LKML, x86, Peter Zijlstra, Andy Lutomirski,
	Linus Torvalds, Tom Lendacky, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, Waiman Long,
	Greg KH, Dave Stewart, Kees Cook

On 11/27/2018 02:36 PM, Thomas Gleixner wrote:

>>
>> We need this special handling only if the next task has TIF_SPEC_UPDATE 
>> set, which is one-off event globally (when seccomp marks all its threads 
>> so due to seccomp filter change), and once all the TIF_SPEC_UPDATE tasks 
>> schedule at least once, we're in a consistent state again and don't need 
>> this, as every running task will then have its TIF consistent with MSR 
>> value.
> 
> And how so? You set the bits is spec_flags. And then you set the TIF_UPDATE
> bit which is evaluated once.
> 
> Then you OR the bits into tifp which is a local variable and has nothing to
> do with the TIF flags of the next task. So on the next context switch this
> will evaluate the previous state of the TIF bits and you could have spared
> the whole exercise :)
> 

This is better than my original implementation which was racy.
Using task_spec_ssb_disable and task_spec_ib_disable to update TIF_* flags
at context switch time makes the update logic very clear
and extensible.

Thanks.

Tim

^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-27 22:36                         ` Thomas Gleixner
  2018-11-28  1:50                           ` Tim Chen
@ 2018-11-28  6:05                           ` Jiri Kosina
  1 sibling, 0 replies; 95+ messages in thread
From: Jiri Kosina @ 2018-11-28  6:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Tim Chen, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Thomas Gleixner wrote:

> > Does it really have to? 
> > 
> > We need this special handling only if the next task has TIF_SPEC_UPDATE 
> > set, which is one-off event globally (when seccomp marks all its threads 
> > so due to seccomp filter change), and once all the TIF_SPEC_UPDATE tasks 
> > schedule at least once, we're in a consistent state again and don't need 
> > this, as every running task will then have its TIF consistent with MSR 
> > value.
> 
> And how so? You set the bits is spec_flags. And then you set the TIF_UPDATE
> bit which is evaluated once.

Yeah, that was a complete brainfart on my side, sorry for the noise, 
disregard that crap. I blame it all on the dentist appointment I went 
through before writing the patch :p

Thanks,

-- 
Jiri Kosina
SUSE Labs


^ permalink raw reply	[flat|nested] 95+ messages in thread

* Re: [patch 20/24] x86/speculation: Split out TIF update
  2018-11-28  1:50                           ` Tim Chen
@ 2018-11-28 10:43                             ` Thomas Gleixner
  0 siblings, 0 replies; 95+ messages in thread
From: Thomas Gleixner @ 2018-11-28 10:43 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Ingo Molnar, LKML, x86, Peter Zijlstra,
	Andy Lutomirski, Linus Torvalds, Tom Lendacky, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	Waiman Long, Greg KH, Dave Stewart, Kees Cook

On Tue, 27 Nov 2018, Tim Chen wrote:
> On 11/27/2018 02:36 PM, Thomas Gleixner wrote:
> >> We need this special handling only if the next task has TIF_SPEC_UPDATE 
> >> set, which is one-off event globally (when seccomp marks all its threads 
> >> so due to seccomp filter change), and once all the TIF_SPEC_UPDATE tasks 
> >> schedule at least once, we're in a consistent state again and don't need 
> >> this, as every running task will then have its TIF consistent with MSR 
> >> value.
> > 
> > And how so? You set the bits is spec_flags. And then you set the TIF_UPDATE
> > bit which is evaluated once.
> > 
> > Then you OR the bits into tifp which is a local variable and has nothing to
> > do with the TIF flags of the next task. So on the next context switch this
> > will evaluate the previous state of the TIF bits and you could have spared
> > the whole exercise :)
> > 
> 
> This is better than my original implementation which was racy.
> Using task_spec_ssb_disable and task_spec_ib_disable to update TIF_* flags
> at context switch time makes the update logic very clear
> and extensible.

Clear yes. Extensible - hopefully not. This needs to end.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 95+ messages in thread

* [tip:x86/pti] x86/speculation: Prevent stale SPEC_CTRL msr content
  2018-11-27 21:57                     ` Thomas Gleixner
  2018-11-27 22:07                       ` Jiri Kosina
@ 2018-11-28 14:33                       ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 95+ messages in thread
From: tip-bot for Thomas Gleixner @ 2018-11-28 14:33 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dwmw, torvalds, jcm, hpa, linux-kernel, tglx, aarcange, ak,
	jkosina, thomas.lendacky, dave.hansen, david.c.stewart, jpoimboe,
	arjan, casey.schaufler, mingo, asit.k.mallick, peterz, keescook,
	longman9394, tim.c.chen, gregkh, luto

Commit-ID:  6d991ba509ebcfcc908e009d1db51972a4f7a064
Gitweb:     https://git.kernel.org/tip/6d991ba509ebcfcc908e009d1db51972a4f7a064
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 28 Nov 2018 10:56:57 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 28 Nov 2018 11:57:12 +0100

x86/speculation: Prevent stale SPEC_CTRL msr content

The seccomp speculation control operates on all tasks of a process, but
only the current task of a process can update the MSR immediately. For the
other threads the update is deferred to the next context switch.

This creates the following situation with Process A and B:

Process A task 2 and Process B task 1 are pinned on CPU1. Process A task 2
does not have the speculation control TIF bit set. Process B task 1 has the
speculation control TIF bit set.

CPU0					CPU1
					MSR bit is set
					ProcB.T1 schedules out
					ProcA.T2 schedules in
					MSR bit is cleared
ProcA.T1
  seccomp_update()
  set TIF bit on ProcA.T2
					ProcB.T1 schedules in
					MSR is not updated  <-- FAIL

This happens because the context switch code tries to avoid the MSR update
if the speculation control TIF bits of the incoming and the outgoing task
are the same. In the worst case ProcB.T1 and ProcA.T2 are the only tasks
scheduling back and forth on CPU1, which keeps the MSR stale forever.

In theory this could be remedied by IPIs, but chasing the remote task which
could be migrated is complex and full of races.

The straight forward solution is to avoid the asychronous update of the TIF
bit and defer it to the next context switch. The speculation control state
is stored in task_struct::atomic_flags by the prctl and seccomp updates
already.

Add a new TIF_SPEC_FORCE_UPDATE bit and set this after updating the
atomic_flags. Check the bit on context switch and force a synchronous
update of the speculation control if set. Use the same mechanism for
updating the current task.

Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Casey Schaufler <casey.schaufler@intel.com>
Cc: Asit Mallick <asit.k.mallick@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Waiman Long <longman9394@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Dave Stewart <david.c.stewart@intel.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1811272247140.1875@nanos.tec.linutronix.de

---
 arch/x86/include/asm/spec-ctrl.h   |  6 +-----
 arch/x86/include/asm/thread_info.h |  4 +++-
 arch/x86/kernel/cpu/bugs.c         | 18 +++++++-----------
 arch/x86/kernel/process.c          | 30 +++++++++++++++++++++++++++++-
 4 files changed, 40 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 27b0bce3933b..5393babc0598 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -83,10 +83,6 @@ static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
 extern void speculation_ctrl_update(unsigned long tif);
-
-static inline void speculation_ctrl_update_current(void)
-{
-	speculation_ctrl_update(current_thread_info()->flags);
-}
+extern void speculation_ctrl_update_current(void);
 
 #endif
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 6d201699c651..82b73b75d67c 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -84,6 +84,7 @@ struct thread_info {
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
 #define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
+#define TIF_SPEC_FORCE_UPDATE	10	/* Force speculation MSR update in context switch */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -112,6 +113,7 @@ struct thread_info {
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
+#define _TIF_SPEC_FORCE_UPDATE	(1 << TIF_SPEC_FORCE_UPDATE)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -149,7 +151,7 @@ struct thread_info {
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW_BASE						\
 	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
-	 _TIF_SSBD)
+	 _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE)
 
 /*
  * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3b65a53d2c33..29f40a92f5a8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -702,14 +702,10 @@ static void ssb_select_mitigation(void)
 #undef pr_fmt
 #define pr_fmt(fmt)     "Speculation prctl: " fmt
 
-static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
+static void task_update_spec_tif(struct task_struct *tsk)
 {
-	bool update;
-
-	if (on)
-		update = !test_and_set_tsk_thread_flag(tsk, tifbit);
-	else
-		update = test_and_clear_tsk_thread_flag(tsk, tifbit);
+	/* Force the update of the real TIF bits */
+	set_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE);
 
 	/*
 	 * Immediately update the speculation control MSRs for the current
@@ -719,7 +715,7 @@ static void task_update_spec_tif(struct task_struct *tsk, int tifbit, bool on)
 	 * This can only happen for SECCOMP mitigation. For PRCTL it's
 	 * always the current task.
 	 */
-	if (tsk == current && update)
+	if (tsk == current)
 		speculation_ctrl_update_current();
 }
 
@@ -735,16 +731,16 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, false);
+		task_update_spec_tif(task);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, true);
+		task_update_spec_tif(task);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		task_update_spec_tif(task, TIF_SSBD, true);
+		task_update_spec_tif(task);
 		break;
 	default:
 		return -ERANGE;
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index cdf8e6694f71..afbe2eb4a1c6 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -443,6 +443,18 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
+static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
+{
+	if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) {
+		if (task_spec_ssb_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SSBD);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SSBD);
+	}
+	/* Return the updated threadinfo flags*/
+	return task_thread_info(tsk)->flags;
+}
+
 void speculation_ctrl_update(unsigned long tif)
 {
 	/* Forced update. Make sure all relevant TIF flags are different */
@@ -451,6 +463,14 @@ void speculation_ctrl_update(unsigned long tif)
 	preempt_enable();
 }
 
+/* Called from seccomp/prctl update */
+void speculation_ctrl_update_current(void)
+{
+	preempt_disable();
+	speculation_ctrl_update(speculation_ctrl_update_tif(current));
+	preempt_enable();
+}
+
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 {
 	struct thread_struct *prev, *next;
@@ -482,7 +502,15 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	__speculation_ctrl_update(tifp, tifn);
+	if (likely(!((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE))) {
+		__speculation_ctrl_update(tifp, tifn);
+	} else {
+		speculation_ctrl_update_tif(prev_p);
+		tifn = speculation_ctrl_update_tif(next_p);
+
+		/* Enforce MSR update to ensure consistent state */
+		__speculation_ctrl_update(~tifn, tifn);
+	}
 }
 
 /*

^ permalink raw reply related	[flat|nested] 95+ messages in thread

end of thread, other threads:[~2018-11-28 14:34 UTC | newest]

Thread overview: 95+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-21 20:14 [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Thomas Gleixner
2018-11-21 20:14 ` [patch 01/24] x86/speculation: Update the TIF_SSBD comment Thomas Gleixner
2018-11-21 20:28   ` Linus Torvalds
2018-11-21 20:30     ` Thomas Gleixner
2018-11-21 20:33     ` Linus Torvalds
2018-11-21 22:48       ` Thomas Gleixner
2018-11-21 22:53         ` Borislav Petkov
2018-11-21 22:55           ` Thomas Gleixner
2018-11-21 22:55           ` Arjan van de Ven
2018-11-21 22:56             ` Borislav Petkov
2018-11-21 23:07               ` Borislav Petkov
2018-11-21 23:04         ` Josh Poimboeuf
2018-11-21 23:08           ` Borislav Petkov
2018-11-22 17:30             ` Josh Poimboeuf
2018-11-22 17:52               ` Borislav Petkov
2018-11-22 21:17                 ` Thomas Gleixner
2018-11-21 20:14 ` [patch 02/24] x86/speculation: Clean up spectre_v2_parse_cmdline() Thomas Gleixner
2018-11-21 20:14 ` [patch 03/24] x86/speculation: Remove unnecessary ret variable in cpu_show_common() Thomas Gleixner
2018-11-21 20:14 ` [patch 04/24] x86/speculation: Reorganize cpu_show_common() Thomas Gleixner
2018-11-21 20:14 ` [patch 05/24] x86/speculation: Disable STIBP when enhanced IBRS is in use Thomas Gleixner
2018-11-21 20:33   ` Borislav Petkov
2018-11-21 20:36     ` Thomas Gleixner
2018-11-21 22:01       ` Thomas Gleixner
2018-11-21 20:14 ` [patch 06/24] x86/speculation: Rename SSBD update functions Thomas Gleixner
2018-11-21 20:14 ` [patch 07/24] x86/speculation: Reorganize speculation control MSRs update Thomas Gleixner
2018-11-21 20:14 ` [patch 08/24] sched/smt: Make sched_smt_present track topology Thomas Gleixner
2018-11-21 20:14 ` [patch 09/24] x86/Kconfig: Select SCHED_SMT if SMP enabled Thomas Gleixner
2018-11-21 20:14 ` [patch 10/24] sched/smt: Expose sched_smt_present static key Thomas Gleixner
2018-11-21 20:41   ` Thomas Gleixner
2018-11-21 20:14 ` [patch 11/24] x86/speculation: Rework SMT state change Thomas Gleixner
2018-11-21 20:14 ` [patch 12/24] x86/l1tf: Show actual SMT state Thomas Gleixner
2018-11-21 20:14 ` [patch 13/24] x86/speculation: Reorder the spec_v2 code Thomas Gleixner
2018-11-21 20:14 ` [patch 14/24] x86/speculation: Unify conditional spectre v2 print functions Thomas Gleixner
2018-11-22  7:59   ` Ingo Molnar
2018-11-21 20:14 ` [patch 15/24] x86/speculation: Add command line control for indirect branch speculation Thomas Gleixner
2018-11-21 23:43   ` Borislav Petkov
2018-11-22  8:14     ` Thomas Gleixner
2018-11-22  9:07       ` Thomas Gleixner
2018-11-22  9:18       ` Peter Zijlstra
2018-11-22 10:10         ` Borislav Petkov
2018-11-22 10:48           ` Thomas Gleixner
2018-11-21 20:14 ` [patch 16/24] x86/speculation: Prepare for per task indirect branch speculation control Thomas Gleixner
2018-11-22  7:57   ` Ingo Molnar
2018-11-21 20:14 ` [patch 17/24] x86/speculation: Move IBPB control out of switch_mm() Thomas Gleixner
2018-11-22  0:01   ` Andi Kleen
2018-11-22  7:42     ` Jiri Kosina
2018-11-22  9:18       ` Thomas Gleixner
2018-11-22  1:40   ` Tim Chen
2018-11-22  7:52   ` Ingo Molnar
2018-11-22 22:29     ` Thomas Gleixner
2018-11-21 20:14 ` [patch 18/24] x86/speculation: Avoid __switch_to_xtra() calls Thomas Gleixner
2018-11-22  1:23   ` Tim Chen
2018-11-22  7:44     ` Ingo Molnar
2018-11-21 20:14 ` [patch 19/24] ptrace: Remove unused ptrace_may_access_sched() and MODE_IBRS Thomas Gleixner
2018-11-21 20:14 ` [patch 20/24] x86/speculation: Split out TIF update Thomas Gleixner
2018-11-22  2:13   ` Tim Chen
2018-11-22 23:00     ` Thomas Gleixner
2018-11-23  7:37       ` Ingo Molnar
2018-11-26 18:35         ` Tim Chen
2018-11-26 21:55           ` Thomas Gleixner
2018-11-27  7:05             ` Jiri Kosina
2018-11-27  7:13               ` Thomas Gleixner
2018-11-27  7:30                 ` Jiri Kosina
2018-11-27 12:52                   ` Jiri Kosina
2018-11-27 13:18                     ` Jiri Kosina
2018-11-27 21:57                     ` Thomas Gleixner
2018-11-27 22:07                       ` Jiri Kosina
2018-11-27 22:20                         ` Jiri Kosina
2018-11-27 22:36                         ` Thomas Gleixner
2018-11-28  1:50                           ` Tim Chen
2018-11-28 10:43                             ` Thomas Gleixner
2018-11-28  6:05                           ` Jiri Kosina
2018-11-28 14:33                       ` [tip:x86/pti] x86/speculation: Prevent stale SPEC_CTRL msr content tip-bot for Thomas Gleixner
2018-11-22  7:43   ` [patch 20/24] x86/speculation: Split out TIF update Ingo Molnar
2018-11-22 23:04     ` Thomas Gleixner
2018-11-23  7:37       ` Ingo Molnar
2018-11-21 20:14 ` [patch 21/24] x86/speculation: Prepare arch_smt_update() for PRCTL mode Thomas Gleixner
2018-11-22  7:34   ` Ingo Molnar
2018-11-22 23:17     ` Thomas Gleixner
2018-11-22 23:28       ` Jiri Kosina
2018-11-21 20:14 ` [patch 22/24] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Thomas Gleixner
2018-11-22  7:10   ` Ingo Molnar
2018-11-22  9:03   ` Peter Zijlstra
2018-11-22  9:08     ` Thomas Gleixner
2018-11-22 12:26   ` Borislav Petkov
2018-11-22 12:33     ` Peter Zijlstra
2018-11-21 20:14 ` [patch 23/24] x86/speculation: Enable PRCTL mode for spectre_v2_app2app Thomas Gleixner
2018-11-22  7:17   ` Ingo Molnar
2018-11-21 20:14 ` [patch 24/24] x86/speculation: Add seccomp Spectre v2 app to app protection mode Thomas Gleixner
2018-11-22  2:24   ` Tim Chen
2018-11-22  7:26   ` Ingo Molnar
2018-11-22 23:45     ` Thomas Gleixner
2018-11-21 23:48 ` [patch 00/24] x86/speculation: Remedy the STIBP/IBPB overhead Tim Chen
2018-11-22  9:55   ` Thomas Gleixner
2018-11-22  9:45 ` Peter Zijlstra

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).