linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection
@ 2018-10-17 17:59 Tim Chen
  2018-10-17 17:59 ` [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline Tim Chen
                   ` (13 more replies)
  0 siblings, 14 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Thanks to the valuable feedback from Thomas, Ingo and other
reviewers to the second version of this patchset.

The patches are now broken down into smaller functional changes
and should make them clearer and easier to review and merge.
One major change is that STIBP is not needed when enhanced
IBRS is being used.  The new code reflect this logic.

Patch 1 and 2 are clean up patches.
Patch 3 and 4 disable STIBP for enhacned IBRS.
Patch 5 to 9 reorganizes the code without affecting
 functionality for easier modification later.
Patch 10 introduces the STIBP flag on a process to dynamically
 enable STIBP for that process.
Patch 11 introduces the lite option to protect only
 processes against Spectre v2 user space attack
 for processes with STIBP flag.
Patch 12 mark the non-dumpable processes to be protected.
Patch 13 introduces prctl interface to restrict indirect
 branch speculation via prctl.
	      
Tim

Changes:
v3:
1. Add logic to skip STIBP when Enhanced IBRS is used.
2. Break up v2 patches into smaller logical patches. 
3. Fix bug in arch_set_dumpable that did not update SPEC_CTRL
MSR right away when according to task's STIBP flag clearing which
caused SITBP to be left on.
4. Various code clean up. 

v2:
1. Extend per process STIBP to AMD cpus
2. Add prctl option to control per process indirect branch speculation
3. Bug fixes and cleanups 

Jiri's patchset to harden Spectre v2 user space mitigation makes IBPB
and STIBP in use for Spectre v2 mitigation on all processes.  IBPB will
be issued for switching to an application that's not ptraceable by the
previous application and STIBP will be always turned on.

However, leaving STIBP on all the time is expensive for certain
applications that have frequent indirect branches. One such application
is perlbench in the SpecInt Rate 2006 test suite which shows a
21% reduction in throughput.  Other application like bzip2 in
the same test suite with  minimal indirct branches have
only a 0.7% reduction in throughput. IBPB will also impose
overhead during context switches.

Application to application exploit is in general difficult due to address
space layout randomization in applications and the need to know an
application's address space layout ahead of time.  Users may not wish to
incur performance overhead from IBPB and STIBP for general non security
sensitive processes and use these mitigations only for security sensitive
processes.

This patchset provides a process property based lite protection mode that
applies IBPB and STIBP mitigation only to security sensitive non-dumpable
processes and processes that users want to protect by having indirect
branch speculation disabled via PRCTL.  So the overhead from IBPB and
STIBP are avoided for low security processes that don't require extra
protection.


Tim Chen (13):
  x86/speculation: Clean up spectre_v2_parse_cmdline
  x86/speculation: Remove unnecessary ret variable in cpu_show_common
  x86/speculation: Add static key for Enhanced IBRS
  x86/speculation: Disable STIBP when enhanced IBRS is in use
  x86/smt: Create cpu_smt_enabled static key for SMT specific code
  mm: Pass task instead of task->mm as argument to set_dumpable
  x86/process Add arch_set_dumpable
  x86/speculation: Rename SSBD update functions
  x86/speculation: Reorganize SPEC_CTRL MSR update
  x86/speculation: Add per thread STIBP flag
  x86/speculation: Add Spectre v2 lite app to app protection mode
  x86/speculation: Protect non-dumpable processes against Spectre v2
    attack
  x86/speculation: Create PRCTL interface to restrict indirect branch
    speculation

 Documentation/admin-guide/kernel-parameters.txt |  21 ++
 Documentation/userspace-api/spec_ctrl.rst       |  10 +
 arch/x86/include/asm/msr-index.h                |   6 +-
 arch/x86/include/asm/nospec-branch.h            |  10 +
 arch/x86/include/asm/spec-ctrl.h                |  18 +-
 arch/x86/include/asm/thread_info.h              |   5 +-
 arch/x86/kernel/cpu/bugs.c                      | 294 +++++++++++++++++++++---
 arch/x86/kernel/process.c                       |  53 +++--
 arch/x86/kvm/vmx.c                              |   2 +-
 arch/x86/mm/tlb.c                               |  19 +-
 fs/exec.c                                       |  20 +-
 include/linux/cpu.h                             |   1 +
 include/linux/sched.h                           |  11 +
 include/linux/sched/coredump.h                  |   2 +-
 include/uapi/linux/prctl.h                      |   1 +
 kernel/cpu.c                                    |  12 +-
 kernel/cred.c                                   |   2 +-
 kernel/sys.c                                    |   2 +-
 tools/include/uapi/linux/prctl.h                |   1 +
 19 files changed, 427 insertions(+), 63 deletions(-)

-- 
2.9.4


^ permalink raw reply	[flat|nested] 47+ messages in thread

* [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 12:43   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common Tim Chen
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Remove unnecessary else statement in spectre_v2_parse_cmdline
to save an indent.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fe32103..bbda9b6 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -283,22 +283,21 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
 		return SPECTRE_V2_CMD_NONE;
-	else {
-		ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
-		if (ret < 0)
-			return SPECTRE_V2_CMD_AUTO;
 
-		for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
-			if (!match_option(arg, ret, mitigation_options[i].option))
-				continue;
-			cmd = mitigation_options[i].cmd;
-			break;
-		}
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
 
-		if (i >= ARRAY_SIZE(mitigation_options)) {
-			pr_err("unknown option (%s). Switching to AUTO select\n", arg);
-			return SPECTRE_V2_CMD_AUTO;
-		}
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_CMD_AUTO;
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
  2018-10-17 17:59 ` [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 12:46   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS Tim Chen
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Remove unecessary ret variable in cpu_show_common.
Break up long lines too to make the code more concise
and easier to read and modify in later patches.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bbda9b6..b2f6b8b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -854,8 +854,6 @@ static ssize_t l1tf_show_state(char *buf)
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
-	int ret;
-
 	if (!boot_cpu_has_bug(bug))
 		return sprintf(buf, "Not affected\n");
 
@@ -873,13 +871,17 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
-			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
-			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
-			       spectre_v2_module_string());
-		return ret;
+		return sprintf(buf, "%s%s%s%s%s%s\n",
+			spectre_v2_strings[spectre_v2_enabled],
+			boot_cpu_has(X86_FEATURE_USE_IBPB) ?
+				   ", IBPB" : "",
+			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
+				   ", IBRS_FW" : "",
+			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
+				   ", STIBP" : "",
+			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
+				   ", RSB filling" : "",
+			spectre_v2_module_string());
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
  2018-10-17 17:59 ` [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline Tim Chen
  2018-10-17 17:59 ` [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 12:50   ` Thomas Gleixner
  2018-10-26 16:58   ` Waiman Long
  2018-10-17 17:59 ` [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
                   ` (10 subsequent siblings)
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Add static key to indicate whether we are using Enhanced IBRS to mitigate
Spectre v2.  This will be used in later patches to disengage STIBP code
for Spectre v2 mitigation as STIBP is not needed when Enhanced IBRS is
in use.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/nospec-branch.h | 3 +++
 arch/x86/kernel/cpu/bugs.c           | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index fd2a8c1..d57e84e 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,7 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -228,6 +229,8 @@ enum ssb_mitigation {
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
+DECLARE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
+
 /*
  * On VMEXIT we must ensure that no RSB predictions learned in the guest
  * can be followed in the host, by overwriting the RSB completely. Both
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b2f6b8b..2fc7b4e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -142,6 +142,9 @@ static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
 };
 
+DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
+EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
@@ -386,6 +389,7 @@ static void __init spectre_v2_select_mitigation(void)
 			/* Force it so VMEXIT will restore correctly */
 			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+			static_branch_enable(&spectre_v2_enhanced_ibrs);
 			goto specv2_set_mode;
 		}
 		if (IS_ENABLED(CONFIG_RETPOLINE))
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (2 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 12:58   ` Thomas Gleixner
  2018-10-26 17:00   ` Waiman Long
  2018-10-17 17:59 ` [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
                   ` (9 subsequent siblings)
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

With enhanced IBRS in use, the application running on sibling CPU will not
be able to launch Spectre v2 attack to the application on current CPU.
There is no need to use STIBP for this case.  Disable the STIBP code
when enhanced IBRS is used.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2fc7b4e..6ed82ea 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -327,6 +327,14 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 static bool stibp_needed(void)
 {
+	/*
+	 * Determine if we want to leave STIBP always on.
+	 * Using enhanced IBRS makes using STIBP unnecessary.
+	 */
+
+	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
+		return false;
+
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
@@ -881,7 +889,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 				   ", IBPB" : "",
 			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
 				   ", IBRS_FW" : "",
-			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
+			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
+				   ", Enhanced IBRS" :
+			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
 				   ", STIBP" : "",
 			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
 				   ", RSB filling" : "",
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (3 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:03   ` Thomas Gleixner
  2018-10-19  7:51   ` Peter Zijlstra
  2018-10-17 17:59 ` [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable Tim Chen
                   ` (8 subsequent siblings)
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Create the cpu_smt_enabled static key to indicate if we are
using SMT.  SMT specific code paths are executed only when SMT
code paths are enabled by this key.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c |  2 +-
 arch/x86/kvm/vmx.c         |  2 +-
 include/linux/cpu.h        |  1 +
 kernel/cpu.c               | 12 ++++++++++--
 4 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6ed82ea..0338fa1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -358,7 +358,7 @@ void arch_smt_update(void)
 
 	mutex_lock(&spec_ctrl_mutex);
 	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+	if (static_branch_likely(&cpu_smt_enabled))
 		mask |= SPEC_CTRL_STIBP;
 	else
 		mask &= ~SPEC_CTRL_STIBP;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 533a327..8ec0ea3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11072,7 +11072,7 @@ static int vmx_vm_init(struct kvm *kvm)
 			 * Warn upon starting the first VM in a potentially
 			 * insecure environment.
 			 */
-			if (cpu_smt_control == CPU_SMT_ENABLED)
+			if (static_branch_likely(&cpu_smt_enabled))
 				pr_warn_once(L1TF_MSG_SMT);
 			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
 				pr_warn_once(L1TF_MSG_L1D);
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 218df7f..b54f085 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -188,5 +188,6 @@ static inline void cpu_smt_disable(bool force) { }
 static inline void cpu_smt_check_topology_early(void) { }
 static inline void cpu_smt_check_topology(void) { }
 #endif
+DECLARE_STATIC_KEY_TRUE(cpu_smt_enabled);
 
 #endif /* _LINUX_CPU_H_ */
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 3adecda..ad28afc 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -349,6 +349,8 @@ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
 EXPORT_SYMBOL_GPL(cpu_smt_control);
+DEFINE_STATIC_KEY_TRUE(cpu_smt_enabled);
+EXPORT_SYMBOL_GPL(cpu_smt_enabled);
 
 static bool cpu_smt_available __read_mostly;
 
@@ -364,6 +366,7 @@ void __init cpu_smt_disable(bool force)
 	} else {
 		cpu_smt_control = CPU_SMT_DISABLED;
 	}
+	static_branch_disable(&cpu_smt_enabled);	
 }
 
 /*
@@ -373,8 +376,10 @@ void __init cpu_smt_disable(bool force)
  */
 void __init cpu_smt_check_topology_early(void)
 {
-	if (!topology_smt_supported())
+	if (!topology_smt_supported()) {
 		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
+		static_branch_disable(&cpu_smt_enabled);	
+	}
 }
 
 /*
@@ -386,8 +391,10 @@ void __init cpu_smt_check_topology_early(void)
  */
 void __init cpu_smt_check_topology(void)
 {
-	if (!cpu_smt_available)
+	if (!cpu_smt_available) {
 		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
+		static_branch_disable(&cpu_smt_enabled);	
+	}
 }
 
 static int __init smt_cmdline_disable(char *str)
@@ -2072,6 +2079,7 @@ static int cpuhp_smt_enable(void)
 
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
+	static_branch_enable(&cpu_smt_enabled);
 	arch_smt_update();
 	for_each_present_cpu(cpu) {
 		/* Skip online CPUs and CPUs on offline nodes */
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (4 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:22   ` Thomas Gleixner
  2018-10-19 20:02   ` Peter Zijlstra
  2018-10-17 17:59 ` [Patch v3 07/13] x86/process Add arch_set_dumpable Tim Chen
                   ` (7 subsequent siblings)
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Change the argument to set_dumpable from task->mm to task.  This allows us
to later add hooks to modify a task's property according to whether it is
a non-dumpable task. Non dumpable tasks demand a higher level of security.
Changes the dumpable value from in to unsigned int as negative number is
not allowed.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 fs/exec.c                      | 14 ++++++++------
 include/linux/sched/coredump.h |  2 +-
 kernel/cred.c                  |  2 +-
 kernel/sys.c                   |  2 +-
 4 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 1ebf6e5..e204830 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1362,9 +1362,9 @@ void setup_new_exec(struct linux_binprm * bprm)
 	if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP ||
 	    !(uid_eq(current_euid(), current_uid()) &&
 	      gid_eq(current_egid(), current_gid())))
-		set_dumpable(current->mm, suid_dumpable);
+		set_dumpable(current, (unsigned int) suid_dumpable);
 	else
-		set_dumpable(current->mm, SUID_DUMP_USER);
+		set_dumpable(current, SUID_DUMP_USER);
 
 	arch_setup_new_exec();
 	perf_event_exec();
@@ -1943,17 +1943,19 @@ EXPORT_SYMBOL(set_binfmt);
 /*
  * set_dumpable stores three-value SUID_DUMP_* into mm->flags.
  */
-void set_dumpable(struct mm_struct *mm, int value)
+void set_dumpable(struct task_struct *tsk, unsigned int value)
 {
 	unsigned long old, new;
 
-	if (WARN_ON((unsigned)value > SUID_DUMP_ROOT))
+	if (WARN_ON(value > SUID_DUMP_ROOT))
+		return;
+	if (WARN_ON(!tsk->mm))
 		return;
 
 	do {
-		old = READ_ONCE(mm->flags);
+		old = READ_ONCE(tsk->mm->flags);
 		new = (old & ~MMF_DUMPABLE_MASK) | value;
-	} while (cmpxchg(&mm->flags, old, new) != old);
+	} while (cmpxchg(&tsk->mm->flags, old, new) != old);
 }
 
 SYSCALL_DEFINE3(execve,
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index ec912d0..c02b1ab 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -14,7 +14,7 @@
 #define MMF_DUMPABLE_BITS 2
 #define MMF_DUMPABLE_MASK ((1 << MMF_DUMPABLE_BITS) - 1)
 
-extern void set_dumpable(struct mm_struct *mm, int value);
+extern void set_dumpable(struct task_struct *tsk, unsigned int value);
 /*
  * This returns the actual value of the suid_dumpable flag. For things
  * that are using this for checking for privilege transitions, it must
diff --git a/kernel/cred.c b/kernel/cred.c
index ecf0365..18ef774 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -446,7 +446,7 @@ int commit_creds(struct cred *new)
 	    !gid_eq(old->fsgid, new->fsgid) ||
 	    !cred_cap_issubset(old, new)) {
 		if (task->mm)
-			set_dumpable(task->mm, suid_dumpable);
+			set_dumpable(task, (unsigned int) suid_dumpable);
 		task->pdeath_signal = 0;
 		smp_wmb();
 	}
diff --git a/kernel/sys.c b/kernel/sys.c
index cf5c675..d0892cd 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2292,7 +2292,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 			error = -EINVAL;
 			break;
 		}
-		set_dumpable(me->mm, arg2);
+		set_dumpable(me, (unsigned int) arg2);
 		break;
 
 	case PR_SET_UNALIGN:
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (5 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:28   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 08/13] x86/speculation: Rename SSBD update functions Tim Chen
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Add arch_set_dumpable for setting architecture specific security
modifications on processes according to its dumpable properties.
Non dumpable processes are security sensitive and they can be modified
to gain architecture specific security defenses via arch_set_dumpable.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 fs/exec.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index e204830..6f329fc 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1940,6 +1940,11 @@ void set_binfmt(struct linux_binfmt *new)
 }
 EXPORT_SYMBOL(set_binfmt);
 
+void __weak arch_set_dumpable(struct task_struct *tsk, unsigned int value)
+{
+	return;
+}
+
 /*
  * set_dumpable stores three-value SUID_DUMP_* into mm->flags.
  */
@@ -1956,6 +1961,7 @@ void set_dumpable(struct task_struct *tsk, unsigned int value)
 		old = READ_ONCE(tsk->mm->flags);
 		new = (old & ~MMF_DUMPABLE_MASK) | value;
 	} while (cmpxchg(&tsk->mm->flags, old, new) != old);
+	arch_set_dumpable(tsk, value);
 }
 
 SYSCALL_DEFINE3(execve,
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 08/13] x86/speculation: Rename SSBD update functions
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (6 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 07/13] x86/process Add arch_set_dumpable Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:37   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update Tim Chen
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

This is a clean up patch that doesn't change any
functionality.  We rename intel_set_ssb_state to
spec_ctrl_update_msr, speculative_store_bypass_update to
speculation_ctrl_update and speculative_store_bypass_update_current to
speculation_ctrl_update_current.  This prepares us to update
of other bits in the SPEC_CTRL MSR dynamically.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/spec-ctrl.h |  6 +++---
 arch/x86/kernel/cpu/bugs.c       |  4 ++--
 arch/x86/kernel/process.c        | 12 ++++++------
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index ae7c2c5..8e2f841 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -70,11 +70,11 @@ extern void speculative_store_bypass_ht_init(void);
 static inline void speculative_store_bypass_ht_init(void) { }
 #endif
 
-extern void speculative_store_bypass_update(unsigned long tif);
+extern void speculation_ctrl_update(unsigned long tif);
 
-static inline void speculative_store_bypass_update_current(void)
+static inline void speculation_ctrl_update_current(void)
 {
-	speculative_store_bypass_update(current_thread_info()->flags);
+	speculation_ctrl_update(current_thread_info()->flags);
 }
 
 #endif
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0338fa1..673d434 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -205,7 +205,7 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) :
 				 ssbd_spec_ctrl_to_tif(hostval);
 
-		speculative_store_bypass_update(tif);
+		speculation_ctrl_update(tif);
 	}
 }
 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl);
@@ -647,7 +647,7 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	 * mitigation until it is next scheduled.
 	 */
 	if (task == current && update)
-		speculative_store_bypass_update_current();
+		speculation_ctrl_update_current();
 
 	return 0;
 }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index c93fcfd..8aa4960 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,27 +395,27 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }
 
-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifn)
 {
 	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
 		amd_set_ssb_virt_state(tifn);
 	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
 		amd_set_core_ssb_state(tifn);
 	else
-		intel_set_ssb_state(tifn);
+		spec_ctrl_update_msr(tifn);
 }
 
-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(tif);
 	preempt_enable();
 }
 
@@ -452,7 +452,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
 	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+		__speculation_ctrl_update(tifn);
 }
 
 /*
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (7 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 08/13] x86/speculation: Rename SSBD update functions Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:47   ` Thomas Gleixner
  2018-10-26 17:21   ` Waiman Long
  2018-10-17 17:59 ` [Patch v3 10/13] x86/speculation: Add per thread STIBP flag Tim Chen
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Reorganize the spculation control MSR update code. Currently it is limited
to only dynamic update of the Speculative Store Bypass Disable bit.
This patch consolidates the logic to check for AMD CPUs that may or may
not use this MSR to control SSBD. This prepares us to add logic to update
other bits in the SPEC_CTRL MSR cleanly.

Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
 1 file changed, 29 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 8aa4960..789f1bada 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
 
 static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 {
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+	u64 msr = x86_spec_ctrl_base;
+
+	/*
+	 * If X86_FEATURE_SSBD is not set, the SSBD
+	 * bit is not to be touched.
+	 */
+	if (static_cpu_has(X86_FEATURE_SSBD))
+		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
 {
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
+	bool updmsr = false;
+
+	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
+	if ((tifp ^ tifn) & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+			amd_set_ssb_virt_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			amd_set_core_ssb_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_SSBD))
+			updmsr  = true;
+	}
+
+	if (updmsr)
 		spec_ctrl_update_msr(tifn);
 }
 
 void speculation_ctrl_update(unsigned long tif)
 {
+	/*
+	 * On this path we're forcing the update, so use ~tif as the
+	 * previous flags.
+	 */
 	preempt_disable();
-	__speculation_ctrl_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculation_ctrl_update(tifn);
+	__speculation_ctrl_update(tifp, tifn);
 }
 
 /*
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 10/13] x86/speculation: Add per thread STIBP flag
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (8 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 13:53   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode Tim Chen
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Add per thread STIBP flag. When context switching to a process thread that
has the STIBP flag, the STIBP bit in the SPEC_CTRL MSR will be turned
on to guard against application to application Spectre v2 attack. When
switching to a non-security sensitive thread that doesn't have STIBP flag,
the STIBP bit in the SPEC_CTRL MSR is turned off.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/include/asm/msr-index.h   |  6 +++++-
 arch/x86/include/asm/spec-ctrl.h   | 12 ++++++++++++
 arch/x86/include/asm/thread_info.h |  5 ++++-
 arch/x86/kernel/process.c          | 10 +++++++++-
 4 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4731f0c..bd19452 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,7 +41,11 @@
 
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT		1          /*
+						    * Single Thread Indirect Branch
+						    * Predictor (STIBP) bit
+						    */
+#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
 #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
 #define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
 
diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 8e2f841..b593779 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(u64 tifn)
 	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+	return (tifn & _TIF_STIBP) >> (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
 {
 	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
 	return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+	BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+	return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
 {
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30..a3ec545 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
+#define TIF_STIBP              9       /* Single threaded indirect branch predict */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -110,6 +111,7 @@ struct thread_info {
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_STIBP		(1 << TIF_STIBP)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
@@ -146,7 +148,8 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
+	 _TIF_SSBD|_TIF_STIBP)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 789f1bada..a65b456 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -406,13 +406,21 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 	if (static_cpu_has(X86_FEATURE_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
+	/*
+	 * Need STIBP defense against Spectre v2 attack
+	 * if SMT is in use and we don't have enhanced IBRS.
+	 */
+	if (static_branch_likely(&cpu_smt_enabled) &&
+	    !static_branch_unlikely(&spectre_v2_enhanced_ibrs))
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
 static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 						      unsigned long tifn)
 {
-	bool updmsr = false;
+	bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);
 
 	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
 	if ((tifp ^ tifn) & _TIF_SSBD) {
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (9 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 10/13] x86/speculation: Add per thread STIBP flag Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 15:12   ` Thomas Gleixner
  2018-10-17 17:59 ` [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack Tim Chen
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Currently the STIBP is always turned on for CPUs vulnerable to Spectre V2.
A new lite protection mode option is created.  In this new mode, we protect
security sensitive processes with STIBP and IBPB against application to
application attack based on its process property or security level. This
will allow non security sensitive processes not needing the protection
avoid overhead associated with STIBP and IBPB.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  18 ++++
 arch/x86/include/asm/nospec-branch.h            |   7 ++
 arch/x86/kernel/cpu/bugs.c                      | 135 ++++++++++++++++++++++--
 arch/x86/mm/tlb.c                               |  19 +++-
 4 files changed, 166 insertions(+), 13 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 64a3bf5..2feb6b2 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4186,6 +4186,24 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_app2app=
+			[X86] Control mitigation of Spectre variant 2
+		        application to application (indirect branch speculation)
+			vulnerability.
+
+			off    - turn off Spectre v2 application to
+				 application attack mitigation
+			lite   - turn on mitigation for non-dumpable
+				 processes (i.e. protect daemons and other
+				 privileged processes that tend to be
+				 non-dumpable).
+			strict - protect against attacks for all user processes
+			auto   - let kernel decide lite or strict mode
+
+			Not specifying this option is equivalent to
+			spectre_v2_app2app=auto.  Setting spectre_v2=off will 
+			also turn off this mitigation.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index d57e84e..ad1c141 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -218,6 +218,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+enum spectre_v2_app2app_mitigation {
+	SPECTRE_V2_APP2APP_NONE,
+	SPECTRE_V2_APP2APP_LITE,
+	SPECTRE_V2_APP2APP_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -229,6 +235,7 @@ enum ssb_mitigation {
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
+DECLARE_STATIC_KEY_FALSE(spectre_v2_app_lite);
 DECLARE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
 
 /*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 673d434..1d317f2 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -133,6 +133,13 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_app2app_mitigation_cmd {
+	SPECTRE_V2_APP2APP_CMD_NONE,
+	SPECTRE_V2_APP2APP_CMD_AUTO,
+	SPECTRE_V2_APP2APP_CMD_LITE,
+	SPECTRE_V2_APP2APP_CMD_STRICT,
+};
+
 static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_MINIMAL]		= "Vulnerable: Minimal generic ASM retpoline",
@@ -142,6 +149,17 @@ static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
 };
 
+static const char *spectre_v2_app2app_strings[] = {
+	[SPECTRE_V2_APP2APP_NONE]   = "App-App Vulnerable",
+	[SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: Protect non-dumpable"
+				      " and indirect branch restricted process",
+	[SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app"
+				      " attack protection",
+};
+
+DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+EXPORT_SYMBOL(spectre_v2_app_lite);
+
 DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
 EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
 
@@ -151,6 +169,9 @@ EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_app2app_mitigation
+	spectre_v2_app2app_enabled __ro_after_init = SPECTRE_V2_APP2APP_NONE;
+
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -172,6 +193,9 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
+		if (static_branch_unlikely(&spectre_v2_app_lite))
+			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
 		if (hostval != guestval) {
 			msrval = setguest ? guestval : hostval;
 			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
@@ -278,6 +302,52 @@ static const struct {
 	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
 };
 
+static const struct {
+	const char *option;
+	enum spectre_v2_app2app_mitigation_cmd cmd;
+	bool secure;
+} app2app_mitigation_options[] = {
+	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,   false },
+	{ "lite",	SPECTRE_V2_APP2APP_CMD_LITE,   false },
+	{ "strict",	SPECTRE_V2_APP2APP_CMD_STRICT, false },
+	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,   false },
+};
+
+static enum spectre_v2_app2app_mitigation_cmd __init
+		spectre_v2_parse_app2app_cmdline(void)
+{
+	enum spectre_v2_app2app_mitigation_cmd cmd;
+	char arg[20];
+	int ret, i;
+
+	cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app",
+					arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(app2app_mitigation_options); i++) {
+		if (!match_option(arg, ret,
+				  app2app_mitigation_options[i].option))
+			continue;
+		cmd = app2app_mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(app2app_mitigation_options)) {
+		pr_err("unknown app to app protection option (%s). "
+		       "Switching to AUTO select\n", arg);
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+	}
+
+	if (app2app_mitigation_options[i].secure)
+		spec2_print_if_secure(app2app_mitigation_options[i].option);
+	else
+		spec2_print_if_insecure(app2app_mitigation_options[i].option);
+
+	return cmd;
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
 	char arg[20];
@@ -330,14 +400,22 @@ static bool stibp_needed(void)
 	/*
 	 * Determine if we want to leave STIBP always on.
 	 * Using enhanced IBRS makes using STIBP unnecessary.
+	 * For lite option, we enable STIBP based on a process's
+	 * flag during context switch.
 	 */
 
 	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
 		return false;
 
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return false;
+
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
+	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+		return false;
+
 	if (!boot_cpu_has(X86_FEATURE_STIBP))
 		return false;
 
@@ -377,7 +455,11 @@ static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+	enum spectre_v2_app2app_mitigation_cmd app2app_cmd;
+	enum spectre_v2_app2app_mitigation app2app_mode;
 
+	app2app_cmd = spectre_v2_parse_app2app_cmdline();
+	app2app_mode = SPECTRE_V2_APP2APP_NONE;
 	/*
 	 * If the CPU is not affected and the command line mode is NONE or AUTO
 	 * then nothing to do.
@@ -398,7 +480,7 @@ static void __init spectre_v2_select_mitigation(void)
 			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			static_branch_enable(&spectre_v2_enhanced_ibrs);
-			goto specv2_set_mode;
+			goto specv2_app_set_mode;
 		}
 		if (IS_ENABLED(CONFIG_RETPOLINE))
 			goto retpoline_auto;
@@ -437,7 +519,31 @@ static void __init spectre_v2_select_mitigation(void)
 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
 	}
 
+specv2_app_set_mode:
+	if (!boot_cpu_has(X86_FEATURE_IBPB) ||
+	    !boot_cpu_has(X86_FEATURE_STIBP))
+		goto specv2_set_mode;	
+
+	switch (app2app_cmd) {
+	case SPECTRE_V2_APP2APP_CMD_NONE:
+		break;	
+
+	case SPECTRE_V2_APP2APP_CMD_LITE:
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+		app2app_mode = SPECTRE_V2_APP2APP_LITE;
+		break;
+
+	case SPECTRE_V2_APP2APP_CMD_STRICT:
+		app2app_mode = SPECTRE_V2_APP2APP_STRICT;
+		break;
+	}
+
 specv2_set_mode:
+	spectre_v2_app2app_enabled = app2app_mode;
+	pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
+	if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
+		static_branch_enable(&spectre_v2_app_lite);
+
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
@@ -452,8 +558,12 @@ static void __init spectre_v2_select_mitigation(void)
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+	/*
+	 * Initialize Indirect Branch Prediction Barrier if supported
+	 * and not disabled
+	 */
+	if (boot_cpu_has(X86_FEATURE_IBPB) && 
+	    app2app_mode != SPECTRE_V2_APP2APP_NONE) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
 		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
 	}
@@ -883,16 +993,23 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		return sprintf(buf, "%s%s%s%s%s%s\n",
+		return sprintf(buf, "%s%s%s%s%s%s%s\n",
 			spectre_v2_strings[spectre_v2_enabled],
-			boot_cpu_has(X86_FEATURE_USE_IBPB) ?
-				   ", IBPB" : "",
+			spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE ||
+			    !boot_cpu_has(X86_FEATURE_USE_IBPB) ?
+				   "" :
+			    spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE ?
+				   ", IBPB-lite" : ", IBPB-strict",
+			spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE ||
+		            spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ||
+			    !boot_cpu_has(X86_FEATURE_STIBP) ?
+				   "" :
+			    spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE ?
+				   ", STIBP-lite" : ", STIBP-strict",
 			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
 				   ", IBRS_FW" : "",
 			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
-				   ", Enhanced IBRS" :
-			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
-				   ", STIBP" : "",
+				   ", Enhanced IBRS" : "",
 			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
 				   ", RSB filling" : "",
 			spectre_v2_module_string());
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 073b8df..1bd1597 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -184,14 +184,25 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
 {
 	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
+	 * For lite protection mode, we protect processes  
+	 * marked with STIBP flag needing app to app proection.
+	 *
+	 * Otherwise check if the current (previous) task has access to the
+	 * the memory of the @tsk (next) task for strict app to app protection.
+	 * If access is denied, make sure to
 	 * issue a IBPB to stop user->user Spectre-v2 attacks.
 	 *
 	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
 	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+
+	/* skip IBPB if no context changes */
+	if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
+		return false;
+
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return task_thread_info(tsk)->flags & _TIF_STIBP;
+	else
+		return ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB);
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (10 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-18 15:17   ` Thomas Gleixner
  2018-10-26 17:46   ` Waiman Long
  2018-10-17 17:59 ` [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
  2018-10-19  7:57 ` [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Peter Zijlstra
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Mark the non-dumpable processes with TIF_STIBP flag so they will
use STIBP and IBPB defenses against Spectre v2 attack from
processes in user space.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1d317f2..cc77b9e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/nospec.h>
 #include <linux/prctl.h>
+#include <linux/sched/coredump.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -773,6 +774,26 @@ int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 	}
 }
 
+void arch_set_dumpable(struct task_struct *tsk, unsigned int value)
+{
+	bool update;
+
+	if (!static_branch_unlikely(&spectre_v2_app_lite))
+		return;
+	if (!static_cpu_has(X86_FEATURE_STIBP))
+		return;
+	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+		return;
+
+	if (tsk->mm && value != SUID_DUMP_USER)
+		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
+	else
+		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
+
+	if (tsk == current && update)
+		speculation_ctrl_update_current();
+}
+
 #ifdef CONFIG_SECCOMP
 void arch_seccomp_spec_mitigate(struct task_struct *task)
 {
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (11 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack Tim Chen
@ 2018-10-17 17:59 ` Tim Chen
  2018-10-17 19:12   ` Randy Dunlap
  2018-10-18 15:31   ` Thomas Gleixner
  2018-10-19  7:57 ` [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Peter Zijlstra
  13 siblings, 2 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-17 17:59 UTC (permalink / raw)
  To: Jiri Kosina, Thomas Gleixner
  Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

Create PRCTL interface to restrict an application's indirect branch
speculation.  This will protect the application against spectre v2 attack
from another application.

Invocations:
Check indirect branch speculation status with
- prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0);

Enable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);

Disable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);

Force disable indirect branch speculation with
- prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);

See Documentation/userspace-api/spec_ctrl.rst.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  5 +-
 Documentation/userspace-api/spec_ctrl.rst       | 10 +++
 arch/x86/kernel/cpu/bugs.c                      | 85 ++++++++++++++++++++++++-
 include/linux/sched.h                           | 11 ++++
 include/uapi/linux/prctl.h                      |  1 +
 tools/include/uapi/linux/prctl.h                |  1 +
 6 files changed, 111 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 2feb6b2..9af11be 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4196,7 +4196,10 @@
 			lite   - turn on mitigation for non-dumpable
 				 processes (i.e. protect daemons and other
 				 privileged processes that tend to be
-				 non-dumpable).
+				 non-dumpable), and processes that has indirect
+				 branch speculation restricted via prctl's
+				 PR_SET_SPECULATION_CTRL option
+
 			strict - protect against attacks for all user processes
 			auto   - let kernel decide lite or strict mode
 
diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
index 32f3d55..1acf198 100644
--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,13 @@ Speculation misfeature controls
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_INDIR_BRANCH: Indirect Branch Speculation in Applications
+                   (Mitigate Spectre V2 style user space application
+                    to application attack)
+
+  Invocations:
+   * prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cc77b9e..d5c5faf 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -763,12 +763,70 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+	bool update;
+
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return 0;
+		/*
+		 * Indirect branch speculation is always disabled in
+		 * strict mode or if the application is non dumpable
+		 * in lite mode. 
+		 */
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return -ENXIO;
+		if (task->mm && get_dumpable(task->mm) != SUID_DUMP_USER)
+			return -ENXIO;
+		task_clear_spec_indir_branch_disable(task);
+		update = test_and_clear_tsk_thread_flag(task, TIF_STIBP);
+		break;
+	case PR_SPEC_DISABLE:
+		/*
+		 * Indirect branch speculation is always enabled when
+		 * app to app mitigation is off.
+		 */
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return -ENXIO;
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		update = !test_and_set_tsk_thread_flag(task, TIF_STIBP);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
+			return -ENXIO;
+		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+			return 0;
+		task_set_spec_indir_branch_disable(task);
+		task_set_spec_indir_branch_force_disable(task);
+		update = !test_and_set_tsk_thread_flag(task, TIF_STIBP);
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	/*
+	 * If being set on non-current task, delay setting the CPU
+	 * mitigation until it is next scheduled.
+	 * Use speculative_store_bypass_update will update SPEC_CTRL MSR
+	 */
+	if (task == current && update)
+		speculation_ctrl_update_current();
+
+	return 0;
+}
+
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
+	case PR_INDIR_BRANCH:
+		return indir_branch_prctl_set(task, ctrl);
 	default:
 		return -ENODEV;
 	}
@@ -787,7 +845,7 @@ void arch_set_dumpable(struct task_struct *tsk, unsigned int value)
 
 	if (tsk->mm && value != SUID_DUMP_USER)
 		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
-	else
+	else if (!task_spec_indir_branch_disable(tsk))
 		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
 
 	if (tsk == current && update)
@@ -821,11 +879,36 @@ static int ssb_prctl_get(struct task_struct *task)
 	}
 }
 
+static int indir_branch_prctl_get(struct task_struct *task)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+		return PR_SPEC_NOT_AFFECTED;
+
+	switch (spectre_v2_app2app_enabled) {
+	case SPECTRE_V2_APP2APP_NONE:
+		return PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_LITE:
+		if (task_spec_indir_branch_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+		if (task->mm && get_dumpable(task->mm) != SUID_DUMP_USER)
+			return PR_SPEC_DISABLE;
+		if (task_spec_indir_branch_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	case SPECTRE_V2_APP2APP_STRICT:
+		return PR_SPEC_DISABLE;
+	default:
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_get(task);
+	case PR_INDIR_BRANCH:
+		return indir_branch_prctl_get(task);
 	default:
 		return -ENODEV;
 	}
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 977cb57..8e44de6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1439,6 +1439,10 @@ static inline bool is_percpu_thread(void)
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_INDIR_BRANCH_DISABLE	5	/* Indirect branch speculation */
+						/* restricted in apps */
+#define PFA_SPEC_INDIR_BRANCH_FORCE_DISABLE 6	/* Indirect branch speculation */
+						/* restricted in apps force disabled */
 
 #define TASK_PFA_TEST(name, func)					\
 	static inline bool task_##func(struct task_struct *p)		\
@@ -1470,6 +1474,13 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ssb_disable)
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+
+TASK_PFA_TEST(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_FORCE_DISABLE, spec_indir_branch_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index c0d7ea0..06f71f6 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index c0d7ea0..06f71f6 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_INDIR_BRANCH		1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-10-17 17:59 ` [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
@ 2018-10-17 19:12   ` Randy Dunlap
  2018-10-18 15:31   ` Thomas Gleixner
  1 sibling, 0 replies; 47+ messages in thread
From: Randy Dunlap @ 2018-10-17 19:12 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/17/18 10:59 AM, Tim Chen wrote:
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  5 +-
>  Documentation/userspace-api/spec_ctrl.rst       | 10 +++
>  arch/x86/kernel/cpu/bugs.c                      | 85 ++++++++++++++++++++++++-
>  include/linux/sched.h                           | 11 ++++
>  include/uapi/linux/prctl.h                      |  1 +
>  tools/include/uapi/linux/prctl.h                |  1 +
>  6 files changed, 111 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 2feb6b2..9af11be 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4196,7 +4196,10 @@
>  			lite   - turn on mitigation for non-dumpable
>  				 processes (i.e. protect daemons and other
>  				 privileged processes that tend to be
> -				 non-dumpable).
> +				 non-dumpable), and processes that has indirect

				                                   have

> +				 branch speculation restricted via prctl's
> +				 PR_SET_SPECULATION_CTRL option
> +
>  			strict - protect against attacks for all user processes
>  			auto   - let kernel decide lite or strict mode
>  



-- 
~Randy

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline
  2018-10-17 17:59 ` [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline Tim Chen
@ 2018-10-18 12:43   ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 12:43 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

Please add () after the function name, so it's immediately recognizable as function.

> Remove unnecessary else statement in spectre_v2_parse_cmdline
> to save an indent.

s/an indent/a indentation level/

Nice cleanup!

With that fixed:

     Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common
  2018-10-17 17:59 ` [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common Tim Chen
@ 2018-10-18 12:46   ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 12:46 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Remove unecessary ret variable in cpu_show_common.
>
> Break up long lines too to make the code more concise
> and easier to read and modify in later patches.

So this does two things at once.

>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> -	int ret;
> -
>  	if (!boot_cpu_has_bug(bug))
>  		return sprintf(buf, "Not affected\n");
>  
> @@ -873,13 +871,17 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  
>  	case X86_BUG_SPECTRE_V2:
> -		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> -			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
> -			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> -			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> -			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
> -			       spectre_v2_module_string());
> -		return ret;
> +		return sprintf(buf, "%s%s%s%s%s%s\n",
> +			spectre_v2_strings[spectre_v2_enabled],
> +			boot_cpu_has(X86_FEATURE_USE_IBPB) ?
> +				   ", IBPB" : "",
> +			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
> +				   ", IBRS_FW" : "",
> +			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
> +				   ", STIBP" : "",
> +			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
> +				   ", RSB filling" : "",

And I do not agree at all that this is more readable.

IMO it's actually worse and I do not see how that makes it easier to
modify.

Thanks,

	tglx



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS
  2018-10-17 17:59 ` [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS Tim Chen
@ 2018-10-18 12:50   ` Thomas Gleixner
  2018-10-26 16:58   ` Waiman Long
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 12:50 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Add static key to indicate whether we are using Enhanced IBRS to mitigate

Please avoid personification of technical things. _We_ are not using
anything, really.

> +DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
> +EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);

All the speculation symbols are exported with _GPL. Please make it consistent.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-17 17:59 ` [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
@ 2018-10-18 12:58   ` Thomas Gleixner
  2018-10-26 17:00   ` Waiman Long
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 12:58 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> With enhanced IBRS in use, the application running on sibling CPU will not

  on a hyperthread sibling

> be able to launch Spectre v2 attack to the application on current CPU.

That's technically wrong. It still can launch an attack, but the attack
wont work.

So this wants to be:

  will not be able to exploit the Spectre V2 vulnerability.

> @@ -881,7 +889,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  				   ", IBPB" : "",
>  			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
>  				   ", IBRS_FW" : "",
> -			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
> +			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
> +				   ", Enhanced IBRS" :
> +			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>  				   ", STIBP" : "",

This is more than horrible, really. Can you please do the following:

1) Split the sprintf() into a helper function in a first patch, which
   spares a ibdentation level. i.e. what you tried in 2/13

2) If that condition still needs ugly unreadable line breaks, then split it
   out into a helper function as well.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code
  2018-10-17 17:59 ` [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
@ 2018-10-18 13:03   ` Thomas Gleixner
  2018-10-19  7:51   ` Peter Zijlstra
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:03 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Create the cpu_smt_enabled static key to indicate if we are
> using SMT.  SMT specific code paths are executed only when SMT
> code paths are enabled by this key.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c |  2 +-
>  arch/x86/kvm/vmx.c         |  2 +-
>  include/linux/cpu.h        |  1 +
>  kernel/cpu.c               | 12 ++++++++++--

I have a hard time to understand why the subject prefix is x86/smt.

This want's to be split into:

1) Add the static key to the core

2) Make use of it in x86

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable
  2018-10-17 17:59 ` [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable Tim Chen
@ 2018-10-18 13:22   ` Thomas Gleixner
  2018-10-19 20:02   ` Peter Zijlstra
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:22 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Change the argument to set_dumpable from task->mm to task.  This allows us
> to later add hooks to modify a task's property according to whether it is
> a non-dumpable task. Non dumpable tasks demand a higher level of security.
> Changes the dumpable value from in to unsigned int as negative number is
> not allowed.

Please use paragraphs and do not write everything in one big lump. Also
please start with the context and the rationale for a change before
explaining what. Suggestion:

  set_dumpable() takes a struct mm pointer as argument, but for finer
  grained security control of hardware vulnerabilites a architecture
  specific set_dumpable() needs to be added which needs access to the task
  struct and not only to the tasks mm struct.

  Replace the mm pointer with a task pointer and fix up implementation and
  call sites.
  
  While at it change the type of the value argument from int to unsigned
  int as the valid dumpable mode values are greater equal zero.

Hmm?

> diff --git a/fs/exec.c b/fs/exec.c
> index 1ebf6e5..e204830 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1362,9 +1362,9 @@ void setup_new_exec(struct linux_binprm * bprm)
>  	if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP ||
>  	    !(uid_eq(current_euid(), current_uid()) &&
>  	      gid_eq(current_egid(), current_gid())))
> -		set_dumpable(current->mm, suid_dumpable);
> +		set_dumpable(current, (unsigned int) suid_dumpable);

Yuck. For one the type cast is pointless, but can we please fix the whole
thing and make suid_dumpable unsigned int?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-17 17:59 ` [Patch v3 07/13] x86/process Add arch_set_dumpable Tim Chen
@ 2018-10-18 13:28   ` Thomas Gleixner
  2018-10-18 18:46     ` Tim Chen
  0 siblings, 1 reply; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:28 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Add arch_set_dumpable for setting architecture specific security
> modifications on processes according to its dumpable properties.
> Non dumpable processes are security sensitive and they can be modified
> to gain architecture specific security defenses via arch_set_dumpable.

You love that sentence, right? But it's still wrong.

arch_set_dumpable() does arch specific extra modifications depending on
'value'. It's not a one way street. It can tighten or relax.

> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  fs/exec.c | 6 ++++++

And this is related to the subsystem in $subject (x86/process) in which way?

>  1 file changed, 6 insertions(+)
> 
> diff --git a/fs/exec.c b/fs/exec.c
> index e204830..6f329fc 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1940,6 +1940,11 @@ void set_binfmt(struct linux_binfmt *new)
>  }
>  EXPORT_SYMBOL(set_binfmt);
>  
> +void __weak arch_set_dumpable(struct task_struct *tsk, unsigned int value)
> +{
> +	return;
> +}
> +
>  /*
>   * set_dumpable stores three-value SUID_DUMP_* into mm->flags.
>   */
> @@ -1956,6 +1961,7 @@ void set_dumpable(struct task_struct *tsk, unsigned int value)
>  		old = READ_ONCE(tsk->mm->flags);
>  		new = (old & ~MMF_DUMPABLE_MASK) | value;
>  	} while (cmpxchg(&tsk->mm->flags, old, new) != old);
> +	arch_set_dumpable(tsk, value);

So now the obvious question. set_dumpable() operates on tsk->mm. i.e. it's
a process wide operation. But arch_set_dumpable() operates on the task
itself. What about the other tasks of that process?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 08/13] x86/speculation: Rename SSBD update functions
  2018-10-17 17:59 ` [Patch v3 08/13] x86/speculation: Rename SSBD update functions Tim Chen
@ 2018-10-18 13:37   ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:37 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> This is a clean up patch that doesn't change any

This is not a cleanup. It's a preparatory patch and again you want to
explain first the context and then what you are doing.

> functionality.  We rename intel_set_ssb_state to
> spec_ctrl_update_msr, speculative_store_bypass_update to
> speculation_ctrl_update and speculative_store_bypass_update_current to
> speculation_ctrl_update_current.

There is no point in listing all what the patch is doing. We all can see
that from the patch itself. And again 'We'? We are doing nothing. Please
write your changelogs in imperative mode, like Documentation/process
requires.

> This prepares us to update of other bits in the SPEC_CTRL MSR
> dynamically.

Suggestion:

  The speculative store bypass related functions are controlling the SSBD
  bit in SPEC_CTRL_MSR depending on a TIF flag. 

  For more fine grained control of STIBP this control needs to be extended,
  so the SSB naming convention does not apply anymore.

  Rename the functions to a more generic name.

See?

Looks good otherwise.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update
  2018-10-17 17:59 ` [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update Tim Chen
@ 2018-10-18 13:47   ` Thomas Gleixner
  2018-10-26 17:21   ` Waiman Long
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:47 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Reorganize the spculation control MSR update code. Currently it is limited
> to only dynamic update of the Speculative Store Bypass Disable bit.
> This patch consolidates the logic to check for AMD CPUs that may or may

# git grep 'This patch' Documentation/process/

> not use this MSR to control SSBD. This prepares us to add logic to update
> other bits in the SPEC_CTRL MSR cleanly.
> 
> Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
>  1 file changed, 29 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 8aa4960..789f1bada 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
>  
>  static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>  {
> -	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> +	u64 msr = x86_spec_ctrl_base;
> +
> +	/*
> +	 * If X86_FEATURE_SSBD is not set, the SSBD
> +	 * bit is not to be touched.
> +	 */
> +	if (static_cpu_has(X86_FEATURE_SSBD))
> +		msr |= ssbd_tif_to_spec_ctrl(tifn);
>  
>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  }
>  
> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
> +						      unsigned long tifn)
>  {
> -	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> -		amd_set_ssb_virt_state(tifn);
> -	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> -		amd_set_core_ssb_state(tifn);
> -	else
> +	bool updmsr = false;
> +
> +	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */

Errm. This checks a TIF bit and nothing AMD. The AMD specific check only
happens when the branch is taken. Please be precise with your comments.

	/* If TIF_SSBD is different, select the proper mitigation method */

Hmm?

> +	if ((tifp ^ tifn) & _TIF_SSBD) {
> +		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +			amd_set_ssb_virt_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> +			amd_set_core_ssb_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_SSBD))
> +			updmsr  = true;
> +	}
> +
> +	if (updmsr)
>  		spec_ctrl_update_msr(tifn);
>  }
>  
>  void speculation_ctrl_update(unsigned long tif)
>  {
> +	/*
> +	 * On this path we're forcing the update, so use ~tif as the
> +	 * previous flags.
> +	 */

	/* Forced update. Make sure all relevant TIF flags are different */

Hmm?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 10/13] x86/speculation: Add per thread STIBP flag
  2018-10-17 17:59 ` [Patch v3 10/13] x86/speculation: Add per thread STIBP flag Tim Chen
@ 2018-10-18 13:53   ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 13:53 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

Again the subject is misleading, It's suggesting that it adds a per thread
flag. But the patch does way more than that.

> Add per thread STIBP flag. When context switching to a process thread that
> has the STIBP flag, the STIBP bit in the SPEC_CTRL MSR will be turned
> on to guard against application to application Spectre v2 attack. When
> switching to a non-security sensitive thread that doesn't have STIBP flag,
> the STIBP bit in the SPEC_CTRL MSR is turned off.

I think I gave you quite some examples how such a changelog should look like.

> --- a/arch/x86/include/asm/thread_info.h
> +++ b/arch/x86/include/asm/thread_info.h
> @@ -83,6 +83,7 @@ struct thread_info {
>  #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
>  #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
>  #define TIF_SECCOMP		8	/* secure computing */
> +#define TIF_STIBP              9       /* Single threaded indirect branch predict */
>  #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
>  #define TIF_UPROBE		12	/* breakpointed or singlestepping */
>  #define TIF_PATCH_PENDING	13	/* pending live patching update */
> @@ -110,6 +111,7 @@ struct thread_info {
>  #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
>  #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
>  #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
> +#define _TIF_STIBP		(1 << TIF_STIBP)
>  #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
>  #define _TIF_UPROBE		(1 << TIF_UPROBE)
>  #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
> @@ -146,7 +148,8 @@ struct thread_info {
>  
>  /* flags to check in __switch_to() */
>  #define _TIF_WORK_CTXSW							\
> -	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
> +	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|		\
> +	 _TIF_SSBD|_TIF_STIBP)

These flags are at random locations. I wonder whether they should be
grouped together. Might not influence code generation, but at least worth
to have a look.

> +	/*
> +	 * Need STIBP defense against Spectre v2 attack
> +	 * if SMT is in use and we don't have enhanced IBRS.

and enhanced IBRS is not supported.

> +	 */
> +	if (static_branch_likely(&cpu_smt_enabled) &&
> +	    !static_branch_unlikely(&spectre_v2_enhanced_ibrs))
> +		msr |= stibp_tif_to_spec_ctrl(tifn);
> +
>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  }

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode
  2018-10-17 17:59 ` [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode Tim Chen
@ 2018-10-18 15:12   ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 15:12 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> Currently the STIBP is always turned on for CPUs vulnerable to Spectre V2.
> A new lite protection mode option is created.  In this new mode, we protect
> security sensitive processes with STIBP and IBPB against application to
> application attack based on its process property or security level. This
> will allow non security sensitive processes not needing the protection
> avoid overhead associated with STIBP and IBPB.

This is again a big lump with no significant structure. Please reword.

> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  18 ++++
>  arch/x86/include/asm/nospec-branch.h            |   7 ++
>  arch/x86/kernel/cpu/bugs.c                      | 135 ++++++++++++++++++++++--
>  arch/x86/mm/tlb.c                               |  19 +++-
>  4 files changed, 166 insertions(+), 13 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 64a3bf5..2feb6b2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4186,6 +4186,24 @@
>  			Not specifying this option is equivalent to
>  			spectre_v2=auto.
>  
> +	spectre_v2_app2app=
> +			[X86] Control mitigation of Spectre variant 2
> +		        application to application (indirect branch speculation)
> +			vulnerability.
> +
> +			off    - turn off Spectre v2 application to
> +				 application attack mitigation

  Please start the explanations with an upper case letter. And there is no
  point to repeat the 'Spectre v2 application to application attack' thing
  over and over. It's explained above with the command line switch.

       	Disable mitigation

  is really sufficient.

> +			lite   - turn on mitigation for non-dumpable
> +				 processes (i.e. protect daemons and other
> +				 privileged processes that tend to be
> +				 non-dumpable).

  			tend to be?

	 Turn on mitigation for processes, which are
	 marked non-dumpable.

> +			strict - protect against attacks for all user processes

  	 Protect all processes

> +			auto   - let kernel decide lite or strict mode

  	 Kernel selects the mode

> +			Not specifying this option is equivalent to
> +			spectre_v2_app2app=auto.  Setting spectre_v2=off will 
> +			also turn off this mitigation.

Please split this into two paragraphs so the latter stands out more.

> +enum spectre_v2_app2app_mitigation_cmd {
> +	SPECTRE_V2_APP2APP_CMD_NONE,
> +	SPECTRE_V2_APP2APP_CMD_AUTO,
> +	SPECTRE_V2_APP2APP_CMD_LITE,
> +	SPECTRE_V2_APP2APP_CMD_STRICT,
> +};
> +
>  static const char *spectre_v2_strings[] = {
>  	[SPECTRE_V2_NONE]			= "Vulnerable",
>  	[SPECTRE_V2_RETPOLINE_MINIMAL]		= "Vulnerable: Minimal generic ASM retpoline",
> @@ -142,6 +149,17 @@ static const char *spectre_v2_strings[] = {
>  	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
>  };
>  
> +static const char *spectre_v2_app2app_strings[] = {
> +	[SPECTRE_V2_APP2APP_NONE]   = "App-App Vulnerable",
> +	[SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: Protect non-dumpable"
> +				      " and indirect branch restricted process",

Please do not split strings. 

> +	[SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app"
> +				      " attack protection",
> +};
> +
> +DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
> +EXPORT_SYMBOL(spectre_v2_app_lite);
> +
>  DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
>  EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
>  
> @@ -151,6 +169,9 @@ EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
>  static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
>  	SPECTRE_V2_NONE;
>  
> +static enum spectre_v2_app2app_mitigation
> +	spectre_v2_app2app_enabled __ro_after_init = SPECTRE_V2_APP2APP_NONE;
> +
>  void
>  x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
>  {
> @@ -172,6 +193,9 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
>  		    static_cpu_has(X86_FEATURE_AMD_SSBD))
>  			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
>  
> +		if (static_branch_unlikely(&spectre_v2_app_lite))
> +			hostval |= stibp_tif_to_spec_ctrl(ti->flags);
> +
>  		if (hostval != guestval) {
>  			msrval = setguest ? guestval : hostval;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
> @@ -278,6 +302,52 @@ static const struct {
>  	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
>  };
>  
> +static const struct {
> +	const char *option;
> +	enum spectre_v2_app2app_mitigation_cmd cmd;
> +	bool secure;
> +} app2app_mitigation_options[] = {
> +	{ "off",	SPECTRE_V2_APP2APP_CMD_NONE,   false },
> +	{ "lite",	SPECTRE_V2_APP2APP_CMD_LITE,   false },
> +	{ "strict",	SPECTRE_V2_APP2APP_CMD_STRICT, false },
> +	{ "auto",	SPECTRE_V2_APP2APP_CMD_AUTO,   false },

So all of those options are insecure, right? Why do we need them at all?

> +};
> +
> +static enum spectre_v2_app2app_mitigation_cmd __init
> +		spectre_v2_parse_app2app_cmdline(void)
> +{
> +	enum spectre_v2_app2app_mitigation_cmd cmd;
> +	char arg[20];
> +	int ret, i;
> +
> +	cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
> +	ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app",
> +					arg, sizeof(arg));

Please align the first argument in the second line with the first argument
of the upper line.

> +	if (ret < 0)
> +		return SPECTRE_V2_APP2APP_CMD_AUTO;
> +
> +	for (i = 0; i < ARRAY_SIZE(app2app_mitigation_options); i++) {
> +		if (!match_option(arg, ret,
> +				  app2app_mitigation_options[i].option))

If you'd shorten the length of that array name, you could spare the ugly
line breaks.

> +			continue;
> +		cmd = app2app_mitigation_options[i].cmd;
> +		break;
> +	}
> +
> +	if (i >= ARRAY_SIZE(app2app_mitigation_options)) {
> +		pr_err("unknown app to app protection option (%s). "
> +		       "Switching to AUTO select\n", arg);
> +		return SPECTRE_V2_APP2APP_CMD_AUTO;
> +	}
> +
> +	if (app2app_mitigation_options[i].secure)
> +		spec2_print_if_secure(app2app_mitigation_options[i].option);
> +	else
> +		spec2_print_if_insecure(app2app_mitigation_options[i].option);
> +
> +	return cmd;
> +}
> +
>  static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
>  {
>  	char arg[20];
> @@ -330,14 +400,22 @@ static bool stibp_needed(void)
>  	/*
>  	 * Determine if we want to leave STIBP always on.
>  	 * Using enhanced IBRS makes using STIBP unnecessary.
> +	 * For lite option, we enable STIBP based on a process's
> +	 * flag during context switch.
>  	 */
>  
>  	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
>  		return false;
>  
> +	if (static_branch_unlikely(&spectre_v2_app_lite))
> +		return false;
> +
>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>  		return false;

This should be the first check, really because everything else is
irrelevant when this is false. I missed that in the patch which added the
enhanced ibrs check above.

> +	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +		return false;
> +
>  	if (!boot_cpu_has(X86_FEATURE_STIBP))
>  		return false;
>  
> @@ -377,7 +455,11 @@ static void __init spectre_v2_select_mitigation(void)
>  {
>  	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
>  	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
> +	enum spectre_v2_app2app_mitigation_cmd app2app_cmd;
> +	enum spectre_v2_app2app_mitigation app2app_mode;
>  
> +	app2app_cmd = spectre_v2_parse_app2app_cmdline();

You can avoid the parsing when neither IBPB nor STIBP are available. So
please move this further down.

> +	app2app_mode = SPECTRE_V2_APP2APP_NONE;

This can go right before the IBPB or STIBP check.

>  	/*
>  	 * If the CPU is not affected and the command line mode is NONE or AUTO
>  	 * then nothing to do.
> @@ -398,7 +480,7 @@ static void __init spectre_v2_select_mitigation(void)
>  			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  			static_branch_enable(&spectre_v2_enhanced_ibrs);
> -			goto specv2_set_mode;
> +			goto specv2_app_set_mode;
>  		}
>  		if (IS_ENABLED(CONFIG_RETPOLINE))
>  			goto retpoline_auto;
> @@ -437,7 +519,31 @@ static void __init spectre_v2_select_mitigation(void)
>  		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
>  	}
>  
> +specv2_app_set_mode:
> +	if (!boot_cpu_has(X86_FEATURE_IBPB) ||
> +	    !boot_cpu_has(X86_FEATURE_STIBP))
> +		goto specv2_set_mode;	
> +
> +	switch (app2app_cmd) {
> +	case SPECTRE_V2_APP2APP_CMD_NONE:
> +		break;	
> +
> +	case SPECTRE_V2_APP2APP_CMD_LITE:
> +	case SPECTRE_V2_APP2APP_CMD_AUTO:
> +		app2app_mode = SPECTRE_V2_APP2APP_LITE;
> +		break;
> +
> +	case SPECTRE_V2_APP2APP_CMD_STRICT:
> +		app2app_mode = SPECTRE_V2_APP2APP_STRICT;
> +		break;
> +	}
> +
>  specv2_set_mode:
> +	spectre_v2_app2app_enabled = app2app_mode;
> +	pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
> +	if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
> +		static_branch_enable(&spectre_v2_app_lite);
> +
>  	spectre_v2_enabled = mode;
>  	pr_info("%s\n", spectre_v2_strings[mode]);

This is weird. Why can't you leave the current code as is including the
jump label and then add the app2app stuff underneath. I really wondered
where mode is gone and it makes the code hard to follow for no value. The
the second label becomes specv2_app_set_mode, which actually makes way more
sense than the above convolution.

>  	case X86_BUG_SPECTRE_V2:
> -		return sprintf(buf, "%s%s%s%s%s%s\n",
> +		return sprintf(buf, "%s%s%s%s%s%s%s\n",
>  			spectre_v2_strings[spectre_v2_enabled],
> -			boot_cpu_has(X86_FEATURE_USE_IBPB) ?
> -				   ", IBPB" : "",
> +			spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE ||
> +			    !boot_cpu_has(X86_FEATURE_USE_IBPB) ?
> +				   "" :
> +			    spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE ?
> +				   ", IBPB-lite" : ", IBPB-strict",
> +			spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE ||
> +		            spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ||
> +			    !boot_cpu_has(X86_FEATURE_STIBP) ?
> +				   "" :
> +			    spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_LITE ?
> +				   ", STIBP-lite" : ", STIBP-strict",

So by now this is completely unparseable really. Please split it apart.

>  			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
>  				   ", IBRS_FW" : "",
>  			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
> -				   ", Enhanced IBRS" :
> -			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
> -				   ", STIBP" : "",
> +				   ", Enhanced IBRS" : "",
>  			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
>  				   ", RSB filling" : "",
>  			spectre_v2_module_string());
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 073b8df..1bd1597 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -184,14 +184,25 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
>  static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
>  {
>  	/*
> -	 * Check if the current (previous) task has access to the memory
> -	 * of the @tsk (next) task. If access is denied, make sure to
> +	 * For lite protection mode, we protect processes  
> +	 * marked with STIBP flag needing app to app proection.
> +	 *

  	 * If lite protection mode is enabled, check the STIBP thread flag.

Hmm?

> +	 * Otherwise check if the current (previous) task has access to the
> +	 * the memory of the @tsk (next) task for strict app to app protection.
> +	 * If access is denied, make sure to
>  	 * issue a IBPB to stop user->user Spectre-v2 attacks.

Sigh. Can you please format the result properly?

>  	 *
>  	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
>  	 */
> -	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
> -		ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
> +
> +	/* skip IBPB if no context changes */

Sentences start with capital letters.

	/*
	 * Don't issue IBPB when switching to kernel threads or staying in the
	 * same mm context.
	 */

Gives a bit more accurate information, hmm?

> +	if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
> +		return false;

And this check along with the comment wants to move above the other comment
which is related to the below:

> +
> +	if (static_branch_unlikely(&spectre_v2_app_lite))
> +		return task_thread_info(tsk)->flags & _TIF_STIBP;
> +	else
> +		return ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB);
>  }

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack
  2018-10-17 17:59 ` [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack Tim Chen
@ 2018-10-18 15:17   ` Thomas Gleixner
  2018-10-26 17:46   ` Waiman Long
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 15:17 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:
> +void arch_set_dumpable(struct task_struct *tsk, unsigned int value)
> +{
> +	bool update;
> +
> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> +		return;
> +	if (!static_cpu_has(X86_FEATURE_STIBP))
> +		return;
> +	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +		return;

Can spectre_v2_app_lite be enabled when the cpu does not support STIBP or
spectre_v2_app2app_enabled is not set to SPECTRE_V2_APP2APP_CMD_LITE?

No it cannot. So checking the static key is sufficient.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation
  2018-10-17 17:59 ` [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
  2018-10-17 19:12   ` Randy Dunlap
@ 2018-10-18 15:31   ` Thomas Gleixner
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-18 15:31 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, 17 Oct 2018, Tim Chen wrote:

> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4196,7 +4196,10 @@
>  			lite   - turn on mitigation for non-dumpable
>  				 processes (i.e. protect daemons and other
>  				 privileged processes that tend to be
> -				 non-dumpable).
> +				 non-dumpable), and processes that has indirect
> +				 branch speculation restricted via prctl's
> +				 PR_SET_SPECULATION_CTRL option

				Protect processes which are marked non-dumpable and
				processes which have requested restricted indirect
				branch speculation via the PR_SET_SPECULATION_CTRL
				ptrcl().

> @@ -92,3 +92,13 @@ Speculation misfeature controls
>     * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
>     * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
>     * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
> +
> +- PR_INDIR_BRANCH: Indirect Branch Speculation in Applications
> +                   (Mitigate Spectre V2 style user space application
> +                    to application attack)

No. Please do not create a random name space. We have

    PR_SPEC_STORE_BYPASS

so the logical name for this is

    PR_SPEC_INDIR_BRANCH


> +static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
> +{
> +	bool update;
> +
> +	switch (ctrl) {
> +	case PR_SPEC_ENABLE:
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +			return 0;
> +		/*
> +		 * Indirect branch speculation is always disabled in
> +		 * strict mode or if the application is non dumpable
> +		 * in lite mode. 
> +		 */
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
> +			return -ENXIO;

Please stay consistent with ssb_prctl_set(). EPERM is what you want here.

> +		if (task->mm && get_dumpable(task->mm) != SUID_DUMP_USER)
> +			return -ENXIO;

Ditto

> +		task_clear_spec_indir_branch_disable(task);
> +		update = test_and_clear_tsk_thread_flag(task, TIF_STIBP);
> +		break;
> +	case PR_SPEC_DISABLE:
> +		/*
> +		 * Indirect branch speculation is always enabled when
> +		 * app to app mitigation is off.
> +		 */
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +			return -ENXIO;
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
> +			return 0;
> +		task_set_spec_indir_branch_disable(task);
> +		update = !test_and_set_tsk_thread_flag(task, TIF_STIBP);
> +		break;
> +	case PR_SPEC_FORCE_DISABLE:
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +			return -ENXIO;
> +		if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
> +			return 0;
> +		task_set_spec_indir_branch_disable(task);
> +		task_set_spec_indir_branch_force_disable(task);
> +		update = !test_and_set_tsk_thread_flag(task, TIF_STIBP);
> +		break;
> +	default:
> +		return -ERANGE;
> +	}
> +
> +	/*
> +	 * If being set on non-current task, delay setting the CPU
> +	 * mitigation until it is next scheduled.
> +	 * Use speculative_store_bypass_update will update SPEC_CTRL MSR

Stale comment.

> +	 */
> +	if (task == current && update)
> +		speculation_ctrl_update_current();
> +
> +	return 0;
> +}

Aside of that several patches have trailing whitespace. Please be more careful.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-18 13:28   ` Thomas Gleixner
@ 2018-10-18 18:46     ` Tim Chen
  2018-10-19 19:12       ` Thomas Gleixner
  0 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-18 18:46 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On 10/18/2018 06:28 AM, Thomas Gleixner wrote:

> 
> So now the obvious question. set_dumpable() operates on tsk->mm. i.e. it's
> a process wide operation. But arch_set_dumpable() operates on the task
> itself. What about the other tasks of that process?

I missed this part.

Fixing this is tricky as I don't see an easy way to
reverse map mm back to all tasks that use the same mm
to update their STIBP flags.

One possible solution is to not use STIBP flag for
non-dumpable processes.
We check during context switch whether
get_dumpable(prev) != get_dumpable(next) in addition
to STIBP flag changes to update SPEC_CTRL MSR and IBPB.

We will need to IPI all other CPUs to update
their SPEC_CTRL MSR if they are using the mm
that has dumpable property changes.

Any better suggestions?

Tim

> 
> Thanks,
> 
> 	tglx
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code
  2018-10-17 17:59 ` [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
  2018-10-18 13:03   ` Thomas Gleixner
@ 2018-10-19  7:51   ` Peter Zijlstra
  1 sibling, 0 replies; 47+ messages in thread
From: Peter Zijlstra @ 2018-10-19  7:51 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Thomas Gleixner, Tom Lendacky, Ingo Molnar,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, Oct 17, 2018 at 10:59:33AM -0700, Tim Chen wrote:
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 3adecda..ad28afc 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -349,6 +349,8 @@ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
>  #ifdef CONFIG_HOTPLUG_SMT
>  enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
>  EXPORT_SYMBOL_GPL(cpu_smt_control);
> +DEFINE_STATIC_KEY_TRUE(cpu_smt_enabled);
> +EXPORT_SYMBOL_GPL(cpu_smt_enabled);

We already have sched_smt_present; please merge the two. It is pointless
having this twice.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection
  2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
                   ` (12 preceding siblings ...)
  2018-10-17 17:59 ` [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
@ 2018-10-19  7:57 ` Peter Zijlstra
  2018-10-19 16:43   ` Tim Chen
  13 siblings, 1 reply; 47+ messages in thread
From: Peter Zijlstra @ 2018-10-19  7:57 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Thomas Gleixner, Tom Lendacky, Ingo Molnar,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, Oct 17, 2018 at 10:59:28AM -0700, Tim Chen wrote:
> Application to application exploit is in general difficult due to address
> space layout randomization in applications and the need to know an

Does the BTB attack on KASLR not work for userspace?

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection
  2018-10-19  7:57 ` [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Peter Zijlstra
@ 2018-10-19 16:43   ` Tim Chen
  2018-10-19 18:38     ` Peter Zijlstra
  0 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-19 16:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jiri Kosina, Thomas Gleixner, Tom Lendacky, Ingo Molnar,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On 10/19/2018 12:57 AM, Peter Zijlstra wrote:
> On Wed, Oct 17, 2018 at 10:59:28AM -0700, Tim Chen wrote:
>> Application to application exploit is in general difficult due to address
>> space layout randomization in applications and the need to know an
> 
> Does the BTB attack on KASLR not work for userspace?
> 

With KASLR, you can probe the kernel mapped and unmapped
addresses with side channels like TLB and infer the kernel mapping
offsets much more easily, as kernel is in the same address
space as the attack process.  It is a lot harder to do
such probing from another process that doesn't share the
same page tables.

Tim

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection
  2018-10-19 16:43   ` Tim Chen
@ 2018-10-19 18:38     ` Peter Zijlstra
  0 siblings, 0 replies; 47+ messages in thread
From: Peter Zijlstra @ 2018-10-19 18:38 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Thomas Gleixner, Tom Lendacky, Ingo Molnar,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Fri, Oct 19, 2018 at 09:43:35AM -0700, Tim Chen wrote:
> On 10/19/2018 12:57 AM, Peter Zijlstra wrote:
> > On Wed, Oct 17, 2018 at 10:59:28AM -0700, Tim Chen wrote:
> >> Application to application exploit is in general difficult due to address
> >> space layout randomization in applications and the need to know an
> > 
> > Does the BTB attack on KASLR not work for userspace?
> > 
> 
> With KASLR, you can probe the kernel mapped and unmapped
> addresses with side channels like TLB and infer the kernel mapping
> offsets much more easily, as kernel is in the same address
> space as the attack process.  It is a lot harder to do
> such probing from another process that doesn't share the
> same page tables.

I said BTB; see: http://www.cs.binghamton.edu/~dima/micro16.pdf

From what I understood, local ASLR (of any kind) is a pipe dream.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-18 18:46     ` Tim Chen
@ 2018-10-19 19:12       ` Thomas Gleixner
  2018-10-19 20:16         ` Thomas Gleixner
  0 siblings, 1 reply; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-19 19:12 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Thu, 18 Oct 2018, Tim Chen wrote:
> On 10/18/2018 06:28 AM, Thomas Gleixner wrote:
> > 
> > So now the obvious question. set_dumpable() operates on tsk->mm. i.e. it's
> > a process wide operation. But arch_set_dumpable() operates on the task
> > itself. What about the other tasks of that process?
> 
> I missed this part.
> 
> Fixing this is tricky as I don't see an easy way to
> reverse map mm back to all tasks that use the same mm
> to update their STIBP flags.

task is known and handed in to the function. So why do you want to reverse
map mm?

for_each_thread(task, ...) is what you want. The only thing to verify is
whether you can lock the tasks sighand lock there. And then it's enough to
set/clear the TIF bit and let it take effect at the next context switch.

Thanks,

	tglx





^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable
  2018-10-17 17:59 ` [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable Tim Chen
  2018-10-18 13:22   ` Thomas Gleixner
@ 2018-10-19 20:02   ` Peter Zijlstra
  1 sibling, 0 replies; 47+ messages in thread
From: Peter Zijlstra @ 2018-10-19 20:02 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Thomas Gleixner, Tom Lendacky, Ingo Molnar,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Wed, Oct 17, 2018 at 10:59:34AM -0700, Tim Chen wrote:
> Change the argument to set_dumpable from task->mm to task.  This allows us
> to later add hooks to modify a task's property according to whether it is
> a non-dumpable task. Non dumpable tasks demand a higher level of security.
> Changes the dumpable value from in to unsigned int as negative number is
> not allowed.

Per task dumpable doesn't make any kind of sense. Dumpable is about
coredumps, that's an mm property.

If you want to introduce other (per task) state; give that a new name
and check for that explicitly. Don't create confusion like this.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-19 19:12       ` Thomas Gleixner
@ 2018-10-19 20:16         ` Thomas Gleixner
  2018-10-22 23:55           ` Tim Chen
  0 siblings, 1 reply; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-19 20:16 UTC (permalink / raw)
  To: Tim Chen
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On Fri, 19 Oct 2018, Thomas Gleixner wrote:
> On Thu, 18 Oct 2018, Tim Chen wrote:
> > On 10/18/2018 06:28 AM, Thomas Gleixner wrote:
> > > 
> > > So now the obvious question. set_dumpable() operates on tsk->mm. i.e. it's
> > > a process wide operation. But arch_set_dumpable() operates on the task
> > > itself. What about the other tasks of that process?
> > 
> > I missed this part.
> > 
> > Fixing this is tricky as I don't see an easy way to
> > reverse map mm back to all tasks that use the same mm
> > to update their STIBP flags.
> 
> task is known and handed in to the function. So why do you want to reverse
> map mm?
> 
> for_each_thread(task, ...) is what you want. The only thing to verify is
> whether you can lock the tasks sighand lock there. And then it's enough to
> set/clear the TIF bit and let it take effect at the next context switch.

Though there is another issue with mm shared between processes,
i.e. CLONE_VM without CLONE_THREAD where set_dumpabe() affects both
processes, but for_each_thread() only seing the threads which belong to the
thread group of 'task'.

I can understand your idea of clumping this on dumpable(), but at some
point we really have to draw a line and just stop adding more and more
expensive stuff to context switch.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 07/13] x86/process Add arch_set_dumpable
  2018-10-19 20:16         ` Thomas Gleixner
@ 2018-10-22 23:55           ` Tim Chen
  0 siblings, 0 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-22 23:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Jiri Kosina, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel, x86

On 10/19/2018 01:16 PM, Thomas Gleixner wrote:
> On Fri, 19 Oct 2018, Thomas Gleixner wrote:
>> On Thu, 18 Oct 2018, Tim Chen wrote:
>>> On 10/18/2018 06:28 AM, Thomas Gleixner wrote:
>>>>
>>>> So now the obvious question. set_dumpable() operates on tsk->mm. i.e. it's
>>>> a process wide operation. But arch_set_dumpable() operates on the task
>>>> itself. What about the other tasks of that process?
>>>
>>> I missed this part.
>>>
>>> Fixing this is tricky as I don't see an easy way to
>>> reverse map mm back to all tasks that use the same mm
>>> to update their STIBP flags.
>>
>> task is known and handed in to the function. So why do you want to reverse
>> map mm?
>>
>> for_each_thread(task, ...) is what you want. The only thing to verify is
>> whether you can lock the tasks sighand lock there. And then it's enough to
>> set/clear the TIF bit and let it take effect at the next context switch.
> 
> Though there is another issue with mm shared between processes,
> i.e. CLONE_VM without CLONE_THREAD where set_dumpabe() affects both
> processes, but for_each_thread() only seing the threads which belong to the
> thread group of 'task'.

One complication is threads currently running on some remote CPU
when we change flag.  Normally we will call spec_ctrl_update_current to make
sure the CPU's current SPEC CTRL MSR stays consistent with TIF flag changes.

But for the remote CPUs, if we don't the update their SPEC CTRL MSRs, their
SPEC CTRL MSR will get out of step with the TIF flag state.  It can last for a
long time till we have a TIF flag change during context switches to cause SEPC CTRL MSR
update.

One thing I could do is to add a TIF flag to force update the SPEC_CTRL MSR on
next context switch.  Is that okay?  Or should I put such flag elsewhere?

Tim


> 
> I can understand your idea of clumping this on dumpable(), but at some
> point we really have to draw a line and just stop adding more and more
> expensive stuff to context switch.
> 
> Thanks,
> 
> 	tglx
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS
  2018-10-17 17:59 ` [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS Tim Chen
  2018-10-18 12:50   ` Thomas Gleixner
@ 2018-10-26 16:58   ` Waiman Long
  2018-10-26 18:15     ` Tim Chen
  1 sibling, 1 reply; 47+ messages in thread
From: Waiman Long @ 2018-10-26 16:58 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/17/2018 01:59 PM, Tim Chen wrote:
> Add static key to indicate whether we are using Enhanced IBRS to mitigate
> Spectre v2.  This will be used in later patches to disengage STIBP code
> for Spectre v2 mitigation as STIBP is not needed when Enhanced IBRS is
> in use.
>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/include/asm/nospec-branch.h | 3 +++
>  arch/x86/kernel/cpu/bugs.c           | 4 ++++
>  2 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index fd2a8c1..d57e84e 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -3,6 +3,7 @@
>  #ifndef _ASM_X86_NOSPEC_BRANCH_H_
>  #define _ASM_X86_NOSPEC_BRANCH_H_
>  
> +#include <linux/static_key.h>
>  #include <asm/alternative.h>
>  #include <asm/alternative-asm.h>
>  #include <asm/cpufeatures.h>
> @@ -228,6 +229,8 @@ enum ssb_mitigation {
>  extern char __indirect_thunk_start[];
>  extern char __indirect_thunk_end[];
>  
> +DECLARE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
> +
>  /*
>   * On VMEXIT we must ensure that no RSB predictions learned in the guest
>   * can be followed in the host, by overwriting the RSB completely. Both
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index b2f6b8b..2fc7b4e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -142,6 +142,9 @@ static const char *spectre_v2_strings[] = {
>  	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
>  };
>  
> +DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
> +EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V2 : " fmt
>  
> @@ -386,6 +389,7 @@ static void __init spectre_v2_select_mitigation(void)
>  			/* Force it so VMEXIT will restore correctly */
>  			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +			static_branch_enable(&spectre_v2_enhanced_ibrs);
>  			goto specv2_set_mode;
>  		}
>  		if (IS_ENABLED(CONFIG_RETPOLINE))

Why you need a static key for enhanced IBRS? It is supposed to be set at
boot time and never get changed after that. It will be easier to use a
feature bit for that instead. We usually use static key when the value
can be changed at run time.

Cheers,
Longman


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-17 17:59 ` [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
  2018-10-18 12:58   ` Thomas Gleixner
@ 2018-10-26 17:00   ` Waiman Long
  2018-10-26 18:18     ` Tim Chen
  1 sibling, 1 reply; 47+ messages in thread
From: Waiman Long @ 2018-10-26 17:00 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/17/2018 01:59 PM, Tim Chen wrote:
> With enhanced IBRS in use, the application running on sibling CPU will not
> be able to launch Spectre v2 attack to the application on current CPU.
> There is no need to use STIBP for this case.  Disable the STIBP code
> when enhanced IBRS is used.
>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 2fc7b4e..6ed82ea 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -327,6 +327,14 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
>  
>  static bool stibp_needed(void)
>  {
> +	/*
> +	 * Determine if we want to leave STIBP always on.
> +	 * Using enhanced IBRS makes using STIBP unnecessary.
> +	 */
> +
> +	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
> +		return false;
> +
>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>  		return false;
>  
> @@ -881,7 +889,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  				   ", IBPB" : "",
>  			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
>  				   ", IBRS_FW" : "",
> -			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
> +			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
> +				   ", Enhanced IBRS" :
> +			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>  				   ", STIBP" : "",
>  			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
>  				   ", RSB filling" : "",

The "Enhanced IBRS" is one of the states of spectre_v2_enabled. So you
don't need to print that out one more time.

Cheers,
Longman


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update
  2018-10-17 17:59 ` [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update Tim Chen
  2018-10-18 13:47   ` Thomas Gleixner
@ 2018-10-26 17:21   ` Waiman Long
  2018-10-26 18:25     ` Tim Chen
  1 sibling, 1 reply; 47+ messages in thread
From: Waiman Long @ 2018-10-26 17:21 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/17/2018 01:59 PM, Tim Chen wrote:
> Reorganize the spculation control MSR update code. Currently it is limited
> to only dynamic update of the Speculative Store Bypass Disable bit.
> This patch consolidates the logic to check for AMD CPUs that may or may
> not use this MSR to control SSBD. This prepares us to add logic to update
> other bits in the SPEC_CTRL MSR cleanly.
>
> Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
>  1 file changed, 29 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 8aa4960..789f1bada 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
>  
>  static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>  {
> -	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> +	u64 msr = x86_spec_ctrl_base;
> +
> +	/*
> +	 * If X86_FEATURE_SSBD is not set, the SSBD
> +	 * bit is not to be touched.
> +	 */
> +	if (static_cpu_has(X86_FEATURE_SSBD))
> +		msr |= ssbd_tif_to_spec_ctrl(tifn);
>  
>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>  }
>  
> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
> +						      unsigned long tifn)

I think it will be more intuitive to pass in (tifp ^ tifn) as bitmask of
changed TIF bits than tifp alone as you are only interested in the
changed bits anyway. Please also document the input parameters as it is
hard to know what they are by reading the function alone.

Cheers,
Longman
>  {
> -	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> -		amd_set_ssb_virt_state(tifn);
> -	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> -		amd_set_core_ssb_state(tifn);
> -	else
> +	bool updmsr = false;
> +
> +	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
> +	if ((tifp ^ tifn) & _TIF_SSBD) {
> +		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> +			amd_set_ssb_virt_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> +			amd_set_core_ssb_state(tifn);
> +		else if (static_cpu_has(X86_FEATURE_SSBD))
> +			updmsr  = true;
> +	}
> +
> +	if (updmsr)
>  		spec_ctrl_update_msr(tifn);
>  }
>  
>  void speculation_ctrl_update(unsigned long tif)
>  {
> +	/*
> +	 * On this path we're forcing the update, so use ~tif as the
> +	 * previous flags.
> +	 */
>  	preempt_disable();
> -	__speculation_ctrl_update(tif);
> +	__speculation_ctrl_update(~tif, tif);
>  	preempt_enable();
>  }
>  
> @@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>  	if ((tifp ^ tifn) & _TIF_NOCPUID)
>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>  
> -	if ((tifp ^ tifn) & _TIF_SSBD)
> -		__speculation_ctrl_update(tifn);
> +	__speculation_ctrl_update(tifp, tifn);
>  }
>  
>  /*



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack
  2018-10-17 17:59 ` [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack Tim Chen
  2018-10-18 15:17   ` Thomas Gleixner
@ 2018-10-26 17:46   ` Waiman Long
  2018-10-26 18:10     ` Tim Chen
  1 sibling, 1 reply; 47+ messages in thread
From: Waiman Long @ 2018-10-26 17:46 UTC (permalink / raw)
  To: Tim Chen, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/17/2018 01:59 PM, Tim Chen wrote:
> Mark the non-dumpable processes with TIF_STIBP flag so they will
> use STIBP and IBPB defenses against Spectre v2 attack from
> processes in user space.
>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 1d317f2..cc77b9e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -14,6 +14,7 @@
>  #include <linux/module.h>
>  #include <linux/nospec.h>
>  #include <linux/prctl.h>
> +#include <linux/sched/coredump.h>
>  
>  #include <asm/spec-ctrl.h>
>  #include <asm/cmdline.h>
> @@ -773,6 +774,26 @@ int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>  	}
>  }
>  
> +void arch_set_dumpable(struct task_struct *tsk, unsigned int value)
> +{
> +	bool update;
> +
> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
> +		return;
> +	if (!static_cpu_has(X86_FEATURE_STIBP))
> +		return;
> +	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
> +		return;

The third if above seems to be a subset of the first one. Do you need to
do the check one more time?

Cheers,
Longman

> +
> +	if (tsk->mm && value != SUID_DUMP_USER)
> +		update = !test_and_set_tsk_thread_flag(tsk, TIF_STIBP);
> +	else
> +		update = test_and_clear_tsk_thread_flag(tsk, TIF_STIBP);
> +
> +	if (tsk == current && update)
> +		speculation_ctrl_update_current();
> +}
> +
>  #ifdef CONFIG_SECCOMP
>  void arch_seccomp_spec_mitigate(struct task_struct *task)
>  {



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack
  2018-10-26 17:46   ` Waiman Long
@ 2018-10-26 18:10     ` Tim Chen
  0 siblings, 0 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-26 18:10 UTC (permalink / raw)
  To: Waiman Long, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/26/2018 10:46 AM, Waiman Long wrote:
> On 10/17/2018 01:59 PM, Tim Chen wrote:
>> Mark the non-dumpable processes with TIF_STIBP flag so they will
>> use STIBP and IBPB defenses against Spectre v2 attack from
>> processes in user space.
>>
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++++++++++
>>  1 file changed, 21 insertions(+)
>>
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index 1d317f2..cc77b9e 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -14,6 +14,7 @@
>>  #include <linux/module.h>
>>  #include <linux/nospec.h>
>>  #include <linux/prctl.h>
>> +#include <linux/sched/coredump.h>
>>  
>>  #include <asm/spec-ctrl.h>
>>  #include <asm/cmdline.h>
>> @@ -773,6 +774,26 @@ int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
>>  	}
>>  }
>>  
>> +void arch_set_dumpable(struct task_struct *tsk, unsigned int value)
>> +{
>> +	bool update;
>> +
>> +	if (!static_branch_unlikely(&spectre_v2_app_lite))
>> +		return;
>> +	if (!static_cpu_has(X86_FEATURE_STIBP))
>> +		return;
>> +	if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_NONE)
>> +		return;
> 
> The third if above seems to be a subset of the first one. Do you need to
> do the check one more time?
> 

Yes, it is redundant and can be removed.

Tim

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS
  2018-10-26 16:58   ` Waiman Long
@ 2018-10-26 18:15     ` Tim Chen
  2018-10-28  9:32       ` Thomas Gleixner
  0 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-26 18:15 UTC (permalink / raw)
  To: Waiman Long, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/26/2018 09:58 AM, Waiman Long wrote:
> On 10/17/2018 01:59 PM, Tim Chen wrote:
>> Add static key to indicate whether we are using Enhanced IBRS to mitigate
>> Spectre v2.  This will be used in later patches to disengage STIBP code
>> for Spectre v2 mitigation as STIBP is not needed when Enhanced IBRS is
>> in use.
>>
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  arch/x86/include/asm/nospec-branch.h | 3 +++
>>  arch/x86/kernel/cpu/bugs.c           | 4 ++++
>>  2 files changed, 7 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>> index fd2a8c1..d57e84e 100644
>> --- a/arch/x86/include/asm/nospec-branch.h
>> +++ b/arch/x86/include/asm/nospec-branch.h
>> @@ -3,6 +3,7 @@
>>  #ifndef _ASM_X86_NOSPEC_BRANCH_H_
>>  #define _ASM_X86_NOSPEC_BRANCH_H_
>>  
>> +#include <linux/static_key.h>
>>  #include <asm/alternative.h>
>>  #include <asm/alternative-asm.h>
>>  #include <asm/cpufeatures.h>
>> @@ -228,6 +229,8 @@ enum ssb_mitigation {
>>  extern char __indirect_thunk_start[];
>>  extern char __indirect_thunk_end[];
>>  
>> +DECLARE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
>> +
>>  /*
>>   * On VMEXIT we must ensure that no RSB predictions learned in the guest
>>   * can be followed in the host, by overwriting the RSB completely. Both
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index b2f6b8b..2fc7b4e 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -142,6 +142,9 @@ static const char *spectre_v2_strings[] = {
>>  	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
>>  };
>>  
>> +DEFINE_STATIC_KEY_FALSE(spectre_v2_enhanced_ibrs);
>> +EXPORT_SYMBOL(spectre_v2_enhanced_ibrs);
>> +
>>  #undef pr_fmt
>>  #define pr_fmt(fmt)     "Spectre V2 : " fmt
>>  
>> @@ -386,6 +389,7 @@ static void __init spectre_v2_select_mitigation(void)
>>  			/* Force it so VMEXIT will restore correctly */
>>  			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
>>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> +			static_branch_enable(&spectre_v2_enhanced_ibrs);
>>  			goto specv2_set_mode;
>>  		}
>>  		if (IS_ENABLED(CONFIG_RETPOLINE))
> 
> Why you need a static key for enhanced IBRS? It is supposed to be set at
> boot time and never get changed after that. It will be easier to use a
> feature bit for that instead. We usually use static key when the value
> can be changed at run time.
>

We're close to running out of the feature bits.  So I'm trying not to
use those.

Tim


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-26 17:00   ` Waiman Long
@ 2018-10-26 18:18     ` Tim Chen
  2018-10-26 18:29       ` Tim Chen
  0 siblings, 1 reply; 47+ messages in thread
From: Tim Chen @ 2018-10-26 18:18 UTC (permalink / raw)
  To: Waiman Long, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/26/2018 10:00 AM, Waiman Long wrote:
> On 10/17/2018 01:59 PM, Tim Chen wrote:
>> With enhanced IBRS in use, the application running on sibling CPU will not
>> be able to launch Spectre v2 attack to the application on current CPU.
>> There is no need to use STIBP for this case.  Disable the STIBP code
>> when enhanced IBRS is used.
>>
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  arch/x86/kernel/cpu/bugs.c | 12 +++++++++++-
>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> index 2fc7b4e..6ed82ea 100644
>> --- a/arch/x86/kernel/cpu/bugs.c
>> +++ b/arch/x86/kernel/cpu/bugs.c
>> @@ -327,6 +327,14 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
>>  
>>  static bool stibp_needed(void)
>>  {
>> +	/*
>> +	 * Determine if we want to leave STIBP always on.
>> +	 * Using enhanced IBRS makes using STIBP unnecessary.
>> +	 */
>> +
>> +	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
>> +		return false;
>> +
>>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>>  		return false;
>>  
>> @@ -881,7 +889,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>>  				   ", IBPB" : "",
>>  			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
>>  				   ", IBRS_FW" : "",
>> -			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>> +			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
>> +				   ", Enhanced IBRS" :
>> +			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>>  				   ", STIBP" : "",
>>  			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
>>  				   ", RSB filling" : "",
> 
> The "Enhanced IBRS" is one of the states of spectre_v2_enabled. So you
> don't need to print that out one more time.
> 

This is for the query to to 
/sys/devices/system/cpu/vulnerabilities/spectre_v2

Currently Enhanced IBRS usage is not shown and should be listed.

Tim


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update
  2018-10-26 17:21   ` Waiman Long
@ 2018-10-26 18:25     ` Tim Chen
  0 siblings, 0 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-26 18:25 UTC (permalink / raw)
  To: Waiman Long, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/26/2018 10:21 AM, Waiman Long wrote:
> On 10/17/2018 01:59 PM, Tim Chen wrote:
>> Reorganize the spculation control MSR update code. Currently it is limited
>> to only dynamic update of the Speculative Store Bypass Disable bit.
>> This patch consolidates the logic to check for AMD CPUs that may or may
>> not use this MSR to control SSBD. This prepares us to add logic to update
>> other bits in the SPEC_CTRL MSR cleanly.
>>
>> Originally-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>> ---
>>  arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
>>  1 file changed, 29 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
>> index 8aa4960..789f1bada 100644
>> --- a/arch/x86/kernel/process.c
>> +++ b/arch/x86/kernel/process.c
>> @@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
>>  
>>  static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
>>  {
>> -	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
>> +	u64 msr = x86_spec_ctrl_base;
>> +
>> +	/*
>> +	 * If X86_FEATURE_SSBD is not set, the SSBD
>> +	 * bit is not to be touched.
>> +	 */
>> +	if (static_cpu_has(X86_FEATURE_SSBD))
>> +		msr |= ssbd_tif_to_spec_ctrl(tifn);
>>  
>>  	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>>  }
>>  
>> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
>> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
>> +						      unsigned long tifn)
> 
> I think it will be more intuitive to pass in (tifp ^ tifn) as bitmask of
> changed TIF bits than tifp alone as you are only interested in the
> changed bits anyway. 

I will need to then pass the bitmask plus tifn.  I still need to
pass two parameters and it is a bit more work for the caller
to generate the bit mask.  It is not obvious to me that it is better.

Please also document the input parameters as it is
> hard to know what they are by reading the function alone.

Sure.

> 
> Cheers,
> Longman
>>  {
>> -	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> -		amd_set_ssb_virt_state(tifn);
>> -	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> -		amd_set_core_ssb_state(tifn);
>> -	else
>> +	bool updmsr = false;
>> +
>> +	/* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
>> +	if ((tifp ^ tifn) & _TIF_SSBD) {
>> +		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> +			amd_set_ssb_virt_state(tifn);
>> +		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> +			amd_set_core_ssb_state(tifn);
>> +		else if (static_cpu_has(X86_FEATURE_SSBD))
>> +			updmsr  = true;
>> +	}
>> +
>> +	if (updmsr)
>>  		spec_ctrl_update_msr(tifn);
>>  }
>>  
>>  void speculation_ctrl_update(unsigned long tif)
>>  {
>> +	/*
>> +	 * On this path we're forcing the update, so use ~tif as the
>> +	 * previous flags.
>> +	 */
>>  	preempt_disable();
>> -	__speculation_ctrl_update(tif);
>> +	__speculation_ctrl_update(~tif, tif);
>>  	preempt_enable();
>>  }
>>  
>> @@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>>  	if ((tifp ^ tifn) & _TIF_NOCPUID)
>>  		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>>  
>> -	if ((tifp ^ tifn) & _TIF_SSBD)
>> -		__speculation_ctrl_update(tifn);
>> +	__speculation_ctrl_update(tifp, tifn);
>>  }
>>  
>>  /*
> 
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use
  2018-10-26 18:18     ` Tim Chen
@ 2018-10-26 18:29       ` Tim Chen
  0 siblings, 0 replies; 47+ messages in thread
From: Tim Chen @ 2018-10-26 18:29 UTC (permalink / raw)
  To: Waiman Long, Jiri Kosina, Thomas Gleixner
  Cc: Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
	Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen,
	Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters,
	linux-kernel, x86

On 10/26/2018 11:18 AM, Tim Chen wrote:
> On 10/26/2018 10:00 AM, Waiman Long wrote:
>> On 10/17/2018 01:59 PM, Tim Chen wrote:
>>> With enhanced IBRS in use, the application running on sibling CPU will not
>>> be able to launch Spectre v2 attack to the application on current CPU.
>>> There is no need to use STIBP for this case.  Disable the STIBP code
>>> when enhanced IBRS is used.
>>>
>>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>>> ---
>>>  arch/x86/kernel/cpu/bugs.c | 12 +++++++++++-
>>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>>> index 2fc7b4e..6ed82ea 100644
>>> --- a/arch/x86/kernel/cpu/bugs.c
>>> +++ b/arch/x86/kernel/cpu/bugs.c
>>> @@ -327,6 +327,14 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
>>>  
>>>  static bool stibp_needed(void)
>>>  {
>>> +	/*
>>> +	 * Determine if we want to leave STIBP always on.
>>> +	 * Using enhanced IBRS makes using STIBP unnecessary.
>>> +	 */
>>> +
>>> +	if (static_branch_unlikely(&spectre_v2_enhanced_ibrs))
>>> +		return false;
>>> +
>>>  	if (spectre_v2_enabled == SPECTRE_V2_NONE)
>>>  		return false;
>>>  
>>> @@ -881,7 +889,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>>>  				   ", IBPB" : "",
>>>  			boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ?
>>>  				   ", IBRS_FW" : "",
>>> -			(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>>> +			spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED ?
>>> +				   ", Enhanced IBRS" :
>>> +			    (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ?
>>>  				   ", STIBP" : "",
>>>  			boot_cpu_has(X86_FEATURE_RSB_CTXSW) ?
>>>  				   ", RSB filling" : "",
>>
>> The "Enhanced IBRS" is one of the states of spectre_v2_enabled. So you
>> don't need to print that out one more time.
>>
> 
> This is for the query to to 
> /sys/devices/system/cpu/vulnerabilities/spectre_v2
> 
> Currently Enhanced IBRS usage is not shown and should be listed.
> 

Ah, you are correct.  It is shown in spectre_v2 string.  Will remove
the extra Enhanced IBRS string.

Tim


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS
  2018-10-26 18:15     ` Tim Chen
@ 2018-10-28  9:32       ` Thomas Gleixner
  0 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2018-10-28  9:32 UTC (permalink / raw)
  To: Tim Chen
  Cc: Waiman Long, Jiri Kosina, Tom Lendacky, Ingo Molnar,
	Peter Zijlstra, Josh Poimboeuf, Andrea Arcangeli,
	David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler,
	Asit Mallick, Arjan van de Ven, Jon Masters, linux-kernel, x86

On Fri, 26 Oct 2018, Tim Chen wrote:
> On 10/26/2018 09:58 AM, Waiman Long wrote:
> >> @@ -386,6 +389,7 @@ static void __init spectre_v2_select_mitigation(void)
> >>  			/* Force it so VMEXIT will restore correctly */
> >>  			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
> >>  			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> >> +			static_branch_enable(&spectre_v2_enhanced_ibrs);
> >>  			goto specv2_set_mode;
> >>  		}
> >>  		if (IS_ENABLED(CONFIG_RETPOLINE))
> > 
> > Why you need a static key for enhanced IBRS? It is supposed to be set at
> > boot time and never get changed after that. It will be easier to use a
> > feature bit for that instead. We usually use static key when the value
> > can be changed at run time.
> >
> 
> We're close to running out of the feature bits.  So I'm trying not to
> use those.

Software feature bits can be extended when needed. That's really a non issue.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2018-10-28  9:44 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-17 17:59 [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Tim Chen
2018-10-17 17:59 ` [Patch v3 01/13] x86/speculation: Clean up spectre_v2_parse_cmdline Tim Chen
2018-10-18 12:43   ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 02/13] x86/speculation: Remove unnecessary ret variable in cpu_show_common Tim Chen
2018-10-18 12:46   ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 03/13] x86/speculation: Add static key for Enhanced IBRS Tim Chen
2018-10-18 12:50   ` Thomas Gleixner
2018-10-26 16:58   ` Waiman Long
2018-10-26 18:15     ` Tim Chen
2018-10-28  9:32       ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 04/13] x86/speculation: Disable STIBP when enhanced IBRS is in use Tim Chen
2018-10-18 12:58   ` Thomas Gleixner
2018-10-26 17:00   ` Waiman Long
2018-10-26 18:18     ` Tim Chen
2018-10-26 18:29       ` Tim Chen
2018-10-17 17:59 ` [Patch v3 05/13] x86/smt: Create cpu_smt_enabled static key for SMT specific code Tim Chen
2018-10-18 13:03   ` Thomas Gleixner
2018-10-19  7:51   ` Peter Zijlstra
2018-10-17 17:59 ` [Patch v3 06/13] mm: Pass task instead of task->mm as argument to set_dumpable Tim Chen
2018-10-18 13:22   ` Thomas Gleixner
2018-10-19 20:02   ` Peter Zijlstra
2018-10-17 17:59 ` [Patch v3 07/13] x86/process Add arch_set_dumpable Tim Chen
2018-10-18 13:28   ` Thomas Gleixner
2018-10-18 18:46     ` Tim Chen
2018-10-19 19:12       ` Thomas Gleixner
2018-10-19 20:16         ` Thomas Gleixner
2018-10-22 23:55           ` Tim Chen
2018-10-17 17:59 ` [Patch v3 08/13] x86/speculation: Rename SSBD update functions Tim Chen
2018-10-18 13:37   ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update Tim Chen
2018-10-18 13:47   ` Thomas Gleixner
2018-10-26 17:21   ` Waiman Long
2018-10-26 18:25     ` Tim Chen
2018-10-17 17:59 ` [Patch v3 10/13] x86/speculation: Add per thread STIBP flag Tim Chen
2018-10-18 13:53   ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 11/13] x86/speculation: Add Spectre v2 lite app to app protection mode Tim Chen
2018-10-18 15:12   ` Thomas Gleixner
2018-10-17 17:59 ` [Patch v3 12/13] x86/speculation: Protect non-dumpable processes against Spectre v2 attack Tim Chen
2018-10-18 15:17   ` Thomas Gleixner
2018-10-26 17:46   ` Waiman Long
2018-10-26 18:10     ` Tim Chen
2018-10-17 17:59 ` [Patch v3 13/13] x86/speculation: Create PRCTL interface to restrict indirect branch speculation Tim Chen
2018-10-17 19:12   ` Randy Dunlap
2018-10-18 15:31   ` Thomas Gleixner
2018-10-19  7:57 ` [Patch v3 00/13] Provide process property based options to enable Spectre v2 userspace-userspace protection Peter Zijlstra
2018-10-19 16:43   ` Tim Chen
2018-10-19 18:38     ` Peter Zijlstra

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).