* [patch 0/6] SSB update 0
@ 2018-05-04 13:23 Thomas Gleixner
  2018-05-04 13:23 ` [patch 1/6] SSB update 1 Thomas Gleixner
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

Following up on the discussion about seccomp and enforced mitigation, and
after Andi clarified his concerns (thanks Andi!), Kees and I came up with
the following solution:

1) Add a prctl control which allows force-disabling the mitigation. Once
   set, this cannot be undone.

2) Make seccomp use that new control, because the seccomp semantics do not
   allow widening restrictions after they have been applied.

3) Add a seccomp filter flag which allows seccomp users to opt out of the
   mitigation enforcement. This has no effect when the mitigation has been
   enforced globally or via the prctl before.

4) Add a mitigation option "seccomp" for the command line, which enables
   the seccomp mechanism plus the prctl. Selecting "prctl" disables the
   seccomp mechanism.

5) Make "seccomp" the default mitigation mode for now.

Applies on top of the master branch. Git bundle follows in separate mail.

Thanks,

	tglx


* [patch 1/6] SSB update 1
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 13:23 ` [patch 2/6] SSB update 2 Thomas Gleixner
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

For certain use cases it is desirable to enforce the mitigation so it
cannot be undone afterwards. That's important for loader stubs which want
to prevent a child from disabling the mitigation again. This will also be
used for seccomp().
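
For illustration, a minimal userspace sketch of the intended use from such
a loader stub (assumes headers which already carry the new PR_SPEC_*
defines; error handling kept minimal):

#include <sys/prctl.h>
#include <linux/prctl.h>
#include <unistd.h>

/* Force disable SSB, then hand control to the untrusted program. */
static void loader_stub(char *const argv[], char *const envp[])
{
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_FORCE_DISABLE, 0, 0))
		_exit(1);
	/*
	 * From here on neither this task nor its children can re-enable
	 * speculation; prctl(..., PR_SPEC_ENABLE, ...) fails with EPERM.
	 */
	execve(argv[0], argv, envp);
	_exit(1);
}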

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/userspace-api/spec_ctrl.rst |   34 ++++++++++++++++++------------
 arch/x86/kernel/cpu/bugs.c                |   22 +++++++++++++++----
 fs/proc/array.c                           |    3 ++
 include/linux/sched.h                     |    4 +++
 include/uapi/linux/prctl.h                |    1 
 5 files changed, 46 insertions(+), 18 deletions(-)

--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -25,19 +25,21 @@ PR_GET_SPECULATION_CTRL
 -----------------------
 
 PR_GET_SPECULATION_CTRL returns the state of the speculation misfeature
-which is selected with arg2 of prctl(2). The return value uses bits 0-2 with
+which is selected with arg2 of prctl(2). The return value uses bits 0-3 with
 the following meaning:
 
-==== ================ ===================================================
-Bit  Define           Description
-==== ================ ===================================================
-0    PR_SPEC_PRCTL    Mitigation can be controlled per task by
-                      PR_SET_SPECULATION_CTRL
-1    PR_SPEC_ENABLE   The speculation feature is enabled, mitigation is
-                      disabled
-2    PR_SPEC_DISABLE  The speculation feature is disabled, mitigation is
-                      enabled
-==== ================ ===================================================
+==== ===================== ===================================================
+Bit  Define                Description
+==== ===================== ===================================================
+0    PR_SPEC_PRCTL         Mitigation can be controlled per task by
+                           PR_SET_SPECULATION_CTRL
+1    PR_SPEC_ENABLE        The speculation feature is enabled, mitigation is
+                           disabled
+2    PR_SPEC_DISABLE       The speculation feature is disabled, mitigation is
+                           enabled
+3    PR_SPEC_FORCE_DISABLE Same as PR_SPEC_DISABLE, but cannot be undone. A
+                           subsequent prctl(..., PR_SPEC_ENABLE) will fail.
+==== ===================== ===================================================
 
 If all bits are 0 the CPU is not affected by the speculation misfeature.
 
@@ -47,9 +49,11 @@ misfeature will fail.
 
 PR_SET_SPECULATION_CTRL
 -----------------------
+
 PR_SET_SPECULATION_CTRL allows to control the speculation misfeature, which
 is selected by arg2 of :manpage:`prctl(2)` per task. arg3 is used to hand
-in the control value, i.e. either PR_SPEC_ENABLE or PR_SPEC_DISABLE.
+in the control value, i.e. either PR_SPEC_ENABLE or PR_SPEC_DISABLE or
+PR_SPEC_FORCE_DISABLE.
 
 Common error codes
 ------------------
@@ -70,10 +74,13 @@ Value   Meaning
 0       Success
 
 ERANGE  arg3 is incorrect, i.e. it's neither PR_SPEC_ENABLE nor
-        PR_SPEC_DISABLE
+        PR_SPEC_DISABLE nor PR_SPEC_FORCE_DISABLE
 
 ENXIO   Control of the selected speculation misfeature is not possible.
         See PR_GET_SPECULATION_CTRL.
+
+EPERM   Speculation was disabled with PR_SPEC_FORCE_DISABLE and caller
+        tried to enable it again.
 ======= =================================================================
 
 Speculation misfeature controls
@@ -84,3 +91,4 @@ Speculation misfeature controls
    * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
    * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
+   * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -537,10 +537,23 @@ static int ssb_prctl_set(struct task_str
 	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL)
 		return -ENXIO;
 
-	if (ctrl == PR_SPEC_ENABLE)
+	switch (ctrl) {
+	case PR_SPEC_ENABLE:
+		/* If speculation is force disabled, enable is not allowed */
+		if (task_spec_force_disable(task))
+			return -EPERM;
 		clear_tsk_thread_flag(task, TIF_RDS);
-	else
+		break;
+	case PR_SPEC_DISABLE:
 		set_tsk_thread_flag(task, TIF_RDS);
+		break;
+	case PR_SPEC_FORCE_DISABLE:
+		set_tsk_thread_flag(task, TIF_RDS);
+		task_set_spec_force_disable(task);
+		break;
+	default:
+		return -ERANGE;
+	}
 
 	/*
 	 * If being set on non-current task, delay setting the CPU
@@ -558,6 +571,8 @@ static int ssb_prctl_get(struct task_str
 	case SPEC_STORE_BYPASS_DISABLE:
 		return PR_SPEC_DISABLE;
 	case SPEC_STORE_BYPASS_PRCTL:
+		if (task_spec_force_disable(task))
+			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (test_tsk_thread_flag(task, TIF_RDS))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
@@ -571,9 +586,6 @@ static int ssb_prctl_get(struct task_str
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl)
 {
-	if (ctrl != PR_SPEC_ENABLE && ctrl != PR_SPEC_DISABLE)
-		return -ERANGE;
-
 	switch (which) {
 	case PR_SPEC_STORE_BYPASS:
 		return ssb_prctl_set(task, ctrl);
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -344,6 +344,9 @@ static inline void task_seccomp(struct s
 	case PR_SPEC_NOT_AFFECTED:
 		seq_printf(m, "not vulnerable");
 		break;
+	case PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE:
+		seq_printf(m, "thread force mitigated");
+		break;
 	case PR_SPEC_PRCTL | PR_SPEC_DISABLE:
 		seq_printf(m, "thread mitigated");
 		break;
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1393,6 +1393,7 @@ static inline bool is_percpu_thread(void
 #define PFA_NO_NEW_PRIVS		0	/* May not gain new privileges. */
 #define PFA_SPREAD_PAGE			1	/* Spread page cache over cpuset */
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
+#define PFA_SPEC_FORCE_DISABLE		3	/* Force disable speculation */
 
 
 #define TASK_PFA_TEST(name, func)					\
@@ -1418,6 +1419,9 @@ TASK_PFA_TEST(SPREAD_SLAB, spread_slab)
 TASK_PFA_SET(SPREAD_SLAB, spread_slab)
 TASK_PFA_CLEAR(SPREAD_SLAB, spread_slab)
 
+TASK_PFA_TEST(SPEC_FORCE_DISABLE, spec_force_disable)
+TASK_PFA_SET(SPEC_FORCE_DISABLE, spec_force_disable)
+
 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
 {
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -217,5 +217,6 @@ struct prctl_mm_map {
 # define PR_SPEC_PRCTL			(1UL << 0)
 # define PR_SPEC_ENABLE			(1UL << 1)
 # define PR_SPEC_DISABLE		(1UL << 2)
+# define PR_SPEC_FORCE_DISABLE		(1UL << 3)
 
 #endif /* _LINUX_PRCTL_H */


* [patch 2/6] SSB update 2
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
  2018-05-04 13:23 ` [patch 1/6] SSB update 1 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 13:23 ` [patch 3/6] SSB update 3 Thomas Gleixner
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

Use PR_SPEC_FORCE_DISABLE in seccomp() because seccomp does not allow
restrictions to be widened.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/seccomp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -239,7 +239,7 @@ static inline void spec_mitigate(struct
 	int state = arch_prctl_spec_ctrl_get(task, which);
 
 	if (state > 0 && (state & PR_SPEC_PRCTL))
-		arch_prctl_spec_ctrl_set(task, which, PR_SPEC_DISABLE);
+		arch_prctl_spec_ctrl_set(task, which, PR_SPEC_FORCE_DISABLE);
 }
 
 static inline void seccomp_assign_mode(struct task_struct *task,


* [patch 3/6] SSB update 3
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
  2018-05-04 13:23 ` [patch 1/6] SSB update 1 Thomas Gleixner
  2018-05-04 13:23 ` [patch 2/6] SSB update 2 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 13:23 ` [patch 4/6] SSB update 4 Thomas Gleixner
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

There's no reason for these to be changed after boot.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -129,7 +129,8 @@ static const char *spectre_v2_strings[]
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
-static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
+static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+	SPECTRE_V2_NONE;
 
 void x86_spec_ctrl_set(u64 val)
 {
@@ -407,7 +408,7 @@ static void __init spectre_v2_select_mit
 #undef pr_fmt
 #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
 
-static enum ssb_mitigation ssb_mode = SPEC_STORE_BYPASS_NONE;
+static enum ssb_mitigation ssb_mode __ro_after_init = SPEC_STORE_BYPASS_NONE;
 
 /* The kernel command line selection */
 enum ssb_mitigation_cmd {


* [patch 4/6] SSB update 4
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
                   ` (2 preceding siblings ...)
  2018-05-04 13:23 ` [patch 3/6] SSB update 3 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 16:25   ` [MODERATED] " Kees Cook
  2018-05-04 13:23 ` [patch 5/6] SSB update 5 Thomas Gleixner
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

If a seccomp user is not interested in Speculative Store Bypass mitigation
by default, it can set the new SECCOMP_FILTER_FLAG_SPEC_ALLOW flag when
adding filters.
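
For illustration, a minimal sketch of opting out while installing a filter
(assumes a uapi seccomp.h which already carries the new flag; most libcs
provide no seccomp() wrapper, hence the raw syscall):

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Install a filter without engaging the SSB mitigation for this task. */
static long install_filter_spec_allow(const struct sock_fprog *prog)
{
	/* The caller must have set no_new_privs or hold CAP_SYS_ADMIN. */
	return syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
		       SECCOMP_FILTER_FLAG_SPEC_ALLOW, prog);
}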

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/uapi/linux/seccomp.h                  |  1 +
 kernel/seccomp.c                              | 19 +++++++++++--------
 tools/testing/selftests/seccomp/seccomp_bpf.c |  7 ++++++-
 3 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
index 2a0bd9dd104d..f88b9e6c32c6 100644
--- a/include/uapi/linux/seccomp.h
+++ b/include/uapi/linux/seccomp.h
@@ -19,6 +19,7 @@
 /* Valid flags for SECCOMP_SET_MODE_FILTER */
 #define SECCOMP_FILTER_FLAG_TSYNC	1
 #define SECCOMP_FILTER_FLAG_LOG		2
+#define SECCOMP_FILTER_FLAG_SPEC_ALLOW	3
 
 /*
  * All BPF programs must return a 32-bit value.
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 2c819d65e15f..53eb946120c1 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -243,7 +243,8 @@ static inline void spec_mitigate(struct task_struct *task,
 }
 
 static inline void seccomp_assign_mode(struct task_struct *task,
-				       unsigned long seccomp_mode)
+				       unsigned long seccomp_mode,
+				       unsigned long flags)
 {
 	assert_spin_locked(&task->sighand->siglock);
 
@@ -253,8 +254,9 @@ static inline void seccomp_assign_mode(struct task_struct *task,
 	 * filter) is set.
 	 */
 	smp_mb__before_atomic();
-	/* Assume seccomp processes want speculation flaw mitigation. */
-	spec_mitigate(task, PR_SPEC_STORE_BYPASS);
+	/* Assume default seccomp processes want spec flaw mitigation. */
+	if ((flags & SECCOMP_FILTER_FLAG_SPEC_ALLOW) == 0)
+		spec_mitigate(task, PR_SPEC_STORE_BYPASS);
 	set_tsk_thread_flag(task, TIF_SECCOMP);
 }
 
@@ -322,7 +324,7 @@ static inline pid_t seccomp_can_sync_threads(void)
  * without dropping the locks.
  *
  */
-static inline void seccomp_sync_threads(void)
+static inline void seccomp_sync_threads(unsigned long flags)
 {
 	struct task_struct *thread, *caller;
 
@@ -363,7 +365,8 @@ static inline void seccomp_sync_threads(void)
 		 * allow one thread to transition the other.
 		 */
 		if (thread->seccomp.mode == SECCOMP_MODE_DISABLED)
-			seccomp_assign_mode(thread, SECCOMP_MODE_FILTER);
+			seccomp_assign_mode(thread, SECCOMP_MODE_FILTER,
+					    flags);
 	}
 }
 
@@ -486,7 +489,7 @@ static long seccomp_attach_filter(unsigned int flags,
 
 	/* Now that the new filter is in place, synchronize to all threads. */
 	if (flags & SECCOMP_FILTER_FLAG_TSYNC)
-		seccomp_sync_threads();
+		seccomp_sync_threads(flags);
 
 	return 0;
 }
@@ -835,7 +838,7 @@ static long seccomp_set_mode_strict(void)
 #ifdef TIF_NOTSC
 	disable_TSC();
 #endif
-	seccomp_assign_mode(current, seccomp_mode);
+	seccomp_assign_mode(current, seccomp_mode, 0);
 	ret = 0;
 
 out:
@@ -893,7 +896,7 @@ static long seccomp_set_mode_filter(unsigned int flags,
 	/* Do not free the successfully attached filter. */
 	prepared = NULL;
 
-	seccomp_assign_mode(current, seccomp_mode);
+	seccomp_assign_mode(current, seccomp_mode, flags);
 out:
 	spin_unlock_irq(&current->sighand->siglock);
 	if (flags & SECCOMP_FILTER_FLAG_TSYNC)
diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
index 168c66d74fc5..c281d961c935 100644
--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
+++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
@@ -141,6 +141,10 @@ struct seccomp_data {
 #define SECCOMP_FILTER_FLAG_LOG 2
 #endif
 
+#ifndef SECCOMP_FILTER_FLAG_SPEC_ALLOW
+#define SECCOMP_FILTER_FLAG_SPEC_ALLOW 3
+#endif
+
 #ifndef PTRACE_SECCOMP_GET_METADATA
 #define PTRACE_SECCOMP_GET_METADATA	0x420d
 
@@ -2072,7 +2076,8 @@ TEST(seccomp_syscall_mode_lock)
 TEST(detect_seccomp_filter_flags)
 {
 	unsigned int flags[] = { SECCOMP_FILTER_FLAG_TSYNC,
-				 SECCOMP_FILTER_FLAG_LOG };
+				 SECCOMP_FILTER_FLAG_LOG,
+				 SECCOMP_FILTER_FLAG_SPEC_ALLOW };
 	unsigned int flag, all_flags;
 	int i;
 	long ret;
-- 
2.17.0


* [patch 5/6] SSB update 5
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
                   ` (3 preceding siblings ...)
  2018-05-04 13:23 ` [patch 4/6] SSB update 4 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 13:23 ` [patch 6/6] SSB update 6 Thomas Gleixner
  2018-05-04 13:34 ` [patch 0/6] SSB update 0 Thomas Gleixner
  6 siblings, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

The mitigation control is simpler to implement in architecture code, as it
avoids the extra function call to check the mode. Aside from that, having
an explicit seccomp-enabled mode in the architecture mitigations would
require even more workarounds.

Move it into architecture code and provide a weak function in the seccomp
code. Remove the 'which' argument as this allows the architecture to decide
which mitigations are relevant for seccomp.
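
For readers unfamiliar with the mechanism: the seccomp core carries a
__weak no-op default, and an architecture overrides it at link time simply
by providing a strong definition, roughly:

/* kernel/seccomp.c: default, used when the architecture provides nothing */
void __weak arch_seccomp_spec_mitigate(struct task_struct *task) { }

/* arch/x86/kernel/cpu/bugs.c: strong definition replaces the weak one */
void arch_seccomp_spec_mitigate(struct task_struct *task)
{
	ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
}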

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/bugs.c |   27 ++++++++++++++++-----------
 include/linux/nospec.h     |    2 ++
 kernel/seccomp.c           |   15 ++-------------
 3 files changed, 20 insertions(+), 24 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -566,6 +566,22 @@ static int ssb_prctl_set(struct task_str
 	return 0;
 }
 
+int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
+			     unsigned long ctrl)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssb_prctl_set(task, ctrl);
+	default:
+		return -ENODEV;
+	}
+}
+
+void arch_seccomp_spec_mitigate(struct task_struct *task)
+{
+	ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+}
+
 static int ssb_prctl_get(struct task_struct *task)
 {
 	switch (ssb_mode) {
@@ -584,17 +600,6 @@ static int ssb_prctl_get(struct task_str
 	}
 }
 
-int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
-			     unsigned long ctrl)
-{
-	switch (which) {
-	case PR_SPEC_STORE_BYPASS:
-		return ssb_prctl_set(task, ctrl);
-	default:
-		return -ENODEV;
-	}
-}
-
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 {
 	switch (which) {
--- a/include/linux/nospec.h
+++ b/include/linux/nospec.h
@@ -62,5 +62,7 @@ static inline unsigned long array_index_
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which);
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl);
+/* Speculation control for seccomp enforced mitigation */
+void arch_seccomp_spec_mitigate(struct task_struct *task);
 
 #endif /* _LINUX_NOSPEC_H */
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -229,18 +229,7 @@ static inline bool seccomp_may_assign_mo
 	return true;
 }
 
-/*
- * If a given speculation mitigation is opt-in (prctl()-controlled),
- * select it, by disabling speculation (enabling mitigation).
- */
-static inline void spec_mitigate(struct task_struct *task,
-				 unsigned long which)
-{
-	int state = arch_prctl_spec_ctrl_get(task, which);
-
-	if (state > 0 && (state & PR_SPEC_PRCTL))
-		arch_prctl_spec_ctrl_set(task, which, PR_SPEC_FORCE_DISABLE);
-}
+void __weak arch_seccomp_spec_mitigate(struct task_struct *task) { }
 
 static inline void seccomp_assign_mode(struct task_struct *task,
 				       unsigned long seccomp_mode,
@@ -256,7 +245,7 @@ static inline void seccomp_assign_mode(s
 	smp_mb__before_atomic();
 	/* Assume default seccomp processes want spec flaw mitigation. */
 	if ((flags & SECCOMP_FILTER_FLAG_SPEC_ALLOW) == 0)
-		spec_mitigate(task, PR_SPEC_STORE_BYPASS);
+		arch_seccomp_spec_mitigate(task);
 	set_tsk_thread_flag(task, TIF_SECCOMP);
 }
 


* [patch 6/6] SSB update 6
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
                   ` (4 preceding siblings ...)
  2018-05-04 13:23 ` [patch 5/6] SSB update 5 Thomas Gleixner
@ 2018-05-04 13:23 ` Thomas Gleixner
  2018-05-04 16:58   ` [MODERATED] " Kees Cook
  2018-05-04 13:34 ` [patch 0/6] SSB update 0 Thomas Gleixner
  6 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:23 UTC (permalink / raw)
  To: speck

Unless explicitly opted out of, anything running under seccomp will have
SSB mitigations enabled. Choosing the "prctl" mode will disable this.
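
For illustration, a sketch of how a confined task can observe that state
via PR_GET_SPECULATION_CTRL (assumes the PR_SPEC_* defines from patch 1
and the ssb_prctl_get() hunk noted in the follow-up to this patch):

#include <sys/prctl.h>
#include <linux/prctl.h>

/*
 * Returns non-zero when SSB speculation has been force disabled for this
 * task, e.g. by a seccomp filter installed without
 * SECCOMP_FILTER_FLAG_SPEC_ALLOW while running in "seccomp" mode.
 */
static int ssb_force_disabled(void)
{
	int state = prctl(PR_GET_SPECULATION_CTRL,
			  PR_SPEC_STORE_BYPASS, 0, 0, 0);

	return state > 0 && (state & PR_SPEC_FORCE_DISABLE);
}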

[ tglx: Adjusted it to the new arch_seccomp_spec_mitigate() mechanism ]

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |   26 +++++++++++++-------
 arch/x86/include/asm/nospec-branch.h            |    1 
 arch/x86/kernel/cpu/bugs.c                      |   30 ++++++++++++++++--------
 3 files changed, 39 insertions(+), 18 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4049,19 +4049,27 @@
 			This parameter controls whether the Speculative Store
 			Bypass optimization is used.
 
-			on     - Unconditionally disable Speculative Store Bypass
-			off    - Unconditionally enable Speculative Store Bypass
-			auto   - Kernel detects whether the CPU model contains an
-				 implementation of Speculative Store Bypass and
-				 picks the most appropriate mitigation.
-			prctl  - Control Speculative Store Bypass per thread
-				 via prctl. Speculative Store Bypass is enabled
-				 for a process by default. The state of the control
-				 is inherited on fork.
+			on      - Unconditionally disable Speculative Store Bypass
+			off     - Unconditionally enable Speculative Store Bypass
+			auto    - Kernel detects whether the CPU model contains an
+				  implementation of Speculative Store Bypass and
+				  picks the most appropriate mitigation. If the
+				  CPU is not vulnerable, "off" is selected. If the
+				  CPU is vulnerable the default mitigation is
+				  architecture and Kconfig dependent. See below.
+			prctl   - Control Speculative Store Bypass per thread
+				  via prctl. Speculative Store Bypass is enabled
+				  for a process by default. The state of the control
+				  is inherited on fork.
+			seccomp - Same as "prctl" above, but all seccomp threads
+				  will disable SSB unless they explicitly opt out.
 
 			Not specifying this option is equivalent to
 			spec_store_bypass_disable=auto.
 
+			Default mitigations:
+			X86:	If CONFIG_SECCOMP=y "seccomp", otherwise "prctl"
+
 	spia_io_base=	[HW,MTD]
 	spia_fio_base=
 	spia_pedr=
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -233,6 +233,7 @@ enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
 	SPEC_STORE_BYPASS_DISABLE,
 	SPEC_STORE_BYPASS_PRCTL,
+	SPEC_STORE_BYPASS_SECCOMP,
 };
 
 extern char __indirect_thunk_start[];
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -416,22 +416,25 @@ enum ssb_mitigation_cmd {
 	SPEC_STORE_BYPASS_CMD_AUTO,
 	SPEC_STORE_BYPASS_CMD_ON,
 	SPEC_STORE_BYPASS_CMD_PRCTL,
+	SPEC_STORE_BYPASS_CMD_SECCOMP,
 };
 
 static const char *ssb_strings[] = {
 	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
 	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled",
-	[SPEC_STORE_BYPASS_PRCTL]	= "Mitigation: Speculative Store Bypass disabled via prctl"
+	[SPEC_STORE_BYPASS_PRCTL]	= "Mitigation: Speculative Store Bypass disabled via prctl",
+	[SPEC_STORE_BYPASS_SECCOMP]	= "Mitigation: Speculative Store Bypass disabled via prctl and seccomp",
 };
 
 static const struct {
 	const char *option;
 	enum ssb_mitigation_cmd cmd;
 } ssb_mitigation_options[] = {
-	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },  /* Platform decides */
-	{ "on",		SPEC_STORE_BYPASS_CMD_ON },    /* Disable Speculative Store Bypass */
-	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },  /* Don't touch Speculative Store Bypass */
-	{ "prctl",	SPEC_STORE_BYPASS_CMD_PRCTL }, /* Disable Speculative Store Bypass via prctl */
+	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },    /* Platform decides */
+	{ "on",		SPEC_STORE_BYPASS_CMD_ON },      /* Disable Speculative Store Bypass */
+	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },    /* Don't touch Speculative Store Bypass */
+	{ "prctl",	SPEC_STORE_BYPASS_CMD_PRCTL },   /* Disable Speculative Store Bypass via prctl */
+	{ "seccomp",	SPEC_STORE_BYPASS_CMD_SECCOMP }, /* Disable Speculative Store Bypass via prctl and seccomp */
 };
 
 static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
@@ -481,8 +484,15 @@ static enum ssb_mitigation_cmd __init __
 
 	switch (cmd) {
 	case SPEC_STORE_BYPASS_CMD_AUTO:
-		/* Choose prctl as the default mode */
-		mode = SPEC_STORE_BYPASS_PRCTL;
+	case SPEC_STORE_BYPASS_CMD_SECCOMP:
+		/*
+		 * Choose prctl+seccomp as the default mode if seccomp is
+		 * enabled.
+		 */
+		if (IS_ENABLED(CONFIG_SECCOMP))
+			mode = SPEC_STORE_BYPASS_SECCOMP;
+		else
+			mode = SPEC_STORE_BYPASS_PRCTL;
 		break;
 	case SPEC_STORE_BYPASS_CMD_ON:
 		mode = SPEC_STORE_BYPASS_DISABLE;
@@ -535,7 +545,8 @@ static int ssb_prctl_set(struct task_str
 {
 	bool rds = !!test_tsk_thread_flag(task, TIF_RDS);
 
-	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL)
+	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL &&
+	    ssb_mode != SPEC_STORE_BYPASS_SECCOMP)
 		return -ENXIO;
 
 	switch (ctrl) {
@@ -579,7 +590,8 @@ int arch_prctl_spec_ctrl_set(struct task
 
 void arch_seccomp_spec_mitigate(struct task_struct *task)
 {
-	ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
+		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
 }
 
 static int ssb_prctl_get(struct task_struct *task)


* Re: [patch 0/6] SSB update 0
  2018-05-04 13:23 [patch 0/6] SSB update 0 Thomas Gleixner
                   ` (5 preceding siblings ...)
  2018-05-04 13:23 ` [patch 6/6] SSB update 6 Thomas Gleixner
@ 2018-05-04 13:34 ` Thomas Gleixner
  2018-05-04 17:34   ` [MODERATED] " Konrad Rzeszutek Wilk
  2018-05-04 17:52   ` [MODERATED] Is: bikeshedding the bit name (feedback requested)Was:e: " Konrad Rzeszutek Wilk
  6 siblings, 2 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 13:34 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 160 bytes --]

On Fri, 4 May 2018, speck for Thomas Gleixner wrote:
> 
> Applies on top of the master branch. Git bundle follows in separate mail.

Attached.

Thanks,
 
 	tglx

[-- Attachment #2: Type: application/octet-stream, Size: 7920 bytes --]


* [MODERATED] Re: [patch 4/6] SSB update 4
  2018-05-04 13:23 ` [patch 4/6] SSB update 4 Thomas Gleixner
@ 2018-05-04 16:25   ` Kees Cook
  0 siblings, 0 replies; 17+ messages in thread
From: Kees Cook @ 2018-05-04 16:25 UTC (permalink / raw)
  To: speck

On Fri, May 04, 2018 at 03:23:21PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 4/6] seccomp: Add filter flag to opt-out of SSB mitigation
> From: Kees Cook <keescook@chromium.org>
> 
> If a seccomp user is not interested in Speculative Store Bypass mitigation
> by default, it can set the new SECCOMP_FILTER_FLAG_SPEC_ALLOW flag when
> adding filters.

Ugh, I busted the bit values. Attached is a fix for this patch...
---
From: Kees Cook <keescook@chromium.org>
Subject: [PATCH] seccomp: Actually use a bit field for flags

This fixes the new flag to be the 3rd bit, not a value of 3. Oops.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/linux/seccomp.h                       |  5 +++--
 include/uapi/linux/seccomp.h                  |  6 +++---
 tools/testing/selftests/seccomp/seccomp_bpf.c | 17 ++++++++++++++---
 3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index c723a5c4e3ff..e5320f6c8654 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -4,8 +4,9 @@
 
 #include <uapi/linux/seccomp.h>
 
-#define SECCOMP_FILTER_FLAG_MASK	(SECCOMP_FILTER_FLAG_TSYNC | \
-					 SECCOMP_FILTER_FLAG_LOG)
+#define SECCOMP_FILTER_FLAG_MASK	(SECCOMP_FILTER_FLAG_TSYNC	| \
+					 SECCOMP_FILTER_FLAG_LOG	| \
+					 SECCOMP_FILTER_FLAG_SPEC_ALLOW)
 
 #ifdef CONFIG_SECCOMP
 
diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
index f88b9e6c32c6..9efc0e73d50b 100644
--- a/include/uapi/linux/seccomp.h
+++ b/include/uapi/linux/seccomp.h
@@ -17,9 +17,9 @@
 #define SECCOMP_GET_ACTION_AVAIL	2
 
 /* Valid flags for SECCOMP_SET_MODE_FILTER */
-#define SECCOMP_FILTER_FLAG_TSYNC	1
-#define SECCOMP_FILTER_FLAG_LOG		2
-#define SECCOMP_FILTER_FLAG_SPEC_ALLOW	3
+#define SECCOMP_FILTER_FLAG_TSYNC	(1UL << 0)
+#define SECCOMP_FILTER_FLAG_LOG		(1UL << 1)
+#define SECCOMP_FILTER_FLAG_SPEC_ALLOW	(1UL << 2)
 
 /*
  * All BPF programs must return a 32-bit value.
diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
index c281d961c935..e1473234968d 100644
--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
+++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
@@ -134,15 +134,15 @@ struct seccomp_data {
 #endif
 
 #ifndef SECCOMP_FILTER_FLAG_TSYNC
-#define SECCOMP_FILTER_FLAG_TSYNC 1
+#define SECCOMP_FILTER_FLAG_TSYNC (1UL << 0)
 #endif
 
 #ifndef SECCOMP_FILTER_FLAG_LOG
-#define SECCOMP_FILTER_FLAG_LOG 2
+#define SECCOMP_FILTER_FLAG_LOG (1UL << 1)
 #endif
 
 #ifndef SECCOMP_FILTER_FLAG_SPEC_ALLOW
-#define SECCOMP_FILTER_FLAG_SPEC_ALLOW 3
+#define SECCOMP_FILTER_FLAG_SPEC_ALLOW (1UL << 2)
 #endif
 
 #ifndef PTRACE_SECCOMP_GET_METADATA
@@ -2084,7 +2084,18 @@ TEST(detect_seccomp_filter_flags)
 
 	/* Test detection of known-good filter flags */
 	for (i = 0, all_flags = 0; i < ARRAY_SIZE(flags); i++) {
+		int bits = 0;
+
+		flag = flags[i];
+		/* Make sure the flag is a single bit! */
+		while (flag) {
+			if (flag & 0x1)
+				bits ++;
+			flag >>= 1;
+		}
+		ASSERT_EQ(1, bits);
 		flag = flags[i];
+
 		ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
 		ASSERT_NE(ENOSYS, errno) {
 			TH_LOG("Kernel does not support seccomp syscall!");
-- 
2.17.0


-- 
Kees Cook                                            @outflux.net


* [MODERATED] Re: [patch 6/6] SSB update 6
  2018-05-04 13:23 ` [patch 6/6] SSB update 6 Thomas Gleixner
@ 2018-05-04 16:58   ` Kees Cook
  2018-05-04 18:42     ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Kees Cook @ 2018-05-04 16:58 UTC (permalink / raw)
  To: speck

On Fri, May 04, 2018 at 03:23:23PM +0200, speck for Thomas Gleixner wrote:
> Subject: [patch 6/6] x86/speculation: Make "seccomp" the default mode for Speculative Store Bypass
> From: Kees Cook <keescook@chromium.org>
> 
> Unless explicitly opted out of, anything running under seccomp will have
> SSB mitigations enabled. Choosing the "prctl" mode will disable this.
> 
> [ tglx: Adjusted it to the new arch_seccomp_spec_mitigate() mechanism ]

It looks like the following hunk got missed when adjusting my earlier
patch:

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 993a572dc06a..686847b00a20 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -599,6 +607,7 @@ static int ssb_prctl_get(struct task_struct *task)
 	switch (ssb_mode) {
 	case SPEC_STORE_BYPASS_DISABLE:
 		return PR_SPEC_DISABLE;
+	case SPEC_STORE_BYPASS_SECCOMP:
 	case SPEC_STORE_BYPASS_PRCTL:
 		if (task_spec_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;


And this might be useful to fix later uses of pr_*:


diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 993a572dc06a..686847b00a20 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -540,6 +542,7 @@ static void ssb_select_mitigation()
 }
 
 #undef pr_fmt
+#define pr_fmt(fmt)     "Speculation prctl: " fmt
 
 static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 {


-- 
Kees Cook                                            @outflux.net


* [MODERATED] Re: [patch 0/6] SSB update 0
  2018-05-04 13:34 ` [patch 0/6] SSB update 0 Thomas Gleixner
@ 2018-05-04 17:34   ` Konrad Rzeszutek Wilk
  2018-05-04 17:52   ` [MODERATED] Is: bikeshedding the bit name (feedback requested)Was:e: " Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 17+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-04 17:34 UTC (permalink / raw)
  To: speck

On Fri, May 04, 2018 at 03:34:22PM +0200, speck for Thomas Gleixner wrote:
> On Fri, 4 May 2018, speck for Thomas Gleixner wrote:
> > 
> > Applies on top of the master branch. Git bundle follows in separate mail.
> 
> Attached.

And inline is the man-page update with the new PR_SPEC_FORCE_DISABLE.


From 4d7a129d5a64fdb86950ede67dae6c24c53740cd Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 30 Apr 2018 13:25:20 -0400
Subject: [PATCH v12.1] prctl.2: PR_[SET|GET]_SPECULATION_CTRL

field.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v8: New patch
v9: s/EUCLEAN/EINVAL/
   Also add section in PR_SET_SPECULATION_CTRL about arg[4,5] being zero.
v12.1:
   s/bits 0-2/bits 0-3/
   Add PR_SPEC_FORCE_DISABLE and its EPERM return value.
---
 man2/prctl.2 | 143 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/man2/prctl.2 b/man2/prctl.2
index 54764d881..87135a0b3 100644
--- a/man2/prctl.2
+++ b/man2/prctl.2
@@ -1008,6 +1008,102 @@ the "securebits" flags of the calling thread.
 See
 .BR capabilities (7).
 .TP
+.BR PR_GET_SPECULATION_CTRL
+Returns the state of the speculation misfeature which is selected with
+the value of
+.IR arg2 ,
+which must be
+.B PR_SPEC_STORE_BYPASS.
+Otherwise the call fails with the error
+.BR ENODEV .
+The return value uses bit 0-3 with the following meaning:
+.RS
+.TP
+.BR PR_SPEC_PRCTL
+Mitigation can be controlled per task by
+.B PR_SET_SPECULATION_CTRL
+.TP
+.BR PR_SPEC_ENABLE
+The speculation feature is enabled, mitigation is disabled.
+.TP
+.BR PR_SPEC_DISABLE
+The speculation feature is disabled, mitigation is enabled
+.TP
+.BR PR_SPEC_FORCE_DISABLE
+Same as
+.B PR_SPEC_DISABLE
+but cannot be undone.
+.RE
+.IP
+If all bits are
+.B 0
+then the CPU is not affected by the speculation misfeature.
+.IP
+If
+.B PR_SPEC_PRCTL
+is set, then the per task control of the mitigation is available. If not set,
+.B prctl()
+for the speculation misfeature will fail.
+In the above operation
+.I arg3
+,
+.I arg4,
+and
+.I arg5
+must be specified as 0, otherwise the call fails with the error
+.BR EINVAL.
+.TP
+.BR PR_SET_SPECULATION_CTRL
+Sets the state of the speculation misfeature which is selected with
+the value of
+.IR arg2 ,
+which must be
+.B PR_SPEC_STORE_BYPASS.
+Otherwise the call fails with the error
+.BR ENODEV .
+This control is per task. The
+.IR arg3
+is used to hand in the control value, which can be either:
+.RS
+.TP
+.BR PR_SPEC_ENABLE
+The speculation feature is enabled, mitigation is disabled.
+.TP
+.BR PR_SPEC_DISABLE
+The speculation feature is disabled, mitigation is enabled
+.TP
+.BR PR_SPEC_FORCE_DISABLE
+Same as
+.B PR_SPEC_DISABLE
+but cannot be undone. A subsequent
+.B
+prctl(..., PR_SPEC_ENABLE)
+will fail with
+.BR EPERM.
+.RE
+.IP
+Any other value in
+.IR arg3
+will result in the call failure with the error
+.BR ERANGE .
+Also
+.I arg4,
+and
+.I arg5
+must be specified as 0, otherwise the call fails with the error
+.BR EINVAL.
+.IP
+Furthermore, this speculation feature can also be controlled by the boot-time
+parameter of
+.B
+spec_store_bypass_disable=
+which could enforce a read-only policy, in which case the call fails
+with the error
+.BR ENXIO .
+Consult the
+.B PR_GET_SPECULATION_CTRL
+for details on the possible enumerations.
+.TP
 .BR PR_SET_THP_DISABLE " (since Linux 3.15)"
 .\" commit a0715cc22601e8830ace98366c0c2bd8da52af52
 Set the state of the "THP disable" flag for the calling thread.
@@ -1501,6 +1597,12 @@ and
 .IR arg3
 does not specify a valid capability.
 .TP
+.B ENODEV
+.I option
+was
+.BR PR_SET_SPECULATION_CTRL
+and the kernel or CPU does not support the requested speculation misfeature.
+.TP
 .B ENXIO
 .I option
 was
@@ -1510,6 +1612,15 @@ or
 and the kernel or the CPU does not support MPX management.
 Check that the kernel and processor have MPX support.
 .TP
+.B ENXIO
+.I option
+was
+.BR PR_SET_SPECULATION_CTRL
+and control of the selected speculation misfeature is not possible.
+See
+.BR PR_GET_SPECULATION_CTRL
+for the bit fields to determine which option is available.
+.TP
 .B EOPNOTSUPP
 .I option
 is
@@ -1533,6 +1644,14 @@ or tried to set a flag whose corresponding locked flag was set
 .B EPERM
 .I option
 is
+.BR PR_SET_SPECULATION_CTRL
+wherein the speculation was disabled with
+.B PR_SPEC_FORCE_DISABLE
+and caller tried to enable it again.
+.TP
+.B EPERM
+.I option
+is
 .BR PR_SET_KEEPCAPS ,
 and the caller's
 .B SECBIT_KEEP_CAPS_LOCKED
@@ -1570,6 +1689,30 @@ is not present in the process's permitted and inheritable capability sets,
 or the
 .B PR_CAP_AMBIENT_LOWER
 securebit has been set.
+.TP
+.B ERANGE
+.I option
+was
+.BR PR_SET_SPECULATION_CTRL
+and
+.IR arg3
+is incorrect - neither
+.B PR_SPEC_ENABLE
+nor
+.B PR_SPEC_DISABLE
+nor
+.B PR_SPEC_FORCE_DISABLE
+was chosen.
+.TP
+.B EINVAL
+.I option
+was
+.BR PR_GET_SPECULATION_CTRL
+or
+.BR PR_SET_SPECULATION_CTRL
+and unused arguments to
+.B prctl()
+are not 0.
 .SH VERSIONS
 The
 .BR prctl ()
-- 
2.14.3


* [MODERATED] Is: bikeshedding the bit name (feedback requested)Was:e: [patch 0/6] SSB update 0
  2018-05-04 13:34 ` [patch 0/6] SSB update 0 Thomas Gleixner
  2018-05-04 17:34   ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-05-04 17:52   ` Konrad Rzeszutek Wilk
  2018-05-04 23:12     ` Thomas Gleixner
  1 sibling, 1 reply; 17+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-04 17:52 UTC (permalink / raw)
  To: speck

On Fri, May 04, 2018 at 03:34:22PM +0200, speck for Thomas Gleixner wrote:
> On Fri, 4 May 2018, speck for Thomas Gleixner wrote:
> > 
> > Applies on top of the master branch. Git bundle follows in separate mail.
> 
> Attached.

Intel is not exactly sure about the bit name, and has asked the community
to provide feedback. I will send anything .. umm positive you want me to tell them.


@jasonbrandt:
"This is still undergoing discussion within Intel.  RDS was a proposal that we made for feedback. Obviously, it escaped into the wild. The names being considered are MDD, RDS (in particular is it is too late to change it for any of you), RMS (reduced memory speculation), and SSBD. Please tell me if you have strong dislike for any of the names. And thank you for your patience. I thought it was still the case though that the Linux name was SSBD for the general capability. "

My response:
"@jasonbrandt  The boot time parameter is spec_store_bypass_disable= and the various functions/enums use the SSB. Let me communicate your statement to Thomas and Linus""

Everybody please vote:

SSBD
RMS
MDD
RDS

The current bit name is RDS in the code base for Linux and Xen. This is:

+#define SPEC_CTRL_RDS_SHIFT             2          /* Reduced Data Speculation bit */
+#define SPEC_CTRL_RDS                   (1 << SPEC_CTRL_RDS_SHIFT)   /* Reduced Data Speculation */

and 

+#define X86_FEATURE_AMD_RDS            (7*32+24)  /* "" AMD RDS implementation */
+#define X86_FEATURE_RDS                        (18*32+31) /* Reduced Data Speculation */



> 
> Thanks,
>  
>  	tglx


* Re: [patch 6/6] SSB update 6
  2018-05-04 16:58   ` [MODERATED] " Kees Cook
@ 2018-05-04 18:42     ` Thomas Gleixner
  0 siblings, 0 replies; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 18:42 UTC (permalink / raw)
  To: speck

On Fri, 4 May 2018, speck for Kees Cook wrote:

> On Fri, May 04, 2018 at 03:23:23PM +0200, speck for Thomas Gleixner wrote:
> > Subject: [patch 6/6] x86/speculation: Make "seccomp" the default mode for Speculative Store Bypass
> > From: Kees Cook <keescook@chromium.org>
> > 
> > Unless explicitly opted out of, anything running under seccomp will have
> > SSB mitigations enabled. Choosing the "prctl" mode will disable this.
> > 
> > [ tglx: Adjusted it to the new arch_seccomp_spec_mitigate() mechanism ]
> 
> It looks like the following hunk got missed when adjusting my earlier
> patch:
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 993a572dc06a..686847b00a20 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -599,6 +607,7 @@ static int ssb_prctl_get(struct task_struct *task)
>  	switch (ssb_mode) {
>  	case SPEC_STORE_BYPASS_DISABLE:
>  		return PR_SPEC_DISABLE;
> +	case SPEC_STORE_BYPASS_SECCOMP:
>  	case SPEC_STORE_BYPASS_PRCTL:
>  		if (task_spec_force_disable(task))
>  			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
> 
> 
> And this might be useful to fix later uses of pr_*:
> 
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 993a572dc06a..686847b00a20 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -540,6 +542,7 @@ static void ssb_select_mitigation()
>  }
>  
>  #undef pr_fmt
> +#define pr_fmt(fmt)     "Speculation prctl: " fmt

indeed. Thanks for spotting!

	tglx


* Re: Is: bikeshedding the bit name (feedback requested)Was:e: [patch 0/6] SSB update 0
  2018-05-04 17:52   ` [MODERATED] Is: bikeshedding the bit name (feedback requested)Was:e: " Konrad Rzeszutek Wilk
@ 2018-05-04 23:12     ` Thomas Gleixner
  2018-05-07 20:05       ` [MODERATED] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-04 23:12 UTC (permalink / raw)
  To: speck

On Fri, 4 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> On Fri, May 04, 2018 at 03:34:22PM +0200, speck for Thomas Gleixner wrote:
> > On Fri, 4 May 2018, speck for Thomas Gleixner wrote:
> > > 
> > > Applies on top of the master branch. Git bundle follows in separate mail.
> > 
> > Attached.
> 
> Intel is not exactly sure about the bit name, and has asked the community
> to provide feedback. I will send anything .. umm positive you want me to tell them.
> 
> 
> @jasonbrandt:
> "This is still undergoing discussion within Intel.  RDS was a proposal that we made for feedback. Obviously, it escaped into the wild. The names being considered are MDD, RDS (in particular is it is too late to change it for any of you), RMS (reduced memory speculation), and SSBD. Please tell me if you have strong dislike for any of the names. And thank you for your patience. I thought it was still the case though that the Linux name was SSBD for the general capability. "
> 
> My response:
> "@jasonbrandt  The boot time parameter is spec_store_bypass_disable= and the various functions/enums use the SSB. Let me communicate your statement to Thomas and Linus""
> 

I don't think we want to change the spec_store_bypass (SSB) name
anymore. It's a cross architecture term.

For the bit name in the MSR, I really do not care as long as it is not
completely nonsensical. If we can keep RDS, fine and less churn.

> Everybody please vote:
> 
> SSBD
> RMS
> MDD
> RDS	

RDS because you asked me to vote and I'm lazy.

Thanks,

	tglx


* [MODERATED] Re: Is: bikeshedding the bit name (feedback requested)Was:e: [patch 0/6] SSB update 0
  2018-05-04 23:12     ` Thomas Gleixner
@ 2018-05-07 20:05       ` Konrad Rzeszutek Wilk
  2018-05-07 20:10         ` Thomas Gleixner
  0 siblings, 1 reply; 17+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-07 20:05 UTC (permalink / raw)
  To: speck

On Sat, May 05, 2018 at 01:12:49AM +0200, speck for Thomas Gleixner wrote:
> On Fri, 4 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> > On Fri, May 04, 2018 at 03:34:22PM +0200, speck for Thomas Gleixner wrote:
> > > On Fri, 4 May 2018, speck for Thomas Gleixner wrote:
> > > > 
> > > > Applies on top of the master branch. Git bundle follows in separate mail.
> > > 
> > > Attached.
> > 
> > Intel is not exactly sure about the bit name, and has asked the community
> > to provide feedback. I will send anything .. umm positive you want me to tell them.
> > 
> > 
> > @jasonbrandt:
> > "This is still undergoing discussion within Intel.  RDS was a proposal that we made for feedback. Obviously, it escaped into the wild. The names being considered are MDD, RDS (in particular is it is too late to change it for any of you), RMS (reduced memory speculation), and SSBD. Please tell me if you have strong dislike for any of the names. And thank you for your patience. I thought it was still the case though that the Linux name was SSBD for the general capability. "
> > 
> > My response:
> > "@jasonbrandt  The boot time parameter is spec_store_bypass_disable= and the various functions/enums use the SSB. Let me communicate your statement to Thomas and Linus""
> > 
> 
> I don't think we want to change the spec_store_bypass (SSB) name
> anymore. It's a cross architecture term.
> 
> For the bit name in the MSR, I really do not care as long as it is not
> completely nonsensical. If we can keep RDS, fine and less churn.


"
FYI to those who could not attend today's meeting, Intel is changing the name of our SSB mitigation bit in IA32_SPEC_CTL[2] to SSBD (Speculative Store Bypass Disable).
"

Ugh.

Thomas, do you want me to rebase the patches and change
all the RDS, and more importantly the commit messages
to have this name?

Or you prefer a follow on patch to just change it as
s/RDS/SSBD/

I am assuming the latter (s/RDS/SSBD), so working on a patch for that.


* Re: Is: bikeshedding the bit name (feedback requested)Was:e: [patch 0/6] SSB update 0
  2018-05-07 20:05       ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-05-07 20:10         ` Thomas Gleixner
  2018-05-07 21:16           ` [MODERATED] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Gleixner @ 2018-05-07 20:10 UTC (permalink / raw)
  To: speck

On Mon, 7 May 2018, speck for Konrad Rzeszutek Wilk wrote:
"
> "
> FYI to those who could not attend today's meeting, Intel is changing the name of our SSB mitigation bit in IA32_SPEC_CTL[2] to SSBD (Speculative Store Bypass Disable).
> "
> 
> Ugh.
> 
> Thomas, do you want me to rebase the patches and change
> all the RDS, and more importantly the commit messages
> to have this name?
> 
> Or you prefer a follow on patch to just change it as
> s/RDS/SSBD/
> 
> I am assuming the latter (s/RDS/SSBD), so working on a patch for that.

Yes, on top of the current master branch (f21b53b20c75) please.

Thanks,

	tglx

 


* [MODERATED] Re: Is: bikeshedding the bit name (feedback requested)Was:e: [patch 0/6] SSB update 0
  2018-05-07 20:10         ` Thomas Gleixner
@ 2018-05-07 21:16           ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 17+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-07 21:16 UTC (permalink / raw)
  To: speck

On Mon, May 07, 2018 at 10:10:28PM +0200, speck for Thomas Gleixner wrote:
> On Mon, 7 May 2018, speck for Konrad Rzeszutek Wilk wrote:
> "
> > "
> > FYI to those who could not attend today's meeting, Intel is changing the name of our SSB mitigation bit in IA32_SPEC_CTL[2] to SSBD (Speculative Store Bypass Disable).
> > "
> > 
> > Ugh.
> > 
> > Thomas, do you want me to rebase the patches and change
> > all the RDS, and more importantly the commit messages
> > to have this name?
> > 
> > Or you prefer a follow on patch to just change it as
> > s/RDS/SSBD/
> > 
> > I am assuming the latter (s/RDS/SSBD), so working on a patch for that.
> 
> Yes, on top of the current master branch (f21b53b20c75) please.

Here it is (inline):

From ed9d15c70942f8fe9ebfec595492770fd0358bf0 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 7 May 2018 16:32:22 -0400
Subject: [PATCH v13] x86/bugs: Rename _RDS to _SSBD

Intel collateral will reference the SSB mitigation bit in
IA32_SPEC_CTL[2] as SSBD (Speculative Store Bypass Disable).

Hence changing it.

It is unclear yet what the MSR_IA32_ARCH_CAPABILITIES (0x10a)
Bit(4) name is going to be. Following the rename it would be
SSBD_NO but that rolls out to Speculative Store Bypass Disable No.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v13: New patch.
---
 arch/x86/include/asm/cpufeatures.h |  4 ++--
 arch/x86/include/asm/msr-index.h   | 10 +++++-----
 arch/x86/include/asm/spec-ctrl.h   |  6 +++---
 arch/x86/include/asm/thread_info.h |  6 +++---
 arch/x86/kernel/cpu/amd.c          | 12 ++++++------
 arch/x86/kernel/cpu/bugs.c         | 18 +++++++++---------
 arch/x86/kernel/cpu/common.c       |  2 +-
 arch/x86/kernel/cpu/intel.c        |  2 +-
 arch/x86/kernel/process.c          |  4 ++--
 arch/x86/kvm/cpuid.c               |  2 +-
 arch/x86/kvm/vmx.c                 |  6 +++---
 11 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b2464c1787df..4e1c747acbf8 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -215,7 +215,7 @@
 #define X86_FEATURE_USE_IBPB		( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
 #define X86_FEATURE_USE_IBRS_FW		( 7*32+22) /* "" Use IBRS during runtime firmware calls */
 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE	( 7*32+23) /* "" Disable Speculative Store Bypass. */
-#define X86_FEATURE_AMD_RDS		(7*32+24)  /* "" AMD RDS implementation */
+#define X86_FEATURE_AMD_SSBD		( 7*32+24)  /* "" AMD SSBD implementation */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
@@ -336,7 +336,7 @@
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
-#define X86_FEATURE_RDS			(18*32+31) /* Reduced Data Speculation */
+#define X86_FEATURE_SSBD		(18*32+31) /* Speculative Store Bypass Disable */
 
 /*
  * BUG word(s)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 810f50bb338d..0da3ca260b06 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,8 +42,8 @@
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
 #define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
-#define SPEC_CTRL_RDS_SHIFT		2	   /* Reduced Data Speculation bit */
-#define SPEC_CTRL_RDS			(1 << SPEC_CTRL_RDS_SHIFT)   /* Reduced Data Speculation */
+#define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
+#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)   /* Speculative Store Bypass Disable */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
@@ -70,10 +70,10 @@
 #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
 #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
 #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
-#define ARCH_CAP_RDS_NO			(1 << 4)   /*
+#define ARCH_CAP_SSBD_NO		(1 << 4)   /*
 						    * Not susceptible to Speculative Store Bypass
-						    * attack, so no Reduced Data Speculation control
-						    * required.
+						    * attack, so no Speculative Store Bypass
+						    * control required.
 						    */
 
 #define MSR_IA32_BBL_CR_CTL		0x00000119
diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index 45ef00ad5105..37e45ea0659b 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -24,13 +24,13 @@ extern u64 x86_spec_ctrl_base;
 
 static inline u64 rds_tif_to_spec_ctrl(u64 tifn)
 {
-	BUILD_BUG_ON(TIF_RDS < SPEC_CTRL_RDS_SHIFT);
-	return (tifn & _TIF_RDS) >> (TIF_RDS - SPEC_CTRL_RDS_SHIFT);
+	BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
+	return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
 }
 
 static inline u64 rds_tif_to_amd_ls_cfg(u64 tifn)
 {
-	return (tifn & _TIF_RDS) ? x86_amd_ls_cfg_rds_mask : 0ULL;
+	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_rds_mask : 0ULL;
 }
 
 extern void speculative_store_bypass_update(void);
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index e5c26cc59619..2ff2a30a264f 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,7 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_RDS			5	/* Reduced data speculation */
+#define TIF_SSBD			5	/* Reduced data speculation */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
@@ -106,7 +106,7 @@ struct thread_info {
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
-#define _TIF_RDS		(1 << TIF_RDS)
+#define _TIF_SSBD		(1 << TIF_SSBD)
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
@@ -146,7 +146,7 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_RDS)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
 
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 18efc33a8d2e..e5f7f7ac5cf7 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -567,11 +567,11 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 		}
 		/*
 		 * Try to cache the base value so further operations can
-		 * avoid RMW. If that faults, do not enable RDS.
+		 * avoid RMW. If that faults, do not enable SSBD.
 		 */
 		if (!rdmsrl_safe(MSR_AMD64_LS_CFG, &x86_amd_ls_cfg_base)) {
-			setup_force_cpu_cap(X86_FEATURE_RDS);
-			setup_force_cpu_cap(X86_FEATURE_AMD_RDS);
+			setup_force_cpu_cap(X86_FEATURE_SSBD);
+			setup_force_cpu_cap(X86_FEATURE_AMD_SSBD);
 			x86_amd_ls_cfg_rds_mask = 1ULL << bit;
 		}
 	}
@@ -920,9 +920,9 @@ static void init_amd(struct cpuinfo_x86 *c)
 	if (!cpu_has(c, X86_FEATURE_XENPV))
 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
 
-	if (boot_cpu_has(X86_FEATURE_AMD_RDS)) {
-		set_cpu_cap(c, X86_FEATURE_RDS);
-		set_cpu_cap(c, X86_FEATURE_AMD_RDS);
+	if (boot_cpu_has(X86_FEATURE_AMD_SSBD)) {
+		set_cpu_cap(c, X86_FEATURE_SSBD);
+		set_cpu_cap(c, X86_FEATURE_AMD_SSBD);
 	}
 }
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 563d8e54c863..236875283a7c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -185,7 +185,7 @@ static void x86_amd_rds_enable(void)
 {
 	u64 msrval = x86_amd_ls_cfg_base | x86_amd_ls_cfg_rds_mask;
 
-	if (boot_cpu_has(X86_FEATURE_AMD_RDS))
+	if (boot_cpu_has(X86_FEATURE_AMD_SSBD))
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
@@ -473,7 +473,7 @@ static enum ssb_mitigation_cmd __init __ssb_select_mitigation(void)
 	enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
 	enum ssb_mitigation_cmd cmd;
 
-	if (!boot_cpu_has(X86_FEATURE_RDS))
+	if (!boot_cpu_has(X86_FEATURE_SSBD))
 		return mode;
 
 	cmd = ssb_parse_cmdline();
@@ -507,7 +507,7 @@ static enum ssb_mitigation_cmd __init __ssb_select_mitigation(void)
 	/*
 	 * We have three CPU feature flags that are in play here:
 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
-	 *  - X86_FEATURE_RDS - CPU is able to turn off speculative store bypass
+	 *  - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
 	 *  - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
 	 */
 	if (mode == SPEC_STORE_BYPASS_DISABLE) {
@@ -518,9 +518,9 @@ static enum ssb_mitigation_cmd __init __ssb_select_mitigation(void)
 		 */
 		switch (boot_cpu_data.x86_vendor) {
 		case X86_VENDOR_INTEL:
-			x86_spec_ctrl_base |= SPEC_CTRL_RDS;
-			x86_spec_ctrl_mask &= ~SPEC_CTRL_RDS;
-			x86_spec_ctrl_set(SPEC_CTRL_RDS);
+			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
+			x86_spec_ctrl_mask &= ~SPEC_CTRL_SSBD;
+			x86_spec_ctrl_set(SPEC_CTRL_SSBD);
 			break;
 		case X86_VENDOR_AMD:
 			x86_amd_rds_enable();
@@ -556,16 +556,16 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (task_spec_ssb_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ssb_disable(task);
-		update = test_and_clear_tsk_thread_flag(task, TIF_RDS);
+		update = test_and_clear_tsk_thread_flag(task, TIF_SSBD);
 		break;
 	case PR_SPEC_DISABLE:
 		task_set_spec_ssb_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_RDS);
+		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
 		break;
 	case PR_SPEC_FORCE_DISABLE:
 		task_set_spec_ssb_disable(task);
 		task_set_spec_ssb_force_disable(task);
-		update = !test_and_set_tsk_thread_flag(task, TIF_RDS);
+		update = !test_and_set_tsk_thread_flag(task, TIF_SSBD);
 		break;
 	default:
 		return -ERANGE;
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e0517bcee446..9fbb388fadac 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -959,7 +959,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
 
 	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
-	   !(ia32_cap & ARCH_CAP_RDS_NO))
+	   !(ia32_cap & ARCH_CAP_SSBD_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (x86_match_cpu(cpu_no_speculation))
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index ef3f9c01c274..0eab6c89c8d9 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -189,7 +189,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 		setup_clear_cpu_cap(X86_FEATURE_STIBP);
 		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
 		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
-		setup_clear_cpu_cap(X86_FEATURE_RDS);
+		setup_clear_cpu_cap(X86_FEATURE_SSBD);
 	}
 
 	/*
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 397342725046..215e0db4d580 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -283,7 +283,7 @@ static __always_inline void __speculative_store_bypass_update(unsigned long tifn
 {
 	u64 msr;
 
-	if (static_cpu_has(X86_FEATURE_AMD_RDS)) {
+	if (static_cpu_has(X86_FEATURE_AMD_SSBD)) {
 		msr = x86_amd_ls_cfg_base | rds_tif_to_amd_ls_cfg(tifn);
 		wrmsrl(MSR_AMD64_LS_CFG, msr);
 	} else {
@@ -329,7 +329,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & _TIF_RDS)
+	if ((tifp ^ tifn) & _TIF_SSBD)
 		__speculative_store_bypass_update(tifn);
 }
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 376ac9a2a2b9..865c9a769864 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -407,7 +407,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
-		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) | F(RDS) |
+		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) | F(SSBD) |
 		F(ARCH_CAPABILITIES);
 
 	/* all calls to cpuid_count() should be made on the same cpu */
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 16a111e44691..9b8d80bf3889 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3525,7 +3525,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_RDS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SSBD))
 			return 1;
 
 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
@@ -3645,11 +3645,11 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_RDS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SSBD))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
-		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_RDS))
+		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
 			return 1;
 
 		vmx->spec_ctrl = data;
-- 
2.14.3

