* [patch V2 00/10] MDS basics+ 0
@ 2019-02-20 15:07 Thomas Gleixner
  2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
                   ` (10 more replies)
  0 siblings, 11 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Hi!

This is an update to yesterday's series with the following changes:

   - Addressed review comments (on/off list)

   - Changed the approach with static keys slightly

   - Added "cc" clobber to the VERW asm magic (spotted by Peterz)

   - Added x86 specific documentation which explains the mitigation methods
     and details on why particular code paths are excluded.

   - Added an internal 'HOPE' mitigation mode to address the VMware wish.

   - Added the basic infrastructure for conditional mode

Dropped the documentation patch for now as I'm not done with updating it
and I have to run now and attend my grandson's birthday party.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 01/10] MDS basics+ 1
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-20 16:27   ` [MODERATED] " Borislav Petkov
  2019-02-20 16:46   ` Greg KH
  2019-02-20 15:07 ` [patch V2 02/10] MDS basics+ 2 Thomas Gleixner
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Subject: [patch V2 01/10] x86/speculation/mds: Add basic bug infrastructure for MDS
From: Andi Kleen <ak@linux.intel.com>

Microarchitectural Data Sampling (MDS), is a class of side channel attacks
on internal buffers in Intel CPUs. The variants are:

 - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
 - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
 - Microarchitectural Load Port Data (MLPDS) (CVE-2018-12127)

MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
dependent load (store-load forwarding) as an optimization. The forward can
also happen to a faulting or assisting load operation for a different
memory address, which can be exploited under certain conditions. Store
buffers are partitionened between Hyper-Threads so cross thread forwarding
is not possible. But if a thread enters or exits a sleep state the store
buffer is repartioned which can expose data from one thread to the other.

MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
L1 miss situations and to hold data which is returned or sent in response
to a memory or I/O operation. Fill buffers can forward data to a load
operation and also write data to the cache. When the fill buffer is
deallocated it can retain the stale data of the preceeding operations which
can then be forwarded to a faulting or assisting load operation, which can
be exploited under certain conditions. Fill buffers are shared between
Hyper-Threads so cross thread leakage is possible.

MLDPS leaks Load Port Data. Load ports are used to perform load operations
from memory or I/O. The received data is then forwarded to the register
file or a subsequent operation. In some implementations the Load Port can
contain stale data from a previous operation which can be forwarded to
faulting or assisting loads under certain conditions, which again can be
exploited eventually. Load poorts are shared between Hyper-Threads so cross
thread leakage is possible.

All variants have the same mitigation for single CPU thread case (SMT off),
so the kernel can treat them as one MDS issue.

Add the basic infrastructure to detect if the current CPU is affected by
MDS.
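
Later patches in this series consume the bug bit via the usual helper,
e.g. the mitigation selector in patch 6 (sketch):

	if (!boot_cpu_has_bug(X86_BUG_MDS)) {
		mds_mitigation = MDS_MITIGATION_OFF;
		return;
	}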

[ tglx: Rewrote changelog ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/cpufeatures.h |    2 ++
 arch/x86/include/asm/msr-index.h   |    5 +++++
 arch/x86/kernel/cpu/common.c       |   13 +++++++++++++
 3 files changed, 20 insertions(+)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -344,6 +344,7 @@
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
 #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
 #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+#define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW flushs CPU state */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
@@ -381,5 +382,6 @@
 #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
 #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
 #define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+#define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -77,6 +77,11 @@
 						    * attack, so no Speculative Store Bypass
 						    * control required.
 						    */
+#define ARCH_CAP_MDS_NO			(1 << 5)   /*
+						    * Not susceptible to
+						    * Microarchitectural Data
+						    * Sampling (MDS) vulnerabilities.
+						    */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			(1 << 0)   /*
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -998,6 +998,14 @@ static const __initconst struct x86_cpu_
 	{}
 };
 
+static const __initconst struct x86_cpu_id cpu_no_mds[] = {
+	/* in addition to cpu_no_speculation */
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
+	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
+	{}
+};
+
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = 0;
@@ -1019,6 +1027,11 @@ static void __init cpu_set_bug_bits(stru
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
+	if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+	    !x86_match_cpu(cpu_no_mds)) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO))
+		setup_force_cpu_bug(X86_BUG_MDS);
+
 	if (x86_match_cpu(cpu_no_meltdown))
 		return;
 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 02/10] MDS basics+ 2
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
  2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-20 16:47   ` [MODERATED] " Borislav Petkov
  2019-02-20 16:48   ` Greg KH
  2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

From: Andi Kleen <ak@linux.intel.com>
Subject: [patch V2 02/10] x86/kvm: Expose X86_FEATURE_MD_CLEAR to guests

X86_FEATURE_MD_CLEAR is a new CPUID bit which is set when microcode
provides the mechanism to invoke a flush of various exploitable CPU buffers
by invoking the VERW instruction.

Hand it through to guests so they can adjust their mitigations.
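
A guest kernel can then key its mitigation off the bit, as the selector
added in patch 6 of this series does (sketch):

	if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) {
		/* Microcode based VERW flush is available */
		mds_mitigation = MDS_MITIGATION_FULL;
		static_branch_enable(&mds_user_clear_always);
	}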

This also requires corresponding qemu changes, which are available
seperately.

[ tglx: Massaged changelog ]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kvm/cpuid.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -409,7 +409,8 @@ static inline int __do_cpuid_ent(struct
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
 		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
-		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP);
+		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
+		F(MD_CLEAR);
 
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 03/10] MDS basics+ 3
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
  2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
  2019-02-20 15:07 ` [patch V2 02/10] MDS basics+ 2 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-20 16:54   ` [MODERATED] " mark gross
                     ` (2 more replies)
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
                   ` (7 subsequent siblings)
  10 siblings, 3 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Subject: [patch V2 03/10] x86/speculation/mds: Add mds_clear_cpu_buffers()
From: Thomas Gleixner <tglx@linutronix.de>

The Microarchitectural Data Sampling (MDS) vulnerabilities are mitigated by
clearing the affected CPU buffers. The mechanism for clearing the buffers
uses the unused and obsolete VERW instruction in combination with a
microcode update which triggers a CPU buffer clear when VERW is executed.

Provide an inline function with the assembly magic. The argument of the VERW
instruction must be a memory operand.
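
The helper boils down to the following (shown in full in the hunk below):

	static inline void mds_clear_cpu_buffers(void)
	{
		static const u16 ds = __KERNEL_DS;

		/* Memory operand form is required. VERW modifies ZF. */
		asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
	}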

Add x86 specific documentation about MDS and the internal workings of the
mitigation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V1 --> V2: Add "cc" clobber and documentation
---
 Documentation/index.rst              |    1 
 Documentation/x86/conf.py            |   10 ++++
 Documentation/x86/index.rst          |    8 +++
 Documentation/x86/mds.rst            |   72 +++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/nospec-branch.h |   20 +++++++++
 5 files changed, 111 insertions(+)

--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -101,6 +101,7 @@ implementation.
    :maxdepth: 2
 
    sh/index
+   x86/index
 
 Filesystem Documentation
 ------------------------
--- /dev/null
+++ b/Documentation/x86/conf.py
@@ -0,0 +1,10 @@
+# -*- coding: utf-8; mode: python -*-
+
+project = "X86 architecture specific documentation"
+
+tags.add("subproject")
+
+latex_documents = [
+    ('index', 'x86.tex', project,
+     'The kernel development community', 'manual'),
+]
--- /dev/null
+++ b/Documentation/x86/index.rst
@@ -0,0 +1,8 @@
+==========================
+x86 architecture specifics
+==========================
+
+.. toctree::
+   :maxdepth: 1
+
+   mds
--- /dev/null
+++ b/Documentation/x86/mds.rst
@@ -0,0 +1,72 @@
+Microarchitectural Data Sampling (MDS) mitigation
+==================================================
+
+Microarchitectural Data Sampling (MDS) is a class of side channel attacks
+on internal buffers in Intel CPUs. The variants are:
+
+ - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
+ - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
+ - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
+
+MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
+dependent load (store-to-load forwarding) as an optimization. The forward
+can also happen to a faulting or assisting load operation for a different
+memory address, which can be exploited under certain conditions. Store
+buffers are partitioned between Hyper-Threads so cross thread forwarding is
+not possible. But if a thread enters or exits a sleep state the store
+buffer is repartitioned, which can expose data from one thread to the other.
+
+MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
+L1 miss situations and to hold data which is returned or sent in response
+to a memory or I/O operation. Fill buffers can forward data to a load
+operation and also write data to the cache. When the fill buffer is
+deallocated it can retain the stale data of the preceding operations which
+can then be forwarded to a faulting or assisting load operation, which can
+be exploited under certain conditions. Fill buffers are shared between
+Hyper-Threads so cross thread leakage is possible.
+
+MLPDS leaks Load Port Data. Load ports are used to perform load operations
+from memory or I/O. The received data is then forwarded to the register
+file or a subsequent operation. In some implementations the Load Port can
+contain stale data from a previous operation which can be forwarded to
+faulting or assisting loads under certain conditions, which again can be
+exploited eventually. Load ports are shared between Hyper-Threads so cross
+thread leakage is possible.
+
+Mitigation strategy
+-------------------
+
+All variants have the same mitigation strategy at least for the single CPU
+thread case (SMT off): Force the CPU to clear the affected buffers.
+
+This is achieved by using the otherwise unused and obsolete VERW
+instruction in combination with a microcode update. The microcode clears
+the affected CPU buffers when the VERW instruction is executed.
+
+For virtualization there are two ways to achieve CPU buffer clearing:
+either via the modified VERW instruction or via the L1D Flush command.
+The latter is issued when L1TF mitigation is enabled, so the extra VERW
+can be spared. If the CPU is not affected by L1TF then VERW needs to be
+issued.
+
+If the VERW instruction with the supplied segment selector argument is
+executed on a CPU without the microcode update there is no side effect
+other than a small number of pointlessly wasted CPU cycles.
+
+This does not protect against cross Hyper-Thread attacks except for MSBDS
+which is only exploitable cross Hyper-Thread when one of the Hyper-Threads
+enters a C-state.
+
+The kernel provides a function to invoke the buffer clearing:
+
+    mds_clear_cpu_buffers()
+
+The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
+(idle) transitions. Depending on the mitigation mode and the system state
+the invocation can be enforced or conditional.
+
+According to current knowledge additional mitigations inside the kernel
+itself are not required because the necessary gadgets to expose the leaked
+data cannot be controlled in a way which allows exploitation from malicious
+user space or VM guests.
+
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -318,6 +318,26 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
+#include <asm/segment.h>
+
+/**
+ * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ *
+ * This uses the otherwise unused and obsolete VERW instruction in
+ * combination with microcode which triggers a CPU buffer flush when the
+ * instruction is executed.
+ */
+static inline void mds_clear_cpu_buffers(void)
+{
+	static const u16 ds = __KERNEL_DS;
+
+	/*
+	 * Has to be memory form, don't modify to use a register. VERW
+	 * modifies ZF.
+	 */
+	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
+}
+
 #endif /* __ASSEMBLY__ */
 
 /*

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 04/10] MDS basics+ 4
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (2 preceding siblings ...)
  2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-20 16:52   ` [MODERATED] " Greg KH
                     ` (3 more replies)
  2019-02-20 15:07 ` [patch V2 05/10] MDS basics+ 5 Thomas Gleixner
                   ` (6 subsequent siblings)
  10 siblings, 4 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Subject: [patch V2 04/10] x86/speculation/mds: Clear CPU buffers on exit to user
From: Thomas Gleixner <tglx@linutronix.de>

Add a static key which controls the invocation of the CPU buffer clear
mechanism on exit to user space and add the call into
prepare_exit_to_usermode() right before actually returning.

Add documentation which kernel to user space transitions this covers and
explain in detail why those which are not mitigated do not need it.
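
The hook itself is a trivial static key check (sketch; the full context
is in the diff below):

	static inline void mds_user_clear_cpu_buffers(void)
	{
		if (static_branch_likely(&mds_user_clear_always))
			mds_clear_cpu_buffers();
	}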

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/x86/mds.rst            |   79 +++++++++++++++++++++++++++++++++++
 arch/x86/entry/common.c              |    9 +++
 arch/x86/include/asm/nospec-branch.h |    2 
 arch/x86/kernel/cpu/bugs.c           |    4 +
 4 files changed, 93 insertions(+), 1 deletion(-)

--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -64,3 +64,82 @@ itself are not required because the nece
 data cannot be controlled in a way which allows exploitation from malicious
 user space or VM guests.
 
+Mitigation points
+-----------------
+
+1. Return to user space
+^^^^^^^^^^^^^^^^^^^^^^^
+   When transitioning from kernel to user space the CPU buffers are flushed
+   on affected CPUs:
+
+   - always when the mitigation mode is full. In this case the invocation
+     depends on the static key mds_user_clear_always.
+
+   - depending on executed functions between entering kernel space and
+     returning to user space. This is not yet implemented.
+
+   This covers transitions from kernel to user space through a return to
+   user space from a syscall and from an interrupt or a regular exception.
+
+   There are other kernel to user space transitions which are not covered
+   by this: NMIs and all non-maskable exceptions which go through the
+   paranoid exit, which means that they are not going to the regular
+   prepare_exit_to_usermode() exit path which handles the CPU buffer
+   clearing.
+
+   The occasional non-maskable exceptions which go through paranoid exit
+   are not controllable by user space in any way and most of these
+   exceptions cannot expose any valuable information either.
+
+   Neither can NMIs be reliably controlled by a non-privileged attacker
+   and their exposure to sensitive data is very limited. NMIs originate
+   from:
+
+      - Performance monitoring.
+
+	Performance monitoring is restricted by various mechanisms, i.e. a
+	regular user on a properly secured system can - if at all - only
+	monitor its own user space processes. The performance monitoring
+	NMI surely executes privileged kernel code and accesses kernel
+	internal data structures, which might be exploitable to break the
+	kernel's address space layout randomization, which is a non-issue
+	on affected CPUs as there are simpler ways to achieve that.
+
+      - Watchdog
+
+	The kernel uses - if enabled - a performance monitoring event to
+	trigger NMIs periodically which allow detection of hard lockups in
+	kernel space due to deadlocks or other issues.
+
+	The watchdog period is a multiple of seconds and the code path
+	executed cannot expose any secret information other than kernel
+	address space layout. Due to the low frequency and the limited
+	ability of a potential attacker to align on the watchdog period,
+	the attack surface is close to zero.
+
+      - Legacy oprofile NMI handler
+
+	Similar to performance monitoring, albeit potentially less
+	restricted; it has been widely replaced by the performance
+	monitoring interface perf. State of the art systems will not expose
+	the oprofile interface and even if exposed the potentially
+	exploitable information is accessible by other and simpler means.
+
+      - KGDB
+
+        If the kernel debugger is accessible by an unprivileged attacker,
+        then the NMI handler is the least of the problems.
+
+      - ACPI/GHES
+
+        A firmware based error reporting mechanism which uses NMIs for
+        notification. Similar to Machine Check Exceptions there is no known
+        way for an attacker to reliably control and trigger errors which
+        would cause NMIs. Even if that were the case, the potentially
+        exploitable data, e.g. kernel address space layout, would be
+        accessible by simpler means.
+
+      - IPMI, vendor specific NMIs, forced shutdown NMI
+
+	None of those are controllable by unprivileged attackers to form a
+	reliable exploit surface.
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -31,6 +31,7 @@
 #include <asm/vdso.h>
 #include <linux/uaccess.h>
 #include <asm/cpufeature.h>
+#include <asm/nospec-branch.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
@@ -180,6 +181,12 @@ static void exit_to_usermode_loop(struct
 	}
 }
 
+static inline void mds_user_clear_cpu_buffers(void)
+{
+	if (static_branch_likely(&mds_user_clear_always))
+		mds_clear_cpu_buffers();
+}
+
 /* Called with IRQs disabled. */
 __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 {
@@ -212,6 +219,8 @@ static void exit_to_usermode_loop(struct
 #endif
 
 	user_enter_irqoff();
+
+	mds_user_clear_cpu_buffers();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -318,6 +318,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
+DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
+
 #include <asm/segment.h>
 
 /**
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -63,10 +63,12 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_i
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
+/* Control MDS CPU buffer clear before returning to user space */
+DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
+
 void __init check_bugs(void)
 {
 	identify_boot_cpu();
-
 	/*
 	 * identify_boot_cpu() initialized SMT support information, let the
 	 * core code know.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 05/10] MDS basics+ 5
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (3 preceding siblings ...)
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-20 20:05   ` [MODERATED] " Borislav Petkov
  2019-02-21  2:24   ` Andrew Cooper
  2019-02-20 15:07 ` [patch V2 06/10] MDS basics+ 6 Thomas Gleixner
                   ` (5 subsequent siblings)
  10 siblings, 2 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Subject: [patch V2 05/10] x86/speculation/mds: Conditionally clear CPU buffers on idle entry
From: Thomas Gleixner <tglx@linutronix.de>

Add a static key which controls the invocation of the CPU buffer clear
mechanism on idle entry. This is independent of other MDS mitigations
because the idle entry invocation to mitigate the potential leakage due to
store buffer repartitioning is only necessary on SMT systems.

Add the actual invocations to the different halt/mwait variants, which
covers all usage sites. mwaitx is not patched as it's not available on
Intel CPUs.
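
Each idle entry point gains the same guarded call, e.g. (taken from the
irqflags.h hunk below):

	static inline __cpuidle void native_safe_halt(void)
	{
		mds_idle_clear_cpu_buffers();
		asm volatile("sti; hlt": : :"memory");
	}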

The buffer clear is only invoked before entering the C-State to prevent
stale data from the idling CPU from being spilled to the Hyper-Thread
sibling after the Store buffer got repartitioned and all entries became
available to the non-idle sibling.

When coming out of idle the store buffer is partitioned again so each
sibling has half of it available. The CPU coming back from idle could be
speculatively exposed to contents of the sibling, but the buffers are
flushed either on exit to user space or on VMENTER.

When later on conditional buffer clearing is implemented on top of this,
then there is no action required either because before returning to user
space the context switch will set the condition flag which causes a flush
on the return to user path.

This intentionally does not handle the case in the acpi/processor_idle
driver which uses the legacy IO port interface for C-State transitions,
for two reasons:

 - The acpi/processor_idle driver was replaced by the intel_idle driver
   almost a decade ago. Anything Nehalem upwards supports it and defaults
   to that new driver.

 - The legacy IO port interface is likely to be used on older and therefore
   unaffected CPUs or on systems which do not receive microcode updates
   anymore, so there is no point in adding that.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/x86/mds.rst            |   33 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/irqflags.h      |    4 ++++
 arch/x86/include/asm/mwait.h         |    7 +++++++
 arch/x86/include/asm/nospec-branch.h |   12 ++++++++++++
 arch/x86/kernel/cpu/bugs.c           |    2 ++
 5 files changed, 58 insertions(+)

--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -143,3 +143,36 @@ Mitigation points
 
 	None of those are controllable by unpriviledged attackers to form a
 	reliable exploit surface.
+
+2. C-State transition
+^^^^^^^^^^^^^^^^^^^^^
+
+   When a CPU goes idle and enters a C-State the CPU buffers need to be
+   cleared on affected CPUs when SMT is active. This addresses the
+   repartitioning of the Store buffer when one of the Hyper-Threads enters
+   a C-State.
+
+   When SMT is inactive, i.e. either the CPU does not support it or all
+   sibling threads are offline, CPU buffer clearing is not required.
+
+   The invocation is controlled by the static key mds_idle_clear which is
+   switched depending on the chosen mitigation mode and the SMT state of
+   the system.
+
+   The buffer clear is only invoked before entering the C-State to prevent
+   stale data from the idling CPU from being spilled to the Hyper-Thread
+   sibling after the Store buffer got repartitioned and all entries became
+   available to the non-idle sibling.
+
+   When coming out of idle the store buffer is partitioned again so each
+   sibling has half of it available. The CPU coming back from idle could
+   be speculatively exposed to contents of the sibling, but the buffers
+   are flushed either on exit to user space or on VMENTER.
+
+   The mitigation is hooked into all variants of halt()/mwait(), but does
+   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
+   has been superseded by the intel_idle driver around 2010 and is
+   preferred on all affected CPUs which still receive microcode updates
+   (Nehalem onwards). Aside from that, the IO-Port mechanism is a legacy
+   interface which is only used on older systems which are either not
+   affected or do not receive microcode updates anymore.
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -6,6 +6,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/nospec-branch.h>
+
 /* Provide __cpuidle; we can't safely include <linux/cpu.h> */
 #define __cpuidle __attribute__((__section__(".cpuidle.text")))
 
@@ -54,11 +56,13 @@ static inline void native_irq_enable(voi
 
 static inline __cpuidle void native_safe_halt(void)
 {
+	mds_idle_clear_cpu_buffers();
 	asm volatile("sti; hlt": : :"memory");
 }
 
 static inline __cpuidle void native_halt(void)
 {
+	mds_idle_clear_cpu_buffers();
 	asm volatile("hlt": : :"memory");
 }
 
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -6,6 +6,7 @@
 #include <linux/sched/idle.h>
 
 #include <asm/cpufeature.h>
+#include <asm/nospec-branch.h>
 
 #define MWAIT_SUBSTATE_MASK		0xf
 #define MWAIT_CSTATE_MASK		0xf
@@ -40,6 +41,8 @@ static inline void __monitorx(const void
 
 static inline void __mwait(unsigned long eax, unsigned long ecx)
 {
+	mds_idle_clear_cpu_buffers();
+
 	/* "mwait %eax, %ecx;" */
 	asm volatile(".byte 0x0f, 0x01, 0xc9;"
 		     :: "a" (eax), "c" (ecx));
@@ -74,6 +77,8 @@ static inline void __mwait(unsigned long
 static inline void __mwaitx(unsigned long eax, unsigned long ebx,
 			    unsigned long ecx)
 {
+	/* No MDS buffer clear as this is AMD/HYGON only */
+
 	/* "mwaitx %eax, %ebx, %ecx;" */
 	asm volatile(".byte 0x0f, 0x01, 0xfb;"
 		     :: "a" (eax), "b" (ebx), "c" (ecx));
@@ -81,6 +86,8 @@ static inline void __mwaitx(unsigned lon
 
 static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 {
+	mds_idle_clear_cpu_buffers();
+
 	trace_hardirqs_on();
 	/* "mwait %eax, %ecx;" */
 	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -319,6 +319,7 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
+DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 #include <asm/segment.h>
 
@@ -340,6 +341,17 @@ static inline void mds_clear_cpu_buffers
 	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
 }
 
+/**
+ * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+static inline void mds_idle_clear_cpu_buffers(void)
+{
+	if (static_branch_likely(&mds_idle_clear))
+		mds_clear_cpu_buffers();
+}
+
 #endif /* __ASSEMBLY__ */
 
 /*
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always
 
 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
+/* Control MDS CPU buffer clear before idling (halt, mwait) */
+DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 
 void __init check_bugs(void)
 {

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 06/10] MDS basics+ 6
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (4 preceding siblings ...)
  2019-02-20 15:07 ` [patch V2 05/10] MDS basics+ 5 Thomas Gleixner
@ 2019-02-20 15:07 ` Thomas Gleixner
  2019-02-21 10:18   ` [MODERATED] " Borislav Petkov
  2019-02-20 15:08 ` [patch V2 07/10] MDS basics+ 7 Thomas Gleixner
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:07 UTC (permalink / raw)
  To: speck

Subject: [patch V2 06/10] x86/speculation/mds: Add mitigation control for MDS
From: Thomas Gleixner <tglx@linutronix.de>

Now that the mitigations are in place, add a command line parameter to
control the mitigation, a mitigation selector function and an SMT update
mechanism.

This is the minimal straightforward initial implementation which just
provides an always on/off mode. The command line parameter is:

  mds=[full|off|auto]

This is consistent with the existing mitigations for other speculative
hardware vulnerabilities.

The idle invocation is dynamically updated according to the SMT state of
the system similar to the dynamic update of the STIBP mitigation.
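
The SMT update reduces to flipping the idle static key (as added in the
bugs.c hunk below):

	static void update_mds_branch_idle(void)
	{
		if (sched_smt_active())
			static_branch_enable(&mds_idle_clear);
		else
			static_branch_disable(&mds_idle_clear);
	}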

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |   27 ++++++++
 arch/x86/include/asm/processor.h                |    6 +
 arch/x86/kernel/cpu/bugs.c                      |   76 ++++++++++++++++++++++++
 3 files changed, 109 insertions(+)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2362,6 +2362,33 @@
 			Format: <first>,<last>
 			Specifies range of consoles to be captured by the MDA.
 
+	mds=		[X86,INTEL]
+			Control mitigation for the Micro-architectural Data
+			Sampling (MDS) vulnerability.
+
+			Certain CPUs are vulnerable to an exploit against CPU
+			internal buffers which can forward information to a
+			disclosure gadget under certain conditions.
+
+			In vulnerable processors, the speculatively
+			forwarded data can be used in a cache side channel
+			attack, to access data to which the attacker does
+			not have direct access.
+
+			This parameter controls the MDS mitigation. The
+			options are:
+
+			full    - Unconditionally enable MDS mitigation
+			off     - Unconditionally disable MDS mitigation
+			auto    - Kernel detects whether the CPU model is
+				  vulnerable to MDS and picks the most
+				  appropriate mitigation. If the CPU is not
+				  vulnerable, "off" is selected. If the CPU
+				  is vulnerable, "full" is selected.
+
+			Not specifying this option is equivalent to
+			mds=auto.
+
 	mem=nn[KMG]	[KNL,BOOT] Force usage of a specific amount of memory
 			Amount of memory to be used when the kernel is not able
 			to see the whole system memory or for test.
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -991,4 +991,10 @@ enum l1tf_mitigations {
 
 extern enum l1tf_mitigations l1tf_mitigation;
 
+enum mds_mitigations {
+	MDS_MITIGATION_OFF,
+	MDS_MITIGATION_AUTO,
+	MDS_MITIGATION_FULL,
+};
+
 #endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -37,6 +37,7 @@
 static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
+static void __init mds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -105,6 +106,8 @@ void __init check_bugs(void)
 
 	l1tf_select_mitigation();
 
+	mds_select_mitigation();
+
 #ifdef CONFIG_X86_32
 	/*
 	 * Check whether we are able to run this kernel safely on SMP.
@@ -211,6 +214,59 @@ static void x86_amd_ssb_disable(void)
 }
 
 #undef pr_fmt
+#define pr_fmt(fmt)	"MDS: " fmt
+
+/* Default mitigation for MDS-affected CPUs */
+static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_AUTO;
+
+static const char * const mds_strings[] = {
+	[MDS_MITIGATION_OFF]	= "Vulnerable",
+	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers"
+};
+
+static void mds_select_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MDS)) {
+		mds_mitigation = MDS_MITIGATION_OFF;
+		return;
+	}
+
+	switch (mds_mitigation) {
+	case MDS_MITIGATION_OFF:
+		break;
+	case MDS_MITIGATION_AUTO:
+	case MDS_MITIGATION_FULL:
+		if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) {
+			mds_mitigation = MDS_MITIGATION_FULL;
+			static_branch_enable(&mds_user_clear_always);
+		} else {
+			mds_mitigation = MDS_MITIGATION_OFF;
+		}
+		break;
+	}
+	pr_info("%s\n", mds_strings[mds_mitigation]);
+}
+
+static int __init mds_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MDS))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off"))
+		mds_mitigation = MDS_MITIGATION_OFF;
+	else if (!strcmp(str, "auto"))
+		mds_mitigation = MDS_MITIGATION_AUTO;
+	else if (!strcmp(str, "full"))
+		mds_mitigation = MDS_MITIGATION_FULL;
+
+	return 0;
+}
+early_param("mds", mds_cmdline);
+
+#undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
@@ -614,6 +670,15 @@ static void update_indir_branch_cond(voi
 		static_branch_disable(&switch_to_cond_stibp);
 }
 
+/* Update the static key controlling the MDS CPU buffer clear in idle */
+static void update_mds_branch_idle(void)
+{
+	if (sched_smt_active())
+		static_branch_enable(&mds_idle_clear);
+	else
+		static_branch_disable(&mds_idle_clear);
+}
+
 void arch_smt_update(void)
 {
 	/* Enhanced IBRS implies STIBP. No update required. */
@@ -635,6 +700,17 @@ void arch_smt_update(void)
 		break;
 	}
 
+	switch (mds_mitigation) {
+	case MDS_MITIGATION_OFF:
+		break;
+	case MDS_MITIGATION_FULL:
+		update_mds_branch_idle();
+		break;
+	/* Keep GCC happy */
+	case MDS_MITIGATION_AUTO:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 07/10] MDS basics+ 7
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (5 preceding siblings ...)
  2019-02-20 15:07 ` [patch V2 06/10] MDS basics+ 6 Thomas Gleixner
@ 2019-02-20 15:08 ` Thomas Gleixner
  2019-02-21 12:47   ` [MODERATED] " Borislav Petkov
  2019-02-20 15:08 ` [patch V2 08/10] MDS basics+ 8 Thomas Gleixner
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:08 UTC (permalink / raw)
  To: speck

Subject: [patch V2 07/10] x86/speculation/mds: Add sysfs reporting for MDS
From: Thomas Gleixner <tglx@linutronix.de>

Add the sysfs reporting file for MDS. It exposes the vulnerability and
mitigation state similar to the existing files for the other speculative
hardware vulnerabilities.
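
The output has the same shape as the other vulnerability files. An
illustrative example (the exact strings are composed from mds_strings[]
and the SMT state in mds_show_state() below):

	$ cat /sys/devices/system/cpu/vulnerabilities/mds
	Mitigation: Clear CPU buffers; SMT vulnerable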

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |    1 +
 arch/x86/kernel/cpu/bugs.c                         |   20 ++++++++++++++++++++
 drivers/base/cpu.c                                 |    6 ++++--
 include/linux/cpu.h                                |    2 ++
 4 files changed, 27 insertions(+), 2 deletions(-)

--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -484,6 +484,7 @@ What:		/sys/devices/system/cpu/vulnerabi
 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
+		/sys/devices/system/cpu/vulnerabilities/mds
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1187,6 +1187,17 @@ static ssize_t l1tf_show_state(char *buf
 }
 #endif
 
+static ssize_t mds_show_state(char *buf)
+{
+	if (!hypervisor_is_type(X86_HYPER_NATIVE)) {
+		return sprintf(buf, "%s; SMT Host state unknown\n",
+			       mds_strings[mds_mitigation]);
+	}
+
+	return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
+		       sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1253,6 +1264,10 @@ static ssize_t cpu_show_common(struct de
 		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
 			return l1tf_show_state(buf);
 		break;
+
+	case X86_BUG_MDS:
+		return mds_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1284,4 +1299,9 @@ ssize_t cpu_show_l1tf(struct device *dev
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
 }
+
+ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
+}
 #endif
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -540,8 +540,8 @@ ssize_t __weak cpu_show_spec_store_bypas
 	return sprintf(buf, "Not affected\n");
 }
 
-ssize_t __weak cpu_show_l1tf(struct device *dev,
-			     struct device_attribute *attr, char *buf)
+ssize_t __weak cpu_show_mds(struct device *dev,
+			    struct device_attribute *attr, char *buf)
 {
 	return sprintf(buf, "Not affected\n");
 }
@@ -551,6 +551,7 @@ static DEVICE_ATTR(spectre_v1, 0444, cpu
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
 static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -558,6 +559,7 @@ static struct attribute *cpu_root_vulner
 	&dev_attr_spectre_v2.attr,
 	&dev_attr_spec_store_bypass.attr,
 	&dev_attr_l1tf.attr,
+	&dev_attr_mds.attr,
 	NULL
 };
 
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -57,6 +57,8 @@ extern ssize_t cpu_show_spec_store_bypas
 					  struct device_attribute *attr, char *buf);
 extern ssize_t cpu_show_l1tf(struct device *dev,
 			     struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_mds(struct device *dev,
+			    struct device_attribute *attr, char *buf);
 
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 08/10] MDS basics+ 8
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (6 preceding siblings ...)
  2019-02-20 15:08 ` [patch V2 07/10] MDS basics+ 7 Thomas Gleixner
@ 2019-02-20 15:08 ` Thomas Gleixner
  2019-02-21 14:04   ` [MODERATED] " Borislav Petkov
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:08 UTC (permalink / raw)
  To: speck

In virtualized environments it can happen that the host has the microcode
update which utilizes the VERW instruction to clear CPU buffers, but the
hypervisor is not yet updated to expose the X86_FEATURE_MD_CLEAR CPUID bit
to guests.

Introduce an internal mitigation mode 'HOPE' which enables the invocation
of the CPU buffer clearing even if X86_FEATURE_MD_CLEAR is not set. If the
system has no updated microcode this results in a pointless execution of
the VERW instruction wasting a few CPU cycles. If the microcode is updated,
but not exposed to a guest then the CPU buffers will be cleared.
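
The selector then unconditionally enables the user clear static key and
merely adjusts the reported state (sketch of the resulting logic; see
the bugs.c hunk below):

	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
		mds_mitigation = MDS_MITIGATION_FULL;
	else
		mds_mitigation = MDS_MITIGATION_HOPE;
	static_branch_enable(&mds_user_clear_always);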

Hope dies last....

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/x86/mds.rst        |    5 +++++
 arch/x86/include/asm/processor.h |    1 +
 arch/x86/kernel/cpu/bugs.c       |   14 ++++++++------
 3 files changed, 14 insertions(+), 6 deletions(-)

--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -65,6 +65,11 @@ The mitigation is invoked on kernel/user
 (idle) transitions. Depending on the mitigation mode and the system state
 the invocation can be enforced or conditional.
 
+As a special quirk to address virtualization scenarios where the host has
+the microcode updated, but the hypervisor does not (yet) expose the
+MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
+hope that it might work. The state is reflected accordingly.
+
 According to current knowledge additional mitigations inside the kernel
 itself are not required because the necessary gadgets to expose the leaked
 data cannot be controlled in a way which allows exploitation from malicious
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -995,6 +995,7 @@ enum mds_mitigations {
 	MDS_MITIGATION_OFF,
 	MDS_MITIGATION_AUTO,
 	MDS_MITIGATION_FULL,
+	MDS_MITIGATION_HOPE,
 };
 
 #endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -221,7 +221,8 @@ static enum mds_mitigations mds_mitigati
 
 static const char * const mds_strings[] = {
 	[MDS_MITIGATION_OFF]	= "Vulnerable",
-	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers"
+	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers",
+	[MDS_MITIGATION_HOPE]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
 };
 
 static void mds_select_mitigation(void)
@@ -236,12 +237,12 @@ static void mds_select_mitigation(void)
 		break;
 	case MDS_MITIGATION_AUTO:
 	case MDS_MITIGATION_FULL:
-		if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) {
+	case MDS_MITIGATION_HOPE:
+		if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_FULL;
-			static_branch_enable(&mds_user_clear_always);
-		} else {
-			mds_mitigation = MDS_MITIGATION_OFF;
-		}
+		else
+			mds_mitigation = MDS_MITIGATION_HOPE;
+		static_branch_enable(&mds_user_clear_always);
 		break;
 	}
 	pr_info("%s\n", mds_strings[mds_mitigation]);
@@ -704,6 +705,7 @@ void arch_smt_update(void)
 	case MDS_MITIGATION_OFF:
 		break;
 	case MDS_MITIGATION_FULL:
+	case MDS_MITIGATION_HOPE:
 		update_mds_branch_idle();
 		break;
 	/* Keep GCC happy */

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 09/10] MDS basics+ 9
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (7 preceding siblings ...)
  2019-02-20 15:08 ` [patch V2 08/10] MDS basics+ 8 Thomas Gleixner
@ 2019-02-20 15:08 ` Thomas Gleixner
  2019-02-20 16:21   ` [MODERATED] " Peter Zijlstra
                     ` (3 more replies)
  2019-02-20 15:08 ` [patch V2 10/10] MDS basics+ 10 Thomas Gleixner
  2019-02-22 16:05 ` [MODERATED] Re: [patch V2 00/10] MDS basics+ 0 mark gross
  10 siblings, 4 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:08 UTC (permalink / raw)
  To: speck

To avoid the expensive CPU buffer flushing on every transition from kernel
to user space it is desired to provide a conditional mitigation mode.

Provide the infrastructure which is required to implement this:

 - A static key to enable conditional mode CPU buffer flushing.

 - A per CPU variable which indicates that CPU buffers need to
   be flushed on return to user space. The variable is defined
   next to __preempt_count so it ends up in a cacheline which
   is required on return to user space anyway.

 - The conditional flush mechanics on return to user space.

 - A helper function to set the flush request (see the sketch below). It
   lives in processor.h for now to avoid include hell, but might move to
   a separate header.
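
The request side is deliberately cheap (sketch, from the processor.h
hunk below):

	static inline void mds_request_buffer_clear(void)
	{
		if (static_branch_likely(&mds_user_clear_cond))
			this_cpu_write(mds_cond_clear, 1);
	}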

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/common.c              |    6 ++++++
 arch/x86/include/asm/nospec-branch.h |    3 +++
 arch/x86/include/asm/processor.h     |   13 +++++++++++++
 arch/x86/kernel/cpu/bugs.c           |    1 +
 arch/x86/kernel/cpu/common.c         |    7 +++++++
 5 files changed, 30 insertions(+)

--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -183,6 +183,12 @@ static void exit_to_usermode_loop(struct
 
 static inline void mds_user_clear_cpu_buffers(void)
 {
+	if (static_branch_likely(&mds_user_clear_cond)) {
+		if (__this_cpu_read(mds_cond_clear)) {
+			__this_cpu_write(mds_cond_clear, 0);
+			mds_clear_cpu_buffers();
+		}
+	}
 	if (static_branch_likely(&mds_user_clear_always))
 		mds_clear_cpu_buffers();
 }
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -9,6 +9,7 @@
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
+#include <asm/percpu.h>
 
 /*
  * Fill the CPU return stack buffer.
@@ -319,7 +320,9 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
+DECLARE_STATIC_KEY_FALSE(mds_user_clear_cond);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+DECLARE_PER_CPU(unsigned int, mds_cond_clear);
 
 #include <asm/segment.h>
 
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -24,6 +24,7 @@ struct vm86;
 #include <asm/special_insns.h>
 #include <asm/fpu/types.h>
 #include <asm/unwind_hints.h>
+#include <asm/nospec-branch.h>
 
 #include <linux/personality.h>
 #include <linux/cache.h>
@@ -998,4 +999,16 @@ enum mds_mitigations {
 	MDS_MITIGATION_HOPE,
 };
 
+/**
+ * mds_request_buffer_clear - Set the request to clear CPU buffers
+ *
+ * This is invoked from contexts which identify a necessity to clear CPU
+ * buffers on the next return to user space.
+ */
+static inline void mds_request_buffer_clear(void)
+{
+	if (static_branch_likely(&mds_user_clear_cond))
+		this_cpu_write(mds_cond_clear, 1);
+}
+
 #endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -66,6 +66,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always
 
 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
+DEFINE_STATIC_KEY_FALSE(mds_user_clear_cond);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -8,6 +8,7 @@
 #include <linux/export.h>
 #include <linux/percpu.h>
 #include <linux/string.h>
+#include <linux/nospec.h>
 #include <linux/ctype.h>
 #include <linux/delay.h>
 #include <linux/sched/mm.h>
@@ -1544,6 +1545,9 @@ DEFINE_PER_CPU(unsigned int, irq_count)
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);
 
+/* Indicator for return to user space or VMENTER to clear CPU buffers */
+DEFINE_PER_CPU(unsigned int, mds_cond_clear);
+
 /* May not be marked __init: used by software suspend */
 void syscall_init(void)
 {
@@ -1617,6 +1621,9 @@ EXPORT_PER_CPU_SYMBOL(current_task);
 DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
 EXPORT_PER_CPU_SYMBOL(__preempt_count);
 
+/* Indicator for return to user space or VMENTER to clear CPU buffers */
+DEFINE_PER_CPU(unsigned int, mds_cond_clear);
+
 /*
  * On x86_32, vm86 modifies tss.sp0, so sp0 isn't a reliable way to find
  * the top of the kernel stack.  Use an extra percpu variable to track the

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [patch V2 10/10] MDS basics+ 10
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (8 preceding siblings ...)
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
@ 2019-02-20 15:08 ` Thomas Gleixner
  2019-02-22 16:05 ` [MODERATED] Re: [patch V2 00/10] MDS basics+ 0 mark gross
  10 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 15:08 UTC (permalink / raw)
  To: speck

Utilize the already existing information in switch_mm_irqs_off() to request
CPU buffer clearing on return to user space for conditional MDS mitigation:

  - Switching between two processes

  - Switching back and forth between process and kernel thread. This
    utilizes the lazy mm mechanism, which already provides all required
    conditionals.

In both cases the flush is requested to prevent potential leakage of data
either from the previous process or the kernel thread.
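
Both call sites invoke the helper introduced in the previous patch (see
the tlb.c hunk below):

	/*
	 * In switch_mm_irqs_off(), for both the lazy mm and the
	 * process switch case:
	 */
	mds_request_buffer_clear();

On the next return to user space the conditional path in
mds_user_clear_cpu_buffers() consumes the per CPU flag and issues the
VERW based flush.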

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/tlb.c |   18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -351,6 +351,16 @@ void switch_mm_irqs_off(struct mm_struct
 			return;
 
 		/*
+		 * The context switch sequence was: USER -> KERNEL -> USER.
+		 *
+		 * For CPUs which are affected by MDS this is a condition
+		 * to enforce flushing of CPU buffers before returning
+		 * to user space to prevent potential leakage of data which
+		 * was touched by the kernel thread.
+		 */
+		mds_request_buffer_clear();
+
+		/*
 		 * Read the tlb_gen to check whether a flush is needed.
 		 * If the TLB is up to date, just use it.
 		 * The barrier synchronizes with the tlb_gen increment in
@@ -376,6 +386,14 @@ void switch_mm_irqs_off(struct mm_struct
 		 */
 		cond_ibpb(tsk);
 
+		/*
+		 * Switching to a different process triggers flushing of
+		 * CPU buffers before returning to user space to prevent
+		 * potential leakage of data which was touched by the
+		 * previous process or by a kernel thread.
+		 */
+		mds_request_buffer_clear();
+
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
 			 * If our current stack is in vmalloc space and isn't

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
@ 2019-02-20 16:21   ` Peter Zijlstra
  2019-02-20 22:32     ` Thomas Gleixner
  2019-02-21 11:04   ` [MODERATED] " Peter Zijlstra
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 94+ messages in thread
From: Peter Zijlstra @ 2019-02-20 16:21 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
>  static inline void mds_user_clear_cpu_buffers(void)
>  {
> +	if (static_branch_likely(&mds_user_clear_cond)) {

Are we sure we want _likely() ? That puts the body in-line.

> +		if (__this_cpu_read(mds_cond_clear)) {
> +			__this_cpu_write(mds_cond_clear, 0);
> +			mds_clear_cpu_buffers();
> +		}
		return;

_might_ generate better code; by making it explicit we'll never have
both branches enabled at the same time.

> +	}
>  	if (static_branch_likely(&mds_user_clear_always))
>  		mds_clear_cpu_buffers();
>  }

And in general; yuck! Should I once again look at tri-state jump_labels
or static_switch? Last time I tried I got stuck on the C part, writing
different targets is trivial.

Ideally we'd end up with something like:

1:	jmp 3f
	test $1, %gs:mds_cond_clear
	jz 3f
	mov $0, %gs:mds_cond_clear
2:	verw ds
3:

And change the jmp at 1 between: JMP 3f, JMP 2f and NOP for
mds={off,full,cond} resp.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 01/10] MDS basics+ 1
  2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
@ 2019-02-20 16:27   ` Borislav Petkov
  2019-02-20 16:46   ` Greg KH
  1 sibling, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-20 16:27 UTC (permalink / raw)
  To: speck

This is fine-tooth comb review, round 2.

On Wed, Feb 20, 2019 at 04:07:54PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 01/10] x86/speculation/mds: Add basic bug infrastructure for MDS
> From: Andi Kleen <ak@linux.intel.com>
> 
> Microarchitectural Data Sampling (MDS), is a class of side channel attacks

s/,//

> on internal buffers in Intel CPUs. The variants are:
> 
>  - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
>  - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
>  - Microarchitectural Load Port Data (MLPDS) (CVE-2018-12127)
				      ^
				      Sampling

> MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
> dependent load (store-load forwarding) as an optimization. The forward can

AFAIK, that's called "store-to-load" forwarding, abbreviated as STLF in
CPU speak:

https://en.wikipedia.org/wiki/Memory_disambiguation#Store_to_load_forwarding

> also happen to a faulting or assisting load operation for a different
> memory address, which can be exploited under certain conditions. Store
> buffers are partitionened between Hyper-Threads so cross thread forwarding

partitioned

> is not possible. But if a thread enters or exits a sleep state the store
> buffer is repartioned which can expose data from one thread to the other.

"... repartitioned, "

> 
> MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
> L1 miss situations and to hold data which is returned or sent in response
> to a memory or I/O operation. Fill buffers can forward data to a load
> operation and also write data to the cache. When the fill buffer is
> deallocated it can retain the stale data of the preceeding operations which

WARNING: 'preceeding' may be misspelled - perhaps 'preceding'?

> can then be forwarded to a faulting or assisting load operation, which can
> be exploited under certain conditions. Fill buffers are shared between
> Hyper-Threads so cross thread leakage is possible.
> 
> MLDPS leaks Load Port Data. Load ports are used to perform load operations
> from memory or I/O. The received data is then forwarded to the register
> file or a subsequent operation. In some implementations the Load Port can
> contain stale data from a previous operation which can be forwarded to
> faulting or assisting loads under certain conditions, which again can be
> exploited eventually. Load poorts are shared between Hyper-Threads so cross

ports

> thread leakage is possible.
> 
> All variants have the same mitigation for single CPU thread case (SMT off),
> so the kernel can treat them as one MDS issue.
> 
> Add the basic infrastructure to detect if the current CPU is affected by
> MDS.
> 
> [ tglx: Rewrote changelog ]
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> ---
>  arch/x86/include/asm/cpufeatures.h |    2 ++
>  arch/x86/include/asm/msr-index.h   |    5 +++++
>  arch/x86/kernel/cpu/common.c       |   13 +++++++++++++
>  3 files changed, 20 insertions(+)
> 
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -344,6 +344,7 @@
>  /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
>  #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
>  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
> +#define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW flushs CPU state */
							   ^
							   flushes


>  #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
>  #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
>  #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
> @@ -381,5 +382,6 @@
>  #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
>  #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
>  #define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
> +#define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
>  
>  #endif /* _ASM_X86_CPUFEATURES_H */

With that addressed:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 01/10] MDS basics+ 1
  2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
  2019-02-20 16:27   ` [MODERATED] " Borislav Petkov
@ 2019-02-20 16:46   ` Greg KH
  1 sibling, 0 replies; 94+ messages in thread
From: Greg KH @ 2019-02-20 16:46 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:54PM +0100, speck for Thomas Gleixner wrote:
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -77,6 +77,11 @@
>  						    * attack, so no Speculative Store Bypass
>  						    * control required.
>  						    */
> +#define ARCH_CAP_MDS_NO			(1 << 5)   /*
> +						    * Not susceptible to
> +						    * Microarchitectural Data
> +						    * Sampling (MDS) vulnerabilities.
> +						    */

Nit, maybe BIT(5)?  I know the others in this list are not using BIT,
but other parts of this file are.
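
Something like this is what I mean, just as a sketch (assuming
msr-index.h pulls in <linux/bits.h> so BIT() is visible; note that
BIT(5) is an unsigned long while (1 << 5) is a plain int):

	#define ARCH_CAP_MDS_NO			BIT(5)	   /*
							    * Not susceptible to
							    * Microarchitectural Data
							    * Sampling (MDS)
							    * vulnerabilities.
							    */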

Anyway, not a big deal:

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 02/10] MDS basics+ 2
  2019-02-20 15:07 ` [patch V2 02/10] MDS basics+ 2 Thomas Gleixner
@ 2019-02-20 16:47   ` Borislav Petkov
  2019-02-20 16:48   ` Greg KH
  1 sibling, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-20 16:47 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:55PM +0100, speck for Thomas Gleixner wrote:
> From: Andi Kleen <ak@linux.intel.com>
> Subject: [patch V2 02/10] x86/kvm: Expose X86_FEATURE_MD_CLEAR to guests
> 
> X86_FEATURE_MD_CLEAR is a new CPUID bit which is set when microcode
> provides the mechanism to invoke a flush of various exploitable CPU buffers
> by invoking the VERW instruction.
> 
> Hand it through to guests so they can adjust their mitigations.
> 
> This also requires corresponding qemu changes, which are available
> seperately.

WARNING: 'seperately' may be misspelled - perhaps 'separately'?

With that:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 02/10] MDS basics+ 2
  2019-02-20 15:07 ` [patch V2 02/10] MDS basics+ 2 Thomas Gleixner
  2019-02-20 16:47   ` [MODERATED] " Borislav Petkov
@ 2019-02-20 16:48   ` Greg KH
  1 sibling, 0 replies; 94+ messages in thread
From: Greg KH @ 2019-02-20 16:48 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:55PM +0100, speck for Thomas Gleixner wrote:
> From: Andi Kleen <ak@linux.intel.com>
> Subject: [patch V2 02/10] x86/kvm: Expose X86_FEATURE_MD_CLEAR to guests
> 
> X86_FEATURE_MD_CLEAR is a new CPUID bit which is set when microcode
> provides the mechanism to invoke a flush of various exploitable CPU buffers
> by invoking the VERW instruction.
> 
> Hand it through to guests so they can adjust their mitigations.
> 
> This also requires corresponding qemu changes, which are available
> seperately.

"separately"  :)

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 04/10] MDS basics+ 4
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
@ 2019-02-20 16:52   ` Greg KH
  2019-02-20 17:10   ` mark gross
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 94+ messages in thread
From: Greg KH @ 2019-02-20 16:52 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:57PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 04/10] x86/speculation/mds: Clear CPU buffers on exit to user
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a static key which controls the invocation of the CPU buffer clear
> mechanism on exit to user space and add the call into
> prepare_exit_to_usermode() right before actually returning.
> 
> Add documentation which kernel to user space transition this covers and
> explain in detail why those which are not mitigated do not need it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
@ 2019-02-20 16:54   ` mark gross
  2019-02-20 16:57     ` Thomas Gleixner
  2019-02-20 17:14   ` [MODERATED] " Borislav Petkov
  2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
  2 siblings, 1 reply; 94+ messages in thread
From: mark gross @ 2019-02-20 16:54 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:56PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 03/10] x86/speculation/mds: Add mds_clear_cpu_buffer()
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The Microarchitectural Data Sampling (MDS) vulernabilities are mitigated by
> clearing the affected CPU buffers. The mechanism for clearing the buffers
> uses the unused and obsolete VERW instruction in combination with a
> microcode update which triggers a CPU buffer clear when VERW is executed.
> 
> Provide a inline function with the assembly magic. The argument of the VERW
> instruction must be a memory operand.
> 
> Add x86 specific documentation about MDS and the internal workings of the
> mitigation.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> V1 --> V2: Add "cc" clobber and documentation
> ---
>  Documentation/index.rst              |    1 
>  Documentation/x86/conf.py            |   10 ++++
>  Documentation/x86/index.rst          |    8 +++
>  Documentation/x86/mds.rst            |   72 +++++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/nospec-branch.h |   20 +++++++++
>  5 files changed, 111 insertions(+)
> 
> --- a/Documentation/index.rst
> +++ b/Documentation/index.rst
> @@ -101,6 +101,7 @@ implementation.
>     :maxdepth: 2
>  
>     sh/index
> +   x86/index
>  
>  Filesystem Documentation
>  ------------------------
> --- /dev/null
> +++ b/Documentation/x86/conf.py
> @@ -0,0 +1,10 @@
> +# -*- coding: utf-8; mode: python -*-
> +
> +project = "X86 architecture specific documentation"
> +
> +tags.add("subproject")
> +
> +latex_documents = [
> +    ('index', 'x86.tex', project,
> +     'The kernel development community', 'manual'),
> +]
> --- /dev/null
> +++ b/Documentation/x86/index.rst
> @@ -0,0 +1,8 @@
> +==========================
> +x86 architecture specifics
> +==========================
> +
> +.. toctree::
> +   :maxdepth: 1
> +
> +   mds
> --- /dev/null
> +++ b/Documentation/x86/mds.rst
> @@ -0,0 +1,72 @@
> +Microarchitecural Data Sampling (MDS) mitigation
> +================================================
> +
> +Microarchitectural Data Sampling (MDS), is a class of side channel attacks
> +on internal buffers in Intel CPUs. The variants are:
> +
> + - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
> + - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
> + - Microarchitectural Load Port Data (MLPDS) (CVE-2018-12127)
> +
> +MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
> +dependent load (store-load forwarding) as an optimization. The forward can
> +also happen to a faulting or assisting load operation for a different
> +memory address, which can be exploited under certain conditions. Store
> +buffers are partitionened between Hyper-Threads so cross thread forwarding
> +is not possible. But if a thread enters or exits a sleep state the store
> +buffer is repartioned which can expose data from one thread to the other.
> +
> +MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
> +L1 miss situations and to hold data which is returned or sent in response
> +to a memory or I/O operation. Fill buffers can forward data to a load
> +operation and also write data to the cache. When the fill buffer is
> +deallocated it can retain the stale data of the preceeding operations which
> +can then be forwarded to a faulting or assisting load operation, which can
> +be exploited under certain conditions. Fill buffers are shared between
> +Hyper-Threads so cross thread leakage is possible.
> +
> +MLDPS leaks Load Port Data. Load ports are used to perform load operations
> +from memory or I/O. The received data is then forwarded to the register
> +file or a subsequent operation. In some implementations the Load Port can
> +contain stale data from a previous operation which can be forwarded to
> +faulting or assisting loads under certain conditions, which again can be
> +exploited eventually. Load ports are shared between Hyper-Threads so cross
> +thread leakage is possible.
> +
> +Mitigation strategy
> +-------------------
> +
> +All variants have the same mitigation strategy at least for the single CPU
> +thread case (SMT off): Force the CPU to clear the affected buffers.
> +
> +This is achieved by using the otherwise unused and obsolete VERW
> +instruction in combination with a microcode update. The microcode clears
> +the affected CPU buffers when the VERW instruction is executed.
> +
> +For virtualization there are two ways to achieve CPU buffer
> +clearing. Either the modified VERW instruction or via the L1D Flush
> +command. The latter is issued when L1TF mitigation is enabled so the extra
> +VERW can be spared. If the CPU is not affected by L1TF then VERW needs to
> +be issued.
> +
> +If the VERW instruction with the supplied segment selector argument is
> +executed on a CPU without the microcode update there is no side effect
> +other than a small number of pointlessly wasted CPU cycles.
> +
> +This does not protect against cross Hyper-Thread attacks except for MSBDS
> +which is only exploitable cross Hyper-thread when one of the Hyper-Threads
> +enters a C-state.
> +
> +The kernel provides a function to invoke the buffer clearing:
> +
> +    mds_clear_cpu_buffers()
> +
> +The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
> +(idle) transitions. Depending on the mitigation mode and the system state
> +the invocation can be enforced or conditional.
> +
> +According to current knowledge additional mitigations inside the kernel
> +itself are not required because the necessary gadgets to expose the leaked
> +data cannot be controlled in a way which allows exploitation from malicious
> +user space or VM guests.
> +
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,6 +318,26 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +#include <asm/segment.h>
> +
> +/**
> + * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
> + *
> + * This uses the otherwise unused and obsolete VERW instruction in
> + * combination with microcode which triggers a CPU buffer flush when the
> + * instruction is executed.
> + */
> +static inline void mds_clear_cpu_buffers(void)
> +{
> +	static const u16 ds = __KERNEL_DS;
> +
> +	/*
> +	 * Has to be memory form, don't modify to use a register. VERW
> +	 * modifies ZF.
> +	 */
> +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  /*
>
Perhaps a dumb question, but is there any point to including a fallback ASM
alternative for platforms without the uCode update enabling VERW to clear
buffers?

--mark

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 16:54   ` [MODERATED] " mark gross
@ 2019-02-20 16:57     ` Thomas Gleixner
  2019-02-20 18:08       ` [MODERATED] " mark gross
  0 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 16:57 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for mark gross wrote:
> On Wed, Feb 20, 2019 at 04:07:56PM +0100, speck for Thomas Gleixner wrote:
> > +static inline void mds_clear_cpu_buffers(void)
> > +{
> > +	static const u16 ds = __KERNEL_DS;
> > +
> > +	/*
> > +	 * Has to be memory form, don't modify to use a register. VERW
> > +	 * modifies ZF.
> > +	 */
> > +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
> > +}
> > +
> >  #endif /* __ASSEMBLY__ */
> >  
> >  /*
> >
> Perhaps a dumb question, but is there any point to including a fallback ASM
> alternative for platforms without the uCode update enabling VERW to clear
> buffers?

See 8/10 ....

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 04/10] MDS basics+ 4
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
  2019-02-20 16:52   ` [MODERATED] " Greg KH
@ 2019-02-20 17:10   ` mark gross
  2019-02-21 19:26     ` [MODERATED] Encrypted Message Tim Chen
  2019-02-20 18:43   ` [MODERATED] Re: [patch V2 04/10] MDS basics+ 4 Borislav Petkov
  2019-02-20 19:26   ` Jiri Kosina
  3 siblings, 1 reply; 94+ messages in thread
From: mark gross @ 2019-02-20 17:10 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:57PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 04/10] x86/speculation/mds: Clear CPU buffers on exit to user
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a static key which controls the invocation of the CPU buffer clear
> mechanism on exit to user space and add the call into
> prepare_exit_to_usermode() right before actually returning.
> 
> Add documentation which kernel to user space transition this covers and
> explain in detail why those which are not mitigated do not need it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/x86/mds.rst            |   79 +++++++++++++++++++++++++++++++++++
>  arch/x86/entry/common.c              |    9 +++
>  arch/x86/include/asm/nospec-branch.h |    2 
>  arch/x86/kernel/cpu/bugs.c           |    4 +
>  4 files changed, 93 insertions(+), 1 deletion(-)
> 
> --- a/Documentation/x86/mds.rst
> +++ b/Documentation/x86/mds.rst
> @@ -64,3 +64,82 @@ itself are not required because the nece
>  data cannot be controlled in a way which allows exploitation from malicious
>  user space or VM guests.
>  
> +Mitigation points
> +-----------------
> +
> +1. Return to user space
> +^^^^^^^^^^^^^^^^^^^^^^^
> +   When transition from kernel to user space the CPU buffers are flushed
> +   on affected CPUs:
> +
> +   - always when the mitigation mode is full. In this case the invocation
> +     depends on the static key mds_user_clear_always.
> +
> +   - depending on executed functions between entering kernel space and
> +     returning to user space. This is not yet implemented.
> +
> +   This covers transitions from kernel to user space through a return to
> +   user space from a syscall and from an interrupt or a regular exception.
> +
> +   There are other kernel to user space transitions which are not covered
> +   by this: NMIs and all non maskable exceptions which go through the
> +   paranoid exit, which means that they are not going to the regular
> +   prepare_exit_to_usermode() exit path which handles the CPU buffer
> +   clearing.
> +
> +   The occasional non maskable exceptions which go through paranoid exit
> +   are not controllable by user space in any way and most of these
> +   exceptions cannot expose any valuable information either.
> +
> +   Neither can NMIs be reliably controlled by a non priviledged attacker
> +   and their exposure to sensitive data is very limited. NMIs originate
> +   from:
> +
> +      - Performance monitoring.
> +
> +	Performance monitoring is restricted by various mechanisms, i.e. a
> +	regular user on a properly secured system can- if at all - only
> +	monitor it's own user space processes. The performance monitoring
> +	NMI surely executes priviledged kernel code and accesses kernel
> +	internal data structures, which might be exploitable to break the
> +	kernel's address space layout randomization, which is a non-issue
> +	on affected CPUs as there are simpler ways to achieve that.
> +
> +      - Watchdog
> +
> +	The kernel uses - if enabled - a performance monitoring event to
> +	trigger NMIs periodically which allow detection of hard lockups in
> +	kernel space due to deadlocks or other issues.
> +
> +	The watchdog period is a multiple of seconds and the code path
> +	executed cannot expose any secret information other than kernel
> +	address space layout. Due to the low frequency and a limited
> +	control of a potential attacker to align on the watchdog period the
> +	attack surface is close to zero.
> +
> +      - Legacy oprofile NMI handler
> +
> +	Similar to performance monitoring, albeit potentially less
> +	restricted, but has been widely replaced by the performance
> +	monitoring interface perf. State of the art systems will not expose
> +	the oprofile interface and even if exposed the potentially
> +	exploitable information is accessible by other and simpler means.
> +
> +      - KGBD
> +
> +        If the kernel debugger is accessible by an unpriviledged attacker,
> +        then the NMI handler is the least of the problems.
> +
> +      - ACPI/GHES
> +
> +        A firmware based error reporting mechanism which uses NMIs for
> +        notification. Similar to Machine Check Exceptions there is no known
> +        way for an attacker to reliably control and trigger errors which
> +        would cause NMIs. Even if that would be the case the potentially
> +        exploitable data, e.g. kernel address space layout, would be
> +        accessible by simpler means.
> +
> +      - IPMI, vendor specific NMIs, forced shutdown NMI
> +
> +	None of those are controllable by unpriviledged attackers to form a
> +	reliable exploit surface.
I agree we need some balance between paranoia and reality.  

However, if I'm being pedantic, the "attacker has no controllability" aspect
of your argument can apply to most aspects of the MDS vulnerability.  I think
that's why its name uses "data sampling".  Also, I need to ask the chip heads
whether this list of NMIs is complete and can be expected to stay that way
across processor and platform generations.

--mark

> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -31,6 +31,7 @@
>  #include <asm/vdso.h>
>  #include <linux/uaccess.h>
>  #include <asm/cpufeature.h>
> +#include <asm/nospec-branch.h>
>  
>  #define CREATE_TRACE_POINTS
>  #include <trace/events/syscalls.h>
> @@ -180,6 +181,12 @@ static void exit_to_usermode_loop(struct
>  	}
>  }
>  
> +static inline void mds_user_clear_cpu_buffers(void)
> +{
> +	if (static_branch_likely(&mds_user_clear_always))
> +		mds_clear_cpu_buffers();
> +}
> +
>  /* Called with IRQs disabled. */
>  __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
>  {
> @@ -212,6 +219,8 @@ static void exit_to_usermode_loop(struct
>  #endif
>  
>  	user_enter_irqoff();
> +
> +	mds_user_clear_cpu_buffers();
>  }
>  
>  #define SYSCALL_EXIT_WORK_FLAGS				\
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,6 +318,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
> +
>  #include <asm/segment.h>
>  
>  /**
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -63,10 +63,12 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_i
>  /* Control unconditional IBPB in switch_mm() */
>  DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +/* Control MDS CPU buffer clear before returning to user space */
> +DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
> +
>  void __init check_bugs(void)
>  {
>  	identify_boot_cpu();
> -
>  	/*
>  	 * identify_boot_cpu() initialized SMT support information, let the
>  	 * core code know.
> 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
  2019-02-20 16:54   ` [MODERATED] " mark gross
@ 2019-02-20 17:14   ` Borislav Petkov
  2019-02-20 21:31     ` Thomas Gleixner
  2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
  2 siblings, 1 reply; 94+ messages in thread
From: Borislav Petkov @ 2019-02-20 17:14 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:56PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 03/10] x86/speculation/mds: Add mds_clear_cpu_buffer()
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The Microarchitectural Data Sampling (MDS) vulernabilities are mitigated by

vulnerabilities

> clearing the affected CPU buffers. The mechanism for clearing the buffers
> uses the unused and obsolete VERW instruction in combination with a
> microcode update which triggers a CPU buffer clear when VERW is executed.
> 
> Provide a inline function with the assembly magic. The argument of the VERW
	  an

> instruction must be a memory operand.

Do we know why it has to be a memop?

> Add x86 specific documentation about MDS and the internal workings of the
> mitigation.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> V1 --> V2: Add "cc" clobber and documentation
> ---
>  Documentation/index.rst              |    1 
>  Documentation/x86/conf.py            |   10 ++++
>  Documentation/x86/index.rst          |    8 +++
>  Documentation/x86/mds.rst            |   72 +++++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/nospec-branch.h |   20 +++++++++
>  5 files changed, 111 insertions(+)
> 
> --- a/Documentation/index.rst
> +++ b/Documentation/index.rst
> @@ -101,6 +101,7 @@ implementation.
>     :maxdepth: 2
>  
>     sh/index
> +   x86/index
>  
>  Filesystem Documentation
>  ------------------------
> --- /dev/null
> +++ b/Documentation/x86/conf.py
> @@ -0,0 +1,10 @@
> +# -*- coding: utf-8; mode: python -*-
> +
> +project = "X86 architecture specific documentation"
> +
> +tags.add("subproject")
> +
> +latex_documents = [
> +    ('index', 'x86.tex', project,
> +     'The kernel development community', 'manual'),
> +]
> --- /dev/null
> +++ b/Documentation/x86/index.rst
> @@ -0,0 +1,8 @@
> +==========================
> +x86 architecture specifics
> +==========================
> +
> +.. toctree::
> +   :maxdepth: 1
> +
> +   mds
> --- /dev/null
> +++ b/Documentation/x86/mds.rst
> @@ -0,0 +1,72 @@
> +Microarchitecural Data Sampling (MDS) mitigation
> +================================================
> +
> +Microarchitectural Data Sampling (MDS), is a class of side channel attacks

s/,//

basically the same nitpicks as to the commit message of patch 1 :).

> +on internal buffers in Intel CPUs. The variants are:
> +
> + - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
> + - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
> + - Microarchitectural Load Port Data (MLPDS) (CVE-2018-12127)
> +
> +MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
> +dependent load (store-load forwarding) as an optimization. The forward can
> +also happen to a faulting or assisting load operation for a different
> +memory address, which can be exploited under certain conditions. Store
> +buffers are partitionened between Hyper-Threads so cross thread forwarding
> +is not possible. But if a thread enters or exits a sleep state the store
> +buffer is repartioned which can expose data from one thread to the other.
> +
> +MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
> +L1 miss situations and to hold data which is returned or sent in response
> +to a memory or I/O operation. Fill buffers can forward data to a load
> +operation and also write data to the cache. When the fill buffer is
> +deallocated it can retain the stale data of the preceeding operations which

WARNING: 'preceeding' may be misspelled - perhaps 'preceding'?

> +can then be forwarded to a faulting or assisting load operation, which can
> +be exploited under certain conditions. Fill buffers are shared between
> +Hyper-Threads so cross thread leakage is possible.
> +
> +MLDPS leaks Load Port Data. Load ports are used to perform load operations
> +from memory or I/O. The received data is then forwarded to the register
> +file or a subsequent operation. In some implementations the Load Port can
> +contain stale data from a previous operation which can be forwarded to
> +faulting or assisting loads under certain conditions, which again can be
> +exploited eventually. Load ports are shared between Hyper-Threads so cross
> +thread leakage is possible.
> +
> +Mitigation strategy
> +-------------------
> +
> +All variants have the same mitigation strategy at least for the single CPU
> +thread case (SMT off): Force the CPU to clear the affected buffers.
> +
> +This is achieved by using the otherwise unused and obsolete VERW
> +instruction in combination with a microcode update. The microcode clears
> +the affected CPU buffers when the VERW instruction is executed.
> +
> +For virtualization there are two ways to achieve CPU buffer
> +clearing. Either the modified VERW instruction or via the L1D Flush
> +command. The latter is issued when L1TF mitigation is enabled so the extra
> +VERW can be spared. If the CPU is not affected by L1TF then VERW needs to

maybe s/spared/avoided/

> +be issued.
> +
> +If the VERW instruction with the supplied segment selector argument is
> +executed on a CPU without the microcode update there is no side effect
> +other than a small number of pointlessly wasted CPU cycles.
> +
> +This does not protect against cross Hyper-Thread attacks except for MSBDS
> +which is only exploitable cross Hyper-thread when one of the Hyper-Threads
> +enters a C-state.
> +
> +The kernel provides a function to invoke the buffer clearing:
> +
> +    mds_clear_cpu_buffers()
> +
> +The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
> +(idle) transitions. Depending on the mitigation mode and the system state
> +the invocation can be enforced or conditional.
> +
> +According to current knowledge additional mitigations inside the kernel
> +itself are not required because the necessary gadgets to expose the leaked
> +data cannot be controlled in a way which allows exploitation from malicious
> +user space or VM guests.
> +
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,6 +318,26 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +#include <asm/segment.h>
> +
> +/**
> + * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
> + *
> + * This uses the otherwise unused and obsolete VERW instruction in
> + * combination with microcode which triggers a CPU buffer flush when the
> + * instruction is executed.
> + */
> +static inline void mds_clear_cpu_buffers(void)
> +{
> +	static const u16 ds = __KERNEL_DS;
> +
> +	/*
> +	 * Has to be memory form, don't modify to use a register. VERW
> +	 * modifies ZF.

Yeah, so did it say somewhere why it has to be a memory operand? Would
be useful to have that here in the comment.

With that:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 16:57     ` Thomas Gleixner
@ 2019-02-20 18:08       ` mark gross
  2019-02-20 21:40         ` Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: mark gross @ 2019-02-20 18:08 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 05:57:04PM +0100, speck for Thomas Gleixner wrote:
> On Wed, 20 Feb 2019, speck for mark gross wrote:
> > On Wed, Feb 20, 2019 at 04:07:56PM +0100, speck for Thomas Gleixner wrote:
> > > +static inline void mds_clear_cpu_buffers(void)
> > > +{
> > > +	static const u16 ds = __KERNEL_DS;
> > > +
> > > +	/*
> > > +	 * Has to be memory form, don't modify to use a register. VERW
> > > +	 * modifies ZF.
> > > +	 */
> > > +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
> > > +}
> > > +
> > >  #endif /* __ASSEMBLY__ */
> > >  
> > >  /*
> > >
> > > Perhaps a dumb question, but is there any point to including a fallback ASM
> > > alternative for platforms without the uCode update enabling VERW to clear
> > > buffers?
> 
> See 8/10 ....
I was just wrapping my head around that one.  I don't like the name for that
mode but I can't think of a better one other than MDS_GUEST_CHICKEN_BIT, which
is worse.

Anyway, Andi pointed out to me off list that the ASM sequences which could be
used when VERW is not available were rejected by Linus before I was on this
list.  So, please ignore my initial comment as it was related to those.

--mark

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 04/10] MDS basics+ 4
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
  2019-02-20 16:52   ` [MODERATED] " Greg KH
  2019-02-20 17:10   ` mark gross
@ 2019-02-20 18:43   ` Borislav Petkov
  2019-02-20 19:26   ` Jiri Kosina
  3 siblings, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-20 18:43 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:57PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 04/10] x86/speculation/mds: Clear CPU buffers on exit to user
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a static key which controls the invocation of the CPU buffer clear
> mechanism on exit to user space and add the call into
> prepare_exit_to_usermode() right before actually returning.
> 
> Add documentation which kernel to user space transition this covers and
> explain in detail why those which are not mitigated do not need it.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/x86/mds.rst            |   79 +++++++++++++++++++++++++++++++++++
>  arch/x86/entry/common.c              |    9 +++
>  arch/x86/include/asm/nospec-branch.h |    2 
>  arch/x86/kernel/cpu/bugs.c           |    4 +
>  4 files changed, 93 insertions(+), 1 deletion(-)
> 
> --- a/Documentation/x86/mds.rst
> +++ b/Documentation/x86/mds.rst
> @@ -64,3 +64,82 @@ itself are not required because the nece
>  data cannot be controlled in a way which allows exploitation from malicious
>  user space or VM guests.
>  
> +Mitigation points
> +-----------------
> +
> +1. Return to user space
> +^^^^^^^^^^^^^^^^^^^^^^^
> +   When transition from kernel to user space the CPU buffers are flushed

transitioning

With that:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 04/10] MDS basics+ 4
  2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
                     ` (2 preceding siblings ...)
  2019-02-20 18:43   ` [MODERATED] Re: [patch V2 04/10] MDS basics+ 4 Borislav Petkov
@ 2019-02-20 19:26   ` Jiri Kosina
  2019-02-20 21:42     ` Thomas Gleixner
  3 siblings, 1 reply; 94+ messages in thread
From: Jiri Kosina @ 2019-02-20 19:26 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for Thomas Gleixner wrote:

> +   Neither can NMIs be reliably controlled by a non priviledged attacker
> +   and their exposure to sensitive data is very limited. NMIs originate
> +   from:
[ ... snip ... ]
> +	None of those are controllable by unpriviledged attackers to form a
> +	reliable exploit surface.

One thing I am not completely sure about at the moment is the NMI
that's used to trigger a backtrace on all CPUs.

In a hypothetical situation where an unprivileged user can (due to some
other issue) controllably trigger any of:

- hung task situation
- rcu stall
- hardlockup

then MDS turns this into revealing the contents of the kernel stacks of
all CPUs (as the stacktrace unwinder will walk the whole thing between
the stack top and %rsp).

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 05/10] MDS basics+ 5
  2019-02-20 15:07 ` [patch V2 05/10] MDS basics+ 5 Thomas Gleixner
@ 2019-02-20 20:05   ` Borislav Petkov
  2019-02-21  2:24   ` Andrew Cooper
  1 sibling, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-20 20:05 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:58PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 05/10] x86/speculation/mds: Conditionaly clear CPU buffers on idle entry

WARNING: 'Conditionaly' may be misspelled - perhaps 'Conditionally'?

> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add a static key which controls the invocation of the CPU buffer clear
> mechanism on idle entry. This is independent of other MDS mitigations
> because the idle entry invocation to mitigate the potential leakage due to
> store buffer repartitioning is only necessary on SMT systems.
> 
> Add the actual invocations to the different halt/mwait variants which
> covers all usage sites. mwaitx is not patched as it's not available on
> Intel CPUs.
> 
> The buffer clear is only invoked before entering the C-State to prevent
> that stale data from the idling CPU can be spilled to the Hyper-Thread

s/can //

> sibling after the Store buffer got repartitioned and all entries are
> available to the non idle sibling.
> 
> When coming out of idle the store buffer is partitioned again so each
> sibling has half of it available. Now the back from idle CPU could be

"Now the CPU which returned from idle... "

> speculatively exposed to contents of the sibling, but the buffers are
> flushed either on exit to user space or on VMENTER.
> 
> When later on conditional buffer clearing is implemented on top of this,
> then there is no action required either because before returning to user
> space the context switch will set the condition flag which causes a flush
> on the return to user path.
> 
> This intentionaly does not handle the case in the acpi/processor_idle
> driver which uses the legacy IO port interface for C-State transitions for
> two reasons:
> 
>  - The acpi/processor_idle driver was replaced by the intel_idle driver
>    almost a decade ago. Anything Nehalem upwards supports it and defaults
>    to that new driver.
> 
>  - The legacy IO port interface is likely to be used on older and therefore
>    unaffected CPUs or on systems which do not receive microcode updates
>    anymore, so there is no point in adding that.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/x86/mds.rst            |   33 +++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/irqflags.h      |    4 ++++
>  arch/x86/include/asm/mwait.h         |    7 +++++++
>  arch/x86/include/asm/nospec-branch.h |   12 ++++++++++++
>  arch/x86/kernel/cpu/bugs.c           |    2 ++
>  5 files changed, 58 insertions(+)
> 
> --- a/Documentation/x86/mds.rst
> +++ b/Documentation/x86/mds.rst
> @@ -143,3 +143,36 @@ Mitigation points
>  
>  	None of those are controllable by unpriviledged attackers to form a
>  	reliable exploit surface.
> +
> +2. C-State transition
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +   When a CPU goes idle and enters a C-State the CPU buffers need to be
> +   cleared on affected CPUs when SMT is active. This addresses the
> +   repartitioning of the Store buffer when one of the Hyper-Thread enters a

"... one of the Hyper-Threads... "

> +   C-State.
> +
> +   When SMT is inactive, i.e. either the CPU does not support it or all
> +   sibling threads are offline CPU buffer clearing is not required.
> +
> +   The invocation is controlled by the static key mds_idle_clear which is
> +   switched depending on the chosen mitigation mode and the SMT state of
> +   the system.
> +
> +   The buffer clear is only invoked before entering the C-State to prevent
> +   that stale data from the idling CPU can be spilled to the Hyper-Thread
> +   sibling after the Store buffer got repartitioned and all entries are
> +   available to the non idle sibling.
> +
> +   When coming out of idle the store buffer is partitioned again so each
> +   sibling has half of it available. Now the back from idle CPU could be
> +   speculatively exposed to contents of the sibling, but the buffers are
> +   flushed either on exit to user space or on VMENTER.
> +
> +   The mitigation is hooked into all variants of halt()/mwait(), but does
> +   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
> +   has been superseeded by the intel_idle driver around 2010 and is

WARNING: 'superseeded' may be misspelled - perhaps 'superseded'?

> +   preferred on all affected CPUs which still receive microcode updates
> +   (Nehalem onwards). Aside of that the IO-Port mechanism is a legacy
> +   interface which is only used on older systems which are either not
> +   affected or do not receive microcode updates anymore.

With that addressed:

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 17:14   ` [MODERATED] " Borislav Petkov
@ 2019-02-20 21:31     ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 21:31 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for Borislav Petkov wrote:
> > Provide a inline function with the assembly magic. The argument of the VERW
> 	  an
> 
> > instruction must be a memory operand.
> 
> Do we know why it has to be a memop?

From the magic PDF:

  MD_CLEAR enumerates that the memory-operand variant of VERW (for example,
  VERW m16) has been extended to also overwrite buffers affected by MDS.

  This buffer overwriting functionality is not guaranteed for the register
  operand variant of VERW.

Will add some blurb.
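
For completeness: the enumeration is CPUID.(EAX=7,ECX=0):EDX[10], i.e.
the bit X86_FEATURE_MD_CLEAR in patch 1 maps to. A minimal userspace
sketch (illustration only, not part of the series) to check it:

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* Leaf 7, subleaf 0: structured extended feature flags */
		if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) &&
		    (edx & (1U << 10)))
			puts("MD_CLEAR enumerated: VERW flushes CPU buffers");
		else
			puts("MD_CLEAR not enumerated");
		return 0;
	}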

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 18:08       ` [MODERATED] " mark gross
@ 2019-02-20 21:40         ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 21:40 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for mark gross wrote:
> On Wed, Feb 20, 2019 at 05:57:04PM +0100, speck for Thomas Gleixner wrote:
> > On Wed, 20 Feb 2019, speck for mark gross wrote:
> > > On Wed, Feb 20, 2019 at 04:07:56PM +0100, speck for Thomas Gleixner wrote:
> > > > +static inline void mds_clear_cpu_buffers(void)
> > > > +{
> > > > +	static const u16 ds = __KERNEL_DS;
> > > > +
> > > > +	/*
> > > > +	 * Has to be memory form, don't modify to use a register. VERW
> > > > +	 * modifies ZF.
> > > > +	 */
> > > > +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
> > > > +}
> > > > +
> > > >  #endif /* __ASSEMBLY__ */
> > > >  
> > > >  /*
> > > >
> > > Perhaps a dumb question, but is there any point to including a fallback ASM
> > > alternative for platforms without the uCode update enabling VERW to clear
> > > buffers?
> > 
> > See 8/10 ....
> I was just wrapping my head around that one.  I don't like the name for that
> mode but I can't think of a better one other than MDS_GUEST_CHICKEN_BIT.  Which
> is worse.  

What about MDS_MITIGATION_VMWERV ?

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 04/10] MDS basics+ 4
  2019-02-20 19:26   ` Jiri Kosina
@ 2019-02-20 21:42     ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 21:42 UTC (permalink / raw)
  To: speck

Jiri,

On Wed, 20 Feb 2019, speck for Jiri Kosina wrote:

> On Wed, 20 Feb 2019, speck for Thomas Gleixner wrote:
> 
> > +   Neither can NMIs be reliably controlled by a non priviledged attacker
> > +   and their exposure to sensitive data is very limited. NMIs originate
> > +   from:
> [ ... snip ... ]
> > +	None of those are controllable by unpriviledged attackers to form a
> > +	reliable exploit surface.
> 
> One thing where I am not completely sure about this at the moment, is NMI 
> that's used to trigger backtrace on all CPUs.
> 
> In a hypothetical case where we have a situation where unprivileged user 
> can (due to some other issue) controllably trigger either of:
> 
> - hung task situation
> - rcu stall
> - hardlockup
> 
> then MDS turns this into revealing contents of kernel stacks (as the 
> stacktrace unwinder will walk the whole thing between the stack top and 
> %rsp) of all CPUs.

As I said to Andi, if the consensus is that full == paranoid, let's just add
it and be done with it. One flush more or less is probably not buying us
anything.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 16:21   ` [MODERATED] " Peter Zijlstra
@ 2019-02-20 22:32     ` Thomas Gleixner
  2019-02-20 22:50       ` [MODERATED] " Jiri Kosina
  0 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 22:32 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for Peter Zijlstra wrote:
> On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
> >  static inline void mds_user_clear_cpu_buffers(void)
> >  {
> > +	if (static_branch_likely(&mds_user_clear_cond)) {
> 
> Are we sure we want _likely() ? That puts the body in-line.

Yes, no, dunno. 

> > +		if (__this_cpu_read(mds_cond_clear)) {
> > +			__this_cpu_write(mds_cond_clear, 0);
> > +			mds_clear_cpu_buffers();
> > +		}
> 		return;
> 
> _might_ generate better code; by making it explicit we'll never have
> both branches enabled at the same time.
> 
> > +	}
> >  	if (static_branch_likely(&mds_user_clear_always))
> >  		mds_clear_cpu_buffers();
> >  }
> 
> And in general; yuck! Should I once again look at tri-state jump_labels
> or static_switch? Last time I tried I got stuck on the C part, writing
> different targets is trivial.
> 
> Ideally we'd end up with something like:
> 
> 1:	jmp 3f
> 	test $1, %gs:mds_cond_clear
> 	jz 3f
> 	mov $0, %gs:mds_cond_clear
> 2:	verw ds
> 3:
> 
> And change the jmp at 1 between: JMP 3f, JMP 2f and NOP for
> mds={off,full,cond} resp.

Right. That'd be cute, but for backporting sake we should stick with the
tools we have now.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 22:32     ` Thomas Gleixner
@ 2019-02-20 22:50       ` Jiri Kosina
  2019-02-20 23:22         ` Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: Jiri Kosina @ 2019-02-20 22:50 UTC (permalink / raw)
  To: speck

On Wed, 20 Feb 2019, speck for Thomas Gleixner wrote:

> > And change the jmp at 1 between: JMP 3f, JMP 2f and NOP for
> > mds={off,full,cond} resp.
> 
> Right. That'd be cute, but for backporting sake we should stick with the
> tools we have now.

Thomas, thanks a lot for taking this into consideration. It'd indeed be an 
annoyingly big issue for us.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 22:50       ` [MODERATED] " Jiri Kosina
@ 2019-02-20 23:22         ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-20 23:22 UTC (permalink / raw)
  To: speck

Jiri,

On Wed, 20 Feb 2019, speck for Jiri Kosina wrote:

> On Wed, 20 Feb 2019, speck for Thomas Gleixner wrote:
> 
> > > And change the jmp at 1 between: JMP 3f, JMP 2f and NOP for
> > > mds={off,full,cond} resp.
> > 
> > Right. That'd be cute, but for backporting sake we should stick with the
> > tools we have now.
> 
> Thomas, thanks a lot for taking this into consideration. It'd indeed be an 
> annoyingly big issue for us.

Usually I do not care much about the kernel necrophilia cult, but indeed
all this is annoying enough as is, so there is no need for us to annoy
each other even more :)

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
  2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
  2019-02-20 16:54   ` [MODERATED] " mark gross
  2019-02-20 17:14   ` [MODERATED] " Borislav Petkov
@ 2019-02-21  2:12   ` Andrew Cooper
  2019-02-21  9:27     ` Peter Zijlstra
                       ` (2 more replies)
  2 siblings, 3 replies; 94+ messages in thread
From: Andrew Cooper @ 2019-02-21  2:12 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1700 bytes --]

On 20/02/2019 15:07, speck for Thomas Gleixner wrote:
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,6 +318,26 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
> +#include <asm/segment.h>
> +
> +/**
> + * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
> + *
> + * This uses the otherwise unused and obsolete VERW instruction in
> + * combination with microcode which triggers a CPU buffer flush when the
> + * instruction is executed.
> + */
> +static inline void mds_clear_cpu_buffers(void)
> +{
> +	static const u16 ds = __KERNEL_DS;

In Xen, I've added a note justifying the choice of selector, in the
expectation that people probably won't remember exactly why in 6 months'
time.

For least latency (allegedly to avoid a static prediction stall in
microcode), it should be a writeable data segment which is hot in the
cache, and being adjacent to __KERNEL_CS is a pretty good bet.
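
The note itself is only a sketch along these lines (wording mine, not
the exact Xen comment):

	/*
	 * Use __KERNEL_DS: a writeable data segment whose GDT
	 * descriptor sits right next to __KERNEL_CS and is therefore
	 * almost certainly cache-hot, avoiding the alleged static
	 * prediction stall on a cold selector.
	 */
	static const u16 ds = __KERNEL_DS;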

> +
> +	/*
> +	 * Has to be memory form, don't modify to use a register. VERW
> +	 * modifies ZF.

I don't understand why everyone is so concerned about VERW modifying
ZF.  It's not as if this fact is relevant anywhere that the mitigation is
liable to be used.

> +	 */
> +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");

The "i" (0) isn't referenced in the assembly, and can be dropped.

On a tangent, have GCC or Clang made any indication that they're going
to stop assuming that all asm() statements clobber flags, and start
making the "cc" clobber necessary on x86 targets?

~Andrew


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 05/10] MDS basics+ 5
  2019-02-20 15:07 ` [patch V2 05/10] MDS basics+ 5 Thomas Gleixner
  2019-02-20 20:05   ` [MODERATED] " Borislav Petkov
@ 2019-02-21  2:24   ` Andrew Cooper
  2019-02-21 10:36     ` Thomas Gleixner
  1 sibling, 1 reply; 94+ messages in thread
From: Andrew Cooper @ 2019-02-21  2:24 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2137 bytes --]

On 20/02/2019 15:07, speck for Thomas Gleixner wrote:
> --- a/Documentation/x86/mds.rst
> +++ b/Documentation/x86/mds.rst
> @@ -143,3 +143,36 @@ Mitigation points
>  
>  	None of those are controllable by unpriviledged attackers to form a
>  	reliable exploit surface.
> +
> +2. C-State transition
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +   When a CPU goes idle and enters a C-State the CPU buffers need to be
> +   cleared on affected CPUs when SMT is active. This addresses the
> +   repartitioning of the Store buffer when one of the Hyper-Thread enters a
> +   C-State.
> +
> +   When SMT is inactive, i.e. either the CPU does not support it or all
> +   sibling threads are offline CPU buffer clearing is not required.
> +
> +   The invocation is controlled by the static key mds_idle_clear which is
> +   switched depending on the chosen mitigation mode and the SMT state of
> +   the system.
> +
> +   The buffer clear is only invoked before entering the C-State to prevent
> +   that stale data from the idling CPU can be spilled to the Hyper-Thread
> +   sibling after the Store buffer got repartitioned and all entries are
> +   available to the non idle sibling.
> +
> +   When coming out of idle the store buffer is partitioned again so each
> +   sibling has half of it available. Now the back from idle CPU could be
> +   speculatively exposed to contents of the sibling, but the buffers are
> +   flushed either on exit to user space or on VMENTER.
> +
> +   The mitigation is hooked into all variants of halt()/mwait(), but does
> +   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
> +   has been superseeded by the intel_idle driver around 2010 and is
> +   preferred on all affected CPUs which still receive microcode updates
> +   (Nehalem onwards).

I'd perhaps phrase this as "and is preferred on all CPUs which are
expected to gain the MD_CLEAR functionality in microcode".

ISTR there were some pre-Nehalem ucode updates for one of the
speculative issues (although I'm at a loss to locate the schedule
document I think the answer is in).

~Andrew


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 03/10] MDS basics+ 3
  2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
@ 2019-02-21  9:27     ` Peter Zijlstra
  2019-02-21  9:33     ` [MODERATED] " Borislav Petkov
  2019-02-21 10:04     ` Thomas Gleixner
  2 siblings, 0 replies; 94+ messages in thread
From: Peter Zijlstra @ 2019-02-21  9:27 UTC (permalink / raw)
  To: speck

On Thu, Feb 21, 2019 at 02:12:19AM +0000, speck for Andrew Cooper wrote:
> On 20/02/2019 15:07, speck for Thomas Gleixner wrote:

> > +
> > +	/*
> > +	 * Has to be memory form, don't modify to use a register. VERW
> > +	 * modifies ZF.
> 
> I don't understand why everyone is so concerned about VERW modifying
> ZF.  It's not as if this fact is relevant anywhere that the mitigation is
> liable to be used.
> 
> > +	 */
> > +	asm volatile("verw %[ds]" : : "i" (0), [ds] "m" (ds) : "cc");
> 
> The "i" (0) isn't referenced in the assembly, and can be dropped.
> 
> On a tangent, have GCC or Clang made any indication that they're going
> to stop assuming that all asm() statements clobber flags, and start
> making the "cc" clobber necessary on x86 targets?

Not as far as I know, but that doesn't mean we shouldn't use it
properly.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: Re: [patch V2 03/10] MDS basics+ 3
  2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
  2019-02-21  9:27     ` Peter Zijlstra
@ 2019-02-21  9:33     ` Borislav Petkov
  2019-02-21 10:04     ` Thomas Gleixner
  2 siblings, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21  9:33 UTC (permalink / raw)
  To: speck

On Thu, Feb 21, 2019 at 02:12:19AM +0000, speck for Andrew Cooper wrote:
> For least latency (allegedly to avoid a static prediction stall in
> microcode), it should be a writeable data segment which is hot in the
> cache, and being adjacent to __KERNEL_CS is a pretty good bet.

That should be in a comment somewhere.

> I don't understand why everyone is so concerned about VERW modifying
> ZF.  Its not as if this fact is relevant anywhere that the mitigation is
> liable to be used.

Err, so that gcc can know that it is clobbered? What do you mean here?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 03/10] MDS basics+ 3
  2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
  2019-02-21  9:27     ` Peter Zijlstra
  2019-02-21  9:33     ` [MODERATED] " Borislav Petkov
@ 2019-02-21 10:04     ` Thomas Gleixner
  2019-02-21 10:18       ` [MODERATED] Re: " Borislav Petkov
  2 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 10:04 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1163 bytes --]

On Thu, 21 Feb 2019, speck for Andrew Cooper wrote:
> On 20/02/2019 15:07, speck for Thomas Gleixner wrote:
> > +static inline void mds_clear_cpu_buffers(void)
> > +{
> > +	static const u16 ds = __KERNEL_DS;
> 
> In Xen, I've added a note justifying the choice of selector, in the
> expectation that people probably won't remember exactly why in 6 months
> time.
> 
> For least latency (allegedly to avoid a static prediction stall in
> microcode), it should be a writeable data segment which is hot in the
> cache, and being adjacent to __KERNEL_CS is a pretty good bet.

I already added info to the comment. Boris asked so as well.

> > +	/*
> > +	 * Has to be memory form, don't modify to use a register. VERW
> > +	 * modifies ZF.
> 
> I don't understand why everyone is so concerned about VERW modifying
> ZF.  It's not as if this fact is relevant anywhere that the mitigation is
> liable to be used.

No, it's not relevant to the mitigation, but for correctness' sake I care
about adding the "cc" clobber even if it's not officially required
today. For that matter I'd rather have it documented why "cc" is there.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: Re: [patch V2 03/10] MDS basics+ 3
  2019-02-21 10:04     ` Thomas Gleixner
@ 2019-02-21 10:18       ` Borislav Petkov
  0 siblings, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21 10:18 UTC (permalink / raw)
  To: speck

On Thu, Feb 21, 2019 at 11:04:56AM +0100, speck for Thomas Gleixner wrote:
> No, it's not relevant to the mitigation, but for correctness' sake I care
> about adding the "cc" clobber even if it's not officially required
> today. For that matter I'd rather have it documented why "cc" is there.

Yeah, and I read section "6.46.2.6 Clobbers and Scratch Registers"

here: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html

as saying we should specify "cc"; it doesn't state explicitly that gcc
considers rFLAGS to be clobbered in an inline asm. Or does it?
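
FWIW, the pattern in question boils down to this (a generic sketch,
nothing MDS specific - any asm which only alters EFLAGS):

	static inline void touch_flags(const int *x)
	{
		/*
		 * CMP writes ZF/SF/CF/... and nothing else; "cc" makes
		 * that explicit instead of relying on the x86 backend's
		 * implicit flags-clobber assumption.
		 */
		asm volatile("cmpl $1, %[x]" : : [x] "m" (*x) : "cc");
	}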

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 06/10] MDS basics+ 6
  2019-02-20 15:07 ` [patch V2 06/10] MDS basics+ 6 Thomas Gleixner
@ 2019-02-21 10:18   ` Borislav Petkov
  0 siblings, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21 10:18 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:59PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 06/10] x86/speculation/mds: Add mitigation control for MDS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Now that the mitigations are in place, add a command line parameter to
> control the mitigation, a mitigation selector function and a SMT update
> mechanism.
> 
> This is the minimal straight forward initial implementation which just
> provides an always on/off mode. The command line parameter is:
> 
>   mds=[full|off|auto]
> 
> This is consistent with the existing mitigations for other speculative
> hardware vulnerabilities.
> 
> The idle invocation is dynamically updated according to the SMT state of
> the system similar to the dynamic update of the STIBP mitigation.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |   27 ++++++++
>  arch/x86/include/asm/processor.h                |    6 +
>  arch/x86/kernel/cpu/bugs.c                      |   76 ++++++++++++++++++++++++
>  3 files changed, 109 insertions(+)

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 05/10] MDS basics+ 5
  2019-02-21  2:24   ` Andrew Cooper
@ 2019-02-21 10:36     ` Thomas Gleixner
  2019-02-21 11:22       ` Thomas Gleixner
  2019-02-21 11:51       ` [MODERATED] Attack Surface [Was [patch V2 05/10] MDS basics+ 5] Andrew Cooper
  0 siblings, 2 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 10:36 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Andrew Cooper wrote:
> > +   The mitigation is hooked into all variants of halt()/mwait(), but does
> > +   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
> > +   has been superseeded by the intel_idle driver around 2010 and is
> > +   preferred on all affected CPUs which still receive microcode updates
> > +   (Nehalem onwards).
> 
> I'd perhaps phrase this as "and is preferred on all CPUs which are
> expected to gain the MD_CLEAR functionality in microcode".
> 
> ISTR there were some pre-Nehalem ucode updates for one of the
> speculative issues (although I'm at a loss to locate the schedule
> document I think the answer is in).

One of those "supersecure" documents which I can't work with again....

But that aside, the real question is whether it actually matters. Though
that's a tough one to answer because at least I have no clue what the
actual attack surface is.

Maybe it's described in some other !$!%@$%!@ document or this has been
discussed between cat pictures, lame jokes and beer appointments on that
security consecrated whatsapp mimicry.

While I think I know how to exploit that in C/ASM, I have no idea whether
that stuff is exploitable by other means.

  1) What's the state of javascript?

     If that's not an issue, then we really can stop worrying about pre
     Nehalem for reality's sake. If it's an issue we need to have some
     information for Pavel at least. :)

  2) What about BPF?

     If untrusted BPF can be abused to exploit this, then we have to think
     hard about how to handle that.

     Assume the following:

     User A -> BPF -> preemption -> kernel thread -> back to BPF

     Because there is no kernel to user transition no buffer clear will
     happen - neither in full nor in conditional mode.

     So the BPF sampler can happily collect the leftovers of the kernel
     thread.

     Might be theoretical, but it needs to be investigated at least.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
  2019-02-20 16:21   ` [MODERATED] " Peter Zijlstra
@ 2019-02-21 11:04   ` Peter Zijlstra
  2019-02-21 11:50     ` Peter Zijlstra
  2019-02-21 14:18   ` Borislav Petkov
  2019-02-21 18:00   ` Kees Cook
  3 siblings, 1 reply; 94+ messages in thread
From: Peter Zijlstra @ 2019-02-21 11:04 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
>  static inline void mds_user_clear_cpu_buffers(void)
>  {
> +	if (static_branch_likely(&mds_user_clear_cond)) {
> +		if (__this_cpu_read(mds_cond_clear)) {
> +			__this_cpu_write(mds_cond_clear, 0);
> +			mds_clear_cpu_buffers();
> +		}
> +	}
>  	if (static_branch_likely(&mds_user_clear_always))
>  		mds_clear_cpu_buffers();
>  }

Results in:

998:   e9 0b 00 00 00          jmpq   9a8 <prepare_exit_to_usermode+0x58>
99d:   65 8b 05 00 00 00 00    mov    %gs:0x0(%rip),%eax        # 9a4 <prepare_exit_to_usermode+0x54>
9a0: R_X86_64_PC32      mds_cond_clear-0x4
9a4:   85 c0                   test   %eax,%eax
9a6:   75 0f                   jne    9b7 <prepare_exit_to_usermode+0x67>
9a8:   e9 07 00 00 00          jmpq   9b4 <prepare_exit_to_usermode+0x64>
9ad:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9b4 <prepare_exit_to_usermode+0x64>
9b0: R_X86_64_PC32      .rodata-0x4
9b4:   5b                      pop    %rbx
9b5:   5d                      pop    %rbp
9b6:   c3                      retq
9b7:   65 c7 05 00 00 00 00    movl   $0x0,%gs:0x0(%rip)        # 9c2 <prepare_exit_to_usermode+0x72>
9be:   00 00 00 00
9ba: R_X86_64_PC32      mds_cond_clear-0x8
9c2:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9c9 <prepare_exit_to_usermode+0x79>
9c5: R_X86_64_PC32      .rodata-0x4
9c9:   eb dd                   jmp    9a8 <prepare_exit_to_usermode+0x58>


static inline void mds_user_clear_cpu_buffers(void)
{
       if (!static_branch_likely(&mds_user_clear_always))
               return;

       if (static_branch_likely(&mds_user_clear_cond)) {
               if (!__this_cpu_read(mds_cond_clear))
                       return;

               __this_cpu_write(mds_cond_clear, 0);
       }

       mds_clear_cpu_buffers();
}

results in:

998:   e9 22 00 00 00          jmpq   9bf <prepare_exit_to_usermode+0x6f>
99d:   e9 16 00 00 00          jmpq   9b8 <prepare_exit_to_usermode+0x68>
9a2:   65 8b 05 00 00 00 00    mov    %gs:0x0(%rip),%eax        # 9a9 <prepare_exit_to_usermode+0x59>
9a5: R_X86_64_PC32      mds_cond_clear-0x4
9a9:   85 c0                   test   %eax,%eax
9ab:   74 12                   je     9bf <prepare_exit_to_usermode+0x6f>
9ad:   65 c7 05 00 00 00 00    movl   $0x0,%gs:0x0(%rip)        # 9b8 <prepare_exit_to_usermode+0x68>
9b4:   00 00 00 00
9b0: R_X86_64_PC32      mds_cond_clear-0x8
9b8:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9bf <prepare_exit_to_usermode+0x6f>
9bb: R_X86_64_PC32      .rodata-0x4
9bf:


The only 'downside' is that @mds_user_clear_always is now a
misnomer (and needs to be enabled for cond too).
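
I.e. the selector side would then do something like this (sketch;
MDS_MITIGATION_COND is a made-up enum value for the conditional mode):

	/* The 'always' key now gates all user clearing, cond refines it */
	static_branch_enable(&mds_user_clear_always);
	if (mds_mitigation == MDS_MITIGATION_COND)
		static_branch_enable(&mds_user_clear_cond);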

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 05/10] MDS basics+ 5
  2019-02-21 10:36     ` Thomas Gleixner
@ 2019-02-21 11:22       ` Thomas Gleixner
  2019-02-21 11:51       ` [MODERATED] Attack Surface [Was [patch V2 05/10] MDS basics+ 5] Andrew Cooper
  1 sibling, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 11:22 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Thomas Gleixner wrote:
>   2) What about BPF?
> 
>      If untrusted BPF can be abused to exploit this, then we have to think
>      hard about how to handle that.
> 
>      Assume the following:
> 
>      User A -> BPF -> preemption -> kernel thread -> back to BPF
> 
>      Because there is no kernel to user transition no buffer clear will
>      happen - neither in full nor in conditional mode.
> 
>      So the BPF sampler can happily collect the leftovers of the kernel
>      thread.
> 
>      Might be theoretical, but it needs to be investigated at least.

Just learned that BPF runs with preemption disabled. Oh well, yet another
thing which can't be used with Realtime systems. Well thought out.

Though the

       BPF -> interrupt -> softinterrupt -> BPF

scenario remains. It's not covered by full because it's not a user exit....

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-21 11:04   ` [MODERATED] " Peter Zijlstra
@ 2019-02-21 11:50     ` Peter Zijlstra
  0 siblings, 0 replies; 94+ messages in thread
From: Peter Zijlstra @ 2019-02-21 11:50 UTC (permalink / raw)
  To: speck

On Thu, Feb 21, 2019 at 12:04:44PM +0100, Peter Zijlstra wrote:
> On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
> >  static inline void mds_user_clear_cpu_buffers(void)
> >  {
> > +	if (static_branch_likely(&mds_user_clear_cond)) {
> > +		if (__this_cpu_read(mds_cond_clear)) {
> > +			__this_cpu_write(mds_cond_clear, 0);
> > +			mds_clear_cpu_buffers();
> > +		}
> > +	}
> >  	if (static_branch_likely(&mds_user_clear_always))
> >  		mds_clear_cpu_buffers();
> >  }
> 
> Results in:
> 
> 998:   e9 0b 00 00 00          jmpq   9a8 <prepare_exit_to_usermode+0x58>
> 99d:   65 8b 05 00 00 00 00    mov    %gs:0x0(%rip),%eax        # 9a4 <prepare_exit_to_usermode+0x54>
> 9a0: R_X86_64_PC32      mds_cond_clear-0x4
> 9a4:   85 c0                   test   %eax,%eax
> 9a6:   75 0f                   jne    9b7 <prepare_exit_to_usermode+0x67>
> 9a8:   e9 07 00 00 00          jmpq   9b4 <prepare_exit_to_usermode+0x64>
> 9ad:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9b4 <prepare_exit_to_usermode+0x64>
> 9b0: R_X86_64_PC32      .rodata-0x4
> 9b4:   5b                      pop    %rbx
> 9b5:   5d                      pop    %rbp
> 9b6:   c3                      retq
> 9b7:   65 c7 05 00 00 00 00    movl   $0x0,%gs:0x0(%rip)        # 9c2 <prepare_exit_to_usermode+0x72>
> 9be:   00 00 00 00
> 9ba: R_X86_64_PC32      mds_cond_clear-0x8
> 9c2:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9c9 <prepare_exit_to_usermode+0x79>
> 9c5: R_X86_64_PC32      .rodata-0x4
> 9c9:   eb dd                   jmp    9a8 <prepare_exit_to_usermode+0x58>
> 
> 
> static inline void mds_user_clear_cpu_buffers(void)
> {
>        if (!static_branch_likely(&mds_user_clear_always))
>                return;
> 
>        if (static_branch_likely(&mds_user_clear_cond)) {
>                if (!__this_cpu_read(mds_cond_clear))
>                        return;
> 
>                __this_cpu_write(mds_cond_clear, 0);
>        }
> 
>        mds_clear_cpu_buffers();
> }
> 
> results in:
> 
> 998:   e9 22 00 00 00          jmpq   9bf <prepare_exit_to_usermode+0x6f>
> 99d:   e9 16 00 00 00          jmpq   9b8 <prepare_exit_to_usermode+0x68>
> 9a2:   65 8b 05 00 00 00 00    mov    %gs:0x0(%rip),%eax        # 9a9 <prepare_exit_to_usermode+0x59>
> 9a5: R_X86_64_PC32      mds_cond_clear-0x4
> 9a9:   85 c0                   test   %eax,%eax
> 9ab:   74 12                   je     9bf <prepare_exit_to_usermode+0x6f>
> 9ad:   65 c7 05 00 00 00 00    movl   $0x0,%gs:0x0(%rip)        # 9b8 <prepare_exit_to_usermode+0x68>
> 9b4:   00 00 00 00
> 9b0: R_X86_64_PC32      mds_cond_clear-0x8
> 9b8:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9bf <prepare_exit_to_usermode+0x6f>
> 9bb: R_X86_64_PC32      .rodata-0x4
> 9bf:
> 
> 
> The only 'downside' is that @mds_user_clear_always is now a
> misnomer (and needs to be enabled for cond too).

static inline void mds_user_clear_cpu_buffers(void)
{
	if (!static_branch_likely(&mds_user_clear_always))
		return;

	if (static_branch_likely(&mds_user_clear_cond)) {
		if (!GEN_BINARY_RMWcc("btrl", mds_cond_clear, c, "Ir", 0, __percpu_arg([var])))
			return;
	}

	mds_clear_cpu_buffers();
}

998:   e9 17 00 00 00          jmpq   9b4 <prepare_exit_to_usermode+0x64>
99d:   e9 0b 00 00 00          jmpq   9ad <prepare_exit_to_usermode+0x5d>
9a2:   65 0f ba 35 00 00 00    btrl   $0x0,%gs:0x0(%rip)        # 9ab <prepare_exit_to_usermode+0x5b>
9a9:   00 00
9a6: R_X86_64_PC32      mds_cond_clear-0x5
9ab:   73 07                   jae    9b4 <prepare_exit_to_usermode+0x64>
9ad:   0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 9b4 <prepare_exit_to_usermode+0x64>
9b0: R_X86_64_PC32      .rodata-0x4
9b4:

But then the necro-cult people will hate you :-) Also it will fail to
compile for old GCC and Steve's insane if() redefinition.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Attack Surface [Was [patch V2 05/10] MDS basics+ 5]
  2019-02-21 10:36     ` Thomas Gleixner
  2019-02-21 11:22       ` Thomas Gleixner
@ 2019-02-21 11:51       ` Andrew Cooper
  2019-02-21 18:41         ` Thomas Gleixner
  1 sibling, 1 reply; 94+ messages in thread
From: Andrew Cooper @ 2019-02-21 11:51 UTC (permalink / raw)
  To: speck

On 21/02/2019 10:36, speck for Thomas Gleixner wrote:
> On Thu, 21 Feb 2019, speck for Andrew Cooper wrote:
>>> +   The mitigation is hooked into all variants of halt()/mwait(), but does
>>> +   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
>>> +   has been superseded by the intel_idle driver around 2010 and is
>>> +   preferred on all affected CPUs which still receive microcode updates
>>> +   (Nehalem onwards).
>> I'd perhaps phrase this as "and is preferred on all CPUs which are
>> expected to gain the MD_CLEAR functionality in microcode".
>>
>> ISTR there were some pre-Nehalem ucode updates for one of the
>> speculative issues (although I'm at a loss to locate the schedule
>> document I think the answer is in).
> One of those "supersecure" documents which I can't work with again....

Actually just a plain PDF - the problem is the hundreds of
almost-identical ones I've got from around the same period.

> But that aside the real question is whether it actually matters. Though
> that's a tough one to answer because at least I have no clue what the
> actual attack surface is.

I'm afraid that is a little complicated.  This is to the best of my
understanding...

1) The load and fill buffer issues let an attacker observe stale data
from memory reads and writes which have just occurred in the core.

The single threaded case isn't very interesting - the exit to guest path
most likely clobbers anything interesting, and this is probably also
true of a javascript engine switching between logical contexts.  I
personally think you need an overly-contrived scenario to leak
interesting data in this case.

The hyperthreading case is very interesting however.  An attacker can
read all memory operands from the sibling thread as it is executing. 
Combined with analysis techniques such as those presented in the TLBleed
paper, this is a potentially very powerful attack.

There is an existing PoC from one of the researchers which can leak
arbitrary kernel memory using setrlimit() on one thread and observing
the fill buffers on the other thread.

2) The store buffer issue is much more complicated to reason about. 
Ignoring the idle rebalancing corner case, it is restricted to a single
threaded case.

An attacker can read stale data from the store buffer.  The store buffer
is as wide as the SIMD pipeline in the core, but memory writes don't
zero-extend and overwrite the full width of the buffer.

Therefore, while the bottom 64 bits get modified very frequently, the
upper bits are only modified by a full-width SIMD write, or a
long-enough string operation when FAST_STRINGS is enabled, or from
XSAVE/etc.
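
To make the operand-width point concrete, here is a hand-wavy userspace
illustration of the two store widths (nothing exploit-related, just
showing how many bytes each store actually touches; assumes AVX):

#include <immintrin.h>
#include <stdint.h>

void stores(void *dst)
{
	/* A scalar store writes only the low 8 bytes */
	*(volatile uint64_t *)dst = 0;

	/*
	 * A full-width SIMD store (here 32 bytes) overwrites the whole
	 * width, like the full-width SIMD writes mentioned above.
	 */
	_mm256_storeu_si256((__m256i *)dst, _mm256_setzero_si256());
}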

The prevailing concern is that an attacker can read most of the vector
register state of the previous task to be scheduled on the thread, which
could be very valuable information if the previous task was using AES-NI.

> Maybe it's described in some other !$!%@$%!@ document or this has been
> discussed between cat pictures, lame jokes and beer appointments on that
> security consecrated whatsapp mimicry.
>
> While I think I know how to exploit that in C/ASM, I have no idea whether
> that stuff is exploitable by other means.
>
>   1) What's the state of javascript?
>
>      If that's not an issue, then we really can stop worrying about pre
>      Nehalem for reality's sake. If it's an issue we need to have some
>      information for Pavel at least. :)

I'd like to hope that nothing in a webpage can pull off the store-buffer
attack.  An attacker needs to be able to control its own faults to
create assist conditions, and #PF is the only plausible option, but
shouldn't be under attacker control.

However, I expect a webpage (webassembly in particular) could pull off
the load/fill buffer attack in the HT case.  This is certainly the
biggest risk from my point of view.

>   2) What about BPF?
>
>      If untrusted BPF can be abused to exploit this, then we have to think
>      hard about how to handle that.
>
>      Assume the following:
>
>      User A -> BPF -> preemption -> kernel thread -> back to BPF
>
>      Because there is no kernel to user transition no buffer clear will
>      happen - neither in full nor in conditional mode.
>
>      So the BPF sampler can happily collect the leftovers of the kernel
>      thread.
>
>      Might be theoretical, but it needs to be investigated at least.

TBH, my gut feeling is that userspace might have an easier time
pulling off these attacks than some BPF code, but this certainly doesn't
rule out the possibility.

I hope this is helpful and not too full of errors.  I'd certainly
appreciate some second opinions.

Thanks,

~Andrew

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 07/10] MDS basics+ 7
  2019-02-20 15:08 ` [patch V2 07/10] MDS basics+ 7 Thomas Gleixner
@ 2019-02-21 12:47   ` Borislav Petkov
  2019-02-21 13:48     ` Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21 12:47 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:00PM +0100, speck for Thomas Gleixner wrote:
> Subject: [patch V2 07/10] x86/speculation/mds: Add sysfs reporting for MDS
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Add the sysfs reporting file for MDS. It exposes the vulnerability and
> mitigation state similar to the existing files for the other speculative
> hardware vulnerabilities.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Documentation/ABI/testing/sysfs-devices-system-cpu |    1 +
>  arch/x86/kernel/cpu/bugs.c                         |   20 ++++++++++++++++++++
>  drivers/base/cpu.c                                 |    6 ++++--
>  include/linux/cpu.h                                |    2 ++
>  4 files changed, 27 insertions(+), 2 deletions(-)

...

> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -540,8 +540,8 @@ ssize_t __weak cpu_show_spec_store_bypas
>  	return sprintf(buf, "Not affected\n");
>  }
>  
> -ssize_t __weak cpu_show_l1tf(struct device *dev,
> -			     struct device_attribute *attr, char *buf)
> +ssize_t __weak cpu_show_mds(struct device *dev,
> +			    struct device_attribute *attr, char *buf)

Didn't you mean to copy cpu_show_l1tf() here and rename it instead of
only renaming it?

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 07/10] MDS basics+ 7
  2019-02-21 12:47   ` [MODERATED] " Borislav Petkov
@ 2019-02-21 13:48     ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 13:48 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Borislav Petkov wrote:

> On Wed, Feb 20, 2019 at 04:08:00PM +0100, speck for Thomas Gleixner wrote:
> > Subject: [patch V2 07/10] x86/speculation/mds: Add sysfs reporting for MDS
> > From: Thomas Gleixner <tglx@linutronix.de>
> > 
> > Add the sysfs reporting file for MDS. It exposes the vulnerability and
> > mitigation state similar to the existing files for the other speculative
> > hardware vulnerabilities.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  Documentation/ABI/testing/sysfs-devices-system-cpu |    1 +
> >  arch/x86/kernel/cpu/bugs.c                         |   20 ++++++++++++++++++++
> >  drivers/base/cpu.c                                 |    6 ++++--
> >  include/linux/cpu.h                                |    2 ++
> >  4 files changed, 27 insertions(+), 2 deletions(-)
> 
> ...
> 
> > --- a/drivers/base/cpu.c
> > +++ b/drivers/base/cpu.c
> > @@ -540,8 +540,8 @@ ssize_t __weak cpu_show_spec_store_bypas
> >  	return sprintf(buf, "Not affected\n");
> >  }
> >  
> > -ssize_t __weak cpu_show_l1tf(struct device *dev,
> > -			     struct device_attribute *attr, char *buf)
> > +ssize_t __weak cpu_show_mds(struct device *dev,
> > +			    struct device_attribute *attr, char *buf)
> 
> Didn't you mean to copy cpu_show_l1tf() here and rename it instead of
> only renaming it?

Sigh. Yes. Copy and paste is hard.

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 08/10] MDS basics+ 8
  2019-02-20 15:08 ` [patch V2 08/10] MDS basics+ 8 Thomas Gleixner
@ 2019-02-21 14:04   ` Borislav Petkov
  2019-02-21 14:11     ` Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21 14:04 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:01PM +0100, speck for Thomas Gleixner wrote:
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -221,7 +221,8 @@ static enum mds_mitigations mds_mitigati
>  
>  static const char * const mds_strings[] = {
>  	[MDS_MITIGATION_OFF]	= "Vulnerable",
> -	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers"
> +	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers",
> +	[MDS_MITIGATION_HOPE]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
>  };
>  
>  static void mds_select_mitigation(void)
> @@ -236,12 +237,12 @@ static void mds_select_mitigation(void)
>  		break;
>  	case MDS_MITIGATION_AUTO:
>  	case MDS_MITIGATION_FULL:
> -		if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) {
> +	case MDS_MITIGATION_HOPE:

Now we have:

        switch (mds_mitigation) {
        case MDS_MITIGATION_OFF:
                break;
        case MDS_MITIGATION_AUTO:
        case MDS_MITIGATION_FULL:
        case MDS_MITIGATION_HOPE:

I guess that switch-case statement is not really useful anymore and we can
do an early exit with a simple if:

	if (mds_mitigation == MDS_MITIGATION_OFF)
		goto print;

	/* do rest here */
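
I.e. the whole selector could collapse to something like this (sketch;
I'm guessing at the body from the hunks above, and the AUTO/FULL/HOPE
distinction boils down to whether MD_CLEAR is there):

static void mds_select_mitigation(void)
{
	if (mds_mitigation == MDS_MITIGATION_OFF)
		goto print;

	/* AUTO, FULL and HOPE all attempt the buffer clearing */
	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
		mds_mitigation = MDS_MITIGATION_FULL;
	else
		mds_mitigation = MDS_MITIGATION_HOPE;

	static_branch_enable(&mds_user_clear_always);
print:
	pr_info("%s\n", mds_strings[mds_mitigation]);
}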

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 08/10] MDS basics+ 8
  2019-02-21 14:04   ` [MODERATED] " Borislav Petkov
@ 2019-02-21 14:11     ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 14:11 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Borislav Petkov wrote:
> On Wed, Feb 20, 2019 at 04:08:01PM +0100, speck for Thomas Gleixner wrote:
> Now we have:
> 
>         switch (mds_mitigation) {
>         case MDS_MITIGATION_OFF:
>                 break;
>         case MDS_MITIGATION_AUTO:
>         case MDS_MITIGATION_FULL:
>         case MDS_MITIGATION_HOPE:
> 
> I guess that switch-case statement becomes not really useful and we can
> do an early exit with a simple if:
> 
> 	if (mds_mitigation == MDS_MITIGATION_OFF)
> 		goto print;
> 
> 	/* do rest here */

Well, no. We are getting more of those and I like the switch() because it
complains if you add a new enum value and don't update the case.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
  2019-02-20 16:21   ` [MODERATED] " Peter Zijlstra
  2019-02-21 11:04   ` [MODERATED] " Peter Zijlstra
@ 2019-02-21 14:18   ` Borislav Petkov
  2019-02-21 18:00   ` Kees Cook
  3 siblings, 0 replies; 94+ messages in thread
From: Borislav Petkov @ 2019-02-21 14:18 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
> To avoid the expensive CPU buffer flushing on every transition from kernel
> to user space it is desired to provide a conditional mitigation mode.
> 
> Provide the infrastructure which is required to implement this:
> 
>  - A static key to enable conditional mode CPU buffer flushing.
> 
>  - A per CPU variable which indicates that CPU buffers need to
>    be flushed on return to user space. The variable is defined
>    next to __preempt_count to it ends up in a cacheline which

			    so

>    is required on return to user space anyway.
> 
>  - The conditinal flush mechanics on return to user space.

	conditional

> 
>  - A helper function to set the flush request. Is in processor.h for now to
>    avoid include hell, but might move to a separate header.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/entry/common.c              |    6 ++++++
>  arch/x86/include/asm/nospec-branch.h |    3 +++
>  arch/x86/include/asm/processor.h     |   13 +++++++++++++
>  arch/x86/kernel/cpu/bugs.c           |    1 +
>  arch/x86/kernel/cpu/common.c         |    7 +++++++
>  5 files changed, 30 insertions(+)
> 
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -183,6 +183,12 @@ static void exit_to_usermode_loop(struct
>  
>  static inline void mds_user_clear_cpu_buffers(void)
>  {
> +	if (static_branch_likely(&mds_user_clear_cond)) {
> +		if (__this_cpu_read(mds_cond_clear)) {
> +			__this_cpu_write(mds_cond_clear, 0);
> +			mds_clear_cpu_buffers();
> +		}
> +	}
>  	if (static_branch_likely(&mds_user_clear_always))
>  		mds_clear_cpu_buffers();
>  }
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -9,6 +9,7 @@
>  #include <asm/alternative-asm.h>
>  #include <asm/cpufeatures.h>
>  #include <asm/msr-index.h>
> +#include <asm/percpu.h>
>  
>  /*
>   * Fill the CPU return stack buffer.
> @@ -319,7 +320,9 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_
>  DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>  
>  DECLARE_STATIC_KEY_FALSE(mds_user_clear_always);
> +DECLARE_STATIC_KEY_FALSE(mds_user_clear_cond);
>  DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
> +DECLARE_PER_CPU(unsigned int, mds_cond_clear);
>  
>  #include <asm/segment.h>
>  
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -24,6 +24,7 @@ struct vm86;
>  #include <asm/special_insns.h>
>  #include <asm/fpu/types.h>
>  #include <asm/unwind_hints.h>
> +#include <asm/nospec-branch.h>
>  
>  #include <linux/personality.h>
>  #include <linux/cache.h>
> @@ -998,4 +999,16 @@ enum mds_mitigations {
>  	MDS_MITIGATION_HOPE,
>  };
>  
> +/**
> + * mds_request_buffer_clear - Set the request to clear CPU buffers
> + *
> + * This is invoked from contexts which identify a necessarity to clear CPU
							^
							necessity

> + * buffers on the next return to user space.
> + */
> +static inline void mds_request_buffer_clear(void)
> +{
> +	if (static_branch_likely(&mds_user_clear_cond))
> +		this_cpu_write(mds_cond_clear, 1);
> +}
> +
>  #endif /* _ASM_X86_PROCESSOR_H */
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -66,6 +66,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always
>  
>  /* Control MDS CPU buffer clear before returning to user space */
>  DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
> +DEFINE_STATIC_KEY_FALSE(mds_user_clear_cond);
>  /* Control MDS CPU buffer clear before idling (halt, mwait) */
>  DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
>  
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -8,6 +8,7 @@
>  #include <linux/export.h>
>  #include <linux/percpu.h>
>  #include <linux/string.h>
> +#include <linux/nospec.h>
>  #include <linux/ctype.h>
>  #include <linux/delay.h>
>  #include <linux/sched/mm.h>
> @@ -1544,6 +1545,9 @@ DEFINE_PER_CPU(unsigned int, irq_count)
>  DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
>  EXPORT_PER_CPU_SYMBOL(__preempt_count);
>  
> +/* Indicator for return to user space or VMENTER to clear CPU buffers */
> +DEFINE_PER_CPU(unsigned int, mds_cond_clear);
> +
>  /* May not be marked __init: used by software suspend */
>  void syscall_init(void)
>  {
> @@ -1617,6 +1621,9 @@ EXPORT_PER_CPU_SYMBOL(current_task);
>  DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
>  EXPORT_PER_CPU_SYMBOL(__preempt_count);
>  
> +/* Indicator for return to user space or VMENTER to clear CPU buffers */
> +DEFINE_PER_CPU(unsigned int, mds_cond_clear);
> +

Why not define it once in a bitness-agnostic place, above the #ifdef CONFIG_X86_64?
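
I.e. something like this once, before the #ifdef CONFIG_X86_64 block,
instead of once per bitness (exact placement is my guess):

	/* Indicator for return to user space or VMENTER to clear CPU buffers */
	DEFINE_PER_CPU(unsigned int, mds_cond_clear);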

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 09/10] MDS basics+ 9
  2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
                     ` (2 preceding siblings ...)
  2019-02-21 14:18   ` Borislav Petkov
@ 2019-02-21 18:00   ` Kees Cook
  2019-02-21 19:46     ` Thomas Gleixner
  3 siblings, 1 reply; 94+ messages in thread
From: Kees Cook @ 2019-02-21 18:00 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
>  - A helper function to set the flush request. Is in processor.h for now to
>    avoid include hell, but might move to a separate header.

So, this looks like a blacklisting approach? i.e. things that feel they are
sensitive must call this to make sure they don't leak.

I think we'd have safer coverage if we did this in reverse: we're likely
better able to reason about places where we know there's nothing
interesting happening and we don't need to flush. (i.e. whitelist and
flush by default)

Now, all that said, I think we need to always flush, but I'm paranoid and
I look at exploits too much. For example, even stack addresses themselves
should be considered secret, since they may be used by attackers to align
cross-stack attacks, etc. Take a look at the hoops that are needed to
pull this attack off: https://www.slideshare.net/scovetta/stackjacking
I don't think we should make this easier by default.

The same applies to all heap addresses, and basically everything. Just
blacklisting externally-defined "secrets" isn't going to protect much,
IMO. Discoverability of kernel memory layout (and I'm not talking text
ASLR here: I mean stack, heap, page table location, etc) is basically
the second step of modern attacks. (The first step is usually turning
off SMAP.)

So, for the paranoid: we need a flush-always. For people who think
attackers aren't going to use 0-day bugs and all the leaky info to perform
a "regular" memory corruption attack to gain access to "secrets", and want
to just stop direct leaks, I think whitelisting is the better option. How
can we know what someone thinks is a secret?

-- 
Kees Cook                                            @outflux.net

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Attack Surface [Was [patch V2 05/10] MDS basics+ 5]
  2019-02-21 11:51       ` [MODERATED] Attack Surface [Was [patch V2 05/10] MDS basics+ 5] Andrew Cooper
@ 2019-02-21 18:41         ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 18:41 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2114 bytes --]

On Thu, 21 Feb 2019, speck for Andrew Cooper wrote:
> On 21/02/2019 10:36, speck for Thomas Gleixner wrote:

....

> The prevailing concern is that an attacker can read most of the vector
> register state of the previous task to be scheduled on the thread, which
> could be very valuable information if the previous task was using AES-NI.

That part I think I had halfway grokked. I'm more concerned about what kind
of context the attack can come from.

> >   1) What's the state of javascript?
> >
> >      If that's not an issue, then we really can stop worrying about pre
> >      Nehalem for reality's sake. If it's an issue we need to have some
> >      information for Pavel at least. :)
> 
> I'd like to hope that nothing in a webpage can pull off the store-buffer
> attack.  An attacker needs to be able to control its own faults to
> create assist conditions, and #PF is the only plausible option, but
> shouldn't be under attacker control.
> 
> However, I expect a webpage (webassembly in particular) could pull off
> the load/fill buffer attack in the HT case.  This is certainly the
> biggest risk from my point of view.

Ok. 

> >   2) What about BPF?
> >
> >      If untrusted BPF can be abused to exploit this, then we have to think
> >      hard about how to handle that.
> >
> >      Assume the following:
> >
> >      User A -> BPF -> preemption -> kernel thread -> back to BPF
> >
> >      Because there is no kernel to user transition no buffer clear will
> >      happen - neither in full nor in conditional mode.
> >
> >      So the BPF sampler can happily collect the leftovers of the kernel
> >      thread.
> >
> >      Might be theoretical, but it needs to be investigated at least.
> 
> TBH, my gut feeling is that userspace might have an easier time
> pulling off these attacks than some BPF code, but this certainly doesn't
> rule out the possibility.

My concern here is: HT off, syscalls clear buffers, so BPF might be a good
way to do sampling by carefully constructing a gadget. No idea whether it
works at all.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-20 17:10   ` mark gross
@ 2019-02-21 19:26     ` Tim Chen
  2019-02-21 20:32       ` Thomas Gleixner
  2019-02-21 21:07       ` [MODERATED] " Jiri Kosina
  0 siblings, 2 replies; 94+ messages in thread
From: Tim Chen @ 2019-02-21 19:26 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 135 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for mark gross <speck@linutronix.de>
Subject: Re: [patch V2 04/10] MDS basics+ 4

[-- Attachment #2: Type: text/plain, Size: 829 bytes --]

On 2/20/19 9:10 AM, speck for mark gross wrote:

>> +
>> +      - KGBD

s/KGBD/KGDB

>> +
>> +        If the kernel debugger is accessible by an unprivileged attacker,
>> +        then the NMI handler is the least of the problems.
>> +

...

> 
> However; if I'm being pedantic, the attacker-not-having-controllability aspect
> of your argument can apply to most aspects of the MDS vulnerability.  I think
> that's why its name uses "data sampling".  Also, I need to ask the chip heads
> about whether this list of NMIs is complete and can be expected to stay that way
> across processor and platform generations.
> 
> --mark
> 


I don't think any of the code paths listed touches any user data.  So even
if an attacker has some means to control NMI, he won't get any useful data.

Thanks.

Tim 



^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 09/10] MDS basics+ 9
  2019-02-21 18:00   ` Kees Cook
@ 2019-02-21 19:46     ` Thomas Gleixner
  2019-02-21 20:56       ` Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 19:46 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Kees Cook wrote:

> On Wed, Feb 20, 2019 at 04:08:02PM +0100, speck for Thomas Gleixner wrote:
> >  - A helper function to set the flush request. Is in processor.h for now to
> >    avoid include hell, but might move to a separate header.
> 
> So, this looks like a blacklisting approach? i.e. things that feel they are
> sensitive must call this to make sure they don't leak.
> 
> I think we'd have safer coverage if we did this in reverse: we're likely
> better able to reason about places where we know there's nothing
> interesting happening and we don't need to flush. (i.e. whitelist and
> flush by default)
> 
> Now, all that said, I think we need to always flush, but I'm paranoid and
> I look at exploits too much. For example, even stack addresses themselves
> should be considered secret, since they may be used by attackers to align
> cross-stack attacks, etc. Take a look at the hoops that are needed to
> pull this attack off: https://www.slideshare.net/scovetta/stackjacking
> I don't think we should make this easier by default.
> 
> The same applies to all heap addresses, and basically everything. Just
> blacklisting externally-defined "secrets" isn't going to protect much,
> IMO. Discoverability of kernel memory layout (and I'm not talking text
> ASLR here: I mean stack, heap, page table location, etc) is basically
> the second step of modern attacks. (The first step is usually turning
> off SMAP.)
> 
> So, for the paranoid: we need a flush-always. For people who think
> attackers aren't going to use 0-day bugs and all the leaky info to perform
> a "regular" memory corruption attack to gain access to "secrets", and want
> to just stop direct leaks, I think whitelisting is the better option. How
> can we know what someone thinks is a secret?

We have a flush always mode. That's there already and was done first.

While I understand the request to have more lightweight mitigations, I'm
not convinced about the conditional thing either in the light of these
issues.

The problem with both white- and black-listing is that we might end up with
a large number of places which do the flagging. If it's just a few like
context switch, that'd be ok, but I fear that everyone and his dog will
come up with some other picture of what needs to be protected.

I'll post the last iteration of the lot soonish and we clearly want to keep
the last two out of the git tree for now until we come to a conclusion on
that matter. Andi should be able to provide some insight from analysing
stuff.

TBH, I personally would love to just stick with [full|off], send the
conditional stuff to hell, and go back to doing useful and more pleasant work.

Alternatively we finally bite the bullet and set NR_CPUS=0. All problems
solved.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: Encrypted Message
  2019-02-21 19:26     ` [MODERATED] Encrypted Message Tim Chen
@ 2019-02-21 20:32       ` Thomas Gleixner
  2019-02-21 21:07       ` [MODERATED] " Jiri Kosina
  1 sibling, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 20:32 UTC (permalink / raw)
  To: speck

Tim,

On Thu, 21 Feb 2019, speck for Tim Chen wrote:

> On 2/20/19 9:10 AM, speck for mark gross wrote:

> > However; if I'm being pedantic, the attacker-not-having-controllability aspect
> > of your argument can apply to most aspects of the MDS vulnerability.  I think
> > that's why its name uses "data sampling".  Also, I need to ask the chip heads
> > about whether this list of NMIs is complete and can be expected to stay that way
> > across processor and platform generations.

> I don't think any of the code paths listed touches any user data.  So
> even if an attacker has some means to control NMI, he won't get any
> useful data.

Can you please stop touting this 'user data' mantra?

It's a completely unspecified term. Several people have asked what it
means; the answer until today is deafening silence.

Same applies for 'useful data'. What's useful data? That largely depends on
what the attacker is after.

Engineering 101:

   Correctness first

   Definitions first

not

   Performance first

   Unspecified assumptions and buzzwords first

The whole mess we are forced to deal with originates from the latter two.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* Re: [patch V2 09/10] MDS basics+ 9
  2019-02-21 19:46     ` Thomas Gleixner
@ 2019-02-21 20:56       ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-21 20:56 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Thomas Gleixner wrote:
> On Thu, 21 Feb 2019, speck for Kees Cook wrote:
> I'll post the last iteration of the lot soonish and we clearly want to keep
> the last two out of the git tree for now until we come to a conclusion on
> that matter. Andi should be able to provide some insight from analysing
> stuff.

That said, it would also be extremely useful to get the view on this from
real-world service providers with large deployments. What's their take on
this?

All these thought experiments, micro-benchmark numbers and marketing-
oriented wishful thinking might become moot when confronted with reality.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: Encrypted Message
  2019-02-21 19:26     ` [MODERATED] Encrypted Message Tim Chen
  2019-02-21 20:32       ` Thomas Gleixner
@ 2019-02-21 21:07       ` Jiri Kosina
  1 sibling, 0 replies; 94+ messages in thread
From: Jiri Kosina @ 2019-02-21 21:07 UTC (permalink / raw)
  To: speck

On Thu, 21 Feb 2019, speck for Tim Chen wrote:

> I don't think any of the code paths listed touches any user data.  So 
> even if an attacker has some means to control NMI, he won't get any
> useful data.

Tim,

so how about the kernel stack contents I mentioned yesterday?

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Re: [patch V2 00/10] MDS basics+ 0
  2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
                   ` (9 preceding siblings ...)
  2019-02-20 15:08 ` [patch V2 10/10] MDS basics+ 10 Thomas Gleixner
@ 2019-02-22 16:05 ` mark gross
  2019-02-22 17:12   ` Thomas Gleixner
  10 siblings, 1 reply; 94+ messages in thread
From: mark gross @ 2019-02-22 16:05 UTC (permalink / raw)
  To: speck

On Wed, Feb 20, 2019 at 04:07:53PM +0100, speck for Thomas Gleixner wrote:
> Hi!
> 
> This is an update to yesterdays series with the following changes:
> 
>    - Addressed review comments (on/off list)
> 
>    - Changed the approach with static keys slightly
> 
>    - Added "cc" clobber to the VERW asm magic (spotted by Peterz)
> 
>    - Added x86 specific documentation which explains the mitigation methods
>      and details on why particular code pathes are excluded.
> 
>    - Added an internal 'HOPE' mitigation mode to address the VMWare wish.
> 
>    - Added the basic infrastructure for conditional mode
> 
> Dropped the documentation patch for now as I'm not done with updating it
> and I have to run now and attend my grandson's birthday party.
> 
> Thanks,
> 
> 	tglx
> 
> 
>
bumped into a link time issue with an allmodconfig compile.  I see a new
version is out.  I'll check it out shortly but, FWIW:


From c8f9c06339e76b03b5c9e50c3075476ce6e79823 Mon Sep 17 00:00:00 2001
From: Mark Gross <mgross@linux.intel.com>
Date: Fri, 22 Feb 2019 07:59:22 -0800
Subject: [PATCH] minor fix up to get the mds mitigation to compile with
 allmodconfig

added EXPORT_SYMBOL_GPL(mds_idle_clear); to bugs.c

Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a4da89834d4b..d862881721ce 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -69,6 +69,7 @@ DEFINE_STATIC_KEY_FALSE(mds_user_clear_always);
 DEFINE_STATIC_KEY_FALSE(mds_user_clear_cond);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+EXPORT_SYMBOL_GPL(mds_idle_clear);
 
 void __init check_bugs(void)
 {
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 94+ messages in thread

* Re: [patch V2 00/10] MDS basics+ 0
  2019-02-22 16:05 ` [MODERATED] Re: [patch V2 00/10] MDS basics+ 0 mark gross
@ 2019-02-22 17:12   ` Thomas Gleixner
  0 siblings, 0 replies; 94+ messages in thread
From: Thomas Gleixner @ 2019-02-22 17:12 UTC (permalink / raw)
  To: speck

On Fri, 22 Feb 2019, speck for mark gross wrote:
> bumped into a link time issue with an allmodconfig compile.  I see a new
> version is out.  I'll check it out shortly but, FWIW:

Indeed. That's still there because the drivers/ stuff can be built modular.

I'll fold that in.

Thanks!

	tglx

^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04 17:17     ` [MODERATED] " Josh Poimboeuf
@ 2019-03-06 16:22       ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-06 16:22 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 117 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Josh Poimboeuf <speck@linutronix.de>
Subject: Re: Encrypted Message

[-- Attachment #2: Type: text/plain, Size: 778 bytes --]

On 3/4/19 12:17 PM, speck for Josh Poimboeuf wrote:
> On Sun, Mar 03, 2019 at 10:58:01PM -0500, speck for Jon Masters wrote:
> 
>> On 3/3/19 8:24 PM, speck for Josh Poimboeuf wrote:
>>
>>> +		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
>>> +			pr_warn_once(MDS_MSG_SMT);
>>
>> It's never fully safe to use SMT. I get that if we only had MSBDS then
>> it's unlikely we'll hit the e.g. power state change cases needed to
>> exploit it but I think it would be prudent to display something anyway?
> 
> My understanding is that the idle state changes are mitigated elsewhere
> in the MDS patches, so it should be safe in theory.

Looked at it again. Agree. Sorry about that.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-05 15:34     ` Thomas Gleixner
@ 2019-03-06 16:21       ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-06 16:21 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 118 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: Encrypted Message

[-- Attachment #2: Type: text/plain, Size: 990 bytes --]

On 3/5/19 10:34 AM, speck for Thomas Gleixner wrote:
> On Mon, 4 Mar 2019, speck for Jon Masters wrote:
> 
>> On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
>>>       if (static_branch_unlikely(&vmx_l1d_should_flush))
>>>               vmx_l1d_flush(vcpu);
>>> +     else if (static_branch_unlikely(&mds_user_clear))
>>> +             mds_clear_cpu_buffers();
>>
>> Does this cover the case where we have older ucode installed that does
>> L1D flush but NOT the MD_CLEAR? I'm about to go check to see if there's
>> logic handling this but wanted to call it out.
> 
> If no updated microcode is available then it's pretty irrelevant which code
> path you take. None of them will mitigate MDS.

You're right. My fear was we'd have some microcode that mitigated L1D
without the implied MD clear on parts also affected by MDS. I was
incorrect - all ucode that will be publicly released will have both
properties.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-05 22:31     ` Andrew Cooper
@ 2019-03-06 16:18       ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-06 16:18 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 121 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Andrew Cooper <speck@linutronix.de>
Subject: Re: Starting to go public?

[-- Attachment #2: Type: text/plain, Size: 1380 bytes --]

On 3/5/19 5:31 PM, speck for Andrew Cooper wrote:
> On 05/03/2019 20:36, speck for Jiri Kosina wrote:
>> On Tue, 5 Mar 2019, speck for Andrew Cooper wrote:
>>
>>>> Looks like the papers are starting to leak:
>>>>
>>>>    https://arxiv.org/pdf/1903.00446.pdf
>>>>
>>>> yes, yes, a lot of the attack seems to be about rowhammer, but the
>>>> "spolier" part looks like MDS.
>>> So Intel was aware of that paper, but wasn't expecting it to go public
>>> today.
>>>
>>> From their point of view, it is a traditional timing sidechannel on a
>>> piece of the pipeline (which happens to be component which exists for
>>> speculative memory disambiguation).
>>>
>>> There are no proposed changes to the MDS timeline at this point.
>> So this is not the paper that caused the panic fearing that PSF might leak 
>> earlier than the rest of the issues in mid-February (which a few days later 
>> Intel claimed to have successfully negotiated with the researchers not to 
>> publish before the CRD)?
> 
> Correct.
> 
> The incident you are referring to is a researcher who definitely found
> PSF, contacted Intel and was initially displeased at the proposed embargo.

Indeed. There are at least three different teams with papers that read
on MDS, and all of them are holding to the embargo.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-05 16:43 [MODERATED] Starting to go public? Linus Torvalds
  2019-03-05 17:02 ` [MODERATED] " Andrew Cooper
@ 2019-03-05 17:10 ` Jon Masters
  1 sibling, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-05 17:10 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 135 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Linus Torvalds <speck@linutronix.de>
Subject: NOT PUBLIC - Re: Starting to go public?

[-- Attachment #2: Type: text/plain, Size: 796 bytes --]

On 3/5/19 11:43 AM, speck for Linus Torvalds wrote:
> Looks like the papers are starting to leak:
> 
>    https://arxiv.org/pdf/1903.00446.pdf
> 
> yes, yes, a lot of the attack seems to be about rowhammer, but the
> "spolier" part looks like MDS.

It's not, but it is close to finding PSF behavior. The thing they found
is described separately in one of the original Intel store patents. So we
are at risk but should not panic.

I've spoken with several researchers sitting on MDS papers and confirmed
that they are NOT concerned at this stage. Of course everyone is
carefully watching and that's why we need to have contingency. People
will start looking in this area (I know of three teams doing so) now.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  7:06     ` Jon Masters
@ 2019-03-04  8:12       ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  8:12 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 126 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Jon Masters <speck@linutronix.de>
Subject: Re: [patch V6 08/14] MDS basics 8

[-- Attachment #2: Type: text/plain, Size: 1075 bytes --]

On 3/4/19 2:06 AM, speck for Jon Masters wrote:
> On 3/4/19 1:57 AM, speck for Jon Masters wrote:
>> On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
>>>  	if (static_branch_unlikely(&vmx_l1d_should_flush))
>>>  		vmx_l1d_flush(vcpu);
>>> +	else if (static_branch_unlikely(&mds_user_clear))
>>> +		mds_clear_cpu_buffers();
>>
>> Does this cover the case where we have older ucode installed that does
>> L1D flush but NOT the MD_CLEAR? I'm about to go check to see if there's
>> logic handling this but wanted to call it out.
> 
> Aside from the above question, I've reviewed all of the patches
> extensively at this point. Feel free to add a Reviewed-by or Tested-by
> according to your preference. I've a bunch of further tests running,
> including on AMD platforms just to check nothing broke on those
> platforms that are not susceptible to MDS.

Running fine on AMD platform here and reports correctly:

$ cat /sys/devices/system/cpu/vulnerabilities/mds
Not affected

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  7:30   ` [MODERATED] Re: [PATCH RFC 1/4] 1 Greg KH
@ 2019-03-04  7:45     ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  7:45 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 110 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Greg KH <speck@linutronix.de>
Subject: Re: [PATCH RFC 1/4] 1

[-- Attachment #2: Type: text/plain, Size: 1867 bytes --]

On 3/4/19 2:30 AM, speck for Greg KH wrote:
> On Sun, Mar 03, 2019 at 07:23:22PM -0600, speck for Josh Poimboeuf wrote:
>> From: Josh Poimboeuf <jpoimboe@redhat.com>
>> Subject: [PATCH RFC 1/4] x86/speculation/mds: Add mds=full,nosmt cmdline
>>  option
>>
>> Add the mds=full,nosmt cmdline option.  This is like mds=full, but with
>> SMT disabled if the CPU is vulnerable.
>>
>> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
>> ---
>>  Documentation/admin-guide/hw-vuln/mds.rst       |  3 +++
>>  Documentation/admin-guide/kernel-parameters.txt |  6 ++++--
>>  arch/x86/kernel/cpu/bugs.c                      | 10 ++++++++++
>>  3 files changed, 17 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/hw-vuln/mds.rst b/Documentation/admin-guide/hw-vuln/mds.rst
>> index 1de29d28903d..244ab47d1fb3 100644
>> --- a/Documentation/admin-guide/hw-vuln/mds.rst
>> +++ b/Documentation/admin-guide/hw-vuln/mds.rst
>> @@ -260,6 +260,9 @@ time with the option "mds=". The valid arguments for this option are:
>>  
>>  		It does not automatically disable SMT.
>>  
>> +  full,nosmt	The same as mds=full, with SMT disabled on vulnerable
>> +		CPUs.  This is the complete mitigation.
> 
> While I understand the intention, the number of different combinations
> we are "offering" to userspace here is huge, and everyone is going to be
> confused as to what to do.  If we really think/say that SMT is a major
> issue for this, why don't we just have "full" disable SMT?

Frankly, it ought to for safety (SMT can't be made safe). The reason cited
for not doing so (Thomas and Linus can speak up on this part) was
upgrades vs new installs. The concern was not to break existing folks by
losing half their logical CPU count when upgrading a kernel.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  6:57   ` [MODERATED] Encrypted Message Jon Masters
@ 2019-03-04  7:06     ` Jon Masters
  2019-03-04  8:12       ` Jon Masters
  2019-03-05 15:34     ` Thomas Gleixner
  1 sibling, 1 reply; 94+ messages in thread
From: Jon Masters @ 2019-03-04  7:06 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 126 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Jon Masters <speck@linutronix.de>
Subject: Re: [patch V6 08/14] MDS basics 8

[-- Attachment #2: Type: text/plain, Size: 877 bytes --]

On 3/4/19 1:57 AM, speck for Jon Masters wrote:
> On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
>>  	if (static_branch_unlikely(&vmx_l1d_should_flush))
>>  		vmx_l1d_flush(vcpu);
>> +	else if (static_branch_unlikely(&mds_user_clear))
>> +		mds_clear_cpu_buffers();
> 
> Does this cover the case where we have older ucode installed that does
> L1D flush but NOT the MD_CLEAR? I'm about to go check to see if there's
> logic handling this but wanted to call it out.

Aside from the above question, I've reviewed all of the patches
extensively at this point. Feel free to add a Reviewed-by or Tested-by
according to your preference. I've a bunch of further tests running,
including on AMD platforms just to check nothing broke on those
platforms that are not susceptible to MDS.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 21:47 ` [patch V6 08/14] MDS basics 8 Thomas Gleixner
@ 2019-03-04  6:57   ` Jon Masters
  2019-03-04  7:06     ` Jon Masters
  2019-03-05 15:34     ` Thomas Gleixner
  0 siblings, 2 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  6:57 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 130 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V6 08/14] MDS basics 8

[-- Attachment #2: Type: text/plain, Size: 491 bytes --]

On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
>  	if (static_branch_unlikely(&vmx_l1d_should_flush))
>  		vmx_l1d_flush(vcpu);
> +	else if (static_branch_unlikely(&mds_user_clear))
> +		mds_clear_cpu_buffers();

Does this cover the case where we have older ucode installed that does
L1D flush but NOT the MD_CLEAR? I'm about to go check to see if there's
logic handling this but wanted to call it out.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 21:47 ` [patch V6 10/14] MDS basics 10 Thomas Gleixner
@ 2019-03-04  6:45   ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  6:45 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 131 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V6 10/14] MDS basics 10

[-- Attachment #2: Type: text/plain, Size: 306 bytes --]

On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:

> +	/*
> +	 * Enable the idle clearing on CPUs which are affected only by
> +	 * MDBDS and not any other MDS variant. The other variants cannot
           ^^^^^
           MSBDS


-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 21:47 ` [patch V6 06/14] MDS basics 6 Thomas Gleixner
@ 2019-03-04  6:28   ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  6:28 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 130 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V6 06/14] MDS basics 6

[-- Attachment #2: Type: text/plain, Size: 1195 bytes --]

On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
> Provide a inline function with the assembly magic. The argument of the VERW
> instruction must be a memory operand as documented:
> 
>   "MD_CLEAR enumerates that the memory-operand variant of VERW (for
>    example, VERW m16) has been extended to also overwrite buffers affected
>    by MDS. This buffer overwriting functionality is not guaranteed for the
>    register operand variant of VERW."
> 
> Documentation also recommends to use a writable data segment selector:
> 
>   "The buffer overwriting occurs regardless of the result of the VERW
>    permission check, as well as when the selector is null or causes a
>    descriptor load segment violation. However, for lowest latency we
>    recommend using a selector that indicates a valid writable data
>    segment."

Note that we raised this again with Intel last week amid Andrew's
results and they are going to get back to us if this guidance changes as
a result of further measurements on their end. It's a few cycles
difference in the Coffeelake case, but it could always be higher.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 21:47 ` [patch V6 12/14] MDS basics 12 Thomas Gleixner
@ 2019-03-04  5:47   ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  5:47 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 131 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V6 12/14] MDS basics 12

[-- Attachment #2: Type: text/plain, Size: 1553 bytes --]

On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:

> Subject: [patch V6 12/14] x86/speculation/mds: Add mitigation mode VMWERV
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> In virtualized environments it can happen that the host has the microcode
> update which utilizes the VERW instruction to clear CPU buffers, but the
> hypervisor is not yet updated to expose the X86_FEATURE_MD_CLEAR CPUID bit
> to guests.
> 
> Introduce an internal mitigation mode VMWERV which enables the invocation
> of the CPU buffer clearing even if X86_FEATURE_MD_CLEAR is not set. If the
> system has no updated microcode this results in a pointless execution of
> the VERW instruction wasting a few CPU cycles. If the microcode is updated,
> but not exposed to a guest then the CPU buffers will be cleared.
> 
> That said: Virtual Machines Will Eventually Receive Vaccine

The effect of this patch, currently, is that a (bare metal) machine
without updated ucode will print the following:

[    1.576602] MDS: Vulnerable: Clear CPU buffers attempted, no microcode

The intention of the patch is to say "hey, you might be on a VM, so
we'll try anyway in case we didn't get told you had MD_CLEAR". But the
wording is ambiguous on bare metal: a reader could reasonably assume we
are falling back to a software sequence to attempt the flush.

Perhaps the wording should convey something like:

"MDS: Vulnerable: Clear CPU buffers may not work, no microcode"

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 21:47 [patch V6 00/14] MDS basics 0 Thomas Gleixner
                   ` (3 preceding siblings ...)
  2019-03-01 21:47 ` [patch V6 12/14] MDS basics 12 Thomas Gleixner
@ 2019-03-04  5:30 ` Jon Masters
  4 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  5:30 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 130 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V6 00/14] MDS basics 0

[-- Attachment #2: Type: text/plain, Size: 1408 bytes --]

On 3/1/19 4:47 PM, speck for Thomas Gleixner wrote:
> Changes vs. V5:
> 
>   - Fix tools/ build (Josh)
> 
>   - Dropped the AIRMONT_MID change as it needs confirmation from Intel
> 
>   - Made the consolidated whitelist more readable and correct
> 
>   - Added the MSBDS only quirk for XEON PHI, made the idle flush
>     depend on it and updated the sysfs output accordingly.
> 
>   - Fixed the protection matrix in the admin documentation and clarified
>     the SMT situation vs. MSBDS only.
> 
>   - Updated the KVM/VMX changelog.
> 
> Delta patch against V5 below.
> 
> Available from git:
> 
>    cvs.ou.linutronix.de:linux/speck/linux WIP.mds
> 
> The linux-4.20.y, linux-4.19.y and linux-4.14.y branches are updated as
> well and contain the untested backports of the pile for reference.
> 
> I'll send git bundles of the pile as well.

Tested on Coffeelake with updated ucode successfully:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Core(TM) i7-8086K CPU @ 4.00GHz
stepping        : 10
microcode       : 0xae

[jcm@stephen ~]$ dmesg|grep MDS
[    1.633165] MDS: Mitigation: Clear CPU buffers

[jcm@stephen ~]$ cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT vulnerable

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  1:25 ` [MODERATED] [PATCH RFC 4/4] 4 Josh Poimboeuf
@ 2019-03-04  4:07   ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  4:07 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 117 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Josh Poimboeuf <speck@linutronix.de>
Subject: Re: [PATCH RFC 4/4] 4

[-- Attachment #2: Type: text/plain, Size: 1461 bytes --]

On 3/3/19 8:25 PM, speck for Josh Poimboeuf wrote:
> From: Josh Poimboeuf <jpoimboe@redhat.com>
> Subject: [PATCH RFC 4/4] x86/speculation: Add 'cpu_spec_mitigations=' cmdline
>  options
> 
> Keeping track of the number of mitigations for all the CPU speculation
> bugs has become overwhelming for many users.  It's getting more and more
> complicated to decide what mitigations are needed for a given
> architecture.
> 
> Most users fall into a few basic categories:
> 
> - want all mitigations off;
> 
> - want all reasonable mitigations on, with SMT enabled even if it's
>   vulnerable; or
> 
> - want all reasonable mitigations on, with SMT disabled if vulnerable.
> 
> Define a set of curated, arch-independent options, each of which is an
> aggregation of existing options:
> 
> - cpu_spec_mitigations=off: Disable all mitigations.
> 
> - cpu_spec_mitigations=auto: [default] Enable all the default mitigations,
>   but leave SMT enabled, even if it's vulnerable.
> 
> - cpu_spec_mitigations=auto,nosmt: Enable all the default mitigations,
>   disabling SMT if needed by a mitigation.
> 
> See the documentation for more details.

Looks good. There's an effort to upstream mitigation controls for
arm64 but that's not in place yet. They'll want to wire that up later. I
actually had missed the s390x etokens work so that was fun to see here.
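For illustration, the aggregate option rides on the kernel command line
like any other parameter (hypothetical boot entry, using the RFC
spelling of the option):

	linux /boot/vmlinuz root=/dev/sda1 ro cpu_spec_mitigations=auto,nosmt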

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  1:24 ` [MODERATED] [PATCH RFC 3/4] 3 Josh Poimboeuf
@ 2019-03-04  3:58   ` Jon Masters
  2019-03-04 17:17     ` [MODERATED] " Josh Poimboeuf
  0 siblings, 1 reply; 94+ messages in thread
From: Jon Masters @ 2019-03-04  3:58 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 117 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Josh Poimboeuf <speck@linutronix.de>
Subject: Re: [PATCH RFC 3/4] 3

[-- Attachment #2: Type: text/plain, Size: 445 bytes --]

On 3/3/19 8:24 PM, speck for Josh Poimboeuf wrote:

> +		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
> +			pr_warn_once(MDS_MSG_SMT);

It's never fully safe to use SMT. I get that if we only had MSBDS then
it's unlikely we'll hit e.g. the power state change cases needed to
exploit it, but I think it would be prudent to display something anyway?
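Even something minimal would do; a sketch (the MSBDS-only message here
is hypothetical, not part of the posted series):

	if (sched_smt_active()) {
		if (boot_cpu_has(X86_BUG_MSBDS_ONLY))
			pr_info_once("MDS: SMT on, MSBDS-only CPU; idle transitions clear the buffers\n");
		else
			pr_warn_once(MDS_MSG_SMT);
	}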

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-04  1:23 ` [MODERATED] [PATCH RFC 1/4] 1 Josh Poimboeuf
@ 2019-03-04  3:55   ` Jon Masters
  2019-03-04  7:30   ` [MODERATED] Re: [PATCH RFC 1/4] 1 Greg KH
  1 sibling, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-04  3:55 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 117 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Josh Poimboeuf <speck@linutronix.de>
Subject: Re: [PATCH RFC 1/4] 1

[-- Attachment #2: Type: text/plain, Size: 1069 bytes --]

On 3/3/19 8:23 PM, speck for Josh Poimboeuf wrote:

> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index e11654f93e71..0c71ab0d57e3 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -221,6 +221,7 @@ static void x86_amd_ssb_disable(void)
>  
>  /* Default mitigation for L1TF-affected CPUs */
>  static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
> +static bool mds_nosmt __ro_after_init = false;
>  
>  static const char * const mds_strings[] = {
>  	[MDS_MITIGATION_OFF]	= "Vulnerable",
> @@ -238,8 +239,13 @@ static void mds_select_mitigation(void)
>  	if (mds_mitigation == MDS_MITIGATION_FULL) {
>  		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
>  			mds_mitigation = MDS_MITIGATION_VMWERV;
> +
>  		static_branch_enable(&mds_user_clear);
> +
> +		if (mds_nosmt && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
> +			cpu_smt_disable(false);

Is there some logic missing here to disable SMT?

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-03-01 20:58     ` [MODERATED] Encrypted Message Jon Masters
@ 2019-03-01 22:14       ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-03-01 22:14 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 161 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Jon Masters <speck@linutronix.de>
Subject: Re: [patch V4 04/11] x86/speculation/mds: Add mds_clear_cpu_buffer()

[-- Attachment #2: Type: text/plain, Size: 3426 bytes --]

On 3/1/19 3:58 PM, speck for Jon Masters wrote:
> On 2/26/19 9:19 AM, speck for Josh Poimboeuf wrote:
> 
>> On Fri, Feb 22, 2019 at 11:24:22PM +0100, speck for Thomas Gleixner wrote:
>>> +MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
>>> +L1 miss situations and to hold data which is returned or sent in response
>>> +to a memory or I/O operation. Fill buffers can forward data to a load
>>> +operation and also write data to the cache. When the fill buffer is
>>> +deallocated it can retain the stale data of the preceding operations which
>>> +can then be forwarded to a faulting or assisting load operation, which can
>>> +be exploited under certain conditions. Fill buffers are shared between
>>> +Hyper-Threads so cross thread leakage is possible.
> 
> The fill buffers sit opposite the L1D$ and participate in coherency
> directly. They supply data directly to the load store units. Here's the
> internal summary I wrote (feel free to use any of it that is useful):
> 
> "Intel processors utilize fill buffers to perform loads of data when a
> miss occurs in the Level 1 data cache. The fill buffer allows the
> processor to implement a non-blocking cache, continuing with other
> operations while the necessary cache data “line” is loaded from a higher
> level cache or from memory. It also allows the result of the fill to be
> forwarded directly to the EU (Execution Unit) requiring the load,
> without waiting for it to be written into the L1 Data Cache.
> 
> A load operation is not decoupled in the same way that a store is, but
> it does involve an AGU (Address Generation Unit) operation. If the AGU
> generates a fault (#PF, etc.) or an assist (A/D bits) then the classical
> Intel design would block the load and later reissue it. In contemporary
> designs, it instead allows subsequent speculation operations to
> temporarily see a forwarded data value from the fill buffer slot prior
> to the load actually taking place. Thus it is possible to read data that
> was recently accessed by another thread, if the fill buffer entry is not
> reused.
> 
> It is this attack that allows cross-thread SMT leakage and breaks HT
> without recourse other than to disable it or to implement core
> scheduling in the Linux kernel.
> 
> Variants of this include loads that cross cache or page boundaries due
> to further optimizations in Intel’s implementation. For example, Intel
> incorporate logic to guess at address generation prior to determining
> whether it crosses such a boundary (covered in US5335333A) and will
> forward this to the TLB/load logic prior to resolving the full address.
> They will retry the load by re-issuing uops in the case of a cross
> cacheline/page boundary but in that case will leak state as well."

Btw, I've various reproducers here that I'm happy to share with the
right folks if useful. Thomas and Linus should already have my IFU one
for later testing of that; I've also got e.g. an FBBF one. Currently it
just spews whatever it sees from the other threads, but in the next few
days I'll have it cleaned up to send/receive specific messages - then I
can just wrap it with a bow so it can print yes/no vulnerable.

Ping if you have a need for a repro (keybase/email) and I'll go through
our process for sharing as appropriate.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-26 14:19   ` [MODERATED] " Josh Poimboeuf
@ 2019-03-01 20:58     ` Jon Masters
  2019-03-01 22:14       ` Jon Masters
  0 siblings, 1 reply; 94+ messages in thread
From: Jon Masters @ 2019-03-01 20:58 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 164 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Josh Poimboeuf <speck@linutronix.de>
Subject: Re: [patch V4 04/11] x86/speculation/mds: Add mds_clear_cpu_buffer()

[-- Attachment #2: Type: text/plain, Size: 2764 bytes --]

On 2/26/19 9:19 AM, speck for Josh Poimboeuf wrote:

> On Fri, Feb 22, 2019 at 11:24:22PM +0100, speck for Thomas Gleixner wrote:
>> +MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
>> +L1 miss situations and to hold data which is returned or sent in response
>> +to a memory or I/O operation. Fill buffers can forward data to a load
>> +operation and also write data to the cache. When the fill buffer is
>> +deallocated it can retain the stale data of the preceding operations which
>> +can then be forwarded to a faulting or assisting load operation, which can
>> +be exploited under certain conditions. Fill buffers are shared between
>> +Hyper-Threads so cross thread leakage is possible.

The fill buffers sit opposite the L1D$ and participate in coherency
directly. They supply data directly to the load store units. Here's the
internal summary I wrote (feel free to use any of it that is useful):

"Intel processors utilize fill buffers to perform loads of data when a
miss occurs in the Level 1 data cache. The fill buffer allows the
processor to implement a non-blocking cache, continuing with other
operations while the necessary cache data “line” is loaded from a higher
level cache or from memory. It also allows the result of the fill to be
forwarded directly to the EU (Execution Unit) requiring the load,
without waiting for it to be written into the L1 Data Cache.

A load operation is not decoupled in the same way that a store is, but
it does involve an AGU (Address Generation Unit) operation. If the AGU
generates a fault (#PF, etc.) or an assist (A/D bits) then the classical
Intel design would block the load and later reissue it. In contemporary
designs, it instead allows subsequent speculation operations to
temporarily see a forwarded data value from the fill buffer slot prior
to the load actually taking place. Thus it is possible to read data that
was recently accessed by another thread, if the fill buffer entry is not
reused.

It is this attack that allows cross-thread SMT leakage and breaks HT
without recourse other than to disable it or to implement core
scheduling in the Linux kernel.

Variants of this include loads that cross cache or page boundaries due
to further optimizations in Intel’s implementation. For example, Intel
incorporate logic to guess at address generation prior to determining
whether it crosses such a boundary (covered in US5335333A) and will
forward this to the TLB/load logic prior to resolving the full address.
They will retry the load by re-issuing uops in the case of a cross
cacheline/page boundary but in that case will leak state as well."

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-25 16:30   ` [MODERATED] " Greg KH
@ 2019-02-25 16:41     ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-02-25 16:41 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 115 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Greg KH <speck@linutronix.de>
Subject: Re: [PATCH v6 10/43] MDSv6

[-- Attachment #2: Type: text/plain, Size: 411 bytes --]

On 2/25/19 11:30 AM, speck for Greg KH wrote:

>> +BPF could attack the rest of the kernel if it can successfully
>> +measure side channel side effects.
> 
> Can it do such a measurement?

The researchers involved in MDS are actively working on an exploit using
BPF as well, so I expect we'll know soon. My assumption is "yes".

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-25 16:00           ` [MODERATED] " Greg KH
@ 2019-02-25 16:19             ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-02-25 16:19 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 110 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Greg KH <speck@linutronix.de>
Subject: Re: Encrypted Message

[-- Attachment #2: Type: text/plain, Size: 1592 bytes --]

On 2/25/19 11:00 AM, speck for Greg KH wrote:
> On Mon, Feb 25, 2019 at 10:52:30AM -0500, speck for Jon Masters wrote:
>> From: Jon Masters <jcm@redhat.com>
>> To: speck for Greg KH <speck@linutronix.de>
>> Subject: Re: [PATCH v6 31/43] MDSv6
> 
>> On 2/25/19 10:49 AM, speck for Greg KH wrote:
>>> On Mon, Feb 25, 2019 at 07:34:11AM -0800, speck for Andi Kleen wrote:
>>
>>
>>>> However I will probably not be able to write a detailed
>>>> description for each of the interrupt handlers changed because
>>>> there are just too many.
>>>
>>> Then how do you expect each subsystem / driver author to know if this is
>>> an acceptable change or not?  How do you expect to educate driver
>>> authors to have them determine if they need to do this on their new
>>> drivers or not?  Are you going to hand-audit each new driver that gets
>>> added to the kernel for forever?
>>>
>>> Without this type of information, this seems like a futile exercise.
>>
>> Forgive me if I'm being too cautious here, but it seems to make the most
>> sense to have the basic MDS infrastructure in place at unembargo. Unless
>> it's very clear how the auto stuff can be safe, and the audit
>> comprehensive, I wonder if that shouldn't just be done after.
> 
> I thought that was what Thomas's patchset provided and is what was
> alluded to in patch 00/43 of this series.

Indeed. I'm asking whether we're trying to figure out the "auto" stuff
as well before unembargo, or whether the other discussion is just for
planning?

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-25 15:49       ` Greg KH
@ 2019-02-25 15:52         ` Jon Masters
  2019-02-25 16:00           ` [MODERATED] " Greg KH
  0 siblings, 1 reply; 94+ messages in thread
From: Jon Masters @ 2019-02-25 15:52 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 115 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Greg KH <speck@linutronix.de>
Subject: Re: [PATCH v6 31/43] MDSv6

[-- Attachment #2: Type: text/plain, Size: 1032 bytes --]

On 2/25/19 10:49 AM, speck for Greg KH wrote:
> On Mon, Feb 25, 2019 at 07:34:11AM -0800, speck for Andi Kleen wrote:


>> However I will probably not be able to write a detailed
>> description for each of the interrupt handlers changed because
>> there are just too many.
> 
> Then how do you expect each subsystem / driver author to know if this is
> an acceptable change or not?  How do you expect to educate driver
> authors to have them determine if they need to do this on their new
> drivers or not?  Are you going to hand-audit each new driver that gets
> added to the kernel for forever?
> 
> Without this type of information, this seems like a futile exercise.

Forgive me if I'm being too cautious here, but it seems to make the most
sense to have the basic MDS infrastructure in place at unembargo. Unless
it's very clear how the auto stuff can be safe, and the audit
comprehensive, I wonder if that shouldn't just be done after.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-21 23:44 ` [patch V3 4/9] MDS basics 4 Thomas Gleixner
@ 2019-02-22  7:45   ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-02-22  7:45 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 128 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V3 4/9] MDS basics 4

[-- Attachment #2: Type: text/plain, Size: 653 bytes --]

On 2/21/19 6:44 PM, speck for Thomas Gleixner wrote:
> +#include <asm/segment.h>
> +
> +/**
> + * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
> + *
> + * This uses the otherwise unused and obsolete VERW instruction in
> + * combination with microcode which triggers a CPU buffer flush when the
> + * instruction is executed.
> + */
> +static inline void mds_clear_cpu_buffers(void)
> +{
> +	static const u16 ds = __KERNEL_DS;

Dunno if it's worth documenting that, according to Intel, using a
valid segment selector is faster than a zero selector.
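For reference, the full helper the hunk above builds up to looks
roughly like this (my sketch of the obvious completion, not a quote of
the patch):

	static inline void mds_clear_cpu_buffers(void)
	{
		static const u16 ds = __KERNEL_DS;

		/*
		 * Must be the memory-operand variant of VERW; only that
		 * form is documented to trigger the buffer overwriting.
		 */
		asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
	}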

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-19 12:44 [patch 0/8] MDS basics 0 Thomas Gleixner
@ 2019-02-21 16:14 ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-02-21 16:14 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 125 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch 0/8] MDS basics 0

[-- Attachment #2: Type: text/plain, Size: 304 bytes --]

Hi Thomas,

Just a note on testing. I built a few Coffeelake client systems for Red
Hat using the 8086K anniversary processor for which we have test ucode.
I will build and test these patches and ask the RH perf team to test.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-02-08 10:53         ` [MODERATED] [RFC][PATCH] performance walnuts Peter Zijlstra
@ 2019-02-15 23:45           ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-02-15 23:45 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 132 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Peter Zijlstra <speck@linutronix.de>
Subject: Re: [RFC][PATCH] performance walnuts

[-- Attachment #2: Type: text/plain, Size: 944 bytes --]

On 2/8/19 5:53 AM, speck for Peter Zijlstra wrote:
> +static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
> +{
> +	u64 val = MSR_TFA_RTM_FORCE_ABORT * on;
> +
> +	if (cpuc->tfa_shadow != val) {
> +		cpuc->tfa_shadow = val;
> +		wrmsrl(MSR_TSX_FORCE_ABORT, val);
> +	}
> +}

Ok let me ask a stupid question.

This MSR is exposed on a given core. What's the impact (if any) on
*other* cores that might be using TSX? For example, suppose I'm running
an application using RTM on one core while another application on
another core begins profiling. What impact does the impact of this MSR
write have on other cores? (Architecturally).

I'm assuming the implementation of HLE relies on whatever you're doing
fitting into the local core's cache and you just abort on any snoop,
etc. so it ought to be fairly self contained, but I want to know.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-01-14 19:20   ` [MODERATED] " Dave Hansen
@ 2019-01-18  7:33     ` Jon Masters
  0 siblings, 0 replies; 94+ messages in thread
From: Jon Masters @ 2019-01-18  7:33 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 122 bytes --]

From: Jon Masters <jcm@redhat.com>
To: speck for Dave Hansen <speck@linutronix.de>
Subject: Re: [PATCH v4 05/28] MDSv4 10

[-- Attachment #2: Type: text/plain, Size: 1328 bytes --]

On 1/14/19 2:20 PM, speck for Dave Hansen wrote:

> On 1/11/19 5:29 PM, speck for Andi Kleen wrote:
>> When entering idle the internal state of the current CPU might
>> become visible to the thread sibling because the CPU "frees" some
>> internal resources.
> 
> Is there some documentation somewhere about what "idle" means here?  It
> looks like MWAIT and HLT certainly count, but is there anything else?

We know power state transitions can additionally cause the peer thread
to dynamically sleep or wake up. MWAIT was the main example I got out of
Intel for how you'd explicitly cause a thread to be deallocated.

When Andi is talking about "frees" above he means (for example) the
dynamic allocation/deallocation of store buffer entries as threads come
and go - e.g. in Skylake there are 56 entries in a distributed store
buffer that splits into 2x28. I am not aware of fill buffer behavior
changing as threads come and go, and this isn't documented AFAICS.

I've been wondering whether we want a bit more detail in the docs. I
spent a /lot/ of time last week going through all of Intel's patents in
this area, which really help understand it. If folks feel we could do
with a bit more meaty summary, I can try to suggest something.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-01-12  1:29 ` [MODERATED] [PATCH v4 10/28] MDSv4 24 Andi Kleen
@ 2019-01-15  1:05   ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2019-01-15  1:05 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 130 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Andi Kleen <speck@linutronix.de>
Subject: Re: [PATCH v4 10/28] MDSv4 24

[-- Attachment #2: Type: text/plain, Size: 5059 bytes --]


On 1/11/19 5:29 PM, speck for Andi Kleen wrote:

> +Some CPUs can leave read or written data in internal buffers,
> +which then later might be sampled through side effects.
> +For more details see CVE-2018-12126 CVE-2018-12130 CVE-2018-12127
> +
> +This can be avoided by explicitely clearing the CPU state.

s/explicitely/explicitly

> +
> +We trying to avoid leaking data between different processes,

Suggest changing the above phrase to the below:

CPU state clearing prevents leaking data between different processes,

...

> +Basic requirements and assumptions
> +----------------------------------
> +
> +Kernel addresses and kernel temporary data are not sensitive.
> +
> +User data is sensitive, but only for other processes.
> +
> +Kernel data is sensitive when it is cryptographic keys.

s/when it is/when it involves/

> +
> +Guidance for driver/subsystem developers
> +----------------------------------------
> +
> +When you touch user supplied data of *other* processes in system call
> +context add lazy_clear_cpu().
> +
> +For the cases below we care only about data from other processes.
> +Touching non cryptographic data from the current process is always allowed.
> +
> +Touching only pointers to user data is always allowed.
> +
> +When your interrupt does not touch user data directly consider marking

Add a "," between "directly" and "consider"

> +it with IRQF_NO_USER.
> +
> +When your tasklet does not touch user data directly consider marking

Add a "," between "directly" and "consider"

> +it with TASKLET_NO_USER using tasklet_init_flags/or
> +DECLARE_TASKLET*_NOUSER.
> +
> +When your timer does not touch user data mark it with TIMER_NO_USER.

Add a "," between "data" and "mark"

> +If it is a hrtimer mark it with HRTIMER_MODE_NO_USER.

Add a "," between "hrtimer" and "mark"

> +
> +When your irq poll handler does not touch user data, mark it
> +with IRQ_POLL_F_NO_USER through irq_poll_init_flags.
> +
> +For networking code make sure to only touch user data through

Add a "," between "code" and "make"

> +skb_push/put/copy [add more], unless it is data from the current
> +process. If that is not ensured add lazy_clear_cpu or

Add a "," between "ensured" and "add"

> +lazy_clear_cpu_interrupt. When the non skb data access is only in a
> +hardware interrupt controlled by the driver, it can rely on not
> +setting IRQF_NO_USER for that interrupt.
> +
> +Any cryptographic code touching key data should use memzero_explicit
> +or kzfree.
> +
> +If your RCU callback touches user data add lazy_clear_cpu().
> +
> +These steps are currently only needed for code that runs on MDS affected
> +CPUs, which is currently only x86. But might be worth being prepared
> +if other architectures become affected too.
> +
> +Implementation details/assumptions
> +----------------------------------
> +
> +If a system call touches data it is for its own process, so does not

suggest rephrasing to 

If a system call touches data of its own process, cpu state does not

> +need to be cleared, because it has already access to it.
> +
> +When context switching we clear data, unless the context switch
> +is inside a process, or from/to idle. We also clear after any
> +context switches from kernel threads.
> +
> +Idle does not have sensitive data, except for in interrupts, which
> +are handled separately.
> +
> +Cryptographic keys inside the kernel should be protected.
> +We assume they use kzfree() or memzero_explicit() to clear
> +state, so these functions trigger a cpu clear.
> +
> +Hard interrupts, tasklets, timers which can run asynchronous are
> +assumed to touch random user data, unless they have been audited, and
> +marked with NO_USER flags.
> +
> +Most interrupt handlers for modern devices should not touch
> +user data because they rely on DMA and only manipulate
> +pointers. This needs auditing to confirm though.
> +
> +For softirqs we assume that if they touch user data they use

Add "," between "data" and "they"

...

> +Technically we would only need to do this if the BPF program
> +contains conditional branches and loads dominated by them, but
> +let's assume that near all do.
s/near/nearly/

> +
> +This could be further optimized by allowing callers that do
> +a lot of individual BPF runs and are sure they don't touch
> +other user's data inbetween to do the clear only once
> +at the beginning. 

Suggest breaking the above sentence.  It is quite difficult to read.

> We can add such optimizations later based on
> +profile data.
> +
> +Virtualization
> +--------------
> +
> +When entering a guest in KVM we clear to avoid any leakage to a guest.
... we clear CPU state to avoid ....

> +Normally this is done implicitely as part of the L1TF mitigation.

s/implicitely/implicitly/

> +It relies on this being enabled. It also uses the "fast exit"
> +optimization that only clears if an interrupt or context switch
> +happened.
> 



^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2019-01-12  1:29 ` [MODERATED] [PATCH v4 05/28] MDSv4 10 Andi Kleen
  2019-01-14 19:20   ` [MODERATED] " Dave Hansen
@ 2019-01-14 23:39   ` Tim Chen
  1 sibling, 0 replies; 94+ messages in thread
From: Tim Chen @ 2019-01-14 23:39 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 130 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Andi Kleen <speck@linutronix.de>
Subject: Re: [PATCH v4 05/28] MDSv4 10

[-- Attachment #2: Type: text/plain, Size: 526 bytes --]


> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 50aa2aba69bd..b5a1bd4a1a46 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5980,6 +5980,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
>  
>  #ifdef CONFIG_SCHED_SMT
>  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> +EXPORT_SYMBOL(sched_smt_present);

This export is not needed since sched_smt_present is not used in the patch series.
Only sched_smt_active() is used.
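For reference, sched_smt_active() is roughly this inline wrapper around
the static key, so built-in code like bugs.c can use it without the
export; only module code would need EXPORT_SYMBOL:

	static inline bool sched_smt_active(void)
	{
		return static_branch_likely(&sched_smt_present);
	}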

Thanks.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-06-12 17:29 [MODERATED] FYI - Reading uncached memory Jon Masters
@ 2018-06-14 16:59 ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-06-14 16:59 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 135 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Jon Masters <speck@linutronix.de>
Subject: Re: FYI - Reading uncached memory

[-- Attachment #2: Type: text/plain, Size: 586 bytes --]

On 06/12/2018 10:29 AM, speck for Jon Masters wrote:
> FYI Graz have been able to prove the Intel processors will allow
> speculative reads of /explicitly/ UC memory (e.g. marked in MTRR). I
> believe they actually use the QPI SAD table to determine what memory is
> speculation safe and what memory has side effects (i.e. if it's HA'able
> memory then it's deemed ok to rampantly speculate from it).
> 
> Just in case anyone thought UC was safe against attacks.
> 
> Jon.
> 

Thanks for forwarding the info.  Yes, the internal Intel team
is aware of this issue.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-06-05 23:37               ` Tim Chen
@ 2018-06-07 19:11                 ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-06-07 19:11 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 165 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Konrad Rzeszutek Wilk <speck@linutronix.de>
Subject: Re: Is: Tim, Q to you. Was:Re: [PATCH 1/2] L1TF KVM 1

[-- Attachment #2: Type: text/plain, Size: 2489 bytes --]

On 06/05/2018 04:37 PM, Tim Chen wrote:
> On 06/05/2018 04:34 PM, Tim Chen wrote:
>> On 06/04/2018 06:11 AM, speck for Konrad Rzeszutek Wilk wrote:
>>> On Mon, Jun 04, 2018 at 10:24:59AM +0200, speck for Martin Pohlack wrote:
>>>>> [resending as new message as the reply seems to have been lost on at
>>>> least some mail paths]
>>>>
>>>> On 30.05.2018 11:01, speck for Paolo Bonzini wrote:
>>>>> On 30/05/2018 01:54, speck for Andrew Cooper wrote:
>>>>>> Other bits I don't understand are the 64k limit in the first place, why
>>>>>> it gets walked over in 4k strides to begin with (I'm not aware of any
>>>>>> prefetching which would benefit that...) and why a particularly
>>>>>> obfuscated piece of magic is used for the 64byte strides.
>>>>>
>>>>> That is the only part I understood, :) the 4k strides ensure that the
>>>>> source data is in the TLB.  Why that is needed is still a mystery though.
>>>>
>>>> I think the reasoning is that you first want to populate the TLB for the
>>>> whole flush array, then fence, to make sure TLB walks do not interfere
>>>> with the actual flushing later, either for performance reasons or for
>>>> preventing leakage of partial walk results.
>>>>
>>>> Not sure about the 64K, it likely is about the LRU implementation for L1
>>>> replacement not being perfect (but pseudo LRU), so you need to flush
>>>> more than the L1 size (32K) in software.  But I have also seen smaller
>>>> recommendations for that (52K).
>>>
>>
>> Had some discussions with other Intel folks.
>>
>> Our recommendation is not to use the software sequence for L1 clear but
>> use wrmsrl(MSR_IA32_FLUSH_L1D, MSR_IA32_FLUSH_L1D_VALUE).
>> We expect that all affected systems will be receiving a ucode update
>> to provide L1 clearing capability.
>>
>> Yes, the 4k stride is for getting TLB walks out of the way and
>> the 64kB replacement is to accommodate pseudo LRU.
> 
> I will try to see if I can get hold of the relevant documentation
> on pseudo LRU.
> 

The HW folks mentioned that if we have nothing from the flush buffer in
L1, then 32 KB would be sufficient (if we load miss for everything).

However, that's not the case. If some data from the flush buffer is
already in L1, it could protect an unrelated line that's considered
"near" by the LRU from getting flushed.  To make sure that does not
happen, we go through 64 KB of data to guarantee that every line in L1
will encounter a load miss and be flushed.
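In C, the two-pass idea looks roughly like this (illustrative sketch
only; the real sequence is written in asm so the compiler cannot drop
the dead reads, with a fence between the passes):

	static void l1d_flush_software(volatile unsigned char *buf, size_t size)
	{
		size_t i;

		/* Pass 1: touch one byte per 4 KB page so the TLB is
		 * populated before the flush proper starts.
		 */
		for (i = 0; i < size; i += 4096)
			(void)buf[i];

		/* Pass 2: read every 64-byte line. size is 64 KB, twice
		 * the 32 KB L1D, to defeat the pseudo-LRU replacement.
		 */
		for (i = 0; i < size; i += 64)
			(void)buf[i];
	}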

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-06-05 23:34             ` Tim Chen
@ 2018-06-05 23:37               ` Tim Chen
  2018-06-07 19:11                 ` Tim Chen
  0 siblings, 1 reply; 94+ messages in thread
From: Tim Chen @ 2018-06-05 23:37 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 165 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Konrad Rzeszutek Wilk <speck@linutronix.de>
Subject: Re: Is: Tim, Q to you. Was:Re: [PATCH 1/2] L1TF KVM 1

[-- Attachment #2: Type: text/plain, Size: 1939 bytes --]

On 06/05/2018 04:34 PM, Tim Chen wrote:
> On 06/04/2018 06:11 AM, speck for Konrad Rzeszutek Wilk wrote:
>> On Mon, Jun 04, 2018 at 10:24:59AM +0200, speck for Martin Pohlack wrote:
>>> [resending as new message as the reply seems to have been lost on at
>>> least some mail paths]
>>>
>>> On 30.05.2018 11:01, speck for Paolo Bonzini wrote:
>>>> On 30/05/2018 01:54, speck for Andrew Cooper wrote:
>>>>> Other bits I don't understand are the 64k limit in the first place, why
>>>>> it gets walked over in 4k strides to begin with (I'm not aware of any
>>>>> prefetching which would benefit that...) and why a particularly
>>>>> obfuscated piece of magic is used for the 64byte strides.
>>>>
>>>> That is the only part I understood, :) the 4k strides ensure that the
>>>> source data is in the TLB.  Why that is needed is still a mystery though.
>>>
>>> I think the reasoning is that you first want to populate the TLB for the
>>> whole flush array, then fence, to make sure TLB walks do not interfere
>>> with the actual flushing later, either for performance reasons or for
>>> preventing leakage of partial walk results.
>>>
>>> Not sure about the 64K, it likely is about the LRU implementation for L1
>>> replacement not being perfect (but pseudo LRU), so you need to flush
>>> more than the L1 size (32K) in software.  But I have also seen smaller
>>> recommendations for that (52K).
>>
> 
> Had some discussions with other Intel folks.
> 
> Our recommendation is not to use the software sequence for L1 clear but
> use wrmsrl(MSR_IA32_FLUSH_L1D, MSR_IA32_FLUSH_L1D_VALUE).
> We expect that all affected systems will be receiving a ucode update
> to provide L1 clearing capability.
> 
> Yes, the 4k stride is for getting TLB walks out of the way and
> the 64kB replacement is to accommodate pseudo LRU.

I will try to see if I can get hold of the relevant documentation
on pseudo LRU.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-06-04 13:11           ` [MODERATED] Is: Tim, Q to you. Was:Re: " Konrad Rzeszutek Wilk
  2018-06-04 17:59             ` [MODERATED] Encrypted Message Tim Chen
@ 2018-06-05 23:34             ` Tim Chen
  2018-06-05 23:37               ` Tim Chen
  1 sibling, 1 reply; 94+ messages in thread
From: Tim Chen @ 2018-06-05 23:34 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 165 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Konrad Rzeszutek Wilk <speck@linutronix.de>
Subject: Re: Is: Tim, Q to you. Was:Re: [PATCH 1/2] L1TF KVM 1

[-- Attachment #2: Type: text/plain, Size: 1779 bytes --]

On 06/04/2018 06:11 AM, speck for Konrad Rzeszutek Wilk wrote:
> On Mon, Jun 04, 2018 at 10:24:59AM +0200, speck for Martin Pohlack wrote:
>> [resending as new message as the reply seems to have been lost on at
>> least some mail paths]
>>
>> On 30.05.2018 11:01, speck for Paolo Bonzini wrote:
>>> On 30/05/2018 01:54, speck for Andrew Cooper wrote:
>>>> Other bits I don't understand are the 64k limit in the first place, why
>>>> it gets walked over in 4k strides to begin with (I'm not aware of any
>>>> prefetching which would benefit that...) and why a particularly
>>>> obfuscated piece of magic is used for the 64byte strides.
>>>
>>> That is the only part I understood, :) the 4k strides ensure that the
>>> source data is in the TLB.  Why that is needed is still a mystery though.
>>
>> I think the reasoning is that you first want to populate the TLB for the
>> whole flush array, then fence, to make sure TLB walks do not interfere
>> with the actual flushing later, either for performance reasons or for
>> preventing leakage of partial walk results.
>>
>> Not sure about the 64K, it likely is about the LRU implementation for L1
>> replacement not being perfect (but pseudo LRU), so you need to flush
>> more than the L1 size (32K) in software.  But I have also seen smaller
>> recommendations for that (52K).
> 

Had some discussions with other Intel folks.

Our recommendation is not to use the software sequence for L1 clear but
use wrmsrl(MSR_IA32_FLUSH_L1D, MSR_IA32_FLUSH_L1D_VALUE).
We expect that all affected systems will be receiving a ucode update
to provide L1 clearing capability.

Yes, the 4k stride is for getting TLB walks out of the way and
the 64kB replacement is to accommodate pseudo LRU.
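On such ucode the clear then collapses to a single MSR write, roughly
(sketch; the feature-bit name is whatever the kernel ends up using for
the new CPUID enumeration):

	if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
		wrmsrl(MSR_IA32_FLUSH_L1D, MSR_IA32_FLUSH_L1D_VALUE);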

Thanks.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-06-04 13:11           ` [MODERATED] Is: Tim, Q to you. Was:Re: " Konrad Rzeszutek Wilk
@ 2018-06-04 17:59             ` Tim Chen
  2018-06-05 23:34             ` Tim Chen
  1 sibling, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-06-04 17:59 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 165 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Konrad Rzeszutek Wilk <speck@linutronix.de>
Subject: Re: Is: Tim, Q to you. Was:Re: [PATCH 1/2] L1TF KVM 1

[-- Attachment #2: Type: text/plain, Size: 1464 bytes --]

On 06/04/2018 06:11 AM, speck for Konrad Rzeszutek Wilk wrote:
> On Mon, Jun 04, 2018 at 10:24:59AM +0200, speck for Martin Pohlack wrote:
>> [resending as new message as the reply seems to have been lost on at
>> least some mail paths]
>>
>> On 30.05.2018 11:01, speck for Paolo Bonzini wrote:
>>> On 30/05/2018 01:54, speck for Andrew Cooper wrote:
>>>> Other bits I don't understand are the 64k limit in the first place, why
>>>> it gets walked over in 4k strides to begin with (I'm not aware of any
>>>> prefetching which would benefit that...) and why a particularly
>>>> obfuscated piece of magic is used for the 64byte strides.
>>>
>>> That is the only part I understood, :) the 4k strides ensure that the
>>> source data is in the TLB.  Why that is needed is still a mystery though.
>>
>> I think the reasoning is that you first want to populate the TLB for the
>> whole flush array, then fence, to make sure TLB walks do not interfere
>> with the actual flushing later, either for performance reasons or for
>> preventing leakage of partial walk results.
>>
>> Not sure about the 64K, it likely is about the LRU implementation for L1
>> replacement not being perfect (but pseudo LRU), so you need to flush
>> more than the L1 size (32K) in software.  But I have also seen smaller
>> recommendations for that (52K).
> 
> Isn't Tim Chen from Intel on this mailing list? Tim, could you find out
> please?
> 

Will do.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-29 21:14                     ` L1D-Fault KVM mitigation Thomas Gleixner
@ 2018-05-30 16:38                       ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-05-30 16:38 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 134 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: L1D-Fault KVM mitigation

[-- Attachment #2: Type: text/plain, Size: 1626 bytes --]

On 05/29/2018 02:14 PM, speck for Thomas Gleixner wrote:
> 
>> yes, with compile workload, the HT speedup was mostly eaten up by
>> overhead.
> 
> So where is the point of the exercise?
> 
> You will not find a generic solution for this problem ever simply because
> the workloads and guest scenarios are too different. There are clearly
> scenarios which can benefit, but at the same time there are scenarios which
> will be way worse off than with SMT disabled.
> 
> I completely understand that Intel wants to avoid the 'disable SMT'
> solution by all means, but this cannot be done with something which is
> obviously creating more problems than it solves in the first place.
> 
> At some point reality has to kick in and you have to admit that there is no
> generic solution and the only solution for a lot of use cases will be to
> disable SMT. The solution for special workloads like the fully partitioned
> ones David mentioned do not need the extra mess all over the place
> especially not when there is ucode assist at least to the point which fits
> into the patch space and some of it really should not take a huge amount of
> effort, like the forced sibling vmexit to avoid the whole IPI machinery.
> 

Having to sync on VM entry and on VM exit and on interrupt to idle sibling
sucks. Hopefully the ucode guys can come up with something
to provide an option that forces the sibling to vmexit on vmexit,
and on interrupt to idle sibling. This should cut the sync overhead in half.
Then only VM entry needs to be synced should we still want to
do co-scheduling.

Thanks.

Tim



^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-26 19:14                 ` L1D-Fault KVM mitigation Thomas Gleixner
@ 2018-05-29 19:29                   ` Tim Chen
  2018-05-29 21:14                     ` L1D-Fault KVM mitigation Thomas Gleixner
  0 siblings, 1 reply; 94+ messages in thread
From: Tim Chen @ 2018-05-29 19:29 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 134 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: L1D-Fault KVM mitigation

[-- Attachment #2: Type: text/plain, Size: 1799 bytes --]

On 05/26/2018 12:14 PM, speck for Thomas Gleixner wrote:
> On Thu, 24 May 2018, speck for Tim Chen wrote:
> 
>> ... and led to 13 min of load time.
>>
>> We may need to do the co-scheduling only when VM exit rate is low, and
>> turn off the SMT when VM exit rate becomes too high.
> 
> You cannot do that during runtime. That will destroy placement schemes and
> whatever. The SMT off decision needs to be done at a quiescent moment,
> i.e. before starting VMs.

Taking SMT offline is a bit much and too big a hammer. Andi and I
thought about having the scheduler force the sibling thread into idle
instead for high VM exit rate scenarios. That way we don't have to
bother synchronizing with the idle sibling.

But that raises fairness issues, as we would be starving the other run
queue.

> 

> Running the same compile single threaded (offlining vCPU1 in the guest)
> increases the time to 107 seconds.
> 
>     107 / 88  = 1.22
> 
> I.e. it's 20% slower than the one using two threads. That means that it is
> the same slowdown as having two threads synchronized (your number).

yes, with compile workload, the HT speedup was mostly eaten up by
overhead.

> 
> So if I take the above example and assume that the overhead of
> synchronization is ~20% then the average vmenter/vmexit time is close to
> 50us.
> 

> 
> So I can see the usefulness for scenarios which David Woodhouse described
> where vCPU and host CPU have a fixed relationship and the guests exit once
> in a while. But that should really be done with ucode assistance which
> avoids all the nasty synchronization hackery more or less completely.

The ucode guys are looking into such possibilities.  It is tough as they
have to work within the constraint of limited ucode headroom.

Thanks.

Tim


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-24 23:18               ` [MODERATED] Encrypted Message Tim Chen
@ 2018-05-25 18:22                 ` Tim Chen
  2018-05-26 19:14                 ` L1D-Fault KVM mitigation Thomas Gleixner
  1 sibling, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-05-25 18:22 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 127 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Tim Chen <speck@linutronix.de>
Subject: Re: L1D-Fault KVM mitigation

[-- Attachment #2: Type: text/plain, Size: 3260 bytes --]

On 05/24/2018 04:18 PM, speck for Tim Chen wrote:
> On 05/24/2018 08:33 AM, speck for Thomas Gleixner wrote:
>> On Thu, 24 May 2018, speck for Thomas Gleixner wrote:
>>> On Thu, 24 May 2018, speck for Peter Zijlstra wrote:
>>>> On Wed, May 23, 2018 at 10:45:45AM +0100, speck for David Woodhouse wrote:
>>>>> The microcode trick just makes it a lot easier because we don't
>>>>> have to *explicitly* pause the sibling vCPUs and manage their state on
>>>>> every vmexit/entry. And avoids potential race conditions with managing
>>>>> that in software.
>>>>
>>>> Yes, it would certainly help and avoid a fair bit of ugly. It would, for
>>>> instance, avoid having to modify irq_enter() / irq_exit(), which would
>>>> otherwise be required (and possibly leak all data touched up until that
>>>> point is reached).
>>>>
>>>> But even with all that, adding L1-flush to every VMENTER will hurt lots.
>>>> Consider for example the PIO emulation used when booting a guest from a
>>>> disk image. That causes VMEXIT/VMENTER at stupendous rates.
>>>
>>> Just did a test on SKL Client where I have ucode. It does not have HT so
>>> its not suffering from any HT side effects when L1D is flushed.
>>>
>>> Boot time from a disk image is ~1s measured from the first vcpu enter.
>>>
>>> With L1D Flush on vmenter the boot time is about 5-10% slower. And that has
>>> lots of PIO operations in the early boot.
>>>
>>> For a kernel build the L1D Flush has an overhead of < 1%.
>>>
>>> Netperf guest to host has a slight drop of the throughput in the 2%
>>> range. Host to guest surprisingly goes up by ~3%. Fun stuff!
>>>
>>> Now I isolated two host CPUs and pinned the two vCPUs on it to be able to
>>> measure the overhead. Running cyclictest with a period of 25us in the guest
>>> on a isolated guest CPU and monitoring the behaviour with perf on the host
>>> for the corresponding host CPU gives
>>>
>>> No Flush	      	       Flush
>>>
>>> 1.31 insn per cycle	       1.14 insn per cycle
>>>
>>> 2e6 L1-dcache-load-misses/sec  26e6 L1-dcache-load-misses/sec
>>>
>>> In that simple test the L1D misses go up by a factor of 13.
>>>
>>> Now with the whole gang scheduling the numbers I heard through the
>>> grapevine are in the range of factor 130, i.e. 13k% for a simple boot from
>>> disk image. 13 minutes instead of 6 seconds...
> 
> The performance is highly dependent on how often we VM exit.
> Working with Peter Z on his prototype, the performance ranges from
> no regression for a network loopback, ~20% regression for kernel compile
> to ~100% regression on File IO.  PIO brings out the worst aspect
> of the synchronization overhead as we VM exit on every dword PIO read in, and the
> kernel and initrd images were about 50 MB for the experiment, and led to
> 13 min of load time.
> 
> We may need to do the co-scheduling only when VM exit rate is low, and
> turn off the SMT when VM exit rate becomes too high.
> 
> (Note: I haven't added in the L1 flush on VM entry for my experiment, that is on
> the todo).

As a post note, I added in the L1 flush and the performance numbers
pretty much stay the same.  So the synchronization overhead is
dominant and L1 flush overhead is secondary.

Tim



^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-24 15:33             ` Thomas Gleixner
@ 2018-05-24 23:18               ` Tim Chen
  2018-05-25 18:22                 ` Tim Chen
  2018-05-26 19:14                 ` L1D-Fault KVM mitigation Thomas Gleixner
  0 siblings, 2 replies; 94+ messages in thread
From: Tim Chen @ 2018-05-24 23:18 UTC (permalink / raw)
  To: speck

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/rfc822-headers; protected-headers="v1", Size: 134 bytes --]

From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: L1D-Fault KVM mitigation

[-- Attachment #2: Type: text/plain, Size: 3619 bytes --]

On 05/24/2018 08:33 AM, speck for Thomas Gleixner wrote:
> On Thu, 24 May 2018, speck for Thomas Gleixner wrote:
>> On Thu, 24 May 2018, speck for Peter Zijlstra wrote:
>>> On Wed, May 23, 2018 at 10:45:45AM +0100, speck for David Woodhouse wrote:
>>>> The microcode trick just makes it a lot easier because we don't
>>>> have to *explicitly* pause the sibling vCPUs and manage their state on
>>>> every vmexit/entry. It also avoids potential race conditions with
>>>> managing that in software.
>>>
>>> Yes, it would certainly help and avoid a fair bit of ugly. It would, for
>>> instance, avoid having to modify irq_enter() / irq_exit(), which would
>>> otherwise be required (and without which all data touched up until that
>>> point could leak).
>>>
>>> But even with all that, adding L1-flush to every VMENTER will hurt lots.
>>> Consider for example the PIO emulation used when booting a guest from a
>>> disk image. That causes VMEXIT/VMENTER at stupendous rates.
>>
>> Just did a test on a SKL client where I have ucode. It does not have HT, so
>> it's not suffering from any HT side effects when L1D is flushed.
>>
>> Boot time from a disk image is ~1s measured from the first vcpu enter.
>>
>> With L1D flush on vmenter the boot time is about 5-10% slower, and that
>> includes lots of PIO operations in early boot.
>>
>> For a kernel build the L1D Flush has an overhead of < 1%.
>>
>> Netperf guest to host shows a slight throughput drop in the 2%
>> range. Host to guest surprisingly goes up by ~3%. Fun stuff!
>>
>> Now I isolated two host CPUs and pinned the two vCPUs to them to be able to
>> measure the overhead. Running cyclictest with a period of 25us in the guest
>> on an isolated guest CPU and monitoring the behaviour with perf on the host
>> for the corresponding host CPU gives:
>>
>> No Flush	      	       Flush
>>
>> 1.31 insn per cycle	       1.14 insn per cycle
>>
>> 2e6 L1-dcache-load-misses/sec  26e6 L1-dcache-load-misses/sec
>>
>> In that simple test the L1D misses go up by a factor of 13.
>>
>> Now with the whole gang scheduling, the numbers I heard through the
>> grapevine are in the range of a factor of 130, i.e. 13k%, for a simple boot
>> from a disk image. 13 minutes instead of 6 seconds...

The performance is highly dependent on how often we VM exit.
Working with Peter Z on his prototype, the performance ranges from
no regression for a network loopback, through ~20% regression for a kernel
compile, to ~100% regression on file IO.  PIO brings out the worst aspect
of the synchronization overhead, as we VM exit on every dword PIO read; the
kernel and initrd images were about 50 MB in the experiment, which led to
13 min of load time.

We may need to do the co-scheduling only when the VM exit rate is low, and
turn off SMT when the VM exit rate becomes too high (a rough sketch of such
a policy follows below).

(Note: I haven't added in the L1 flush on VM entry for my experiment; that
is still on the todo list.)

Tim


>>
>> That's not surprising at all, though the magnitude is way higher than I
>> expected. I don't see a realistic chance for vmexit-heavy workloads to work
>> with that synchronization thing at all, whether it's ucode-assisted or not.
> 
> That said, I think we should stage the host side mitigations plus the L1
> flush on vmenter ASAP so we are not standing there with our pants down when
> the cat comes out of the bag early. That means HT off, but it's still
> better than having absolutely nothing.
> 
> The gang scheduling nonsense can be added on top if it should
> surprisingly turn out to be usable at all.
> 
> Thanks,
> 
> 	tglx
> 
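
A rough sketch of the policy floated above: co-schedule while the VMEXIT
rate stays low, and fall back to L1D-flush-only with SMT off once it spikes.
Every name here (vmexit_rate_per_sec(), l1tf_gang_sched_enable()/_disable(),
the threshold value) is hypothetical; this only illustrates the shape such a
heuristic could take:

#include <linux/kvm_host.h>

/* Hypothetical knob: exits/sec beyond which gang scheduling costs
 * more than it protects. The value is made up. */
#define VMEXIT_RATE_HIGH        50000

static void l1tf_update_mitigation(struct kvm_vcpu *vcpu)
{
        /* hypothetical helper: sampled VMEXIT rate for this vCPU */
        u64 rate = vmexit_rate_per_sec(vcpu);

        if (rate < VMEXIT_RATE_HIGH)
                l1tf_gang_sched_enable(vcpu);   /* hypothetical */
        else
                l1tf_gang_sched_disable(vcpu);  /* hypothetical: flush-only, SMT off */
}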



^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-18 14:29   ` Thomas Gleixner
@ 2018-05-18 19:50     ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-05-18 19:50 UTC (permalink / raw)
  To: speck


From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: Is: Sleep states ?Was:Re: SSB status - V18 pushed out


On 05/18/2018 07:29 AM, speck for Thomas Gleixner wrote:
> On Fri, 18 May 2018, speck for Konrad Rzeszutek Wilk wrote:
>> On Thu, May 17, 2018 at 10:53:28PM +0200, speck for Thomas Gleixner wrote:
>>> Folks,
>>>
>>> we finally reached a stable state with the SSB patches. I've updated all 3
>>> branches master/linux-4.16.y/linux-4.14.y in the repo and attached the
>>> resulting git bundles. They merge cleanly on top of the current HEADs of
>>> the relevant trees.
>>>
>>> The lot survived light testing on my side and it would be great if everyone
>>> involved could expose it to their test scenarios.
>>>
>>> Thanks to everyone who participated in that effort (patches, review,
>>> testing ...)!
>>
>> Yeey! Thank you.
>>
>> I was reading the updated Intel doc today (instead of skim-reading it) and it mentioned:
>>
>> "Intel recommends that the SSBD MSR bit be cleared when in a sleep state on such processors."
> 
> Well, the same recommendation was made for IBRS, and the reason is that with
> HT enabled the other hyperthread will not be able to go full speed because
> the sleeping one went to sleep with IBRS still set. SSBD works the same way.
> 
> " SW should clear [SSBD] when enter sleep state, just as is suggested for
>   IBRS and STIBP on existing implementations"
> 
> and that document says:
> 
> "Enabling IBRS on one logical processor of a core with Intel
>  Hyper-Threading Technology may affect branch prediction on other logical
>  processors of the same core. For this reason, software should disable IBRS
>  (by clearing IA32_SPEC_CTRL.IBRS) prior to entering a sleep state (e.g.,
>  by executing HLT or MWAIT) and re-enable IBRS upon wakeup and prior to
>  executing any indirect branch."
> 
> So it's only a performance issue, and not a fundamental problem, to have it
> on when executing HLT/MWAIT.
> 
> So we have two situations here:
> 
> 1) ssbd = on, i.e. X86_FEATURE_SPEC_STORE_BYPASS_DISABLE
> 
>    There it is irrelevant because both threads have SSBD set permanently,
>    so unsetting it on HLT/MWAIT is not going to lift the restriction for
>    the running sibling thread. And HLT/MWAIT is not going to be faster by
>    unsetting it and then setting it again on wakeup....
> 
> 2) SSBD via prctl/seccomp
> 
>    Nothing to do there, because the idle task does not have TIF_SSBD set, so
>    it never enters HLT/MWAIT with SSBD set.
> 
> So I think we're good, but it would be nice if Intel folks would confirm
> that.

Yes, we thought earlier about turning off SSBD in the mwait path, but
decided that it was unnecessary for the exact reasons Thomas mentioned.

Thanks.

Tim
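
For reference only, what the quoted recommendation would amount to if it were
implemented: clear the SSBD bit in IA32_SPEC_CTRL before the idle instruction
and restore it on wakeup, just as suggested for IBRS. A minimal sketch using
the later SPEC_CTRL_SSBD naming and the x86_spec_ctrl_base variable from the
SSB series; per the analysis above, it is not actually needed:

#include <linux/types.h>
#include <asm/msr-index.h>
#include <asm/msr.h>

extern u64 x86_spec_ctrl_base;  /* base value, as in the SSB series */

/* Clear SSBD while this thread sleeps so the sibling is not penalized;
 * assumes x86_spec_ctrl_base carries SPEC_CTRL_SSBD in the ssbd=on case. */
static inline void ssbd_idle_enter(void)
{
        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base & ~SPEC_CTRL_SSBD);
}

/* Restore the full mitigation state on wakeup. */
static inline void ssbd_idle_exit(void)
{
        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}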


^ permalink raw reply	[flat|nested] 94+ messages in thread

* [MODERATED] Encrypted Message
  2018-05-02 21:51 [patch V11 00/16] SSB 0 Thomas Gleixner
@ 2018-05-03  4:27 ` Tim Chen
  0 siblings, 0 replies; 94+ messages in thread
From: Tim Chen @ 2018-05-03  4:27 UTC (permalink / raw)
  To: speck


From: Tim Chen <tim.c.chen@linux.intel.com>
To: speck for Thomas Gleixner <speck@linutronix.de>
Subject: Re: [patch V11 00/16] SSB 0




On 05/02/2018 02:51 PM, speck for Thomas Gleixner wrote:
> Changes since V10:
> 
>   - Addressed Ingo's review feedback
> 
>   - Picked up Reviewed-bys
> 
> Delta patch below. Bundle is coming in separate mail. Git repo branches are
> updated as well. The master branch contains also the fix for the lost IBRS
> issue Tim was seeing.
> 
> If there are no further issues and nitpicks, I'm going to make the
> changes immutable and changes need to go incremental on top.
> 
> Thanks,
> 
> 	tglx
> 
> 

I notice that this code ignores the current process's TIF_RDS setting
in the prctl case:

#define firmware_restrict_branch_speculation_end()                      \
do {                                                                    \
        u64 val = x86_get_default_spec_ctrl();                          \
                                                                        \
        alternative_msr_write(MSR_IA32_SPEC_CTRL, val,                  \
                              X86_FEATURE_USE_IBRS_FW);                 \
        preempt_enable();                                               \
} while (0)

x86_get_default_spec_ctrl() will return x86_spec_ctrl_base, which
results in x86_spec_ctrl_base being written to the MSR in the prctl
case on Intel CPUs.  That incorrectly ignores the current process's
TIF_RDS setting, so the RDS bit will not be set.

Instead, the following value should have been written to the MSR on
Intel CPUs (a corrected sketch follows below):
x86_spec_ctrl_base | rds_tif_to_spec_ctrl(current_thread_info()->flags)

Thanks.

Tim
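
A sketch of the macro with the fix described above folded in, i.e. OR-ing
the current task's TIF_RDS contribution into the restored value. The names
are exactly those used in this thread; this only makes the suggested change
concrete and need not match the final patch:

#define firmware_restrict_branch_speculation_end()                      \
do {                                                                    \
        /* Base MSR value plus current's TIF_RDS bit, so a              \
         * prctl-restricted task keeps RDS set. */                      \
        u64 val = x86_spec_ctrl_base |                                  \
                  rds_tif_to_spec_ctrl(current_thread_info()->flags);   \
                                                                        \
        alternative_msr_write(MSR_IA32_SPEC_CTRL, val,                  \
                              X86_FEATURE_USE_IBRS_FW);                 \
        preempt_enable();                                               \
} while (0)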


^ permalink raw reply	[flat|nested] 94+ messages in thread

end of thread, other threads:[~2019-03-06 16:22 UTC | newest]

Thread overview: 94+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-20 15:07 [patch V2 00/10] MDS basics+ 0 Thomas Gleixner
2019-02-20 15:07 ` [patch V2 01/10] MDS basics+ 1 Thomas Gleixner
2019-02-20 16:27   ` [MODERATED] " Borislav Petkov
2019-02-20 16:46   ` Greg KH
2019-02-20 15:07 ` [patch V2 02/10] MDS basics+ 2 Thomas Gleixner
2019-02-20 16:47   ` [MODERATED] " Borislav Petkov
2019-02-20 16:48   ` Greg KH
2019-02-20 15:07 ` [patch V2 03/10] MDS basics+ 3 Thomas Gleixner
2019-02-20 16:54   ` [MODERATED] " mark gross
2019-02-20 16:57     ` Thomas Gleixner
2019-02-20 18:08       ` [MODERATED] " mark gross
2019-02-20 21:40         ` Thomas Gleixner
2019-02-20 17:14   ` [MODERATED] " Borislav Petkov
2019-02-20 21:31     ` Thomas Gleixner
2019-02-21  2:12   ` [MODERATED] " Andrew Cooper
2019-02-21  9:27     ` Peter Zijlstra
2019-02-21  9:33     ` [MODERATED] " Borislav Petkov
2019-02-21 10:04     ` Thomas Gleixner
2019-02-21 10:18       ` [MODERATED] Re: " Borislav Petkov
2019-02-20 15:07 ` [patch V2 04/10] MDS basics+ 4 Thomas Gleixner
2019-02-20 16:52   ` [MODERATED] " Greg KH
2019-02-20 17:10   ` mark gross
2019-02-21 19:26     ` [MODERATED] Encrypted Message Tim Chen
2019-02-21 20:32       ` Thomas Gleixner
2019-02-21 21:07       ` [MODERATED] " Jiri Kosina
2019-02-20 18:43   ` [MODERATED] Re: [patch V2 04/10] MDS basics+ 4 Borislav Petkov
2019-02-20 19:26   ` Jiri Kosina
2019-02-20 21:42     ` Thomas Gleixner
2019-02-20 15:07 ` [patch V2 05/10] MDS basics+ 5 Thomas Gleixner
2019-02-20 20:05   ` [MODERATED] " Borislav Petkov
2019-02-21  2:24   ` Andrew Cooper
2019-02-21 10:36     ` Thomas Gleixner
2019-02-21 11:22       ` Thomas Gleixner
2019-02-21 11:51       ` [MODERATED] Attack Surface [Was [patch V2 05/10] MDS basics+ 5] Andrew Cooper
2019-02-21 18:41         ` Thomas Gleixner
2019-02-20 15:07 ` [patch V2 06/10] MDS basics+ 6 Thomas Gleixner
2019-02-21 10:18   ` [MODERATED] " Borislav Petkov
2019-02-20 15:08 ` [patch V2 07/10] MDS basics+ 7 Thomas Gleixner
2019-02-21 12:47   ` [MODERATED] " Borislav Petkov
2019-02-21 13:48     ` Thomas Gleixner
2019-02-20 15:08 ` [patch V2 08/10] MDS basics+ 8 Thomas Gleixner
2019-02-21 14:04   ` [MODERATED] " Borislav Petkov
2019-02-21 14:11     ` Thomas Gleixner
2019-02-20 15:08 ` [patch V2 09/10] MDS basics+ 9 Thomas Gleixner
2019-02-20 16:21   ` [MODERATED] " Peter Zijlstra
2019-02-20 22:32     ` Thomas Gleixner
2019-02-20 22:50       ` [MODERATED] " Jiri Kosina
2019-02-20 23:22         ` Thomas Gleixner
2019-02-21 11:04   ` [MODERATED] " Peter Zijlstra
2019-02-21 11:50     ` Peter Zijlstra
2019-02-21 14:18   ` Borislav Petkov
2019-02-21 18:00   ` Kees Cook
2019-02-21 19:46     ` Thomas Gleixner
2019-02-21 20:56       ` Thomas Gleixner
2019-02-20 15:08 ` [patch V2 10/10] MDS basics+ 10 Thomas Gleixner
2019-02-22 16:05 ` [MODERATED] Re: [patch V2 00/10] MDS basics+ 0 mark gross
2019-02-22 17:12   ` Thomas Gleixner
  -- strict thread matches above, loose matches on Subject: below --
2019-03-05 16:43 [MODERATED] Starting to go public? Linus Torvalds
2019-03-05 17:02 ` [MODERATED] " Andrew Cooper
2019-03-05 20:36   ` Jiri Kosina
2019-03-05 22:31     ` Andrew Cooper
2019-03-06 16:18       ` [MODERATED] Encrypted Message Jon Masters
2019-03-05 17:10 ` Jon Masters
2019-03-04  1:21 [MODERATED] [PATCH RFC 0/4] Proposed cmdline improvements Josh Poimboeuf
2019-03-04  1:23 ` [MODERATED] [PATCH RFC 1/4] 1 Josh Poimboeuf
2019-03-04  3:55   ` [MODERATED] Encrypted Message Jon Masters
2019-03-04  7:30   ` [MODERATED] Re: [PATCH RFC 1/4] 1 Greg KH
2019-03-04  7:45     ` [MODERATED] Encrypted Message Jon Masters
2019-03-04  1:24 ` [MODERATED] [PATCH RFC 3/4] 3 Josh Poimboeuf
2019-03-04  3:58   ` [MODERATED] Encrypted Message Jon Masters
2019-03-04 17:17     ` [MODERATED] " Josh Poimboeuf
2019-03-06 16:22       ` [MODERATED] " Jon Masters
2019-03-04  1:25 ` [MODERATED] [PATCH RFC 4/4] 4 Josh Poimboeuf
2019-03-04  4:07   ` [MODERATED] Encrypted Message Jon Masters
2019-03-01 21:47 [patch V6 00/14] MDS basics 0 Thomas Gleixner
2019-03-01 21:47 ` [patch V6 06/14] MDS basics 6 Thomas Gleixner
2019-03-04  6:28   ` [MODERATED] Encrypted Message Jon Masters
2019-03-01 21:47 ` [patch V6 08/14] MDS basics 8 Thomas Gleixner
2019-03-04  6:57   ` [MODERATED] Encrypted Message Jon Masters
2019-03-04  7:06     ` Jon Masters
2019-03-04  8:12       ` Jon Masters
2019-03-05 15:34     ` Thomas Gleixner
2019-03-06 16:21       ` [MODERATED] " Jon Masters
2019-03-01 21:47 ` [patch V6 10/14] MDS basics 10 Thomas Gleixner
2019-03-04  6:45   ` [MODERATED] Encrypted Message Jon Masters
2019-03-01 21:47 ` [patch V6 12/14] MDS basics 12 Thomas Gleixner
2019-03-04  5:47   ` [MODERATED] Encrypted Message Jon Masters
2019-03-04  5:30 ` Jon Masters
2019-02-24 15:07 [MODERATED] [PATCH v6 00/43] MDSv6 Andi Kleen
2019-02-24 15:07 ` [MODERATED] [PATCH v6 10/43] MDSv6 Andi Kleen
2019-02-25 16:30   ` [MODERATED] " Greg KH
2019-02-25 16:41     ` [MODERATED] Encrypted Message Jon Masters
2019-02-24 15:07 ` [MODERATED] [PATCH v6 31/43] MDSv6 Andi Kleen
2019-02-25 15:19   ` [MODERATED] " Greg KH
2019-02-25 15:34     ` Andi Kleen
2019-02-25 15:49       ` Greg KH
2019-02-25 15:52         ` [MODERATED] Encrypted Message Jon Masters
2019-02-25 16:00           ` [MODERATED] " Greg KH
2019-02-25 16:19             ` [MODERATED] " Jon Masters
2019-02-22 22:24 [patch V4 00/11] MDS basics Thomas Gleixner
2019-02-22 22:24 ` [patch V4 04/11] x86/speculation/mds: Add mds_clear_cpu_buffer() Thomas Gleixner
2019-02-26 14:19   ` [MODERATED] " Josh Poimboeuf
2019-03-01 20:58     ` [MODERATED] Encrypted Message Jon Masters
2019-03-01 22:14       ` Jon Masters
2019-02-21 23:44 [patch V3 0/9] MDS basics 0 Thomas Gleixner
2019-02-21 23:44 ` [patch V3 4/9] MDS basics 4 Thomas Gleixner
2019-02-22  7:45   ` [MODERATED] Encrypted Message Jon Masters
2019-02-19 12:44 [patch 0/8] MDS basics 0 Thomas Gleixner
2019-02-21 16:14 ` [MODERATED] Encrypted Message Jon Masters
2019-02-07 23:41 [MODERATED] [PATCH v3 0/6] PERFv3 Andi Kleen
2019-02-07 23:41 ` [MODERATED] [PATCH v3 2/6] PERFv3 Andi Kleen
2019-02-08  0:51   ` [MODERATED] Re: [SUSPECTED SPAM][PATCH " Andrew Cooper
2019-02-08  9:01     ` Peter Zijlstra
2019-02-08  9:39       ` Peter Zijlstra
2019-02-08 10:53         ` [MODERATED] [RFC][PATCH] performance walnuts Peter Zijlstra
2019-02-15 23:45           ` [MODERATED] Encrypted Message Jon Masters
2019-01-12  1:29 [MODERATED] [PATCH v4 00/28] MDSv4 2 Andi Kleen
2019-01-12  1:29 ` [MODERATED] [PATCH v4 05/28] MDSv4 10 Andi Kleen
2019-01-14 19:20   ` [MODERATED] " Dave Hansen
2019-01-18  7:33     ` [MODERATED] Encrypted Message Jon Masters
2019-01-14 23:39   ` Tim Chen
2019-01-12  1:29 ` [MODERATED] [PATCH v4 10/28] MDSv4 24 Andi Kleen
2019-01-15  1:05   ` [MODERATED] Encrypted Message Tim Chen
2018-06-12 17:29 [MODERATED] FYI - Reading uncached memory Jon Masters
2018-06-14 16:59 ` [MODERATED] Encrypted Message Tim Chen
2018-05-29 19:42 [MODERATED] [PATCH 0/2] L1TF KVM 0 Paolo Bonzini
     [not found] ` <20180529194240.7F1336110A@crypto-ml.lab.linutronix.de>
2018-05-29 22:49   ` [PATCH 1/2] L1TF KVM 1 Thomas Gleixner
2018-05-29 23:54     ` [MODERATED] " Andrew Cooper
2018-05-30  9:01       ` Paolo Bonzini
2018-06-04  8:24         ` [MODERATED] " Martin Pohlack
2018-06-04 13:11           ` [MODERATED] Is: Tim, Q to you. Was:Re: " Konrad Rzeszutek Wilk
2018-06-04 17:59             ` [MODERATED] Encrypted Message Tim Chen
2018-06-05 23:34             ` Tim Chen
2018-06-05 23:37               ` Tim Chen
2018-06-07 19:11                 ` Tim Chen
2018-05-17 20:53 SSB status - V18 pushed out Thomas Gleixner
2018-05-18 13:54 ` [MODERATED] Is: Sleep states ?Was:Re: " Konrad Rzeszutek Wilk
2018-05-18 14:29   ` Thomas Gleixner
2018-05-18 19:50     ` [MODERATED] Encrypted Message Tim Chen
2018-05-02 21:51 [patch V11 00/16] SSB 0 Thomas Gleixner
2018-05-03  4:27 ` [MODERATED] Encrypted Message Tim Chen
2018-04-24  9:06 [MODERATED] L1D-Fault KVM mitigation Joerg Roedel
2018-04-24  9:35 ` [MODERATED] " Peter Zijlstra
2018-04-24  9:48   ` David Woodhouse
2018-04-24 11:04     ` Peter Zijlstra
2018-05-23  9:45       ` David Woodhouse
2018-05-24  9:45         ` Peter Zijlstra
2018-05-24 15:04           ` Thomas Gleixner
2018-05-24 15:33             ` Thomas Gleixner
2018-05-24 23:18               ` [MODERATED] Encrypted Message Tim Chen
2018-05-25 18:22                 ` Tim Chen
2018-05-26 19:14                 ` L1D-Fault KVM mitigation Thomas Gleixner
2018-05-29 19:29                   ` [MODERATED] Encrypted Message Tim Chen
2018-05-29 21:14                     ` L1D-Fault KVM mitigation Thomas Gleixner
2018-05-30 16:38                       ` [MODERATED] Encrypted Message Tim Chen
