[patch V5 00/14] MDS basics 0
From: Thomas Gleixner @ 2019-02-27 15:09 UTC
To: speck

Changes since V4:

  - Fix SSB whitelist. Needs to go upstream independently.

  - Consolidate whitelists before adding another one (sketched below).

  - Use an inline helper for the exit to user mitigation (sketched below).

  - Add VMX/VMENTER mitigation when the CPU is not affected by L1TF
    (sketched below).

  - Remove 'auto' command line option.

  - Rework the mitigation documentation so the handling of special
    exceptions is clear.

  - Adjust the virt mitigation admin documentation.

  - Fix typos and address review comments.
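
Three of the changes are easier to see in isolation. The separate
vulnerability whitelists in cpu/common.c are consolidated into a
single table with per-entry flag bits, queried through one helper
(full hunk in the delta patch below):

	static bool __init cpu_matches(unsigned long which)
	{
		const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);

		return m && !!(m->driver_data & which);
	}

so e.g. x86_match_cpu(cpu_no_speculation) becomes
cpu_matches(NO_SPECULATION), and the MDS whitelist is just another
flag (NO_MDS) in the same table.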
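
The exit to user mitigation helper moves from entry/common.c into
nospec-branch.h, still guarded by the mds_user_clear static key:

	static inline void mds_user_clear_cpu_buffers(void)
	{
		if (static_branch_likely(&mds_user_clear))
			mds_clear_cpu_buffers();
	}

That way do_nmi() and do_double_fault() can share the same guarded
call instead of open-coding the static key check.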
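
The new VMX/VMENTER mitigation for CPUs which are not affected by
L1TF boils down to one extra branch in __vmx_vcpu_run():

	if (static_branch_unlikely(&vmx_l1d_should_flush))
		vmx_l1d_flush(vcpu);
	else if (static_branch_unlikely(&mds_user_clear))
		mds_clear_cpu_buffers();

With up to date microcode the L1D flush already covers the guest
transition (see the mds.rst change below), so the explicit buffer
clear is only needed when the L1D flush is not performed.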

Available from git:

   cvs.ou.linutronix.de:linux/speck/linux WIP.mds

Delta patch and delta diffstat below.

Thanks,

	Thomas

8<--------------------
 Documentation/admin-guide/hw-vuln/mds.rst       |   56 +++++++++---
 Documentation/admin-guide/kernel-parameters.txt |   11 --
 Documentation/x86/mds.rst                       |   90 ++++++++++----------
 arch/x86/entry/common.c                         |    7 -
 arch/x86/include/asm/nospec-branch.h            |   11 ++
 arch/x86/include/asm/processor.h                |    1 
 arch/x86/kernel/cpu/bugs.c                      |   25 +----
 arch/x86/kernel/cpu/common.c                    |  108 ++++++++++--------------
 arch/x86/kernel/nmi.c                           |    4 
 arch/x86/kernel/traps.c                         |    4 
 arch/x86/kvm/vmx/vmx.c                          |    2 
 11 files changed, 163 insertions(+), 156 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/mds.rst b/Documentation/admin-guide/hw-vuln/mds.rst
index 151eb40ddadb..73cdc390aece 100644
--- a/Documentation/admin-guide/hw-vuln/mds.rst
+++ b/Documentation/admin-guide/hw-vuln/mds.rst
@@ -50,7 +50,7 @@ may be able to forward this speculative data to a disclosure gadget which
 allows in turn to infer the value via a cache side channel attack.
 
 Because the buffers are potentially shared between Hyper-Threads cross
-Hyper-Thread attacks may be possible.
+Hyper-Thread attacks are possible.
 
 Deeper technical information is available in the MDS specific x86
 architecture section: :ref:`Documentation/x86/mds.rst <mds>`.
@@ -161,14 +161,43 @@ CPU buffer clearing
 Virtualization mitigation
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-  If the CPU is also affected by L1TF and the L1D flush mitigation is enabled
-  and up to date microcode is available, the L1D flush mitigation is
-  automatically protecting the guest transition. For details on L1TF and
-  virtualization see:
-  :ref:`Documentation/admin-guide/hw-vuln//l1tf.rst <mitigation_control_kvm>`.
+  The protection for the host to guest transition depends on the L1TF
+  vulnerability of the CPU:
 
-  If the L1D flush mitigation is disabled or the microcode is not available
-  the guest transition is unprotected.
+  - CPU is affected by L1TF:
+
+    If the L1D flush mitigation is enabled and up to date microcode is
+    available, the L1D flush mitigation is automatically protecting the
+    guest transition. If the L1D flush mitigation is disabled the MDS
+    mitigation is disabled as well.
+
+    For details on L1TF and virtualization see:
+    :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <mitigation_control_kvm>`.
+
+  - CPU is not affected by L1TF:
+
+    CPU buffers are flushed before entering the guest when the host MDS
+    protection is enabled.
+
+  The resulting MDS protection matrix for the host to guest transition:
+
+  ============ ===== ============= ============ =================
+   L1TF         MDS   VMX-L1FLUSH   Host MDS     State
+  ============ ===== ============= ============ =================
+   Don't care   No    Don't care    N/A          Not affected
+
+   Yes          Yes   Disabled      Don't care   Vulnerable
+
+   Yes          Yes   Enabled       Don't care   Mitigated
+
+   No           Yes   N/A           Off          Vulnerable
+
+   No           Yes   N/A           Full         Mitigated
+  ============ ===== ============= ============ =================
+
+  This only covers the host to guest transition, i.e. prevents leakage from
+  host to guest, but does not protect the guest internally. Guests need to
+  have their own protections.
 
 .. _xeon_phi:
 
@@ -203,13 +232,10 @@ The kernel command line allows to control the MDS mitigations at boot
 time with the option "mds=". The valid arguments for this option are:
 
   ============  =============================================================
-  auto		Kernel selects the appropriate mitigation mode when the CPU
-		is affected. Defaults to full.
-
-  full		Provides all available mitigations for the MDS vulnerability
-		vulnerability, unconditional CPU buffer clearing on exit to
+  full		If the CPU is vulnerable, enable all available mitigations
+		for the MDS vulnerability: CPU buffer clearing on exit to
 		userspace and when entering a VM. Idle transitions are
-		protect as well.
+		protected as well if SMT is enabled.
 
 		It does not automatically disable SMT.
 
@@ -217,6 +243,8 @@ time with the option "mds=". The valid arguments for this option are:
 
   ============  =============================================================
 
+Not specifying this option is equivalent to "mds=full".
+
 
 Mitigation selection guide
 --------------------------
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 46410fe1ce88..dddb024eb523 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2369,19 +2369,14 @@
 			attack, to access data to which the attacker does
 			not have direct access.
 
-			This parameter controls the MDS mitigation. The the
+			This parameter controls the MDS mitigation. The
 			options are:
 
-			full    - Unconditionally enable MDS mitigation
+			full    - Enable MDS mitigation on vulnerable CPUs
 			off     - Unconditionally disable MDS mitigation
-			auto    - Kernel detects whether the CPU model is
-				  vulnerable to MDS and picks the most
-				  appropriate mitigation. If the CPU is not
-				  vulnerable, "off" is selected. If the CPU
-				  is vulnerable "full" is selected.
 
 			Not specifying this option is equivalent to
-			mds=auto.
+			mds=full.
 
 	mem=nn[KMG]	[KNL,BOOT] Force usage of a specific amount of memory
 			Amount of memory to be used when the kernel is not able
diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
index ce3dbddbd3b8..b050623c869c 100644
--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -1,5 +1,5 @@
-Microarchitecural Data Sampling (MDS) mitigation
-================================================
+Microarchitectural Data Sampling (MDS) mitigation
+=================================================
 
 .. _mds:
 
@@ -30,7 +30,7 @@ can then be forwarded to a faulting or assisting load operation, which can
 be exploited under certain conditions. Fill buffers are shared between
 Hyper-Threads so cross thread leakage is possible.
 
-MLDPS leaks Load Port Data. Load ports are used to perform load operations
+MLPDS leaks Load Port Data. Load ports are used to perform load operations
 from memory or I/O. The received data is then forwarded to the register
 file or a subsequent operation. In some implementations the Load Port can
 contain stale data from a previous operation which can be forwarded to
@@ -54,8 +54,8 @@ needed for exploiting MDS requires:
  - to control the pointer through which the disclosure gadget exposes the
    data
 
-The existence of such a construct cannot be excluded with 100% certainty,
-but the complexity involved makes it extremly unlikely.
+The existence of such a construct in the kernel cannot be excluded with
+100% certainty, but the complexity involved makes it extremely unlikely.
 
 There is one exception, which is untrusted BPF. The functionality of
 untrusted BPF is limited, but it needs to be thoroughly investigated
@@ -91,8 +91,7 @@ The kernel provides a function to invoke the buffer clearing:
     mds_clear_cpu_buffers()
 
 The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
-(idle) transitions. Depending on the mitigation mode and the system state
-the invocation can be enforced or conditional.
+(idle) transitions.
 
 As a special quirk to address virtualization scenarios where the host has
 the microcode updated, but the hypervisor does not (yet) expose the
@@ -105,7 +104,6 @@ itself are not required because the necessary gadgets to expose the leaked
 data cannot be controlled in a way which allows exploitation from malicious
 user space or VM guests.
 
-
 Kernel internal mitigation modes
 --------------------------------
 
@@ -117,57 +115,67 @@ Kernel internal mitigation modes
          advertised in CPUID.
 
  vmwerv	 Mitigation is enabled. CPU is affected and MD_CLEAR is not
-         advertised in CPUID. That is mainly for virtualization
+	 advertised in CPUID. That is mainly for virtualization
 	 scenarios where the host has the updated microcode but the
 	 hypervisor does not expose MD_CLEAR in CPUID. It's a best
 	 effort approach without guarantee.
  ======= ===========================================================
 
-If the CPU is affected and mds=off is not supplied on the kernel
-command line then the kernel selects the appropriate mitigation mode
-depending on the availability of the MD_CLEAR CPUID bit.
-
+If the CPU is affected and mds=off is not supplied on the kernel command
+line then the kernel selects the appropriate mitigation mode depending on
+the availability of the MD_CLEAR CPUID bit.
 
 Mitigation points
 -----------------
 
 1. Return to user space
 ^^^^^^^^^^^^^^^^^^^^^^^
+
    When transitioning from kernel to user space the CPU buffers are flushed
-   on affected CPUs:
+   on affected CPUs when the mitigation is not disabled on the kernel
+   command line. The mitigation is enabled through the static key
+   mds_user_clear.
+
+   The mitigation is invoked in prepare_exit_to_usermode() which covers
+   most of the kernel to user space transitions. There are a few exceptions
+   which do not invoke prepare_exit_to_usermode() on return to user
+   space. These exceptions use the paranoid exit code.
+
+   - Non Maskable Interrupt (NMI):
+
+     Access to sensitive data like keys or credentials in the NMI context is
+     mostly theoretical: The CPU can do prefetching or execute a
+     misspeculated code path and thereby fetch data which might end up
+     leaking through a buffer.
 
-   - always when the mitigation mode is full. The migitation is enabled
-     through the static key mds_user_clear.
+     But for mounting other attacks the kernel stack address of the task is
+     already valuable information. So in full mitigation mode, the NMI is
+     mitigated on the return from do_nmi() to provide almost complete
+     coverage.
 
-   This covers transitions from kernel to user space through a return to
-   user space from a syscall and from an interrupt or a regular exception.
+   - Double fault (#DF):
 
-   There are other kernel to user space transitions which are not covered
-   by this: NMIs and all non maskable exceptions which go through the
-   paranoid exit, which means that they are not invoking the regular
-   prepare_exit_to_usermode() which handles the CPU buffer clearing.
+     A double fault is usually fatal, but the ESPFIX workaround, which can
+     be triggered from user space through modify_ldt(2), is a recoverable
+     double fault. #DF uses the paranoid exit path, so explicit mitigation
+     in the double fault handler is required.
 
-   Access to sensible data like keys, credentials in the NMI context is
-   mostly theoretical: The CPU can do prefetching or execute a
-   misspeculated code path and thereby fetching data which might end up
-   leaking through a buffer.
+   - Machine Check Exception (#MC):
 
-   But for mounting other attacks the kernel stack address of the task is
-   already valuable information. So in full mitigation mode, the NMI is
-   mitigated on the return from do_nmi() to provide almost complete
-   coverage.
+     Another corner case is a #MC which hits between the CPU buffer clear
+     invocation and the actual return to user. As this still is in kernel
+     space it takes the paranoid exit path which does not clear the CPU
+     buffers. So the #MC handler repopulates the buffers to some
+     extent. Machine checks are not reliably controllable and the window is
+     extremely small so mitigation would just tick a checkbox that this
+     theoretical corner case is covered. To keep the number of special
+     cases small, ignore #MC.
 
-   There is one non maskable exception which returns through paranoid exit
-   and is to some extent controllable from user space through
-   modify_ldt(2): #DF. So mitigation is required in the double fault
-   handler as well.
+   - Debug Exception (#DB):
 
-   Another corner case is a #MC which hits between the buffer clear and the
-   actual return to user. As this still is in kernel space it takes the
-   paranoid exit path which does not clear the CPU buffers. So the #MC
-   handler repopulates the buffers to some extent. Machine checks are not
-   reliably controllable and the window is extremly small so mitigation
-   would just tick a checkbox that this theoretical corner case is covered.
+     This takes the paranoid exit path only when the INT1 breakpoint is in
+     kernel space. #DB on a user space address takes the regular exit path,
+     so no extra mitigation is required.
 
 
 2. C-State transition
@@ -186,7 +194,7 @@ Mitigation points
    the system.
 
    The buffer clear is only invoked before entering the C-State to prevent
-   that stale data from the idling CPU can be spilled to the Hyper-Thread
+   stale data from the idling CPU from spilling to the Hyper-Thread
    sibling after the store buffer got repartitioned and all entries are
    available to the non idle sibling.
 
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 63da15ced746..19f650d729f5 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -181,13 +181,6 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
 	}
 }
 
-static inline void mds_user_clear_cpu_buffers(void)
-{
-	if (!static_branch_likely(&mds_user_clear))
-		return;
-	mds_clear_cpu_buffers();
-}
-
 /* Called with IRQs disabled. */
 __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 {
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 3e27ccd6d5c5..4e970390110f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -346,6 +346,17 @@ static inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }
 
+/**
+ * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+static inline void mds_user_clear_cpu_buffers(void)
+{
+	if (static_branch_likely(&mds_user_clear))
+		mds_clear_cpu_buffers();
+}
+
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c3f5e10891a2..aca1ef8cc79f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -994,7 +994,6 @@ extern enum l1tf_mitigations l1tf_mitigation;
 
 enum mds_mitigations {
 	MDS_MITIGATION_OFF,
-	MDS_MITIGATION_AUTO,
 	MDS_MITIGATION_FULL,
 	MDS_MITIGATION_VMWERV,
 };
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 83b19bb54093..aea871e69d64 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -66,6 +66,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
 /* Control MDS CPU buffer clear before returning to user space */
 DEFINE_STATIC_KEY_FALSE(mds_user_clear);
+EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -73,6 +74,7 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
 void __init check_bugs(void)
 {
 	identify_boot_cpu();
+
 	/*
 	 * identify_boot_cpu() initialized SMT support information, let the
 	 * core code know.
@@ -218,7 +220,7 @@ static void x86_amd_ssb_disable(void)
 #define pr_fmt(fmt)	"MDS: " fmt
 
 /* Default mitigation for MDS-affected CPUs */
-static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_AUTO;
+static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
 
 static const char * const mds_strings[] = {
 	[MDS_MITIGATION_OFF]	= "Vulnerable",
@@ -233,18 +235,10 @@ static void mds_select_mitigation(void)
 		return;
 	}
 
-	switch (mds_mitigation) {
-	case MDS_MITIGATION_OFF:
-		break;
-	case MDS_MITIGATION_AUTO:
-	case MDS_MITIGATION_FULL:
-	case MDS_MITIGATION_VMWERV:
-		if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
-			mds_mitigation = MDS_MITIGATION_FULL;
-		else
+	if (mds_mitigation == MDS_MITIGATION_FULL) {
+		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
 		static_branch_enable(&mds_user_clear);
-		break;
 	}
 	pr_info("%s\n", mds_strings[mds_mitigation]);
 }
@@ -259,8 +253,6 @@ static int __init mds_cmdline(char *str)
 
 	if (!strcmp(str, "off"))
 		mds_mitigation = MDS_MITIGATION_OFF;
-	else if (!strcmp(str, "auto"))
-		mds_mitigation = MDS_MITIGATION_AUTO;
 	else if (!strcmp(str, "full"))
 		mds_mitigation = MDS_MITIGATION_FULL;
 
@@ -702,15 +694,12 @@ void arch_smt_update(void)
 		break;
 	}
 
-	switch (mds_mitigation) {
-	case MDS_MITIGATION_OFF:
-		break;
+	switch (mds_mitigation) {
 	case MDS_MITIGATION_FULL:
 	case MDS_MITIGATION_VMWERV:
 		update_mds_branch_idle();
 		break;
-	/* Keep GCC happy */
-	case MDS_MITIGATION_AUTO:
+	case MDS_MITIGATION_OFF:
 		break;
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bac5a3a38f0d..389853338c2f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -948,69 +948,58 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
 #endif
 }
 
-static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_TABLET,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL_MID,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_MID,	X86_FEATURE_ANY },
-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL,	X86_FEATURE_ANY },
-	{ X86_VENDOR_CENTAUR,	5 },
-	{ X86_VENDOR_INTEL,	5 },
-	{ X86_VENDOR_NSC,	5 },
-	{ X86_VENDOR_ANY,	4 },
+#define NO_SPECULATION	BIT(0)
+#define NO_MELTDOWN	BIT(1)
+#define NO_SSB		BIT(2)
+#define NO_L1TF		BIT(3)
+#define NO_MDS		BIT(4)
+
+static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+	{ X86_VENDOR_ANY,	4, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_CENTAUR,	5, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	5, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_NSC,	5, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL,		X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_TABLET,	X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL_MID,		X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_MID,	X86_FEATURE_ANY, NO_SPECULATION },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL,		X86_FEATURE_ANY, NO_SPECULATION },
+
+	{ X86_VENDOR_AMD,	X86_FAMILY_ANY, X86_MODEL_ANY,		X86_FEATURE_ANY, NO_MELTDOWN | NO_L1TF },
+	{ X86_VENDOR_HYGON,	X86_FAMILY_ANY, X86_MODEL_ANY,		X86_FEATURE_ANY, NO_MELTDOWN | NO_L1TF },
+
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SILVERMONT,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SILVERMONT_X,	X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SILVERMONT_MID,	X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_AIRMONT,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_AIRMONT_MID,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_CORE_YONAH,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_XEON_PHI_KNL,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_XEON_PHI_KNM,		X86_FEATURE_ANY, NO_SSB | NO_L1TF },
+
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_GOLDMONT,		X86_FEATURE_ANY, NO_L1TF | NO_MDS },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_GOLDMONT_X,		X86_FEATURE_ANY, NO_L1TF | NO_MDS },
+	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_GOLDMONT_PLUS,	X86_FEATURE_ANY, NO_L1TF | NO_MDS },
+
+	{ X86_VENDOR_AMD,	0x0f, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SSB },
+	{ X86_VENDOR_AMD,	0x10, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SSB },
+	{ X86_VENDOR_AMD,	0x11, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SSB },
+	{ X86_VENDOR_AMD,	0x12, X86_MODEL_ANY,			X86_FEATURE_ANY, NO_SSB },
 	{}
 };
 
-static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
-	{ X86_VENDOR_AMD },
-	{ X86_VENDOR_HYGON },
-	{}
-};
-
-/* Only list CPUs which speculate but are non susceptible to SSB */
-static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_CORE_YONAH		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
-	{ X86_VENDOR_AMD,	0x12,					},
-	{ X86_VENDOR_AMD,	0x11,					},
-	{ X86_VENDOR_AMD,	0x10,					},
-	{ X86_VENDOR_AMD,	0xf,					},
-	{}
-};
-
-static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
-	/* in addition to cpu_no_speculation */
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT_MID	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
-	{}
-};
+static bool __init cpu_matches(unsigned long which)
+{
+	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
 
-static const __initconst struct x86_cpu_id cpu_no_mds[] = {
-	/* in addition to cpu_no_speculation */
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
-	{}
-};
+	return m && !!(m->driver_data & which);
+}
 
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = 0;
 
-	if (x86_match_cpu(cpu_no_speculation))
+	if (cpu_matches(NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
@@ -1019,20 +1008,17 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
 
-	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
-	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-	    !x86_match_cpu(cpu_no_mds)) &&
-	    !(ia32_cap & ARCH_CAP_MDS_NO))
+	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO))
 		setup_force_cpu_bug(X86_BUG_MDS);
 
-	if (x86_match_cpu(cpu_no_meltdown))
+	if (cpu_matches(NO_MELTDOWN))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1041,7 +1027,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (x86_match_cpu(cpu_no_l1tf))
+	if (cpu_matches(NO_L1TF))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 02f7f96dde73..086cf1d1d71d 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -535,10 +535,8 @@ do_nmi(struct pt_regs *regs, long error_code)
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
 
-	if (!static_branch_likely(&mds_user_clear))
-		return;
 	if (user_mode(regs))
-		mds_clear_cpu_buffers();
+		mds_user_clear_cpu_buffers();
 }
 NOKPROBE_SYMBOL(do_nmi);
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index d2779f4730f5..5942060dba9a 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -372,9 +372,7 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 		 * user space exit, so a CPU buffer clear is required when
 		 * MDS mitigation is enabled.
 		 */
-		if (static_branch_unlikely(&mds_user_clear))
-			mds_clear_cpu_buffers();
-
+		mds_user_clear_cpu_buffers();
 		return;
 	}
 #endif
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 30a6bcd735ec..b33da789eb67 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6371,6 +6371,8 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
+	else if (static_branch_unlikely(&mds_user_clear))
+		mds_clear_cpu_buffers();
 
 	asm(
 		/* Store host registers */
