historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH 0/4] V8 more sampling fun 0
@ 2020-04-16  0:14 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
                   ` (8 more replies)
  0 siblings, 9 replies; 33+ messages in thread
From: mark gross @ 2020-04-16  0:14 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 0/4] V8 more sampling fun

This version implements cleanups for readability and aligns with the Intel
white paper on the SRBDS topic as it is finalized.  Thanks to everyone who has
given me valuable feedback throughout this process.

I've included a branch diff between the previous patch set and this one.

---

Special Register Buffer Data Sampling (SRBDS) is a sampling vulnerability
that, on affected processors, leaks data across cores sharing the HW-RNG.

The leak is fixed by a microcode update, and the mitigation is enabled by default.

This new microcode serializes processor access during execution of RDRAND or
RDSEED. It ensures that the shared buffer is overwritten before it is released
for reuse.

The mitigation reduces the throughput of the RDRAND and RDSEED instructions
and increases the latency of RT processing running on the same socket while
RDRAND or RDSEED executes.  Micro-benchmarks that call RDRAND many times show
a measurable slowdown.

This patch set enables kernel command line control of this mitigation and
exports vulnerability and mitigation status.
This patch set includes 4 patches:
* The first patch adds a steppings field to the x86_cpu_id structure and
  related macros.
* The second patch cleans up cpu_matches().
* The third patch enables command line control of the mitigation as well
  as the sysfs export of vulnerability status.
* The fourth patch adds the Documentation/admin-guide/hw-vuln documentation
  for this issue and the control over the mitigation.


mark gross (4):
  x86/cpu: Add stepping field to x86_cpu_id structure
  x86/cpu: clean up cpu_matches
  x86/speculation: Special Register Buffer Data Sampling (SRBDS)
    mitigation control.
  x86/speculation: SRBDS vulnerability and mitigation documentation

 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../special-register-buffer-data-sampling.rst | 147 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpu_device_id.h          |  27 +++-
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 105 +++++++++++++
 arch/x86/kernel/cpu/common.c                  |  81 ++++++++--
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 arch/x86/kernel/cpu/match.c                   |   7 +-
 drivers/base/cpu.c                            |   8 +
 include/linux/mod_devicetable.h               |   2 +
 13 files changed, 392 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

-- 
2.17.1

diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
index 9f1ee4064fcd..b3cb73668b08 100644
--- a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -14,16 +14,19 @@ core through the special register mechanism that is susceptible to MDS attacks.
 
 Affected processors
 --------------------
-Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED and
-are vulnerable to MFBDS (Micro architectural Fill Buffer Data Sampling) variant
-of MDS (Micro architectural Data Sampling) or to the TAA (TSX Asynchronous
-Abort) when TSX is enabled,
+Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
+be affected.
+
+A processor is affected by SRBDS if its Family_Model and stepping are listed
+below, with the exception of listed processors that export MDS_NO while Intel
+TSX is available but not enabled.  That class of processors is affected only
+when Intel TSX is enabled by software via TSX_CTRL_MSR; otherwise they are
+not affected.
 
-  =============  ============  ==========================
-  common name    Family_Model  Stepping
-  =============  ============  ==========================
-  Ivybridge      06_3AH        All
 
+  =============  ============  ========
+  common name    Family_Model  Stepping
+  =============  ============  ========
   Haswell        06_3CH        All
   Haswell_L      06_45H        All
   Haswell_G      06_46H        All
@@ -34,14 +37,10 @@ Abort) when TSX is enabled,
   Skylake_L      06_4EH        All
   Skylake        06_5EH        All
 
-  Kabylake_L     06_8EH        <=A
-  Kabylake_L     06_8EH        0xB only if TSX is enabled
-  Kabylake_L     06_8EH        0xC only if TSX is enabled
+  Kabylake_L     06_8EH        <=0xC
 
-  Kabylake       06_9EH        <=B
-  Kabylake       06_9EH        0xC only if TSX is enabled
-  Kabylake       06_9EH        0xD only if TSX is enabled
-  =============  ============  ==========================
+  Kabylake       06_9EH        <=0xD
+  =============  ============  ========
 
 Related CVEs
 ------------
diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index 4f0df2e46c95..10426cd56dca 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -22,7 +22,7 @@
 
 #define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
@@ -39,7 +39,7 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
 						    _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor,				\
 	.family		= _family,					\
@@ -50,7 +50,7 @@
 }
 
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
@@ -60,11 +60,11 @@
  *		format is unsigned long. The supplied value, pointer
  *		etc. is casted to unsigned long internally.
  *
- * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE() is
+ * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
  * set to wildcards.
  */
 #define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
-	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(vendor, family, model, \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
 						X86_STEPPING_ANY, feature, data)
 
 /**
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index addef92109fe..f345c24f85d1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -406,17 +406,17 @@ enum srbds_mitigations {
 	SRBDS_MITIGATION_OFF,
 	SRBDS_MITIGATION_UCODE_NEEDED,
 	SRBDS_MITIGATION_FULL,
-	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
+	SRBDS_MITIGATION_TSX_OFF,
 	SRBDS_MITIGATION_HYPERVISOR,
 };
 
 static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
 static const char * const srbds_strings[] = {
-	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
-	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
-	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",
-	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
-	[SRBDS_MITIGATION_HYPERVISOR]		= "Unknown: Dependent on hypervisor status",
+	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
+	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
+	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
+	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
 };
 
 static bool srbds_off;
@@ -438,7 +438,7 @@ void update_srbds_msr(void)
 
 	switch (srbds_mitigation) {
 	case SRBDS_MITIGATION_OFF:
-	case SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF:
+	case SRBDS_MITIGATION_TSX_OFF:
 		mcu_ctrl |= RNGDS_MITG_DIS;
 		break;
 	case SRBDS_MITIGATION_FULL:
@@ -462,26 +462,15 @@ static void __init srbds_select_mitigation(void)
 	 * TSX that are only exposed to SRBDS when TSX is enabled.
 	 */
 	ia32_cap = x86_read_arch_cap_msr();
-	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
-		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
-		goto out;
-	}
-
-	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
+		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
+	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
 		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
-		goto out;
-	}
-
-	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
+	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
 		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
-		goto out;
-	}
+	else if (cpu_mitigations_off() || srbds_off)
+		srbds_mitigation = SRBDS_MITIGATION_OFF;
 
-	if (cpu_mitigations_off() || srbds_off) {
-		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
-			srbds_mitigation = SRBDS_MITIGATION_OFF;
-	}
-out:
 	update_srbds_msr();
 	pr_info("%s\n", srbds_strings[srbds_mitigation]);
 }
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2c9be1fd3c72..39d3fb4292f9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1076,12 +1076,11 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 };
 
 #define VULNBL_INTEL_STEPPING(model, steppings, issues)			   \
-	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
 					    INTEL_FAM6_##model, steppings, \
 					    X86_FEATURE_ANY, issues)
 
 #define SRBDS		BIT(0)
-#define SRBDS_IF_TSX	BIT(1)
 
 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
@@ -1092,10 +1091,8 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
 	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
 	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
-	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
+	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xD),	SRBDS),
 	{}
 };
 
@@ -1116,6 +1113,20 @@ u64 x86_read_arch_cap_msr(void)
 	return ia32_cap;
 }
 
+static bool tsx_fused_off(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * When running with up-to-date microcode TSX_CTRL is only enumerated
+	 * on parts where TSX is fused on.  When running with microcode not
+	 * supporting TSX_CTRL we check for RTM.
+	 */
+
+	return !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR) &&
+		 !cpu_has(c, X86_FEATURE_RTM);
+}
+
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
@@ -1166,34 +1177,26 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
-		/*
-		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
-		 * they show as "Not affected".
-		 */
-		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
-		    !cpu_has(c, X86_FEATURE_RDSEED))
-			goto srbds_not_affected;
-		/*
-		 * Parts in the blacklist that enumerate MDS_NO are only
-		 * vulneralbe if TSX can be used.  To handle cases where TSX
-		 * gets fused off check to see if TSX is fused off and thus not
-		 * affected.
-		 *
-		 * When running with up to day microcode TSX_CTRL is only
-		 * enumerated on parts where TSX fused on.
-		 * When running with microcode not supporting TSX_CTRL we check
-		 * for RTM
-		 */
-		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
-		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
-		      cpu_has(c, X86_FEATURE_RTM)))
-			goto srbds_not_affected;
-
-		setup_force_cpu_bug(X86_BUG_SRBDS);
+	/*
+	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
+	 * they show as "Not affected".
+	 */
+	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
+		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
+			/*
+			 * Parts in the blacklist that enumerate MDS_NO are
+			 * only vulnerable if TSX can be used.  To handle cases
+			 * where TSX gets fused off check to see if TSX is
+			 * fused off and thus not affected.
+			 */
+			if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
+				goto srbds_not_affected;
+
+			setup_force_cpu_bug(X86_BUG_SRBDS);
+		}
 	}
-srbds_not_affected:
 
+srbds_not_affected:
 	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 



Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V8 more sampling fun 4 mark gross
2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V8 more sampling fun 1 mark gross
2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V8 more sampling fun 2 mark gross
2020-04-16 17:15 ` [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1 Josh Poimboeuf
2020-04-16 17:30   ` Borislav Petkov
2020-04-16 17:16 ` [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2 Josh Poimboeuf
2020-04-16 17:33   ` [MODERATED] " Borislav Petkov
2020-04-16 22:47     ` mark gross
2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
2020-04-16 17:44   ` Borislav Petkov
2020-04-16 18:01     ` Josh Poimboeuf
2020-04-16 22:45       ` mark gross
2020-04-16 22:57     ` mark gross
2020-04-17 12:34     ` Thomas Gleixner
2020-04-17 13:19       ` [MODERATED] " Josh Poimboeuf
2020-04-17 16:46         ` Luck, Tony
2020-04-17 19:22         ` Thomas Gleixner
2020-04-16 22:54   ` [MODERATED] " mark gross
2020-04-16 17:20 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Josh Poimboeuf
2020-04-16 17:49   ` [MODERATED] " Borislav Petkov
2020-04-16 22:57     ` mark gross
2020-04-20 14:30     ` mark gross
2020-04-20 16:17       ` Thomas Gleixner
2020-04-20 22:30         ` [MODERATED] " mark gross
2020-04-20 21:45       ` Slow Randomizing Boosts Denial of Service - Bulletin #1 Thomas Gleixner
2020-04-23 21:35         ` [MODERATED] " mark gross
2020-04-24  7:01           ` Greg KH
2020-04-27 15:10             ` mark gross
2020-04-21 17:30 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Borislav Petkov
2020-04-21 17:34   ` Andrew Cooper
2020-04-21 18:19     ` Borislav Petkov
