historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
@ 2020-01-16 22:16 ` mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 3/3] V5 more sampling fun 3 mark gross
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 21+ messages in thread
From: mark gross @ 2020-01-16 22:16 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
 (SRBDS) mitigation control.

SRBDS is an MDS-like speculative side channel that can leak bits from
the RNG across cores and threads. New microcode serializes processor
access during execution of RDRAND and RDSEED, ensuring that the shared
buffer is overwritten before it is released for reuse.

While it is present on all affected CPU models, the microcode mitigation
is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
cases where TSX is not supported or has been disabled with TSX_CTRL.

The mitigation is activated by default on affected processors and it
increases latency for RDRAND and RDSEED instructions.  Among other
effects this will reduce throughput from /dev/urandom.

* Enable administrators to turn the mitigation off when desired
  using either mitigations=off or srbds=off.
* Export vulnerability status via sysfs.
* Rename file-scoped macros so they apply to non-whitelist table
  initializations.

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 123 ++++++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  61 +++++++--
 arch/x86/kernel/cpu/cpu.h                     |   3 +
 drivers/base/cpu.c                            |   8 ++
 8 files changed, 210 insertions(+), 12 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 2e0e3b45d02a..b39531a3c5bc 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/srbds
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
 Date:		January 2018
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4d5a4fe22703..017bd682d2b9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4739,6 +4739,26 @@
 			the kernel will oops in either "warn" or "fatal"
 			mode.
 
+	srbds=		[X86,INTEL]
+			Control the Special Register Buffer
+			Data Sampling (SRBDS) mitigation.
+
+			Certain CPUs are vulnerable to an MDS-like
+			exploit which can leak bits from the random
+			number generator.
+
+			By default, this issue is mitigated by
+			microcode.  However, the microcode fix can cause
+			the RDRAND and RDSEED instructions to become
+			much slower.  Among other effects, this will
+			result in reduced throughput from /dev/urandom.
+
+			The microcode mitigation can be disabled with
+			the following option:
+
+			off:    Disable mitigation and remove
+				performance impact to RDRAND and RDSEED
+
 	srcutree.counter_wrap_check [KNL]
 			Specifies how frequently to check for
 			grace-period sequence counter wrap for the
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index db189945e9b0..02dabc9e77b0 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -362,6 +362,7 @@
 #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
 #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
 #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
+#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
 #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
 #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
@@ -407,5 +408,6 @@
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
 #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 12c9684d59ba..3efde600a674 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -128,6 +128,10 @@
 #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
 #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
 
+/* SRBDS support */
+#define MSR_IA32_MCU_OPT_CTRL		0x00000123
+#define RNGDS_MITG_DIS			BIT(0)
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c39..51cc38d95a8f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_print_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -108,6 +109,7 @@ void __init check_bugs(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	srbds_select_mitigation();
 
 	/*
 	 * As MDS and TAA mitigations are inter-related, print MDS
@@ -397,6 +399,114 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"SRBDS: " fmt
+
+enum srbds_mitigations {
+	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_UCODE_NEEDED,
+	SRBDS_MITIGATION_FULL,
+	SRBDS_NOT_AFFECTED_TSX_OFF,
+	SRBDS_HYPERVISOR,
+};
+
+static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+static const char * const srbds_strings[] = {
+	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
+	[SRBDS_MITIGATION_FULL]		= "Mitigated",
+	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
+	[SRBDS_HYPERVISOR]		= "Unknown",
+};
+
+static bool srbds_off;
+
+void update_srbds_msr(void)
+{
+	u64 mcu_ctrl;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
+		return;
+
+	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+
+	switch (srbds_mitigation) {
+	case SRBDS_MITIGATION_OFF:
+	case SRBDS_NOT_AFFECTED_TSX_OFF:
+		mcu_ctrl |= RNGDS_MITG_DIS;
+		break;
+	case SRBDS_MITIGATION_FULL:
+		mcu_ctrl &= ~RNGDS_MITG_DIS;
+		break;
+	default:
+		break;
+	}
+
+	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+}
+
+static void __init srbds_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	/*
+	 * This test relies on the CPUID values of vendor, family, model,
+	 * stepping which might not reflect the real hardware when we are
+	 * running as a guest. But VMM vendors have asked that we do this
+	 * before the X86_FEATURE_HYPERVISOR test since this provides better
+	 * guidance to users in most real situations.
+	 */
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+	/*
+	 * Check to see if this is one of the MDS_NO systems supporting
+	 * TSX that are only exposed to SRBDS when TSX is enabled.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if (ia32_cap & ARCH_CAP_MDS_NO) {
+		if (!boot_cpu_has(X86_FEATURE_RTM))
+			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
+	}
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
+			srbds_mitigation = SRBDS_HYPERVISOR;
+		return;
+	}
+
+	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
+		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
+		return;
+	}
+
+	if (cpu_mitigations_off() || srbds_off) {
+		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
+			srbds_mitigation = SRBDS_MITIGATION_OFF;
+	}
+
+	update_srbds_msr();
+	pr_info("%s\n", srbds_strings[srbds_mitigation]);
+}
+
+static int __init srbds_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off"))
+		srbds_off = true;
+
+	return 0;
+}
+early_param("srbds", srbds_parse_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -1528,6 +1638,11 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static ssize_t srbds_show_state(char *buf)
+{
+	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -1572,6 +1687,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_ITLB_MULTIHIT:
 		return itlb_multihit_show_state(buf);
 
+	case X86_BUG_SRBDS:
+		return srbds_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1618,4 +1736,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
 }
+
+ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
+}
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..db9b9fbf9c0f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,9 +1075,34 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
-static bool __init cpu_matches(unsigned long which)
+#define VULNHW_INTEL_STEPPING(model, steppings, issues)			   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
+					    INTEL_FAM6_##model, steppings, \
+					    X86_FEATURE_ANY, issues)
+
+#define VULNHW_INTEL(model, issues) X86_MATCH_INTEL_FAM6_MODEL(model, issues)
+#define SRBDS			BIT(1)
+
+/*
+ * List affected CPUs for issues that cannot be enumerated.
+ */
+static const struct x86_cpu_id affected_cpus[] __initconst = {
+	VULNHW_INTEL(IVYBRIDGE,		SRBDS),
+	VULNHW_INTEL(HASWELL,		SRBDS),
+	VULNHW_INTEL(HASWELL_L,		SRBDS),
+	VULNHW_INTEL(HASWELL_G,		SRBDS),
+	VULNHW_INTEL(BROADWELL_G,	SRBDS),
+	VULNHW_INTEL(BROADWELL,		SRBDS),
+	VULNHW_INTEL(SKYLAKE_L,		SRBDS),
+	VULNHW_INTEL(SKYLAKE,		SRBDS),
+	VULNHW_INTEL_STEPPING(KABYLAKE_L, GENMASK(0xC, 0), SRBDS), /*06_8E steppings <=C*/
+	VULNHW_INTEL_STEPPING(KABYLAKE, GENMASK(0xD, 0),   SRBDS), /*06_9E steppings <=D*/
+	{}
+};
+
+static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);
 
 	return m && !!(m->driver_data & which);
 }
@@ -1097,33 +1122,44 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(NO_SPECTRE_V2))
+	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
-	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
 	}
 
-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
+	/*
+	 * Some low-end SKUs on the affected list do not support
+	 * RDRAND or RDSEED. Make sure they show as "Not affected".
+	 */
+	if (cpu_matches(SRBDS, affected_cpus) &&
+	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)))
+		setup_force_cpu_bug(X86_BUG_SRBDS);
+
 	/*
 	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
 	 *	- TSX is supported or
@@ -1139,7 +1175,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(NO_MELTDOWN))
+	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1148,7 +1184,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
@@ -1591,6 +1627,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
+	update_srbds_msr();
 }
 
 static __init int setup_noclflush(char *arg)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 37fdefd14f28..9188ad5be4ed 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,7 +44,10 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+void update_srbds_msr(void);
+
 #ifdef CONFIG_CPU_SUP_INTEL
+
 enum tsx_ctrl_states {
 	TSX_CTRL_ENABLE,
 	TSX_CTRL_DISABLE,
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 9a1c00fbbaef..d2136ab9b14a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_srbds(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_mds.attr,
 	&dev_attr_tsx_async_abort.attr,
 	&dev_attr_itlb_multihit.attr,
+	&dev_attr_srbds.attr,
 	NULL
 };
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [MODERATED] [PATCH 3/3] V5 more sampling fun 3
  2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 2/3] V5 more sampling fun 2 mark gross
@ 2020-01-30 19:12 ` mark gross
  2020-03-17  0:56 ` [MODERATED] [PATCH 1/3] V5 more sampling fun 1 mark gross
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 21+ messages in thread
From: mark gross @ 2020-01-30 19:12 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 3/3] x86/speculation: SRBDS vulnerability and mitigation
 documentation

Add documentation for the SRBDS vulnerability and its mitigation.

Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 Documentation/admin-guide/hw-vuln/index.rst   |   2 +
 .../special-register-buffer-data-sampling.rst | 150 ++++++++++++++++++
 2 files changed, 152 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 0795e3c2643f..99d5b3244ef9 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -14,3 +14,5 @@ are configurable at compile, boot or run time.
    mds
    tsx_async_abort
    multihit.rst
+   special-register-buffer-data-sampling.rst
+
diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
new file mode 100644
index 000000000000..c9b3a6812c30
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -0,0 +1,150 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+SRBDS - Special Register Buffer Data Sampling
+=============================================
+
+SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
+infer values returned from special register accesses.  Special register
+accesses are accesses to off-core registers.  According to Intel's evaluation,
+the special register reads that have a security expectation of privacy are:
+RDRAND, RDSEED and SGX EGETKEY.
+
+When the RDRAND, RDSEED and EGETKEY instructions are used, the data is moved to
+the core through the special register mechanism, which is susceptible to MDS attacks.
+
+Affected processors
+--------------------
+Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED and
+are vulnerable to the MFBDS (Microarchitectural Fill Buffer Data Sampling)
+variant of MDS (Microarchitectural Data Sampling), or to TAA (TSX Asynchronous
+Abort) when TSX is enabled, are affected:
+
+  =============  ============  ==========================
+  common name    Family_Model  Stepping
+  =============  ============  ==========================
+  Ivybridge      06_3AH        All
+
+  Haswell        06_3CH        All
+  Haswell_L      06_45H        All
+  Haswell_G      06_46H        All
+
+  Broadwell_G    06_47H        All
+  Broadwell      06_3DH        All
+
+  Skylake_L      06_4EH        All
+  Skylake        06_5EH        All
+
+  Kabylake_L     06_8EH        <=A
+  Kabylake_L     06_8EH        0xB only if TSX is enabled
+  Kabylake_L     06_8EH        0xC only if TSX is enabled
+
+  Kabylake       06_9EH        <=B
+  Kabylake       06_9EH        0xC only if TSX is enabled
+  Kabylake       06_9EH        0xD only if TSX is enabled
+  =============  ============  ==========================
+
+Related CVEs
+------------
+
+The following CVE entry is related to this SRBDS issue:
+
+    ==============  =====  =====================================
+    CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
+    ==============  =====  =====================================
+
+Attack scenarios
+----------------
+An unprivileged user can extract returned values from RDRAND and RDSEED
+executed on another core or sibling thread using MDS techniques.
+
+
+Mitigation mechanism
+--------------------
+Intel will release microcode updates that modify the RDRAND, RDSEED, and
+EGETKEY instructions to overwrite secret special register data in the shared
+staging buffer before the secret data can be accessed by another logical
+processor.
+
+During execution of the RDRAND, RDSEED, or EGETKEY instruction, off-core
+accesses from other logical processors will be delayed until the special
+register read is complete and the secret data in the shared staging buffer is
+overwritten.
+
+This has three effects on performance:
+
+#. The RDRAND, RDSEED, and EGETKEY instructions have higher latency.
+
+#. Executing RDRAND at the same time on multiple logical processors will be
+   serialized, resulting in an overall reduction in the maximum RDRAND
+   bandwidth.
+
+#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
+   logical processors that miss their core caches, with an impact similar to
+   legacy locked cache-line-split accesses.
+
+Because of the performance impact of the mitigation, the microcode updates also
+provide an opt-out mechanism (RNGDS_MITG_DIS) to disable the mitigation for
+RDRAND and RDSEED instructions executed outside of Intel Software Guard
+Extensions (Intel SGX) enclaves. On logical processors that disable the
+mitigation using this opt-out mechanism, RDRAND and RDSEED do not take longer
+to execute and do not impact performance of sibling logical processors' memory
+accesses. The opt-out mechanism does not affect Intel SGX enclaves (including
+execution of RDRAND or RDSEED inside of an enclave, as well as EGETKEY
+execution).
+
+IA32_MCU_OPT_CTRL MSR Definition
+--------------------------------
+Along with the mitigation for this issue, Intel added a new thread-scope
+IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
+RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
+9]==1. This MSR is introduced through the microcode update.
+
+Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
+disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
+enclave on that logical processor. Opting out of the mitigation for a
+particular logical processor does not affect the RDRAND and RDSEED mitigations
+for other logical processors.
+
+Note that inside of an Intel SGX enclave, the mitigation is applied regardless
+of the value of RNGDS_MITG_DIS.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+The kernel command line allows control over the SRBDS mitigation at boot time
+with the option "srbds=".  The option for this is:
+
+  ============= =============================================================
+  off           This option disables SRBDS mitigation for RDRAND and RDSEED on
+                affected platforms.
+  ============= =============================================================
+
+SRBDS System Information
+------------------------
+The Linux kernel provides vulnerability status information through sysfs.  For
+SRBDS this can be accessed by the following sysfs file:
+/sys/devices/system/cpu/vulnerabilities/srbds
+
+The possible values contained in this file are:
+
+ ============================== =============================================
+ Not affected                   Processor not vulnerable
+ Vulnerable                     Processor vulnerable and mitigation disabled
+ Vulnerable: no microcode       Processor vulnerable and microcode is missing
+                                mitigation
+ Mitigated                      Processor is vulnerable and mitigation is in
+                                effect.
+ Not affected (TSX disabled)    Processor is only vulnerable when TSX is
+                                enabled while this system was booted with TSX
+                                disabled.
+ Unknown                        Running on virtual guest processor that is
+                                affected but with no way to know if host
+                                processor is mitigated or vulnerable.
+ ============================== =============================================
+
+SRBDS Default mitigation
+------------------------
+The new microcode serializes processor access during execution of RDRAND and
+RDSEED, ensuring that the shared buffer is overwritten before it is released
+for reuse.  Use the "srbds=off" kernel command line option to disable the
+mitigation for RDRAND and RDSEED.
+
-- 
2.17.1


* [MODERATED] [PATCH 1/3] V5 more sampling fun 1
  2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 2/3] V5 more sampling fun 2 mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 3/3] V5 more sampling fun 3 mark gross
@ 2020-03-17  0:56 ` mark gross
       [not found] ` <5e8b7166.1c69fb81.4c99a.3619SMTPIN_ADDED_BROKEN@mx.google.com>
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 21+ messages in thread
From: mark gross @ 2020-03-17  0:56 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 1/3] x86/cpu: Add stepping field to x86_cpu_id structure

Intel uses the same family/model for several CPUs. Sometimes the
stepping must be checked to tell them apart.

On x86 there can be at most 16 steppings. Add a steppings bitmask to
x86_cpu_id, an X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE macro, and
support for matching against family/model/stepping.

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/include/asm/cpu_device_id.h | 26 +++++++++++++++++++++++---
 arch/x86/kernel/cpu/match.c          |  7 ++++++-
 include/linux/mod_devicetable.h      |  2 ++
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index cf3d621c6892..d8ecd0d72e04 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -21,11 +21,12 @@
 #define X86_CENTAUR_FAM6_NANO		0xf
 
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
  * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
  * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
  * @_data:	Driver specific data or NULL. The internal storage
  *		format is unsigned long. The supplied value, pointer
@@ -37,15 +38,34 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
-					   _feature, _data) {		\
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
+						    _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor,				\
 	.family		= _family,					\
 	.model		= _model,					\
+	.steppings	= _steppings,					\
 	.feature	= _feature,					\
 	.driver_data	= (unsigned long) _data				\
 }
 
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE() is
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(vendor, family, model, \
+						X86_STEPPING_ANY, feature, data)
+
 /**
  * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
  * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index d3482eb43ff3..dac48301b5a7 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
+		if (m->steppings != X86_STEPPING_ANY &&
+				!(BIT(c->x86_stepping) & m->steppings))
+			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4c2ddd0941a7..0754b8d71262 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -663,6 +663,7 @@ struct x86_cpu_id {
 	__u16 vendor;
 	__u16 family;
 	__u16 model;
+	__u16 steppings;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
 };
@@ -671,6 +672,7 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*
-- 
2.17.1


* [MODERATED] [PATCH 0/3] V5 more sampling fun 0
@ 2020-04-06 17:52 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 2/3] V5 more sampling fun 2 mark gross
                   ` (7 more replies)
  0 siblings, 8 replies; 21+ messages in thread
From: mark gross @ 2020-04-06 17:52 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 0/3] v5 more sampling fun

This version of the Special Register Buffer Data Sampling patch set has been
updated to address review feedback.  It:
* is based on Thomas' macro clean up for x86.
* includes cleanups and changes driven by review feedback.
* handles the virtualized guest case better.

---

Special Register Buffer Data Sampling is a sampling-type vulnerability that
leaks data across cores sharing the hardware RNG on vulnerable processors.

This leak is fixed by a microcode update, and the mitigation is enabled by
default.

This new microcode serializes processor access during execution of RDRAND or
RDSEED. It ensures that the shared buffer is overwritten before it is released
for reuse.

The mitigation impacts the throughput of the RDRAND and RDSEED instructions
and the latency of RT processing running on the socket while RDRAND or RDSEED
is executing.  Micro-benchmarks that call RDRAND many times show a slowdown.

This patch set enables kernel command line control of this mitigation and
exports vulnerability and mitigation status.  It includes 3 patches:
* The first patch adds a steppings field to the x86_cpu_id structure and
  related macros.
* The second patch enables command line control of the mitigation as well
  as the sysfs export of vulnerability status.
* The third patch adds the Documentation/admin-guide/hw-vuln documentation
  for this issue and the control over the mitigation.


mark gross (3):
  x86/cpu: Add stepping field to x86_cpu_id structure
  x86/speculation: Special Register Buffer Data Sampling (SRBDS)
    mitigation control.
  x86/speculation: SRBDS vulnerability and mitigation documentation

 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   2 +
 .../special-register-buffer-data-sampling.rst | 150 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpu_device_id.h          |  26 ++-
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 123 ++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  61 +++++--
 arch/x86/kernel/cpu/cpu.h                     |   3 +
 arch/x86/kernel/cpu/match.c                   |   7 +-
 drivers/base/cpu.c                            |   8 +
 include/linux/mod_devicetable.h               |   2 +
 13 files changed, 393 insertions(+), 16 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

-- 
2.17.1

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 1/3] V5 more sampling fun 1
       [not found] ` <5e8b7166.1c69fb81.4c99a.3619SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2020-04-06 18:31   ` Kees Cook
  0 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2020-04-06 18:31 UTC (permalink / raw)
  To: speck

On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 1/3] x86/cpu: Add stepping field to x86_cpu_id structure
> 
> Intel uses the same family/model for several CPUs. Sometimes the
> stepping must be checked to tell them apart.
> 
> On x86 there can be at most 16 steppings. Add a steppings bitmask to
> x86_cpu_id, an X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE macro, and
> support for matching against family/model/stepping.
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> Reviewed-by: Tony Luck <tony.luck@intel.com>
> ---
>  arch/x86/include/asm/cpu_device_id.h | 26 +++++++++++++++++++++++---
>  arch/x86/kernel/cpu/match.c          |  7 ++++++-
>  include/linux/mod_devicetable.h      |  2 ++
>  3 files changed, 31 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
> index cf3d621c6892..d8ecd0d72e04 100644
> --- a/arch/x86/include/asm/cpu_device_id.h
> +++ b/arch/x86/include/asm/cpu_device_id.h
> @@ -21,11 +21,12 @@
>  #define X86_CENTAUR_FAM6_NANO		0xf
>  
>  /**
> - * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
> + * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
>   * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
>   *		The name is expanded to X86_VENDOR_@_vendor
>   * @_family:	The family number or X86_FAMILY_ANY
>   * @_model:	The model number, model constant or X86_MODEL_ANY
> + * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
>   * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
>   * @_data:	Driver specific data or NULL. The internal storage
>   *		format is unsigned long. The supplied value, pointer
> @@ -37,15 +38,34 @@
>   * into another macro at the usage site for good reasons, then please
>   * start this local macro with X86_MATCH to allow easy grepping.
>   */
> -#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
> -					   _feature, _data) {		\
> +#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
> +						    _steppings, _feature, _data) { \
>  	.vendor		= X86_VENDOR_##_vendor,				\
>  	.family		= _family,					\
>  	.model		= _model,					\
> +	.steppings	= _steppings,					\
>  	.feature	= _feature,					\
>  	.driver_data	= (unsigned long) _data				\
>  }
>  
> +/**
> + * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
> + * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
> + *		The name is expanded to X86_VENDOR_@_vendor
> + * @_family:	The family number or X86_FAMILY_ANY
> + * @_model:	The model number, model constant or X86_MODEL_ANY
> + * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
> + * @_data:	Driver specific data or NULL. The internal storage
> + *		format is unsigned long. The supplied value, pointer
> + *		etc. is casted to unsigned long internally.
> + *
> + * The steppings argument of X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE() is
> + * set to the wildcard X86_STEPPING_ANY.
> + */
> +#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
> +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(vendor, family, model, \
> +						X86_STEPPING_ANY, feature, data)
> +
>  /**
>   * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
>   * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
> diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
> index d3482eb43ff3..dac48301b5a7 100644
> --- a/arch/x86/kernel/cpu/match.c
> +++ b/arch/x86/kernel/cpu/match.c
> @@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
>  	const struct x86_cpu_id *m;
>  	struct cpuinfo_x86 *c = &boot_cpu_data;
>  
> -	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
> +	for (m = match;
> +	     m->vendor | m->family | m->model | m->steppings | m->feature;
> +	     m++) {
>  		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
>  			continue;
>  		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
>  			continue;
>  		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
>  			continue;
> +		if (m->steppings != X86_STEPPING_ANY &&
> +				!(BIT(c->x86_stepping) & m->steppings))
> +			continue;
>  		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
>  			continue;
>  		return m;
> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> index 4c2ddd0941a7..0754b8d71262 100644
> --- a/include/linux/mod_devicetable.h
> +++ b/include/linux/mod_devicetable.h
> @@ -663,6 +663,7 @@ struct x86_cpu_id {
>  	__u16 vendor;
>  	__u16 family;
>  	__u16 model;
> +	__u16 steppings;
>  	__u16 feature;	/* bit index */
>  	kernel_ulong_t driver_data;
>  };
> @@ -671,6 +672,7 @@ struct x86_cpu_id {
>  #define X86_VENDOR_ANY 0xffff
>  #define X86_FAMILY_ANY 0
>  #define X86_MODEL_ANY  0
> +#define X86_STEPPING_ANY 0
>  #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
>  
>  /*
> -- 
> 2.17.1

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
       [not found] ` <5e8b71d8.1c69fb81.64075.43abSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2020-04-06 18:34   ` Kees Cook
  2020-04-06 18:37     ` Greg KH
  2020-04-06 18:52     ` mark gross
  0 siblings, 2 replies; 21+ messages in thread
From: Kees Cook @ 2020-04-06 18:34 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
>  (SRBDS) mitigation control.
> 
> SRBDS is an MDS-like speculative side channel that can leak bits from
> the RNG across cores and threads. New microcode serializes the processor
> access during the execution of RDRAND and RDSEED.  This ensures that the
> shared buffer is overwritten before it is released for reuse.
> 
> While it is present on all affected CPU models, the microcode mitigation
> is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> cases where TSX is not supported or has been disabled with TSX_CTRL.
> 
> The mitigation is activated by default on affected processors and it
> increases latency for RDRAND and RDSEED instructions.  Among other
> effects this will reduce throughput from /dev/urandom.
> 
> * enable administrator to configure the mitigation off when desired
>   using either mitigations=off or srbds=off.
> * export vulnerability status via sysfs
> * rename file scoped macros to apply for non-whitelist table
>   initializations.
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  .../ABI/testing/sysfs-devices-system-cpu      |   1 +
>  .../admin-guide/kernel-parameters.txt         |  20 +++
>  arch/x86/include/asm/cpufeatures.h            |   2 +
>  arch/x86/include/asm/msr-index.h              |   4 +
>  arch/x86/kernel/cpu/bugs.c                    | 123 ++++++++++++++++++
>  arch/x86/kernel/cpu/common.c                  |  61 +++++++--
>  arch/x86/kernel/cpu/cpu.h                     |   3 +
>  drivers/base/cpu.c                            |   8 ++
>  8 files changed, 210 insertions(+), 12 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index 2e0e3b45d02a..b39531a3c5bc 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
>  		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
>  		/sys/devices/system/cpu/vulnerabilities/l1tf
>  		/sys/devices/system/cpu/vulnerabilities/mds
> +		/sys/devices/system/cpu/vulnerabilities/srbds
>  		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
>  		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
>  Date:		January 2018
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 4d5a4fe22703..017bd682d2b9 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4739,6 +4739,26 @@
>  			the kernel will oops in either "warn" or "fatal"
>  			mode.
>  
> +	srbds=		[X86,INTEL]
> +			Control mitigation for the Special Register
> +			Buffer Data Sampling (SRBDS) vulnerability.
> +
> +			Certain CPUs are vulnerable to an MDS-like
> +			exploit which can leak bits from the random
> +			number generator.
> +
> +			By default, this issue is mitigated by
> +			microcode.  However, the microcode fix can cause
> +			the RDRAND and RDSEED instructions to become
> +			much slower.  Among other effects, this will
> +			result in reduced throughput from /dev/urandom.

Is this true about /dev/urandom? I thought the RDRAND dependency
had been removed?

-Kees

> +
> +			The microcode mitigation can be disabled with
> +			the following option:
> +
> +			off:    Disable mitigation and remove
> +				performance impact to RDRAND and RDSEED
> +
>  	srcutree.counter_wrap_check [KNL]
>  			Specifies how frequently to check for
>  			grace-period sequence counter wrap for the
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index db189945e9b0..02dabc9e77b0 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -362,6 +362,7 @@
>  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
>  #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
>  #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
> +#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
>  #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
>  #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
>  #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
> @@ -407,5 +408,6 @@
>  #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
>  #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
>  #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
> +#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
>  
>  #endif /* _ASM_X86_CPUFEATURES_H */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 12c9684d59ba..3efde600a674 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -128,6 +128,10 @@
>  #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
>  #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
>  
> +/* SRBDS support */
> +#define MSR_IA32_MCU_OPT_CTRL		0x00000123
> +#define RNGDS_MITG_DIS			BIT(0)
> +
>  #define MSR_IA32_SYSENTER_CS		0x00000174
>  #define MSR_IA32_SYSENTER_ESP		0x00000175
>  #define MSR_IA32_SYSENTER_EIP		0x00000176
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index ed54b3b21c39..51cc38d95a8f 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
>  static void __init mds_select_mitigation(void);
>  static void __init mds_print_mitigation(void);
>  static void __init taa_select_mitigation(void);
> +static void __init srbds_select_mitigation(void);
>  
>  /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
>  u64 x86_spec_ctrl_base;
> @@ -108,6 +109,7 @@ void __init check_bugs(void)
>  	l1tf_select_mitigation();
>  	mds_select_mitigation();
>  	taa_select_mitigation();
> +	srbds_select_mitigation();
>  
>  	/*
>  	 * As MDS and TAA mitigations are inter-related, print MDS
> @@ -397,6 +399,114 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
>  }
>  early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
>  
> +#undef pr_fmt
> +#define pr_fmt(fmt)	"SRBDS: " fmt
> +
> +enum srbds_mitigations {
> +	SRBDS_MITIGATION_OFF,
> +	SRBDS_MITIGATION_UCODE_NEEDED,
> +	SRBDS_MITIGATION_FULL,
> +	SRBDS_NOT_AFFECTED_TSX_OFF,
> +	SRBDS_HYPERVISOR,
> +};
> +
> +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
> +	[SRBDS_MITIGATION_FULL]		= "Mitigated",
> +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> +	[SRBDS_HYPERVISOR]		= "Unknown",
> +};
> +
> +static bool srbds_off;
> +
> +void update_srbds_msr(void)
> +{
> +	u64 mcu_ctrl;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return;
> +
> +	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
> +		return;
> +
> +	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +
> +	switch (srbds_mitigation) {
> +	case SRBDS_MITIGATION_OFF:
> +	case SRBDS_NOT_AFFECTED_TSX_OFF:
> +		mcu_ctrl |= RNGDS_MITG_DIS;
> +		break;
> +	case SRBDS_MITIGATION_FULL:
> +		mcu_ctrl &= ~RNGDS_MITG_DIS;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +}
> +
> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	/*
> +	 * This test relies on the CPUID values of vendor, family, model,
> +	 * stepping which might not reflect the real hardware when we are
> +	 * running as a guest. But VMM vendors have asked that we do this
> +	 * before the X86_FEATURE_HYPERVISOR test since this provides better
> +	 * guidance to users in most real situations.
> +	 */
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> +		if (!boot_cpu_has(X86_FEATURE_RTM))
> +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> +	}
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> +			srbds_mitigation = SRBDS_HYPERVISOR;
> +		return;
> +	}
> +
> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +		return;
> +	}
> +
> +	if (cpu_mitigations_off() || srbds_off) {
> +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> +	}
> +
> +	update_srbds_msr();
> +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
> +static int __init srbds_parse_cmdline(char *str)
> +{
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "off"))
> +		srbds_off = true;
> +
> +	return 0;
> +}
> +early_param("srbds", srbds_parse_cmdline);
> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V1 : " fmt
>  
> @@ -1528,6 +1638,11 @@ static char *ibpb_state(void)
>  	return "";
>  }
>  
> +static ssize_t srbds_show_state(char *buf)
> +{
> +	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> @@ -1572,6 +1687,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  	case X86_BUG_ITLB_MULTIHIT:
>  		return itlb_multihit_show_state(buf);
>  
> +	case X86_BUG_SRBDS:
> +		return srbds_show_state(buf);
> +
>  	default:
>  		break;
>  	}
> @@ -1618,4 +1736,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
>  {
>  	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
>  }
> +
> +ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
> +}
>  #endif
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index bed0cb83fe24..db9b9fbf9c0f 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1075,9 +1075,34 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
>  	{}
>  };
>  
> -static bool __init cpu_matches(unsigned long which)
> +#define VULNHW_INTEL_STEPPING(model, steppings, issues)			   \
> +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
> +					    INTEL_FAM6_##model, steppings, \
> +					    X86_FEATURE_ANY, issues)
> +
> +#define VULNHW_INTEL(model, issues) X86_MATCH_INTEL_FAM6_MODEL(model, issues)
> +#define SRBDS			BIT(1)
> +
> +/*
> + * List of affected CPUs for issues that cannot be enumerated.
> + */
> +static const struct x86_cpu_id affected_cpus[] __initconst = {
> +	VULNHW_INTEL(IVYBRIDGE,		SRBDS),
> +	VULNHW_INTEL(HASWELL,		SRBDS),
> +	VULNHW_INTEL(HASWELL_L,		SRBDS),
> +	VULNHW_INTEL(HASWELL_G,		SRBDS),
> +	VULNHW_INTEL(BROADWELL_G,	SRBDS),
> +	VULNHW_INTEL(BROADWELL,		SRBDS),
> +	VULNHW_INTEL(SKYLAKE_L,		SRBDS),
> +	VULNHW_INTEL(SKYLAKE,		SRBDS),
> +	VULNHW_INTEL_STEPPING(KABYLAKE_L, GENMASK(0xC, 0), SRBDS), /*06_8E steppings <=C*/
> +	VULNHW_INTEL_STEPPING(KABYLAKE, GENMASK(0xD, 0),   SRBDS), /*06_9E steppings <=D*/
> +	{}
> +};
> +
> +static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>  {
> -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
> +	const struct x86_cpu_id *m = x86_match_cpu(table);
>  
>  	return m && !!(m->driver_data & which);
>  }
> @@ -1097,33 +1122,44 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	u64 ia32_cap = x86_read_arch_cap_msr();
>  
>  	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
> -	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
> +	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
> +	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
>  		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
>  
> -	if (cpu_matches(NO_SPECULATION))
> +	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
>  		return;
>  
>  	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
>  
> -	if (!cpu_matches(NO_SPECTRE_V2))
> +	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
>  		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
>  
> -	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
> -	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
> +	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
> +	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
> +	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
>  		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
>  
>  	if (ia32_cap & ARCH_CAP_IBRS_ALL)
>  		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
>  
> -	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
> +	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
> +	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
>  		setup_force_cpu_bug(X86_BUG_MDS);
> -		if (cpu_matches(MSBDS_ONLY))
> +		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
>  			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
>  	}
>  
> -	if (!cpu_matches(NO_SWAPGS))
> +	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>  
> +	/*
> +	 * Some low-end SKUs on the affected list do not support
> +	 * RDRAND or RDSEED. Make sure they show as "Not affected".
> +	 */
> +	if (cpu_matches(SRBDS, affected_cpus) &&
> +	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)))
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +
>  	/*
>  	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
>  	 *	- TSX is supported or
> @@ -1139,7 +1175,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> -	if (cpu_matches(NO_MELTDOWN))
> +	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
>  		return;
>  
>  	/* Rogue Data Cache Load? No! */
> @@ -1148,7 +1184,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  
>  	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
>  
> -	if (cpu_matches(NO_L1TF))
> +	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
>  		return;
>  
>  	setup_force_cpu_bug(X86_BUG_L1TF);
> @@ -1591,6 +1627,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
>  	mtrr_ap_init();
>  	validate_apic_and_package_id(c);
>  	x86_spec_ctrl_setup_ap();
> +	update_srbds_msr();
>  }
>  
>  static __init int setup_noclflush(char *arg)
> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 37fdefd14f28..9188ad5be4ed 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -44,7 +44,10 @@ struct _tlb_table {
>  extern const struct cpu_dev *const __x86_cpu_dev_start[],
>  			    *const __x86_cpu_dev_end[];
>  
> +void update_srbds_msr(void);
> +
>  #ifdef CONFIG_CPU_SUP_INTEL
> +
>  enum tsx_ctrl_states {
>  	TSX_CTRL_ENABLE,
>  	TSX_CTRL_DISABLE,
> diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> index 9a1c00fbbaef..d2136ab9b14a 100644
> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
>  	return sprintf(buf, "Not affected\n");
>  }
>  
> +ssize_t __weak cpu_show_srbds(struct device *dev,
> +			      struct device_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "Not affected\n");
> +}
> +
>  static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
>  static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
>  static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
> @@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
>  static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
>  static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
>  static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
> +static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
>  
>  static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_meltdown.attr,
> @@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_mds.attr,
>  	&dev_attr_tsx_async_abort.attr,
>  	&dev_attr_itlb_multihit.attr,
> +	&dev_attr_srbds.attr,
>  	NULL
>  };
>  
> -- 
> 2.17.1

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 3/3] V5 more sampling fun 3
       [not found] ` <5e8b71af.1c69fb81.d8b8.ac6bSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2020-04-06 18:34   ` Kees Cook
  0 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2020-04-06 18:34 UTC (permalink / raw)
  To: speck

On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 3/3] x86/speculation: SRBDS vulnerability and mitigation
>  documentation
> 
> Add documentation for the SRBDS vulnerability and its mitigation.
> 
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Signed-off-by: Mark Gross <mgross@linux.intel.com>

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
>  Documentation/admin-guide/hw-vuln/index.rst   |   2 +
>  .../special-register-buffer-data-sampling.rst | 150 ++++++++++++++++++
>  2 files changed, 152 insertions(+)
>  create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
> 
> diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
> index 0795e3c2643f..99d5b3244ef9 100644
> --- a/Documentation/admin-guide/hw-vuln/index.rst
> +++ b/Documentation/admin-guide/hw-vuln/index.rst
> @@ -14,3 +14,5 @@ are configurable at compile, boot or run time.
>     mds
>     tsx_async_abort
>     multihit.rst
> +   special-register-buffer-data-sampling.rst
> +
> diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
> new file mode 100644
> index 000000000000..c9b3a6812c30
> --- /dev/null
> +++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
> @@ -0,0 +1,150 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +SRBDS - Special Register Buffer Data Sampling
> +=============================================
> +
> +SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
> +infer values returned from special register accesses.  Special register
> +accesses are accesses to off core registers.  According to Intel's evaluation,
> +the special register reads that have a security expectation of privacy are:
> +RDRAND, RDSEED and SGX EGETKEY.
> +
> +When the RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
> +to the core through the special register mechanism that is susceptible to MDS
> +attacks.
> +
> +Affected processors
> +--------------------
> +Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED
> +and are vulnerable to the MFBDS (Microarchitectural Fill Buffer Data
> +Sampling) variant of MDS (Microarchitectural Data Sampling), or to TAA (TSX
> +Asynchronous Abort) when TSX is enabled:
> +
> +  =============  ============  ==========================
> +  common name    Family_Model  Stepping
> +  =============  ============  ==========================
> +  Ivybridge      06_3AH        All
> +
> +  Haswell        06_3CH        All
> +  Haswell_L      06_45H        All
> +  Haswell_G      06_46H        All
> +
> +  Broadwell_G    06_47H        All
> +  Broadwell      06_3DH        All
> +
> +  Skylake_L      06_4EH        All
> +  Skylake        06_5EH        All
> +
> +  Kabylake_L     06_8EH        <=A
> +  Kabylake_L     06_8EH        0xB only if TSX is enabled
> +  Kabylake_L     06_8EH        0xC only if TSX is enabled
> +
> +  Kabylake       06_9EH        <=B
> +  Kabylake       06_9EH        0xC only if TSX is enabled
> +  Kabylake       06_9EH        0xD only if TSX is enabled
> +  =============  ============  ==========================
> +
> +Related CVEs
> +------------
> +
> +The following CVE entry is related to this SRBDS issue:
> +
> +    ==============  =====  =====================================
> +    CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
> +    ==============  =====  =====================================
> +
> +Attack scenarios
> +----------------
> +An unprivileged user can extract returned values from RDRAND and RDSEED
> +executed on another core or sibling thread using MDS techniques.
> +
> +
> +Mitigation mechanism
> +--------------------
> +Intel will release microcode updates that modify the RDRAND, RDSEED, and
> +EGETKEY instructions to overwrite secret special register data in the shared
> +staging buffer before the secret data can be accessed by another logical
> +processor.
> +
> +During execution of the RDRAND, RDSEED, or EGETKEY instruction, off-core
> +accesses from other logical processors will be delayed until the special
> +register read is complete and the secret data in the shared staging buffer is
> +overwritten.
> +
> +This has three effects on performance:
> +
> +#. The RDRAND, RDSEED and EGETKEY instructions have higher latency.
> +
> +#. Executing RDRAND at the same time on multiple logical processors will be
> +   serialized, resulting in an overall reduction in the maximum RDRAND
> +   bandwidth.
> +
> +#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
> +   logical processors that miss their core caches, with an impact similar to
> +   legacy locked cache-line-split accesses.
> +
> +Because of the performance impact of the mitigation, the microcode updates also
> +provide an opt-out mechanism (RNGDS_MITG_DIS) to disable the mitigation for
> +RDRAND and RDSEED instructions executed outside of Intel Software Guard
> +Extensions (Intel SGX) enclaves. On logical processors that disable the
> +mitigation using this opt-out mechanism, RDRAND and RDSEED do not take longer
> +to execute and do not impact the performance of sibling logical processors'
> +memory accesses. The opt-out mechanism does not affect Intel SGX enclaves
> +(including execution of RDRAND or RDSEED inside an enclave, as well as
> +EGETKEY execution).
> +
> +IA32_MCU_OPT_CTRL MSR Definition
> +--------------------------------
> +Along with the mitigation for this issue, Intel added a new thread-scope
> +IA32_MCU_OPT_CTRL MSR, (address 0x123). The presence of this MSR and
> +RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
> +9]==1. This MSR is introduced through the microcode update.
> +
> +Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
> +disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
> +enclave on that logical processor. Opting out of the mitigation for a
> +particular logical processor does not affect the RDRAND and RDSEED mitigations
> +for other logical processors.
> +
> +Note that inside of an Intel SGX enclave, the mitigation is applied regardless
> +of the value of RNGDS_MITG_DIS.
> +
> +Mitigation control on the kernel command line
> +---------------------------------------------
> +The kernel command line allows control over the SRBDS mitigation at boot time
> +with the option "srbds=".  The option for this is:
> +
> +  ============= =============================================================
> +  off           This option disables SRBDS mitigation for RDRAND and RDSEED on
> +                affected platforms.
> +  ============= =============================================================
> +
> +SRBDS System Information
> +------------------------
> +The Linux kernel provides vulnerability status information through sysfs.  For
> +SRBDS this can be accessed by the following sysfs file:
> +/sys/devices/system/cpu/vulnerabilities/srbds
> +
> +The possible values contained in this file are:
> +
> + ============================== =============================================
> + Not affected                   Processor not vulnerable
> + Vulnerable                     Processor vulnerable and mitigation disabled
> + Vulnerable: no microcode       Processor vulnerable and microcode is missing
> +                                mitigation
> + Mitigated                      Processor is vulnerable and mitigation is in
> +                                effect.
> + Not affected (TSX disabled)    Processor is only vulnerable when TSX is
> +                                enabled, and this system was booted with TSX
> +                                disabled.
> + Unknown                        Running on a virtual guest processor that is
> +                                affected but with no way to know if the host
> +                                processor is mitigated or vulnerable.
> + ============================== =============================================
> +
> +SRBDS Default mitigation
> +------------------------
> +This new microcode serializes processor access during execution of RDRAND and
> +RDSEED, ensuring that the shared buffer is overwritten before it is released
> +for reuse.  Use the "srbds=off" kernel command line option to disable the
> +mitigation for RDRAND and RDSEED.
> +
> -- 
> 2.17.1

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 18:34   ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Kees Cook
@ 2020-04-06 18:37     ` Greg KH
  2020-04-06 20:56       ` mark gross
  2020-04-06 22:14       ` Luck, Tony
  2020-04-06 18:52     ` mark gross
  1 sibling, 2 replies; 21+ messages in thread
From: Greg KH @ 2020-04-06 18:37 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 11:34:17AM -0700, speck for Kees Cook wrote:
> > +			By default, this issue is mitigated by
> > +			microcode.  However, the microcode fix can cause
> > +			the RDRAND and RDSEED instructions to become
> > +			much slower.  Among other effects, this will
> > +			result in reduced throughput from /dev/urandom.
> 
> Is this true about /dev/urandom? I thought the RDRAND dependency
> had been removed?

That dependency will be removed in 5.7-rc1 and I will be backporting
that to the stable kernels "soon".

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 18:34   ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Kees Cook
  2020-04-06 18:37     ` Greg KH
@ 2020-04-06 18:52     ` mark gross
  1 sibling, 0 replies; 21+ messages in thread
From: mark gross @ 2020-04-06 18:52 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 11:34:17AM -0700, speck for Kees Cook wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
> >  (SRBDS) mitigation control.
> > 
> > SRBDS is an MDS-like speculative side channel that can leak bits from
> > the RNG across cores and threads. New microcode serializes the processor
> > access during the execution of RDRAND and RDSEED.  This ensures that the
> > shared buffer is overwritten before it is released for reuse.
> > 
> > While it is present on all affected CPU models, the microcode mitigation
> > is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> > cases where TSX is not supported or has been disabled with TSX_CTRL.
> > 
> > The mitigation is activated by default on affected processors and it
> > increases latency for RDRAND and RDSEED instructions.  Among other
> > effects this will reduce throughput from /dev/urandom.
> > 
> > * enable administrator to configure the mitigation off when desired
> >   using either mitigations=off or srbds=off.
> > * export vulnerability status via sysfs
> > * rename file scoped macros to apply for non-whitelist table
> >   initializations.
> > 
> > Signed-off-by: Mark Gross <mgross@linux.intel.com>
> > Reviewed-by: Tony Luck <tony.luck@intel.com>
> > Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> > ---
> >  .../ABI/testing/sysfs-devices-system-cpu      |   1 +
> >  .../admin-guide/kernel-parameters.txt         |  20 +++
> >  arch/x86/include/asm/cpufeatures.h            |   2 +
> >  arch/x86/include/asm/msr-index.h              |   4 +
> >  arch/x86/kernel/cpu/bugs.c                    | 123 ++++++++++++++++++
> >  arch/x86/kernel/cpu/common.c                  |  61 +++++++--
> >  arch/x86/kernel/cpu/cpu.h                     |   3 +
> >  drivers/base/cpu.c                            |   8 ++
> >  8 files changed, 210 insertions(+), 12 deletions(-)
> > 
> > diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> > index 2e0e3b45d02a..b39531a3c5bc 100644
> > --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> > +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> > @@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
> >  		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
> >  		/sys/devices/system/cpu/vulnerabilities/l1tf
> >  		/sys/devices/system/cpu/vulnerabilities/mds
> > +		/sys/devices/system/cpu/vulnerabilities/srbds
> >  		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> >  		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
> >  Date:		January 2018
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index 4d5a4fe22703..017bd682d2b9 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -4739,6 +4739,26 @@
> >  			the kernel will oops in either "warn" or "fatal"
> >  			mode.
> >  
> > +	srbds=		[X86,INTEL]
> > +			Control mitigation for the Special Register
> > +			Buffer Data Sampling (SRBDS) mitigation.
> > +
> > +			Certain CPUs are vulnerable to an MDS-like
> > +			exploit which can leak bits from the random
> > +			number generator.
> > +
> > +			By default, this issue is mitigated by
> > +			microcode.  However, the microcode fix can cause
> > +			the RDRAND and RDSEED instructions to become
> > +			much slower.  Among other effects, this will
> > +			result in reduced throughput from /dev/urandom.
> 
> Is this true about /dev/urandom? I thought the RDRAND dependency
> had been removed?

I'm not sure.  I traced the urandom code into a crypto lib named chacha and am
not seeing any HWRNG breadcrumbs leading to asm code invoking RDRAND or RDSEED.
I'll keep looking.  I'll be happy to remove that last sentence if it's true.

--mark

> 
> -Kees
> 
> > +
> > +			The microcode mitigation can be disabled with
> > +			the following option:
> > +
> > +			off:    Disable mitigation and remove
> > +				performance impact to RDRAND and RDSEED
> > +
> >  	srcutree.counter_wrap_check [KNL]
> >  			Specifies how frequently to check for
> >  			grace-period sequence counter wrap for the
> > diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> > index db189945e9b0..02dabc9e77b0 100644
> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -362,6 +362,7 @@
> >  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
> >  #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
> >  #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
> > +#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
> >  #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
> >  #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
> >  #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
> > @@ -407,5 +408,6 @@
> >  #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
> >  #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
> >  #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
> > +#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
> >  
> >  #endif /* _ASM_X86_CPUFEATURES_H */
> > diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> > index 12c9684d59ba..3efde600a674 100644
> > --- a/arch/x86/include/asm/msr-index.h
> > +++ b/arch/x86/include/asm/msr-index.h
> > @@ -128,6 +128,10 @@
> >  #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
> >  #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
> >  
> > +/* SRBDS support */
> > +#define MSR_IA32_MCU_OPT_CTRL		0x00000123
> > +#define RNGDS_MITG_DIS			BIT(0)
> > +
> >  #define MSR_IA32_SYSENTER_CS		0x00000174
> >  #define MSR_IA32_SYSENTER_ESP		0x00000175
> >  #define MSR_IA32_SYSENTER_EIP		0x00000176
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index ed54b3b21c39..51cc38d95a8f 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
> >  static void __init mds_select_mitigation(void);
> >  static void __init mds_print_mitigation(void);
> >  static void __init taa_select_mitigation(void);
> > +static void __init srbds_select_mitigation(void);
> >  
> >  /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
> >  u64 x86_spec_ctrl_base;
> > @@ -108,6 +109,7 @@ void __init check_bugs(void)
> >  	l1tf_select_mitigation();
> >  	mds_select_mitigation();
> >  	taa_select_mitigation();
> > +	srbds_select_mitigation();
> >  
> >  	/*
> >  	 * As MDS and TAA mitigations are inter-related, print MDS
> > @@ -397,6 +399,114 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
> >  }
> >  early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
> >  
> > +#undef pr_fmt
> > +#define pr_fmt(fmt)	"SRBDS: " fmt
> > +
> > +enum srbds_mitigations {
> > +	SRBDS_MITIGATION_OFF,
> > +	SRBDS_MITIGATION_UCODE_NEEDED,
> > +	SRBDS_MITIGATION_FULL,
> > +	SRBDS_NOT_AFFECTED_TSX_OFF,
> > +	SRBDS_HYPERVISOR,
> > +};
> > +
> > +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
> > +	[SRBDS_MITIGATION_FULL]		= "Mitigated",
> > +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> > +	[SRBDS_HYPERVISOR]		= "Unknown",
> > +};
> > +
> > +static bool srbds_off;
> > +
> > +void update_srbds_msr(void)
> > +{
> > +	u64 mcu_ctrl;
> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> > +		return;
> > +
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> > +		return;
> > +
> > +	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
> > +		return;
> > +
> > +	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> > +
> > +	switch (srbds_mitigation) {
> > +	case SRBDS_MITIGATION_OFF:
> > +	case SRBDS_NOT_AFFECTED_TSX_OFF:
> > +		mcu_ctrl |= RNGDS_MITG_DIS;
> > +		break;
> > +	case SRBDS_MITIGATION_FULL:
> > +		mcu_ctrl &= ~RNGDS_MITG_DIS;
> > +		break;
> > +	default:
> > +		break;
> > +	}
> > +
> > +	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> > +}
> > +
> > +static void __init srbds_select_mitigation(void)
> > +{
> > +	u64 ia32_cap;
> > +
> > +	/*
> > +	 * This test relies on the CPUID values of vendor, family, model,
> > +	 * stepping which might not reflect the real hardware when we are
> > +	 * running as a guest. But VMM vendors have asked that we do this
> > +	 * before the X86_FEATURE_HYPERVISOR test since this provides better
> > +	 * guidance to users in most real situations.
> > +	 */
> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> > +		return;
> > +	/*
> > +	 * Check to see if this is one of the MDS_NO systems supporting
> > +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> > +	 */
> > +	ia32_cap = x86_read_arch_cap_msr();
> > +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> > +		if (!boot_cpu_has(X86_FEATURE_RTM))
> > +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> > +	}
> > +
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > +			srbds_mitigation = SRBDS_HYPERVISOR;
> > +		return;
> > +	}
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> > +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> > +		return;
> > +	}
> > +
> > +	if (cpu_mitigations_off() || srbds_off) {
> > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> > +	}
> > +
> > +	update_srbds_msr();
> > +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> > +}
> > +
> > +static int __init srbds_parse_cmdline(char *str)
> > +{
> > +	if (!str)
> > +		return -EINVAL;
> > +
> > +	if (!strcmp(str, "off"))
> > +		srbds_off = true;
> > +
> > +	return 0;
> > +}
> > +early_param("srbds", srbds_parse_cmdline);
> > +
> >  #undef pr_fmt
> >  #define pr_fmt(fmt)     "Spectre V1 : " fmt
> >  
> > @@ -1528,6 +1638,11 @@ static char *ibpb_state(void)
> >  	return "";
> >  }
> >  
> > +static ssize_t srbds_show_state(char *buf)
> > +{
> > +	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
> > +}
> > +
> >  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
> >  			       char *buf, unsigned int bug)
> >  {
> > @@ -1572,6 +1687,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
> >  	case X86_BUG_ITLB_MULTIHIT:
> >  		return itlb_multihit_show_state(buf);
> >  
> > +	case X86_BUG_SRBDS:
> > +		return srbds_show_state(buf);
> > +
> >  	default:
> >  		break;
> >  	}
> > @@ -1618,4 +1736,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
> >  {
> >  	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
> >  }
> > +
> > +ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
> > +{
> > +	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
> > +}
> >  #endif
> > diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> > index bed0cb83fe24..db9b9fbf9c0f 100644
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -1075,9 +1075,34 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
> >  	{}
> >  };
> >  
> > -static bool __init cpu_matches(unsigned long which)
> > +#define VULNHW_INTEL_STEPPING(model, steppings, issues)			   \
> > +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
> > +					    INTEL_FAM6_##model, steppings, \
> > +					    X86_FEATURE_ANY, issues)
> > +
> > +#define VULNHW_INTEL(model, issues) X86_MATCH_INTEL_FAM6_MODEL(model, issues)
> > +#define SRBDS			BIT(1)
> > +
> > +/*
> > + * List affected CPU's for issues that cannot be enumerated.
> > + */
> > +static const struct x86_cpu_id affected_cpus[] __initconst = {
> > +	VULNHW_INTEL(IVYBRIDGE,		SRBDS),
> > +	VULNHW_INTEL(HASWELL,		SRBDS),
> > +	VULNHW_INTEL(HASWELL_L,		SRBDS),
> > +	VULNHW_INTEL(HASWELL_G,		SRBDS),
> > +	VULNHW_INTEL(BROADWELL_G,	SRBDS),
> > +	VULNHW_INTEL(BROADWELL,		SRBDS),
> > +	VULNHW_INTEL(SKYLAKE_L,		SRBDS),
> > +	VULNHW_INTEL(SKYLAKE,		SRBDS),
> > +	VULNHW_INTEL_STEPPING(KABYLAKE_L, GENMASK(0xC, 0), SRBDS), /*06_8E steppings <=C*/
> > +	VULNHW_INTEL_STEPPING(KABYLAKE, GENMASK(0xD, 0),   SRBDS), /*06_9E steppings <=D*/
> > +	{}
> > +};
> > +
> > +static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
> >  {
> > -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
> > +	const struct x86_cpu_id *m = x86_match_cpu(table);
> >  
> >  	return m && !!(m->driver_data & which);
> >  }
> > @@ -1097,33 +1122,44 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	u64 ia32_cap = x86_read_arch_cap_msr();
> >  
> >  	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
> > -	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
> > +	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
> > +	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
> >  		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
> >  
> > -	if (cpu_matches(NO_SPECULATION))
> > +	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
> >  		return;
> >  
> >  	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
> >  
> > -	if (!cpu_matches(NO_SPECTRE_V2))
> > +	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
> >  		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
> >  
> > -	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
> > -	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
> > +	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
> > +	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
> > +	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
> >  		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
> >  
> >  	if (ia32_cap & ARCH_CAP_IBRS_ALL)
> >  		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
> >  
> > -	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
> > +	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
> > +	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
> >  		setup_force_cpu_bug(X86_BUG_MDS);
> > -		if (cpu_matches(MSBDS_ONLY))
> > +		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
> >  			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
> >  	}
> >  
> > -	if (!cpu_matches(NO_SWAPGS))
> > +	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
> >  		setup_force_cpu_bug(X86_BUG_SWAPGS);
> >  
> > +	/*
> > +	 * Some low-end SKUs on the affected list do not support
> > +	 * RDRAND or RDSEED. Make sure they show as "Not affected".
> > +	 */
> > +	if (cpu_matches(SRBDS, affected_cpus) &&
> > +	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)))
> > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > +
> >  	/*
> >  	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> >  	 *	- TSX is supported or
> > @@ -1139,7 +1175,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >  		setup_force_cpu_bug(X86_BUG_TAA);
> >  
> > -	if (cpu_matches(NO_MELTDOWN))
> > +	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
> >  		return;
> >  
> >  	/* Rogue Data Cache Load? No! */
> > @@ -1148,7 +1184,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  
> >  	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
> >  
> > -	if (cpu_matches(NO_L1TF))
> > +	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
> >  		return;
> >  
> >  	setup_force_cpu_bug(X86_BUG_L1TF);
> > @@ -1591,6 +1627,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
> >  	mtrr_ap_init();
> >  	validate_apic_and_package_id(c);
> >  	x86_spec_ctrl_setup_ap();
> > +	update_srbds_msr();
> >  }
> >  
> >  static __init int setup_noclflush(char *arg)
> > diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> > index 37fdefd14f28..9188ad5be4ed 100644
> > --- a/arch/x86/kernel/cpu/cpu.h
> > +++ b/arch/x86/kernel/cpu/cpu.h
> > @@ -44,7 +44,10 @@ struct _tlb_table {
> >  extern const struct cpu_dev *const __x86_cpu_dev_start[],
> >  			    *const __x86_cpu_dev_end[];
> >  
> > +void update_srbds_msr(void);
> > +
> >  #ifdef CONFIG_CPU_SUP_INTEL
> > +
> >  enum tsx_ctrl_states {
> >  	TSX_CTRL_ENABLE,
> >  	TSX_CTRL_DISABLE,
> > diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> > index 9a1c00fbbaef..d2136ab9b14a 100644
> > --- a/drivers/base/cpu.c
> > +++ b/drivers/base/cpu.c
> > @@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
> >  	return sprintf(buf, "Not affected\n");
> >  }
> >  
> > +ssize_t __weak cpu_show_srbds(struct device *dev,
> > +			      struct device_attribute *attr, char *buf)
> > +{
> > +	return sprintf(buf, "Not affected\n");
> > +}
> > +
> >  static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
> >  static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
> >  static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
> > @@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
> >  static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
> >  static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
> >  static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
> > +static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
> >  
> >  static struct attribute *cpu_root_vulnerabilities_attrs[] = {
> >  	&dev_attr_meltdown.attr,
> > @@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
> >  	&dev_attr_mds.attr,
> >  	&dev_attr_tsx_async_abort.attr,
> >  	&dev_attr_itlb_multihit.attr,
> > +	&dev_attr_srbds.attr,
> >  	NULL
> >  };
> >  
> > -- 
> > 2.17.1
> 
> -- 
> Kees Cook

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 18:37     ` Greg KH
@ 2020-04-06 20:56       ` mark gross
  2020-04-06 22:14       ` Luck, Tony
  1 sibling, 0 replies; 21+ messages in thread
From: mark gross @ 2020-04-06 20:56 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 08:37:56PM +0200, speck for Greg KH wrote:
> On Mon, Apr 06, 2020 at 11:34:17AM -0700, speck for Kees Cook wrote:
> > > +			By default, this issue is mitigated by
> > > +			microcode.  However, the microcode fix can cause
> > > +			the RDRAND and RDSEED instructions to become
> > > +			much slower.  Among other effects, this will
> > > +			result in reduced throughput from /dev/urandom.
> > 
> > Is this true about /dev/urandom? I thought the RDRAND dependency
> > had been removed?
> 
> That dependency will be removed in 5.7-rc1 and I will be backporting
> that to the stable kernels "soon".
>
FWIW I'm backporting Thomas' X86_MATCH cleanup series to the LTS kernels
today/tomorrow.  Let me know if someone has beaten me to it.

I'm hoping to take it back to 4.4.  ATM I have v5.6.y, v5.5.y, and v5.4.y done
so far.

--mark

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
                   ` (5 preceding siblings ...)
       [not found] ` <5e8b71af.1c69fb81.d8b8.ac6bSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2020-04-06 22:07 ` Josh Poimboeuf
  2020-04-07  0:34   ` mark gross
  2020-04-08 22:14   ` mark gross
  2020-04-07 15:17 ` Thomas Gleixner
  7 siblings, 2 replies; 21+ messages in thread
From: Josh Poimboeuf @ 2020-04-06 22:07 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
>  (SRBDS) mitigation control.
> +enum srbds_mitigations {
> +	SRBDS_MITIGATION_OFF,
> +	SRBDS_MITIGATION_UCODE_NEEDED,
> +	SRBDS_MITIGATION_FULL,
> +	SRBDS_NOT_AFFECTED_TSX_OFF,
> +	SRBDS_HYPERVISOR,

These should all be prefixed with "SRBDS_MITIGATION_".

> +};
> +
> +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;

This can be static.

> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",

"no microcode" should be capitalized:

  "Vulnerable: No microcode"

> +	[SRBDS_MITIGATION_FULL]		= "Mitigated",

All the other mitigations say "Mitigation: <description of mitigation>".

So to be consistent, and to not break dumb scripts which might rely on
that, how about:

  "Mitigation: Microcode"

> +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",

Is this actually two distinct states?

1) MDS_NO parts which support TSX, where the user has disabled TSX.  In
   which case it should say:

   "Mitigation: TSX disabled"

2) MDS_NO CPUs which *don't* support TSX, which should say:

  "Not affected"

  (I presume the bug bit doesn't need to be set in this case)

> +	[SRBDS_HYPERVISOR]		= "Unknown",

A little more detail would be helpful:

  "Unknown: Dependent on hypervisor status"

> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	/*
> +	 * This test relies on the CPUID values of vendor, family, model,
> +	 * stepping which might not reflect the real hardware when we are
> +	 * running as a guest. But VMM vendors have asked that we do this
> +	 * before the X86_FEATURE_HYPERVISOR test since this provides better
> +	 * guidance to users in most real situations.
> +	 */

I wonder if this comment is really needed.  All the other mitigation
code also checks for the bug bit first, so there's nothing unusual going
on here.  But if you still think it's needed, it's fine with me.

> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;

No newline needed between the above comment and the above check.

Instead, put a newline here, before the next comment.

> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> +		if (!boot_cpu_has(X86_FEATURE_RTM))
> +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> +	}

A 'goto out' would be helpful here; then the TSX_OFF checks below aren't
needed and the flow is simplified.

> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> +			srbds_mitigation = SRBDS_HYPERVISOR;
> +		return;
> +	}
> +
> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +		return;
> +	}

This status is important and should be printed, via 'goto out'.

> +
> +	if (cpu_mitigations_off() || srbds_off) {
> +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> +	}
> +

out:

> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1075,9 +1075,34 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
>  	{}
>  };
>  
> -static bool __init cpu_matches(unsigned long which)
> +#define VULNHW_INTEL_STEPPING(model, steppings, issues)			   \
> +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
> +					    INTEL_FAM6_##model, steppings, \
> +					    X86_FEATURE_ANY, issues)
> +
> +#define VULNHW_INTEL(model, issues) X86_MATCH_INTEL_FAM6_MODEL(model, issues)
> +#define SRBDS			BIT(1)

Why not start at bit 0?

Also I don't think tabs are needed here since the value isn't being
vertically aligned with anything AFAICT.

> +
> +/*
> + * List affected CPU's for issues that cannot be enumerated.
> + */

I don't understand the comment, SRBDS seems to be enumerated above.

> +static const struct x86_cpu_id affected_cpus[] __initconst = {

cpu_vuln_blacklist?

> +	VULNHW_INTEL(IVYBRIDGE,		SRBDS),

VULNBL?

> +	VULNHW_INTEL(HASWELL,		SRBDS),
> +	VULNHW_INTEL(HASWELL_L,		SRBDS),
> +	VULNHW_INTEL(HASWELL_G,		SRBDS),
> +	VULNHW_INTEL(BROADWELL_G,	SRBDS),
> +	VULNHW_INTEL(BROADWELL,		SRBDS),
> +	VULNHW_INTEL(SKYLAKE_L,		SRBDS),
> +	VULNHW_INTEL(SKYLAKE,		SRBDS),
> +	VULNHW_INTEL_STEPPING(KABYLAKE_L, GENMASK(0xC, 0), SRBDS), /*06_8E steppings <=C*/
> +	VULNHW_INTEL_STEPPING(KABYLAKE, GENMASK(0xD, 0),   SRBDS), /*06_9E steppings <=D*/
> +	{}

This can be more legible, with aligned columns and readable comments:

static const struct x86_cpu_id affected_cpus[] __initconst = {
	VULNHW_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,	SRBDS),
	VULNHW_INTEL_STEPPING(KABYLAKE_L,	GENMASK(0xC, 0),	SRBDS), /* 06_8E steppings <= 0xC */
	VULNHW_INTEL_STEPPING(KABYLAKE,		GENMASK(0xD, 0),	SRBDS),	/* 06_9E steppings <= 0xD */
	{}
};

> +	/*
> +	 * Some low-end SKUs on the affected list do not support
> +	 * RDRAND or RDSEED. Make sure they show as "Not affected".
> +	 */
> +	if (cpu_matches(SRBDS, affected_cpus) &&
> +	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)))
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +

Might as well put this after the TAA check so it's more chronological.

> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 37fdefd14f28..9188ad5be4ed 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -44,7 +44,10 @@ struct _tlb_table {
>  extern const struct cpu_dev *const __x86_cpu_dev_start[],
>  			    *const __x86_cpu_dev_end[];
>  
> +void update_srbds_msr(void);
> +
>  #ifdef CONFIG_CPU_SUP_INTEL
> +
>  enum tsx_ctrl_states {

Unnecessary newline after the ifdef.

-- 
Josh

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 18:37     ` Greg KH
  2020-04-06 20:56       ` mark gross
@ 2020-04-06 22:14       ` Luck, Tony
  2020-04-07  7:51         ` Greg KH
  1 sibling, 1 reply; 21+ messages in thread
From: Luck, Tony @ 2020-04-06 22:14 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 08:37:56PM +0200, speck for Greg KH wrote:
> On Mon, Apr 06, 2020 at 11:34:17AM -0700, speck for Kees Cook wrote:
> > > +			By default, this issue is mitigated by
> > > +			microcode.  However, the microcode fix can cause
> > > +			the RDRAND and RDSEED instructions to become
> > > +			much slower.  Among other effects, this will
> > > +			result in reduced throughput from /dev/urandom.
> > 
> > Is this true about /dev/urandom? I thought the RDRAND dependency
> > had been removed?
> 
> That dependency will be removed in 5.7-rc1 and I will be backporting
> that to the stable kernels "soon".
> 

Is that still some patch pending for the merge window?  I see
Jason Donenfeld's patch to speed up the internal get_random_u32()
and get_random_u64().  But that patch doesn't help /dev/urandom
or the getrandom(2) syscall.

urandom_read() or SYSCALL_DEFINE3(getrandom)
    urandom_read_nowarn()
	extract_crng_user()
	    extract_crng()
		_extract_crng()
		    arch_get_random_long()

-Tony

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 22:07 ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Josh Poimboeuf
@ 2020-04-07  0:34   ` mark gross
  2020-04-07 12:39     ` Josh Poimboeuf
  2020-04-08 22:14   ` mark gross
  1 sibling, 1 reply; 21+ messages in thread
From: mark gross @ 2020-04-07  0:34 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 05:07:14PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
> >  (SRBDS) mitigation control.
> > +enum srbds_mitigations {
> > +	SRBDS_MITIGATION_OFF,
> > +	SRBDS_MITIGATION_UCODE_NEEDED,
> > +	SRBDS_MITIGATION_FULL,
> > +	SRBDS_NOT_AFFECTED_TSX_OFF,
> > +	SRBDS_HYPERVISOR,
> 
> These should all be prefixed with "SRBDS_MITIGATION_".
fixed.

> > +};
> > +
> > +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> 
> This can be static.
its static now.
> 
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
> 
> "no microcode" should be capitalized:
> 
>   "Vulnerable: No microcode"
Done

> > +	[SRBDS_MITIGATION_FULL]		= "Mitigated",
> 
> All the other mitigations say "Mitigation: <description of mitigation>".
> 
> So to be consistent, and to not break dumb scripts which might rely on
> that, how about:
> 
>   "Mitigation: Microcode"
done

> > +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> 
> Is this actually two distinct states?
> 
> 1) MDS_NO parts which support TSX, where the user has disabled TSX.  In
>    which case it should say:
> 
>    "Mitigation: TSX disabled"
> 
> 2) MDS_NO CPUs which *don't* support TSX, which should say:
> 
>   "Not affected"
> 
>   (I presume the bug bit doesn't need to be set in this case)

The white paper algorithm is simply: check whether the boot CPU is in the
list of affected processors, then check MDS_NO and TSX status, and
special-case only the parts where MDS_NO is set and TSX is off.


The situation of an MDS_NO part that has no TSX capability at all (TSX
absent, not merely disabled) is not handled, unless such parts don't exist
or such parts are not included in the affected processor list in the white
paper.

I'll check with the chip folks to see if we are exposed to this corner case.



> > +	[SRBDS_HYPERVISOR]		= "Unknown",
> 
> A little more detail would be helpful:
> 
>   "Unknown: Dependent on hypervisor status"
done

> > +static void __init srbds_select_mitigation(void)
> > +{
> > +	u64 ia32_cap;
> > +
> > +	/*
> > +	 * This test relies on the CPUID values of vendor, family, model,
> > +	 * stepping which might not reflect the real hardware when we are
> > +	 * running as a guest. But VMM vendors have asked that we do this
> > +	 * before the X86_FEATURE_HYPERVISOR test since this provides better
> > +	 * guidance to users in most real situations.
> > +	 */
> 
> I wonder if this comment is really needed.  All the other mitigation
> code also checks for the bug bit first, so there's nothing unusual going
> on here.  But if you still think it's needed, it's fine with me.
I'm happy to drop the comment.

> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> > +		return;
> 
> No newline needed between the above comment and the above check.
> 
> Instead, put a newline here, before the next comment.
> 
> > +	/*
> > +	 * Check to see if this is one of the MDS_NO systems supporting
> > +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> > +	 */
> > +	ia32_cap = x86_read_arch_cap_msr();
> > +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> > +		if (!boot_cpu_has(X86_FEATURE_RTM))
> > +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> > +	}
> 
> A 'goto out' would be helpful here; then the TSX_OFF checks below aren't
> needed and the flow is simplified.
A goto out would mess up the hypervisor check, but I'll add the goto for the
mid-function returns that set the srbds_mitigation value.

> > +
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > +			srbds_mitigation = SRBDS_HYPERVISOR;
> > +		return;
> > +	}
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> > +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> > +		return;
> > +	}
> 
> This status is important and should be printed, via 'goto out'.
done

> > +
> > +	if (cpu_mitigations_off() || srbds_off) {
> > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> > +	}
> > +
> 
> out:
done

> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -1075,9 +1075,34 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
> >  	{}
> >  };
> >  
> > -static bool __init cpu_matches(unsigned long which)
> > +#define VULNHW_INTEL_STEPPING(model, steppings, issues)			   \
> > +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
> > +					    INTEL_FAM6_##model, steppings, \
> > +					    X86_FEATURE_ANY, issues)
> > +
> > +#define VULNHW_INTEL(model, issues) X86_MATCH_INTEL_FAM6_MODEL(model, issues)
> > +#define SRBDS			BIT(1)
> 
> Why not start at bit 0?
No good reason; it's zero now.

> Also I don't think tabs are needed here since the value isn't being
> vertically aligned with anything AFAICT.
extra tabs gone.

> > +
> > +/*
> > + * List affected CPU's for issues that cannot be enumerated.
> > + */
> 
> I don't understand the comment, SRBDS seems to be enumerated above.
hmm, how about I remove the comment? 

> > +static const struct x86_cpu_id affected_cpus[] __initconst = {
> 
> cpu_vuln_blacklist?
I was trying to avoid using the string "blacklist" based on some diversity
and inclusion guidance, but I'm OK with changing the variable name to what
you recommend.

Changed to cpu_vuln_blacklist.
> 
> > +	VULNHW_INTEL(IVYBRIDGE,		SRBDS),
> 
> VULNBL?
Why not... done

> 
> > +	VULNHW_INTEL(HASWELL,		SRBDS),
> > +	VULNHW_INTEL(HASWELL_L,		SRBDS),
> > +	VULNHW_INTEL(HASWELL_G,		SRBDS),
> > +	VULNHW_INTEL(BROADWELL_G,	SRBDS),
> > +	VULNHW_INTEL(BROADWELL,		SRBDS),
> > +	VULNHW_INTEL(SKYLAKE_L,		SRBDS),
> > +	VULNHW_INTEL(SKYLAKE,		SRBDS),
> > +	VULNHW_INTEL_STEPPING(KABYLAKE_L, GENMASK(0xC, 0), SRBDS), /*06_8E steppings <=C*/
> > +	VULNHW_INTEL_STEPPING(KABYLAKE, GENMASK(0xD, 0),   SRBDS), /*06_9E steppings <=D*/
> > +	{}
> 
> This can be more legible, with aligned columns and readable comments:
> 
> static const struct x86_cpu_id affected_cpus[] __initconst = {
> 	VULNHW_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,	SRBDS),
> 	VULNHW_INTEL_STEPPING(KABYLAKE_L,	GENMASK(0xC, 0),	SRBDS), /* 06_8E steppings <= 0xC */
> 	VULNHW_INTEL_STEPPING(KABYLAKE,		GENMASK(0xD, 0),	SRBDS),	/* 06_9E steppings <= 0xD */
> 	{}
> };
done


> > +	/*
> > +	 * Some low-end SKUs on the affected list do not support
> > +	 * RDRAND or RDSEED. Make sure they show as "Not affected".
> > +	 */
> > +	if (cpu_matches(SRBDS, affected_cpus) &&
> > +	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)))
> > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > +
> 
> Might as well put this after the TAA check so it's more chronological.
done

> > diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> > index 37fdefd14f28..9188ad5be4ed 100644
> > --- a/arch/x86/kernel/cpu/cpu.h
> > +++ b/arch/x86/kernel/cpu/cpu.h
> > @@ -44,7 +44,10 @@ struct _tlb_table {
> >  extern const struct cpu_dev *const __x86_cpu_dev_start[],
> >  			    *const __x86_cpu_dev_end[];
> >  
> > +void update_srbds_msr(void);
> > +
> >  #ifdef CONFIG_CPU_SUP_INTEL
> > +
> >  enum tsx_ctrl_states {
> 
> Unnecessary newline after the ifdef.
newline removed.

--mark


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 22:14       ` Luck, Tony
@ 2020-04-07  7:51         ` Greg KH
  0 siblings, 0 replies; 21+ messages in thread
From: Greg KH @ 2020-04-07  7:51 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 03:14:13PM -0700, speck for Luck, Tony wrote:
> On Mon, Apr 06, 2020 at 08:37:56PM +0200, speck for Greg KH wrote:
> > On Mon, Apr 06, 2020 at 11:34:17AM -0700, speck for Kees Cook wrote:
> > > > +			By default, this issue is mitigated by
> > > > +			microcode.  However, the microcode fix can cause
> > > > +			the RDRAND and RDSEED instructions to become
> > > > +			much slower.  Among other effects, this will
> > > > +			result in reduced throughput from /dev/urandom.
> > > 
> > > This is this true about /dev/urandom? I thought the RDRAND dependency
> > > had been removed?
> > 
> > That dependency will be removed in 5.7-rc1 and I will be backporting
> > that to the stable kernels "soon".
> > 
> 
> Is that still some patch pending for the merge window?  I see
> Jason Donenfeld's patch to speed up the internal get_random_u32()
> and get_random_u64().  But that patch doesn't help /dev/urandom
> or the getrandom(2) syscalls.
> 
> urandom_read() or SYSCALL_DEFINE3(getrandom)
>     urandom_read_nowarn()
> 	extract_crng_user()
> 	    extract_crng()
> 		_extract_crng()
> 		    arch_get_random_long()

The commit I am referring to is 69efea712f5b ("random: always use
batched entropy for get_random_u{32,64}") which removes the direct
output of RDRAND for the internal kernel "give me some random data"
calls, which is what we need to do today.

But yes, you are right that we still hit RDRAND on the urandom_read()
path; that's just feeding the data into the pool, so we should be "safe"
from a data point of view, and you are right that it does slow things
down too.

Have you all tried running benchmarks to see if getrandom() does slow
down with this microcode change?  And if it really makes it unusable, we
should just take RDRAND out of that code path entirely, which is what I
think Jason was talking about doing anyway, but you would have to ask
him about that.

thanks,

greg k-h


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-07  0:34   ` mark gross
@ 2020-04-07 12:39     ` Josh Poimboeuf
  2020-04-08 20:26       ` mark gross
  0 siblings, 1 reply; 21+ messages in thread
From: Josh Poimboeuf @ 2020-04-07 12:39 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 05:34:56PM -0700, speck for mark gross wrote:
> On Mon, Apr 06, 2020 at 05:07:14PM -0500, speck for Josh Poimboeuf wrote:
> > > +	/*
> > > +	 * Check to see if this is one of the MDS_NO systems supporting
> > > +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> > > +	 */
> > > +	ia32_cap = x86_read_arch_cap_msr();
> > > +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> > > +		if (!boot_cpu_has(X86_FEATURE_RTM))
> > > +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> > > +	}
> > 
> > A 'goto out' would be helpful here; then the TSX_OFF checks below aren't
> > needed and the flow is simplified.
> a goto out would mess up the hypervisor check but, I'll add the goto for the
> mid function returns that set the srbds_mitigation value.

Just to clarify, I was thinking something like:

	ia32_cap = x86_read_arch_cap_msr();
	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
		srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
		goto out;
	}

As far as I can tell, that doesn't mess up the hypervisor check, since
it only sets SRBDS_HYPERVISOR if TSX_OFF isn't set.

> 
> > > +
> > > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > > +			srbds_mitigation = SRBDS_HYPERVISOR;
> > > +		return;
> > > +	}
> > > +
> > > +/*
> > > + * List affected CPU's for issues that cannot be enumerated.
> > > + */
> > 
> > I don't understand the comment, SRBDS seems to be enumerated above.
> hmm, how about I remove the comment? 

Sounds good to me.

-- 
Josh


* Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
                   ` (6 preceding siblings ...)
  2020-04-06 22:07 ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Josh Poimboeuf
@ 2020-04-07 15:17 ` Thomas Gleixner
  2020-04-08 20:33   ` [MODERATED] " mark gross
  7 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2020-04-07 15:17 UTC (permalink / raw)
  To: speck

Mark,

speck for mark gross <speck@linutronix.de> writes:
> +
> +static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>  {
> -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
> +	const struct x86_cpu_id *m = x86_match_cpu(table);

Can this and the fixup of the caller please be in a separate patch?
 
Thanks,

        tglx


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-07 12:39     ` Josh Poimboeuf
@ 2020-04-08 20:26       ` mark gross
  0 siblings, 0 replies; 21+ messages in thread
From: mark gross @ 2020-04-08 20:26 UTC (permalink / raw)
  To: speck

On Tue, Apr 07, 2020 at 07:39:55AM -0500, speck for Josh Poimboeuf wrote:
> On Mon, Apr 06, 2020 at 05:34:56PM -0700, speck for mark gross wrote:
> > On Mon, Apr 06, 2020 at 05:07:14PM -0500, speck for Josh Poimboeuf wrote:
> > > > +	/*
> > > > +	 * Check to see if this is one of the MDS_NO systems supporting
> > > > +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> > > > +	 */
> > > > +	ia32_cap = x86_read_arch_cap_msr();
> > > > +	if (ia32_cap & ARCH_CAP_MDS_NO) {
> > > > +		if (!boot_cpu_has(X86_FEATURE_RTM))
> > > > +			srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> > > > +	}
> > > 
> > > A 'goto out' would be helpful here; then the TSX_OFF checks below aren't
> > > needed and the flow is simplified.
> > a goto out would mess up the hypervisor check but, I'll add the goto for the
> > mid function returns that set the srbds_mitigation value.
> 
> Just to clarify, I was thinking something like:
> 
> 	ia32_cap = x86_read_arch_cap_msr();
> 	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
> 		srbds_mitigation = SRBDS_NOT_AFFECTED_TSX_OFF;
> 		goto out;
> 	}
> 
> As far as I can tell, that doesn't mess up the hypervisor check, since
> it only sets SRBDS_HYPERVISOR if TSX_OFF isn't set.
After looking again I agree; using the goto allows simplification of the
following if block.

> > 
> > > > +
> > > > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > > > +		if (srbds_mitigation != SRBDS_NOT_AFFECTED_TSX_OFF)
> > > > +			srbds_mitigation = SRBDS_HYPERVISOR;
> > > > +		return;
With your goto out above I can simplify this if block by removing the nested if.

Thanks!

--mark

> > > > +	}
> > > > +
> > > > +/*
> > > > + * List affected CPU's for issues that cannot be enumerated.
> > > > + */
> > > 
> > > I don't understand the comment, SRBDS seems to be enumerated above.
> > hmm, how about I remove the comment? 
> 
> Sounds good to me.
> 
> -- 
> Josh


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-07 15:17 ` Thomas Gleixner
@ 2020-04-08 20:33   ` mark gross
  2020-04-08 23:21     ` Thomas Gleixner
  0 siblings, 1 reply; 21+ messages in thread
From: mark gross @ 2020-04-08 20:33 UTC (permalink / raw)
  To: speck

On Tue, Apr 07, 2020 at 05:17:44PM +0200, speck for Thomas Gleixner wrote:
> Mark,
> 
> speck for mark gross <speck@linutronix.de> writes:
> > +
> > +static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
> >  {
> > -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
> > +	const struct x86_cpu_id *m = x86_match_cpu(table);
> 
> Can this and the fixup of the caller please be in a separate patch?

Sure, but as a standalone patch (outside the context of the SRBDS changes)
making the x86_cpu_id passed by pointer instead of a hard-coded global is a
bit unmotivated.  Is it OK if I keep it as part of the SRBDS patchset and
not post it separately to LKML?

--mark


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-06 22:07 ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Josh Poimboeuf
  2020-04-07  0:34   ` mark gross
@ 2020-04-08 22:14   ` mark gross
  2020-04-08 22:58     ` mark gross
  1 sibling, 1 reply; 21+ messages in thread
From: mark gross @ 2020-04-08 22:14 UTC (permalink / raw)
  To: speck

On Mon, Apr 06, 2020 at 05:07:14PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
> >  (SRBDS) mitigation control.
> > +enum srbds_mitigations {
> > +	SRBDS_MITIGATION_OFF,
> > +	SRBDS_MITIGATION_UCODE_NEEDED,
> > +	SRBDS_MITIGATION_FULL,
> > +	SRBDS_NOT_AFFECTED_TSX_OFF,
> > +	SRBDS_HYPERVISOR,
> 
> These should all be prefixed with "SRBDS_MITIGATION_".
> 
> > +};
> > +
> > +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> 
> This can be static.
> 
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
> 
> "no microcode" should be capitalized:
> 
>   "Vulnerable: No microcode"
> 
> > +	[SRBDS_MITIGATION_FULL]		= "Mitigated",
> 
> All the other mitigations say "Mitigation: <description of mitigation>".
> 
> So to be consistent, and to not break dumb scripts which might rely on
> that, how about:
> 
>   "Mitigation: Microcode"
> 
> > +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> 
> Is this actually two distinct states?
> 
> 1) MDS_NO parts which support TSX, where the user has disabled TSX.  In
>    which case it should say:
> 
>    "Mitigation: TSX disabled"
> 
> 2) MDS_NO CPUs which *don't* support TSX, which should say:
> 
>   "Not affected"
> 
>   (I presume the bug bit doesn't need to be set in this case)
I've been working on addressing this issue.  It is common for Intel to fuse
off features on low-end SKUs, so this could happen.  After looking into how
to disambiguate a CFL part (Family 6, Model 158, stepping 13) with TSX fused
off from a CFL part where TSX is disabled via the TSX_CTRL MSR, I'd like to
side-step the problem.

The solution is code for common.c that I have a hard time parsing even though I
wrote it.  It makes my eyes bleed.  I wonder if it would be better to simply
reword the string used for sysfs in the SRBDS_NOT_AFFECTED_TSX_OFF case to
report simply "Not affected", i.e. drop the "(TSX disabled)" part to hide the
ambiguous case?

FWIW this is what I'm thinking of adding to common.c if we choose to not hide
this ambiguity.  We are checking with the uCode folks to confirm, but
ARCH_CAP_TSX_CTRL_MSR will not be set if the part has TSX fused off, and we
think we can use that to disambiguate:

	if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
		/*
		 * Some low-end SKUs on the affected list do not support
		 * RDRAND or RDSEED. Make sure they show as "Not affected".
		 */
		if (cpu_has(c, X86_FEATURE_RDRAND) ||
		    cpu_has(c, X86_FEATURE_RDSEED)) {
			/*
			 * Some low-end SKUs of parts with F/M/S in the
			 * blacklist, which normally are only affected if TSX
			 * is enabled, could have TSX fused off.  To avoid
			 * reporting "Not affected (TSX disabled)" for those,
			 * check for !MDS_NO or an available TSX_CTRL MSR as
			 * a test for parts not fused off.
			 */
			if (!(ia32_cap & ARCH_CAP_MDS_NO) ||
			    (ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
				setup_force_cpu_bug(X86_BUG_SRBDS);
		}
	}

Another option is to add back the SRBDS_MITIGATION_NOT_AFFECTED and do this TSX
fused off check of ARCH_CAP_TSX_CTRL_MSR in bugs.c when checking for
X86_FEATURE_RTM.

What do you or others think?

--mark


* [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-08 22:14   ` mark gross
@ 2020-04-08 22:58     ` mark gross
  0 siblings, 0 replies; 21+ messages in thread
From: mark gross @ 2020-04-08 22:58 UTC (permalink / raw)
  To: speck

On Wed, Apr 08, 2020 at 03:14:19PM -0700, speck for mark gross wrote:
> On Mon, Apr 06, 2020 at 05:07:14PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > > From: mark gross <mgross@linux.intel.com>
> > > Subject: [PATCH 2/3] x86/speculation: Special Register Buffer Data Sampling
> > >  (SRBDS) mitigation control.
> > > +enum srbds_mitigations {
> > > +	SRBDS_MITIGATION_OFF,
> > > +	SRBDS_MITIGATION_UCODE_NEEDED,
> > > +	SRBDS_MITIGATION_FULL,
> > > +	SRBDS_NOT_AFFECTED_TSX_OFF,
> > > +	SRBDS_HYPERVISOR,
> > 
> > These should all be prefixed with "SRBDS_MITIGATION_".
> > 
> > > +};
> > > +
> > > +enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > 
> > This can be static.
> > 
> > > +static const char * const srbds_strings[] = {
> > > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: no microcode",
> > 
> > "no microcode" should be capitalized:
> > 
> >   "Vulnerable: No microcode"
> > 
> > > +	[SRBDS_MITIGATION_FULL]		= "Mitigated",
> > 
> > All the other mitigations say "Mitigation: <description of mitigation>".
> > 
> > So to be consistent, and to not break dumb scripts which might rely on
> > that, how about:
> > 
> >   "Mitigation: Microcode"
> > 
> > > +	[SRBDS_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> > 
> > Is this actually two distinct states?
> > 
> > 1) MDS_NO parts which support TSX, where the user has disabled TSX.  In
> >    which case it should say:
> > 
> >    "Mitigation: TSX disabled"
> > 
> > 2) MDS_NO CPUs which *don't* support TSX, which should say:
> > 
> >   "Not affected"
> > 
> >   (I presume the bug bit doesn't need to be set in this case)
> I've been working on addressing this issue.  It is common for Intel to fuse
> off features on low-end SKUs, so this could happen.  After looking into how
> to disambiguate a CFL part (Family 6, Model 158, stepping 13) with TSX fused
> off from a CFL part where TSX is disabled via the TSX_CTRL MSR, I'd like to
> side-step the problem.
> 
> The solution is code for common.c that I have a hard time parsing even though I
> wrote it.  It makes my eyes bleed.  I wonder if it would be better to simply
> reword the string used for sysfs in the SRBDS_NOT_AFFECTED_TSX_OFF case to
> report simply "Not affected", i.e. drop the "(TSX disabled)" part to hide the
> ambiguous case?
> 
> FWIW this is what I'm thinking of adding to common.c if we choose to not hide
> this ambiguity.  We are checking with the uCode folks to confirm but
> ARCH_CAP_TSX_CTRL_MSR will not be set if the part has TSX fused off and we
> think we can use that to disambiguate:
> 
> 	if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
> 		/*
> 		 * Some low-end SKUs on the affected list do not support
> 		 * RDRAND or RDSEED. Make sure they show as "Not affected".
> 		 */
> 		if (cpu_has(c, X86_FEATURE_RDRAND) ||
> 		    cpu_has(c, X86_FEATURE_RDSEED)) {
> 			/*
> 			 * Some low-end SKUs of parts with F/M/S in the
> 			 * blacklist, which normally are only affected if TSX
> 			 * is enabled, could have TSX fused off.  To avoid
> 			 * reporting "Not affected (TSX disabled)" for those,
> 			 * check for !MDS_NO or an available TSX_CTRL MSR as
> 			 * a test for parts not fused off.
> 			 */
> 			if (!(ia32_cap & ARCH_CAP_MDS_NO) ||
> 			    (ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
> 				setup_force_cpu_bug(X86_BUG_SRBDS);
> 		}
> 	}
> 
> Another option is to add back the SRBDS_MITIGATION_NOT_AFFECTED and do this TSX
> fused off check of ARCH_CAP_TSX_CTRL_MSR in bugs.c when checking for
> X86_FEATURE_RTM.
> 
> What do you or others think?
Never mind.  Tony gave me some ideas to make it easier to read.  I'll post the
updated patchset late Thursday after some testing.

thanks,

--mark


* Re: [PATCH 2/3] V5 more sampling fun 2
  2020-04-08 20:33   ` [MODERATED] " mark gross
@ 2020-04-08 23:21     ` Thomas Gleixner
  0 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2020-04-08 23:21 UTC (permalink / raw)
  To: speck

Mark,

speck for mark gross <speck@linutronix.de> writes:

> On Tue, Apr 07, 2020 at 05:17:44PM +0200, speck for Thomas Gleixner wrote:
>> Mark,
>> 
>> speck for mark gross <speck@linutronix.de> writes:
>> > +
>> > +static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>> >  {
>> > -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
>> > +	const struct x86_cpu_id *m = x86_match_cpu(table);
>> 
>> Can this and the fixup of the caller please be in a separate patch?
>
> Sure, but as a standalone patch (outside the context of the SRBDS changes)
> making the x86_cpu_id passed by pointer instead of a hard-coded global is a
> bit unmotivated.  Is it OK if I keep it as part of the SRBDS patchset and
> not post it separately to LKML?

Of course it's part of the SRBDS patch set, but it's way simpler to
review than this all-in-one combo change.

Thanks,

        tglx


end of thread, other threads:[~2020-04-08 23:21 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-06 17:52 [MODERATED] [PATCH 0/3] V5 more sampling fun 0 mark gross
2020-01-16 22:16 ` [MODERATED] [PATCH 2/3] V5 more sampling fun 2 mark gross
2020-01-30 19:12 ` [MODERATED] [PATCH 3/3] V5 more sampling fun 3 mark gross
2020-03-17  0:56 ` [MODERATED] [PATCH 1/3] V5 more sampling fun 1 mark gross
     [not found] ` <5e8b7166.1c69fb81.4c99a.3619SMTPIN_ADDED_BROKEN@mx.google.com>
2020-04-06 18:31   ` [MODERATED] " Kees Cook
     [not found] ` <5e8b71d8.1c69fb81.64075.43abSMTPIN_ADDED_BROKEN@mx.google.com>
2020-04-06 18:34   ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Kees Cook
2020-04-06 18:37     ` Greg KH
2020-04-06 20:56       ` mark gross
2020-04-06 22:14       ` Luck, Tony
2020-04-07  7:51         ` Greg KH
2020-04-06 18:52     ` mark gross
     [not found] ` <5e8b71af.1c69fb81.d8b8.ac6bSMTPIN_ADDED_BROKEN@mx.google.com>
2020-04-06 18:34   ` [MODERATED] Re: [PATCH 3/3] V5 more sampling fun 3 Kees Cook
2020-04-06 22:07 ` [MODERATED] Re: [PATCH 2/3] V5 more sampling fun 2 Josh Poimboeuf
2020-04-07  0:34   ` mark gross
2020-04-07 12:39     ` Josh Poimboeuf
2020-04-08 20:26       ` mark gross
2020-04-08 22:14   ` mark gross
2020-04-08 22:58     ` mark gross
2020-04-07 15:17 ` Thomas Gleixner
2020-04-08 20:33   ` [MODERATED] " mark gross
2020-04-08 23:21     ` Thomas Gleixner
