historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH 3/4] V8 more sampling fun 3
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
@ 2020-01-16 22:16 ` mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V8 more sampling fun 4 mark gross
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-01-16 22:16 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
 (SRBDS) mitigation control.

SRBDS is an MDS-like speculative side channel that can leak bits from
the RNG across cores and threads. New microcode serializes processor
access during the execution of RDRAND and RDSEED.  This ensures that the
shared buffer is overwritten before it is released for reuse.

While it is present on all affected CPU models, the microcode mitigation
is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
cases where TSX is not supported or has been disabled with TSX_CTRL.

The mitigation is activated by default on affected processors and it
increases latency for RDRAND and RDSEED instructions.  Among other
effects this will reduce throughput from /dev/urandom.

* enable the administrator to disable the mitigation when desired,
  using either mitigations=off or srbds=off.
* export vulnerability status via sysfs.
* rename file-scoped macros so they apply to non-whitelist table
  initializations.

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 .../admin-guide/kernel-parameters.txt         |  20 ++++
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 105 ++++++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  56 ++++++++++
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 drivers/base/cpu.c                            |   8 ++
 8 files changed, 198 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 2e0e3b45d02a..b39531a3c5bc 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/srbds
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
 Date:		January 2018
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f2a93c8679e8..6bc125ff09da 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4757,6 +4757,26 @@
 			the kernel will oops in either "warn" or "fatal"
 			mode.
 
+	srbds=		[X86,INTEL]
+			Control the Special Register Buffer Data
+			Sampling (SRBDS) mitigation.
+
+			Certain CPUs are vulnerable to an MDS-like
+			exploit which can leak bits from the random
+			number generator.
+
+			By default, this issue is mitigated by
+			microcode.  However, the microcode fix can cause
+			the RDRAND and RDSEED instructions to become
+			much slower.  Among other effects, this will
+			result in reduced throughput from /dev/urandom.
+
+			The microcode mitigation can be disabled with
+			the following option:
+
+			off:    Disable mitigation and remove
+				performance impact to RDRAND and RDSEED
+
 	srcutree.counter_wrap_check [KNL]
 			Specifies how frequently to check for
 			grace-period sequence counter wrap for the
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index db189945e9b0..02dabc9e77b0 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -362,6 +362,7 @@
 #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
 #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
 #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
+#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
 #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
 #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
@@ -407,5 +408,6 @@
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
 #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 12c9684d59ba..3efde600a674 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -128,6 +128,10 @@
 #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
 #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
 
+/* SRBDS support */
+#define MSR_IA32_MCU_OPT_CTRL		0x00000123
+#define RNGDS_MITG_DIS			BIT(0)
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c39..f345c24f85d1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_print_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -108,6 +109,7 @@ void __init check_bugs(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	srbds_select_mitigation();
 
 	/*
 	 * As MDS and TAA mitigations are inter-related, print MDS
@@ -397,6 +399,96 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"SRBDS: " fmt
+
+enum srbds_mitigations {
+	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_UCODE_NEEDED,
+	SRBDS_MITIGATION_FULL,
+	SRBDS_MITIGATION_TSX_OFF,
+	SRBDS_MITIGATION_HYPERVISOR,
+};
+
+static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+static const char * const srbds_strings[] = {
+	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
+	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
+	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
+	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
+};
+
+static bool srbds_off;
+
+void update_srbds_msr(void)
+{
+	u64 mcu_ctrl;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
+		return;
+
+	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+
+	switch (srbds_mitigation) {
+	case SRBDS_MITIGATION_OFF:
+	case SRBDS_MITIGATION_TSX_OFF:
+		mcu_ctrl |= RNGDS_MITG_DIS;
+		break;
+	case SRBDS_MITIGATION_FULL:
+		mcu_ctrl &= ~RNGDS_MITG_DIS;
+		break;
+	default:
+		break;
+	}
+
+	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+}
+
+static void __init srbds_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+	/*
+	 * Check to see if this is one of the MDS_NO systems supporting
+	 * TSX that are only exposed to SRBDS when TSX is enabled.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
+		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
+	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
+	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
+		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
+	else if (cpu_mitigations_off() || srbds_off)
+		srbds_mitigation = SRBDS_MITIGATION_OFF;
+
+	update_srbds_msr();
+	pr_info("%s\n", srbds_strings[srbds_mitigation]);
+}
+
+static int __init srbds_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return 0;
+
+	srbds_off = !strcmp(str, "off");
+
+	return 0;
+}
+early_param("srbds", srbds_parse_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -1528,6 +1620,11 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static ssize_t srbds_show_state(char *buf)
+{
+	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -1572,6 +1669,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_ITLB_MULTIHIT:
 		return itlb_multihit_show_state(buf);
 
+	case X86_BUG_SRBDS:
+		return srbds_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1618,4 +1718,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
 }
+
+ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
+}
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2bea1cc8dcb4..39d3fb4292f9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,6 +1075,27 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
+#define VULNBL_INTEL_STEPPING(model, steppings, issues)			   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
+					    INTEL_FAM6_##model, steppings, \
+					    X86_FEATURE_ANY, issues)
+
+#define SRBDS		BIT(0)
+
+static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xD),	SRBDS),
+	{}
+};
+
 static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
 	const struct x86_cpu_id *m = x86_match_cpu(table);
@@ -1092,6 +1113,20 @@ u64 x86_read_arch_cap_msr(void)
 	return ia32_cap;
 }
 
+static bool tsx_fused_off(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * When running with up-to-date microcode TSX_CTRL is only enumerated
+	 * on parts where TSX is fused on.  When running with microcode not
+	 * supporting TSX_CTRL we check for RTM.
+	 */
+
+	return !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR) &&
+		 !cpu_has(c, X86_FEATURE_RTM);
+}
+
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
@@ -1142,6 +1177,26 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
+	/*
+	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
+	 * they show as "Not affected".
+	 */
+	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
+		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
+			/*
+			 * Parts in the blacklist that enumerate MDS_NO are
+			 * only vulnerable if TSX can be used.  To handle cases
+			 * where TSX gets fused off check to see if TSX is
+			 * fused off and thus not affected.
+			 */
+			if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
+				goto srbds_not_affected;
+
+			setup_force_cpu_bug(X86_BUG_SRBDS);
+		}
+	}
+
+srbds_not_affected:
 	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
@@ -1594,6 +1649,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
+	update_srbds_msr();
 }
 
 static __init int setup_noclflush(char *arg)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 37fdefd14f28..f3e2fc44dba0 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,8 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+void update_srbds_msr(void);
+
 #ifdef CONFIG_CPU_SUP_INTEL
 enum tsx_ctrl_states {
 	TSX_CTRL_ENABLE,
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 9a1c00fbbaef..d2136ab9b14a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_srbds(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_mds.attr,
 	&dev_attr_tsx_async_abort.attr,
 	&dev_attr_itlb_multihit.attr,
+	&dev_attr_srbds.attr,
 	NULL
 };
 
-- 
2.17.1

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] [PATCH 4/4] V8 more sampling fun 4
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
@ 2020-01-30 19:12 ` mark gross
  2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V8 more sampling fun 1 mark gross
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-01-30 19:12 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 4/4] x86/speculation: SRBDS vulnerability and mitigation
 documentation

Add documentation for the SRBDS vulnerability and its mitigation.

Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../special-register-buffer-data-sampling.rst | 147 ++++++++++++++++++
 2 files changed, 148 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 0795e3c2643f..ca4dbdd9016d 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
    mds
    tsx_async_abort
    multihit.rst
+   special-register-buffer-data-sampling.rst
diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
new file mode 100644
index 000000000000..b3cb73668b08
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -0,0 +1,147 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+SRBDS - Special Register Buffer Data Sampling
+=============================================
+
+SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
+infer values returned from special register accesses.  Special register
+accesses are accesses to off-core registers.  According to Intel's evaluation,
+the special register reads that have a security expectation of privacy are:
+RDRAND, RDSEED and SGX EGETKEY.
+
+When the RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
+to the core through a special register mechanism that is susceptible to MDS attacks.
+
+Affected processors
+--------------------
+Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
+be affected.
+
+A processor is affected by SRBDS if its Family_Model and stepping are in the
+following list.  Processors in the list that export MDS_NO are a special
+case: they are only affected when Intel TSX is enabled by software using the
+TSX_CTRL MSR.  When TSX is disabled or fused off on such parts, they are
+not affected.
+
+
+  =============  ============  ========
+  common name    Family_Model  Stepping
+  =============  ============  ========
+  IvyBridge      06_3AH        All
+  Haswell        06_3CH        All
+  Haswell_L      06_45H        All
+  Haswell_G      06_46H        All
+
+  Broadwell_G    06_47H        All
+  Broadwell      06_3DH        All
+
+  Skylake_L      06_4EH        All
+  Skylake        06_5EH        All
+
+  Kabylake_L     06_8EH        <=0xC
+  Kabylake       06_9EH        <=0xD
+  =============  ============  ========
+
+Related CVEs
+------------
+
+The following CVE entry is related to this SRBDS issue:
+
+    ==============  =====  =====================================
+    CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
+    ==============  =====  =====================================
+
+Attack scenarios
+----------------
+An unprivileged user can extract returned values from RDRAND and RDSEED
+executed on another core or sibling thread using MDS techniques.
+
+
+Mitigation mechanism
+--------------------
+Intel will release microcode updates that modify the RDRAND, RDSEED, and
+EGETKEY instructions to overwrite secret special register data in the shared
+staging buffer before the secret data can be accessed by another logical
+processor.
+
+During execution of the RDRAND, RDSEED, or EGETKEY instruction, off-core
+accesses from other logical processors will be delayed until the special
+register read is complete and the secret data in the shared staging buffer is
+overwritten.
+
+This has three effects on performance:
+
+#. The RDRAND, RDSEED and EGETKEY instructions have higher latency.
+
+#. Executing RDRAND at the same time on multiple logical processors will be
+   serialized, resulting in an overall reduction in the maximum RDRAND
+   bandwidth.
+
+#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
+   logical processors that miss their core caches, with an impact similar to
+   legacy locked cache-line-split accesses.
+
+The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
+the mitigation for RDRAND and RDSEED instructions executed outside of Intel
+Software Guard Extensions (Intel SGX) enclaves. On logical processors that
+disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
+take longer to execute and do not impact the memory accesses of sibling
+logical processors. The opt-out mechanism does not affect Intel SGX
+enclaves (including execution of RDRAND or RDSEED inside of an enclave, as well
+as EGETKEY execution).
+
+IA32_MCU_OPT_CTRL MSR Definition
+--------------------------------
+Along with the mitigation for this issue, Intel added a new thread-scope
+IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and of its
+RNGDS_MITG_DIS control (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0):EDX
+bit 9 (SRBDS_CTRL). This MSR is introduced through the microcode update.
+
+Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
+disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
+enclave on that logical processor. Opting out of the mitigation for a
+particular logical processor does not affect the RDRAND and RDSEED mitigations
+for other logical processors.
+
+Note that inside of an Intel SGX enclave, the mitigation is applied regardless
+of the value of RNGDS_MITG_DIS.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+The kernel command line allows control over the SRBDS mitigation at boot time
+with the option "srbds=".  The option for this is:
+
+  ============= =============================================================
+  off           This option disables SRBDS mitigation for RDRAND and RDSEED on
+                affected platforms.
+  ============= =============================================================
+
+SRBDS System Information
+------------------------
+The Linux kernel provides vulnerability status information through sysfs.  For
+SRBDS this can be accessed by the following sysfs file:
+/sys/devices/system/cpu/vulnerabilities/srbds
+
+The possible values contained in this file are:
+
+ ============================== =============================================
+ Not affected                   Processor not vulnerable
+ Vulnerable                     Processor vulnerable and mitigation disabled
+ Vulnerable: No microcode       Processor vulnerable and microcode is missing
+                                mitigation
+ Mitigated: Microcode           Processor is vulnerable and mitigation is in
+                                effect.
+ Mitigated: TSX disabled        Processor is only vulnerable when TSX is
+                                enabled while this system was booted with TSX
+                                disabled.
+ Unknown: Dependent on          Running on a virtual guest processor that is
+ hypervisor status              affected but where it cannot be determined
+                                whether the host is mitigated or vulnerable.
+ ============================== =============================================
+
+SRBDS Default mitigation
+------------------------
+The new microcode serializes processor access during the execution of RDRAND
+and RDSEED, which ensures that the shared buffer is overwritten before it is
+released for reuse.  Use the "srbds=off" kernel command line option to
+disable the mitigation for RDRAND and RDSEED.
-- 
2.17.1


* [MODERATED] [PATCH 1/4] V8 more sampling fun 1
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V8 more sampling fun 4 mark gross
@ 2020-03-17  0:56 ` mark gross
  2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V8 more sampling fun 2 mark gross
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-03-17  0:56 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure

Intel uses the same family/model for several CPUs. Sometimes the
stepping must be checked to tell them apart.

On x86 there can be at most 16 steppings. Add a steppings bitmask to
x86_cpu_id, an X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE macro, and
support for matching against family/model/stepping.

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/include/asm/cpu_device_id.h | 27 ++++++++++++++++++++++++---
 arch/x86/kernel/cpu/match.c          |  7 ++++++-
 include/linux/mod_devicetable.h      |  2 ++
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index cf3d621c6892..10426cd56dca 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -20,12 +20,14 @@
 #define X86_CENTAUR_FAM6_C7_D		0xd
 #define X86_CENTAUR_FAM6_NANO		0xf
 
+#define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
  * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
  * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
  * @_data:	Driver specific data or NULL. The internal storage
  *		format is unsigned long. The supplied value, pointer
@@ -37,15 +39,34 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
-					   _feature, _data) {		\
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
+						    _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor,				\
 	.family		= _family,					\
 	.model		= _model,					\
+	.steppings	= _steppings,					\
 	.feature	= _feature,					\
 	.driver_data	= (unsigned long) _data				\
 }
 
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * The steppings argument of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
+ * set to the stepping wildcard.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
+						X86_STEPPING_ANY, feature, data)
+
 /**
  * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
  * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index d3482eb43ff3..ad6776081e60 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
+		if (m->steppings != X86_STEPPING_ANY &&
+		    !(BIT(c->x86_stepping) & m->steppings))
+			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4c2ddd0941a7..0754b8d71262 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -663,6 +663,7 @@ struct x86_cpu_id {
 	__u16 vendor;
 	__u16 family;
 	__u16 model;
+	__u16 steppings;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
 };
@@ -671,6 +672,7 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*
-- 
2.17.1


* [MODERATED] [PATCH 2/4] V8 more sampling fun 2
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (2 preceding siblings ...)
  2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V8 more sampling fun 1 mark gross
@ 2020-03-17  0:56 ` mark gross
  2020-04-16 17:15 ` [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1 Josh Poimboeuf
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-03-17  0:56 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 2/4] x86/cpu: clean up cpu_matches

To make cpu_matches() reusable for alternative matching tables, pass the
x86_cpu_id table to check as a parameter.

Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 arch/x86/kernel/cpu/common.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..2bea1cc8dcb4 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,9 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
-static bool __init cpu_matches(unsigned long which)
+static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);
 
 	return m && !!(m->driver_data & which);
 }
@@ -1097,31 +1097,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(NO_SPECTRE_V2))
+	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
 	}
 
-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
 	/*
@@ -1139,7 +1142,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(NO_MELTDOWN))
+	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1148,7 +1151,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] [PATCH 0/4] V8 more sampling fun 0
@ 2020-04-16  0:14 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
                   ` (8 more replies)
  0 siblings, 9 replies; 33+ messages in thread
From: mark gross @ 2020-04-16  0:14 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 0/4] V8 more sampling fun

This version implements cleanups for readability and alignment with the
finalized Intel white paper on the SRBDS topic.  I thank all who have been
giving me valuable feedback throughout this process.

I've included a branch diff between the previous patch set and this one.

---

Special Register Buffer Data Sampling (SRBDS) is a sampling-type vulnerability
that leaks data across cores sharing the hardware RNG on vulnerable processors.

The leak is fixed by a microcode update, and the mitigation is enabled by default.

This new microcode serializes processor access during execution of RDRAND or
RDSEED. It ensures that the shared buffer is overwritten before it is released
for reuse.

The mitigation reduces the throughput of the RDRAND and RDSEED instructions and
increases the latency of RT processing running on the socket while RDRAND or
RDSEED executes.  Microbenchmarks that call RDRAND many times show a slowdown.

This patch set enables kernel command line control of this mitigation and
exports vulnerability and mitigation status.
This patch set includes 4 patches:
* The first patch adds a steppings field to the x86_cpu_id structure and
  related macros.
* The second patch adds a 'table' argument to cpu_matches() so it can be
  reused with other match tables.
* The third patch enables the command line control of the mitigation as well
  as the sysfs export of vulnerability status.
* The fourth patch adds the Documentation/admin-guide/hw-vuln documentation
  for this issue and the control over the mitigation.


mark gross (4):
  x86/cpu: Add stepping field to x86_cpu_id structure
  x86/cpu: clean up cpu_matches
  x86/speculation: Special Register Buffer Data Sampling (SRBDS)
    mitigation control.
  x86/speculation: SRBDS vulnerability and mitigation documentation

 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../special-register-buffer-data-sampling.rst | 147 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpu_device_id.h          |  27 +++-
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 105 +++++++++++++
 arch/x86/kernel/cpu/common.c                  |  81 ++++++++--
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 arch/x86/kernel/cpu/match.c                   |   7 +-
 drivers/base/cpu.c                            |   8 +
 include/linux/mod_devicetable.h               |   2 +
 13 files changed, 392 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

-- 
2.17.1

diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
index 9f1ee4064fcd..b3cb73668b08 100644
--- a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -14,16 +14,19 @@ core through the special register mechanism that is susceptible to MDS attacks.
 
 Affected processors
 --------------------
-Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED and
-are vulnerable to MFBDS (Micro architectural Fill Buffer Data Sampling) variant
-of MDS (Micro architectural Data Sampling) or to the TAA (TSX Asynchronous
-Abort) when TSX is enabled,
+Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
+be affected.
+
+A processor is affected by SRBDS if its Family_Model and stepping appear in the
+following list, with the exception of processors that export MDS_NO while Intel
+TSX is available but not enabled.  Processors in that latter class are affected
+only when Intel TSX is enabled by software using TSX_CTRL_MSR; otherwise they
+are not affected.
 
-  =============  ============  ==========================
-  common name    Family_Model  Stepping
-  =============  ============  ==========================
-  Ivybridge      06_3AH        All
 
+  =============  ============  ========
+  common name    Family_Model  Stepping
+  =============  ============  ========
   Haswell        06_3CH        All
   Haswell_L      06_45H        All
   Haswell_G      06_46H        All
@@ -34,14 +37,10 @@ Abort) when TSX is enabled,
   Skylake_L      06_4EH        All
   Skylake        06_5EH        All
 
-  Kabylake_L     06_8EH        <=A
-  Kabylake_L     06_8EH        0xB only if TSX is enabled
-  Kabylake_L     06_8EH        0xC only if TSX is enabled
+  Kabylake_L     06_8EH        <=0xC
 
-  Kabylake       06_9EH        <=B
-  Kabylake       06_9EH        0xC only if TSX is enabled
-  Kabylake       06_9EH        0xD only if TSX is enabled
-  =============  ============  ==========================
+  Kabylake       06_9EH        <=0xD
+  =============  ============  ========
 
 Related CVEs
 ------------
diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index 4f0df2e46c95..10426cd56dca 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -22,7 +22,7 @@
 
 #define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
@@ -39,7 +39,7 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
 						    _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor,				\
 	.family		= _family,					\
@@ -50,7 +50,7 @@
 }
 
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
@@ -60,11 +60,11 @@
  *		format is unsigned long. The supplied value, pointer
  *		etc. is casted to unsigned long internally.
  *
- * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE() is
+ * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
  * set to wildcards.
  */
 #define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
-	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(vendor, family, model, \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
 						X86_STEPPING_ANY, feature, data)
 
 /**
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index addef92109fe..f345c24f85d1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -406,17 +406,17 @@ enum srbds_mitigations {
 	SRBDS_MITIGATION_OFF,
 	SRBDS_MITIGATION_UCODE_NEEDED,
 	SRBDS_MITIGATION_FULL,
-	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
+	SRBDS_MITIGATION_TSX_OFF,
 	SRBDS_MITIGATION_HYPERVISOR,
 };
 
 static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
 static const char * const srbds_strings[] = {
-	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
-	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
-	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",
-	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
-	[SRBDS_MITIGATION_HYPERVISOR]		= "Unknown: Dependent on hypervisor status",
+	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
+	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
+	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
+	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
 };
 
 static bool srbds_off;
@@ -438,7 +438,7 @@ void update_srbds_msr(void)
 
 	switch (srbds_mitigation) {
 	case SRBDS_MITIGATION_OFF:
-	case SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF:
+	case SRBDS_MITIGATION_TSX_OFF:
 		mcu_ctrl |= RNGDS_MITG_DIS;
 		break;
 	case SRBDS_MITIGATION_FULL:
@@ -462,26 +462,15 @@ static void __init srbds_select_mitigation(void)
 	 * TSX that are only exposed to SRBDS when TSX is enabled.
 	 */
 	ia32_cap = x86_read_arch_cap_msr();
-	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
-		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
-		goto out;
-	}
-
-	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
+		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
+	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
 		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
-		goto out;
-	}
-
-	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
+	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
 		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
-		goto out;
-	}
+	else if (cpu_mitigations_off() || srbds_off)
+		srbds_mitigation = SRBDS_MITIGATION_OFF;
 
-	if (cpu_mitigations_off() || srbds_off) {
-		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
-			srbds_mitigation = SRBDS_MITIGATION_OFF;
-	}
-out:
 	update_srbds_msr();
 	pr_info("%s\n", srbds_strings[srbds_mitigation]);
 }
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2c9be1fd3c72..39d3fb4292f9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1076,12 +1076,11 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 };
 
 #define VULNBL_INTEL_STEPPING(model, steppings, issues)			   \
-	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
 					    INTEL_FAM6_##model, steppings, \
 					    X86_FEATURE_ANY, issues)
 
 #define SRBDS		BIT(0)
-#define SRBDS_IF_TSX	BIT(1)
 
 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
@@ -1092,10 +1091,8 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
 	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
 	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
-	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
-	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
+	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xD),	SRBDS),
 	{}
 };
 
@@ -1116,6 +1113,20 @@ u64 x86_read_arch_cap_msr(void)
 	return ia32_cap;
 }
 
+static bool tsx_fused_off(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * When running with up-to-date microcode TSX_CTRL is only enumerated
+	 * on parts where TSX is fused on.  When running with microcode not
+	 * supporting TSX_CTRL we check for RTM.
+	 */
+
+	return !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR) &&
+		 !cpu_has(c, X86_FEATURE_RTM);
+}
+
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
@@ -1166,34 +1177,26 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
-		/*
-		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
-		 * they show as "Not affected".
-		 */
-		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
-		    !cpu_has(c, X86_FEATURE_RDSEED))
-			goto srbds_not_affected;
-		/*
-		 * Parts in the blacklist that enumerate MDS_NO are only
-		 * vulneralbe if TSX can be used.  To handle cases where TSX
-		 * gets fused off check to see if TSX is fused off and thus not
-		 * affected.
-		 *
-		 * When running with up to day microcode TSX_CTRL is only
-		 * enumerated on parts where TSX fused on.
-		 * When running with microcode not supporting TSX_CTRL we check
-		 * for RTM
-		 */
-		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
-		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
-		      cpu_has(c, X86_FEATURE_RTM)))
-			goto srbds_not_affected;
-
-		setup_force_cpu_bug(X86_BUG_SRBDS);
+	/*
+	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
+	 * they show as "Not affected".
+	 */
+	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
+		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
+			/*
+			 * Parts in the blacklist that enumerate MDS_NO are
+			 * only vulnerable if TSX can be used.  To handle cases
+			 * where TSX gets fused off check to see if TSX is
+			 * fused off and thus not affected.
+			 */
+			if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
+				goto srbds_not_affected;
+
+			setup_force_cpu_bug(X86_BUG_SRBDS);
+		}
 	}
-srbds_not_affected:
 
+srbds_not_affected:
 	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 


* [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (3 preceding siblings ...)
  2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V8 more sampling fun 2 mark gross
@ 2020-04-16 17:15 ` Josh Poimboeuf
  2020-04-16 17:30   ` Borislav Petkov
  2020-04-16 17:16 ` [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2 Josh Poimboeuf
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-16 17:15 UTC (permalink / raw)
  To: speck

On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure

s/stepping/steppings/

-- 
Josh


* [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (4 preceding siblings ...)
  2020-04-16 17:15 ` [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1 Josh Poimboeuf
@ 2020-04-16 17:16 ` Josh Poimboeuf
  2020-04-16 17:33   ` [MODERATED] " Borislav Petkov
  2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-16 17:16 UTC (permalink / raw)
  To: speck

On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 2/4] x86/cpu: clean up cpu_matches

Vague subject, how about

  x86/cpu: Add 'table' argument to cpu_matches()

-- 
Josh


* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (5 preceding siblings ...)
  2020-04-16 17:16 ` [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2 Josh Poimboeuf
@ 2020-04-16 17:17 ` Josh Poimboeuf
  2020-04-16 17:44   ` Borislav Petkov
  2020-04-16 22:54   ` [MODERATED] " mark gross
  2020-04-16 17:20 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Josh Poimboeuf
  2020-04-21 17:30 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Borislav Petkov
  8 siblings, 2 replies; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-16 17:17 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
>  (SRBDS) mitigation control.

Subjects don't need periods.

> +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
> +	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",

FWIW, this is at least the third time I've made this comment...

s/Mitigated/Mitigation/

> +	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",

s/Mitigated/Mitigation

> @@ -1142,6 +1177,26 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	/*
> +	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +	 * they show as "Not affected".
> +	 */
> +	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> +		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
> +			/*
> +			 * Parts in the blacklist that enumerate MDS_NO are
> +			 * only vulnerable if TSX can be used.  To handle cases
> +			 * where TSX gets fused off check to see if TSX is
> +			 * fused off and thus not affected.
> +			 */
> +			if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
> +				goto srbds_not_affected;
> +
> +			setup_force_cpu_bug(X86_BUG_SRBDS);
> +		}
> +	}
> +
> +srbds_not_affected:
>  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
>  		return;

When nitpicking the whitespace before, I think I completely missed the
fact that this goto is extremely ugly.  And there are a lot of
unnecessary nested ifs.  And the comment is redundant.

How about something like this:

	/*
	 * Parts in the SRBDS blacklist that enumerate MDS_NO are only
	 * vulnerable if TSX isn't fused off.
	 */
	if (cpu_matches(SRBDS, cpu_vuln_blacklist) &&
	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) &&
	    (!(ia32_cap & ARCH_CAP_MDS_NO) || !tsx_fused_off(c)))
		setup_force_cpu_bug(X86_BUG_SRBDS);

-- 
Josh


* [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (6 preceding siblings ...)
  2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
@ 2020-04-16 17:20 ` Josh Poimboeuf
  2020-04-16 17:49   ` [MODERATED] " Borislav Petkov
  2020-04-21 17:30 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Borislav Petkov
  8 siblings, 1 reply; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-16 17:20 UTC (permalink / raw)
  To: speck

On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> +The possible values contained in this file are:
> +
> + ============================== =============================================
> + Not affected                   Processor not vulnerable
> + Vulnerable                     Processor vulnerable and mitigation disabled
> + Vulnerable: No microcode       Processor vulnerable and microcode is missing
> +                                mitigation
> + Mitigated: Microcode           Processor is vulnerable and mitigation is in
> +                                effect.
> + Not affected (TSX disabled)    Processor is only vulnerable when TSX is
> +                                enabled while this system was booted with TSX
> +                                disabled.
> + Unknown                        Running on virtual guest processor that is
> +                                affected but with no way to know if host
> +                                processor is mitigated or vulnerable.
> + ============================== =============================================

This doesn't match the code.

-- 
Josh


* [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1
  2020-04-16 17:15 ` [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1 Josh Poimboeuf
@ 2020-04-16 17:30   ` Borislav Petkov
  0 siblings, 0 replies; 33+ messages in thread
From: Borislav Petkov @ 2020-04-16 17:30 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 12:15:58PM -0500, speck for Josh Poimboeuf wrote:
> On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure
> 
> s/stepping/steppings/

Fixed that while applying.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: Re: [PATCH 2/4] V8 more sampling fun 2
  2020-04-16 17:16 ` [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2 Josh Poimboeuf
@ 2020-04-16 17:33   ` Borislav Petkov
  2020-04-16 22:47     ` mark gross
  0 siblings, 1 reply; 33+ messages in thread
From: Borislav Petkov @ 2020-04-16 17:33 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 12:16:21PM -0500, speck for Josh Poimboeuf wrote:
> On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 2/4] x86/cpu: clean up cpu_matches
> 
> Vague subject, how about
> 
>   x86/cpu: Add 'table' argument to cpu_matches()

Fixed, below final result:

---
From: Mark Gross <mgross@linux.intel.com>
Date: Thu, 16 Apr 2020 17:32:42 +0200
Subject: [PATCH] x86/cpu: Add 'table' argument to cpu_matches()

To make cpu_matches() reusable for other matching tables, have it take a
x86_cpu_id table as an argument.

 [ bp: Flip arguments order. ]

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/cpu/common.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..1131ae032bf2 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,9 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
-static bool __init cpu_matches(unsigned long which)
+static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
 {
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);
 
 	return m && !!(m->driver_data & which);
 }
@@ -1097,31 +1097,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(NO_SPECTRE_V2))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
 	}
 
-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
 	/*
@@ -1139,7 +1142,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(NO_MELTDOWN))
+	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1148,7 +1151,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
-- 
2.21.0

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
@ 2020-04-16 17:44   ` Borislav Petkov
  2020-04-16 18:01     ` Josh Poimboeuf
                       ` (2 more replies)
  2020-04-16 22:54   ` [MODERATED] " mark gross
  1 sibling, 3 replies; 33+ messages in thread
From: Borislav Petkov @ 2020-04-16 17:44 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
> >  (SRBDS) mitigation control.
> 
> Subjects don't need periods.

Fixed.

> > +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
> > +	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
> 
> FWIW, this is at least the third time I've made this comment...
> 
> s/Mitigated/Mitigation/
> 
> > +	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
> 
> s/Mitigated/Mitigation

Fixed.

> When nitpicking the whitespace before, I think I completely missed the
> fact that this goto is extremely ugly.  And there are a lot of
> unnecessary nested ifs.  And the comment is redundant.

I've cleaned up that part a bit more, please have a look. Here's the whole
thing:

---
From: Mark Gross <mgross@linux.intel.com>
Date: Thu, 16 Apr 2020 17:54:04 +0200
Subject: [PATCH] x86/speculation: Add Special Register Buffer Data Sampling
 (SRBDS) mitigation controls

SRBDS is an MDS-like speculative side channel that can leak bits from
the RNG across cores and threads. New microcode serializes the processor
access during the execution of RDRAND and RDSEED. This ensures that the
shared buffer is overwritten before it is released for reuse.

While it is present on all affected CPU models, the microcode mitigation
is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
cases where TSX is not supported or has been disabled with TSX_CTRL.

The mitigation is activated by default on affected processors and it
increases latency for RDRAND and RDSEED instructions. Among other
effects this will reduce throughput from /dev/urandom.

* Enable administrator to configure the mitigation off when desired using
either mitigations=off or srbds=off.

* Export vulnerability status via sysfs

* Rename file-scoped macros to apply for non-whitelist table
initializations.

 [ bp: Massage,
   - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g,
   - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in,
   - flip check in cpu_set_bug_bits() to save an indentation level,
   - reflow comments.
   jpoimboe: s/Mitigated/Mitigation/ in user-visible strings ]

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 .../admin-guide/kernel-parameters.txt         |  20 ++++
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 107 ++++++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  54 +++++++++
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 drivers/base/cpu.c                            |   8 ++
 8 files changed, 198 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 2e0e3b45d02a..b39531a3c5bc 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/srbds
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
 Date:		January 2018
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f2a93c8679e8..f720463bd918 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4757,6 +4757,26 @@
 			the kernel will oops in either "warn" or "fatal"
 			mode.
 
+	srbds=		[X86,INTEL]
+			Control the Special Register Buffer Data Sampling
+			(SRBDS) mitigation.
+
+			Certain CPUs are vulnerable to an MDS-like
+			exploit which can leak bits from the random
+			number generator.
+
+			By default, this issue is mitigated by
+			microcode.  However, the microcode fix can cause
+			the RDRAND and RDSEED instructions to become
+			much slower.  Among other effects, this will
+			result in reduced throughput from /dev/urandom.
+
+			The microcode mitigation can be disabled with
+			the following option:
+
+			off:    Disable mitigation and remove
+				performance impact to RDRAND and RDSEED
+
 	srcutree.counter_wrap_check [KNL]
 			Specifies how frequently to check for
 			grace-period sequence counter wrap for the
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index db189945e9b0..02dabc9e77b0 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -362,6 +362,7 @@
 #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
 #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
 #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
+#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
 #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
 #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
@@ -407,5 +408,6 @@
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
 #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 12c9684d59ba..3efde600a674 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -128,6 +128,10 @@
 #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
 #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
 
+/* SRBDS support */
+#define MSR_IA32_MCU_OPT_CTRL		0x00000123
+#define RNGDS_MITG_DIS			BIT(0)
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c39..95e066c8d45d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_print_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -108,6 +109,7 @@ void __init check_bugs(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	srbds_select_mitigation();
 
 	/*
 	 * As MDS and TAA mitigations are inter-related, print MDS
@@ -397,6 +399,98 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"SRBDS: " fmt
+
+enum srbds_mitigations {
+	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_UCODE_NEEDED,
+	SRBDS_MITIGATION_FULL,
+	SRBDS_MITIGATION_TSX_OFF,
+	SRBDS_MITIGATION_HYPERVISOR,
+};
+
+static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+
+static const char * const srbds_strings[] = {
+	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
+	[SRBDS_MITIGATION_FULL]		= "Mitigation: Microcode",
+	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigation: TSX disabled",
+	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
+};
+
+static bool srbds_off;
+
+void update_srbds_msr(void)
+{
+	u64 mcu_ctrl;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
+		return;
+
+	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+
+	switch (srbds_mitigation) {
+	case SRBDS_MITIGATION_OFF:
+	case SRBDS_MITIGATION_TSX_OFF:
+		mcu_ctrl |= RNGDS_MITG_DIS;
+		break;
+	case SRBDS_MITIGATION_FULL:
+		mcu_ctrl &= ~RNGDS_MITG_DIS;
+		break;
+	default:
+		break;
+	}
+
+	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+}
+
+static void __init srbds_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+
+	/*
+	 * Check to see if this is one of the MDS_NO systems supporting
+	 * TSX that are only exposed to SRBDS when TSX is enabled.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
+		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
+	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
+	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
+		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
+	else if (cpu_mitigations_off() || srbds_off)
+		srbds_mitigation = SRBDS_MITIGATION_OFF;
+
+	update_srbds_msr();
+	pr_info("%s\n", srbds_strings[srbds_mitigation]);
+}
+
+static int __init srbds_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return 0;
+
+	srbds_off = !strcmp(str, "off");
+
+	return 0;
+}
+early_param("srbds", srbds_parse_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -1528,6 +1622,11 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static ssize_t srbds_show_state(char *buf)
+{
+	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -1572,6 +1671,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_ITLB_MULTIHIT:
 		return itlb_multihit_show_state(buf);
 
+	case X86_BUG_SRBDS:
+		return srbds_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1618,4 +1720,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
 }
+
+ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
+}
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 1131ae032bf2..dee4d14eb975 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,6 +1075,27 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
+#define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
+					    INTEL_FAM6_##model, steppings, \
+					    X86_FEATURE_ANY, issues)
+
+#define SRBDS		BIT(0)
+
+static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
+	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xD),	SRBDS),
+	{}
+};
+
 static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
 {
 	const struct x86_cpu_id *m = x86_match_cpu(table);
@@ -1092,6 +1113,17 @@ u64 x86_read_arch_cap_msr(void)
 	return ia32_cap;
 }
 
+/*
+ * When running with up-to-date microcode TSX_CTRL is only enumerated on parts
+ * where TSX is fused on.  When running with microcode not supporting TSX_CTRL,
+ * check for RTM.
+ */
+static bool tsx_fused_off(struct cpuinfo_x86 *c, u64 ia32_cap)
+{
+	return !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR) &&
+		 !cpu_has(c, X86_FEATURE_RTM);
+}
+
 static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = x86_read_arch_cap_msr();
@@ -1142,6 +1174,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
+	/*
+	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
+	 * they show as "Not affected".
+	 */
+	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
+		if (!cpu_matches(cpu_vuln_blacklist, SRBDS))
+			goto srbds_not_affected;
+
+		/*
+		 * Parts in the blacklist that enumerate MDS_NO are only
+		 * vulnerable if TSX can be used.  To handle cases where TSX
+		 * gets fused off check to see if TSX is fused off and thus not
+		 * affected.
+		 */
+		if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c, ia32_cap))
+			goto srbds_not_affected;
+
+		setup_force_cpu_bug(X86_BUG_SRBDS);
+	}
+
+srbds_not_affected:
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
 
@@ -1594,6 +1647,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
+	update_srbds_msr();
 }
 
 static __init int setup_noclflush(char *arg)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 37fdefd14f28..f3e2fc44dba0 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,8 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+void update_srbds_msr(void);
+
 #ifdef CONFIG_CPU_SUP_INTEL
 enum tsx_ctrl_states {
 	TSX_CTRL_ENABLE,
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 9a1c00fbbaef..d2136ab9b14a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_srbds(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_mds.attr,
 	&dev_attr_tsx_async_abort.attr,
 	&dev_attr_itlb_multihit.attr,
+	&dev_attr_srbds.attr,
 	NULL
 };
 
-- 
2.21.0

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-16 17:20 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Josh Poimboeuf
@ 2020-04-16 17:49   ` Borislav Petkov
  2020-04-16 22:57     ` mark gross
  2020-04-20 14:30     ` mark gross
  0 siblings, 2 replies; 33+ messages in thread
From: Borislav Petkov @ 2020-04-16 17:49 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 12:20:21PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> > +The possible values contained in this file are:
> > +
> > + ============================== =============================================
> > + Not affected                   Processor not vulnerable
> > + Vulnerable                     Processor vulnerable and mitigation disabled
> > + Vulnerable: No microcode       Processor vulnerable and microcode is missing
> > +                                mitigation
> > + Mitigated: Microcode           Processor is vulnerable and mitigation is in
> > +                                effect.
> > + Not affected (TSX disabled)    Processor is only vulnerable when TSX is
> > +                                enabled while this system was booted with TSX
> > +                                disabled.
> > + Unknown                        Running on virtual guest processor that is
> > +                                affected but with no way to know if host
> > +                                processor is mitigated or vulnerable.
> > + ============================== =============================================
> 
> This doesn't match the code.

How's that?

 ============================== =============================================
 Not affected                   Processor not vulnerable
 Vulnerable                     Processor vulnerable and mitigation disabled
 Vulnerable: No microcode       Processor vulnerable and microcode is missing
                                mitigation
 Mitigation: Microcode          Processor is vulnerable and mitigation is in
                                effect.
 Mitigation: TSX disabled       Processor is only vulnerable when TSX is
                                enabled while this system was booted with TSX
                                disabled.
 Unknown: Dependent on
 hypervisor status              Running on virtual guest processor that is
                                affected but with no way to know if host
                                processor is mitigated or vulnerable.
 ============================== =============================================

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 17:44   ` Borislav Petkov
@ 2020-04-16 18:01     ` Josh Poimboeuf
  2020-04-16 22:45       ` mark gross
  2020-04-16 22:57     ` mark gross
  2020-04-17 12:34     ` Thomas Gleixner
  2 siblings, 1 reply; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-16 18:01 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 07:44:52PM +0200, speck for Borislav Petkov wrote:
>  static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  {
>  	u64 ia32_cap = x86_read_arch_cap_msr();
> @@ -1142,6 +1174,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	/*
> +	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +	 * they show as "Not affected".
> +	 */
> +	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> +		if (!cpu_matches(cpu_vuln_blacklist, SRBDS))
> +			goto srbds_not_affected;
> +
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulnerable if TSX can be used.  To handle cases where TSX
> +		 * gets fused off check to see if TSX is fused off and thus not
> +		 * affected.
> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c, ia32_cap))
> +			goto srbds_not_affected;
> +
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +	}
> +
> +srbds_not_affected:

I still strongly dislike this and would much prefer my more compact
non-goto version.

Otherwise everything looks good :-)

-- 
Josh

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 18:01     ` Josh Poimboeuf
@ 2020-04-16 22:45       ` mark gross
  0 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-16 22:45 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 01:01:08PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Apr 16, 2020 at 07:44:52PM +0200, speck for Borislav Petkov wrote:
> >  static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  {
> >  	u64 ia32_cap = x86_read_arch_cap_msr();
> > @@ -1142,6 +1174,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >  		setup_force_cpu_bug(X86_BUG_TAA);
> >  
> > +	/*
> > +	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> > +	 * they show as "Not affected".
> > +	 */
> > +	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> > +		if (!cpu_matches(cpu_vuln_blacklist, SRBDS))
> > +			goto srbds_not_affected;
> > +
> > +		/*
> > +		 * Parts in the blacklist that enumerate MDS_NO are only
> > +		 * vulnerable if TSX can be used.  To handle cases where TSX
> > +		 * gets fused off check to see if TSX is fused off and thus not
> > +		 * affected.
> > +		 */
> > +		if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c, ia32_cap))
> > +			goto srbds_not_affected;
> > +
> > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > +	}
> > +
> > +srbds_not_affected:
> 
> I still strongly dislike this and would much prefer my more compact
> non-goto version.

FWIW I did think pretty hard about doing it the way you proposed without the
goto a few emails back, but felt the readability was really poor with the
complexity of the conditional that results.

But, I'm not going to fight you on it.

> 
> Otherwise everything looks good :-)
thanks.

--mark

> -- 
> Josh

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Re: [PATCH 2/4] V8 more sampling fun 2
  2020-04-16 17:33   ` [MODERATED] " Borislav Petkov
@ 2020-04-16 22:47     ` mark gross
  0 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-16 22:47 UTC (permalink / raw)
  To: speck

ack

Signed-off-by: Mark Gross <mgross@linux.intel.com>

thanks!

--mark

On Thu, Apr 16, 2020 at 07:33:24PM +0200, speck for Borislav Petkov wrote:
> On Thu, Apr 16, 2020 at 12:16:21PM -0500, speck for Josh Poimboeuf wrote:
> > On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> > > From: mark gross <mgross@linux.intel.com>
> > > Subject: [PATCH 2/4] x86/cpu: clean up cpu_matches
> > 
> > Vague subject, how about
> > 
> >   x86/cpu: Add 'table' argument to cpu_matches()
> 
> Fixed, below final result:
> 
> ---
> From: Mark Gross <mgross@linux.intel.com>
> Date: Thu, 16 Apr 2020 17:32:42 +0200
> Subject: [PATCH] x86/cpu: Add 'table' argument to cpu_matches()
> 
> To make cpu_matches() reusable for other matching tables, have it take a
> x86_cpu_id table as an argument.
> 
>  [ bp: Flip arguments order. ]
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> ---
>  arch/x86/kernel/cpu/common.c | 25 ++++++++++++++-----------
>  1 file changed, 14 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index bed0cb83fe24..1131ae032bf2 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1075,9 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
>  	{}
>  };
>  
> -static bool __init cpu_matches(unsigned long which)
> +static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
>  {
> -	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
> +	const struct x86_cpu_id *m = x86_match_cpu(table);
>  
>  	return m && !!(m->driver_data & which);
>  }
> @@ -1097,31 +1097,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	u64 ia32_cap = x86_read_arch_cap_msr();
>  
>  	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
> -	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
> +	if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
> +	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
>  		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
>  
> -	if (cpu_matches(NO_SPECULATION))
> +	if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
>  		return;
>  
>  	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
>  
> -	if (!cpu_matches(NO_SPECTRE_V2))
> +	if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
>  		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
>  
> -	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
> +	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
> +	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
>  	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
>  		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
>  
>  	if (ia32_cap & ARCH_CAP_IBRS_ALL)
>  		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
>  
> -	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
> +	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
> +	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
>  		setup_force_cpu_bug(X86_BUG_MDS);
> -		if (cpu_matches(MSBDS_ONLY))
> +		if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
>  			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
>  	}
>  
> -	if (!cpu_matches(NO_SWAPGS))
> +	if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>  
>  	/*
> @@ -1139,7 +1142,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> -	if (cpu_matches(NO_MELTDOWN))
> +	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
>  		return;
>  
>  	/* Rogue Data Cache Load? No! */
> @@ -1148,7 +1151,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  
>  	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
>  
> -	if (cpu_matches(NO_L1TF))
> +	if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
>  		return;
>  
>  	setup_force_cpu_bug(X86_BUG_L1TF);
> -- 
> 2.21.0
> 
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> -- 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
  2020-04-16 17:44   ` Borislav Petkov
@ 2020-04-16 22:54   ` mark gross
  1 sibling, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-16 22:54 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
> >  (SRBDS) mitigation control.
> 
> Subjects don't need periods.
> 
> > +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
> > +	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
> 
> FWIW, this is at least the third time I've made this comment...
I'm very sorry I missed this multiple times.

> s/Mitigated/Mitigation/
> 
> > +	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
> 
> s/Mitigated/Mitigation
> 
> > @@ -1142,6 +1177,26 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >  		setup_force_cpu_bug(X86_BUG_TAA);
> >  
> > +	/*
> > +	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> > +	 * they show as "Not affected".
> > +	 */
> > +	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> > +		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
> > +			/*
> > +			 * Parts in the blacklist that enumerate MDS_NO are
> > +			 * only vulnerable if TSX can be used.  To handle cases
> > +			 * where TSX gets fused off check to see if TSX is
> > +			 * fused off and thus not affected.
> > +			 */
> > +			if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
> > +				goto srbds_not_affected;
> > +
> > +			setup_force_cpu_bug(X86_BUG_SRBDS);
> > +		}
> > +	}
> > +
> > +srbds_not_affected:
> >  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
> >  		return;
> 
> When nitpicking the whitespace before, I think I completely missed the
> fact that this goto is extremely ugly.  And there are a lot of
> unnecessary nested ifs.  And the comment is redundant.
> 
> How about something like this:
> 
> 	/*
> 	 * Parts in the SRBDS blacklist that enumerate MDS_NO are only
> 	 * vulnerable if TSX isn't fused off.
> 	 */
> 	if (cpu_matches(SRBDS, cpu_vuln_blacklist) &&
> 	    (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) &&
> 	    (!(ia32_cap & ARCH_CAP_MDS_NO) || !tsx_fused_off(c)))
> 		setup_force_cpu_bug(X86_BUG_SRBDS);
I had this in an unreleased version, but I thought the readability of the
conditional was too poor, so I created the tsx_fused_off() function instead.

I also considered the contrapositive of what I did have:

		if (!(ia32_cap & ARCH_CAP_MDS_NO) || !tsx_fused_off(c))
			goto srbds_not_affected;

which is also hard to read/parse.

Finally, I thought of:
		if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c))
			continue;
		else
	 		setup_force_cpu_bug(X86_BUG_SRBDS);

and thought that looked pretty dumb too.

But, like I said, I'm not going to argue.

--mark

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 17:44   ` Borislav Petkov
  2020-04-16 18:01     ` Josh Poimboeuf
@ 2020-04-16 22:57     ` mark gross
  2020-04-17 12:34     ` Thomas Gleixner
  2 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-16 22:57 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 07:44:52PM +0200, speck for Borislav Petkov wrote:
> On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > > From: mark gross <mgross@linux.intel.com>
> > > Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
> > >  (SRBDS) mitigation control.
> > 
> > Subjects don't need periods.
> 
> Fixed.
thank you

> 
> > > +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > > +static const char * const srbds_strings[] = {
> > > +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> > > +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
> > > +	[SRBDS_MITIGATION_FULL]		= "Mitigated: Microcode",
> > 
> > FWIW, this is at least the third time I've made this comment...
> > 
> > s/Mitigated/Mitigation/
> > 
> > > +	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigated: TSX disabled",
> > 
> > s/Mitigated/Mitigation
> 
> Fixed.
thank you
> 
> > When nitpicking the whitespace before, I think I completely missed the
> > fact that this goto is extremely ugly.  And there are a lot of
> > unnecessary nested ifs.  And the comment is redundant.
> 
> I've cleaned up that part a bit more, please have a look. Here's the whole
> thing:
> 
> ---
> From: Mark Gross <mgross@linux.intel.com>
> Date: Thu, 16 Apr 2020 17:54:04 +0200
> Subject: [PATCH] x86/speculation: Add Special Register Buffer Data Sampling
>  (SRBDS) mitigation controls
> 
> SRBDS is an MDS-like speculative side channel that can leak bits from
> the RNG across cores and threads. New microcode serializes the processor
> access during the execution of RDRAND and RDSEED. This ensures that the
> shared buffer is overwritten before it is released for reuse.
> 
> While it is present on all affected CPU models, the microcode mitigation
> is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> cases where TSX is not supported or has been disabled with TSX_CTRL.
> 
> The mitigation is activated by default on affected processors and it
> increases latency for RDRAND and RDSEED instructions. Among other
> effects this will reduce throughput from /dev/urandom.
> 
> * Enable administrator to configure the mitigation off when desired using
> either mitigations=off or srbds=off.
> 
> * Export vulnerability status via sysfs
> 
> * Rename file-scoped macros to apply for non-whitelist table
> initializations.
> 
>  [ bp: Massage,
>    - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g,
>    - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in,
>    - flip check in cpu_set_bug_bits() to save an indentation level,
>    - reflow comments.
>    jpoimboe: s/Mitigated/Mitigation/ in user-visible strings ]
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  .../ABI/testing/sysfs-devices-system-cpu      |   1 +
>  .../admin-guide/kernel-parameters.txt         |  20 ++++
>  arch/x86/include/asm/cpufeatures.h            |   2 +
>  arch/x86/include/asm/msr-index.h              |   4 +
>  arch/x86/kernel/cpu/bugs.c                    | 107 ++++++++++++++++++
>  arch/x86/kernel/cpu/common.c                  |  54 +++++++++
>  arch/x86/kernel/cpu/cpu.h                     |   2 +
>  drivers/base/cpu.c                            |   8 ++
>  8 files changed, 198 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index 2e0e3b45d02a..b39531a3c5bc 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
>  		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
>  		/sys/devices/system/cpu/vulnerabilities/l1tf
>  		/sys/devices/system/cpu/vulnerabilities/mds
> +		/sys/devices/system/cpu/vulnerabilities/srbds
>  		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
>  		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
>  Date:		January 2018
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2a93c8679e8..f720463bd918 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4757,6 +4757,26 @@
>  			the kernel will oops in either "warn" or "fatal"
>  			mode.
>  
> +	srbds=		[X86,INTEL]
> +			Control the Special Register Buffer Data Sampling
> +			(SRBDS) mitigation.
> +
> +			Certain CPUs are vulnerable to an MDS-like
> +			exploit which can leak bits from the random
> +			number generator.
> +
> +			By default, this issue is mitigated by
> +			microcode.  However, the microcode fix can cause
> +			the RDRAND and RDSEED instructions to become
> +			much slower.  Among other effects, this will
> +			result in reduced throughput from /dev/urandom.
> +
> +			The microcode mitigation can be disabled with
> +			the following option:
> +
> +			off:    Disable mitigation and remove
> +				performance impact to RDRAND and RDSEED
> +
>  	srcutree.counter_wrap_check [KNL]
>  			Specifies how frequently to check for
>  			grace-period sequence counter wrap for the
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index db189945e9b0..02dabc9e77b0 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -362,6 +362,7 @@
>  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
>  #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
>  #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
> +#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
>  #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
>  #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
>  #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
> @@ -407,5 +408,6 @@
>  #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
>  #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
>  #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
> +#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
>  
>  #endif /* _ASM_X86_CPUFEATURES_H */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 12c9684d59ba..3efde600a674 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -128,6 +128,10 @@
>  #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
>  #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
>  
> +/* SRBDS support */
> +#define MSR_IA32_MCU_OPT_CTRL		0x00000123
> +#define RNGDS_MITG_DIS			BIT(0)
> +
>  #define MSR_IA32_SYSENTER_CS		0x00000174
>  #define MSR_IA32_SYSENTER_ESP		0x00000175
>  #define MSR_IA32_SYSENTER_EIP		0x00000176
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index ed54b3b21c39..95e066c8d45d 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
>  static void __init mds_select_mitigation(void);
>  static void __init mds_print_mitigation(void);
>  static void __init taa_select_mitigation(void);
> +static void __init srbds_select_mitigation(void);
>  
>  /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
>  u64 x86_spec_ctrl_base;
> @@ -108,6 +109,7 @@ void __init check_bugs(void)
>  	l1tf_select_mitigation();
>  	mds_select_mitigation();
>  	taa_select_mitigation();
> +	srbds_select_mitigation();
>  
>  	/*
>  	 * As MDS and TAA mitigations are inter-related, print MDS
> @@ -397,6 +399,98 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
>  }
>  early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
>  
> +#undef pr_fmt
> +#define pr_fmt(fmt)	"SRBDS: " fmt
> +
> +enum srbds_mitigations {
> +	SRBDS_MITIGATION_OFF,
> +	SRBDS_MITIGATION_UCODE_NEEDED,
> +	SRBDS_MITIGATION_FULL,
> +	SRBDS_MITIGATION_TSX_OFF,
> +	SRBDS_MITIGATION_HYPERVISOR,
> +};
> +
> +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> +
> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
> +	[SRBDS_MITIGATION_FULL]		= "Mitigation: Microcode",
> +	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigation: TSX disabled",
> +	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
> +};
> +
> +static bool srbds_off;
> +
> +void update_srbds_msr(void)
> +{
> +	u64 mcu_ctrl;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return;
> +
> +	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
> +		return;
> +
> +	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +
> +	switch (srbds_mitigation) {
> +	case SRBDS_MITIGATION_OFF:
> +	case SRBDS_MITIGATION_TSX_OFF:
> +		mcu_ctrl |= RNGDS_MITG_DIS;
> +		break;
> +	case SRBDS_MITIGATION_FULL:
> +		mcu_ctrl &= ~RNGDS_MITG_DIS;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +}
> +
> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +
> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
> +		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
> +	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> +	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +	else if (cpu_mitigations_off() || srbds_off)
> +		srbds_mitigation = SRBDS_MITIGATION_OFF;
> +
> +	update_srbds_msr();
> +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
> +static int __init srbds_parse_cmdline(char *str)
> +{
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return 0;
> +
> +	srbds_off = !strcmp(str, "off");
> +
> +	return 0;
> +}
> +early_param("srbds", srbds_parse_cmdline);
> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V1 : " fmt
>  
> @@ -1528,6 +1622,11 @@ static char *ibpb_state(void)
>  	return "";
>  }
>  
> +static ssize_t srbds_show_state(char *buf)
> +{
> +	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> @@ -1572,6 +1671,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  	case X86_BUG_ITLB_MULTIHIT:
>  		return itlb_multihit_show_state(buf);
>  
> +	case X86_BUG_SRBDS:
> +		return srbds_show_state(buf);
> +
>  	default:
>  		break;
>  	}
> @@ -1618,4 +1720,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
>  {
>  	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
>  }
> +
> +ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
> +}
>  #endif
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 1131ae032bf2..dee4d14eb975 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1075,6 +1075,27 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
>  	{}
>  };
>  
> +#define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
> +	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
> +					    INTEL_FAM6_##model, steppings, \
> +					    X86_FEATURE_ANY, issues)
> +
> +#define SRBDS		BIT(0)
> +
> +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> +	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
> +	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xD),	SRBDS),
> +	{}
> +};
> +
>  static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
>  {
>  	const struct x86_cpu_id *m = x86_match_cpu(table);
> @@ -1092,6 +1113,17 @@ u64 x86_read_arch_cap_msr(void)
>  	return ia32_cap;
>  }
>  
> +/*
> + * When running with up-to-date microcode TSX_CTRL is only enumerated on parts
> + * where TSX is fused on.  When running with microcode not supporting TSX_CTRL,
> + * check for RTM.
> + */
> +static bool tsx_fused_off(struct cpuinfo_x86 *c, u64 ia32_cap)
> +{
> +	return !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR) &&
> +		 !cpu_has(c, X86_FEATURE_RTM);
> +}
> +
>  static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  {
>  	u64 ia32_cap = x86_read_arch_cap_msr();
> @@ -1142,6 +1174,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	/*
> +	 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +	 * they show as "Not affected".
> +	 */
> +	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> +		if (!cpu_matches(cpu_vuln_blacklist, SRBDS))
> +			goto srbds_not_affected;
> +
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulnerable if TSX can be used.  To handle cases where TSX
> +		 * gets fused off check to see if TSX is fused off and thus not
> +		 * affected.
> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) && tsx_fused_off(c, ia32_cap))
> +			goto srbds_not_affected;
> +
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +	}
> +
> +srbds_not_affected:
>  	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
>  		return;
>  
> @@ -1594,6 +1647,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
>  	mtrr_ap_init();
>  	validate_apic_and_package_id(c);
>  	x86_spec_ctrl_setup_ap();
> +	update_srbds_msr();
>  }
>  
>  static __init int setup_noclflush(char *arg)
> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 37fdefd14f28..f3e2fc44dba0 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -44,6 +44,8 @@ struct _tlb_table {
>  extern const struct cpu_dev *const __x86_cpu_dev_start[],
>  			    *const __x86_cpu_dev_end[];
>  
> +void update_srbds_msr(void);
> +
>  #ifdef CONFIG_CPU_SUP_INTEL
>  enum tsx_ctrl_states {
>  	TSX_CTRL_ENABLE,
> diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> index 9a1c00fbbaef..d2136ab9b14a 100644
> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
>  	return sprintf(buf, "Not affected\n");
>  }
>  
> +ssize_t __weak cpu_show_srbds(struct device *dev,
> +			      struct device_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "Not affected\n");
> +}
> +
>  static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
>  static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
>  static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
> @@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
>  static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
>  static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
>  static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
> +static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
>  
>  static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_meltdown.attr,
> @@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_mds.attr,
>  	&dev_attr_tsx_async_abort.attr,
>  	&dev_attr_itlb_multihit.attr,
> +	&dev_attr_srbds.attr,
>  	NULL
>  };
>  
> -- 
> 2.21.0
> 
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> -- 

ACK
Signed-off-by: Mark Gross <mgross@linux.intel.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-16 17:49   ` [MODERATED] " Borislav Petkov
@ 2020-04-16 22:57     ` mark gross
  2020-04-20 14:30     ` mark gross
  1 sibling, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-16 22:57 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 07:49:14PM +0200, speck for Borislav Petkov wrote:
> On Thu, Apr 16, 2020 at 12:20:21PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> > > +The possible values contained in this file are:
> > > +
> > > + ============================== =============================================
> > > + Not affected                   Processor not vulnerable
> > > + Vulnerable                     Processor vulnerable and mitigation disabled
> > > + Vulnerable: No microcode       Processor vulnerable and microcode is missing
> > > +                                mitigation
> > > + Mitigated: Microcode           Processor is vulnerable and mitigation is in
> > > +                                effect.
> > > + Not affected (TSX disabled)    Processor is only vulnerable when TSX is
> > > +                                enabled while this system was booted with TSX
> > > +                                disabled.
> > > + Unknown                        Running on virtual guest processor that is
> > > +                                affected but with no way to know if host
> > > +                                processor is mitigated or vulnerable.
> > > + ============================== =============================================
> > 
> > This doesn't match the code.
> 
> How's that?
> 
>  ============================== =============================================
>  Not affected                   Processor not vulnerable
>  Vulnerable                     Processor vulnerable and mitigation disabled
>  Vulnerable: No microcode       Processor vulnerable and microcode is missing
>                                 mitigation
>  Mitigation: Microcode          Processor is vulnerable and mitigation is in
>                                 effect.
>  Mitigation: TSX disabled       Processor is only vulnerable when TSX is
>                                 enabled while this system was booted with TSX
>                                 disabled.
>  Unknown: Dependent on
>  hypervisor status              Running on virtual guest processor that is
>                                 affected but with no way to know if host
>                                 processor is mitigated or vulnerable.
>  ============================== =============================================
> 
thanks,

--mark

-- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> -- 

^ permalink raw reply	[flat|nested] 33+ messages in thread
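The strings tabulated above are what userspace reads back from the sysfs file this series adds. A guarded read could look like the following (the file only exists on kernels carrying these patches, hence the existence check):

```shell
# Read the SRBDS state exposed by this series.  The file is absent on
# kernels without the patch, so guard the access.
f=/sys/devices/system/cpu/vulnerabilities/srbds
if [ -r "$f" ]; then
    cat "$f"
else
    echo "srbds: sysfs entry not present on this kernel"
fi
```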

* Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-16 17:44   ` Borislav Petkov
  2020-04-16 18:01     ` Josh Poimboeuf
  2020-04-16 22:57     ` mark gross
@ 2020-04-17 12:34     ` Thomas Gleixner
  2020-04-17 13:19       ` [MODERATED] " Josh Poimboeuf
  2 siblings, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2020-04-17 12:34 UTC (permalink / raw)
  To: speck

speck for Borislav Petkov <speck@linutronix.de> writes:
> On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
>
> While it is present on all affected CPU models, the microcode mitigation
> is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> cases where TSX is not supported or has been disabled with TSX_CTRL.

If the CPU does not expose TSX_CTRL and has FEATURE_RTM disabled (BIOS
or fused off), then we declare it not vulnerable.

If the CPU exposes TSX_CTRL then we declare it vulnerable and decide in
the mitigation selection whether it is vulnerable or not depending on
the RTM state. If RTM is off, we say: "Mitigation: TSX disabled".

IMO the whole tsx_fused_off() logic is pointless. It does not matter
whether TSX got fused off or disabled in BIOS or disabled via
TSX_CTRL. The CPU model is affected but the problem is mitigated because
TSX is disabled.

Aside of that the tsx_fused_off() logic depends on the non-availability
of TSX_CTRL. TSX_CTRL is available even on CPUs which enumerate TAA_NO,
but I don't see any check for TAA_NO or BUG_TAA anywhere.

Is SRBDS on MDS_NO parts independent of TAA, i.e. does it solely depend
on the fact that RTM is on?

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-17 12:34     ` Thomas Gleixner
@ 2020-04-17 13:19       ` Josh Poimboeuf
  2020-04-17 16:46         ` Luck, Tony
  2020-04-17 19:22         ` Thomas Gleixner
  0 siblings, 2 replies; 33+ messages in thread
From: Josh Poimboeuf @ 2020-04-17 13:19 UTC (permalink / raw)
  To: speck

On Fri, Apr 17, 2020 at 02:34:41PM +0200, speck for Thomas Gleixner wrote:
> speck for Borislav Petkov <speck@linutronix.de> writes:
> > On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
> >
> > While it is present on all affected CPU models, the microcode mitigation
> > is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> > cases where TSX is not supported or has been disabled with TSX_CTRL.
> 
> If the CPU does not expose TSX_CTRL and has FEATURE_RTM disabled (BIOS
> or fused off) then we declare it as non vulnerable.
> 
> If the CPU exposes TSX_CTRL then we declare it vulnerable and decide in
> the mitigation selection whether it is vulnerable or not depending on
> the RTM state. If RTM is off, we say: "Mitigation: TSX disabled".
> 
> IMO the whole tsx_fused_off() logic is pointless. It does not matter
> whether TSX got fused off or disabled in BIOS or disabled via
> TSX_CTRL. The CPU model is affected but the problem is mitigated because
> TSX is disabled.

The idea is that if TSX is *permanently* off, there's no way to trigger
the bug, regardless of how the user has things configured in BIOS or the
kernel.  So from the user's standpoint, the CPU is not affected, and
never was, regardless of kernel/BIOS settings and microcode.

Is there not a way to distinguish "disabled in BIOS" from "permanently
fused off"?  If not, then yes we should just consider all of them
"Mitigation: TSX disabled".

> Aside of that the tsx_fused_off() logic depends on the non-availability
> of TSX_CTRL. TSX_CTRL is available even on CPUs which enumerate TAA_NO,
> but I don't see any check for TAA_NO or BUG_TAA anywhere.
> 
> > Is SRBDS on MDS_NO parts independent of TAA, i.e. does it solely depend
> on the fact that RTM is on?

I had assumed there are no parts in the SRBDS blacklist with TAA_NO.

-- 
Josh

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-17 13:19       ` [MODERATED] " Josh Poimboeuf
@ 2020-04-17 16:46         ` Luck, Tony
  2020-04-17 19:22         ` Thomas Gleixner
  1 sibling, 0 replies; 33+ messages in thread
From: Luck, Tony @ 2020-04-17 16:46 UTC (permalink / raw)
  To: speck

On Fri, Apr 17, 2020 at 08:19:47AM -0500, speck for Josh Poimboeuf wrote:
> On Fri, Apr 17, 2020 at 02:34:41PM +0200, speck for Thomas Gleixner wrote:
> > Aside of that the tsx_fused_off() logic depends on the non-availability
> > of TSX_CTRL. TSX_CTRL is available even on CPUs which enumerate TAA_NO,
> > but I don't see any check for TAA_NO or BUG_TAA anywhere.
> > 
> > Is SRBDS on MDS_NO parts independent of TAA, i.e. does it solely depend
> > on the fact that RTM is on?
> 
> I had assumed there are no parts in the SRBDS blacklist with TAA_NO.

Correct. The affected processor list weeds out those with TAA_NO.

-Tony

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 3/4] V8 more sampling fun 3
  2020-04-17 13:19       ` [MODERATED] " Josh Poimboeuf
  2020-04-17 16:46         ` Luck, Tony
@ 2020-04-17 19:22         ` Thomas Gleixner
  1 sibling, 0 replies; 33+ messages in thread
From: Thomas Gleixner @ 2020-04-17 19:22 UTC (permalink / raw)
  To: speck

speck for Josh Poimboeuf <speck@linutronix.de> writes:
> On Fri, Apr 17, 2020 at 02:34:41PM +0200, speck for Thomas Gleixner wrote:
>> speck for Borislav Petkov <speck@linutronix.de> writes:
>> > On Thu, Apr 16, 2020 at 12:17:23PM -0500, speck for Josh Poimboeuf wrote:
>> >
>> > While it is present on all affected CPU models, the microcode mitigation
>> > is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
>> > cases where TSX is not supported or has been disabled with TSX_CTRL.
>> 
>> If the CPU does not expose TSX_CTRL and has FEATURE_RTM disabled (BIOS
>> or fused off) then we declare it as non vulnerable.
>> 
>> If the CPU exposes TSX_CTRL then we declare it vulnerable and decide in
>> the mitigation selection whether it is vulnerable or not depending on
>> the RTM state. If RTM is off, we say: "Mitigation: TSX disabled".
>> 
>> IMO the whole tsx_fused_off() logic is pointless. It does not matter
>> whether TSX got fused off or disabled in BIOS or disabled via
>> TSX_CTRL. The CPU model is affected but the problem is mitigated because
>> TSX is disabled.
>
> The idea is that if TSX is *permanently* off, there's no way to trigger
> the bug, regardless of how the user has things configured in BIOS or the
> kernel.  So from the user's standpoint, the CPU is not affected, and
> never was, regardless of kernel/BIOS settings and microcode.
>
> Is there not a way to distinguish "disabled in BIOS" from "permanently
> fused off"?  If not, then yes we should just consider all of them
> "Mitigation: TSX disabled".

I don't have access to an MDS_NO part with a BIOS switch for TSX, so I
can't investigate it.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-16 17:49   ` [MODERATED] " Borislav Petkov
  2020-04-16 22:57     ` mark gross
@ 2020-04-20 14:30     ` mark gross
  2020-04-20 16:17       ` Thomas Gleixner
  2020-04-20 21:45       ` Slow Randomizing Boosts Denial of Service - Bulletin #1 Thomas Gleixner
  1 sibling, 2 replies; 33+ messages in thread
From: mark gross @ 2020-04-20 14:30 UTC (permalink / raw)
  To: speck

On Thu, Apr 16, 2020 at 07:49:14PM +0200, speck for Borislav Petkov wrote:
> On Thu, Apr 16, 2020 at 12:20:21PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> > > +The possible values contained in this file are:
> > > +
> > > + ============================== =============================================
> > > + Not affected                   Processor not vulnerable
> > > + Vulnerable                     Processor vulnerable and mitigation disabled
> > > + Vulnerable: No microcode       Processor vulnerable and microcode is missing
> > > +                                mitigation
> > > + Mitigated: Microcode           Processor is vulnerable and mitigation is in
> > > +                                effect.
> > > + Not affected (TSX disabled)    Processor is only vulnerable when TSX is
> > > +                                enabled while this system was booted with TSX
> > > +                                disabled.
> > > + Unknown                        Running on virtual guest processor that is
> > > +                                affected but with no way to know if host
> > > +                                processor is mitigated or vulnerable.
> > > + ============================== =============================================
> > 
> > This doesn't match the code.
> 
> How's that?
> 
>  ============================== =============================================
>  Not affected                   Processor not vulnerable
>  Vulnerable                     Processor vulnerable and mitigation disabled
>  Vulnerable: No microcode       Processor vulnerable and microcode is missing
>                                 mitigation
>  Mitigation: Microcode          Processor is vulnerable and mitigation is in
>                                 effect.
>  Mitigation: TSX disabled       Processor is only vulnerable when TSX is
>                                 enabled while this system was booted with TSX
>                                 disabled.
>  Unknown: Dependent on
>  hypervisor status              Running on virtual guest processor that is
>                                 affected but with no way to know if host
>                                 processor is mitigated or vulnerable.
>  ============================== =============================================
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> -- 
Boris, 
from your emails it looks like you are putting together the final version.  I
have recently (this morning) found out I need to add another family/model to
the affected processor list.  Do you have a branch I could use to make a
follow-up patch?  (Or, if you are cool with me adding it to my local version
and dealing with any minor fixups, let me know.)

thanks,
--mark

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-20 14:30     ` mark gross
@ 2020-04-20 16:17       ` Thomas Gleixner
  2020-04-20 22:30         ` [MODERATED] " mark gross
  2020-04-20 21:45       ` Slow Randomizing Boosts Denial of Service - Bulletin #1 Thomas Gleixner
  1 sibling, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2020-04-20 16:17 UTC (permalink / raw)
  To: speck

speck for mark gross <speck@linutronix.de> writes:
> On Thu, Apr 16, 2020 at 07:49:14PM +0200, speck for Borislav Petkov wrote:
> from your emails it looks like you are putting together the final version.  I
> have recently (this morning) found out I need to add another family
> model to

Sigh...

> the affected processor list.  Do you have a branch I could use to make a follow
> up patch too?  (or if you are cool with me adding it to my local version and
> dealing with any minor fix ups let me know)

Can you please send delta patch on top of your v8?

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Slow Randomizing Boosts Denial of Service - Bulletin #1
  2020-04-20 14:30     ` mark gross
  2020-04-20 16:17       ` Thomas Gleixner
@ 2020-04-20 21:45       ` Thomas Gleixner
  2020-04-23 21:35         ` [MODERATED] " mark gross
  1 sibling, 1 reply; 33+ messages in thread
From: Thomas Gleixner @ 2020-04-20 21:45 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 877 bytes --]

Folks,

I've merged the V8 series with the polishing done by Borislav into the
master branch (based on 5.7-rc2)  of the speck git repository at:

   cvs.ou.linutronix.de:linux/speck/linux

The commits are considered stable from now on. Any further tweaks need
to go on top.

The stable branches linux-5.6.y, linux-5.4.y, linux-4.19.y and
linux-4.14.y have been updated with backports. Kernels prior to 4.14 are
left as an exercise to the members of the kernel necrophilia cult as
usual.

A tarball with git bundles is attached.

The subject line of this mail is a reference to the following part of
the documentation:

  Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from
  other logical processors that miss their core caches, with an impact
  similar to legacy locked cache-line-split accesses.

IOW, yet another DoS tool. Oh well...

Thanks,

        tglx


[-- Attachment #2: srbds.tar.xz --]
[-- Type: application/x-xz, Size: 39256 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-20 16:17       ` Thomas Gleixner
@ 2020-04-20 22:30         ` mark gross
  0 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-20 22:30 UTC (permalink / raw)
  To: speck

On Mon, Apr 20, 2020 at 06:17:51PM +0200, speck for Thomas Gleixner wrote:
> speck for mark gross <speck@linutronix.de> writes:
> > On Thu, Apr 16, 2020 at 07:49:14PM +0200, speck for Borislav Petkov wrote:
> > from your emails it looks like you are putting together the final version.  I
> > have recently (this morning) found out I need to add another family
> > model to
> 
> Sigh...
> 
> > the affected processor list.  Do you have a branch I could use to make a follow
> > up patch too?  (or if you are cool with me adding it to my local version and
> > dealing with any minor fix ups let me know)
> 
> Can you please send delta patch on top of your v8?
No need, I got confused.

FWIW, the white paper listed Ivy Bridge (family 6, model 3A, any stepping) as
affected, then removed it because Ivy Bridge is no longer getting uCode
updates.

Recently (this morning) they started recommending leaving Ivy Bridge in the
kernel patch, as it is affected.

I thought I had yanked it from the cpu_vuln_blacklist array, but it turns out
I didn't.

So it's good as is :)

--mark

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
                   ` (7 preceding siblings ...)
  2020-04-16 17:20 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Josh Poimboeuf
@ 2020-04-21 17:30 ` Borislav Petkov
  2020-04-21 17:34   ` Andrew Cooper
  8 siblings, 1 reply; 33+ messages in thread
From: Borislav Petkov @ 2020-04-21 17:30 UTC (permalink / raw)
  To: speck

On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
> +Mitigation mechanism
> +-------------------
> +Intel will release microcode updates ...

So while we're on the subject and in case I haven't missed that
communication: is that microcode somewhere so that I can test my
backports with it?

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-21 17:30 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Borislav Petkov
@ 2020-04-21 17:34   ` Andrew Cooper
  2020-04-21 18:19     ` Borislav Petkov
  0 siblings, 1 reply; 33+ messages in thread
From: Andrew Cooper @ 2020-04-21 17:34 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 537 bytes --]

On 21/04/2020 18:30, speck for Borislav Petkov wrote:
> On Thu, Jan 30, 2020 at 11:12:02AM -0800, speck for mark gross wrote:
>> +Mitigation mechanism
>> +-------------------
>> +Intel will release microcode updates ...
> So while we're on the subject and in case I haven't missed that
> communication: is that microcode somewhere so that I can test my
> backports with it?

It is in https://github.com/otcshare/Intel-Restricted-2020-1-IPU

If you haven't already got access, talk to mcu_administrator@intel.com

~Andrew


^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4
  2020-04-21 17:34   ` Andrew Cooper
@ 2020-04-21 18:19     ` Borislav Petkov
  0 siblings, 0 replies; 33+ messages in thread
From: Borislav Petkov @ 2020-04-21 18:19 UTC (permalink / raw)
  To: speck

On Tue, Apr 21, 2020 at 06:34:48PM +0100, speck for Andrew Cooper wrote:
> It is in https://github.com/otcshare/Intel-Restricted-2020-1-IPU
> 
> If you haven't already got access, talk to mcu_administrator@intel.com

Thanks Andrew. I think I know a guy who has... :-)

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Slow Randomizing Boosts Denial of Service - Bulletin #1
  2020-04-20 21:45       ` Slow Randomizing Boosts Denial of Service - Bulletin #1 Thomas Gleixner
@ 2020-04-23 21:35         ` mark gross
  2020-04-24  7:01           ` Greg KH
  0 siblings, 1 reply; 33+ messages in thread
From: mark gross @ 2020-04-23 21:35 UTC (permalink / raw)
  To: speck

On Mon, Apr 20, 2020 at 11:45:06PM +0200, speck for Thomas Gleixner wrote:
> Folks,
> 
> I've merged the V8 series with the polishing done by Borislav into the
> master branch (based on 5.7-rc2)  of the speck git repository at:
> 
>    cvs.ou.linutronix.de:linux/speck/linux
> 
> The commits are considered stable from now on. Any further tweaks need
> to go on top.
> 
> The stable branches linux-5.6.y, linux-5.4.y, linux-4.19.y and
> linux-4.14.y have been updated with backports. Kernels prior to 4.14 are
> left as an exercise to the members of the kernel necrophilia cult as
> usual.

We have run all these through the test cases I used in my development.  They
all pass.

FWIW I did backports from 4.14.y to 4.9 (easy) and 4.4 (needed to cherry-pick
an extra patch to make it work).  These also pass testing.

Should I post them to this list or sit on them until disclosure?

Thank you for all your help on this effort!  I do appreciate it.

--mark

> A tarball with git bundles is attached.
> 
> The subject line of this mail is a reference to the following part of
> the documentation:
> 
>   Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from
>   other logical processors that miss their core caches, with an impact
>   similar to legacy locked cache-line-split accesses.
> 
> IOW, yet another DoS tool. Oh well...
> 
> Thanks,
> 
>         tglx
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Slow Randomizing Boosts Denial of Service - Bulletin #1
  2020-04-23 21:35         ` [MODERATED] " mark gross
@ 2020-04-24  7:01           ` Greg KH
  2020-04-27 15:10             ` mark gross
  0 siblings, 1 reply; 33+ messages in thread
From: Greg KH @ 2020-04-24  7:01 UTC (permalink / raw)
  To: speck

On Thu, Apr 23, 2020 at 02:35:43PM -0700, speck for mark gross wrote:
> On Mon, Apr 20, 2020 at 11:45:06PM +0200, speck for Thomas Gleixner wrote:
> > Folks,
> > 
> > I've merged the V8 series with the polishing done by Borislav into the
> > master branch (based on 5.7-rc2)  of the speck git repository at:
> > 
> >    cvs.ou.linutronix.de:linux/speck/linux
> > 
> > The commits are considered stable from now on. Any further tweaks need
> > to go on top.
> > 
> > The stable branches linux-5.6.y, linux-5.4.y, linux-4.19.y and
> > linux-4.14.y have been updated with backports. Kernels prior to 4.14 are
> > left as an exercise to the members of the kernel necrophilia cult as
> > usual.
> 
> We have run all these through the test cases I used in my development. They all
> pass.
> 
> FWIW I did backports from 4.14.y to 4.9 (easy) and 4.4 (needed to cherry-pick
> an extra patch to make it work).  These also pass testing.
> 
> Should I post them to this list or sit on them until disclosure?

Why wait?

Please post them, I would greatly appreciate them.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [MODERATED] Re: Slow Randomizing Boosts Denial of Service - Bulletin #1
  2020-04-24  7:01           ` Greg KH
@ 2020-04-27 15:10             ` mark gross
  0 siblings, 0 replies; 33+ messages in thread
From: mark gross @ 2020-04-27 15:10 UTC (permalink / raw)
  To: speck

On Fri, Apr 24, 2020 at 09:01:06AM +0200, speck for Greg KH wrote:
> On Thu, Apr 23, 2020 at 02:35:43PM -0700, speck for mark gross wrote:
> > On Mon, Apr 20, 2020 at 11:45:06PM +0200, speck for Thomas Gleixner wrote:
> > > Folks,
> > > 
> > > I've merged the V8 series with the polishing done by Borislav into the
> > > master branch (based on 5.7-rc2)  of the speck git repository at:
> > > 
> > >    cvs.ou.linutronix.de:linux/speck/linux
> > > 
> > > The commits are considered stable from now on. Any further tweaks need
> > > to go on top.
> > > 
> > > The stable branches linux-5.6.y, linux-5.4.y, linux-4.19.y and
> > > linux-4.14.y have been updated with backports. Kernels prior to 4.14 are
> > > left as an exercise to the members of the kernel necrophilia cult as
> > > usual.
> > 
> > We have run all of these through the test cases I used during development.
> > They all pass.
> > 
> > FWIW, I did backports from 4.14.y to 4.9 (easy) and 4.4 (which needed an
> > extra cherry-picked patch to make it work). These also pass testing.
> > 
> > Should I post them to this list or sit on them until disclosure?
> 
> Why wait?
> 
> Please post them, I would greatly appreciate them.
OK.

--mark


end of thread, other threads:[~2020-04-27 15:10 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
2020-04-16  0:14 [MODERATED] [PATCH 0/4] V8 more sampling fun 0 mark gross
2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V8 more sampling fun 3 mark gross
2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V8 more sampling fun 4 mark gross
2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V8 more sampling fun 1 mark gross
2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V8 more sampling fun 2 mark gross
2020-04-16 17:15 ` [MODERATED] Re: [PATCH 1/4] V8 more sampling fun 1 Josh Poimboeuf
2020-04-16 17:30   ` Borislav Petkov
2020-04-16 17:16 ` [MODERATED] Re: [PATCH 2/4] V8 more sampling fun 2 Josh Poimboeuf
2020-04-16 17:33   ` [MODERATED] " Borislav Petkov
2020-04-16 22:47     ` mark gross
2020-04-16 17:17 ` [MODERATED] Re: [PATCH 3/4] V8 more sampling fun 3 Josh Poimboeuf
2020-04-16 17:44   ` Borislav Petkov
2020-04-16 18:01     ` Josh Poimboeuf
2020-04-16 22:45       ` mark gross
2020-04-16 22:57     ` mark gross
2020-04-17 12:34     ` Thomas Gleixner
2020-04-17 13:19       ` [MODERATED] " Josh Poimboeuf
2020-04-17 16:46         ` Luck, Tony
2020-04-17 19:22         ` Thomas Gleixner
2020-04-16 22:54   ` [MODERATED] " mark gross
2020-04-16 17:20 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Josh Poimboeuf
2020-04-16 17:49   ` [MODERATED] " Borislav Petkov
2020-04-16 22:57     ` mark gross
2020-04-20 14:30     ` mark gross
2020-04-20 16:17       ` Thomas Gleixner
2020-04-20 22:30         ` [MODERATED] " mark gross
2020-04-20 21:45       ` Slow Randomizing Boosts Denial of Service - Bulletin #1 Thomas Gleixner
2020-04-23 21:35         ` [MODERATED] " mark gross
2020-04-24  7:01           ` Greg KH
2020-04-27 15:10             ` mark gross
2020-04-21 17:30 ` [MODERATED] Re: [PATCH 4/4] V8 more sampling fun 4 Borislav Petkov
2020-04-21 17:34   ` Andrew Cooper
2020-04-21 18:19     ` Borislav Petkov
