historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH 3/4] V7 more sampling fun 3
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
@ 2020-01-16 22:16 ` mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V7 more sampling fun 4 mark gross
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-01-16 22:16 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
 (SRBDS) mitigation control.

SRBDS is an MDS-like speculative side channel that can leak bits from
the RNG across cores and threads. New microcode serializes the processor
access during the execution of RDRAND and RDSEED.  This ensures that the
shared buffer is overwritten before it is released for reuse.

While it is present on all affected CPU models, the microcode mitigation
is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
cases where TSX is not supported or has been disabled with TSX_CTRL.

The mitigation is activated by default on affected processors and it
increases latency for RDRAND and RDSEED instructions.  Among other
effects this will reduce throughput from /dev/urandom.

* enable administrators to configure the mitigation off when desired
  using either mitigations=off or srbds=off.
* export vulnerability status via sysfs
* rename file scoped macros to apply for non-whitelist table
  initializations.
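
As an illustration (not part of this patch), a minimal userspace check of
the status exported below might look like this; the sysfs path is the one
added by this patch, everything else is a sketch:

	#include <stdio.h>

	int main(void)
	{
		char line[128];
		FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/srbds", "r");

		if (!f) {
			perror("srbds");	/* older kernel: attribute absent */
			return 1;
		}
		if (fgets(line, sizeof(line), f))
			printf("SRBDS status: %s", line); /* e.g. "Mitigated: Microcode" */
		fclose(f);
		return 0;
	}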

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 116 ++++++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  53 ++++++++
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 drivers/base/cpu.c                            |   8 ++
 8 files changed, 206 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 2e0e3b45d02a..b39531a3c5bc 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/srbds
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
 Date:		January 2018
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f2a93c8679e8..6bc125ff09da 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4757,6 +4757,26 @@
 			the kernel will oops in either "warn" or "fatal"
 			mode.
 
+	srbds=		[X86,INTEL]
+			Control the Special Register Buffer Data
+			Sampling (SRBDS) mitigation.
+
+			Certain CPUs are vulnerable to an MDS-like
+			exploit which can leak bits from the random
+			number generator.
+
+			By default, this issue is mitigated by
+			microcode.  However, the microcode fix can cause
+			the RDRAND and RDSEED instructions to become
+			much slower.  Among other effects, this will
+			result in reduced throughput from /dev/urandom.
+
+			The microcode mitigation can be disabled with
+			the following option:
+
+			off:    Disable mitigation and remove
+				performance impact to RDRAND and RDSEED
+
 	srcutree.counter_wrap_check [KNL]
 			Specifies how frequently to check for
 			grace-period sequence counter wrap for the
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index db189945e9b0..02dabc9e77b0 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -362,6 +362,7 @@
 #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
 #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
 #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
+#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
 #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
 #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
@@ -407,5 +408,6 @@
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
 #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 12c9684d59ba..3efde600a674 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -128,6 +128,10 @@
 #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
 #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
 
+/* SRBDS support */
+#define MSR_IA32_MCU_OPT_CTRL		0x00000123
+#define RNGDS_MITG_DIS			BIT(0)
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c39..addef92109fe 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_print_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -108,6 +109,7 @@ void __init check_bugs(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	srbds_select_mitigation();
 
 	/*
 	 * As MDS and TAA mitigations are inter-related, print MDS
@@ -397,6 +399,107 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"SRBDS: " fmt
+
+enum srbds_mitigations {
+	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_UCODE_NEEDED,
+	SRBDS_MITIGATION_FULL,
+	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
+	SRBDS_MITIGATION_HYPERVISOR,
+};
+
+static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
+static const char * const srbds_strings[] = {
+	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
+	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",
+	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
+	[SRBDS_MITIGATION_HYPERVISOR]		= "Unknown: Dependent on hypervisor status",
+};
+
+static bool srbds_off;
+
+void update_srbds_msr(void)
+{
+	u64 mcu_ctrl;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
+		return;
+
+	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+
+	switch (srbds_mitigation) {
+	case SRBDS_MITIGATION_OFF:
+	case SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF:
+		mcu_ctrl |= RNGDS_MITG_DIS;
+		break;
+	case SRBDS_MITIGATION_FULL:
+		mcu_ctrl &= ~RNGDS_MITG_DIS;
+		break;
+	default:
+		break;
+	}
+
+	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+}
+
+static void __init srbds_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
+	/*
+	 * Check to see if this is one of the MDS_NO systems supporting
+	 * TSX that are only exposed to SRBDS when TSX is enabled.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
+		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
+		goto out;
+	}
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
+		goto out;
+	}
+
+	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
+		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
+		goto out;
+	}
+
+	if (cpu_mitigations_off() || srbds_off) {
+		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
+			srbds_mitigation = SRBDS_MITIGATION_OFF;
+	}
+out:
+	update_srbds_msr();
+	pr_info("%s\n", srbds_strings[srbds_mitigation]);
+}
+
+static int __init srbds_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return 0;
+
+	srbds_off = !strcmp(str, "off");
+
+	return 0;
+}
+early_param("srbds", srbds_parse_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -1528,6 +1631,11 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static ssize_t srbds_show_state(char *buf)
+{
+	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -1572,6 +1680,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_ITLB_MULTIHIT:
 		return itlb_multihit_show_state(buf);
 
+	case X86_BUG_SRBDS:
+		return srbds_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1618,4 +1729,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
 }
+
+ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
+}
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2bea1cc8dcb4..2c9be1fd3c72 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,6 +1075,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
+#define VULNBL_INTEL_STEPPING(model, steppings, issues)			   \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
+					    INTEL_FAM6_##model, steppings, \
+					    X86_FEATURE_ANY, issues)
+
+#define SRBDS		BIT(0)
+#define SRBDS_IF_TSX	BIT(1)
+
+static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
+	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
+	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
+	{}
+};
+
 static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
 	const struct x86_cpu_id *m = x86_match_cpu(table);
@@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
+	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
+		/*
+		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
+		 * they show as "Not affected".
+		 */
+		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
+		    !cpu_has(c, X86_FEATURE_RDSEED))
+			goto srbds_not_affected;
+		/*
+		 * Parts in the blacklist that enumerate MDS_NO are only
+		 * vulnerable if TSX can be used.  To handle cases where TSX
+		 * gets fused off, check whether TSX is fused off and the part
+		 * is thus not affected.
+		 *
+		 * When running with up-to-date microcode, TSX_CTRL is only
+		 * enumerated on parts where TSX is fused on.
+		 * When running with microcode not supporting TSX_CTRL, we
+		 * check for RTM.
+		 */
+		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
+		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
+		      cpu_has(c, X86_FEATURE_RTM)))
+			goto srbds_not_affected;
+
+		setup_force_cpu_bug(X86_BUG_SRBDS);
+	}
+srbds_not_affected:
+
 	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
@@ -1594,6 +1646,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
+	update_srbds_msr();
 }
 
 static __init int setup_noclflush(char *arg)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 37fdefd14f28..f3e2fc44dba0 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,8 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+void update_srbds_msr(void);
+
 #ifdef CONFIG_CPU_SUP_INTEL
 enum tsx_ctrl_states {
 	TSX_CTRL_ENABLE,
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 9a1c00fbbaef..d2136ab9b14a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_srbds(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
 static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_mds.attr,
 	&dev_attr_tsx_async_abort.attr,
 	&dev_attr_itlb_multihit.attr,
+	&dev_attr_srbds.attr,
 	NULL
 };
 
-- 
2.17.1


* [MODERATED] [PATCH 4/4] V7 more sampling fun 4
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V7 more sampling fun 3 mark gross
@ 2020-01-30 19:12 ` mark gross
  2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V7 more sampling fun 2 mark gross
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-01-30 19:12 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 4/4] x86/speculation: SRBDS vulnerability and mitigation
 documentation

Add documentation for the SRBDS vulnerability and its mitigation.

Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../special-register-buffer-data-sampling.rst | 148 ++++++++++++++++++
 2 files changed, 149 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 0795e3c2643f..ca4dbdd9016d 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
    mds
    tsx_async_abort
    multihit.rst
+   special-register-buffer-data-sampling.rst
diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
new file mode 100644
index 000000000000..9f1ee4064fcd
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -0,0 +1,148 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+SRBDS - Special Register Buffer Data Sampling
+=============================================
+
+SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
+infer values returned from special register accesses.  Special register
+accesses are accesses to off core registers.  According to Intel's evaluation,
+the special register reads that have a security expectation of privacy are:
+RDRAND, RDSEED and SGX EGETKEY.
+
+When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved to the
+core through the special register mechanism that is susceptible to MDS attacks.
+
+Affected processors
+--------------------
+Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED and
+are vulnerable to the MFBDS (Microarchitectural Fill Buffer Data Sampling)
+variant of MDS (Microarchitectural Data Sampling), or to TAA (TSX Asynchronous
+Abort) when TSX is enabled:
+
+  =============  ============  ==========================
+  common name    Family_Model  Stepping
+  =============  ============  ==========================
+  Ivybridge      06_3AH        All
+
+  Haswell        06_3CH        All
+  Haswell_L      06_45H        All
+  Haswell_G      06_46H        All
+
+  Broadwell_G    06_47H        All
+  Broadwell      06_3DH        All
+
+  Skylake_L      06_4EH        All
+  Skylake        06_5EH        All
+
+  Kabylake_L     06_8EH        <=A
+  Kabylake_L     06_8EH        0xB only if TSX is enabled
+  Kabylake_L     06_8EH        0xC only if TSX is enabled
+
+  Kabylake       06_9EH        <=B
+  Kabylake       06_9EH        0xC only if TSX is enabled
+  Kabylake       06_9EH        0xD only if TSX is enabled
+  =============  ============  ==========================
+
+Related CVEs
+------------
+
+The following CVE entry is related to this SRBDS issue:
+
+    ==============  =====  =====================================
+    CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
+    ==============  =====  =====================================
+
+Attack scenarios
+----------------
+An unprivileged user can extract returned values from RDRAND and RDSEED
+executed on another core or sibling thread using MDS techniques.
+
+
+Mitigation mechanism
+--------------------
+Intel will release microcode updates that modify the RDRAND, RDSEED, and
+EGETKEY instructions to overwrite secret special register data in the shared
+staging buffer before the secret data can be accessed by another logical
+processor.
+
+During execution of the RDRAND, RDSEED, or EGETKEY instruction, off-core
+accesses from other logical processors will be delayed until the special
+register read is complete and the secret data in the shared staging buffer is
+overwritten.
+
+This has three effects on performance:
+
+#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
+
+#. Executing RDRAND at the same time on multiple logical processors will be
+   serialized, resulting in an overall reduction in the maximum RDRAND
+   bandwidth.
+
+#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
+   logical processors that miss their core caches, with an impact similar to
+   legacy locked cache-line-split accesses.
+
+The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
+the mitigation for RDRAND and RDSEED instructions executed outside of Intel
+Software Guard Extensions (Intel SGX) enclaves. On logical processors that
+disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
+take longer to execute and do not impact performance of sibling logical
+processors' memory accesses. The opt-out mechanism does not affect Intel SGX
+enclaves (including execution of RDRAND or RDSEED inside of an enclave, as well
+as EGETKEY execution).
+
+IA32_MCU_OPT_CTRL MSR Definition
+--------------------------------
+Along with the mitigation for this issue, Intel added a new thread-scope
+IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
+RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
+9]==1. This MSR is introduced through the microcode update.
+
+Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
+disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
+enclave on that logical processor. Opting out of the mitigation for a
+particular logical processor does not affect the RDRAND and RDSEED mitigations
+for other logical processors.
+
+Note that inside of an Intel SGX enclave, the mitigation is applied regardless
+of the value of RNGDS_MITG_DIS.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+The kernel command line allows control over the SRBDS mitigation at boot time
+with the option "srbds=".  The option for this is:
+
+  ============= =============================================================
+  off           This option disables SRBDS mitigation for RDRAND and RDSEED on
+                affected platforms.
+  ============= =============================================================
+
+SRBDS System Information
+------------------------
+The Linux kernel provides vulnerability status information through sysfs.  For
+SRBDS this can be accessed by the following sysfs file:
+/sys/devices/system/cpu/vulnerabilities/srbds
+
+The possible values contained in this file are:
+
+ ============================== =============================================
+ Not affected                   Processor not vulnerable
+ Vulnerable                     Processor vulnerable and mitigation disabled
+ Vulnerable: No microcode       Processor vulnerable and microcode is missing
+                                mitigation
+ Mitigated: Microcode           Processor is vulnerable and mitigation is in
+                                effect.
+ Not affected (TSX disabled)    Processor is only vulnerable when TSX is
+                                enabled and this system was booted with TSX
+                                disabled.
+ Unknown: Dependent on          Running on virtual guest processor that is
+ hypervisor status              affected but with no way to know if host
+                                processor is mitigated or vulnerable.
+ ============================== =============================================
+
+SRBDS Default mitigation
+------------------------
+This new microcode serializes processor access during execution of RDRAND and
+RDSEED and ensures that the shared buffer is overwritten before it is released
+for reuse.  Use the "srbds=off" kernel command line option to disable the
+mitigation for RDRAND and RDSEED.
-- 
2.17.1


* [MODERATED] [PATCH 2/4] V7 more sampling fun 2
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V7 more sampling fun 3 mark gross
  2020-01-30 19:12 ` [MODERATED] [PATCH 4/4] V7 more sampling fun 4 mark gross
@ 2020-03-17  0:56 ` mark gross
  2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V7 more sampling fun 1 mark gross
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-03-17  0:56 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 2/4] x86/cpu: clean up cpu_matches

To make cpu_matches() reusable with alternative matching tables, make
cpu_matches() take an x86_cpu_id table as a parameter.
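
For illustration, a stand-alone model of the change (the helper names mirror
the kernel code; the structure layout and table contents are made up):

	#include <stdbool.h>
	#include <stdio.h>

	struct x86_cpu_id {
		unsigned short model;
		unsigned long driver_data;	/* flag bits; 0 terminates */
	};

	#define NO_MDS	(1UL << 0)
	#define SRBDS	(1UL << 1)

	/* The table to search is now a parameter ... */
	static bool cpu_matches(unsigned long which,
				const struct x86_cpu_id *table,
				unsigned short model)
	{
		for (; table->driver_data; table++)
			if (table->model == model)
				return !!(table->driver_data & which);
		return false;
	}

	int main(void)
	{
		static const struct x86_cpu_id whitelist[] = { { 0x5c, NO_MDS }, {} };
		static const struct x86_cpu_id blacklist[] = { { 0x9e, SRBDS }, {} };

		/* ... so a whitelist and a blacklist can share one helper. */
		printf("%d %d\n", cpu_matches(NO_MDS, whitelist, 0x5c),
		       cpu_matches(SRBDS, blacklist, 0x9e));	/* prints: 1 1 */
		return 0;
	}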

Signed-off-by: Mark Gross <mgross@linux.intel.com>
---
 arch/x86/kernel/cpu/common.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..2bea1cc8dcb4 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1075,9 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 	{}
 };
 
-static bool __init cpu_matches(unsigned long which)
+static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
 {
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);
 
 	return m && !!(m->driver_data & which);
 }
@@ -1097,31 +1097,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	u64 ia32_cap = x86_read_arch_cap_msr();
 
 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(NO_ITLB_MULTIHIT, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
 
-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(NO_SPECULATION, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(NO_SPECTRE_V2))
+	if (!cpu_matches(NO_SPECTRE_V2, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(NO_SSB, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
 
-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(NO_MDS, cpu_vuln_whitelist) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
 		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(MSBDS_ONLY, cpu_vuln_whitelist))
 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
 	}
 
-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(NO_SWAPGS, cpu_vuln_whitelist))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
 	/*
@@ -1139,7 +1142,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
 		setup_force_cpu_bug(X86_BUG_TAA);
 
-	if (cpu_matches(NO_MELTDOWN))
+	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
 		return;
 
 	/* Rogue Data Cache Load? No! */
@@ -1148,7 +1151,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
 
-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(NO_L1TF, cpu_vuln_whitelist))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_L1TF);
-- 
2.17.1


* [MODERATED] [PATCH 1/4] V7 more sampling fun 1
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (2 preceding siblings ...)
  2020-03-17  0:56 ` [MODERATED] [PATCH 2/4] V7 more sampling fun 2 mark gross
@ 2020-03-17  0:56 ` mark gross
  2020-04-14  3:48 ` [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3 mark gross
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-03-17  0:56 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure

Intel uses the same family/model for several CPUs. Sometimes the
stepping must be checked to tell them apart.

On x86 there can be at most 16 steppings. Add a steppings bitmask to
x86_cpu_id, an X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE macro, and
support for matching against family/model/stepping.
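
For illustration, a stand-alone sketch of the matching semantics this adds
(the macro shapes follow the patch below; the example mask is made up):

	#include <stdio.h>

	#define GENMASK(h, l)	(((~0U) << (l)) & (~0U >> (31 - (h))))
	#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)
	#define X86_STEPPING_ANY	0

	/* A CPU matches when the bit for its stepping is set in the mask. */
	static int stepping_matches(unsigned short steppings, unsigned char stepping)
	{
		return steppings == X86_STEPPING_ANY ||
		       !!((1U << stepping) & steppings);
	}

	int main(void)
	{
		unsigned short mask = X86_STEPPINGS(0, 0xA);	/* steppings 0x0..0xA */

		printf("0x9: %d\n", stepping_matches(mask, 0x9));	/* 1 */
		printf("0xB: %d\n", stepping_matches(mask, 0xB));	/* 0 */
		return 0;
	}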

Signed-off-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/include/asm/cpu_device_id.h | 27 ++++++++++++++++++++++++---
 arch/x86/kernel/cpu/match.c          |  7 ++++++-
 include/linux/mod_devicetable.h      |  2 ++
 3 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index cf3d621c6892..4f0df2e46c95 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -20,12 +20,14 @@
 #define X86_CENTAUR_FAM6_C7_D		0xd
 #define X86_CENTAUR_FAM6_NANO		0xf
 
+#define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
  * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
  * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
  * @_data:	Driver specific data or NULL. The internal storage
  *		format is unsigned long. The supplied value, pointer
@@ -37,15 +39,34 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
-					   _feature, _data) {		\
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
+						    _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor,				\
 	.family		= _family,					\
 	.model		= _model,					\
+	.steppings	= _steppings,					\
 	.feature	= _feature,					\
 	.driver_data	= (unsigned long) _data				\
 }
 
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is cast to unsigned long internally.
+ *
+ * The steppings argument of X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE() is
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(vendor, family, model, \
+						X86_STEPPING_ANY, feature, data)
+
 /**
  * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
  * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index d3482eb43ff3..ad6776081e60 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
+		if (m->steppings != X86_STEPPING_ANY &&
+		    !(BIT(c->x86_stepping) & m->steppings))
+			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4c2ddd0941a7..0754b8d71262 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -663,6 +663,7 @@ struct x86_cpu_id {
 	__u16 vendor;
 	__u16 family;
 	__u16 model;
+	__u16 steppings;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
 };
@@ -671,6 +672,7 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*
-- 
2.17.1


* [MODERATED] [PATCH 0/4] V7 more sampling fun 0
@ 2020-04-13 18:10 mark gross
  2020-01-16 22:16 ` [MODERATED] [PATCH 3/4] V7 more sampling fun 3 mark gross
                   ` (8 more replies)
  0 siblings, 9 replies; 22+ messages in thread
From: mark gross @ 2020-04-13 18:10 UTC (permalink / raw)
  To: speck

From: mark gross <mgross@linux.intel.com>
Subject: [PATCH 0/4] V7 more sampling fun

This version implements cleanups and corrections from Thomas and Ben, and
enhances the check for fused-off TSX on processors in the blacklist that may
not have the microcode for TSX_CTRL.

---

Special Register Buffer Data Sampling is a sampling-type vulnerability that
leaks data across cores sharing the HW-RNG on vulnerable processors.

This leak is fixed by a microcode update, and the fix is enabled by default.

This new microcode serializes processor access during execution of RDRAND or
RDSEED. It ensures that the shared buffer is overwritten before it is released
for reuse.

The mitigation impacts the throughput of the RDRAND and RDSEED instructions
and the latency of RT processing running on the socket while RDRAND or RDSEED
is executing.  Micro-benchmarks calling RDRAND many times show a slowdown.
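
For a feel of the effect, a hypothetical user-space micro-benchmark (not part
of this series; build with gcc -O2 -mrdrnd) could time a burst of RDRANDs
before and after the microcode update:

	#include <stdio.h>
	#include <stdint.h>
	#include <immintrin.h>
	#include <x86intrin.h>

	int main(void)
	{
		unsigned long long v;
		uint64_t start, end;
		int i, ok = 0;

		start = __rdtsc();
		for (i = 0; i < 100000; i++)
			ok += _rdrand64_step(&v);	/* retry logic omitted */
		end = __rdtsc();

		printf("%d ok, ~%llu cycles per RDRAND\n", ok,
		       (unsigned long long)(end - start) / 100000);
		return 0;
	}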

This patch set enables kernel command line control of this mitigation and
exports vulnerability and mitigation status.
This patch set includes 4 patches:
* The first patch adds a steppings field to the x86_cpu_id structure and
  related macros.
* The second patch cleans up cpu_matches() so it can be used with
  alternative matching tables.
* The third patch enables the command line control of the mitigation as well
  as the sysfs export of vulnerability status.
* The fourth patch adds the Documentation/admin-guide/hw-vuln documentation
  for this issue and the control over the mitigation.

mark gross (4):
  x86/cpu: Add stepping field to x86_cpu_id structure
  x86/cpu: clean up cpu_matches
  x86/speculation: Special Register Buffer Data Sampling (SRBDS)
    mitigation control.
  x86/speculation: SRBDS vulnerability and mitigation documentation

 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../special-register-buffer-data-sampling.rst | 148 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  20 +++
 arch/x86/include/asm/cpu_device_id.h          |  27 +++-
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/msr-index.h              |   4 +
 arch/x86/kernel/cpu/bugs.c                    | 116 ++++++++++++++
 arch/x86/kernel/cpu/common.c                  |  78 +++++++--
 arch/x86/kernel/cpu/cpu.h                     |   2 +
 arch/x86/kernel/cpu/match.c                   |   7 +-
 drivers/base/cpu.c                            |   8 +
 include/linux/mod_devicetable.h               |   2 +
 13 files changed, 401 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst

-- 
2.17.1


* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (3 preceding siblings ...)
  2020-03-17  0:56 ` [MODERATED] [PATCH 1/4] V7 more sampling fun 1 mark gross
@ 2020-04-14  3:48 ` mark gross
  2020-04-14 16:23   ` Thomas Gleixner
  2020-04-14 10:58 ` Thomas Gleixner
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: mark gross @ 2020-04-14  3:48 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 3/4] x86/speculation: Special Register Buffer Data Sampling
>  (SRBDS) mitigation control.
> 
> SRBDS is an MDS-like speculative side channel that can leak bits from
> the RNG across cores and threads. New microcode serializes the processor
> access during the execution of RDRAND and RDSEED.  This ensures that the
> shared buffer is overwritten before it is released for reuse.
> 
> While it is present on all affected CPU models, the microcode mitigation
> is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the
> cases where TSX is not supported or has been disabled with TSX_CTRL.
> 
> The mitigation is activated by default on affected processors and it
> increases latency for RDRAND and RDSEED instructions.  Among other
> effects this will reduce throughput from /dev/urandom.
> 
> * enable administrators to configure the mitigation off when desired
>   using either mitigations=off or srbds=off.
> * export vulnerability status via sysfs
> * rename file scoped macros to apply for non-whitelist table
>   initializations.
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Reviewed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  .../ABI/testing/sysfs-devices-system-cpu      |   1 +
>  .../admin-guide/kernel-parameters.txt         |  20 +++
>  arch/x86/include/asm/cpufeatures.h            |   2 +
>  arch/x86/include/asm/msr-index.h              |   4 +
>  arch/x86/kernel/cpu/bugs.c                    | 116 ++++++++++++++++++
>  arch/x86/kernel/cpu/common.c                  |  53 ++++++++
>  arch/x86/kernel/cpu/cpu.h                     |   2 +
>  drivers/base/cpu.c                            |   8 ++
>  8 files changed, 206 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index 2e0e3b45d02a..b39531a3c5bc 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
>  		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
>  		/sys/devices/system/cpu/vulnerabilities/l1tf
>  		/sys/devices/system/cpu/vulnerabilities/mds
> +		/sys/devices/system/cpu/vulnerabilities/srbds
>  		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
>  		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
>  Date:		January 2018
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index f2a93c8679e8..6bc125ff09da 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4757,6 +4757,26 @@
>  			the kernel will oops in either "warn" or "fatal"
>  			mode.
>  
> +	srbds=		[X86,INTEL]
> +			Control the Special Register Buffer Data
> +			Sampling (SRBDS) mitigation.
> +
> +			Certain CPUs are vulnerable to an MDS-like
> +			exploit which can leak bits from the random
> +			number generator.
> +
> +			By default, this issue is mitigated by
> +			microcode.  However, the microcode fix can cause
> +			the RDRAND and RDSEED instructions to become
> +			much slower.  Among other effects, this will
> +			result in reduced throughput from /dev/urandom.
> +
> +			The microcode mitigation can be disabled with
> +			the following option:
> +
> +			off:    Disable mitigation and remove
> +				performance impact to RDRAND and RDSEED
> +
>  	srcutree.counter_wrap_check [KNL]
>  			Specifies how frequently to check for
>  			grace-period sequence counter wrap for the
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index db189945e9b0..02dabc9e77b0 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -362,6 +362,7 @@
>  #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
>  #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
>  #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
> +#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
>  #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
>  #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
>  #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
> @@ -407,5 +408,6 @@
>  #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
>  #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
>  #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
> +#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
>  
>  #endif /* _ASM_X86_CPUFEATURES_H */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 12c9684d59ba..3efde600a674 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -128,6 +128,10 @@
>  #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
>  #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
>  
> +/* SRBDS support */
> +#define MSR_IA32_MCU_OPT_CTRL		0x00000123
> +#define RNGDS_MITG_DIS			BIT(0)
> +
>  #define MSR_IA32_SYSENTER_CS		0x00000174
>  #define MSR_IA32_SYSENTER_ESP		0x00000175
>  #define MSR_IA32_SYSENTER_EIP		0x00000176
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index ed54b3b21c39..addef92109fe 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
>  static void __init mds_select_mitigation(void);
>  static void __init mds_print_mitigation(void);
>  static void __init taa_select_mitigation(void);
> +static void __init srbds_select_mitigation(void);
>  
>  /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
>  u64 x86_spec_ctrl_base;
> @@ -108,6 +109,7 @@ void __init check_bugs(void)
>  	l1tf_select_mitigation();
>  	mds_select_mitigation();
>  	taa_select_mitigation();
> +	srbds_select_mitigation();
>  
>  	/*
>  	 * As MDS and TAA mitigations are inter-related, print MDS
> @@ -397,6 +399,107 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
>  }
>  early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
>  
> +#undef pr_fmt
> +#define pr_fmt(fmt)	"SRBDS: " fmt
> +
> +enum srbds_mitigations {
> +	SRBDS_MITIGATION_OFF,
> +	SRBDS_MITIGATION_UCODE_NEEDED,
> +	SRBDS_MITIGATION_FULL,
> +	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
> +	SRBDS_MITIGATION_HYPERVISOR,
> +};
> +
> +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
> +	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",
> +	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> +	[SRBDS_MITIGATION_HYPERVISOR]		= "Unknown: Dependent on hypervisor status",
> +};
> +
> +static bool srbds_off;
> +
> +void update_srbds_msr(void)
> +{
> +	u64 mcu_ctrl;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return;
> +
> +	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
> +		return;
> +
> +	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +
> +	switch (srbds_mitigation) {
> +	case SRBDS_MITIGATION_OFF:
> +	case SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF:
> +		mcu_ctrl |= RNGDS_MITG_DIS;
> +		break;
> +	case SRBDS_MITIGATION_FULL:
> +		mcu_ctrl &= ~RNGDS_MITG_DIS;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
> +}
> +
> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
> +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
> +		goto out;
> +	}
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> +		goto out;
> +	}
> +
> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +		goto out;
> +	}
> +
> +	if (cpu_mitigations_off() || srbds_off) {
> +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> +	}
> +out:
The test for cpu_mitigations_off() or srbds_off needs to be after the out:
label.  Otherwise, when TSX is off and srbds=off is set, we report the wrong
answer.
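
That is, roughly (untested sketch):

	out:
		if (cpu_mitigations_off() || srbds_off) {
			if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
				srbds_mitigation = SRBDS_MITIGATION_OFF;
		}
		update_srbds_msr();
		pr_info("%s\n", srbds_strings[srbds_mitigation]);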

--mark

> +	update_srbds_msr();
> +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
> +static int __init srbds_parse_cmdline(char *str)
> +{
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return 0;
> +
> +	srbds_off = !strcmp(str, "off");
> +
> +	return 0;
> +}
> +early_param("srbds", srbds_parse_cmdline);
> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V1 : " fmt
>  
> @@ -1528,6 +1631,11 @@ static char *ibpb_state(void)
>  	return "";
>  }
>  
> +static ssize_t srbds_show_state(char *buf)
> +{
> +	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
> +}
> +
>  static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
>  			       char *buf, unsigned int bug)
>  {
> @@ -1572,6 +1680,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>  	case X86_BUG_ITLB_MULTIHIT:
>  		return itlb_multihit_show_state(buf);
>  
> +	case X86_BUG_SRBDS:
> +		return srbds_show_state(buf);
> +
>  	default:
>  		break;
>  	}
> @@ -1618,4 +1729,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
>  {
>  	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
>  }
> +
> +ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> +	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
> +}
>  #endif
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 2bea1cc8dcb4..2c9be1fd3c72 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1075,6 +1075,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
>  	{}
>  };
>  
> +#define VULNBL_INTEL_STEPPING(model, steppings, issues)			   \
> +	X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(INTEL, 6,		   \
> +					    INTEL_FAM6_##model, steppings, \
> +					    X86_FEATURE_ANY, issues)
> +
> +#define SRBDS		BIT(0)
> +#define SRBDS_IF_TSX	BIT(1)
> +
> +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> +	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> +	{}
> +};
> +
>  static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>  {
>  	const struct x86_cpu_id *m = x86_match_cpu(table);
> @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> +		/*
> +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +		 * they show as "Not affected".
> +		 */
> +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> +		    !cpu_has(c, X86_FEATURE_RDSEED))
> +			goto srbds_not_affected;
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulnerable if TSX can be used.  To handle cases where TSX
> +		 * gets fused off, check whether TSX is fused off and the part
> +		 * is thus not affected.
> +		 *
> +		 * When running with up-to-date microcode, TSX_CTRL is only
> +		 * enumerated on parts where TSX is fused on.
> +		 * When running with microcode not supporting TSX_CTRL, we
> +		 * check for RTM.
> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> +		      cpu_has(c, X86_FEATURE_RTM)))
> +			goto srbds_not_affected;
> +
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +	}
> +srbds_not_affected:
> +
>  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
>  		return;
>  
> @@ -1594,6 +1646,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
>  	mtrr_ap_init();
>  	validate_apic_and_package_id(c);
>  	x86_spec_ctrl_setup_ap();
> +	update_srbds_msr();
>  }
>  
>  static __init int setup_noclflush(char *arg)
> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 37fdefd14f28..f3e2fc44dba0 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -44,6 +44,8 @@ struct _tlb_table {
>  extern const struct cpu_dev *const __x86_cpu_dev_start[],
>  			    *const __x86_cpu_dev_end[];
>  
> +void update_srbds_msr(void);
> +
>  #ifdef CONFIG_CPU_SUP_INTEL
>  enum tsx_ctrl_states {
>  	TSX_CTRL_ENABLE,
> diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> index 9a1c00fbbaef..d2136ab9b14a 100644
> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -562,6 +562,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
>  	return sprintf(buf, "Not affected\n");
>  }
>  
> +ssize_t __weak cpu_show_srbds(struct device *dev,
> +			      struct device_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "Not affected\n");
> +}
> +
>  static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
>  static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
>  static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
> @@ -570,6 +576,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
>  static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
>  static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
>  static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
> +static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
>  
>  static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_meltdown.attr,
> @@ -580,6 +587,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
>  	&dev_attr_mds.attr,
>  	&dev_attr_tsx_async_abort.attr,
>  	&dev_attr_itlb_multihit.attr,
> +	&dev_attr_srbds.attr,
>  	NULL
>  };
>  
> -- 
> 2.17.1


* Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (4 preceding siblings ...)
  2020-04-14  3:48 ` [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3 mark gross
@ 2020-04-14 10:58 ` Thomas Gleixner
  2020-04-14 16:43   ` [MODERATED] " mark gross
  2020-04-14 20:02 ` Josh Poimboeuf
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-04-14 10:58 UTC (permalink / raw)
  To: speck

Mark,

speck for mark gross <speck@linutronix.de> writes:
> +static void __init srbds_select_mitigation(void)
> +{
> +	u64 ia32_cap;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> +		return;
> +	/*
> +	 * Check to see if this is one of the MDS_NO systems supporting
> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> +	 */
> +	ia32_cap = x86_read_arch_cap_msr();
> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {

This does not work for the CPUs which have the IF_TSX thing because your
setup magic does not set X86_BUG_SRBDS for those.... See below.

> +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
> +		goto out;
> +	}
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> +		goto out;
> +	}
> +
> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> +		goto out;
> +	}
> +
> +	if (cpu_mitigations_off() || srbds_off) {
> +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)

That's pointless. You already jumped over this part in case of TSX off.

> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> +	}

The whole goto stuff can be completely avoided.

	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
		return;

	if (boot_cpu_has_bug(X86_BUG_SRBDS_IF_TSX) && !boot_cpu_has(X86_FEATURE_RTM))
		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
        else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
	else if (cpu_mitigations_off() || srbds_off)
		srbds_mitigation = SRBDS_MITIGATION_OFF;

> +out:
> +	update_srbds_msr();
> +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> +}

> +#define SRBDS		BIT(0)
> +#define SRBDS_IF_TSX	BIT(1)
> +
> +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> +	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> +	{}
> +};
> +
>  static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
>  {
>  	const struct x86_cpu_id *m = x86_match_cpu(table);
> @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> +		/*
> +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +		 * they show as "Not affected".
> +		 */
> +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> +		    !cpu_has(c, X86_FEATURE_RDSEED))
> +			goto srbds_not_affected;
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulnerable if TSX can be used.  To handle cases where TSX
> +		 * gets fused off, check whether TSX is fused off and the part
> +		 * is thus not affected.
> +		 *
> +		 * When running with up-to-date microcode, TSX_CTRL is only
> +		 * enumerated on parts where TSX is fused on.
> +		 * When running with microcode not supporting TSX_CTRL, we
> +		 * check for RTM.
> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> +		      cpu_has(c, X86_FEATURE_RTM)))

So you added SRBDS_IF_TSX and then you check for both and still have
that check for _all_ CPUs. Also the TSX_CTRL_MSR part is weird. That's
completely irrelevant. The only interesting part is X86_FEATURE_RTM.

What you really want is:

	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS | SRBDS_IF_TSX),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS | SRBDS_IF_TSX),

	/*
	 * CPUs which have neither RDRAND nor RDSEED are not affected.
	 */
	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
			setup_force_cpu_bug(X86_BUG_SRBDS);
			/* These CPUs are only vulnerable when TSX is usable. */
			if (cpu_matches(SRBDS_IF_TSX, cpu_vuln_blacklist))
				setup_force_cpu_bug(X86_BUG_SRBDS_IF_TSX);
		}
	}

Hmm?

Thanks,

        tglx


* Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14  3:48 ` [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3 mark gross
@ 2020-04-14 16:23   ` Thomas Gleixner
  2020-04-14 20:03     ` [MODERATED] " mark gross
  0 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-04-14 16:23 UTC (permalink / raw)
  To: speck

Mark,

speck for mark gross <speck@linutronix.de> writes:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
>> +	/*
>> +	 * Check to see if this is one of the MDS_NO systems supporting
>> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
>> +	 */
>> +	ia32_cap = x86_read_arch_cap_msr();
>> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
>> +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
>> +		goto out;
>> +	}
>> +
>> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
>> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
>> +		goto out;
>> +	}
>> +
>> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
>> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
>> +		goto out;
>> +	}
>> +
>> +	if (cpu_mitigations_off() || srbds_off) {
>> +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
>> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
>> +	}
>> +out:
> The test for cpu_mitigations_off() or srbds=off needs to be after out:
> Otherwise, when TSX is off and srbds=off, it will report the wrong answer.

If the CPU has SRBDS_IF_TSX and TSX is disabled then the correct
answer is: Not affected (TSX off)

That's what we do with other issues as well. If the CPU is not affected
then we print this even with mitigation disabled (all or particular).
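
The sysfs side then just prints the selected state string, i.e. something
along the lines of the other *_show_state() helpers (rough sketch):

	static ssize_t srbds_show_state(char *buf)
	{
		return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
	}

so "Not affected (TSX off)" is what the user sees no matter whether
mitigations=off or srbds=off is on the command line.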

Thanks,

        tglx

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 10:58 ` Thomas Gleixner
@ 2020-04-14 16:43   ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-14 16:43 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 12:58:10PM +0200, speck for Thomas Gleixner wrote:
> Mark,
> 
> speck for mark gross <speck@linutronix.de> writes:
> > +static void __init srbds_select_mitigation(void)
> > +{
> > +	u64 ia32_cap;
> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> > +		return;
> > +	/*
> > +	 * Check to see if this is one of the MDS_NO systems supporting
> > +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> > +	 */
> > +	ia32_cap = x86_read_arch_cap_msr();
> > +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
> 
> This does not work for the CPUs which have the IF_TSX thing because your
> setup magic does not set X86_BUG_SRBDS for those.... See below.

There is some requirements flux coming from the white paper; it looks like I
may have to drop the IF_TSX distinction and replace it with tests for MDS_NO
and TSX anyway.

FWIW I was trying to avoid adding an extra bug bit by keeping the IF_TSX flag
file-scoped to common.c, to better match what is in the white paper. The
white paper might be changing; I can't close this until they get back from
Passover vacation.

> 
> > +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
> > +		goto out;
> > +	}
> > +
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> > +		goto out;
> > +	}
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> > +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> > +		goto out;
> > +	}
> > +
> > +	if (cpu_mitigations_off() || srbds_off) {
> > +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
> 
> That's pointless. You already jumped over this part in case of TSX off.
Right! That block needs to be after the out: label.
> 
> > +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> > +	}
> 
> The whole goto stuff can be completely avoided.
> 
> 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> 		return;
> 
> 	if (boot_cpu_has(X86_BUG_SRBDS_IF_TSX) && !boot_cpu_has(X86_FEATURE_RTM)))
>         	srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
>         else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
> 		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> 	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
> 		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> 	else if (cpu_mitigations_off() || srbds_off)
> 		srbds_mitigation = SRBDS_MITIGATION_OFF;
Yes, I buggered the goto logic in too many ways. Yours looks right.

> 
> > +out:
> > +	update_srbds_msr();
> > +	pr_info("%s\n", srbds_strings[srbds_mitigation]);
> > +}
> 
> > +#define SRBDS		BIT(0)
> > +#define SRBDS_IF_TSX	BIT(1)
> > +
> > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> > +	VULNBL_INTEL_STEPPING(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(HASWELL,		X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> > +	{}
> > +};
> > +
> >  static bool __init cpu_matches(unsigned long which, const struct x86_cpu_id *table)
> >  {
> >  	const struct x86_cpu_id *m = x86_match_cpu(table);
> > @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >  		setup_force_cpu_bug(X86_BUG_TAA);
> >  
> > +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> > +		/*
> > +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> > +		 * they show as "Not affected".
> > +		 */
> > +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> > +		    !cpu_has(c, X86_FEATURE_RDSEED))
> > +			goto srbds_not_affected;
> > +		/*
> > +		 * Parts in the blacklist that enumerate MDS_NO are only
> > +		 * vulneralbe if TSX can be used.  To handle cases where TSX
> > +		 * gets fused off check to see if TSX is fused off and thus not
> > +		 * affected.
> > +		 *
> > +		 * When running with up to day microcode TSX_CTRL is only
> > +		 * enumerated on parts where TSX fused on.
> > +		 * When running with microcode not supporting TSX_CTRL we check
> > +		 * for RTM
> > +		 */
> > +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> > +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> > +		      cpu_has(c, X86_FEATURE_RTM)))
> 
> So you added SRBDS_IF_TSX and then you check for both and still have
> that check for _all_ CPUs. Also the TSX_CTRL_MSR part is weird. That's
> completely irrelevant. The only interesting part is X86_FEATURE_RTM.
> 
> What you really want is:
> 
> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS | SRBDS_IF_TSX),
> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS | SRBDS_IF_TSX),

Well, KABYLAKE steppings 0xC and 0xD are affected only if TSX is on. I have
been thinking of the SRBDS bit as covering the parts without the "only if
TSX" conditional vulnerability.

Anyway, like I said above, there is some flux in the white paper I'm trying
to align my code to, which I hope to close in a few days. There is talk of
only listing yes/no in the affected processor list and applying the MDS_NO/RTM
test to every "yes" entry to identify the subset of parts affected only when
TSX is available. If that happens I'll drop all the SRBDS_IF_TSX stuff and
rely on that test alone; roughly the sketch below.
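
Something like this (rough sketch only, pending the final white paper
wording):

	/*
	 * SRBDS affects CPUs which support RDRAND or RDSEED and are
	 * listed in the blacklist.
	 */
	if ((cpu_has(c, X86_FEATURE_RDRAND) ||
	     cpu_has(c, X86_FEATURE_RDSEED)) &&
	    cpu_matches(SRBDS, cpu_vuln_blacklist))
		setup_force_cpu_bug(X86_BUG_SRBDS);

with the TSX conditional handled once in srbds_select_mitigation():

	ia32_cap = x86_read_arch_cap_msr();
	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;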


> 	/*
> 	 * CPUs which have neither RDRAND nor RDSEED are not affected.
> 	 */
> 	if (cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) {
> 		if (cpu_matches(SRBDS, cpu_vuln_blacklist)) {
> 			setup_force_cpu_bug(X86_BUG_SRBDS);
> 			/* These CPUs are only vulnerable when TSX is usable. */
> 			if (cpu_matches(SRBDS_IF_TSX, cpu_vuln_blacklist))
> 				setup_force_cpu_bug(X86_BUG_SRBDS_IF_TSX);
> 		}
> 	}
> 
> Hmm?
That looks like it will work.

Thanks!

--mark

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (5 preceding siblings ...)
  2020-04-14 10:58 ` Thomas Gleixner
@ 2020-04-14 20:02 ` Josh Poimboeuf
  2020-04-14 21:03   ` mark gross
  2020-04-14 20:05 ` Josh Poimboeuf
  2020-04-15 17:51 ` [MODERATED] Re: [PATCH 1/4] V7 more sampling fun 1 Borislav Petkov
  8 siblings, 1 reply; 22+ messages in thread
From: Josh Poimboeuf @ 2020-04-14 20:02 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> +enum srbds_mitigations {
> +	SRBDS_MITIGATION_OFF,
> +	SRBDS_MITIGATION_UCODE_NEEDED,
> +	SRBDS_MITIGATION_FULL,
> +	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
> +	SRBDS_MITIGATION_HYPERVISOR,
> +};
> +
> +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> +static const char * const srbds_strings[] = {
> +	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
> +	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
> +	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",

s/Mitigated/Mitigation/ for consistency with other issues

> +	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",

The CPU *is* affected, it just happens to be mitigated, right?

Shouldn't it be SRBDS_MITIGATION_TSX_OFF and "Mitigation: TSX disabled"?


> @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>  		setup_force_cpu_bug(X86_BUG_TAA);
>  
> +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> +		/*
> +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> +		 * they show as "Not affected".
> +		 */
> +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> +		    !cpu_has(c, X86_FEATURE_RDSEED))
> +			goto srbds_not_affected;
> +		/*
> +		 * Parts in the blacklist that enumerate MDS_NO are only
> +		 * vulneralbe if TSX can be used.  To handle cases where TSX

"vulnerable"

> +		 * gets fused off check to see if TSX is fused off and thus not
> +		 * affected.
> +		 *
> +		 * When running with up to day microcode TSX_CTRL is only

"up-to-date"

> +		 * enumerated on parts where TSX fused on.

where TSX *is* fused on.

> +		 * When running with microcode not supporting TSX_CTRL we check
> +		 * for RTM

Missing period

> +		 */
> +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> +		      cpu_has(c, X86_FEATURE_RTM)))
> +			goto srbds_not_affected;
> +
> +		setup_force_cpu_bug(X86_BUG_SRBDS);
> +	}
> +srbds_not_affected:
> +
>  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
>  		return;

I'm thinking it would be more readable to have the newline between the
bracket and the 'if', instead of between the label and the 'if'.

-- 
Josh

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 16:23   ` Thomas Gleixner
@ 2020-04-14 20:03     ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-14 20:03 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 06:23:38PM +0200, speck for Thomas Gleixner wrote:
> Mark,
> 
> speck for mark gross <speck@linutronix.de> writes:
> > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> >> +	/*
> >> +	 * Check to see if this is one of the MDS_NO systems supporting
> >> +	 * TSX that are only exposed to SRBDS when TSX is enabled.
> >> +	 */
> >> +	ia32_cap = x86_read_arch_cap_msr();
> >> +	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) {
> >> +		srbds_mitigation = SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF;
> >> +		goto out;
> >> +	}
> >> +
> >> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> >> +		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
> >> +		goto out;
> >> +	}
> >> +
> >> +	if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL)) {
> >> +		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
> >> +		goto out;
> >> +	}
> >> +
> >> +	if (cpu_mitigations_off() || srbds_off) {
> >> +		if (srbds_mitigation != SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF)
> >> +			srbds_mitigation = SRBDS_MITIGATION_OFF;
> >> +	}
> >> +out:
> > The test for cpu_mitigations_off() or srbds=off needs to be after out:
> > Otherwise, when TSX is off and srbds=off, it will report the wrong answer.
> 
> If the CPU has SRBDS_IF_TSX and TSX is disabled then the correct
> answer is: Not affected (TSX off)
correct.

> That's what we do with other issues as well. If the CPU is not affected
> then we print this even with mitigation disabled (all or particular).

That is what I'm working toward. Now that I look at the code, the right
answer is to not move the out: label and to nuke the nested if in the
srbds_off branch. That is what's wrong.

That said, I plan to use your if/else construct in the next version and
forget about the gotos.

--mark

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (6 preceding siblings ...)
  2020-04-14 20:02 ` Josh Poimboeuf
@ 2020-04-14 20:05 ` Josh Poimboeuf
  2020-04-14 21:59   ` mark gross
  2020-04-15 17:51 ` [MODERATED] Re: [PATCH 1/4] V7 more sampling fun 1 Borislav Petkov
  8 siblings, 1 reply; 22+ messages in thread
From: Josh Poimboeuf @ 2020-04-14 20:05 UTC (permalink / raw)
  To: speck

On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),

Another readability tweak: "0x0" helps with vertical alignment:

	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),

-- 
Josh

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 20:02 ` Josh Poimboeuf
@ 2020-04-14 21:03   ` mark gross
  2020-04-14 21:23     ` Josh Poimboeuf
  0 siblings, 1 reply; 22+ messages in thread
From: mark gross @ 2020-04-14 21:03 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 03:02:37PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > +enum srbds_mitigations {
> > +	SRBDS_MITIGATION_OFF,
> > +	SRBDS_MITIGATION_UCODE_NEEDED,
> > +	SRBDS_MITIGATION_FULL,
> > +	SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF,
> > +	SRBDS_MITIGATION_HYPERVISOR,
> > +};
> > +
> > +static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
> > +static const char * const srbds_strings[] = {
> > +	[SRBDS_MITIGATION_OFF]			= "Vulnerable",
> > +	[SRBDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
> > +	[SRBDS_MITIGATION_FULL]			= "Mitigated: Microcode",
> 
> s/Mitigated/Mitigation/ for consistency with other issues
ok

> > +	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> 
> The CPU *is* affected, it just happens to be mitigated, right?
This depends on perspective. The only mitigation for SRBDS is for the
microcode to serialize access to the off-core hardware RNG and to do a ghost
transfer of a throwaway random number, so that if anything leaks the attacker
doesn't get the real random number. I think there may be some buffer clearing
in there too.
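
On the kernel side the control is a single MSR bit; roughly this (a sketch
only, using the MSR_IA32_MCU_OPT_CTRL / RNGDS_MITG_DIS names from this
series):

	void update_srbds_msr(void)
	{
		u64 mcu_ctrl;

		if (!boot_cpu_has_bug(X86_BUG_SRBDS))
			return;

		/* The MSR may not be exposed when virtualized. */
		if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
			return;

		/* Without the updated microcode the MSR does not exist. */
		if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
			return;

		rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);

		switch (srbds_mitigation) {
		case SRBDS_MITIGATION_OFF:
			/* Opt out of the RDRAND/RDSEED serialization. */
			mcu_ctrl |= RNGDS_MITG_DIS;
			break;
		case SRBDS_MITIGATION_FULL:
			/* Microcode default: mitigation enabled. */
			mcu_ctrl &= ~RNGDS_MITG_DIS;
			break;
		default:
			break;
		}

		wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
	}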

Disabling TSX from that point of view is not mitigating the issue so much as
hiding exposure to it.

I can see it either way; I'm not sure which is better. After reading my
logic, do you still think it would make more sense to change "Not affected
(TSX disabled)" to "Mitigated: TSX disabled"?

> Shouldn't it be SRBDS_MITIGATION_TSX_OFF and "Mitigation: TSX disabled"?
You tell me. I think they are both good enough, although I do look at TSX
disabling as a special case of the vulnerability.


> 
> 
> > @@ -1142,6 +1166,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >  		setup_force_cpu_bug(X86_BUG_TAA);
> >  
> > +	if (cpu_matches(SRBDS|SRBDS_IF_TSX, cpu_vuln_blacklist)) {
> > +		/*
> > +		 * Some parts on the list don't have RDRAND or RDSEED. Make sure
> > +		 * they show as "Not affected".
> > +		 */
> > +		if (!cpu_has(c, X86_FEATURE_RDRAND) &&
> > +		    !cpu_has(c, X86_FEATURE_RDSEED))
> > +			goto srbds_not_affected;
> > +		/*
> > +		 * Parts in the blacklist that enumerate MDS_NO are only
> > +		 * vulneralbe if TSX can be used.  To handle cases where TSX
> 
> "vulnerable"
ok
> 
> > +		 * gets fused off check to see if TSX is fused off and thus not
> > +		 * affected.
> > +		 *
> > +		 * When running with up to day microcode TSX_CTRL is only
> 
> "up-to-date"
ok

> > +		 * enumerated on parts where TSX fused on.
> 
> where TSX *is* fused on.
ok

> > +		 * When running with microcode not supporting TSX_CTRL we check
> > +		 * for RTM
> 
> Missing period
ok

> > +		 */
> > +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> > +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> > +		      cpu_has(c, X86_FEATURE_RTM)))
> > +			goto srbds_not_affected;
> > +
> > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > +	}
> > +srbds_not_affected:
> > +
> >  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
> >  		return;
> 
> I'm thinking it would be more readable to have the newline between the
> bracket and the 'if', instead of between the label and the 'if'.
so, lose the newline between the label and the if?

--mark

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 21:03   ` mark gross
@ 2020-04-14 21:23     ` Josh Poimboeuf
  2020-04-14 21:53       ` mark gross
  0 siblings, 1 reply; 22+ messages in thread
From: Josh Poimboeuf @ 2020-04-14 21:23 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 02:03:02PM -0700, speck for mark gross wrote:
> > > +	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> > 
> > The CPU *is* affected, it just happens to be mitigated, right?
> This depends on perspective. The only mitigation for SRBDS is for the
> microcode to serialize access to the off-core hardware RNG and to do a ghost
> transfer of a throwaway random number, so that if anything leaks the attacker
> doesn't get the real random number. I think there may be some buffer clearing
> in there too.
> 
> Disabling TSX from that point of view is not mitigating the issue so much as
> hiding exposure to it.
> 
> I can see it either way; I'm not sure which is better. After reading my
> logic, do you still think it would make more sense to change "Not affected
> (TSX disabled)" to "Mitigated: TSX disabled"?

From a user's perspective, "hiding exposure" is the same as a
mitigation.  So I think it makes sense to call it a mitigation.

> > > +		 */
> > > +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> > > +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> > > +		      cpu_has(c, X86_FEATURE_RTM)))
> > > +			goto srbds_not_affected;
> > > +
> > > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > > +	}
> > > +srbds_not_affected:
> > > +
> > >  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
> > >  		return;
> > 
> > I'm thinking it would be more readable to have the newline between the
> > bracket and the 'if', instead of between the label and the 'if'.
> so, lose the newline between the label and the if?

Yes, and add one between the '}' and the label.

-- 
Josh

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 21:23     ` Josh Poimboeuf
@ 2020-04-14 21:53       ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-14 21:53 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 04:23:27PM -0500, speck for Josh Poimboeuf wrote:
> On Tue, Apr 14, 2020 at 02:03:02PM -0700, speck for mark gross wrote:
> > > > +	[SRBDS_MITIGATION_NOT_AFFECTED_TSX_OFF]	= "Not affected (TSX disabled)",
> > > 
> > > The CPU *is* affected, it just happens to be mitigated, right?
> > This depends on perspective. The only mitigation for SRBDS is for the
> > microcode to serialize access to the off-core hardware RNG and to do a
> > ghost transfer of a throwaway random number, so that if anything leaks the
> > attacker doesn't get the real random number. I think there may be some
> > buffer clearing in there too.
> > 
> > Disabling TSX from that point of view is not mitigating the issue so much as
> > hiding exposure to it.
> > 
> > I can see it either way; I'm not sure which is better. After reading my
> > logic, do you still think it would make more sense to change "Not affected
> > (TSX disabled)" to "Mitigated: TSX disabled"?
> 
> From a user's perspective, "hiding exposure" is the same as a
> mitigation.  So I think it makes sense to call it a mitigation.
done

> > > > +		 */
> > > > +		if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> > > > +		    !((ia32_cap & ARCH_CAP_TSX_CTRL_MSR) ||
> > > > +		      cpu_has(c, X86_FEATURE_RTM)))
> > > > +			goto srbds_not_affected;
> > > > +
> > > > +		setup_force_cpu_bug(X86_BUG_SRBDS);
> > > > +	}
> > > > +srbds_not_affected:
> > > > +
> > > >  	if (cpu_matches(NO_MELTDOWN, cpu_vuln_whitelist))
> > > >  		return;
> > > 
> > > I'm thinking it would be more readable to have the newline between the
> > > bracket and the 'if', instead of between the label and the 'if'.
> > so, lose the newline between the label and the if?
> 
> Yes, and add one between the '}' and the label.
done
--mark

> -- 
> Josh

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 20:05 ` Josh Poimboeuf
@ 2020-04-14 21:59   ` mark gross
  2020-04-14 22:46     ` Josh Poimboeuf
  2020-04-15 12:58     ` Thomas Gleixner
  0 siblings, 2 replies; 22+ messages in thread
From: mark gross @ 2020-04-14 21:59 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> 
> Another readability tweak: "0x0" helps with vertical alignment:
> 
> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),

FWIW the white paper no longer calls out individual steppings as vulnerable
only if TSX so I'm losing the SRBDS_IF_TSX stuff.

Given that the additional steppings 0xB, 0xC and 0xD no longer are treated
differently I'm just going with the following:
VULNBL_INTEL_STEPPING(KABYLAKE_L,      X86_STEPPINGS(0, 0xC), SRBDS)
VULNBL_INTEL_STEPPING(KABYLAKE,        X86_STEPPINGS(0, 0xD), SRBDS)

Thanks,

--mark

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 21:59   ` mark gross
@ 2020-04-14 22:46     ` Josh Poimboeuf
  2020-04-15 20:59       ` mark gross
  2020-04-15 12:58     ` Thomas Gleixner
  1 sibling, 1 reply; 22+ messages in thread
From: Josh Poimboeuf @ 2020-04-14 22:46 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 02:59:24PM -0700, speck for mark gross wrote:
> On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> > > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> > 
> > Another readability tweak: "0x0" helps with vertical alignment:
> > 
> > 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
> > 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
> > 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> 
> FWIW the white paper no longer calls out individual steppings as vulnerable
> only if TSX so I'm losing the SRBDS_IF_TSX stuff.
> 
> Given that the additional steppings 0xB, 0xC and 0xD no longer are treated
> differently I'm just going with the following:
> VULNBL_INTEL_STEPPING(KABYLAKE_L,      X86_STEPPINGS(0, 0xC), SRBDS)
> VULNBL_INTEL_STEPPING(KABYLAKE,        X86_STEPPINGS(0, 0xD), SRBDS)

Ok, though I still think "0x0" looks more consistent in this context ;-)

-- 
Josh

* Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 21:59   ` mark gross
  2020-04-14 22:46     ` Josh Poimboeuf
@ 2020-04-15 12:58     ` Thomas Gleixner
  2020-04-15 22:21       ` [MODERATED] " mark gross
  1 sibling, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-04-15 12:58 UTC (permalink / raw)
  To: speck

Mark,

speck for mark gross <speck@linutronix.de> writes:
> On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
>> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
>> > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
>> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
>> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
>> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
>> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
>> 
>> Another readability tweak: "0x0" helps with vertical alignment:
>> 
>> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
>> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
>> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
>> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
>
> FWIW the white paper no longer calls out individual steppings as vulnerable
> only if TSX so I'm losing the SRBDS_IF_TSX stuff.

So do we lose the TSX conditionals completely, or do they now apply to all
MDS_NO CPUs?

Thanks,

        tglx

* [MODERATED] Re: [PATCH 1/4] V7 more sampling fun 1
  2020-04-13 18:10 [MODERATED] [PATCH 0/4] V7 more sampling fun 0 mark gross
                   ` (7 preceding siblings ...)
  2020-04-14 20:05 ` Josh Poimboeuf
@ 2020-04-15 17:51 ` Borislav Petkov
  2020-04-15 23:58   ` mark gross
  8 siblings, 1 reply; 22+ messages in thread
From: Borislav Petkov @ 2020-04-15 17:51 UTC (permalink / raw)
  To: speck

On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> From: mark gross <mgross@linux.intel.com>
> Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure
> 
> Intel uses the same family/model for several CPUs. Sometimes the
> stepping must be checked to tell them apart.
> 
> On X86 there can be at most 16 steppings, add steppings bitmask to
> x86_cpu_id and X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro and
> support for matching against family/model/stepping.
> 
> Signed-off-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> ---
>  arch/x86/include/asm/cpu_device_id.h | 27 ++++++++++++++++++++++++---
>  arch/x86/kernel/cpu/match.c          |  7 ++++++-
>  include/linux/mod_devicetable.h      |  2 ++
>  3 files changed, 32 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
> index cf3d621c6892..4f0df2e46c95 100644
> --- a/arch/x86/include/asm/cpu_device_id.h
> +++ b/arch/x86/include/asm/cpu_device_id.h
> @@ -20,12 +20,14 @@
>  #define X86_CENTAUR_FAM6_C7_D		0xd
>  #define X86_CENTAUR_FAM6_NANO		0xf
>  
> +#define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
>  /**
> - * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
> + * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
				 ^^^^^^^^

If the steppings argument is a bitmask of steppings, then the name
should have "STEPPINGS" too to avoid confusion.
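
Also, for the record: X86_STEPPINGS() is a GENMASK(), so e.g.
X86_STEPPINGS(0x0, 0xC) gives a mask with bits 0-12 set, and the matching
side presumably boils down to something like this (sketch; the match.c hunk
is not quoted here):

	/* In x86_match_cpu(), next to the vendor/family/model checks: */
	if (m->steppings != X86_STEPPING_ANY &&
	    !(BIT(c->x86_stepping) & m->steppings))
		continue;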

>   * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
>   *		The name is expanded to X86_VENDOR_@_vendor
>   * @_family:	The family number or X86_FAMILY_ANY
>   * @_model:	The model number, model constant or X86_MODEL_ANY
> + * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
>   * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
>   * @_data:	Driver specific data or NULL. The internal storage
>   *		format is unsigned long. The supplied value, pointer
> @@ -37,15 +39,34 @@
>   * into another macro at the usage site for good reasons, then please
>   * start this local macro with X86_MATCH to allow easy grepping.
>   */
> -#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
> -					   _feature, _data) {		\
> +#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
> +						    _steppings, _feature, _data) { \
>  	.vendor		= X86_VENDOR_##_vendor,				\
>  	.family		= _family,					\
>  	.model		= _model,					\
> +	.steppings	= _steppings,					\
>  	.feature	= _feature,					\
>  	.driver_data	= (unsigned long) _data				\
>  }
>  
> +/**
> + * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching

That isn't the base macro anymore.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-14 22:46     ` Josh Poimboeuf
@ 2020-04-15 20:59       ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-15 20:59 UTC (permalink / raw)
  To: speck

On Tue, Apr 14, 2020 at 05:46:31PM -0500, speck for Josh Poimboeuf wrote:
> On Tue, Apr 14, 2020 at 02:59:24PM -0700, speck for mark gross wrote:
> > On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
> > > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > > > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> > > 
> > > Another readability tweak: "0x0" helps with vertical alignment:
> > > 
> > > 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
> > > 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> > > 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
> > > 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> > 
> > FWIW the white paper no longer calls out individual steppings as vulnerable
> > only if TSX so I'm losing the SRBDS_IF_TSX stuff.
> > 
> > Given that the additional steppings 0xB, 0xC and 0xD no longer are treated
> > differently I'm just going with the following:
> > VULNBL_INTEL_STEPPING(KABYLAKE_L,      X86_STEPPINGS(0, 0xC), SRBDS)
> > VULNBL_INTEL_STEPPING(KABYLAKE,        X86_STEPPINGS(0, 0xD), SRBDS)
> 
> Ok, though I still think "0x0" looks more consistent in this context ;-)
ok.

--mark

> -- 
> Josh

* [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
  2020-04-15 12:58     ` Thomas Gleixner
@ 2020-04-15 22:21       ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-15 22:21 UTC (permalink / raw)
  To: speck

On Wed, Apr 15, 2020 at 02:58:40PM +0200, speck for Thomas Gleixner wrote:
> Mark,
> 
> speck for mark gross <speck@linutronix.de> writes:
> > On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
> >> On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> >> > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> >> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0, 0xA),		SRBDS),
> >> > +	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> >> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0, 0xB),		SRBDS),
> >> > +	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> >> 
> >> Another readability tweak: "0x0" helps with vertical alignment:
> >> 
> >> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xA),	SRBDS),
> >> 	VULNBL_INTEL_STEPPING(KABYLAKE_L,	X86_STEPPINGS(0xB, 0xC),	SRBDS_IF_TSX),
> >> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0x0, 0xB),	SRBDS),
> >> 	VULNBL_INTEL_STEPPING(KABYLAKE,		X86_STEPPINGS(0xC, 0xD),	SRBDS_IF_TSX),
> >
> > FWIW the white paper no longer calls out individual steppings as vulnerable
> > only if TSX so I'm losing the SRBDS_IF_TSX stuff.
> 
> So do we lose the TSX conditionals completely, or do they now apply to all
> MDS_NO CPUs?
We apply the TSX test to all MDS_NO CPUs.

FWIW I greatly appreciate your efforts in cleaning up the x86 match macros
that we needed for earlier versions of the white paper. The white paper split
out different steppings to differentiate the older parts that are affected by
MDS from the newer versions that have MDS_NO (and so have the option of
mitigating SRBDS by turning off TSX). But recently it changed such that the
stepping isn't needed to determine vulnerability.

Depending on how people want to list things we can split out the MDS_NO CPUs
or collapse them and not depend on the stepping, i.e. use X86_STEPPING_ANY or
just use the legacy interfaces that don't know about stepping. The latter is
what I'm planning to do, as it will make backporting easier.

sorry,
--mark

* [MODERATED] Re: [PATCH 1/4] V7 more sampling fun 1
  2020-04-15 17:51 ` [MODERATED] Re: [PATCH 1/4] V7 more sampling fun 1 Borislav Petkov
@ 2020-04-15 23:58   ` mark gross
  0 siblings, 0 replies; 22+ messages in thread
From: mark gross @ 2020-04-15 23:58 UTC (permalink / raw)
  To: speck

On Wed, Apr 15, 2020 at 07:51:45PM +0200, speck for Borislav Petkov wrote:
> On Mon, Mar 16, 2020 at 05:56:27PM -0700, speck for mark gross wrote:
> > From: mark gross <mgross@linux.intel.com>
> > Subject: [PATCH 1/4] x86/cpu: Add stepping field to x86_cpu_id structure
> > 
> > Intel uses the same family/model for several CPUs. Sometimes the
> > stepping must be checked to tell them apart.
> > 
> > On X86 there can be at most 16 steppings, add steppings bitmask to
> > x86_cpu_id and X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro and
> > support for matching against family/model/stepping.
> > 
> > Signed-off-by: Mark Gross <mgross@linux.intel.com>
> > Reviewed-by: Tony Luck <tony.luck@intel.com>
> > ---
> >  arch/x86/include/asm/cpu_device_id.h | 27 ++++++++++++++++++++++++---
> >  arch/x86/kernel/cpu/match.c          |  7 ++++++-
> >  include/linux/mod_devicetable.h      |  2 ++
> >  3 files changed, 32 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
> > index cf3d621c6892..4f0df2e46c95 100644
> > --- a/arch/x86/include/asm/cpu_device_id.h
> > +++ b/arch/x86/include/asm/cpu_device_id.h
> > @@ -20,12 +20,14 @@
> >  #define X86_CENTAUR_FAM6_C7_D		0xd
> >  #define X86_CENTAUR_FAM6_NANO		0xf
> >  
> > +#define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
> >  /**
> > - * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
> > + * X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE - Base macro for CPU matching
> 				 ^^^^^^^^
> 
> If the steppings argument is a bitmask of steppings, then the name
> should have "STEPPINGS" too to avoid confusion.
done

> >   * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
> >   *		The name is expanded to X86_VENDOR_@_vendor
> >   * @_family:	The family number or X86_FAMILY_ANY
> >   * @_model:	The model number, model constant or X86_MODEL_ANY
> > + * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
> >   * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
> >   * @_data:	Driver specific data or NULL. The internal storage
> >   *		format is unsigned long. The supplied value, pointer
> > @@ -37,15 +39,34 @@
> >   * into another macro at the usage site for good reasons, then please
> >   * start this local macro with X86_MATCH to allow easy grepping.
> >   */
> > -#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
> > -					   _feature, _data) {		\
> > +#define X86_MATCH_VENDOR_FAM_MODEL_STEPPING_FEATURE(_vendor, _family, _model, \
> > +						    _steppings, _feature, _data) { \
> >  	.vendor		= X86_VENDOR_##_vendor,				\
> >  	.family		= _family,					\
> >  	.model		= _model,					\
> > +	.steppings	= _steppings,					\
> >  	.feature	= _feature,					\
> >  	.driver_data	= (unsigned long) _data				\
> >  }
> >  
> > +/**
> > + * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
> 
> That isn't the base macro anymore.
done

--mark

> -- 
> Regards/Gruss,
>     Boris.
> 
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> -- 
