historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH v7 00/10] TAAv7 0
@ 2019-10-21 20:22 Pawan Gupta
  2019-10-21 20:23 ` [MODERATED] [PATCH v7 01/10] TAAv7 1 Pawan Gupta
                   ` (25 more replies)
  0 siblings, 26 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:22 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 00/10] TAAv7

Changes since v6:
- Add Michal's patch to allow tsx=on|off|auto via CONFIG
- Rebase to v5.4-rc4
- Changelog, comments and documentation update.

Changes since v5:
- Remove unsafe X86_FEATURE_RTM toggles.
- Have only boot cpu call tsx_init()
- s/read_ia32_arch_cap/x86_read_arch_cap_msr/
- Move TSX sysfs knob part to the end after documentation patch.
- Changelog, comments and documentation update.

Changes since v4:
- Simplify TSX_CTRL enumeration, set TSX_CTRL default to NOT_SUPPORTED.
- Add new patch "Export MDS_NO=0 to guests when TSX is enabled".
- Add new patch for tsx=auto which enables TSX on unaffected platforms,
  default stays tsx=off.
- Handle kexec-like cases for TAA bug enumeration: set X86_BUG_TAA when
  X86_FEATURE_RTM=1 or TSX_CTRL=1.
- TSX control sysfs file rename(s/tsx/hw_tx_mem/) and file creation changes.
- Dropped patch "x86/speculation/mds: Rename MDS buffer clear functions"
  It doesn't provide enough benefit compared to the amount of changes
  involved. Added code comment about using MDS mitigation.
- Add helper function read_ia32_arch_cap().
- Reorder mitigation checks in taa_select_mitigation().
- s/MSR_// for TSX_CTRL bit defines.
- Changelog, comments and documentation update.
- Rebase to v5.3.

Changes since v3:
- Disable tsx unconditionally, removed tsx=auto mode.
- Fix verw idle clear.
- Refactor TSX code into new tsx.c
- Use early_param for tsx cmdline parameter.
- Rename sysfs vulnerability file to tsx_async_abort.
- Rename common CPU buffer clear infrastructure (s/mds/verw)
- s/TAA_MITIGATION_VMWERV/TAA_MITIGATION_UCODE_NEEDED
- Rebased to v5.3-rc6
- Split patches.
- Changelog and documentation update.

Changes since v2:
- Rebased to v5.3-rc5
- Fix build for non-x86 targets.
- Commit log, code comments and documentation update.
- Minor code refactoring.

Changes since v1:
- Added TSX command line options (on|off|auto). "auto" is the
  default, which sets the TSX state as below:
	- TSX disabled on affected platforms
	- TSX enabled on unaffected platforms
- Update commit messages and documentation.
- Add support to control TSX feature from sysfs.

This patchset adds the mitigation for TSX Async Abort (TAA), a side
channel vulnerability affecting internal buffers in some Intel
processors, similar to Microarchitectural Data Sampling (MDS).
Transactional Synchronization Extensions (TSX) is a feature in Intel
processors that speeds up execution of multi-threaded software through
lock elision.

During TAA, certain loads may speculatively pass invalid data to
dependent operations when an asynchronous abort condition is pending in
a TSX transaction.  An attacker can use TSX as a tool to extract
information from the microarchitectural buffers.  The victim data may be
placed into these buffers during normal execution that is unrelated to
any use of TSX.

The mitigation is to either clear the CPU buffers or disable TSX.
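As an illustration only (not kernel code; all names below are
hypothetical), the two-way choice described above can be sketched as a
small decision function:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the TAA mitigation choice described above. */
enum taa_choice {
	TAA_CHOICE_TSX_DISABLE,   /* TSX off: transactions can never start */
	TAA_CHOICE_CLEAR_BUFFERS, /* VERW-based buffer clearing, as for MDS */
	TAA_CHOICE_UCODE_NEEDED,  /* neither works without a microcode update */
};

enum taa_choice taa_pick_mitigation(bool tsx_enabled, bool has_md_clear)
{
	/* Disabling TSX is the complete fix; otherwise fall back to the
	 * MDS-style buffer clear, which needs MD_CLEAR microcode support. */
	if (!tsx_enabled)
		return TAA_CHOICE_TSX_DISABLE;
	return has_md_clear ? TAA_CHOICE_CLEAR_BUFFERS : TAA_CHOICE_UCODE_NEEDED;
}
```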

Michal Hocko (1):
  x86/tsx: Add config options to set tsx=on|off|auto

Pawan Gupta (9):
  x86/tsx: Add enumeration support for IA32_TSX_CTRL MSR
  x86: Add helper function x86_read_arch_cap_msr()
  x86/tsx: Add TSX cmdline option with TSX disabled by default
  x86/speculation/taa: Add mitigation for TSX Async Abort
  x86/speculation/taa: Add sysfs reporting for TSX Async Abort
  KVM: x86/speculation/taa: Export MDS_NO=0 to guests when TSX is
    enabled
  x86/tsx: Add "auto" option to TSX cmdline parameter
  x86/speculation/taa: Add documentation for TSX Async Abort
  x86/tsx: Add sysfs interface to control TSX

 .../ABI/testing/sysfs-devices-system-cpu      |  24 ++
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 282 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  52 ++++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         | 116 +++++++
 arch/x86/Kconfig                              |  45 +++
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/msr-index.h              |   9 +
 arch/x86/include/asm/nospec-branch.h          |   4 +-
 arch/x86/include/asm/processor.h              |   7 +
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/bugs.c                    | 169 ++++++++++-
 arch/x86/kernel/cpu/common.c                  |  32 +-
 arch/x86/kernel/cpu/cpu.h                     |  19 ++
 arch/x86/kernel/cpu/intel.c                   |  10 +
 arch/x86/kernel/cpu/tsx.c                     | 241 +++++++++++++++
 arch/x86/kvm/x86.c                            |  19 ++
 drivers/base/cpu.c                            |  41 ++-
 include/linux/cpu.h                           |   9 +
 20 files changed, 1074 insertions(+), 10 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst
 create mode 100644 arch/x86/kernel/cpu/tsx.c

-- 
2.20.1

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] [PATCH v7 01/10] TAAv7 1
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
@ 2019-10-21 20:23 ` Pawan Gupta
  2019-10-21 20:24 ` [MODERATED] [PATCH v7 02/10] TAAv7 2 Pawan Gupta
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:23 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 01/10] x86/tsx: Add enumeration support for IA32_TSX_CTRL
 MSR

Transactional Synchronization Extensions (TSX) may be used on certain
processors as part of a speculative side channel attack.  A microcode
update for existing processors that are vulnerable to this attack will
add a new MSR, IA32_TSX_CTRL, to allow the system administrator the
option to disable TSX as one of the possible mitigations.  [Note that
future processors that are not vulnerable will also support the
IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
bits.

TSX has two sub-features:

1. Restricted Transactional Memory (RTM) is an explicitly-used feature
   where new instructions begin and end TSX transactions.
2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
   "old" style locks are used by software.

Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
IA32_TSX_CTRL MSR.

There are two control bits in IA32_TSX_CTRL MSR:

  Bit 0: When set it disables the Restricted Transactional Memory (RTM)
         sub-feature of TSX (will force all transactions to abort on the
         XBEGIN instruction).

  Bit 1: When set it disables the enumeration of the RTM and HLE feature
         (i.e. it will make CPUID(EAX=7).EBX{bit4} and
         CPUID(EAX=7).EBX{bit11} read as 0).

The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled by the updated microcode but still enumerated as present by
CPUID(EAX=7).EBX{bit4}.
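The new defines can be exercised in plain user-space C. The bit values
are taken from the patch itself; the two helper functions are
illustrative only and not part of the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n)			(1ULL << (n))

/* Values as added by this patch to msr-index.h. */
#define ARCH_CAP_TSX_CTRL_MSR	BIT(7)
#define TSX_CTRL_RTM_DISABLE	BIT(0)
#define TSX_CTRL_CPUID_CLEAR	BIT(1)

/* Does this IA32_ARCH_CAPABILITIES value enumerate IA32_TSX_CTRL? */
bool tsx_ctrl_available(uint64_t arch_cap)
{
	return (arch_cap & ARCH_CAP_TSX_CTRL_MSR) != 0;
}

/* IA32_TSX_CTRL value that both aborts RTM and hides it from CPUID. */
uint64_t tsx_ctrl_disable_value(uint64_t cur)
{
	return cur | TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
}
```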

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/include/asm/msr-index.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 20ce682a2540..da4caf6da739 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -93,6 +93,7 @@
 						  * Microarchitectural Data
 						  * Sampling (MDS) vulnerabilities.
 						  */
+#define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
@@ -103,6 +104,10 @@
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
 
+#define MSR_IA32_TSX_CTRL		0x00000122
+#define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
+#define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
-- 
2.20.1


* [MODERATED] [PATCH v7 02/10] TAAv7 2
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
  2019-10-21 20:23 ` [MODERATED] [PATCH v7 01/10] TAAv7 1 Pawan Gupta
@ 2019-10-21 20:24 ` Pawan Gupta
  2019-10-21 20:25 ` [MODERATED] [PATCH v7 03/10] TAAv7 3 Pawan Gupta
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:24 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 02/10] x86: Add helper function x86_read_arch_cap_msr()

Add a helper function to read the IA32_ARCH_CAPABILITIES MSR. If the CPU
doesn't support this MSR, return 0.
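The helper's contract (return 0 when the MSR is not enumerated, so every
feature-bit test on the result fails closed) can be modeled in user
space. `msr_supported` and `raw_value` are hypothetical stand-ins for
the CPUID check and rdmsrl():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical user-space model of x86_read_arch_cap_msr(). */
uint64_t read_arch_cap(bool msr_supported, uint64_t raw_value)
{
	uint64_t cap = 0;

	/* Only read the MSR when CPUID says it exists; otherwise every
	 * "is capability bit X set" check on the result is false. */
	if (msr_supported)
		cap = raw_value;

	return cap;
}
```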

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kernel/cpu/common.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9ae7d1bcd4f4..897c8302d982 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1092,19 +1092,26 @@ static bool __init cpu_matches(unsigned long which)
 	return m && !!(m->driver_data & which);
 }
 
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+u64 x86_read_arch_cap_msr(void)
 {
 	u64 ia32_cap = 0;
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	return ia32_cap;
+}
+
+static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
 	if (cpu_matches(NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
-- 
2.20.1


* [MODERATED] [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
  2019-10-21 20:23 ` [MODERATED] [PATCH v7 01/10] TAAv7 1 Pawan Gupta
  2019-10-21 20:24 ` [MODERATED] [PATCH v7 02/10] TAAv7 2 Pawan Gupta
@ 2019-10-21 20:25 ` Pawan Gupta
  2019-10-21 20:26 ` [MODERATED] [PATCH v7 04/10] TAAv7 4 Pawan Gupta
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:25 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 03/10] x86/tsx: Add TSX cmdline option with TSX disabled by
 default

Add kernel cmdline parameter "tsx" to control the Transactional
Synchronization Extensions (TSX) feature.  On CPUs that support TSX
control, use "tsx=on|off" to enable or disable TSX.  Not specifying this
option is equivalent to "tsx=off". This is because on certain processors
TSX may be used as part of a speculative side channel attack.
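The default-off parsing policy can be sketched as a pure function. This
is an illustration only; the real code uses cmdline_find_option() on
boot_command_line, and the enum name here is hypothetical:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum tsx_state { TSX_STATE_ON, TSX_STATE_OFF };

/* Anything other than an explicit "tsx=on" (an absent option, "off",
 * or an unrecognized value) leaves TSX disabled. */
enum tsx_state tsx_parse(const char *arg)
{
	if (arg && strcmp(arg, "on") == 0)
		return TSX_STATE_ON;
	return TSX_STATE_OFF;
}
```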

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../admin-guide/kernel-parameters.txt         |  11 ++
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/common.c                  |   2 +
 arch/x86/kernel/cpu/cpu.h                     |  18 +++
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 114 ++++++++++++++++++
 6 files changed, 151 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/tsx.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f8881e..ad6b69057bb0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4848,6 +4848,17 @@
 			interruptions from clocksource watchdog are not
 			acceptable).
 
+	tsx=		[X86] Control Transactional Synchronization
+			Extensions (TSX) feature in Intel processors that
+			support TSX control.
+
+			This parameter controls the TSX feature. The options are:
+
+			on	- Enable TSX on the system.
+			off	- Disable TSX on the system.
+
+			Not specifying this option is equivalent to tsx=off.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index d7a1e5a9331c..890f60083eca 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -30,7 +30,7 @@ obj-$(CONFIG_PROC_FS)	+= proc.o
 obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
 
 ifdef CONFIG_CPU_SUP_INTEL
-obj-y			+= intel.o intel_pconfig.o
+obj-y			+= intel.o intel_pconfig.o tsx.o
 obj-$(CONFIG_PM)	+= intel_epb.o
 endif
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 897c8302d982..885d4ac2111a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1561,6 +1561,8 @@ void __init identify_boot_cpu(void)
 #endif
 	cpu_detect_tlb(&boot_cpu_data);
 	setup_cr_pinning();
+
+	tsx_init();
 }
 
 void identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index c0e2407abdd6..38ab6e115eac 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,22 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+#ifdef CONFIG_CPU_SUP_INTEL
+enum tsx_ctrl_states {
+	TSX_CTRL_ENABLE,
+	TSX_CTRL_DISABLE,
+	TSX_CTRL_NOT_SUPPORTED,
+};
+
+extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+
+extern void __init tsx_init(void);
+extern void tsx_enable(void);
+extern void tsx_disable(void);
+#else
+static inline void tsx_init(void) { }
+#endif /* CONFIG_CPU_SUP_INTEL */
+
 extern void get_cpu_cap(struct cpuinfo_x86 *c);
 extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
 extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
@@ -62,4 +78,6 @@ unsigned int aperfmperf_get_khz(int cpu);
 
 extern void x86_spec_ctrl_setup_ap(void);
 
+extern u64 x86_read_arch_cap_msr(void);
+
 #endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c2fdc00df163..11d5c5950e2d 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -762,6 +762,11 @@ static void init_intel(struct cpuinfo_x86 *c)
 		detect_tme(c);
 
 	init_intel_misc_features(c);
+
+	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+		tsx_enable();
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+		tsx_disable();
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
new file mode 100644
index 000000000000..d031a8b6fb0e
--- /dev/null
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Intel Transactional Synchronization Extensions (TSX) control.
+ *
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * Author:
+ *	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ */
+
+#include <linux/cpufeature.h>
+
+#include <asm/cmdline.h>
+
+#include "cpu.h"
+
+enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+
+void tsx_disable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Force all transactions to immediately abort */
+	tsx |= TSX_CTRL_RTM_DISABLE;
+	/*
+	 * Ensure TSX support is not enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * do not waste resources trying TSX transactions that
+	 * will always abort.
+	 */
+	tsx |= TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+void tsx_enable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Enable the RTM feature in the cpu */
+	tsx &= ~TSX_CTRL_RTM_DISABLE;
+	/*
+	 * Ensure TSX support is enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * can enumerate and use the TSX feature.
+	 */
+	tsx &= ~TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+static bool __init tsx_ctrl_is_supported(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However,
+	 * support for this MSR is enumerated by ARCH_CAP_TSX_MSR bit
+	 * in MSR_IA32_ARCH_CAPABILITIES.
+	 */
+	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+}
+
+void __init tsx_init(void)
+{
+	char arg[20];
+	int ret;
+
+	if (!tsx_ctrl_is_supported())
+		return;
+
+	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+	if (ret >= 0) {
+		if (!strcmp(arg, "on")) {
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
+		} else if (!strcmp(arg, "off")) {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+			pr_err("tsx: invalid option, defaulting to off\n");
+		}
+	} else {
+		/* tsx= not provided, defaulting to off */
+		tsx_ctrl_state = TSX_CTRL_DISABLE;
+	}
+
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
+		tsx_disable();
+		/*
+		 * tsx_disable() will change the state of the
+		 * RTM CPUID bit.  Clear it here since it is now
+		 * expected to be not set.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RTM);
+	} else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
+		/*
+		 * HW defaults TSX to be enabled at bootup.
+		 * We may still need the TSX enable support
+		 * during init for special cases like
+		 * kexec after TSX is disabled.
+		 */
+		tsx_enable();
+		/*
+		 * tsx_enable() will change the state of the
+		 * RTM CPUID bit.  Force it here since it is now
+		 * expected to be set.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RTM);
+	}
+}
-- 
2.20.1


* [MODERATED] [PATCH v7 04/10] TAAv7 4
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (2 preceding siblings ...)
  2019-10-21 20:25 ` [MODERATED] [PATCH v7 03/10] TAAv7 3 Pawan Gupta
@ 2019-10-21 20:26 ` Pawan Gupta
  2019-10-21 20:27 ` [MODERATED] [PATCH v7 05/10] TAAv7 5 Pawan Gupta
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:26 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 04/10] x86/speculation/taa: Add mitigation for TSX Async
 Abort

TSX Async Abort (TAA) is a side channel vulnerability to the internal
buffers in some Intel processors, similar to Microarchitectural Data
Sampling (MDS).  In this case certain loads may speculatively pass
invalid data to dependent operations when an asynchronous abort
condition is pending in a TSX transaction.  This includes loads with no
fault or assist condition.  Such loads may speculatively expose stale
data from the uarch data structures as in MDS.  The scope of exposure is
both same-thread and cross-thread.  This issue affects all current
processors that support TSX but do not have ARCH_CAP_TAA_NO (bit 8) set
in MSR_IA32_ARCH_CAPABILITIES.

On CPUs with the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 and
CPUID.MD_CLEAR=1, where the MDS mitigation already clears the CPU
buffers using VERW or L1D_FLUSH, no additional mitigation is needed for
TAA.

On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling
the Transactional Synchronization Extensions (TSX) feature.  A new MSR,
IA32_TSX_CTRL, available in future processors and in current processors
after a microcode update, can be used to control the TSX feature.  The
TSX_CTRL_RTM_DISABLE bit disables the TSX sub-feature Restricted
Transactional Memory (RTM).  The TSX_CTRL_CPUID_CLEAR bit clears the RTM
enumeration in CPUID.  The other TSX sub-feature, Hardware Lock Elision
(HLE), is unconditionally disabled by the updated microcode but still
enumerated as present by CPUID(EAX=7).EBX{bit4}.

The second mitigation approach is similar to MDS: clear the affected
CPU buffers on return to user space and when entering a guest.  A
relevant microcode update is required for the mitigation to work.  More
details on this approach can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html

The TSX feature can be controlled by the "tsx" command line parameter.
If the TSX feature is forced to be enabled, then "Clear CPU buffers"
(the MDS-style mitigation) is deployed.  The effective mitigation state
can be read from sysfs.
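The affectedness rule this patch adds (set X86_BUG_TAA when TAA_NO is
clear and either RTM or TSX_CTRL is enumerated) can be written as a pure
predicate for illustration; the function name is hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative predicate mirroring the cpu_set_bug_bits() check in this
 * patch: TSX_CTRL alone covers TSX having been disabled before boot
 * (e.g. kexec), while RTM covers CPUs without the microcode update or
 * guests that do not see TSX_CTRL. */
bool cpu_has_taa_bug(bool taa_no, bool has_rtm, bool has_tsx_ctrl)
{
	return !taa_no && (has_rtm || has_tsx_ctrl);
}
```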

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/include/asm/cpufeatures.h   |   1 +
 arch/x86/include/asm/msr-index.h     |   4 +
 arch/x86/include/asm/nospec-branch.h |   4 +-
 arch/x86/include/asm/processor.h     |   7 ++
 arch/x86/kernel/cpu/bugs.c           | 127 ++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/common.c         |  15 ++++
 6 files changed, 154 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 0652d3eed9bd..989e03544f18 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -399,5 +399,6 @@
 #define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
 #define X86_BUG_MSBDS_ONLY		X86_BUG(20) /* CPU is only affected by the  MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index da4caf6da739..b3a8bb2af0b6 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -94,6 +94,10 @@
 						  * Sampling (MDS) vulnerabilities.
 						  */
 #define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
+#define ARCH_CAP_TAA_NO			BIT(8)	/*
+						 * Not susceptible to
+						 * TSX Async Abort (TAA) vulnerabilities.
+						 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 80bc209c0708..5c24a7b35166 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -314,7 +314,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 #include <asm/segment.h>
 
 /**
- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * This uses the otherwise unused and obsolete VERW instruction in
  * combination with microcode which triggers a CPU buffer flush when the
@@ -337,7 +337,7 @@ static inline void mds_clear_cpu_buffers(void)
 }
 
 /**
- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 6e0a3b43d027..999b85039128 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -988,4 +988,11 @@ enum mds_mitigations {
 	MDS_MITIGATION_VMWERV,
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLE,
+};
+
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 91c2561b905f..42fb09d9af6c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init taa_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -105,6 +106,7 @@ void __init check_bugs(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 
 	arch_smt_update();
 
@@ -268,6 +270,110 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"TAA: " fmt
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static bool taa_nosmt __ro_after_init;
+
+static const char * const taa_strings[] = {
+	[TAA_MITIGATION_OFF]		= "Vulnerable",
+	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
+	[TAA_MITIGATION_TSX_DISABLE]	= "Mitigation: TSX disabled",
+};
+
+static void __init taa_select_mitigation(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/*
+	 * X86_BUG_TAA being set means the hardware supports TSX.  If TSX
+	 * was disabled (X86_FEATURE_RTM=0) earlier during tsx_init(),
+	 * select TSX_DISABLE as the mitigation.
+	 *
+	 * This check is ahead of mitigations=off and tsx_async_abort=off
+	 * because when TSX is disabled mitigation is already in place. This
+	 * ensures sysfs doesn't show "Vulnerable" when TSX is disabled.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+		pr_info("%s\n", taa_strings[taa_mitigation]);
+		return;
+	}
+
+	/* All mitigations turned off from cmdline (mitigations=off) */
+	if (cpu_mitigations_off()) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
+	if (taa_mitigation == TAA_MITIGATION_OFF) {
+		pr_info("%s\n", taa_strings[taa_mitigation]);
+		return;
+	}
+
+	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+	 * A microcode update fixes this behavior to clear CPU buffers.
+	 * Microcode update also adds support for MSR_IA32_TSX_CTRL which
+	 * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
+	 *
+	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+	 * update is required.
+	 */
+	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
+	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * TSX is enabled, so select the alternate mitigation for TAA, which
+	 * is the same as MDS.  Enable the MDS static branch to clear buffers.
+	 *
+	 * For guests that can't determine whether the correct microcode is
+	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+	 */
+	static_branch_enable(&mds_user_clear);
+
+	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+		cpu_smt_disable(false);
+
+	pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static int __init tsx_async_abort_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+	} else if (!strcmp(str, "full")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+	} else if (!strcmp(str, "full,nosmt")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+		taa_nosmt = true;
+	}
+
+	return 0;
+}
+early_param("tsx_async_abort", tsx_async_abort_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
 #undef pr_fmt
 #define pr_fmt(fmt) fmt
 
-/* Update the static key controlling the MDS CPU buffer clear in idle */
+/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
 static void update_mds_branch_idle(void)
 {
 	/*
@@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
 	 * The other variants cannot be mitigated when SMT is enabled, so
 	 * clearing the buffers on idle just to prevent the Store Buffer
 	 * repartitioning leak would be a window dressing exercise.
+	 *
+	 * Apply idle buffer clearing to TAA affected CPUs also.
 	 */
-	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
+	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
+	    !boot_cpu_has_bug(X86_BUG_TAA))
 		return;
 
 	if (sched_smt_active())
@@ -786,6 +895,7 @@ static void update_mds_branch_idle(void)
 }
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
 
 void cpu_bugs_smt_update(void)
 {
@@ -819,6 +929,19 @@ void cpu_bugs_smt_update(void)
 		break;
 	}
 
+	switch (taa_mitigation) {
+	case TAA_MITIGATION_VERW:
+	case TAA_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(TAA_MSG_SMT);
+		/* TSX is enabled, apply MDS idle buffer clearing. */
+		update_mds_branch_idle();
+		break;
+	case TAA_MITIGATION_TSX_DISABLE:
+	case TAA_MITIGATION_OFF:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 885d4ac2111a..86f22c1e5912 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (!cpu_matches(NO_SWAPGS))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
+	/*
+	 * When the processor is not mitigated for TAA (TAA_NO=0), set the TAA
+	 * bug when:
+	 *	- TSX is supported, or
+	 *	- TSX_CTRL is supported
+	 *
+	 * The TSX_CTRL check covers cases where TSX was disabled before the
+	 * kernel boots, e.g. kexec.  It is not sufficient on its own when the
+	 * microcode update is absent or when running as a guest without TSX_CTRL.
+	 */
+	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	    (boot_cpu_has(X86_FEATURE_RTM) ||
+	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+		setup_force_cpu_bug(X86_BUG_TAA);
+
 	if (cpu_matches(NO_MELTDOWN))
 		return;
 
-- 
2.20.1


* [MODERATED] [PATCH v7 05/10] TAAv7 5
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (3 preceding siblings ...)
  2019-10-21 20:26 ` [MODERATED] [PATCH v7 04/10] TAAv7 4 Pawan Gupta
@ 2019-10-21 20:27 ` Pawan Gupta
  2019-10-21 20:28 ` [MODERATED] [PATCH v7 06/10] TAAv7 6 Pawan Gupta
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:27 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 05/10] x86/speculation/taa: Add sysfs reporting for TSX
 Async Abort

Add the sysfs reporting file for TSX Async Abort.  It exposes the
vulnerability and mitigation state in the same way as the existing files
for the other hardware vulnerabilities.

The sysfs file path is:
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
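For illustration, the SMT qualifier that tsx_async_abort_show_state()
appends to the mitigation string can be modeled as a small pure
function (the function name here is hypothetical, not part of the
patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative mirror of the suffix logic in
 * tsx_async_abort_show_state(): no SMT qualifier when the mitigation is
 * "off" or "TSX disabled", an "unknown" note under a hypervisor, and
 * the scheduler's SMT state otherwise. */
const char *taa_smt_suffix(bool mitigated, bool hypervisor, bool smt_on)
{
	if (!mitigated)
		return "";			/* nothing SMT-specific to report */
	if (hypervisor)
		return "; SMT Host state unknown";	/* guest can't see host SMT */
	return smt_on ? "; SMT vulnerable" : "; SMT disabled";
}
```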

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
 drivers/base/cpu.c         |  9 +++++++++
 include/linux/cpu.h        |  3 +++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 42fb09d9af6c..3220d820159a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1451,6 +1451,21 @@ static ssize_t mds_show_state(char *buf)
 		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 
+static ssize_t tsx_async_abort_show_state(char *buf)
+{
+	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
+	    (taa_mitigation == TAA_MITIGATION_OFF))
+		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		return sprintf(buf, "%s; SMT Host state unknown\n",
+			       taa_strings[taa_mitigation]);
+	}
+
+	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+		       sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1521,6 +1536,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_MDS:
 		return mds_show_state(buf);
 
+	case X86_BUG_TAA:
+		return tsx_async_abort_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1557,4 +1575,9 @@ ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *bu
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
 }
+
+ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
+}
 #endif
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index cc37511de866..0fccd8c0312e 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -554,12 +554,20 @@ ssize_t __weak cpu_show_mds(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
 static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -568,6 +576,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_spec_store_bypass.attr,
 	&dev_attr_l1tf.attr,
 	&dev_attr_mds.attr,
+	&dev_attr_tsx_async_abort.attr,
 	NULL
 };
 
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index d0633ebdaa9c..f35369f79771 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -59,6 +59,9 @@ extern ssize_t cpu_show_l1tf(struct device *dev,
 			     struct device_attribute *attr, char *buf);
 extern ssize_t cpu_show_mds(struct device *dev,
 			    struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf);
 
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
-- 
2.20.1


* [MODERATED] [PATCH v7 06/10] TAAv7 6
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (4 preceding siblings ...)
  2019-10-21 20:27 ` [MODERATED] [PATCH v7 05/10] TAAv7 5 Pawan Gupta
@ 2019-10-21 20:28 ` Pawan Gupta
  2019-10-21 20:29 ` [MODERATED] [PATCH v7 07/10] TAAv7 7 Pawan Gupta
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:28 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 06/10] KVM: x86/speculation/taa: Export MDS_NO=0 to guests
 when TSX is enabled

Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX
Async Abort (TAA) affected hosts that have TSX enabled and updated
microcode. This is required so that the guests don't complain,

	"Vulnerable: Clear CPU buffers attempted, no microcode"

when the host has the updated microcode to clear CPU buffers.

The microcode update also adds support for MSR_IA32_TSX_CTRL, which is
enumerated by the ARCH_CAP_TSX_CTRL bit in the IA32_ARCH_CAPABILITIES
MSR. Guests can't perform this check themselves when the
ARCH_CAP_TSX_CTRL bit is not exported to them.

In this case, export MDS_NO=0 to the guests. When guests have
CPUID.MD_CLEAR=1, they deploy the MDS mitigation, which also mitigates
TAA.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kvm/x86.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 661e2bf38526..797113fa665d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1299,6 +1299,25 @@ static u64 kvm_get_arch_capabilities(void)
 	if (!boot_cpu_has_bug(X86_BUG_MDS))
 		data |= ARCH_CAP_MDS_NO;
 
+	/*
+	 * On TAA affected systems, export MDS_NO=0 when:
+	 *	- TSX is enabled on host, i.e. X86_FEATURE_RTM=1.
+	 *	- Updated microcode is present. This is detected by
+	 *	  the presence of ARCH_CAP_TSX_CTRL_MSR. This ensures
+	 *	  VERW clears CPU buffers.
+	 *
+	 * When MDS_NO=0 is exported, guests deploy the clear-CPU-buffers
+	 * mitigation and don't complain:
+	 *
+	 *	"Vulnerable: Clear CPU buffers attempted, no microcode"
+	 *
+	 * If TSX is disabled on the system, guests are also mitigated against
+	 * TAA, and the clear-CPU-buffers mitigation is not required for them.
+	 */
+	if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+	    (data & ARCH_CAP_TSX_CTRL_MSR))
+		data &= ~ARCH_CAP_MDS_NO;
+
 	return data;
 }
 
-- 
2.20.1


* [MODERATED] [PATCH v7 07/10] TAAv7 7
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (5 preceding siblings ...)
  2019-10-21 20:28 ` [MODERATED] [PATCH v7 06/10] TAAv7 6 Pawan Gupta
@ 2019-10-21 20:29 ` Pawan Gupta
  2019-10-21 20:30 ` [MODERATED] [PATCH v7 08/10] TAAv7 8 Pawan Gupta
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:29 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 07/10] x86/tsx: Add "auto" option to TSX cmdline parameter

Platforms which are not affected by X86_BUG_TAA may want the TSX feature
enabled. Add an "auto" option to the TSX cmdline parameter. With
tsx=auto, TSX is disabled when X86_BUG_TAA is present, and enabled
otherwise.

More details on X86_BUG_TAA can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 5 +++++
 arch/x86/kernel/cpu/tsx.c                       | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ad6b69057bb0..4d63dc8dc2fc 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4856,6 +4856,11 @@
 
 			on	- Enable TSX on the system.
 			off	- Disable TSX on the system.
+			auto	- Disable TSX if X86_BUG_TAA is present,
+				  otherwise enable TSX on the system.
+
+			More details on X86_BUG_TAA are here:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 
 			Not specifying this option is equivalent to tsx=off.
 
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index d031a8b6fb0e..9d7d3ad324b7 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -79,6 +79,11 @@ void __init tsx_init(void)
 			tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else if (!strcmp(arg, "auto")) {
+			if (boot_cpu_has_bug(X86_BUG_TAA))
+				tsx_ctrl_state = TSX_CTRL_DISABLE;
+			else
+				tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
-- 
2.20.1


* [MODERATED] [PATCH v7 08/10] TAAv7 8
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (6 preceding siblings ...)
  2019-10-21 20:29 ` [MODERATED] [PATCH v7 07/10] TAAv7 7 Pawan Gupta
@ 2019-10-21 20:30 ` Pawan Gupta
  2019-10-21 20:31 ` [MODERATED] [PATCH v7 09/10] TAAv7 9 Michal Hocko
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:30 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 08/10] x86/speculation/taa: Add documentation for TSX Async
 Abort

Add the documentation for TSX Async Abort. It includes a description of
the issue, how to check the mitigation state, how to control the
mitigation, and guidance for system administrators.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 254 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  36 +++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         | 116 ++++++++
 6 files changed, 409 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 06d0931119cc..0e77569bd5e0 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -486,6 +486,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 49311f3da6f2..0802b1c67452 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -12,3 +12,4 @@ are configurable at compile, boot or run time.
    spectre
    l1tf
    mds
+   tsx_async_abort
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
new file mode 100644
index 000000000000..852707b8748b
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -0,0 +1,254 @@
+TAA - TSX Asynchronous Abort
+======================================
+
+TAA is a hardware vulnerability that allows unprivileged speculative access to
+data which is available in various CPU internal buffers by using asynchronous
+aborts within an Intel TSX transactional region.
+
+Affected processors
+-------------------
+
+This vulnerability only affects Intel processors that support Intel
+Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
+is 0 in the IA32_ARCH_CAPABILITIES MSR.  On processors where the MDS_NO bit
+(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
+also mitigate against TAA.
+
+Whether a processor is affected or not can be read out from the TAA
+vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entry is related to this TAA issue:
+
+   ==============  =====  ===================================================
+   CVE-2019-11135  TAA    TSX Asynchronous Abort (TAA) condition on some
+                          microprocessors utilizing speculative execution may
+                          allow an authenticated user to potentially enable
+                          information disclosure via a side channel with
+                          local access.
+   ==============  =====  ===================================================
+
+Problem
+-------
+
+When performing store, load, L1 refill operations, processors write data into
+temporary microarchitectural structures (buffers). The data in the buffer can
+be forwarded to load operations as an optimization.
+
+Intel TSX is an extension to the x86 instruction set architecture that adds
+hardware transactional memory support to improve the performance of
+multi-threaded software. TSX lets the processor expose and exploit concurrency
+hidden in an application by dynamically avoiding unnecessary synchronization.
+
+TSX supports atomic memory transactions that are either committed (success) or
+aborted. During an abort, operations that happened within the transactional region
+are rolled back. An asynchronous abort takes place, among other options, when a
+different thread accesses a cache line that is also used within the transactional
+region when that access might lead to a data race.
+
+Immediately after an uncompleted asynchronous abort, certain speculatively
+executed loads may read data from those internal buffers and pass it to dependent
+operations. This can then be used to infer the value via a cache side channel
+attack.
+
+Because the buffers are potentially shared between Hyper-Threads, cross
+Hyper-Thread attacks are possible.
+
+The victim of a malicious actor does not need to make use of TSX. Only the
+attacker needs to begin a TSX transaction and raise an asynchronous abort
+to try to leak some of the data stored in the buffers.
+
+Deeper technical information is available in the TAA specific x86 architecture
+section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
+
+
+Attack scenarios
+----------------
+
+Attacks against the TAA vulnerability can be implemented from unprivileged
+applications running on hosts or guests.
+
+As for MDS, the attacker has no control over the memory addresses that can be
+leaked. Only the victim is responsible for bringing data to the CPU. As a
+result, the malicious actor has to first sample as much data as possible and
+then postprocess it to try to infer any useful information from it.
+
+A potential attacker only has read access to the data. Also, there is no direct
+privilege escalation by using this technique.
+
+
+.. _tsx_async_abort_sys_info:
+
+TAA system information
+-----------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current TAA status
+of mitigated systems. The relevant sysfs file is:
+
+/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+
+The possible values in this file are:
+
+.. list-table::
+
+   * - 'Vulnerable'
+     - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
+   * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
+     - The system tries to clear the buffers but the microcode might not support the operation.
+   * - 'Mitigation: Clear CPU buffers'
+     - The microcode has been updated to clear the buffers. TSX is still enabled.
+   * - 'Mitigation: TSX disabled'
+     - TSX is disabled.
+   * - 'Not affected'
+     - The CPU is not affected by this issue.
+
+.. _ucode_needed:
+
+Best effort mitigation mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the processor is vulnerable, but the availability of the microcode-based
+mitigation mechanism is not advertised via CPUID the kernel selects a best
+effort mitigation mode.  This mode invokes the mitigation instructions
+without a guarantee that they clear the CPU buffers.
+
+This is done to address virtualization scenarios where the host has the
+microcode update applied, but the hypervisor is not yet updated to expose the
+CPUID to the guest. If the host has updated microcode the protection takes
+effect; otherwise a few CPU cycles are wasted pointlessly.
+
+The state in the tsx_async_abort sysfs file reflects this situation
+accordingly.
+
+
+Mitigation mechanism
+--------------------
+
+The kernel detects the affected CPUs and the presence of the microcode which is
+required. If a CPU is affected and the microcode is available, then the kernel
+enables the mitigation by default.
+
+
+The mitigation can be controlled at boot time via a kernel command line option.
+See :ref:`taa_mitigation_control_command_line`.
+
+.. _virt_mechanism:
+
+Virtualization mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Affected systems where the host has the TAA microcode and the TAA mitigation is
+ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
+
+In all other cases, if the host either does not have the TAA microcode or the
+kernel is not mitigated, the system might be vulnerable.
+
+
+.. _taa_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+The kernel command line allows controlling the TAA mitigations at boot time
+the option "tsx_async_abort=". The valid arguments for this option are:
+
+  ============  =============================================================
+  off		This option disables the TAA mitigation on affected platforms.
+                If the system has TSX enabled (see next parameter) and the CPU
+                is affected, the system is vulnerable.
+
+  full	        TAA mitigation is enabled. If TSX is enabled, on an affected
+                system it will clear CPU buffers on ring transitions. On
+                systems which are MDS-affected and deploy MDS mitigation,
+                TAA is also mitigated. Specifying this option on those
+                systems will have no effect.
+
+  full,nosmt    The same as tsx_async_abort=full, with SMT disabled on
+                vulnerable CPUs that have TSX enabled. This is the complete
+                mitigation. When TSX is disabled, SMT is not disabled because
                the CPU is not vulnerable to cross-thread TAA attacks.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx_async_abort=full".
+
+The kernel command line also allows controlling the TSX feature using the
+parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+to control the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+The valid options are:
+
+  ============  =============================================================
+  off		Disables TSX.
+
+  on		Enables TSX.
+
+  auto		Disables TSX on affected platforms, otherwise enables TSX.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx=off".
+
+The following combinations of "tsx_async_abort" and "tsx" are possible. For
+affected platforms, tsx=auto is equivalent to tsx=off, and the result will be:
+
+  =========  ====================   =========================================
+  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
+                                    buffers.
+  tsx=on     tsx_async_abort=off    The system is vulnerable.
+  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
+  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
+  =========  ====================   =========================================
+
+For unaffected platforms, "tsx=on" and "tsx_async_abort=full" do not clear CPU
+buffers. For platforms without TSX control, the "tsx" command line argument
+has no effect.
+
+For affected platforms, the table below indicates the mitigation status for
+the combinations of the CPUID bit MD_CLEAR and the IA32_ARCH_CAPABILITIES MSR
+bits MDS_NO and TSX_CTRL_MSR.
+
+  =======  =========  =============  ========================================
+  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
+  =======  =========  =============  ========================================
+    0          0            0        Vulnerable (needs ucode)
+    0          1            0        MDS and TAA mitigated via VERW
+    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
+                                     because MD_CLEAR has no meaning and
+                                     VERW is not guaranteed to clear buffers
+    1          X            1        MDS fixed, TAA can be mitigated by
+                                     VERW or TSX_CTRL_MSR
+  =======  =========  =============  ========================================
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If all user space applications are from a trusted source and do not execute
+untrusted code which is supplied externally, then the mitigation can be
+disabled. The same applies to virtualized environments with trusted guests.
+
+
+2. Untrusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If there are untrusted applications or guests on the system, enabling TSX
+might allow a malicious actor to leak data from the host or from other
+processes running on the same physical core.
+
+If the microcode is available and TSX is disabled on the host, attacks
+are prevented in a virtualized environment as well, even if the VMs do not
+explicitly enable the mitigation.
+
+
+.. _taa_default_mitigations:
+
+Default mitigations
+-------------------
+
+The kernel's default action for vulnerable processors is:
+
+  - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4d63dc8dc2fc..e4f7ee737eec 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2636,6 +2636,7 @@
 					       ssbd=force-off [ARM64]
 					       l1tf=off [X86]
 					       mds=off [X86]
+					       tsx_async_abort=off [X86]
 
 			auto (default)
 				Mitigate all CPU vulnerabilities, but leave SMT
@@ -2651,6 +2652,7 @@
 				be fully mitigated, even if it means losing SMT.
 				Equivalent to: l1tf=flush,nosmt [X86]
 					       mds=full,nosmt [X86]
+					       tsx_async_abort=full,nosmt [X86]
 
 	mminit_loglevel=
 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
@@ -4864,6 +4866,40 @@
 
 			Not specifying this option is equivalent to tsx=off.
 
+	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+			Abort (TAA) vulnerability.
+
+			Similar to Micro-architectural Data Sampling (MDS)
+			certain CPUs that support Transactional
+			Synchronization Extensions (TSX) are vulnerable to an
+			exploit against CPU internal buffers which can forward
+			information to a disclosure gadget under certain
+			conditions.
+
+			In vulnerable processors, the speculatively forwarded
+			data can be used in a cache side channel attack, to
+			access data to which the attacker does not have direct
+			access.
+
+			This parameter controls the TAA mitigation.  The
+			options are:
+
+			full       - Enable TAA mitigation on vulnerable CPUs
+			full,nosmt - Enable TAA mitigation and disable SMT on
+				     vulnerable CPUs. If TSX is disabled, SMT
+				     is not disabled because CPU is not
+				     vulnerable to cross-thread TAA attacks.
+			off        - Unconditionally disable TAA mitigation
+
+			Not specifying this option is equivalent to
+			tsx_async_abort=full.  On CPUs which are MDS affected
+			and deploy MDS mitigation, TAA mitigation is not
+			required and doesn't provide any additional
+			mitigation.
+
+			For details see:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index af64c4bb4447..a8de2fbc1caa 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -27,6 +27,7 @@ x86-specific Documentation
    mds
    microcode
    resctrl_ui
+   tsx_async_abort
    usb-legacy-support
    i386/index
    x86_64/index
diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst
new file mode 100644
index 000000000000..41d801309fbb
--- /dev/null
+++ b/Documentation/x86/tsx_async_abort.rst
@@ -0,0 +1,116 @@
+TSX Async Abort (TAA) mitigation
+=================================================
+
+.. _tsx_async_abort:
+
+Overview
+--------
+
+TSX Async Abort (TAA) is a side channel attack on internal buffers in some
+Intel processors similar to Microarchitectural Data Sampling (MDS).  In this
+case certain loads may speculatively pass invalid data to dependent operations
+when an asynchronous abort condition is pending in a Transactional
+Synchronization Extensions (TSX) transaction.  This includes loads with no
+fault or assist condition. Such loads may speculatively expose stale data from
+the same uarch data structures as in MDS, with the same scope of exposure,
+i.e. same-thread and cross-thread. This affects all current processors that
+support TSX.
+
+Mitigation strategy
+-------------------
+
+a) TSX disable - One of the mitigations is to disable the TSX feature. A new
+MSR, IA32_TSX_CTRL, is available in future processors, and in current
+processors after a microcode update, and can be used to disable TSX. This MSR
+controls both the TSX feature and the enumeration of the TSX feature bits
+(RTM and HLE) in CPUID.
+
+b) CPU clear buffers - Similar to MDS, clearing the CPU buffers mitigates this
+vulnerability. More details on this approach can be found here
+https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
+
+Kernel internal mitigation modes
+--------------------------------
+
+ =============    ============================================================
+ off              Mitigation is disabled. Either the CPU is not affected or
+                  tsx_async_abort=off is supplied on the kernel command line.
+
+ tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
+                  bootup on processors that support TSX control.
+
+ verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
+                  advertised in CPUID.
+
+ ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
+                  advertised in CPUID. That is mainly for virtualization
+                  scenarios where the host has the updated microcode but the
+                  hypervisor does not expose MD_CLEAR in CPUID. It's a best
+                  effort approach without guarantee.
+ =============    ============================================================
+
+If the CPU is affected and the "tsx_async_abort" kernel command line parameter
+is not provided, the kernel selects an appropriate mitigation depending on the
+status of the RTM and MD_CLEAR CPUID bits.
+
+The tables below indicate the impact of the tsx=on|off|auto cmdline options on
+the state of the TAA mitigation, VERW behavior and the TSX feature for various
+combinations of MSR_IA32_ARCH_CAPABILITIES bits.
+
+1. "tsx=off"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=off
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default         Yes           Same as MDS           Same as MDS
+    0          0           1        Invalid case   Invalid case       Invalid case          Invalid case
+    0          1           0         HW default         No         Need ucode update     Need ucode update
+    0          1           1          Disabled          Yes           TSX disabled          TSX disabled
+    1          X           1          Disabled           X             None needed           None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+2. "tsx=on"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=on
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default        Yes            Same as MDS          Same as MDS
+    0          0           1        Invalid case   Invalid case       Invalid case         Invalid case
+    0          1           0         HW default        No          Need ucode update     Need ucode update
+    0          1           1          Enabled          Yes               None              Same as MDS
+    1          X           1          Enabled          X              None needed          None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+3. "tsx=auto"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=auto
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default    Yes                Same as MDS           Same as MDS
+    0          0           1        Invalid case  Invalid case        Invalid case          Invalid case
+    0          1           0         HW default    No              Need ucode update     Need ucode update
+    0          1           1          Disabled      Yes               TSX disabled          TSX disabled
+    1          X           1          Enabled       X                 None needed           None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
+indicates whether MSR_IA32_TSX_CTRL is supported.
+
+There are two control bits in IA32_TSX_CTRL MSR:
+
+      Bit 0: When set it disables the Restricted Transactional Memory (RTM)
+             sub-feature of TSX (will force all transactions to abort on the
+             XBEGIN instruction).
+
+      Bit 1: When set it disables the enumeration of the RTM and HLE feature
+             (i.e. it will make CPUID(EAX=7).EBX{bit4} and
+             CPUID(EAX=7).EBX{bit11} read as 0).
-- 
2.20.1

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] [PATCH v7 09/10] TAAv7 9
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (7 preceding siblings ...)
  2019-10-21 20:30 ` [MODERATED] [PATCH v7 08/10] TAAv7 8 Pawan Gupta
@ 2019-10-21 20:31 ` Michal Hocko
  2019-10-21 20:32 ` [MODERATED] [PATCH v7 10/10] TAAv7 10 Pawan Gupta
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Michal Hocko @ 2019-10-21 20:31 UTC (permalink / raw)
  To: speck

From: Michal Hocko <mhocko@suse.com>
Subject: [PATCH v7 09/10] x86/tsx: Add config options to set tsx=on|off|auto

There is a general consensus that TSX usage is not widespread, while
history shows there is a non-trivial attack surface for side channel
attacks. Therefore TSX is disabled by default, even on platforms that
might have a safe implementation of TSX according to the current
knowledge. This is a fair trade-off to make.

There are, however, workloads that really do benefit from using TSX, and
updating to a newer kernel with TSX disabled might introduce a
noticeable regression. This would be a particular problem for Linux
distributions which will provide TAA mitigations.

Introduce config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON
and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config
setting can be overridden by the tsx= cmdline option.

Suggested-by: Borislav Petkov <bpetkov@suse.de>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/Kconfig          | 45 +++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/tsx.c | 22 +++++++++++++------
 2 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d6e1faa28c58..eebae89726c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1940,6 +1940,51 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
 
 	  If unsure, say y.
 
+choice
+	prompt "TSX enable mode"
+	depends on CPU_SUP_INTEL
+	default X86_INTEL_TSX_MODE_OFF
+	help
+	  Intel's TSX (Transactional Synchronization Extensions) feature
+	  allows locking protocols to be optimized through lock elision,
+	  which can lead to a noticeable performance boost.
+
+	  On the other hand, it has been shown that TSX can be exploited
+	  to mount side channel attacks (e.g. TAA), and chances are that
+	  more of those attacks will be discovered in the future.
+
+	  Therefore TSX is not enabled by default (i.e. tsx=off). An admin
+	  might override this decision with the tsx=on command line parameter.
+	  This carries the risk that TSX also gets enabled on platforms which
+	  are known to be vulnerable to attacks like TAA; a safer option is
+	  the tsx=auto command line parameter.
+
+	  This option sets the default TSX mode to one of tsx=on, off or
+	  auto. See Documentation/admin-guide/kernel-parameters.txt for more
+	  details.
+
+	  Say off if unsure, auto if TSX is in use but should only be enabled
+	  on safe platforms, or on if TSX is in use and the security aspect of
+	  TSX is not relevant.
+
+config X86_INTEL_TSX_MODE_OFF
+	bool "off"
+	help
+	  TSX is always disabled - equals tsx=off command line parameter.
+
+config X86_INTEL_TSX_MODE_ON
+	bool "on"
+	help
+	  TSX is always enabled on TSX capable HW - equals tsx=on command line
+	  parameter.
+
+config X86_INTEL_TSX_MODE_AUTO
+	bool "auto"
+	help
+	  TSX is enabled on TSX capable HW that is believed to be safe against
+	  side channel attacks - equals tsx=auto command line parameter.
+endchoice
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index 9d7d3ad324b7..bd74f216f026 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -65,6 +65,14 @@ static bool __init tsx_ctrl_is_supported(void)
 	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
 }
 
+static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		return TSX_CTRL_DISABLE;
+
+	return TSX_CTRL_ENABLE;
+}
+
 void __init tsx_init(void)
 {
 	char arg[20];
@@ -80,17 +88,19 @@ void __init tsx_init(void)
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 		} else if (!strcmp(arg, "auto")) {
-			if (boot_cpu_has_bug(X86_BUG_TAA))
-				tsx_ctrl_state = TSX_CTRL_DISABLE;
-			else
-				tsx_ctrl_state = TSX_CTRL_ENABLE;
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
 		}
 	} else {
-		/* tsx= not provided, defaulting to off */
-		tsx_ctrl_state = TSX_CTRL_DISABLE;
+		/* tsx= not provided */
+		if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_AUTO))
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
+		else if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_OFF))
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		else
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
 	}
 
 	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
-- 
2.20.1


* [MODERATED] [PATCH v7 10/10] TAAv7 10
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (8 preceding siblings ...)
  2019-10-21 20:31 ` [MODERATED] [PATCH v7 09/10] TAAv7 9 Michal Hocko
@ 2019-10-21 20:32 ` Pawan Gupta
  2019-10-21 21:32 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Andy Lutomirski
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:32 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 10/10] x86/tsx: Add sysfs interface to control TSX

Transactional Synchronization Extensions (TSX) is an extension to the
x86 instruction set architecture (ISA) that adds Hardware Transactional
Memory (HTM) support. Changing TSX state currently requires a reboot.
This may not be desirable when rebooting imposes a huge penalty. Add
support to control the TSX feature via a new sysfs file:
/sys/devices/system/cpu/hw_tx_mem

- Writing 0|off|N|n to this file disables the TSX feature on all CPUs.
  This is equivalent to the boot parameter tsx=off.
- Writing 1|on|Y|y to this file enables the TSX feature on all CPUs.
  This is equivalent to the boot parameter tsx=on.
- Reading from this file returns the status of the TSX feature.
- When TSX control is not supported, this interface is not visible in
  sysfs.

Changing the TSX state from this interface also updates the CPUID.RTM
feature bit.  From the kernel side, this feature bit doesn't result in
any ALTERNATIVE code patching.  No memory allocations are done to
save/restore user state. No code paths outside of the tests for
vulnerability to TAA are dependent on the value of the feature bit.  In
general the kernel doesn't care whether RTM is present or not.

Applications typically look at CPUID bits once at startup (or when first
calling into a library that uses the feature).  So we have a couple of
cases to cover:

1) An application started and saw that RTM was enabled, so began
   to use it. Then TSX was disabled.  Net result in this case is that
   the application will keep trying to use RTM, but every xbegin() will
   immediately abort the transaction. This has a performance impact on
   the application, but it doesn't affect correctness because all users
   of RTM must have a fallback path for when the transaction aborts. Note
   that even if an application is in the middle of a transaction when we
   disable RTM, we are safe. The IPI that we use to update the TSX_CTRL
   MSR will abort the transaction (just as any interrupt would abort
   a transaction).

2) An application starts and sees RTM is not available. So it will
   always use alternative paths. Even if TSX is enabled and RTM is set,
   applications in general do not re-evaluate their choice so will
   continue to run in non-TSX mode.

When the TSX state is changed from the sysfs interface, TSX Async Abort
(TAA) mitigation state also needs to be updated. Set the TAA mitigation
state as per TSX and VERW static branch state.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |  23 ++++
 .../admin-guide/hw-vuln/tsx_async_abort.rst   |  30 ++++-
 arch/x86/kernel/cpu/bugs.c                    |  21 +++-
 arch/x86/kernel/cpu/cpu.h                     |   3 +-
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 114 +++++++++++++++++-
 drivers/base/cpu.c                            |  32 ++++-
 include/linux/cpu.h                           |   6 +
 8 files changed, 229 insertions(+), 5 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 0e77569bd5e0..e4f12162abb8 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -573,3 +573,26 @@ Description:	Secure Virtual Machine
 		If 1, it means the system is using the Protected Execution
 		Facility in POWER9 and newer processors. i.e., it is a Secure
 		Virtual Machine.
+
+What:		/sys/devices/system/cpu/hw_tx_mem
+Date:		August 2019
+Contact:	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+		Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:	Hardware Transactional Memory (HTM) control.
+
+		Read/write interface to control HTM feature for all the CPUs in
+		the system.  This interface is only present on platforms that
+		support HTM control. HTM is a hardware feature to speed up the
+		execution of multi-threaded software through lock elision. An
+		example of HTM implementation is Intel Transactional
+		Synchronization Extensions (TSX).
+
+			Read returns the status of HTM feature.
+
+			0: HTM is disabled
+			1: HTM is enabled
+
+			Write sets the state of HTM feature.
+
+			0: Disables HTM
+			1: Enables HTM
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
index 852707b8748b..7ea0b2ef71d2 100644
--- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -132,7 +132,8 @@ enables the mitigation by default.
 
 
 The mitigation can be controlled at boot time via a kernel command line option.
-See :ref:`taa_mitigation_control_command_line`.
+See :ref:`taa_mitigation_control_command_line`. It also provides a sysfs
+interface. See :ref:`taa_mitigation_sysfs`.
 
 .. _virt_mechanism:
 
@@ -221,6 +222,33 @@ and TSX_CTRL_MSR.
                                      VERW or TSX_CTRL_MSR
   =======  =========  =============  ========================================
 
+.. _taa_mitigation_sysfs:
+
+Mitigation control using sysfs
+------------------------------
+
+For those affected systems that cannot be frequently rebooted to enable or
+disable TSX, sysfs can be used as an alternative after installing the updates.
+The possible values for the file /sys/devices/system/cpu/hw_tx_mem are:
+
+  ===  =======================================================================
+  0    Disable TSX. Upon entering a TSX transactional region, the code will
+       immediately abort, before any instruction executes within the
+       transactional region (even speculatively), and continue on the fallback.
+       Equivalent to boot parameter "tsx=off".
+  1    Enable TSX. Equivalent to boot parameter "tsx=on".
+  ===  =======================================================================
+
+Reading from this file returns the status of the TSX feature. This file is only
+present on systems that support TSX control.
+
+When disabling TSX by using the sysfs mechanism, applications that are already
+running and use TSX will see their transactional regions aborted and execution
+flow will be redirected to the fallback, losing the benefits of the
+non-blocking path. TSX needs fallback code to guarantee correct execution
+without transactional regions.
+
+
 Mitigation selection guide
 --------------------------
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3220d820159a..fd39c053c060 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -274,7 +274,7 @@ early_param("mds", mds_cmdline);
 #define pr_fmt(fmt)	"TAA: " fmt
 
 /* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -374,6 +374,25 @@ static int __init tsx_async_abort_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_cmdline);
 
+void taa_update_mitigation(bool tsx_enabled)
+{
+	/*
+	 * When userspace changes the TSX state, update taa_mitigation
+	 * so that the updated mitigation state is shown in:
+	 * /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+	 *
+	 * If TSX is disabled, the mitigation state is TSX disabled.
+	 * Else if CPU buffer clear (VERW) is enabled, the state is VERW.
+	 * Otherwise the system is vulnerable.
+	 */
+	if (!tsx_enabled)
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+	else if (static_key_count(&mds_user_clear.key))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_OFF;
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 38ab6e115eac..c0e2ae982692 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -51,11 +51,12 @@ enum tsx_ctrl_states {
 	TSX_CTRL_NOT_SUPPORTED,
 };
 
-extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+extern enum tsx_ctrl_states tsx_ctrl_state;
 
 extern void __init tsx_init(void);
 extern void tsx_enable(void);
 extern void tsx_disable(void);
+extern void taa_update_mitigation(bool tsx_enabled);
 #else
 static inline void tsx_init(void) { }
 #endif /* CONFIG_CPU_SUP_INTEL */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 11d5c5950e2d..e0a39c631d05 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -763,6 +763,11 @@ static void init_intel(struct cpuinfo_x86 *c)
 
 	init_intel_misc_features(c);
 
+	/*
+	 * TSX_CTRL MSR is updated from sysfs also. Writes from sysfs take the
+	 * cpu_hotplug_lock. This ensures CPUs can't come online and race with
+	 * MSR writes from sysfs.
+	 */
 	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
 		tsx_enable();
 	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index bd74f216f026..0969e6e9dff3 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -9,12 +9,15 @@
  */
 
 #include <linux/cpufeature.h>
+#include <linux/cpu.h>
 
 #include <asm/cmdline.h>
 
 #include "cpu.h"
 
-enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+static DEFINE_MUTEX(tsx_mutex);
+
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
 
 void tsx_disable(void)
 {
@@ -127,3 +130,112 @@ void __init tsx_init(void)
 		setup_force_cpu_cap(X86_FEATURE_RTM);
 	}
 }
+
+static void tsx_update_this_cpu(void *arg)
+{
+	unsigned long enable = (unsigned long)arg;
+
+	if (enable)
+		tsx_enable();
+	else
+		tsx_disable();
+}
+
+/* Callers must hold tsx_mutex and update tsx_ctrl_state accordingly */
+static void tsx_update_on_each_cpu(bool state)
+{
+	get_online_cpus();
+	on_each_cpu(tsx_update_this_cpu, (void *)state, 1);
+	put_online_cpus();
+}
+
+ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
+		       char *buf)
+{
+	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
+}
+
+ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
+			const char *buf, size_t count)
+{
+	enum tsx_ctrl_states requested_state;
+	ssize_t ret;
+	bool state;
+
+	ret = kstrtobool(buf, &state);
+	if (ret)
+		return ret;
+
+	if (state)
+		requested_state = TSX_CTRL_ENABLE;
+	else
+		requested_state = TSX_CTRL_DISABLE;
+
+	/*
+	 * Protect tsx_ctrl_state and TSX update on_each_cpu() from concurrent
+	 * writers.
+	 *
+	 *  - Serialize TSX_CTRL MSR writes across all CPUs when there are
+	 *    concurrent sysfs requests. on_each_cpu() callback execution
+	 *    order on other CPUs can be different for multiple calls to
+	 *    on_each_cpu(). For conflicting concurrent sysfs requests the
+	 *    lock ensures all CPUs have updated the TSX_CTRL MSR before the
+	 *    next call to on_each_cpu() can be processed.
+	 *  - Serialize tsx_ctrl_state update so that it doesn't get out of
+	 *    sync with TSX_CTRL MSR.
+	 *  - Serialize update to taa_mitigation.
+	 */
+	mutex_lock(&tsx_mutex);
+
+	/* Current state is the same as the requested state, do nothing */
+	if (tsx_ctrl_state == requested_state)
+		goto exit;
+
+	tsx_ctrl_state = requested_state;
+
+	/*
+	 * Changing the TSX state from this interface also updates CPUID.RTM
+	 * feature bit. From the kernel side, this feature bit doesn't result
+	 * in any ALTERNATIVE code patching.  No memory allocations are done to
+	 * save/restore user state. No code paths outside of the tests for
+	 * vulnerability to TAA are dependent on the value of the feature bit.
+	 * In general the kernel doesn't care whether RTM is present or not.
+	 *
+	 * From the user side it is a bit fuzzier. Applications typically look
+	 * at CPUID bits once at startup (or when first calling into a library
+	 * that uses the feature).  So we have a couple of cases to cover:
+	 *
+	 * 1) An application started and saw that RTM was enabled, so began
+	 *    to use it. Then TSX was disabled.  Net result in this case is
+	 *    that the application will keep trying to use RTM, but every
+	 *    xbegin() will immediately abort the transaction. This has a
+	 *    performance impact on the application, but it doesn't affect
+	 *    correctness because all users of RTM must have a fallback path
+	 *    for when the transaction aborts. Note that even if an application
+	 *    is in the middle of a transaction when we disable RTM, we are
+	 *    safe. The IPI that we use to update the TSX_CTRL MSR will abort
+	 *    the transaction (just as any interrupt would abort a
+	 *    transaction).
+	 *
+	 * 2) An application starts and sees RTM is not available. So it will
+	 *    always use alternative paths. Even if TSX is enabled and RTM is
+	 *    set, applications in general do not re-evaluate their choice so
+	 *    will continue to run in non-TSX mode.
+	 */
+	tsx_update_on_each_cpu(state);
+
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		taa_update_mitigation(state);
+exit:
+	mutex_unlock(&tsx_mutex);
+
+	return count;
+}
+
+umode_t hw_tx_mem_is_visible(void)
+{
+	if (tsx_ctrl_state == TSX_CTRL_NOT_SUPPORTED)
+		return 0;
+
+	return 0644;
+}
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 0fccd8c0312e..2148e974ab0a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -460,6 +460,34 @@ struct device *cpu_device_create(struct device *parent, void *drvdata,
 }
 EXPORT_SYMBOL_GPL(cpu_device_create);
 
+ssize_t __weak hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf)
+{
+	return -ENODEV;
+}
+
+ssize_t __weak hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count)
+{
+	return -ENODEV;
+}
+
+DEVICE_ATTR_RW(hw_tx_mem);
+
+umode_t __weak hw_tx_mem_is_visible(void)
+{
+	return 0;
+}
+
+static umode_t cpu_root_attrs_is_visible(struct kobject *kobj,
+					 struct attribute *attr, int index)
+{
+	if (attr == &dev_attr_hw_tx_mem.attr)
+		return hw_tx_mem_is_visible();
+
+	return attr->mode;
+}
+
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 static DEVICE_ATTR(modalias, 0444, print_cpu_modalias, NULL);
 #endif
@@ -481,11 +509,13 @@ static struct attribute *cpu_root_attrs[] = {
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 	&dev_attr_modalias.attr,
 #endif
+	&dev_attr_hw_tx_mem.attr,
 	NULL
 };
 
 static struct attribute_group cpu_root_attr_group = {
-	.attrs = cpu_root_attrs,
+	.attrs		= cpu_root_attrs,
+	.is_visible	= cpu_root_attrs_is_visible,
 };
 
 static const struct attribute_group *cpu_root_attr_groups[] = {
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index f35369f79771..104c293add42 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -63,6 +63,12 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
 					struct device_attribute *attr,
 					char *buf);
 
+extern ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf);
+extern ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count);
+extern umode_t hw_tx_mem_is_visible(void);
+
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
 				 const struct attribute_group **groups,
-- 
2.20.1


* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (9 preceding siblings ...)
  2019-10-21 20:32 ` [MODERATED] [PATCH v7 10/10] TAAv7 10 Pawan Gupta
@ 2019-10-21 21:32 ` Andy Lutomirski
  2019-10-21 23:06   ` Andrew Cooper
  2019-10-22  0:34   ` Pawan Gupta
  2019-10-22  4:10 ` [MODERATED] Jon Masters
                   ` (14 subsequent siblings)
  25 siblings, 2 replies; 78+ messages in thread
From: Andy Lutomirski @ 2019-10-21 21:32 UTC (permalink / raw)
  To: speck


On 10/21/19 1:22 PM, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 00/10] TAAv7
> 
> Changes since v6:
> - Add Michal's patch to allow tsx=on|off|auto via CONFIG
> - Rebase to v5.4-rc4
> - Changelog, comments and documentation update.
> 
> Changes since v5:
> - Remove unsafe X86_FEATURE_RTM toggles.

I'm wondering if maybe these patches shouldn't touch the cpu
capabilities at all.  After all, even with TSX toggled off, TSX is still
present -- XBEGIN doesn't give #UD.  By making this change, we avoid
needing to even consider what happens when a cpu capability bit changes
after boot.

I think it would be nice to expose whether TSX is enabled somewhere in
sysfs, but I'm not convinced that "features" is the place for it.



* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-21 21:32 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Andy Lutomirski
@ 2019-10-21 23:06   ` Andrew Cooper
  2019-10-22  0:34   ` Pawan Gupta
  1 sibling, 0 replies; 78+ messages in thread
From: Andrew Cooper @ 2019-10-21 23:06 UTC (permalink / raw)
  To: speck


On 21/10/2019 22:32, speck for Andy Lutomirski wrote:
> On 10/21/19 1:22 PM, speck for Pawan Gupta wrote:
>> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>> Subject: [PATCH v7 00/10] TAAv7
>>
>> Changes since v6:
>> - Add Michal's patch to allow tsx=on|off|auto via CONFIG
>> - Rebase to v5.4-rc4
>> - Changelog, comments and documentation update.
>>
>> Changes since v5:
>> - Remove unsafe X86_FEATURE_RTM toggles.
> I'm wondering if maybe these patches shouldn't touch the cpu
> capabilities at all.  After all, even with TSX toggled off, TSX is still
> present -- XBEGIN doesn't give #UD.

So, there is some history here.

The first microcode version had XBEGIN giving #UD, and a fair chunk of
ancillary behaviour revolved around that.

However, that is rather catastrophic for late-loading the microcode and
securing the system in place, so the behaviour was relaxed from #UD to
abort.

That way, folk who late-load the microcode can secure their systems by
setting the RTM_ABORT bit, and folk booting "fresh" with the new
microcode can set both the RTM_ABORT and CPUID bits, to cause userspace
to believe there is no RTM at all, and avoid their unconditional abort.

~Andrew

P.S. I did ask if we could have both #UD and abort, and got a firm no
based on patch space.



* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-21 21:32 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Andy Lutomirski
  2019-10-21 23:06   ` Andrew Cooper
@ 2019-10-22  0:34   ` Pawan Gupta
  1 sibling, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22  0:34 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 02:32:31PM -0700, speck for Andy Lutomirski wrote:
> On 10/21/19 1:22 PM, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH v7 00/10] TAAv7
> > 
> > Changes since v6:
> > - Add Michal's patch to allow tsx=on|off|auto via CONFIG
> > - Rebase to v5.4-rc4
> > - Changelog, comments and documentation update.
> > 
> > Changes since v5:
> > - Remove unsafe X86_FEATURE_RTM toggles.
> 
> I'm wondering if maybe these patches shouldn't touch the cpu
> capabilities at all.  After all, even with TSX toggled off, TSX is still
> present -- XBEGIN doesn't give #UD. By making this change, we avoid
> needing to even consider what happens when a cpu capability bit changes
> after boot.

This would affect some of the PMU related boot_cpu_has() checks.
https://elixir.bootlin.com/linux/latest/source/arch/x86/events/intel/core.c#L3415

> 
> I think it would be nice to expose whether TSX is enabled somewhere in
> sysfs, but I'm not convinced that "features" is the place for it.

Part 10/10 in this series adds support to expose whether TSX is enabled
(only present when TSX_CTRL MSR is supported):

	$ cat /sys/devices/system/cpu/hw_tx_mem

Thanks,
Pawan


* [MODERATED] ...
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (10 preceding siblings ...)
  2019-10-21 21:32 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Andy Lutomirski
@ 2019-10-22  4:10 ` Jon Masters
  2019-10-22  5:53   ` [MODERATED] Pawan Gupta
  2019-10-22  7:58 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 07/10] TAAv7 7 Michal Hocko
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Jon Masters @ 2019-10-22  4:10 UTC (permalink / raw)
  To: speck


Subject: Re: [PATCH v7 07/10] TAAv7 7


On 10/21/19 4:29 PM, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 07/10] x86/tsx: Add "auto" option to TSX cmdline parameter
> 
> Platforms which are not affected by X86_BUG_TAA may want the TSX feature
> enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
> disable TSX when X86_BUG_TAA is present, otherwise enable TSX.

Earlier, you do this:

+	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	    (boot_cpu_has(X86_FEATURE_RTM) ||
+	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+		setup_force_cpu_bug(X86_BUG_TAA);

Per the other discussion, I think you want to double check if tsx=auto
is doing what folks want it to do, because currently I think auto still
has the semantics of turning off TSX on everything, rather than just
those cases where a VERW mitigation won't suffice/is in use.

(see Thomas's update to the "Re: [PATCH v5 08/11] TAAv5 8" diagram)

It seems that it's a good time to double check if that's what everyone
on the distro side is expecting, namely "tsx=auto will disable TSX
automatically on those parts impacted by TAA for which VERW mitigation
is not available". This seems to reduce the set where we end up
disabling TSX essentially to a few very recent processors (e.g.
CascadeLake some second gen, Bx stepping?). If that's what we are all
expecting/are planning to do, I suspect Red Hat would go with tsx=auto
as a default since the set of impacted processors can be documented.

Anyway, auto isn't right yet.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop



* [MODERATED] Re: ...
  2019-10-22  4:10 ` [MODERATED] Jon Masters
@ 2019-10-22  5:53   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22  5:53 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 12:10:31AM -0400, speck for Jon Masters wrote:
> Subject: Re: [PATCH v7 07/10] TAAv7 7

> On 10/21/19 4:29 PM, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH v7 07/10] x86/tsx: Add "auto" option to TSX cmdline parameter
> > 
> > Platforms which are not affected by X86_BUG_TAA may want the TSX feature
> > enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
> > disable TSX when X86_BUG_TAA is present, otherwise enable TSX.
> 
> Earlier, you do this:
> 
> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> +		setup_force_cpu_bug(X86_BUG_TAA);
> 
> Per the other discussion, I think you want to double check if tsx=auto
> is doing what folks want it to do, because currently I think auto still
> has the semantics of turning off TSX on everything, rather than just
> those cases where a VERW mitigation won't suffice/is in use.

tsx=auto can only disable TSX on current CPUs (MDS_NO=1) because older
CPUs (MDS_NO=0) won't get TSX_CTRL_MSR.

On older CPUs tsx_init() would simply do nothing for lack of TSX_CTRL:

	void __init tsx_init(void)
	{
		if (!tsx_ctrl_is_supported())
			return;
	[...]

> 
> (see Thomas's update to the "Re: [PATCH v5 08/11] TAAv5 8" diagram)

  MDS_NO        MD_CLEAR        TSX_CTRL_MSR
    0             0                 0           Vulnerable (needs ucode)
    0             1                 0           MDS and TAA mitigated via VERW
    1             1                 0           MDS fixed, TAA vulnerable if TSX enabled
                                                because MD_CLEAR has no meaning and
                                                VERW is not guaranteed to clear buffers
    1             X                 1           MDS fixed, TAA can be mitigated by
                                                           VERW or TSX_CTRL_MSR

Thomas's table sums up the mitigation status.

> 
> It seems that it's a good time to double check if that's what everyone
> on the distro side is expecting, namely "tsx=auto will disable TSX
> automatically on those parts impacted by TAA for which VERW mitigation
> is not available".

Microcode update for MDS_NO=1 parts adds both TSX_CTRL and VERW
mitigation at the same time. It is for the OS to choose (via tsx=) which
mitigation to deploy. 

For MDS_NO=0 CPUs:
	VERW mitigation will be deployed.
	tsx= is ineffective, for the lack of TSX_CTRL.

For MDS_NO=1 CPUs:
	If OS chooses tsx=on, VERW mitigation will be deployed.
	If OS chooses tsx=auto, TSX will be disabled.
	If OS chooses tsx=off, TSX will be disabled (for TAA_NO=1 as well).

More details on tsx=auto:

+----------+----------+----------------+---------------+--------------+-------------------+
|  MSR_IA32_ARCH_CAPABILITIES bits     |           Result with cmdline tsx=auto           |
+----------+----------+----------------+---------------+--------------+-------------------+
|  TAA_NO  |  MDS_NO  |  TSX_CTRL_MSR  |  VERW clears  |  TSX state   |  TAA mitigation   |
|          |          |                |  CPU buffers  | after bootup |                   |
+==========+==========+================+===============+==============+===================+
|    0     |    0     |       0        |      Yes      |  HW default  |    Same as MDS    |
+----------+----------+----------------+---------------+--------------+-------------------+
|    0     |    0     |       1        | Invalid case  | Invalid case |   Invalid case    |
+----------+----------+----------------+---------------+--------------+-------------------+
|    0     |    1     |       0        |      No       |  HW default  | Need ucode update |
+----------+----------+----------------+---------------+--------------+-------------------+
|    0     |    1     |       1        |      Yes      | TSX disabled |   TSX disabled    |
+----------+----------+----------------+---------------+--------------+-------------------+
|    1     |    X     |       1        |       X       | TSX enabled  |    None needed    |
+----------+----------+----------------+---------------+--------------+-------------------+

Note: MDS_NO=0 and TSX_CTRL_MSR=1 is an invalid case above, which is why
tsx=auto cannot disable TSX on older CPUs.

>This seems to reduce the set where we end up
> disabling TSX essentially to a few very recent processors (e.g.
> CascadeLake some second gen, Bx stepping?). If that's what we are all
> expecting/are planning to do, I suspect Red Hat would go with tsx=auto
> as a default since the set of impacted processors can be documented.
> 
> Anyway, auto isn't right yet.

The change below isn't required as per the above explanation, but I can
add it if it adds clarity. Or maybe cover it with code comments.

diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index 0969e6e9dff3..7b567ca266c9 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -70,7 +70,8 @@ static bool __init tsx_ctrl_is_supported(void)
 
 static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
 {
-	if (boot_cpu_has_bug(X86_BUG_TAA))
+	if (boot_cpu_has_bug(X86_BUG_TAA) &&
+	    (ARCH_CAP_MDS_NO & x86_read_arch_cap_msr()))
 		return TSX_CTRL_DISABLE;
 
 	return TSX_CTRL_ENABLE;

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** [PATCH v7 07/10] TAAv7 7
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (11 preceding siblings ...)
  2019-10-22  4:10 ` [MODERATED] Jon Masters
@ 2019-10-22  7:58 ` Michal Hocko
  2019-10-22 16:55   ` [MODERATED] " Pawan Gupta
  2019-10-22  8:00 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 09/10] TAAv7 9 Michal Hocko
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Michal Hocko @ 2019-10-22  7:58 UTC (permalink / raw)
  To: speck

On Mon 21-10-19 13:29:02, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 07/10] x86/tsx: Add "auto" option to TSX cmdline parameter
> 
> Platforms which are not affected by X86_BUG_TAA may want the TSX feature
> enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
> disable TSX when X86_BUG_TAA is present, otherwise enable TSX.
> 
> More details on X86_BUG_TAA can be found here:
> https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

Considering how confusing the whole thing is, could you please
explicitly state the CPU models where auto differs from on, as you did
in an email in the previous thread?

> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 5 +++++
>  arch/x86/kernel/cpu/tsx.c                       | 5 +++++
>  2 files changed, 10 insertions(+)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index ad6b69057bb0..4d63dc8dc2fc 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4856,6 +4856,11 @@
>  
>  			on	- Enable TSX on the system.
>  			off	- Disable TSX on the system.
> +			auto	- Disable TSX if X86_BUG_TAA is present,
> +				  otherwise enable TSX on the system.
> +
> +			More details on X86_BUG_TAA are here:
> +			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
>  
>  			Not specifying this option is equivalent to tsx=off.
>  
> diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
> index d031a8b6fb0e..9d7d3ad324b7 100644
> --- a/arch/x86/kernel/cpu/tsx.c
> +++ b/arch/x86/kernel/cpu/tsx.c
> @@ -79,6 +79,11 @@ void __init tsx_init(void)
>  			tsx_ctrl_state = TSX_CTRL_ENABLE;
>  		} else if (!strcmp(arg, "off")) {
>  			tsx_ctrl_state = TSX_CTRL_DISABLE;
> +		} else if (!strcmp(arg, "auto")) {
> +			if (boot_cpu_has_bug(X86_BUG_TAA))
> +				tsx_ctrl_state = TSX_CTRL_DISABLE;
> +			else
> +				tsx_ctrl_state = TSX_CTRL_ENABLE;
>  		} else {
>  			tsx_ctrl_state = TSX_CTRL_DISABLE;
>  			pr_err("tsx: invalid option, defaulting to off\n");
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** [PATCH v7 09/10] TAAv7 9
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (12 preceding siblings ...)
  2019-10-22  7:58 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 07/10] TAAv7 7 Michal Hocko
@ 2019-10-22  8:00 ` Michal Hocko
  2019-10-22  8:15 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3 Michal Hocko
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Michal Hocko @ 2019-10-22  8:00 UTC (permalink / raw)
  To: speck

On Mon 21-10-19 13:31:02, speck for Michal Hocko wrote:
[...]
> +config X86_INTEL_TSX_MODE_ON
> +	bool "on"
> +	help
> +	  TSX is always enabled on TSX capable HW - equals tsx=on command line
> +	  parameter.
> +
> +config X86_INTEL_TSX_MODE_AUTO
> +	bool "auto"
> +	help
> +	  TSX is enabled on TSX capable HW that is believed to be safe against
> +	  side channel attacks- equals tsx=auto command line parameter.

Considering how small the list of CPUs where the two modes differ is,
please add those models here as well.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (13 preceding siblings ...)
  2019-10-22  8:00 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 09/10] TAAv7 9 Michal Hocko
@ 2019-10-22  8:15 ` Michal Hocko
  2019-10-22 14:42   ` Josh Poimboeuf
  2019-10-22 14:38 ` [MODERATED] " Borislav Petkov
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Michal Hocko @ 2019-10-22  8:15 UTC (permalink / raw)
  To: speck

On Mon 21-10-19 13:25:02, speck for Pawan Gupta wrote:
[...]
> +	tsx=		[X86] Control Transactional Synchronization
> +			Extensions (TSX) feature in Intel processors that
> +			support TSX control.
> +
> +			This parameter controls the TSX feature. The options are:
> +
> +			on	- Enable TSX on the system.
> +			off	- Disable TSX on the system.

Please explicitly mention that off is effective only if there is ucode
support for it (aka MSR_IA32_TSX_CTRL).
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (14 preceding siblings ...)
  2019-10-22  8:15 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3 Michal Hocko
@ 2019-10-22 14:38 ` Borislav Petkov
  2019-10-22 16:58   ` Pawan Gupta
  2019-10-22 14:48 ` Borislav Petkov
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 14:38 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 897c8302d982..885d4ac2111a 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1561,6 +1561,8 @@ void __init identify_boot_cpu(void)
>  #endif
>  	cpu_detect_tlb(&boot_cpu_data);
>  	setup_cr_pinning();
> +
> +	tsx_init();
>  }
>

Why is this called here instead of in check_bugs()?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3
  2019-10-22  8:15 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3 Michal Hocko
@ 2019-10-22 14:42   ` Josh Poimboeuf
  2019-10-22 16:48     ` [MODERATED] " Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 14:42 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:15:34AM +0200, speck for Michal Hocko wrote:
> On Mon 21-10-19 13:25:02, speck for Pawan Gupta wrote:
> [...]
> > +	tsx=		[X86] Control Transactional Synchronization
> > +			Extensions (TSX) feature in Intel processors that
> > +			support TSX control.
> > +
> > +			This parameter controls the TSX feature. The options are:
> > +
> > +			on	- Enable TSX on the system.
> > +			off	- Disable TSX on the system.
> 
> Please explicitly mention that off is active only if TSX there is ucode
> support for that (aka MSR_IA32_TSX_CTRL).

I'm pretty sure I already asked for this in the last revision.  And this
is not the first time I've seen ignored feedback.  Pawan, please try to
take all feedback into account so we don't have to keep repeating
ourselves.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (15 preceding siblings ...)
  2019-10-22 14:38 ` [MODERATED] " Borislav Petkov
@ 2019-10-22 14:48 ` Borislav Petkov
  2019-10-22 17:00   ` Pawan Gupta
  2019-10-22 15:07 ` Borislav Petkov
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 14:48 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
> new file mode 100644
> index 000000000000..d031a8b6fb0e
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/tsx.c

Maybe I've missed it in the mail overflow but why is this file being created?

Is arch/x86/kernel/cpu/bugs.c too big all of a sudden or is there some
other thought behind separating the TSX stuff?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (16 preceding siblings ...)
  2019-10-22 14:48 ` Borislav Petkov
@ 2019-10-22 15:07 ` Borislav Petkov
  2019-10-22 18:36   ` Pawan Gupta
  2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 15:07 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> +void __init tsx_init(void)
> +{
> +	char arg[20];

Why 20? Magic?

Why not simply 4? We have "on" and "off" + '\0' AFAICT, no?

Also, you better clear that stack buffer:

	char arg[4] = {};

before use.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 14:42   ` Josh Poimboeuf
@ 2019-10-22 16:48     ` Pawan Gupta
  2019-10-22 17:01       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 16:48 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 09:42:26AM -0500, speck for Josh Poimboeuf wrote:
> On Tue, Oct 22, 2019 at 10:15:34AM +0200, speck for Michal Hocko wrote:
> > On Mon 21-10-19 13:25:02, speck for Pawan Gupta wrote:
> > [...]
> > > +	tsx=		[X86] Control Transactional Synchronization
> > > +			Extensions (TSX) feature in Intel processors that
> > > +			support TSX control.
> > > +
> > > +			This parameter controls the TSX feature. The options are:
> > > +
> > > +			on	- Enable TSX on the system.
> > > +			off	- Disable TSX on the system.
> > 
> > Please explicitly mention that off is active only if TSX there is ucode
> > support for that (aka MSR_IA32_TSX_CTRL).
> 
> I'm pretty sure I already asked for this in the last revision.  And this
> is not the first time I've seen ignored feedback.  Pawan, please try to
> take all feedback into account so we don't have to keep repeating
> ourselves.

I am sorry to have missed it. All of the options on|off|auto depend on
TSX control being present. The tsx= description states that processors
need TSX control support. I will extend it to specifically say that a
ucode update adds TSX control support (aka MSR_IA32_TSX_CTRL).


	tsx=		[X86] Control Transactional Synchronization
			Extensions (TSX) feature in Intel processors that
			support TSX control.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (17 preceding siblings ...)
  2019-10-22 15:07 ` Borislav Petkov
@ 2019-10-22 16:51 ` Borislav Petkov
  2019-10-22 17:02   ` Borislav Petkov
                     ` (2 more replies)
  2019-10-22 17:25 ` [MODERATED] Re: [PATCH v7 01/10] TAAv7 1 Josh Poimboeuf
                   ` (6 subsequent siblings)
  25 siblings, 3 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 16:51 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> @@ -268,6 +270,110 @@ static int __init mds_cmdline(char *str)
>  }
>  early_param("mds", mds_cmdline);
>  
> +#undef pr_fmt
> +#define pr_fmt(fmt)	"TAA: " fmt
> +
> +/* Default mitigation for TAA-affected CPUs */
> +static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
> +static bool taa_nosmt __ro_after_init;
> +
> +static const char * const taa_strings[] = {
> +	[TAA_MITIGATION_OFF]		= "Vulnerable",
> +	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
> +	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
> +	[TAA_MITIGATION_TSX_DISABLE]	= "Mitigation: TSX disabled",
> +};
> +
> +static void __init taa_select_mitigation(void)
> +{
> +	u64 ia32_cap = x86_read_arch_cap_msr();

Do that MSR access...

> +
> +	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
> +		taa_mitigation = TAA_MITIGATION_OFF;
> +		return;
> +	}

... here.

> +
> +	/*
> +	 * As X86_BUG_TAA=1, TSX feature is supported by the hardware. If
> +	 * TSX was disabled (X86_FEATURE_RTM=0) earlier during tsx_init().

That sentence is having trouble saying whatever it is trying to say. :)

> +	 * Select TSX_DISABLE as mitigation.
> +	 *
> +	 * This check is ahead of mitigations=off and tsx_async_abort=off
> +	 * because when TSX is disabled mitigation is already in place. This
> +	 * ensures sysfs doesn't show "Vulnerable" when TSX is disabled.
> +	 */

So make that check first and then merge the other two:

	if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
		taa_mitigation = TAA_MITIGATION_OFF;
		return;
	}

> +	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> +		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> +		pr_info("%s\n", taa_strings[taa_mitigation]);
> +		return;
> +	}
> +
> +	/* All mitigations turned off from cmdline (mitigations=off) */
> +	if (cpu_mitigations_off()) {
> +		taa_mitigation = TAA_MITIGATION_OFF;
> +		return;
> +	}
> +
> +	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
> +	if (taa_mitigation == TAA_MITIGATION_OFF) {

		goto out;

and you slap an out label above the pr_info at the end of this function.

> +		pr_info("%s\n", taa_strings[taa_mitigation]);
> +		return;
> +	}
> +
> +	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> +		taa_mitigation = TAA_MITIGATION_VERW;
> +	else
> +		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> +
> +	/*
> +	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
> +	 * A microcode update fixes this behavior to clear CPU buffers.
> +	 * Microcode update also adds support for MSR_IA32_TSX_CTRL which
> +	 * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
> +	 *
> +	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
> +	 * update is required.
> +	 */
> +	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> +	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
> +		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> +
> +	/*
> +	 * TSX is enabled, select alternate mitigation for TAA which is
> +	 * same as MDS. Enable MDS static branch to clear CPU buffers.
> +	 *
> +	 * For guests that can't determine whether the correct microcode is
> +	 * present on host, enable the mitigation for UCODE_NEEDED as well.
> +	 */
> +	static_branch_enable(&mds_user_clear);
> +
> +	if (taa_nosmt || cpu_mitigations_auto_nosmt())
> +		cpu_smt_disable(false);
> +
> +	pr_info("%s\n", taa_strings[taa_mitigation]);
> +}

To show better what I mean, here's the whole function fixed up:

static void __init taa_select_mitigation(void)
{
	u64 ia32_cap;

	if (!boot_cpu_has(X86_FEATURE_RTM)) {
		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
		goto out;
	}

	if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
		taa_mitigation = TAA_MITIGATION_OFF;
		goto out;
	}

	ia32_cap = x86_read_arch_cap_msr();

	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
	if (taa_mitigation == TAA_MITIGATION_OFF)
		goto out;

	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
		taa_mitigation = TAA_MITIGATION_VERW;
	else
		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

	/*
	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
	 * A microcode update fixes this behavior to clear CPU buffers.
	 * Microcode update also adds support for MSR_IA32_TSX_CTRL which
	 * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
	 *
	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
	 * update is required.
	 */
	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

	/*
	 * TSX is enabled, select alternate mitigation for TAA which is
	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
	 *
	 * For guests that can't determine whether the correct microcode is
	 * present on host, enable the mitigation for UCODE_NEEDED as well.
	 */
	static_branch_enable(&mds_user_clear);

	if (taa_nosmt || cpu_mitigations_auto_nosmt())
		cpu_smt_disable(false);

out:
	pr_info("%s\n", taa_strings[taa_mitigation]);
}

> +
> +static int __init tsx_async_abort_cmdline(char *str)

That function name needs a verb: tsx_async_abort_parse_cmdline()

> +{
> +	if (!boot_cpu_has_bug(X86_BUG_TAA))
> +		return 0;
> +
> +	if (!str)
> +		return -EINVAL;
> +
> +	if (!strcmp(str, "off")) {
> +		taa_mitigation = TAA_MITIGATION_OFF;
> +	} else if (!strcmp(str, "full")) {
> +		taa_mitigation = TAA_MITIGATION_VERW;
> +	} else if (!strcmp(str, "full,nosmt")) {
> +		taa_mitigation = TAA_MITIGATION_VERW;
> +		taa_nosmt = true;
> +	}
> +
> +	return 0;
> +}
> +early_param("tsx_async_abort", tsx_async_abort_cmdline);

Say what now?!

The previous patch added this:

        tsx=            [X86] Control Transactional Synchronization
                        Extensions (TSX) feature in Intel processors that
                        support TSX control.

                        This parameter controls the TSX feature. The options are:

                        on      - Enable TSX on the system.
                        off     - Disable TSX on the system.

So what is the final name of command line option now?

> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V1 : " fmt
>  
> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>  #undef pr_fmt
>  #define pr_fmt(fmt) fmt
>  
> -/* Update the static key controlling the MDS CPU buffer clear in idle */
> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>  static void update_mds_branch_idle(void)
>  {
>  	/*
> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>  	 * The other variants cannot be mitigated when SMT is enabled, so
>  	 * clearing the buffers on idle just to prevent the Store Buffer
>  	 * repartitioning leak would be a window dressing exercise.
> +	 *
> +	 * Apply idle buffer clearing to TAA affected CPUs also.
>  	 */
> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>  		return;
>  
>  	if (sched_smt_active())
> @@ -786,6 +895,7 @@ static void update_mds_branch_idle(void)
>  }
>  
>  #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
> +#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
>  
>  void cpu_bugs_smt_update(void)
>  {
> @@ -819,6 +929,19 @@ void cpu_bugs_smt_update(void)
>  		break;
>  	}
>  
> +	switch (taa_mitigation) {
> +	case TAA_MITIGATION_VERW:
> +	case TAA_MITIGATION_UCODE_NEEDED:
> +		if (sched_smt_active())
> +			pr_warn_once(TAA_MSG_SMT);
> +		/* TSX is enabled, apply MDS idle buffer clearing. */
> +		update_mds_branch_idle();
> +		break;
> +	case TAA_MITIGATION_TSX_DISABLE:
> +	case TAA_MITIGATION_OFF:
> +		break;
> +	}
> +
>  	mutex_unlock(&spec_ctrl_mutex);
>  }
>  
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 885d4ac2111a..86f22c1e5912 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	if (!cpu_matches(NO_SWAPGS))
>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>  
> +	/*
> +	 * When processor is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> +	 *	- TSX is supported or
> +	 *	- TSX_CTRL is supported

s/supported/present/

> +	 *
> +	 * TSX_CTRL check is needed for cases when TSX could be disabled before
> +	 * the kernel boot e.g. kexec
> +	 * TSX_CTRL check alone is not sufficient for cases when the microcode
> +	 * update is not present or running as guest that don't get TSX_CTRL.
> +	 */
> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> +	    (boot_cpu_has(X86_FEATURE_RTM) ||

You have a @c parameter passed in, use it:

	    cpu_has(c, X86_FEATURE_RTM)

> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> +		setup_force_cpu_bug(X86_BUG_TAA);
> +
>  	if (cpu_matches(NO_MELTDOWN))
>  		return;
>  
> -- 
> 2.20.1

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 07/10] TAAv7 7
  2019-10-22  7:58 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 07/10] TAAv7 7 Michal Hocko
@ 2019-10-22 16:55   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 16:55 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 09:58:29AM +0200, speck for Michal Hocko wrote:
> On Mon 21-10-19 13:29:02, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH v7 07/10] x86/tsx: Add "auto" option to TSX cmdline parameter
> > 
> > Platforms which are not affected by X86_BUG_TAA may want the TSX feature
> > enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
> > disable TSX when X86_BUG_TAA is present, otherwise enable TSX.
> > 
> > More details on X86_BUG_TAA can be found here:
> > https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html
> 
> Considering how the whole thing is confusing could you please explicitly
> state CPU models where auto is different from on as you did in an email
> in previous email thread?

I will update the documentation.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 14:38 ` [MODERATED] " Borislav Petkov
@ 2019-10-22 16:58   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 16:58 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 04:38:18PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> > diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> > index 897c8302d982..885d4ac2111a 100644
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -1561,6 +1561,8 @@ void __init identify_boot_cpu(void)
> >  #endif
> >  	cpu_detect_tlb(&boot_cpu_data);
> >  	setup_cr_pinning();
> > +
> > +	tsx_init();
> >  }
> >
> 
> Why is this called here instead of in check_bugs()?

This is as per the feedback, because TSX is actually a feature.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 14:48 ` Borislav Petkov
@ 2019-10-22 17:00   ` Pawan Gupta
  2019-10-22 17:16     ` [MODERATED] " Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 17:00 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 04:48:53PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> > diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
> > new file mode 100644
> > index 000000000000..d031a8b6fb0e
> > --- /dev/null
> > +++ b/arch/x86/kernel/cpu/tsx.c
> 
> Maybe I've missed it in the mail overflow but why is this file being created?
> 
> Is arch/x86/kernel/cpu/bugs.c too big all of a sudden or is there some
> other thought behind separating the TSX stuff?

Again, this is as per the feedback to create a new file for TSX, as it
is a feature. The TAA stuff stays in bugs.c.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 16:48     ` [MODERATED] " Pawan Gupta
@ 2019-10-22 17:01       ` Michal Hocko
  2019-10-22 17:35         ` Josh Poimboeuf
  0 siblings, 1 reply; 78+ messages in thread
From: Michal Hocko @ 2019-10-22 17:01 UTC (permalink / raw)
  To: speck

On Tue 22-10-19 09:48:01, speck for Pawan Gupta wrote:
> On Tue, Oct 22, 2019 at 09:42:26AM -0500, speck for Josh Poimboeuf wrote:
> > On Tue, Oct 22, 2019 at 10:15:34AM +0200, speck for Michal Hocko wrote:
> > > On Mon 21-10-19 13:25:02, speck for Pawan Gupta wrote:
> > > [...]
> > > > +	tsx=		[X86] Control Transactional Synchronization
> > > > +			Extensions (TSX) feature in Intel processors that
> > > > +			support TSX control.
> > > > +
> > > > +			This parameter controls the TSX feature. The options are:
> > > > +
> > > > +			on	- Enable TSX on the system.
> > > > +			off	- Disable TSX on the system.
> > > 
> > > Please explicitly mention that off is active only if TSX there is ucode
> > > support for that (aka MSR_IA32_TSX_CTRL).
> > 
> > I'm pretty sure I already asked for this in the last revision.  And this
> > is not the first time I've seen ignored feedback.  Pawan, please try to
> > take all feedback into account so we don't have to keep repeating
> > ourselves.
> 
> I am sorry to have missed it. All of the operation on|off|auto are
> dependent on TSX control being present. The tsx= description states that
> processors need TSX control support. I would extend it to specifically
> say ucode update adds TSX control support (aka MSR_IA32_TSX_CTRL).

Thanks! An explicit note about the MSR is important because that is
something people can google for.
 
> 
> 	tsx=		[X86] Control Transactional Synchronization
> 			Extensions (TSX) feature in Intel processors that
> 			support TSX control.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
@ 2019-10-22 17:02   ` Borislav Petkov
  2019-10-22 18:00     ` Pawan Gupta
  2019-10-22 17:44   ` Pawan Gupta
  2019-10-23  1:33   ` Pawan Gupta
  2 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 17:02 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 06:51:12PM +0200, Borislav Petkov wrote:
> > +{
> > +	if (!boot_cpu_has_bug(X86_BUG_TAA))
> > +		return 0;
> > +
> > +	if (!str)
> > +		return -EINVAL;
> > +
> > +	if (!strcmp(str, "off")) {
> > +		taa_mitigation = TAA_MITIGATION_OFF;
> > +	} else if (!strcmp(str, "full")) {
> > +		taa_mitigation = TAA_MITIGATION_VERW;
> > +	} else if (!strcmp(str, "full,nosmt")) {
> > +		taa_mitigation = TAA_MITIGATION_VERW;
> > +		taa_nosmt = true;
> > +	}
> > +
> > +	return 0;
> > +}
> > +early_param("tsx_async_abort", tsx_async_abort_cmdline);
> 
> Say what now?!
> 
> The previous patch added this:
> 
>         tsx=            [X86] Control Transactional Synchronization
>                         Extensions (TSX) feature in Intel processors that
>                         support TSX control.
> 
>                         This parameter controls the TSX feature. The options are:
> 
>                         on      - Enable TSX on the system.
>                         off     - Disable TSX on the system.
> 
> So what is the final name of command line option now?

Yah, Josh just set me straight: there's two flags. Yuck. I'm willing to
bet I won't be the only one to get confused by this.

tsx=
taa=

is probably marginally better but someone might have a better idea.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 17:00   ` Pawan Gupta
@ 2019-10-22 17:16     ` Borislav Petkov
  2019-10-22 18:07       ` [MODERATED] " Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 17:16 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:00:46AM -0700, speck for Pawan Gupta wrote:
> Again this is as per the feedback to create a new file for TSX as it is
> a feature. TAA stuff stays in bugs.c.

Well, with patchsets like that, it is always good to write a more
detailed changelog so that questions like that are answered from it.

Your 0/x message has

"- Refactor TSX code into new tsx.c"

but it would be a lot more helpful if it had the "why" you did it:

" - Carve out the TSX disabling functionality into a separate
compilation unit because it is a CPU feature."

This sentence could be a part of the patch commit message even.

This patchset has been going back'n'forth since May so people like me
forgetting things is kinda normal.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 01/10] TAAv7 1
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (18 preceding siblings ...)
  2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
@ 2019-10-22 17:25 ` Josh Poimboeuf
  2019-10-23  9:26   ` Borislav Petkov
  2019-10-22 17:26 ` Josh Poimboeuf
                   ` (5 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 17:25 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:23:02PM -0700, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 01/10] x86/tsx: Add enumeration support for IA32_TSX_CTRL
>  MSR
> 
> Transactional Synchronization Extensions (TSX) may be used on certain
> processors as part of a speculative side channel attack.  A microcode
> update for existing processors that are vulnerable to this attack will
> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> option to disable TSX as one of the possible mitigations.  [Note that
> future processors that are not vulnerable will also support the
> IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> bits.
> 
> TSX has two sub-features:
> 
> 1. Restricted Transactional Memory (RTM) is an explicitly-used feature
>    where new instructions begin and end TSX transactions.
> 2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
>    "old" style locks are used by software.
> 
> Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
> IA32_TSX_CTRL MSR.
> 
> There are two control bits in IA32_TSX_CTRL MSR:
> 
>   Bit 0: When set it disables the Restricted Transactional Memory (RTM)
>          sub-feature of TSX (will force all transactions to abort on the
> 	 XBEGIN instruction).
> 
>   Bit 1: When set it disables the enumeration of the RTM and HLE feature
>          (i.e. it will make CPUID(EAX=7).EBX{bit4} and
>          CPUID(EAX=7).EBX{bit11} read as 0).
> 
> The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
> disabled

How is HLE unconditionally disabled?  Is it done by the above mentioned
microcode?  Is there a way to enable it?

> but still enumerated as present by CPUID(EAX=7).EBX{bit4}.

... unless disabled via bit 1 of IA32_TSX_CTRL_MSR?

[ Also please add the above clarifications to the patch description. ]

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 01/10] TAAv7 1
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (19 preceding siblings ...)
  2019-10-22 17:25 ` [MODERATED] Re: [PATCH v7 01/10] TAAv7 1 Josh Poimboeuf
@ 2019-10-22 17:26 ` Josh Poimboeuf
  2019-10-22 20:44   ` [MODERATED] Jon Masters
  2019-10-22 17:47 ` [MODERATED] Re: [PATCH v7 03/10] TAAv7 3 Josh Poimboeuf
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 17:26 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:23:02PM -0700, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 01/10] x86/tsx: Add enumeration support for IA32_TSX_CTRL
>  MSR
> 
> Transactional Synchronization Extensions (TSX) may be used on certain
> processors as part of a speculative side channel attack.  A microcode
> update for existing processors that are vulnerable to this attack will
> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> option to disable TSX as one of the possible mitigations.  [Note that
> future processors that are not vulnerable will also support the
> IA32_TSX_CTRL MSR].

This should clarify that not *all* TAA-vulnerable CPUs will get
IA32_TSX_CTRL, instead only the ones which aren't vulnerable to MDS.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 17:01       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
@ 2019-10-22 17:35         ` Josh Poimboeuf
  0 siblings, 0 replies; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 17:35 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 07:01:13PM +0200, speck for Michal Hocko wrote:
> On Tue 22-10-19 09:48:01, speck for Pawan Gupta wrote:
> > On Tue, Oct 22, 2019 at 09:42:26AM -0500, speck for Josh Poimboeuf wrote:
> > > On Tue, Oct 22, 2019 at 10:15:34AM +0200, speck for Michal Hocko wrote:
> > > > On Mon 21-10-19 13:25:02, speck for Pawan Gupta wrote:
> > > > [...]
> > > > > +	tsx=		[X86] Control Transactional Synchronization
> > > > > +			Extensions (TSX) feature in Intel processors that
> > > > > +			support TSX control.
> > > > > +
> > > > > +			This parameter controls the TSX feature. The options are:
> > > > > +
> > > > > +			on	- Enable TSX on the system.
> > > > > +			off	- Disable TSX on the system.
> > > > 
> > > > Please explicitly mention that off is active only if there is ucode
> > > > support for TSX control (aka MSR_IA32_TSX_CTRL).
> > > 
> > > I'm pretty sure I already asked for this in the last revision.  And this
> > > is not the first time I've seen ignored feedback.  Pawan, please try to
> > > take all feedback into account so we don't have to keep repeating
> > > ourselves.
> > 
> > I am sorry to have missed it. All of the operation on|off|auto are
> > dependent on TSX control being present. The tsx= description states that
> > processors need TSX control support. I would extend it to specifically
> > say ucode update adds TSX control support (aka MSR_IA32_TSX_CTRL).
> 
> Thanks! An explicit note about the MSR is important because that is
> something people can google for.

But as we've learned, most people won't know what that really means and
how it relates to different CPUs.  This also needs to state in plain
language that tsx=off only works for newer (MDS_NO) CPUs.  The phrase
'off' is misleading, since it really means 'off on MDS_NO CPUs'.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
  2019-10-22 17:02   ` Borislav Petkov
@ 2019-10-22 17:44   ` Pawan Gupta
  2019-10-22 19:04     ` [MODERATED] " Borislav Petkov
  2019-10-23  1:33   ` Pawan Gupta
  2 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 17:44 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 06:51:12PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> > @@ -268,6 +270,110 @@ static int __init mds_cmdline(char *str)
> >  }
> >  early_param("mds", mds_cmdline);
> >  
> > +#undef pr_fmt
> > +#define pr_fmt(fmt)	"TAA: " fmt
> > +
> > +/* Default mitigation for TAA-affected CPUs */
> > +static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
> > +static bool taa_nosmt __ro_after_init;
> > +
> > +static const char * const taa_strings[] = {
> > +	[TAA_MITIGATION_OFF]		= "Vulnerable",
> > +	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
> > +	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
> > +	[TAA_MITIGATION_TSX_DISABLE]	= "Mitigation: TSX disabled",
> > +};
> > +
> > +static void __init taa_select_mitigation(void)
> > +{
> > +	u64 ia32_cap = x86_read_arch_cap_msr();
> 
> Do that MSR access...
> 
> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
> > +		taa_mitigation = TAA_MITIGATION_OFF;
> > +		return;
> > +	}
> 
> ... here.

ok.

> > +
> > +	/*
> > +	 * As X86_BUG_TAA=1, TSX feature is supported by the hardware. If
> > +	 * TSX was disabled (X86_FEATURE_RTM=0) earlier during tsx_init().
> 
> That sentence is having trouble saying whatever it is trying to say. :)

I will fix it.


> > +	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> > +		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> > +		pr_info("%s\n", taa_strings[taa_mitigation]);
> > +		return;
> > +	}
> > +
> > +	/* All mitigations turned off from cmdline (mitigations=off) */
> > +	if (cpu_mitigations_off()) {
> > +		taa_mitigation = TAA_MITIGATION_OFF;
> > +		return;
> > +	}
> > +
> > +	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
> > +	if (taa_mitigation == TAA_MITIGATION_OFF) {
> 
> 		goto out;
> 
> and you slap an out label above the pr_info at the end of this function.

> 
> To show better what I mean, here's the whole function fixed up:
> 
> static void __init taa_select_mitigation(void)
> {
> 	u64 ia32_cap;
> 
> 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> 		goto out;
> 	}

There is a small problem with this, it will set

	taa_mitigation = TAA_MITIGATION_TSX_DISABLE;

when there is no X86_BUG_TAA.

> 
> 	if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
> 		taa_mitigation = TAA_MITIGATION_OFF;
> 		goto out;
> 	}

goto will cause it to differ from other mitigations that don't print
individual mitigation status when mitigations are turned off globally
(when cpu_mitigations_off() is true).

> > +
> > +static int __init tsx_async_abort_cmdline(char *str)
> 
> That function name needs a verb: tsx_async_abort_parse_cmdline()

Ok.

> 
> > +{
> > +	if (!boot_cpu_has_bug(X86_BUG_TAA))
> > +		return 0;
> > +
> > +	if (!str)
> > +		return -EINVAL;
> > +
> > +	if (!strcmp(str, "off")) {
> > +		taa_mitigation = TAA_MITIGATION_OFF;
> > +	} else if (!strcmp(str, "full")) {
> > +		taa_mitigation = TAA_MITIGATION_VERW;
> > +	} else if (!strcmp(str, "full,nosmt")) {
> > +		taa_mitigation = TAA_MITIGATION_VERW;
> > +		taa_nosmt = true;
> > +	}
> > +
> > +	return 0;
> > +}
> > +early_param("tsx_async_abort", tsx_async_abort_cmdline);
> 
> Say what now?!
> 
> The previous patch added this:
> 
>         tsx=            [X86] Control Transactional Synchronization
>                         Extensions (TSX) feature in Intel processors that
>                         support TSX control.
> 
>                         This parameter controls the TSX feature. The options are:
> 
>                         on      - Enable TSX on the system.
>                         off     - Disable TSX on the system.
> 
> So what is the final name of command line option now?

We support two cmdline parameters:

	tsx=			To control TSX feature.
	tsx_async_abort=	To control the TAA mitigation.

> 
> > diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> > index 885d4ac2111a..86f22c1e5912 100644
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >  	if (!cpu_matches(NO_SWAPGS))
> >  		setup_force_cpu_bug(X86_BUG_SWAPGS);
> >  
> > +	/*
> > +	 * When processor is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> > +	 *	- TSX is supported or
> > +	 *	- TSX_CTRL is supported
> 
> s/supported/present/

Ok.

> 
> > +	 *
> > +	 * TSX_CTRL check is needed for cases when TSX could be disabled before
> > +	 * the kernel boot e.g. kexec
> > +	 * TSX_CTRL check alone is not sufficient for cases when the microcode
> > +	 * update is not present or when running as a guest that doesn't get TSX_CTRL.
> > +	 */
> > +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> > +	    (boot_cpu_has(X86_FEATURE_RTM) ||
> 
> You have a @c parameter passed in, use it:
> 
> 	    cpu_has(c, X86_FEATURE_RTM)

Ok.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (20 preceding siblings ...)
  2019-10-22 17:26 ` Josh Poimboeuf
@ 2019-10-22 17:47 ` Josh Poimboeuf
  2019-10-22 18:39 ` [MODERATED] Re: [PATCH v7 10/10] TAAv7 10 Josh Poimboeuf
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 17:47 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 03/10] x86/tsx: Add TSX cmdline option with TSX disabled by
>  default
> 
> Add kernel cmdline parameter "tsx" to control the Transactional
> Synchronization Extensions (TSX) feature.  On CPUs that support TSX
> control, use "tsx=on|off" to enable or disable TSX.  Not specifying this
> option is equivalent to "tsx=off". This is because on certain processors
> TSX may be used as a part of a speculative side channel attack.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |  11 ++
>  arch/x86/kernel/cpu/Makefile                  |   2 +-
>  arch/x86/kernel/cpu/common.c                  |   2 +
>  arch/x86/kernel/cpu/cpu.h                     |  18 +++
>  arch/x86/kernel/cpu/intel.c                   |   5 +
>  arch/x86/kernel/cpu/tsx.c                     | 114 ++++++++++++++++++
>  6 files changed, 151 insertions(+), 1 deletion(-)
>  create mode 100644 arch/x86/kernel/cpu/tsx.c
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f8881e..ad6b69057bb0 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4848,6 +4848,17 @@
>  			interruptions from clocksource watchdog are not
>  			acceptable).
>  
> +	tsx=		[X86] Control Transactional Synchronization
> +			Extensions (TSX) feature in Intel processors that
> +			support TSX control.
> +
> +			This parameter controls the TSX feature. The options are:
> +
> +			on	- Enable TSX on the system.
> +			off	- Disable TSX on the system.
> +
> +			Not specifying this option is equivalent to tsx=off.

In addition to the previous comments about clarifying the
functionalities of 'off' and 'auto', I think this really needs to
describe the risks associated with 'on' and 'auto', so the user can have
more guidance about what to do.

It should state that while there are mitigations for all known issues
(i.e., the tsx_async_abort= option), TSX has been known to be an
accelerator for several previous speculation-related CVEs, and so there
may be unknown security risks associated with leaving it enabled.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 17:02   ` Borislav Petkov
@ 2019-10-22 18:00     ` Pawan Gupta
  2019-10-22 18:12       ` [MODERATED] " Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 18:00 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 07:02:30PM +0200, speck for Borislav Petkov wrote:
> > > +early_param("tsx_async_abort", tsx_async_abort_cmdline);
> > 
> > Say what now?!
> > 
> > The previous patch added this:
> > 
> >         tsx=            [X86] Control Transactional Synchronization
> >                         Extensions (TSX) feature in Intel processors that
> >                         support TSX control.
> > 
> >                         This parameter controls the TSX feature. The options are:
> > 
> >                         on      - Enable TSX on the system.
> >                         off     - Disable TSX on the system.
> > 
> > So what is the final name of command line option now?
> 
> Yah, Josh just set me straight: there's two flags. Yuck. I'm willing to
> bet I won't be the only one to get confused by this.
> 
> tsx=
> taa=
> 
> is probably marginally better but someone might have a better idea.

It was called taa= in earlier versions. There was a feedback to change
it to tsx_async_abort=. tsx_async_abort is consistent with sysfs file.

	/sys/devices/system/cpu/vulnerabilities/tsx_async_abort

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 17:16     ` [MODERATED] " Borislav Petkov
@ 2019-10-22 18:07       ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 18:07 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 07:16:11PM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 10:00:46AM -0700, speck for Pawan Gupta wrote:
> > Again this is as per the feedback to create a new file for TSX as it is
> > a feature. TAA stuff stays in bugs.c.
> 
> Well, with patchsets like that, it is always good to write a more
> detailed changelog so that questions like that are answered from it.
> 
> Your 0/x message has
> 
> "- Refactor TSX code into new tsx.c"
> 
> but it would be a lot more helpful if it had the "why" you did it:
> 
> " - Carve out the TSX disabling functionality into a separate
> compilation unit because it is a CPU feature."
> 
> This sentence could be a part of the patch commit message even.

Thank you for the feedback.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 18:00     ` Pawan Gupta
@ 2019-10-22 18:12       ` Borislav Petkov
  2019-10-22 19:16         ` Luck, Tony
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 18:12 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 11:00:32AM -0700, speck for Pawan Gupta wrote:
> It was called taa= in earlier versions. There was a feedback to change
> it to tsx_async_abort=.

I know, I'm just saying it is confusing.

> tsx_async_abort is consistent with sysfs file.
> 
> 	/sys/devices/system/cpu/vulnerabilities/tsx_async_abort

Well, that's not really an argument.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 15:07 ` Borislav Petkov
@ 2019-10-22 18:36   ` Pawan Gupta
  2019-10-22 18:59     ` [MODERATED] " Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 18:36 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 05:07:47PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:25:02PM -0700, speck for Pawan Gupta wrote:
> > +void __init tsx_init(void)
> > +{
> > +	char arg[20];
> 
> Why 20? Magic?
> 
> Why not simply 4? We have "on" and "off" + '\0' AFAICT, no?

I will change it to 5 to accommodate "auto" too.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 10/10] TAAv7 10
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (21 preceding siblings ...)
  2019-10-22 17:47 ` [MODERATED] Re: [PATCH v7 03/10] TAAv7 3 Josh Poimboeuf
@ 2019-10-22 18:39 ` Josh Poimboeuf
  2019-10-23  7:24   ` Borislav Petkov
  2019-10-22 21:20 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Josh Poimboeuf
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 18:39 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:32:03PM -0700, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 10/10] x86/tsx: Add sysfs interface to control TSX
> 
> Transactional Synchronization Extensions (TSX) is an extension to the
> x86 instruction set architecture (ISA) that adds Hardware Transactional
> Memory (HTM) support. Changing TSX state currently requires a reboot.
> This may not be desirable when rebooting imposes a huge penalty. Add
> support to control TSX feature via a new sysfs file:
> /sys/devices/system/cpu/hw_tx_mem
> 
> - Writing 0|off|N|n to this file disables TSX feature on all the CPUs.
>   This is equivalent to boot parameter tsx=off.
> - Writing 1|on|Y|y to this file enables TSX feature on all the CPUs.
>   This is equivalent to boot parameter tsx=on.
> - Reading from this returns the status of TSX feature.
> - When TSX control is not supported this interface is not visible in
>   sysfs.
> 
> Changing the TSX state from this interface also updates CPUID.RTM
> feature bit.  From the kernel side, this feature bit doesn't result in
> any ALTERNATIVE code patching.  No memory allocations are done to
> save/restore user state. No code paths outside of the tests for
> vulnerability to TAA are dependent on the value of the feature bit.  In
> general the kernel doesn't care whether RTM is present or not.

Shouldn't the patch change X86_FEATURE_RTM?  I don't see where that
happens, though changing such bits at runtime is dangerous anyway...

Regardless, this patch adds complexity and still seems very fragile.  It
will add maintenance issues and probably introduce bugs.  And, most
importantly, there's *still* no real-world justification for it.
Please drop it from the series.

Nacked-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: Re: [PATCH v7 03/10] TAAv7 3
  2019-10-22 18:36   ` Pawan Gupta
@ 2019-10-22 18:59     ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 18:59 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 11:36:10AM -0700, speck for Pawan Gupta wrote:
> I will change it to 5 to accommodate "auto" too.

Sure but do that in the patch which adds "auto".

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 17:44   ` Pawan Gupta
@ 2019-10-22 19:04     ` Borislav Petkov
  2019-10-22 21:29       ` [MODERATED] " Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 19:04 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:44:52AM -0700, speck for Pawan Gupta wrote:
> There is a small problem with this, it will set
> 
> 	taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> 
> when there is no X86_BUG_TAA.

And?

When the CPU doesn't support TSX, then this practically is "Mitigation:
TSX disabled" because, well, TSX *is* disabled.

Also, from all the possible settings:

        [TAA_MITIGATION_OFF]            = "Vulnerable",
        [TAA_MITIGATION_UCODE_NEEDED]   = "Vulnerable: Clear CPU buffers attempted, no microcode",
        [TAA_MITIGATION_VERW]           = "Mitigation: Clear CPU buffers",
        [TAA_MITIGATION_TSX_DISABLE]    = "Mitigation: TSX disabled",

TAA_MITIGATION_TSX_DISABLE is the one that fits best for the !RTM case,
no?

> goto will cause it to differ from other mitigations that don't print
> individual mitigation status when mitigations are turned off globally
> (when cpu_mitigations_off() is true).

Well, I'd prefer to have a confirmation in dmesg that mitigations were
off but we have that in sysfs already. And the other mitigations don't
print anything, as you say, so yeah, I guess let's not fall out of line
here.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 18:12       ` [MODERATED] " Borislav Petkov
@ 2019-10-22 19:16         ` Luck, Tony
  2019-10-22 19:28           ` [MODERATED] " Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Luck, Tony @ 2019-10-22 19:16 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 08:12:15PM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 11:00:32AM -0700, speck for Pawan Gupta wrote:
> > It was called taa= in earlier versions. There was a feedback to change
> > it to tsx_async_abort=.
> 
> I know, I'm just saying it is confusing.
> 
> > tsx_async_abort is consistent with sysfs file.
> > 
> > 	/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> 
> Well, that's not really an argument.

It might have been Linus that objected to "taa" (but I can't
recall if it was for the sysfs file name, or the boot argument).

Rationale was that in a couple of years time nobody will
remember what "taa" stands for.

-Tony

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 19:16         ` Luck, Tony
@ 2019-10-22 19:28           ` Borislav Petkov
  2019-10-22 20:02             ` Luck, Tony
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 19:28 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 12:16:14PM -0700, speck for Luck, Tony wrote:
> It might have been Linus that objected to "taa" (but I can't
> recall if it was for the sysfs file name, or the boot argument)
> 
> Rationale was that in a couple of years time nobody will
> remember what "taa" stands for.

Oh, I know that. That's why we have spec_store_bypass_disable
and not ssbd. Well, we had ssbd and actually, we should have
spec_store_bypass=(disable|enable|...) if we have to be strict but I
digress...

My point is that it would be helpful if those things were called
differently.

Or, a completely different idea: I wonder if we could merge the two
options like we do for l1tf= for example where we have

	l1tf=<opt1>,<opt2>

and then do:

	tsx=on,async_abort_full
	tsx=on,async_abort_full,nosmt
	tsx=off

That should even diminish the number of combinations because once you've
supplied "tsx=off" for example, async_abort doesn't matter.

So you only get to deal with the "=on" case and all the possibilities
coming with it.

It sure will simplify the code even further, methinks.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 19:28           ` [MODERATED] " Borislav Petkov
@ 2019-10-22 20:02             ` Luck, Tony
  2019-10-22 20:48               ` [MODERATED] Jon Masters
  2019-10-22 20:54               ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
  0 siblings, 2 replies; 78+ messages in thread
From: Luck, Tony @ 2019-10-22 20:02 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 09:28:20PM +0200, speck for Borislav Petkov wrote:
> Or, a completely different idea: I wonder if we could merge the two
> options like we do for l1tf= for example where we have
> 
> 	l1tf=<opt1>,<opt2>
> 
> and then do:
> 
> 	tsx=on,async_abort_full
> 	tsx=on,async_abort_full,nosmt
> 	tsx=off
> 
> That should even diminish the number of combinations because once you've
> supplied "tsx=off" for example, async_abort doesn't matter.

At first glance I find that more confusing than helpful.

Perspective: TAA is an issue that affects ~3 CPU models. It will be
a non-issue on future models.

TSX control is a new CPU feature control that happens to begin
with those three models, but will continue to be present on future
CPU models.

-Tony

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] ...
  2019-10-22 17:26 ` Josh Poimboeuf
@ 2019-10-22 20:44   ` Jon Masters
  0 siblings, 0 replies; 78+ messages in thread
From: Jon Masters @ 2019-10-22 20:44 UTC (permalink / raw)
  To: speck

Subject: Re: [PATCH v7 01/10] TAAv7 1


On 10/22/19 1:26 PM, speck for Josh Poimboeuf wrote:
> On Mon, Oct 21, 2019 at 01:23:02PM -0700, speck for Pawan Gupta wrote:
>> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>> Subject: [PATCH v7 01/10] x86/tsx: Add enumeration support for IA32_TSX_CTRL
>>  MSR
>>
>> Transactional Synchronization Extensions (TSX) may be used on certain
>> processors as part of a speculative side channel attack.  A microcode
>> update for existing processors that are vulnerable to this attack will
>> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
>> option to disable TSX as one of the possible mitigations.  [Note that
>> future processors that are not vulnerable will also support the
>> IA32_TSX_CTRL MSR].
> 
> This should clarify that not *all* TAA-vulnerable CPUs will get
> IA32_TSX_CTRL, instead only the ones which aren't vulnerable to MDS.

And FYI this is a change from what Intel had originally said. I think
we're all on the same page now that IA32_TSX_CTRL is only going to exist
on a small number of processors (those with MDS_NO=1 vulnerable to TAA).
We need a comprehensive list of designs in flight shared at some point
(privately) too, so that vendors can prepare documentation ahead.

I believe IA32_TSX_CTRL will be implemented going forward, even on next
generation processors without TAA. Please document that this control
interface is going to persist if that is indeed still the plan.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] ...
  2019-10-22 20:02             ` Luck, Tony
@ 2019-10-22 20:48               ` Jon Masters
  2019-10-22 20:54               ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
  1 sibling, 0 replies; 78+ messages in thread
From: Jon Masters @ 2019-10-22 20:48 UTC (permalink / raw)
  To: speck

Subject: Re: [PATCH v7 04/10] TAAv7 4


On 10/22/19 4:02 PM, speck for Luck, Tony wrote:
> On Tue, Oct 22, 2019 at 09:28:20PM +0200, speck for Borislav Petkov wrote:
>> Or, a completely different idea: I wonder if we could merge the two
>> options like we do for l1tf= for example where we have
>>
>> 	l1tf=<opt1>,<opt2>
>>
>> and then do:
>>
>> 	tsx=on,async_abort_full
>> 	tsx=on,async_abort_full,nosmt
>> 	tsx=off
>>
>> That should even diminish the number of combinations because once you've
>> supplied "tsx=off" for example, async_abort doesn't matter.
> 
> At first glance I find that more confusing than helpful.
> 
> Perspective: TAA is an issue that affects ~3 CPU models. It will be
> a non-issue on future models.

This should have been the very first thing mentioned, on the first day.
It's taken a very long time for us to all get on this same page.

> TSX control is a new CPU feature control that happens to begin
> with those three models, but will continue to be present on future
> CPU models.

So this answers my other question. The MSR will be there forevermore.
Meaning that tsx=off has meaning provided you're on an MDS_NO+ part.

Jon.

-- 
Computer Architect | Sent with my Fedora powered laptop


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 20:02             ` Luck, Tony
  2019-10-22 20:48               ` [MODERATED] Jon Masters
@ 2019-10-22 20:54               ` Borislav Petkov
  2019-10-22 21:38                 ` Josh Poimboeuf
  1 sibling, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 20:54 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 01:02:35PM -0700, speck for Luck, Tony wrote:
> At first glance I find that more confusing than helpful.
> 
> Perspective: TAA is an issue that affects ~3 CPU models. It will be
> a non-issue on future models.
> 
> TSX control is a new CPU feature control that happens to begin
> with those three models, but will continue to be present on future
> CPU models.

The ",async_abort..." piece is optional, of course. You should be able
to use

	tsx=on|off

just fine, without the additional option flags. I.e., you get what you
ordered principle.

It is the same thing as when you don't specify tsx_async_abort= on the
command line now.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (22 preceding siblings ...)
  2019-10-22 18:39 ` [MODERATED] Re: [PATCH v7 10/10] TAAv7 10 Josh Poimboeuf
@ 2019-10-22 21:20 ` Josh Poimboeuf
  2019-10-22 21:35   ` Andrew Cooper
  2019-10-23 15:46 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Borislav Petkov
       [not found] ` <5dae165e.1c69fb81.4beee.e271SMTPIN_ADDED_BROKEN@mx.google.com>
  25 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 21:20 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index 6e0a3b43d027..999b85039128 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -988,4 +988,11 @@ enum mds_mitigations {
>  	MDS_MITIGATION_VMWERV,
>  };
>  
> +enum taa_mitigations {
> +	TAA_MITIGATION_OFF,
> +	TAA_MITIGATION_UCODE_NEEDED,
> +	TAA_MITIGATION_VERW,
> +	TAA_MITIGATION_TSX_DISABLE,

I would rename "TSX_DISABLE" to "TSX_DISABLED", because

 a) it matches the verb tense of "UCODE_NEEDED", and
 b) the past tense hopefully helps make it slightly clearer that TSX was
    disabled separately, not as part of the mitigation code itself.

> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>  #undef pr_fmt
>  #define pr_fmt(fmt) fmt
>  
> -/* Update the static key controlling the MDS CPU buffer clear in idle */
> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>  static void update_mds_branch_idle(void)
>  {
>  	/*
> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>  	 * The other variants cannot be mitigated when SMT is enabled, so
>  	 * clearing the buffers on idle just to prevent the Store Buffer
>  	 * repartitioning leak would be a window dressing exercise.
> +	 *
> +	 * Apply idle buffer clearing to TAA affected CPUs also.
>  	 */
> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>  		return;

Sorry, I was out for most of the last two weeks, so I think I left you
hanging on this.  For context, here's your answer to my previous
question about whether the X86_BUG_TAA check makes sense here:

> It does provide protection against the "store buffer" leak. But the
> other buffers(fill buffer and load port) are still SMT vulnerable. Do
> you prefer it removed?

Yes, please remove it, for the same reason we used for MDS.  There's not
much point in partially mitigating TAA here.

> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 885d4ac2111a..86f22c1e5912 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	if (!cpu_matches(NO_SWAPGS))
>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>  
> +	/*
> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the TAA bug when:
> +	 *	- TSX is supported, or
> +	 *	- TSX_CTRL is supported
> +	 *
> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled before
> +	 * the kernel boot, e.g. kexec.
> +	 * The TSX_CTRL check alone is not sufficient for cases where the microcode
> +	 * update is not present, or when running as a guest that doesn't get TSX_CTRL.
> +	 */
> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> +		setup_force_cpu_bug(X86_BUG_TAA);
> +

I'm finding this logic to be less than 100% convincing, or at least hard
to follow.  And it's different from most of the other bug checks in this
function.

Would the following work instead?

	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
		setup_force_cpu_bug(X86_BUG_TAA);

In other words I would presume all the NO_MDS CPUs listed in
'cpu_vuln_whitelist' are also immune to TAA.

-- 
Josh


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 19:04     ` [MODERATED] " Borislav Petkov
@ 2019-10-22 21:29       ` Pawan Gupta
  2019-10-22 21:53         ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-22 21:29 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 09:04:34PM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 10:44:52AM -0700, speck for Pawan Gupta wrote:
> > There is a small problem with this, it will set
> > 
> > 	taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> > 
> > when there is no X86_BUG_TAA.
> 
> And?
> 
> When the CPU doesn't support TSX, then this practically is "Mitigation:
> TSX disabled" because, well, TSX *is* disabled.

A side effect of the RTM check ahead of X86_BUG_TAA will be a dmesg print of
"Mitigation: TSX disabled" even when X86_BUG_TAA is not set.

> 
> Also, from all the possible settings:
> 
>         [TAA_MITIGATION_OFF]            = "Vulnerable",
>         [TAA_MITIGATION_UCODE_NEEDED]   = "Vulnerable: Clear CPU buffers attempted, no microcode",
>         [TAA_MITIGATION_VERW]           = "Mitigation: Clear CPU buffers",
>         [TAA_MITIGATION_TSX_DISABLE]    = "Mitigation: TSX disabled",
> 
> TAA_MITIGATION_TSX_DISABLE is the one that fits best for the !RTM case,
> no?

When X86_BUG_TAA is not set sysfs shows "Not affected" irrespective of
value of taa_mitigation.

	cpu_show_common()
	{
		if (!boot_cpu_has_bug(bug))
			return sprintf(buf, "Not affected\n");
		[...]

Thanks,
Pawan


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:20 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Josh Poimboeuf
@ 2019-10-22 21:35   ` Andrew Cooper
  2019-10-22 21:44     ` Josh Poimboeuf
  0 siblings, 1 reply; 78+ messages in thread
From: Andrew Cooper @ 2019-10-22 21:35 UTC (permalink / raw)
  To: speck

On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
>> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
>> index 6e0a3b43d027..999b85039128 100644
>> --- a/arch/x86/include/asm/processor.h
>> +++ b/arch/x86/include/asm/processor.h
>> @@ -988,4 +988,11 @@ enum mds_mitigations {
>>  	MDS_MITIGATION_VMWERV,
>>  };
>>  
>> +enum taa_mitigations {
>> +	TAA_MITIGATION_OFF,
>> +	TAA_MITIGATION_UCODE_NEEDED,
>> +	TAA_MITIGATION_VERW,
>> +	TAA_MITIGATION_TSX_DISABLE,
> I would rename "TSX_DISABLE" to "TSX_DISABLED", because
>
>  a) it matches the verb tense of "UCODE_NEEDED", and
>  b) the past tense hopefully helps make it slightly clearer that TSX was
>     disabled separately, not as part of the mitigation code itself.
>
>> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>>  #undef pr_fmt
>>  #define pr_fmt(fmt) fmt
>>  
>> -/* Update the static key controlling the MDS CPU buffer clear in idle */
>> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>>  static void update_mds_branch_idle(void)
>>  {
>>  	/*
>> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>>  	 * The other variants cannot be mitigated when SMT is enabled, so
>>  	 * clearing the buffers on idle just to prevent the Store Buffer
>>  	 * repartitioning leak would be a window dressing exercise.
>> +	 *
>> +	 * Apply idle buffer clearing to TAA affected CPUs also.
>>  	 */
>> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
>> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
>> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>>  		return;
> Sorry, I was out for most of the last two weeks, so I think I left you
> hanging on this.  For context, here's your answer to my previous
> question about whether the X86_BUG_TAA check makes sense here:
>
>> It does provide protection against the "store buffer" leak. But the
>> other buffers(fill buffer and load port) are still SMT vulnerable. Do
>> you prefer it removed?
> Yes, please remove it, for the same reason we used for MDS.  There's not
> much point in partially mitigating TAA here.
>
>> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
>> index 885d4ac2111a..86f22c1e5912 100644
>> --- a/arch/x86/kernel/cpu/common.c
>> +++ b/arch/x86/kernel/cpu/common.c
>> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>>  	if (!cpu_matches(NO_SWAPGS))
>>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>>  
>> +	/*
>> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the TAA bug when:
>> +	 *	- TSX is supported, or
>> +	 *	- TSX_CTRL is supported
>> +	 *
>> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled before
>> +	 * the kernel boot, e.g. kexec.
>> +	 * The TSX_CTRL check alone is not sufficient for cases where the microcode
>> +	 * update is not present, or when running as a guest that doesn't get TSX_CTRL.
>> +	 */
>> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
>> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
>> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>> +		setup_force_cpu_bug(X86_BUG_TAA);
>> +
> I'm finding this logic to be less than 100% convincing, or at least hard
> to follow.  And it's different from most of the other bug checks in this
> function.
>
> Would the following work instead?
>
> 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
> 		setup_force_cpu_bug(X86_BUG_TAA);
>
> In other words I would presume all the NO_MDS CPUs listed in
> 'cpu_vuln_whitelist' are also immune to TAA.

From a leakage point of view, TAA is just another way to access stale
data in the load ports, store buffer or fill buffers.

In practice, the leakage mechanism exists on all MDS-vulnerable,
TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
first place will be similarly unaffected.

On pre-MDS_NO=1 parts, the existing VERW and no-SMT mitigations cover TAA as well.

On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
care of the known leakage cases by forcing a register file write-back of
0 rather than the stale data.  However, the TSX Async Abort case was
missed, which is why this generation of parts is still vulnerable.

Therefore, the TAA vulnerability only really matters for TSX=1,
MDS_NO=1, TAA_NO=0 parts, because these parts are the ones that are
leaky, but have a lower overhead mitigation option than VERW and no-SMT.

~Andrew



* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 20:54               ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
@ 2019-10-22 21:38                 ` Josh Poimboeuf
  2019-10-22 21:46                   ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 21:38 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:54:27PM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 01:02:35PM -0700, speck for Luck, Tony wrote:
> > At first glance I find that more confusing than helpful.
> > 
> > Perspective: TAA is an issue that affects ~3 CPU models. It will be
> > a non-issue on future models.
> > 
> > TSX control is a new CPU feature control that happens to begin
> > with those three models, but will continue to be present on future
> > CPU models.
> 
> The ",async_abort..." piece is optional, of course. You should be able
> to use
> 
> 	tsx=on|off
> 
> just fine, without the additional option flags. I.e., you get what you
> ordered principle.

Right, but then tsx=on exposes you to a bug, which might be surprising.

> It is the same thing as when you don't specify tsx_async_abort= on the
> command line now.

I'm not sure what you mean, I think the patches mitigate TAA by default
when TSX is on (ignoring SMT of course).

Also, suspending disbelief for a moment and assuming TSX becomes a huge
success story and you want to safely enable it 5 years down the road,
you'd have to do

  tsx=on,tsx_async_abort=full,nosmt,tsx_bug2=full,nosmt .... etc

when you just want to turn the darned thing on without having to worry
about specifying all the mitigations for all currently known bugs.

So while it is kind of nice to have everything specified on the same
cmdline, I think I prefer the current approach of keeping the TSX
enablement separate from the mitigation.

-- 
Josh


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:35   ` Andrew Cooper
@ 2019-10-22 21:44     ` Josh Poimboeuf
  2019-10-22 22:03       ` Andrew Cooper
  0 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 21:44 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:35:12PM +0100, speck for Andrew Cooper wrote:
> On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
> > On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> >> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> >> index 6e0a3b43d027..999b85039128 100644
> >> --- a/arch/x86/include/asm/processor.h
> >> +++ b/arch/x86/include/asm/processor.h
> >> @@ -988,4 +988,11 @@ enum mds_mitigations {
> >>  	MDS_MITIGATION_VMWERV,
> >>  };
> >>  
> >> +enum taa_mitigations {
> >> +	TAA_MITIGATION_OFF,
> >> +	TAA_MITIGATION_UCODE_NEEDED,
> >> +	TAA_MITIGATION_VERW,
> >> +	TAA_MITIGATION_TSX_DISABLE,
> > I would rename "TSX_DISABLE" to "TSX_DISABLED", because
> >
> >  a) it matches the verb tense of "UCODE_NEEDED", and
> >  b) the past tense hopefully helps make it slightly clearer that TSX was
> >     disabled separately, not as part of the mitigation code itself.
> >
> >> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
> >>  #undef pr_fmt
> >>  #define pr_fmt(fmt) fmt
> >>  
> >> -/* Update the static key controlling the MDS CPU buffer clear in idle */
> >> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
> >>  static void update_mds_branch_idle(void)
> >>  {
> >>  	/*
> >> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
> >>  	 * The other variants cannot be mitigated when SMT is enabled, so
> >>  	 * clearing the buffers on idle just to prevent the Store Buffer
> >>  	 * repartitioning leak would be a window dressing exercise.
> >> +	 *
> >> +	 * Apply idle buffer clearing to TAA affected CPUs also.
> >>  	 */
> >> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
> >> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
> >> +	    !boot_cpu_has_bug(X86_BUG_TAA))
> >>  		return;
> > Sorry, I was out for most of the last two weeks, so I think I left you
> > hanging on this.  For context, here's your answer to my previous
> > question about whether the X86_BUG_TAA check makes sense here:
> >
> >> It does provide protection against the "store buffer" leak. But the
> >> other buffers(fill buffer and load port) are still SMT vulnerable. Do
> >> you prefer it removed?
> > Yes, please remove it, for the same reason we used for MDS.  There's not
> > much point in partially mitigating TAA here.
> >
> >> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> >> index 885d4ac2111a..86f22c1e5912 100644
> >> --- a/arch/x86/kernel/cpu/common.c
> >> +++ b/arch/x86/kernel/cpu/common.c
> >> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >>  	if (!cpu_matches(NO_SWAPGS))
> >>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
> >>  
> >> +	/*
> >> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the TAA bug when:
> >> +	 *	- TSX is supported, or
> >> +	 *	- TSX_CTRL is supported
> >> +	 *
> >> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled before
> >> +	 * the kernel boot, e.g. kexec.
> >> +	 * The TSX_CTRL check alone is not sufficient for cases where the microcode
> >> +	 * update is not present, or when running as a guest that doesn't get TSX_CTRL.
> >> +	 */
> >> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> >> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
> >> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >> +		setup_force_cpu_bug(X86_BUG_TAA);
> >> +
> > I'm finding this logic to be less than 100% convincing, or at least hard
> > to follow.  And it's different from most of the other bug checks in this
> > function.
> >
> > Would the following work instead?
> >
> > 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
> > 		setup_force_cpu_bug(X86_BUG_TAA);
> >
> > In other words I would presume all the NO_MDS CPUs listed in
> > 'cpu_vuln_whitelist' are also immune to TAA.
> 
> From a leakage point of view, TAA is just another way to access stale
> data in the load ports, store buffer or fill buffers.
> 
> In practice, the leakage mechanism exists on all MDS-vulnerable,
> TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
> first place will be similarly unaffected.
> 
> On pre-MDS_NO=1 parts, the existing VERW and no-SMT mitigations cover TAA as well.
> 
> On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
> care of the known leakage cases by forcing a register file write-back of
> 0 rather than the stale data.  However, the TSX Async Abort case was
> missed, which is why this generation of parts is still vulnerable.
> 
> Therefore, the TAA vulnerability only really matters for TSX=1,
> MDS_NO=1, TAA_NO=0 parts, because these parts are the ones that are
> leaky, but have a lower overhead mitigation option than VERW and no-SMT.

I agree with all of that.  So are you saying you agree or disagree with
my code snippet?  Note that a point of confusion here might be that
NO_MDS in the above code is something altogether different from the
ARCH_CAP_MDS_NO CPU bit.

-- 
Josh


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:38                 ` Josh Poimboeuf
@ 2019-10-22 21:46                   ` Borislav Petkov
  2019-10-22 22:06                     ` Josh Poimboeuf
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 21:46 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 04:38:20PM -0500, speck for Josh Poimboeuf wrote:
> I'm not sure what you mean, I think the patches mitigate TAA by default
> when TSX is on (ignoring SMT of course).
> 
> Also, suspending disbelief for a moment and assuming TSX becomes a huge
> success story and you want to safely enable it 5 years down the road,
> you'd have to do
> 
>   tsx=on,tsx_async_abort=full,nosmt,tsx_bug2=full,nosmt .... etc
> 
> when you just want to turn the darned thing on without having to worry
> about specifying all the mitigations for all currently known bugs.

But when you want to enable it 5 years from now, you simply do

	tsx=on

Why would you even need to supply anything after it?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:29       ` [MODERATED] " Pawan Gupta
@ 2019-10-22 21:53         ` Borislav Petkov
  2019-10-22 22:05           ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 21:53 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 02:29:02PM -0700, speck for Pawan Gupta wrote:
> A side effect of the RTM check ahead of X86_BUG_TAA will be a dmesg print of
> "Mitigation: TSX disabled" even when X86_BUG_TAA is not set.

So what you're trying to tell me is that you don't want to print that
message at all?

	if (!boot_cpu_has(X86_FEATURE_RTM)) {
		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
		return;
	}

How's that?

> > Also, from all the possible settings:
> > 
> >         [TAA_MITIGATION_OFF]            = "Vulnerable",
> >         [TAA_MITIGATION_UCODE_NEEDED]   = "Vulnerable: Clear CPU buffers attempted, no microcode",
> >         [TAA_MITIGATION_VERW]           = "Mitigation: Clear CPU buffers",
> >         [TAA_MITIGATION_TSX_DISABLE]    = "Mitigation: TSX disabled",
> > 
> > TAA_MITIGATION_TSX_DISABLE is the one that fits best for the !RTM case,
> > no?
> 
> When X86_BUG_TAA is not set sysfs shows "Not affected" irrespective of
> value of taa_mitigation.
> 
> 	cpu_show_common()
> 	{
> 		if (!boot_cpu_has_bug(bug))
> 			return sprintf(buf, "Not affected\n");
> 		[...]

Sorry, I can't follow what you're trying to tell me here.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:44     ` Josh Poimboeuf
@ 2019-10-22 22:03       ` Andrew Cooper
  2019-10-23  1:16         ` Josh Poimboeuf
  0 siblings, 1 reply; 78+ messages in thread
From: Andrew Cooper @ 2019-10-22 22:03 UTC (permalink / raw)
  To: speck

On 22/10/2019 22:44, speck for Josh Poimboeuf wrote:
> On Tue, Oct 22, 2019 at 10:35:12PM +0100, speck for Andrew Cooper wrote:
>> On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
>>> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
>>>> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
>>>> index 6e0a3b43d027..999b85039128 100644
>>>> --- a/arch/x86/include/asm/processor.h
>>>> +++ b/arch/x86/include/asm/processor.h
>>>> @@ -988,4 +988,11 @@ enum mds_mitigations {
>>>>  	MDS_MITIGATION_VMWERV,
>>>>  };
>>>>  
>>>> +enum taa_mitigations {
>>>> +	TAA_MITIGATION_OFF,
>>>> +	TAA_MITIGATION_UCODE_NEEDED,
>>>> +	TAA_MITIGATION_VERW,
>>>> +	TAA_MITIGATION_TSX_DISABLE,
>>> I would rename "TSX_DISABLE" to "TSX_DISABLED", because
>>>
>>>  a) it matches the verb tense of "UCODE_NEEDED", and
>>>  b) the past tense hopefully helps make it slightly clearer that TSX was
>>>     disabled separately, not as part of the mitigation code itself.
>>>
>>>> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>>>>  #undef pr_fmt
>>>>  #define pr_fmt(fmt) fmt
>>>>  
>>>> -/* Update the static key controlling the MDS CPU buffer clear in idle */
>>>> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>>>>  static void update_mds_branch_idle(void)
>>>>  {
>>>>  	/*
>>>> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>>>>  	 * The other variants cannot be mitigated when SMT is enabled, so
>>>>  	 * clearing the buffers on idle just to prevent the Store Buffer
>>>>  	 * repartitioning leak would be a window dressing exercise.
>>>> +	 *
>>>> +	 * Apply idle buffer clearing to TAA affected CPUs also.
>>>>  	 */
>>>> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
>>>> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
>>>> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>>>>  		return;
>>> Sorry, I was out for most of the last two weeks, so I think I left you
>>> hanging on this.  For context, here's your answer to my previous
>>> question about whether the X86_BUG_TAA check makes sense here:
>>>
>>>> It does provide protection against the "store buffer" leak. But the
>>>> other buffers(fill buffer and load port) are still SMT vulnerable. Do
>>>> you prefer it removed?
>>> Yes, please remove it, for the same reason we used for MDS.  There's not
>>> much point in partially mitigating TAA here.
>>>
>>>> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
>>>> index 885d4ac2111a..86f22c1e5912 100644
>>>> --- a/arch/x86/kernel/cpu/common.c
>>>> +++ b/arch/x86/kernel/cpu/common.c
>>>> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>>>>  	if (!cpu_matches(NO_SWAPGS))
>>>>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>>>>  
>>>> +	/*
>>>> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the TAA bug when:
>>>> +	 *	- TSX is supported, or
>>>> +	 *	- TSX_CTRL is supported
>>>> +	 *
>>>> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled before
>>>> +	 * the kernel boot, e.g. kexec.
>>>> +	 * The TSX_CTRL check alone is not sufficient for cases where the microcode
>>>> +	 * update is not present, or when running as a guest that doesn't get TSX_CTRL.
>>>> +	 */
>>>> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
>>>> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
>>>> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>>>> +		setup_force_cpu_bug(X86_BUG_TAA);
>>>> +
>>> I'm finding this logic to be less than 100% convincing, or at least hard
>>> to follow.  And it's different from most of the other bug checks in this
>>> function.
>>>
>>> Would the following work instead?
>>>
>>> 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
>>> 		setup_force_cpu_bug(X86_BUG_TAA);
>>>
>>> In other words I would presume all the NO_MDS CPUs listed in
>>> 'cpu_vuln_whitelist' are also immune to TAA.
>> From a leakage point of view, TAA is just another way to access stale
>> data in the load ports, store buffer or fill buffers.
>>
>> In practice, the leakage mechanism exists on all MDS-vulnerable,
>> TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
>> first place will be similarly unaffected.
>>
>> On pre-MDS_NO=1 parts, the existing VERW and no-SMT mitigations cover TAA as well.
>>
>> On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
>> care of the known leakage cases by forcing a register file write-back of
>> 0 rather than the stale data.  However, the TSX Async Abort case was
>> missed, which is why this generation of parts is still vulnerable.
>>
>> Therefore, the TAA vulnerability only really matters for TSX=1,
>> MDS_NO=1, TAA_NO=0 parts, because these parts are the ones that are
>> leaky, but have a lower overhead mitigation option than VERW and no-SMT.
> I agree with all of that.  So are you saying you agree or disagree with
> my code snippet?

Ah - sorry for being unclear.  Disagree.

Your code snippet would cause some Atom and very old Core parts to also
be reported as TAA-vulnerable, as they are on the MDS whitelist.

~Andrew



* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:53         ` Borislav Petkov
@ 2019-10-22 22:05           ` Borislav Petkov
  2019-10-23  0:27             ` Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 22:05 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 11:53:21PM +0200, Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 02:29:02PM -0700, speck for Pawan Gupta wrote:
> > A side effect of the RTM check ahead of X86_BUG_TAA will be a dmesg print of
> > "Mitigation: TSX disabled" even when X86_BUG_TAA is not set.
> 
> So what you're trying to tell me is that you don't want to print that
> message at all?
> 
> 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> 		return;
> 	}
> 
> How's that?

And to perhaps answer your other question from below maybe - I'm
assuming you want something like this:

	if (!boot_cpu_has(X86_FEATURE_RTM)) {
		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
		setup_clear_cpu_bug(X86_BUG_TAA);
		return;
	}

This is assuming X86_FEATURE_RTM really mirrors the CPUID bit. And that
should be the case since if you do tsx_disable(), it will clear the
CPUID bit through the MSR. IOW, when TSX is disabled, you're basically
not affected by TAA, of course.

> > > Also, from all the possible settings:
> > > 
> > >         [TAA_MITIGATION_OFF]            = "Vulnerable",
> > >         [TAA_MITIGATION_UCODE_NEEDED]   = "Vulnerable: Clear CPU buffers attempted, no microcode",
> > >         [TAA_MITIGATION_VERW]           = "Mitigation: Clear CPU buffers",
> > >         [TAA_MITIGATION_TSX_DISABLE]    = "Mitigation: TSX disabled",
> > > 
> > > TAA_MITIGATION_TSX_DISABLE is the one that fits best for the !RTM case,
> > > no?
> > 
> > When X86_BUG_TAA is not set sysfs shows "Not affected" irrespective of
> > value of taa_mitigation.
> > 
> > 	cpu_show_common()
> > 	{
> > 		if (!boot_cpu_has_bug(bug))
> > 			return sprintf(buf, "Not affected\n");
> > 		[...]
> 
> Sorry, I can't follow what you're trying to tell me here.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 21:46                   ` Borislav Petkov
@ 2019-10-22 22:06                     ` Josh Poimboeuf
  2019-10-22 22:13                       ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-22 22:06 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 11:46:29PM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 04:38:20PM -0500, speck for Josh Poimboeuf wrote:
> > I'm not sure what you mean, I think the patches mitigate TAA by default
> > when TSX is on (ignoring SMT of course).
> > 
> > Also, suspending disbelief for a moment and assuming TSX becomes a huge
> > success story and you want to safely enable it 5 years down the road,
> > you'd have to do
> > 
> >   tsx=on,tsx_async_abort=full,nosmt,tsx_bug2=full,nosmt .... etc
> > 
> > when you just want to turn the darned thing on without having to worry
> > about specifying all the mitigations for all currently known bugs.
> 
> But when you want to enable it 5 years from now, you simply do
> 
> 	tsx=on
> 
> Why would you even need to supply anything after it?

Because my 4-year-old CPU will need the mitigations ;-)

-- 
Josh


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 22:06                     ` Josh Poimboeuf
@ 2019-10-22 22:13                       ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-22 22:13 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 05:06:09PM -0500, speck for Josh Poimboeuf wrote:
> Because my 4-year-old CPU will need the mitigations ;-)

Aha, I thought you'd buy a new CPU after 5 years. :-)

Well, what you'd have to do then using the currently proposed method is
only marginally different:

	tsx=on tsx_async_abort=full,nosmt tsx_warm_beer_bug=full ...

or are you hoping that the default setting would match what you want?

Yeah, right - like that ever works. :-)

But ok, I see your point.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 22:05           ` Borislav Petkov
@ 2019-10-23  0:27             ` Pawan Gupta
  2019-10-23  5:25               ` Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-23  0:27 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:05:55AM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 11:53:21PM +0200, Borislav Petkov wrote:
> > On Tue, Oct 22, 2019 at 02:29:02PM -0700, speck for Pawan Gupta wrote:
> > > A side effect of the RTM check ahead of X86_BUG_TAA will be a dmesg print of
> > > "Mitigation: TSX disabled" even when X86_BUG_TAA is not set.
> > 
> > So what you're trying to tell me is that you don't want to print that
> > message at all?
> > 
> > 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> > 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> > 		return;
> > 	}
> > 
> > How's that?
> 
> And to perhaps answer your other question from below maybe - I'm
> assuming you want something like this:
> 
> 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> 		setup_clear_cpu_bug(X86_BUG_TAA);
> 		return;
> 	}
> 
> This is assuming X86_FEATURE_RTM really mirrors the CPUID bit. And that
> should be the case since if you do tsx_disable(), it will clear the
> CPUID bit through the MSR. IOW, when TSX is disabled, you're basically
> not affected by TAA, of course.

Sysfs mitigations are shown by a common function, cpu_show_common(), which
checks the X86_BUG_TAA bit. If we clear X86_BUG_TAA, cpu_show_common()
will show "Not affected" in sysfs. This would be the case when X86_BUG_TAA
exists in the hardware but was mitigated by disabling TSX. In my opinion
the correct output in this case should be "Mitigation: TSX disabled", which
is what it shows now.

I am sorry, I am failing to understand what problem you are trying to
solve by changing the order of the X86_FEATURE_RTM and X86_BUG_TAA checks.

> 
> > > > Also, from all the possible settings:
> > > > 
> > > >         [TAA_MITIGATION_OFF]            = "Vulnerable",
> > > >         [TAA_MITIGATION_UCODE_NEEDED]   = "Vulnerable: Clear CPU buffers attempted, no microcode",
> > > >         [TAA_MITIGATION_VERW]           = "Mitigation: Clear CPU buffers",
> > > >         [TAA_MITIGATION_TSX_DISABLE]    = "Mitigation: TSX disabled",
> > > > 
> > > > TAA_MITIGATION_TSX_DISABLE is the one that fits best for the !RTM case,
> > > > no?
> > > 
> > > When X86_BUG_TAA is not set sysfs shows "Not affected" irrespective of
> > > value of taa_mitigation.
> > > 
> > > 	cpu_show_common()
> > > 	{
> > > 		if (!boot_cpu_has_bug(bug))
> > > 			return sprintf(buf, "Not affected\n");
> > > 		[...]
> > 
> > Sorry, I can't follow what you're trying to tell me here.

That when X86_BUG_TAA is not set, no matter what value taa_mitigation
has, sysfs is always going to say "Not affected". So the first check
taa_select_mitigation() does is whether X86_BUG_TAA is set. If not, it
does nothing and returns. If it is, it sets the appropriate
taa_mitigation.

When the user reads the sysfs file for vulnerabilities,
cpu_show_common() checks whether the bug is present: if it is, it
displays the mitigation string according to taa_mitigation; if not, it
simply says "Not affected".
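
The two-step flow described above can be sketched in plain C. This is a
userspace model of the behavior under discussion (cpu_show_taa() is a
made-up helper name, not the actual kernel function), using the
mitigation strings quoted earlier in the thread:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum taa_mitigations {
	TAA_MITIGATION_OFF,
	TAA_MITIGATION_UCODE_NEEDED,
	TAA_MITIGATION_VERW,
	TAA_MITIGATION_TSX_DISABLED,
};

static const char * const taa_strings[] = {
	[TAA_MITIGATION_OFF]		= "Vulnerable",
	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
	[TAA_MITIGATION_TSX_DISABLED]	= "Mitigation: TSX disabled",
};

/* Model of the sysfs read path: the bug bit always takes precedence,
 * so taa_mitigation is only consulted when the bug is present. */
static const char *cpu_show_taa(bool has_bug, enum taa_mitigations m)
{
	if (!has_bug)
		return "Not affected";
	return taa_strings[m];
}
```

This makes the point of contention concrete: whatever
taa_select_mitigation() records, clearing the bug bit forces the
"Not affected" string.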

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 22:03       ` Andrew Cooper
@ 2019-10-23  1:16         ` Josh Poimboeuf
  0 siblings, 0 replies; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-23  1:16 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 11:03:19PM +0100, speck for Andrew Cooper wrote:
> On 22/10/2019 22:44, speck for Josh Poimboeuf wrote:
> > On Tue, Oct 22, 2019 at 10:35:12PM +0100, speck for Andrew Cooper wrote:
> >> On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
> >>> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> >>>> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> >>>> index 6e0a3b43d027..999b85039128 100644
> >>>> --- a/arch/x86/include/asm/processor.h
> >>>> +++ b/arch/x86/include/asm/processor.h
> >>>> @@ -988,4 +988,11 @@ enum mds_mitigations {
> >>>>  	MDS_MITIGATION_VMWERV,
> >>>>  };
> >>>>  
> >>>> +enum taa_mitigations {
> >>>> +	TAA_MITIGATION_OFF,
> >>>> +	TAA_MITIGATION_UCODE_NEEDED,
> >>>> +	TAA_MITIGATION_VERW,
> >>>> +	TAA_MITIGATION_TSX_DISABLE,
> >>> I would rename "TSX_DISABLE" to "TSX_DISABLED", because
> >>>
> >>>  a) it matches the verb tense of "UCODE_NEEDED", and
> >>>  b) the past tense hopefully helps make it slightly clearer that TSX was
> >>>     disabled separately, not as part of the mitigation code itself.
> >>>
> >>>> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
> >>>>  #undef pr_fmt
> >>>>  #define pr_fmt(fmt) fmt
> >>>>  
> >>>> -/* Update the static key controlling the MDS CPU buffer clear in idle */
> >>>> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
> >>>>  static void update_mds_branch_idle(void)
> >>>>  {
> >>>>  	/*
> >>>> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
> >>>>  	 * The other variants cannot be mitigated when SMT is enabled, so
> >>>>  	 * clearing the buffers on idle just to prevent the Store Buffer
> >>>>  	 * repartitioning leak would be a window dressing exercise.
> >>>> +	 *
> >>>> +	 * Apply idle buffer clearing to TAA affected CPUs also.
> >>>>  	 */
> >>>> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
> >>>> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
> >>>> +	    !boot_cpu_has_bug(X86_BUG_TAA))
> >>>>  		return;
> >>> Sorry, I was out for most of the last two weeks, so I think I left you
> >>> hanging on this.  For context, here's your answer to my previous
> >>> question about whether the X86_BUG_TAA check makes sense here:
> >>>
> >>>> It does provide protection against the "store buffer" leak. But the
> >>>> other buffers(fill buffer and load port) are still SMT vulnerable. Do
> >>>> you prefer it removed?
> >>> Yes, please remove it, for the same reason we used for MDS.  There's not
> >>> much point in partially mitigating TAA here.
> >>>
> >>>> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> >>>> index 885d4ac2111a..86f22c1e5912 100644
> >>>> --- a/arch/x86/kernel/cpu/common.c
> >>>> +++ b/arch/x86/kernel/cpu/common.c
> >>>> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> >>>>  	if (!cpu_matches(NO_SWAPGS))
> >>>>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
> >>>>  
> >>>> +	/*
> >>>> +	 * When processor is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> >>>> +	 *	- TSX is supported or
> >>>> +	 *	- TSX_CTRL is supported
> >>>> +	 *
> >>>> +	 * TSX_CTRL check is needed for cases when TSX could be disabled before
> >>>> +	 * the kernel boot e.g. kexec
> >>>> +	 * TSX_CTRL check alone is not sufficient for cases when the microcode
> >>>> +	 * update is not present or running as guest that don't get TSX_CTRL.
> >>>> +	 */
> >>>> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> >>>> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
> >>>> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> >>>> +		setup_force_cpu_bug(X86_BUG_TAA);
> >>>> +
> >>> I'm finding this logic to be less than 100% convincing, or at least hard
> >>> to follow.  And it's different from most of the other bug checks in this
> >>> function.
> >>>
> >>> Would the following work instead?
> >>>
> >>> 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
> >>> 		setup_force_cpu_bug(X86_BUG_TAA);
> >>>
> >>> In other words I would presume all the NO_MDS CPUs listed in
> >>> 'cpu_vuln_whitelist' are also immune to TAA.
> >> From a leakage point of view, TAA is just another way to access stale
> >> data in the load ports, store buffer or fill buffers.
> >>
> >> In practice, the leakage mechanism exists on all MDS-vulnerable,
> >> TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
> >> first place will be similarly unaffected.
> >>
> >> On pre MDS_NO=1 parts, the existing VERW and no-smt mitigates TAA as well.
> >>
> >> On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
> >> care of the known leakage cases by forcing a register file write-back of
> >> 0 rather than the stale data.  However, the TSX Async Abort case was
> >> missed, which is why this generation of parts is still vulnerable.
> >>
> >> Therefore, the TAA vulnerability only really matters for TSX=1,
> >> MDS_NO=1, TAA_NO=0 parts, because these parts are the ones that are
> >> leaky, but have a lower overhead mitigation option than VERW and no-SMT.
> > I agree with all of that.  So are you saying you agree or disagree with
> > my code snippet?
> 
> Ah - sorry for being unclear.  Disagree.
> 
> Your code snippet will cause some Atom and very old Core parts to also
> report to be TAA vulnerable, as they are on the MDS whitelist.

Ok, so I assume those parts don't have TSX?  Can we clarify which ones
those are?  We could add a "NO_TAA" to the whitelist bits.
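
A NO_TAA whitelist bit could plug into the bug check roughly like this.
This is a standalone C sketch with assumed flag values modeled on the
kernel's cpu_vuln_whitelist, not the actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical whitelist flags, modeled on cpu_vuln_whitelist. */
#define NO_MDS		(1u << 0)
#define NO_TAA		(1u << 1)	/* proposed new bit */

#define ARCH_CAP_TAA_NO	(1ull << 8)

/* Force X86_BUG_TAA unless the part is whitelisted as immune
 * or the ARCH_CAPABILITIES MSR reports TAA_NO. */
static bool cpu_has_taa_bug(uint32_t whitelist, uint64_t ia32_cap)
{
	if (whitelist & NO_TAA)
		return false;
	return !(ia32_cap & ARCH_CAP_TAA_NO);
}
```

With a separate NO_TAA bit, Atom and old Core parts on the MDS
whitelist would not be dragged in just because they carry NO_MDS.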

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
  2019-10-22 17:02   ` Borislav Petkov
  2019-10-22 17:44   ` Pawan Gupta
@ 2019-10-23  1:33   ` Pawan Gupta
  2019-10-23  6:48     ` Borislav Petkov
  2 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-23  1:33 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 06:51:12PM +0200, speck for Borislav Petkov wrote:
> > +static void __init taa_select_mitigation(void)
> > +{
> > +	u64 ia32_cap = x86_read_arch_cap_msr();
> 
> Do that MSR access...
> 
> > +
> > +	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
> > +		taa_mitigation = TAA_MITIGATION_OFF;
> > +		return;
> > +	}
> 
> ... here.
> 

Is it okay to move x86_read_arch_cap_msr() further down, just before it's needed?

+       ia32_cap = x86_read_arch_cap_msr();
+
	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
			!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-23  0:27             ` Pawan Gupta
@ 2019-10-23  5:25               ` Pawan Gupta
  2019-10-23  6:46                 ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-23  5:25 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 05:27:56PM -0700, speck for Pawan Gupta wrote:
> On Wed, Oct 23, 2019 at 12:05:55AM +0200, speck for Borislav Petkov wrote:
> > On Tue, Oct 22, 2019 at 11:53:21PM +0200, Borislav Petkov wrote:
> > > On Tue, Oct 22, 2019 at 02:29:02PM -0700, speck for Pawan Gupta wrote:
> > > > Side effect of RTM check ahead of X86_BUG_TAA will be a dmesg print
> > > > "Mitigation: TSX disabled" when X86_BUG_TAA is not set.
> > > 
> > > So what you're trying to tell me is that you don't want to print that
> > > message at all?
> > > 
> > > 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> > > 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> > > 		return;
> > > 	}
> > > 
> > > How's that?
> > 
> > And to perhaps answer your other question from below maybe - I'm
> > assuming you want something like this:
> > 
> > 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
> > 		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> > 		setup_clear_cpu_bug(X86_BUG_TAA);

I think we first need to agree on whether disabling TSX is the
mitigation for X86_BUG_TAA or the absence of the bug. Your clearing the
bug bit here makes me think that you are treating it as the absence of
the bug. This series treats it as a mitigation when the hardware has the
bug. Let me know your thoughts on this.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-23  5:25               ` Pawan Gupta
@ 2019-10-23  6:46                 ` Borislav Petkov
  2019-10-23 13:28                   ` Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23  6:46 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 10:25:27PM -0700, speck for Pawan Gupta wrote:
> I think we first need to agree on whether disabling TSX is the
> mitigation for X86_BUG_TAA or the lack of bug. Your clearing the bug bit
> here makes me think that you are treating it as lack of bug. This series
> treats it as a mitigation when the hardware has the bug. Let me know
> your thoughts on this.

Ok, I see your point. And yes, let's do what you're suggesting because
that mirrors reality optimally:

static void __init taa_select_mitigation(void)
{ 
        u64 ia32_cap;

        if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
                taa_mitigation = TAA_MITIGATION_OFF;
                return;
        }

	if (!boot_cpu_has(X86_FEATURE_RTM)) {
                taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
                goto out;
        }

	...

so the bug presence would take precedence and in the !X86_BUG_TAA case
or when we've disabled all mitigations, we won't say anything.

Then, if the bug is present but we've disabled TSX, we say "Mitigation:
TSX disabled".

Hohumm, makes sense to me.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-23  1:33   ` Pawan Gupta
@ 2019-10-23  6:48     ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23  6:48 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 06:33:21PM -0700, speck for Pawan Gupta wrote:
> Is it okay to move x86_read_arch_cap_msr() further down just before its needed?

Yap, that's even better as it is a common kernel code pattern: you
read/produce a value and then you do checks on it.

> +       ia32_cap = x86_read_arch_cap_msr();
> +

you don't need this newline.

> 	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> 			!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
> 		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 10/10] TAAv7 10
  2019-10-22 18:39 ` [MODERATED] Re: [PATCH v7 10/10] TAAv7 10 Josh Poimboeuf
@ 2019-10-23  7:24   ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23  7:24 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 01:39:20PM -0500, speck for Josh Poimboeuf wrote:
> Regardless, this patch adds complexity and still seems very fragile.  It
> will add maintenance issues and probably introduce bugs.  And, most
> importantly there's *still* not a real world justification for it.
> Please drop it from the series.

Yeah, and here's a nasty use case if we did the runtime toggle thing: you
boot with TSX on, start guests, they see TSX and start using it, and then
you disable it in the host.

I believe I read Andrew saying somewhere that it won't #UD but it would
abort all transactions, which is still not nice and leaves the sysadmin
with the burden of explaining why guests experience a perf penalty all of
a sudden.

Not good.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 01/10] TAAv7 1
  2019-10-22 17:25 ` [MODERATED] Re: [PATCH v7 01/10] TAAv7 1 Josh Poimboeuf
@ 2019-10-23  9:26   ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23  9:26 UTC (permalink / raw)
  To: speck

On Tue, Oct 22, 2019 at 12:25:08PM -0500, speck for Josh Poimboeuf wrote:
> How is HLE unconditionally disabled?  Is it done by the above mentioned
> microcode?  Is there a way to enable it?

From patch 4:

" TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled with updated microcode but still enumerated as present by
CPUID(EAX=7).EBX{bit4}."

Sounds like the microcode zaps it but leaves the CPUID bit on.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-23  6:46                 ` Borislav Petkov
@ 2019-10-23 13:28                   ` Pawan Gupta
  2019-10-23 14:39                     ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-23 13:28 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 08:46:54AM +0200, speck for Borislav Petkov wrote:
> On Tue, Oct 22, 2019 at 10:25:27PM -0700, speck for Pawan Gupta wrote:
> > I think we first need to agree on whether disabling TSX is the
> > mitigation for X86_BUG_TAA or the lack of bug. Your clearing the bug bit
> > here makes me think that you are treating it as lack of bug. This series
> > treats it as a mitigation when the hardware has the bug. Let me know
> > your thoughts on this.
> 
> Ok, I see your point. And yes, let's do what you're suggesting because
> that mirrors reality optimally:
> 
> static void __init taa_select_mitigation(void)
> { 
>         u64 ia32_cap;
> 
>         if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
>                 taa_mitigation = TAA_MITIGATION_OFF;
>                 return;
>         }

There is a small issue with the cpu_mitigations_off() check coming before
the X86_FEATURE_RTM check.

If the user provides the cmdline "tsx=off mitigations=off", then sysfs
will show "Vulnerable" when it is actually mitigated because of "tsx=off".

> 
> 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
>                 taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
>                 goto out;
>         }

Moving the cpu_mitigations_off() check after the X86_FEATURE_RTM check will fix this.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
  2019-10-23 13:28                   ` Pawan Gupta
@ 2019-10-23 14:39                     ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23 14:39 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 06:28:59AM -0700, speck for Pawan Gupta wrote:
> cpu_mitigations_off() check after X86_FEATURE_RTM check will fix this.

Ok, the second cmdline switch tsx= makes this a bit trickier than I was
prepared for. I've reverted to your previous order:

static void __init taa_select_mitigation(void)
{
        u64 ia32_cap;

        if (!boot_cpu_has_bug(X86_BUG_TAA)) {
                taa_mitigation = TAA_MITIGATION_OFF;
                return;
        }

        /* TSX previously disabled by tsx=off */
        if (!boot_cpu_has(X86_FEATURE_RTM)) {
                taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
                goto out;
        }

        if (cpu_mitigations_off()) {
                taa_mitigation = TAA_MITIGATION_OFF;
                return;
        }

I've applied the first 9 patches and have incorporated as much of the
feedback as I could. Will try to get a box for which the microcode fits
and test it before I post today or tomorrow and we can continue there.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
                   ` (23 preceding siblings ...)
  2019-10-22 21:20 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Josh Poimboeuf
@ 2019-10-23 15:46 ` Borislav Petkov
  2019-10-23 17:11   ` Josh Poimboeuf
  2019-10-23 22:12   ` Pawan Gupta
       [not found] ` <5dae165e.1c69fb81.4beee.e271SMTPIN_ADDED_BROKEN@mx.google.com>
  25 siblings, 2 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23 15:46 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 01:22:01PM -0700, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 00/10] TAAv7

Ok, I ran the pile on a box here:

vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
stepping        : 12

There is some microcode for it:

[    0.000000] microcode: microcode updated early to revision 0xc6, date = 2019-08-14
[    1.005808] microcode: sig=0x906ec, pf=0x2, revision=0xc6

And booting it says:

[    0.197056] tsx_init: enter
[    0.197207] tsx_ctrl_is_supported: CAP MSR: 0x9

This is added by me and it shows that the box is a pre MDS_NO=1 one,
i.e., MD_CLEAR mitigates TAA too AFAIU.

Which means, there's no TSX_CTRL_MSR and I cannot disable TSX there.

Which means, boxes like this one don't need the microcode as long as
they have MD_CLEAR microcode.

[    0.197363] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.197540] Spectre V2 : Mitigation: Full generic retpoline
[    0.197696] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.197871] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    0.198032] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.198208] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[    0.198386] MDS: Mitigation: Clear CPU buffers
[    0.198540] TAA: Mitigation: Clear CPU buffers

Makes sense?

In any case, I thought I should share some first testing attempts with
the group.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-23 15:46 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Borislav Petkov
@ 2019-10-23 17:11   ` Josh Poimboeuf
  2019-10-23 21:49     ` Borislav Petkov
  2019-10-23 22:12   ` Pawan Gupta
  1 sibling, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2019-10-23 17:11 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 05:46:04PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:22:01PM -0700, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH v7 00/10] TAAv7
> 
> Ok, I ran the pile on a box here:
> 
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 158
> model name      : Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
> stepping        : 12
> 
> There is some microcode for it:
> 
> [    0.000000] microcode: microcode updated early to revision 0xc6, date = 2019-08-14
> [    1.005808] microcode: sig=0x906ec, pf=0x2, revision=0xc6
> 
> And booting it says:
> 
> [    0.197056] tsx_init: enter
> [    0.197207] tsx_ctrl_is_supported: CAP MSR: 0x9
> 
> This is added by me and it shows that the box is a pre MDS_NO=1 one,
> i.e., MD_CLEAR mitigates TAA too AFAIU.
> 
> Which means, there's no TSX_CTRL_MSR and I cannot disable TSX there.
> 
> Which means, boxes like this one don't need the microcode as long as
> they have MD_CLEAR microcode.
> 
> [    0.197363] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
> [    0.197540] Spectre V2 : Mitigation: Full generic retpoline
> [    0.197696] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
> [    0.197871] Spectre V2 : Enabling Restricted Speculation for firmware calls
> [    0.198032] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
> [    0.198208] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
> [    0.198386] MDS: Mitigation: Clear CPU buffers
> [    0.198540] TAA: Mitigation: Clear CPU buffers
> 
> Makes sense?

FWIW, makes sense to me.  It will be especially interesting to see tests
on an MDS_NO=1 box with the new microcode.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-23 17:11   ` Josh Poimboeuf
@ 2019-10-23 21:49     ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-23 21:49 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:11:09PM -0500, speck for Josh Poimboeuf wrote:
> FWIW, makes sense to me.  It will be especially interesting to see tests
> on an MDS_NO=1 box with the new microcode.

Yap, that's the plan for tomorrow. I got a bunch of microcode blobs
here, should be able to find a suitable box but let's see...

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-23 15:46 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Borislav Petkov
  2019-10-23 17:11   ` Josh Poimboeuf
@ 2019-10-23 22:12   ` Pawan Gupta
  2019-10-24 14:08     ` Borislav Petkov
  1 sibling, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2019-10-23 22:12 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 05:46:04PM +0200, speck for Borislav Petkov wrote:
> On Mon, Oct 21, 2019 at 01:22:01PM -0700, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH v7 00/10] TAAv7
> 
> Ok, I ran the pile on a box here:
> 
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 158
> model name      : Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
> stepping        : 12
> 
> There is some microcode for it:
> 
> [    0.000000] microcode: microcode updated early to revision 0xc6, date = 2019-08-14
> [    1.005808] microcode: sig=0x906ec, pf=0x2, revision=0xc6
> 
> And booting it says:
> 
> [    0.197056] tsx_init: enter
> [    0.197207] tsx_ctrl_is_supported: CAP MSR: 0x9
> 
> This is added by me and it shows that the box is a pre MDS_NO=1 one,
> i.e., MD_CLEAR mitigates TAA too AFAIU.
> 
> Which means, there's no TSX_CTRL_MSR and I cannot disable TSX there.
> 
> Which means, boxes like this one don't need the microcode as long as
> they have MD_CLEAR microcode.
> 
> [    0.197363] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
> [    0.197540] Spectre V2 : Mitigation: Full generic retpoline
> [    0.197696] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
> [    0.197871] Spectre V2 : Enabling Restricted Speculation for firmware calls
> [    0.198032] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
> [    0.198208] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
> [    0.198386] MDS: Mitigation: Clear CPU buffers
> [    0.198540] TAA: Mitigation: Clear CPU buffers
> 
> Makes sense?

This is the expected output for an MDS_NO=0 CPU.

Below are the CPUs with MDS_NO=1.
+----------------------------+------------------+------------+
|            Name            |  Family / model  |  Stepping  |
+============================+==================+============+
| Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
+----------------------------+------------------+------------+
|    2nd gen Cascade Lake    |      06_55       |    6, 7    |
+----------------------------+------------------+------------+
|       Coffee Lake R        |      06_9E       |    0xD     |
+----------------------------+------------------+------------+
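
For reference, the table above translates into the following
family/model/stepping check. This is only a userspace sketch covering
exactly the parts listed here (the real list may well grow), with the
hex model numbers spelled out:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Match the MDS_NO=1 parts from the table above. */
static bool is_mds_no_part(uint8_t family, uint8_t model, uint8_t stepping)
{
	if (family != 6)
		return false;

	switch (model) {
	case 0x8e:	/* Whiskey Lake (ULT refresh) */
		return stepping == 0xc;
	case 0x55:	/* 2nd gen Cascade Lake */
		return stepping == 6 || stepping == 7;
	case 0x9e:	/* Coffee Lake R */
		return stepping == 0xd;
	}
	return false;
}
```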

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 00/10] TAAv7 0
  2019-10-23 22:12   ` Pawan Gupta
@ 2019-10-24 14:08     ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2019-10-24 14:08 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 03:12:14PM -0700, speck for Pawan Gupta wrote:
> Below are the CPUs with MDS_NO=1.
> +----------------------------+------------------+------------+
> |            Name            |  Family / model  |  Stepping  |
> +============================+==================+============+
> | Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
> +----------------------------+------------------+------------+
> |    2nd gen Cascade Lake    |      06_55       |    6, 7    |

Ok, found one of this type:

vendor_id       : GenuineIntel
cpu family      : 6
model           : 85
model name      : Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz
stepping        : 6

microcode got updated to:

[    0.000000] microcode: microcode updated early to revision 0x400002c, date = 2019-09-05

and ARCH_CAP MSR is:

MSR_0x10A = 0xab = 1010_1011
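
Assuming the usual IA32_ARCH_CAPABILITIES bit layout, 0xab decodes to
RDCL_NO, IBRS_ALL, SKIP_L1DFL_VMENTRY, MDS_NO and TSX_CTRL set, with
TAA_NO clear. A quick standalone C sketch to verify the decode:

```c
#include <assert.h>
#include <stdint.h>

/* IA32_ARCH_CAPABILITIES bit positions (assumed layout). */
#define ARCH_CAP_RDCL_NO		(1ull << 0)
#define ARCH_CAP_IBRS_ALL		(1ull << 1)
#define ARCH_CAP_RSBA			(1ull << 2)
#define ARCH_CAP_SKIP_L1DFL_VMENTRY	(1ull << 3)
#define ARCH_CAP_SSB_NO			(1ull << 4)
#define ARCH_CAP_MDS_NO			(1ull << 5)
#define ARCH_CAP_TSX_CTRL_MSR		(1ull << 7)
#define ARCH_CAP_TAA_NO			(1ull << 8)

/* The value read on the box above: 0xab = 1010_1011. */
static const uint64_t ia32_cap = 0xab;
```

So this is exactly the interesting case: MDS_NO=1, TSX_CTRL present,
TAA_NO=0.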

Booting with CONFIG_X86_INTEL_TSX_MODE_OFF=y gives:

[    0.316520] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.316522] Spectre V2 : Mitigation: Enhanced IBRS
[    0.316523] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.316525] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.316527] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[    0.316528] TAA: Mitigation: TSX disabled

Booting with tsx=on gives

[    0.314958] TAA: Mitigation: Clear CPU buffers

and with

tsx=on tsx_async_abort=full,nosmt

[    0.315439] SMT: disabled
[    0.315440] TAA: Mitigation: Clear CPU buffers

I think we're good to go...

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 06/10] TAAv7 6
       [not found] ` <5dae165e.1c69fb81.4beee.e271SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-24 20:53   ` Paolo Bonzini
  2019-10-24 21:00     ` Luck, Tony
  0 siblings, 1 reply; 78+ messages in thread
From: Paolo Bonzini @ 2019-10-24 20:53 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2922 bytes --]

On 21/10/19 22:28, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH v7 06/10] KVM: x86/speculation/taa: Export MDS_NO=0 to guests
>  when TSX is enabled
> 
> Export IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX Async
> Abort(TAA) affected hosts that have TSX enabled and updated microcode.
> This is required so that the guests don't complain,
> 
> 	"Vulnerable: Clear CPU buffers attempted, no microcode"
> 
> when the host has the updated microcode to clear CPU buffers.

On one hand, the issue is bigger than that: if TSX is hidden from the
guest's CPUID, then the guest may show that it's not vulnerable, but it
actually is if the guest just tries executing TSX instructions.

On the other hand, removing bits from CPUID or MSRs can cause issues
even if TSX is not enabled in VMs.  Since most VMs won't have MDS_NO
set, I think we should drop this patch for now.  After the embargo lifts
we can add code to pass TSX_CTRL to the VM just like we do for
SPEC_CTRL, including disabling TSX on vmentry/vmexit depending on guest
CPUID.

Paolo

> 
> Microcode update also adds support for MSR_IA32_TSX_CTRL which is
> enumerated by the ARCH_CAP_TSX_CTRL bit in IA32_ARCH_CAPABILITIES MSR.
> Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL bit is
> not exported to the guests.
> 
> In this case export MDS_NO=0 to the guests. When guests have
> CPUID.MD_CLEAR=1 guests deploy MDS mitigation which also mitigates TAA.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  arch/x86/kvm/x86.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 661e2bf38526..797113fa665d 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1299,6 +1299,25 @@ static u64 kvm_get_arch_capabilities(void)
>  	if (!boot_cpu_has_bug(X86_BUG_MDS))
>  		data |= ARCH_CAP_MDS_NO;
>  
> +	/*
> +	 * On TAA affected systems, export MDS_NO=0 when:
> +	 *	- TSX is enabled on host, i.e. X86_FEATURE_RTM=1.
> +	 *	- Updated microcode is present. This is detected by
> +	 *	  the presence of ARCH_CAP_TSX_CTRL_MSR. This ensures
> +	 *	  VERW clears CPU buffers.
> +	 *
> +	 * When MDS_NO=0 is exported, guests deploy clear CPU buffer
> +	 * mitigation and don't complain:
> +	 *
> +	 *	"Vulnerable: Clear CPU buffers attempted, no microcode"
> +	 *
> +	 * If TSX is disabled on the system, guests are also mitigated against
> +	 * TAA and clear CPU buffer mitigation is not required for guests.
> +	 */
> +	if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
> +	    (data & ARCH_CAP_TSX_CTRL_MSR))
> +		data &= ~ARCH_CAP_MDS_NO;
> +
>  	return data;
>  }
>  
> 



^ permalink raw reply	[flat|nested] 78+ messages in thread

* [MODERATED] Re: [PATCH v7 06/10] TAAv7 6
  2019-10-24 20:53   ` [MODERATED] Re: [PATCH v7 06/10] TAAv7 6 Paolo Bonzini
@ 2019-10-24 21:00     ` Luck, Tony
  2019-10-24 21:33       ` Paolo Bonzini
  0 siblings, 1 reply; 78+ messages in thread
From: Luck, Tony @ 2019-10-24 21:00 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:53:16PM +0200, speck for Paolo Bonzini wrote:
> On the other hand, removing bits from CPUID or MSRs can cause issues
> even if TSX is not enabled in VMs.  Since most VMs won't have MDS_NO
> set, I think we should drop this patch for now.  After the embargo lifts
> we can add code to pass TSX_CTRL to the VM just like we do for
> SPEC_CTRL, including disabling TSX on vmentry/vmexit depending on guest
> CPUID.

TSX_CTRL is a slow MSR ... so if you do go this path you may also
want some heuristic to avoid switching a thread between guests that
have different settings for TSX.

-Tony


* [MODERATED] Re: [PATCH v7 06/10] TAAv7 6
  2019-10-24 21:00     ` Luck, Tony
@ 2019-10-24 21:33       ` Paolo Bonzini
  0 siblings, 0 replies; 78+ messages in thread
From: Paolo Bonzini @ 2019-10-24 21:33 UTC (permalink / raw)
  To: speck

On 24/10/19 23:00, speck for Luck, Tony wrote:
> On Thu, Oct 24, 2019 at 10:53:16PM +0200, speck for Paolo Bonzini wrote:
>> On the other hand, removing bits from CPUID or MSRs can cause issues
>> even if TSX is not enabled in VMs.  Since most VMs won't have MDS_NO
>> set, I think we should drop this patch for now.  After the embargo lifts
>> we can add code to pass TSX_CTRL to the VM just like we do for
>> SPEC_CTRL, including disabling TSX on vmentry/vmexit depending on guest
>> CPUID.
> 
> TSX_CTRL is a slow MSR ... so if you do go this path you may also
> want some heuristic to avoid switching a thread between guests that
> have different settings for TSX.

How slow is it?  Since the kernel doesn't use TSX, we could delay the
restore of the host's TSX_CTRL value till return to userspace and avoid
paying the price on each and every vmexit.  But if the host is
overcommitted, there isn't really an alternative to writing it on
context switch.

The host admin can still disable TSX if they don't want it in any guest;
then there will be no slowdown.

Paolo



Thread overview: 78+ messages
2019-10-21 20:22 [MODERATED] [PATCH v7 00/10] TAAv7 0 Pawan Gupta
2019-10-21 20:23 ` [MODERATED] [PATCH v7 01/10] TAAv7 1 Pawan Gupta
2019-10-21 20:24 ` [MODERATED] [PATCH v7 02/10] TAAv7 2 Pawan Gupta
2019-10-21 20:25 ` [MODERATED] [PATCH v7 03/10] TAAv7 3 Pawan Gupta
2019-10-21 20:26 ` [MODERATED] [PATCH v7 04/10] TAAv7 4 Pawan Gupta
2019-10-21 20:27 ` [MODERATED] [PATCH v7 05/10] TAAv7 5 Pawan Gupta
2019-10-21 20:28 ` [MODERATED] [PATCH v7 06/10] TAAv7 6 Pawan Gupta
2019-10-21 20:29 ` [MODERATED] [PATCH v7 07/10] TAAv7 7 Pawan Gupta
2019-10-21 20:30 ` [MODERATED] [PATCH v7 08/10] TAAv7 8 Pawan Gupta
2019-10-21 20:31 ` [MODERATED] [PATCH v7 09/10] TAAv7 9 Michal Hocko
2019-10-21 20:32 ` [MODERATED] [PATCH v7 10/10] TAAv7 10 Pawan Gupta
2019-10-21 21:32 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Andy Lutomirski
2019-10-21 23:06   ` Andrew Cooper
2019-10-22  0:34   ` Pawan Gupta
2019-10-22  4:10 ` [MODERATED] Jon Masters
2019-10-22  5:53   ` [MODERATED] Pawan Gupta
2019-10-22  7:58 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 07/10] TAAv7 7 Michal Hocko
2019-10-22 16:55   ` [MODERATED] " Pawan Gupta
2019-10-22  8:00 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 09/10] TAAv7 9 Michal Hocko
2019-10-22  8:15 ` [MODERATED] Re: ***UNCHECKED*** [PATCH v7 03/10] TAAv7 3 Michal Hocko
2019-10-22 14:42   ` Josh Poimboeuf
2019-10-22 16:48     ` [MODERATED] " Pawan Gupta
2019-10-22 17:01       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
2019-10-22 17:35         ` Josh Poimboeuf
2019-10-22 14:38 ` [MODERATED] " Borislav Petkov
2019-10-22 16:58   ` Pawan Gupta
2019-10-22 14:48 ` Borislav Petkov
2019-10-22 17:00   ` Pawan Gupta
2019-10-22 17:16     ` [MODERATED] " Borislav Petkov
2019-10-22 18:07       ` [MODERATED] " Pawan Gupta
2019-10-22 15:07 ` Borislav Petkov
2019-10-22 18:36   ` Pawan Gupta
2019-10-22 18:59     ` [MODERATED] " Borislav Petkov
2019-10-22 16:51 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
2019-10-22 17:02   ` Borislav Petkov
2019-10-22 18:00     ` Pawan Gupta
2019-10-22 18:12       ` [MODERATED] " Borislav Petkov
2019-10-22 19:16         ` Luck, Tony
2019-10-22 19:28           ` [MODERATED] " Borislav Petkov
2019-10-22 20:02             ` Luck, Tony
2019-10-22 20:48               ` [MODERATED] Jon Masters
2019-10-22 20:54               ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Borislav Petkov
2019-10-22 21:38                 ` Josh Poimboeuf
2019-10-22 21:46                   ` Borislav Petkov
2019-10-22 22:06                     ` Josh Poimboeuf
2019-10-22 22:13                       ` Borislav Petkov
2019-10-22 17:44   ` Pawan Gupta
2019-10-22 19:04     ` [MODERATED] " Borislav Petkov
2019-10-22 21:29       ` [MODERATED] " Pawan Gupta
2019-10-22 21:53         ` Borislav Petkov
2019-10-22 22:05           ` Borislav Petkov
2019-10-23  0:27             ` Pawan Gupta
2019-10-23  5:25               ` Pawan Gupta
2019-10-23  6:46                 ` Borislav Petkov
2019-10-23 13:28                   ` Pawan Gupta
2019-10-23 14:39                     ` Borislav Petkov
2019-10-23  1:33   ` Pawan Gupta
2019-10-23  6:48     ` Borislav Petkov
2019-10-22 17:25 ` [MODERATED] Re: [PATCH v7 01/10] TAAv7 1 Josh Poimboeuf
2019-10-23  9:26   ` Borislav Petkov
2019-10-22 17:26 ` Josh Poimboeuf
2019-10-22 20:44   ` [MODERATED] Jon Masters
2019-10-22 17:47 ` [MODERATED] Re: [PATCH v7 03/10] TAAv7 3 Josh Poimboeuf
2019-10-22 18:39 ` [MODERATED] Re: [PATCH v7 10/10] TAAv7 10 Josh Poimboeuf
2019-10-23  7:24   ` Borislav Petkov
2019-10-22 21:20 ` [MODERATED] Re: [PATCH v7 04/10] TAAv7 4 Josh Poimboeuf
2019-10-22 21:35   ` Andrew Cooper
2019-10-22 21:44     ` Josh Poimboeuf
2019-10-22 22:03       ` Andrew Cooper
2019-10-23  1:16         ` Josh Poimboeuf
2019-10-23 15:46 ` [MODERATED] Re: [PATCH v7 00/10] TAAv7 0 Borislav Petkov
2019-10-23 17:11   ` Josh Poimboeuf
2019-10-23 21:49     ` Borislav Petkov
2019-10-23 22:12   ` Pawan Gupta
2019-10-24 14:08     ` Borislav Petkov
     [not found] ` <5dae165e.1c69fb81.4beee.e271SMTPIN_ADDED_BROKEN@mx.google.com>
2019-10-24 20:53   ` [MODERATED] Re: [PATCH v7 06/10] TAAv7 6 Paolo Bonzini
2019-10-24 21:00     ` Luck, Tony
2019-10-24 21:33       ` Paolo Bonzini
