historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH v6 0/9] TAAv6 0
@ 2019-10-09 23:21 Pawan Gupta
  2019-10-09 23:22 ` [MODERATED] [PATCH v6 1/9] TAAv6 1 Pawan Gupta
                   ` (15 more replies)
  0 siblings, 16 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:21 UTC (permalink / raw)
  To: speck

Changes since v5:
- Remove unsafe X86_FEATURE_RTM toggles.
- Have only boot cpu call tsx_init()
- s/read_ia32_arch_cap/x86_read_arch_cap_msr/
- Move TSX sysfs knob part to the end after documentation patch.
- Changelog, comments and documentation update.

Changes since v4:
- Simplify TSX_CTRL enumeration, set TSX_CTRL default to NOT_SUPPORTED.
- Add new patch "Export MDS_NO=0 to guests when TSX is enabled".
- Add new patch for tsx=auto which enables TSX on unaffected platforms,
  default stays tsx=off.
- Handle kexec like cases for TAA bug enumeration. Set X86_BUG_TAA when
  X86_FEATURE_RTM=1 or TSX_CTRL=1.
- TSX control sysfs file rename (s/tsx/hw_tx_mem/) and file creation changes.
- Dropped patch "x86/speculation/mds: Rename MDS buffer clear functions"
  It doesn't provide enough benefit compared to the amount of changes
  involved. Added code comment about using MDS mitigation.
- Add helper function read_ia32_arch_cap().
- Reorder mitigation checks in taa_select_mitigation().
- s/MSR_// for TSX_CTRL bit defines.
- Changelog, comments and documentation update.
- Rebase to v5.3.

Changes since v3:
- Disable tsx unconditionally, removed tsx=auto mode.
- Fix verw idle clear.
- Refactor TSX code into new tsx.c
- Use early_param for tsx cmdline parameter.
- Rename sysfs vulnerability file to tsx_async_abort.
- Rename common CPU buffer clear infrastructure (s/mds/verw)
- s/TAA_MITIGATION_VMWERV/TAA_MITIGATION_UCODE_NEEDED
- Rebased to v5.3-rc6
- Split patches.
- Changelog and documentation update.

Changes since v2:
- Rebased to v5.3-rc5
- Fix build for non-x86 targets.
- Commit log, code comments and documentation update.
- Minor code refactoring.

Changes since v1:
- Added TSX command line options (on|off|auto). "auto" is the
  default, which sets the TSX state as below:
	- TSX disabled on affected platforms
	- TSX enabled on unaffected platforms
- Update commit messages and documentation.
- Add support to control TSX feature from sysfs.

This patchset adds the mitigation for TSX Async Abort (TAA), a side
channel vulnerability to internal buffers in some Intel processors,
similar to Microarchitectural Data Sampling (MDS). Transactional
Synchronization Extensions (TSX) is a feature in Intel processors that
speeds up execution of multi-threaded software through lock elision.

During TAA certain loads may speculatively pass invalid data to
dependent operations when an asynchronous abort condition is pending in
a TSX transaction.  An attacker can use TSX as a tool to extract
information from the microarchitectural buffers.  The victim data may be
placed into these buffers during normal execution which is unrelated to
any use of TSX.

The mitigation is to either clear the CPU buffers or disable TSX.

Pawan Gupta (9):
  x86/tsx: Add enumeration support for IA32_TSX_CTRL MSR
  x86: Add helper function x86_read_arch_cap_msr()
  x86/tsx: Add TSX cmdline option with TSX disabled by default
  x86/speculation/taa: Add mitigation for TSX Async Abort
  x86/speculation/taa: Add sysfs reporting for TSX Async Abort
  KVM: x86/speculation/taa: Export MDS_NO=0 to guests when TSX is
    enabled
  x86/tsx: Add "auto" option to TSX cmdline parameter
  x86/speculation/taa: Add documentation for TSX Async Abort
  x86/tsx: Add sysfs interface to control TSX

 .../ABI/testing/sysfs-devices-system-cpu      |  24 ++
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 269 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  52 ++++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         |  54 ++++
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/msr-index.h              |   9 +
 arch/x86/include/asm/nospec-branch.h          |   4 +-
 arch/x86/include/asm/processor.h              |   7 +
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/bugs.c                    | 169 ++++++++++-
 arch/x86/kernel/cpu/common.c                  |  32 ++-
 arch/x86/kernel/cpu/cpu.h                     |  19 ++
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 218 ++++++++++++++
 arch/x86/kvm/x86.c                            |  19 ++
 drivers/base/cpu.c                            |  41 ++-
 include/linux/cpu.h                           |   9 +
 19 files changed, 926 insertions(+), 10 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst
 create mode 100644 arch/x86/kernel/cpu/tsx.c

-- 
2.20.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
@ 2019-10-09 23:22 ` Pawan Gupta
  2019-10-09 23:23 ` [MODERATED] [PATCH v6 2/9] TAAv6 2 Pawan Gupta
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:22 UTC (permalink / raw)
  To: speck

Transactional Synchronization Extensions (TSX) may be used on certain
processors as part of a speculative side channel attack.  A microcode
update for existing processors that are vulnerable to this attack will
add a new MSR, IA32_TSX_CTRL, to allow the system administrator the
option to disable TSX as one of the possible mitigations.  [Note that
future processors that are not vulnerable will also support the
IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
bits.

TSX has two sub-features:

1. Restricted Transactional Memory (RTM) is an explicitly-used feature
   where new instructions begin and end TSX transactions.
2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
   "old" style locks are used by software.

Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
IA32_TSX_CTRL MSR.

There are two control bits in IA32_TSX_CTRL MSR:

  Bit 0: When set it disables the Restricted Transactional Memory (RTM)
         sub-feature of TSX (will force all transactions to abort on the
         XBEGIN instruction).

  Bit 1: When set it disables the enumeration of the RTM and HLE features
         (i.e. it will make CPUID(EAX=7).EBX{bit4} and
         CPUID(EAX=7).EBX{bit11} read as 0).

The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled but still enumerated as present by CPUID(EAX=7).EBX{bit4}.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/include/asm/msr-index.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 271d837d69a8..45b6705d9f71 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -93,6 +93,7 @@
 						  * Microarchitectural Data
 						  * Sampling (MDS) vulnerabilities.
 						  */
+#define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
@@ -103,6 +104,10 @@
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
 
+#define MSR_IA32_TSX_CTRL		0x00000122
+#define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
+#define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
-- 
2.20.1


* [MODERATED] [PATCH v6 2/9] TAAv6 2
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
  2019-10-09 23:22 ` [MODERATED] [PATCH v6 1/9] TAAv6 1 Pawan Gupta
@ 2019-10-09 23:23 ` Pawan Gupta
  2019-10-09 23:24 ` [MODERATED] [PATCH v6 3/9] TAAv6 3 Pawan Gupta
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:23 UTC (permalink / raw)
  To: speck

Add a helper function to read the IA32_ARCH_CAPABILITIES MSR. If the CPU
doesn't support this MSR, return 0.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kernel/cpu/common.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index f125bf7ecb6f..d3b0d67c9243 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1091,19 +1091,26 @@ static bool __init cpu_matches(unsigned long which)
 	return m && !!(m->driver_data & which);
 }
 
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+u64 x86_read_arch_cap_msr(void)
 {
 	u64 ia32_cap = 0;
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	return ia32_cap;
+}
+
+static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
 	if (cpu_matches(NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
-- 
2.20.1


* [MODERATED] [PATCH v6 3/9] TAAv6 3
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
  2019-10-09 23:22 ` [MODERATED] [PATCH v6 1/9] TAAv6 1 Pawan Gupta
  2019-10-09 23:23 ` [MODERATED] [PATCH v6 2/9] TAAv6 2 Pawan Gupta
@ 2019-10-09 23:24 ` Pawan Gupta
  2019-10-09 23:25 ` [MODERATED] [PATCH v6 4/9] TAAv6 4 Pawan Gupta
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:24 UTC (permalink / raw)
  To: speck

Add a kernel cmdline parameter "tsx" to control the Transactional
Synchronization Extensions (TSX) feature.  On CPUs that support TSX
control, use "tsx=on|off" to enable or disable TSX.  Not specifying this
option is equivalent to "tsx=off", because on certain processors TSX may
be used as part of a speculative side channel attack.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../admin-guide/kernel-parameters.txt         |  11 ++
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/common.c                  |   2 +
 arch/x86/kernel/cpu/cpu.h                     |  18 +++
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 115 ++++++++++++++++++
 6 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/tsx.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 4c1971960afa..832537d59562 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4813,6 +4813,17 @@
 			interruptions from clocksource watchdog are not
 			acceptable).
 
+	tsx=		[X86] Control Transactional Synchronization
+			Extensions (TSX) feature in Intel processors that
+			support TSX control.
+
+			This parameter controls the TSX feature. The options are:
+
+			on	- Enable TSX on the system.
+			off	- Disable TSX on the system.
+
+			Not specifying this option is equivalent to tsx=off.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index d7a1e5a9331c..890f60083eca 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -30,7 +30,7 @@ obj-$(CONFIG_PROC_FS)	+= proc.o
 obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
 
 ifdef CONFIG_CPU_SUP_INTEL
-obj-y			+= intel.o intel_pconfig.o
+obj-y			+= intel.o intel_pconfig.o tsx.o
 obj-$(CONFIG_PM)	+= intel_epb.o
 endif
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index d3b0d67c9243..78bcecafb6e1 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1560,6 +1560,8 @@ void __init identify_boot_cpu(void)
 #endif
 	cpu_detect_tlb(&boot_cpu_data);
 	setup_cr_pinning();
+
+	tsx_init();
 }
 
 void identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index c0e2407abdd6..38ab6e115eac 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,22 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+#ifdef CONFIG_CPU_SUP_INTEL
+enum tsx_ctrl_states {
+	TSX_CTRL_ENABLE,
+	TSX_CTRL_DISABLE,
+	TSX_CTRL_NOT_SUPPORTED,
+};
+
+extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+
+extern void __init tsx_init(void);
+extern void tsx_enable(void);
+extern void tsx_disable(void);
+#else
+static inline void tsx_init(void) { }
+#endif /* CONFIG_CPU_SUP_INTEL */
+
 extern void get_cpu_cap(struct cpuinfo_x86 *c);
 extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
 extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
@@ -62,4 +78,6 @@ unsigned int aperfmperf_get_khz(int cpu);
 
 extern void x86_spec_ctrl_setup_ap(void);
 
+extern u64 x86_read_arch_cap_msr(void);
+
 #endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8d6d92ebeb54..cc9f24818e49 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -761,6 +761,11 @@ static void init_intel(struct cpuinfo_x86 *c)
 		detect_tme(c);
 
 	init_intel_misc_features(c);
+
+	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+		tsx_enable();
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+		tsx_disable();
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
new file mode 100644
index 000000000000..e39b33b7cef8
--- /dev/null
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Intel Transactional Synchronization Extensions (TSX) control.
+ *
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * Author:
+ *	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ */
+
+#include <linux/processor.h>
+#include <linux/cpufeature.h>
+
+#include <asm/cmdline.h>
+
+#include "cpu.h"
+
+enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+
+void tsx_disable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Force all transactions to immediately abort */
+	tsx |= TSX_CTRL_RTM_DISABLE;
+	/*
+	 * Ensure TSX support is not enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * do not waste resources trying TSX transactions that
+	 * will always abort.
+	 */
+	tsx |= TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+void tsx_enable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Enable the RTM feature in the cpu */
+	tsx &= ~TSX_CTRL_RTM_DISABLE;
+	/*
+	 * Ensure TSX support is enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * can enumerate and use the TSX feature.
+	 */
+	tsx &= ~TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+static bool __init tsx_ctrl_is_supported(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However,
+	 * support for this MSR is enumerated by ARCH_CAP_TSX_MSR bit
+	 * in MSR_IA32_ARCH_CAPABILITIES.
+	 */
+	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+}
+
+void __init tsx_init(void)
+{
+	char arg[20];
+	int ret;
+
+	if (!tsx_ctrl_is_supported())
+		return;
+
+	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+	if (ret >= 0) {
+		if (!strcmp(arg, "on")) {
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
+		} else if (!strcmp(arg, "off")) {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+			pr_info("tsx: invalid option, defaulting to off\n");
+		}
+	} else {
+		/* tsx= not provided, defaulting to off */
+		tsx_ctrl_state = TSX_CTRL_DISABLE;
+	}
+
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
+		tsx_disable();
+		/*
+		 * tsx_disable() will change the state of the
+		 * RTM CPUID bit.  Clear it here since it is now
+		 * expected to be not set.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RTM);
+	} else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
+		/*
+		 * HW defaults TSX to be enabled at bootup.
+		 * We may still need the TSX enable support
+		 * during init for special cases like
+		 * kexec after TSX is disabled.
+		 */
+		tsx_enable();
+		/*
+		 * tsx_enable() will change the state of the
+		 * RTM CPUID bit.  Force it here since it is now
+		 * expected to be set.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RTM);
+	}
+}
-- 
2.20.1


* [MODERATED] [PATCH v6 4/9] TAAv6 4
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (2 preceding siblings ...)
  2019-10-09 23:24 ` [MODERATED] [PATCH v6 3/9] TAAv6 3 Pawan Gupta
@ 2019-10-09 23:25 ` Pawan Gupta
  2019-10-09 23:26 ` [MODERATED] [PATCH v6 5/9] TAAv6 5 Pawan Gupta
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:25 UTC (permalink / raw)
  To: speck

TSX Async Abort (TAA) is a side channel vulnerability to the internal
buffers in some Intel processors, similar to Microarchitectural Data
Sampling (MDS).  In this case certain loads may speculatively pass
invalid data to dependent operations when an asynchronous abort
condition is pending in a TSX transaction.  This includes loads with no
fault or assist condition.  Such loads may speculatively expose stale
data from the uarch data structures, as in MDS.  The scope of exposure
is both same-thread and cross-thread.  This issue affects all current
processors that support TSX but do not have ARCH_CAP_TAA_NO (bit 8) set
in MSR_IA32_ARCH_CAPABILITIES.

On CPUs that have MDS_NO=0 in the IA32_ARCH_CAPABILITIES MSR and
CPUID.MD_CLEAR=1, where the MDS mitigation already clears the CPU
buffers using VERW or L1D_FLUSH, no additional mitigation is needed for
TAA.

On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling
the Transactional Synchronization Extensions (TSX) feature.  A new MSR,
IA32_TSX_CTRL, available on future processors and on current processors
after a microcode update, can be used to control the TSX feature.  The
TSX_CTRL_RTM_DISABLE bit disables the TSX sub-feature Restricted
Transactional Memory (RTM).  The TSX_CTRL_CPUID_CLEAR bit clears the RTM
enumeration in CPUID.  The other TSX sub-feature, Hardware Lock Elision
(HLE), is unconditionally disabled with updated microcode but still
enumerated as present by CPUID(EAX=7).EBX{bit4}.

The second mitigation approach is similar to MDS: clear the affected CPU
buffers on return to user space and when entering a guest.  A relevant
microcode update is required for the mitigation to work.  More details
on this approach can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html

The TSX feature can be controlled by the "tsx" command line parameter.
If TSX is forced on, the "Clear CPU buffers" (MDS) mitigation is
deployed.  The effective mitigation state can be read from sysfs.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/include/asm/cpufeatures.h   |   1 +
 arch/x86/include/asm/msr-index.h     |   4 +
 arch/x86/include/asm/nospec-branch.h |   4 +-
 arch/x86/include/asm/processor.h     |   7 ++
 arch/x86/kernel/cpu/bugs.c           | 127 ++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/common.c         |  15 ++++
 6 files changed, 154 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index e880f2408e29..138512ecc975 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -397,5 +397,6 @@
 #define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
 #define X86_BUG_MSBDS_ONLY		X86_BUG(20) /* CPU is only affected by the  MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 45b6705d9f71..4b654ae5b915 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -94,6 +94,10 @@
 						  * Sampling (MDS) vulnerabilities.
 						  */
 #define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
+#define ARCH_CAP_TAA_NO			BIT(8)	/*
+						 * Not susceptible to
+						 * TSX Async Abort (TAA) vulnerabilities.
+						 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 80bc209c0708..5c24a7b35166 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -314,7 +314,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 #include <asm/segment.h>
 
 /**
- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * This uses the otherwise unused and obsolete VERW instruction in
  * combination with microcode which triggers a CPU buffer flush when the
@@ -337,7 +337,7 @@ static inline void mds_clear_cpu_buffers(void)
 }
 
 /**
- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 6e0a3b43d027..999b85039128 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -988,4 +988,11 @@ enum mds_mitigations {
 	MDS_MITIGATION_VMWERV,
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLE,
+};
+
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c6fa3ef10b4e..0b7c7a826580 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init taa_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -105,6 +106,7 @@ void __init check_bugs(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 
 	arch_smt_update();
 
@@ -268,6 +270,110 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"TAA: " fmt
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static bool taa_nosmt __ro_after_init;
+
+static const char * const taa_strings[] = {
+	[TAA_MITIGATION_OFF]		= "Vulnerable",
+	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
+	[TAA_MITIGATION_TSX_DISABLE]	= "Mitigation: TSX disabled",
+};
+
+static void __init taa_select_mitigation(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/*
+	 * X86_BUG_TAA=1 means the hardware supports TSX. If TSX was
+	 * disabled (X86_FEATURE_RTM=0) earlier during tsx_init(),
+	 * select TSX_DISABLE as the mitigation.
+	 *
+	 * This check is ahead of mitigations=off and tsx_async_abort=off
+	 * because when TSX is disabled mitigation is already in place. This
+	 * ensures sysfs doesn't show "Vulnerable" when TSX is disabled.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+		pr_info("%s\n", taa_strings[taa_mitigation]);
+		return;
+	}
+
+	/* All mitigations turned off from cmdline (mitigations=off) */
+	if (cpu_mitigations_off()) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
+	if (taa_mitigation == TAA_MITIGATION_OFF) {
+		pr_info("%s\n", taa_strings[taa_mitigation]);
+		return;
+	}
+
+	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+	 * A microcode update fixes this behavior to clear CPU buffers.
+	 * Microcode update also adds support for MSR_IA32_TSX_CTRL which
+	 * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
+	 *
+	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+	 * update is required.
+	 */
+	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
+	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * TSX is enabled, so select the alternate mitigation for TAA,
+	 * the same as MDS: enable the MDS static branch to clear CPU buffers.
+	 *
+	 * For guests that can't determine whether the correct microcode is
+	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+	 */
+	static_branch_enable(&mds_user_clear);
+
+	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+		cpu_smt_disable(false);
+
+	pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static int __init tsx_async_abort_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+	} else if (!strcmp(str, "full")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+	} else if (!strcmp(str, "full,nosmt")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+		taa_nosmt = true;
+	}
+
+	return 0;
+}
+early_param("tsx_async_abort", tsx_async_abort_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
 #undef pr_fmt
 #define pr_fmt(fmt) fmt
 
-/* Update the static key controlling the MDS CPU buffer clear in idle */
+/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
 static void update_mds_branch_idle(void)
 {
 	/*
@@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
 	 * The other variants cannot be mitigated when SMT is enabled, so
 	 * clearing the buffers on idle just to prevent the Store Buffer
 	 * repartitioning leak would be a window dressing exercise.
+	 *
+	 * Apply idle buffer clearing to TAA affected CPUs also.
 	 */
-	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
+	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
+	    !boot_cpu_has_bug(X86_BUG_TAA))
 		return;
 
 	if (sched_smt_active())
@@ -786,6 +895,7 @@ static void update_mds_branch_idle(void)
 }
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
 
 void arch_smt_update(void)
 {
@@ -819,6 +929,19 @@ void arch_smt_update(void)
 		break;
 	}
 
+	switch (taa_mitigation) {
+	case TAA_MITIGATION_VERW:
+	case TAA_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(TAA_MSG_SMT);
+		/* TSX is enabled, apply MDS idle buffer clearing. */
+		update_mds_branch_idle();
+		break;
+	case TAA_MITIGATION_TSX_DISABLE:
+	case TAA_MITIGATION_OFF:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 78bcecafb6e1..a4ce9e3a622c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1127,6 +1127,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (!cpu_matches(NO_SWAPGS))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
+	/*
+	 * When the processor is not mitigated for TAA (TAA_NO=0), set the
+	 * TAA bug when:
+	 *	- TSX is supported, or
+	 *	- TSX_CTRL is supported.
+	 *
+	 * The TSX_CTRL check covers TSX disabled before kernel boot, e.g.
+	 * kexec. TSX_CTRL alone is not sufficient when the microcode update
+	 * is absent or when running as a guest that doesn't get TSX_CTRL.
+	 */
+	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	    (boot_cpu_has(X86_FEATURE_RTM) ||
+	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+		setup_force_cpu_bug(X86_BUG_TAA);
+
 	if (cpu_matches(NO_MELTDOWN))
 		return;
 
-- 
2.20.1


* [MODERATED] [PATCH v6 5/9] TAAv6 5
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (3 preceding siblings ...)
  2019-10-09 23:25 ` [MODERATED] [PATCH v6 4/9] TAAv6 4 Pawan Gupta
@ 2019-10-09 23:26 ` Pawan Gupta
  2019-10-09 23:27 ` [MODERATED] [PATCH v6 6/9] TAAv6 6 Pawan Gupta
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:26 UTC (permalink / raw)
  To: speck

Add the sysfs reporting file for TSX Async Abort. It exposes the
vulnerability and the mitigation state in the same way as the existing
files for the other hardware vulnerabilities.

The sysfs file path is:
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
 drivers/base/cpu.c         |  9 +++++++++
 include/linux/cpu.h        |  3 +++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0b7c7a826580..073687ddd06d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1451,6 +1451,21 @@ static ssize_t mds_show_state(char *buf)
 		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 
+static ssize_t tsx_async_abort_show_state(char *buf)
+{
+	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
+	    (taa_mitigation == TAA_MITIGATION_OFF))
+		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		return sprintf(buf, "%s; SMT Host state unknown\n",
+			       taa_strings[taa_mitigation]);
+	}
+
+	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+		       sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1521,6 +1536,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_MDS:
 		return mds_show_state(buf);
 
+	case X86_BUG_TAA:
+		return tsx_async_abort_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1557,4 +1575,9 @@ ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *bu
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
 }
+
+ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
+}
 #endif
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index cc37511de866..0fccd8c0312e 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -554,12 +554,20 @@ ssize_t __weak cpu_show_mds(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
 static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -568,6 +576,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_spec_store_bypass.attr,
 	&dev_attr_l1tf.attr,
 	&dev_attr_mds.attr,
+	&dev_attr_tsx_async_abort.attr,
 	NULL
 };
 
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index fcb1386bb0d4..1872b15bda75 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -59,6 +59,9 @@ extern ssize_t cpu_show_l1tf(struct device *dev,
 			     struct device_attribute *attr, char *buf);
 extern ssize_t cpu_show_mds(struct device *dev,
 			    struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf);
 
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
-- 
2.20.1


* [MODERATED] [PATCH v6 6/9] TAAv6 6
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (4 preceding siblings ...)
  2019-10-09 23:26 ` [MODERATED] [PATCH v6 5/9] TAAv6 5 Pawan Gupta
@ 2019-10-09 23:27 ` Pawan Gupta
  2019-10-09 23:28 ` [MODERATED] [PATCH v6 7/9] TAAv6 7 Pawan Gupta
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:27 UTC (permalink / raw)
  To: speck

Export IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX Async
Abort (TAA) affected hosts that have TSX enabled and updated microcode.
This is required so that the guests don't complain,

	"Vulnerable: Clear CPU buffers attempted, no microcode"

when the host has the updated microcode to clear CPU buffers.

The microcode update also adds support for MSR_IA32_TSX_CTRL, which is
enumerated by the ARCH_CAP_TSX_CTRL bit in the IA32_ARCH_CAPABILITIES
MSR. Guests can't perform this check themselves when the
ARCH_CAP_TSX_CTRL bit is not exported to them.

In this case, export MDS_NO=0 to the guests. Guests that have
CPUID.MD_CLEAR=1 then deploy the MDS mitigation, which also mitigates
TAA.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 arch/x86/kvm/x86.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 91602d310a3f..282b909b9394 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1254,6 +1254,25 @@ static u64 kvm_get_arch_capabilities(void)
 	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
 		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
 
+	/*
+	 * On TAA affected systems, export MDS_NO=0 when:
+	 *	- TSX is enabled on host, i.e. X86_FEATURE_RTM=1.
+	 *	- Updated microcode is present. This is detected by
+	 *	  the presence of ARCH_CAP_TSX_CTRL_MSR. This ensures
+	 *	  VERW clears CPU buffers.
+	 *
+	 * When MDS_NO=0 is exported, guests deploy clear CPU buffer
+	 * mitigation and don't complain:
+	 *
+	 *	"Vulnerable: Clear CPU buffers attempted, no microcode"
+	 *
+	 * If TSX is disabled on the system, guests are also mitigated against
+	 * TAA and clear CPU buffer mitigation is not required for guests.
+	 */
+	if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+	    (data & ARCH_CAP_TSX_CTRL_MSR))
+		data &= ~ARCH_CAP_MDS_NO;
+
 	return data;
 }
 
-- 
2.20.1


* [MODERATED] [PATCH v6 7/9] TAAv6 7
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (5 preceding siblings ...)
  2019-10-09 23:27 ` [MODERATED] [PATCH v6 6/9] TAAv6 6 Pawan Gupta
@ 2019-10-09 23:28 ` Pawan Gupta
  2019-10-09 23:29 ` [MODERATED] [PATCH v6 8/9] TAAv6 8 Pawan Gupta
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:28 UTC (permalink / raw)
  To: speck

Platforms which are not affected by X86_BUG_TAA may want the TSX
feature enabled. Add an "auto" option to the TSX cmdline parameter.
With tsx=auto, TSX is disabled when X86_BUG_TAA is present, and enabled
otherwise.

More details on X86_BUG_TAA can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 5 +++++
 arch/x86/kernel/cpu/tsx.c                       | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 832537d59562..d7b48c38c6e5 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4821,6 +4821,11 @@
 
 			on	- Enable TSX on the system.
 			off	- Disable TSX on the system.
+			auto	- Disable TSX if X86_BUG_TAA is present,
+				  otherwise enable TSX on the system.
+
+			More details on X86_BUG_TAA are here:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 
 			Not specifying this option is equivalent to tsx=off.
 
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e39b33b7cef8..e93abe6f0bb9 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -80,6 +80,11 @@ void __init tsx_init(void)
 			tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else if (!strcmp(arg, "auto")) {
+			if (boot_cpu_has_bug(X86_BUG_TAA))
+				tsx_ctrl_state = TSX_CTRL_DISABLE;
+			else
+				tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_info("tsx: invalid option, defaulting to off\n");
-- 
2.20.1


* [MODERATED] [PATCH v6 8/9] TAAv6 8
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (6 preceding siblings ...)
  2019-10-09 23:28 ` [MODERATED] [PATCH v6 7/9] TAAv6 7 Pawan Gupta
@ 2019-10-09 23:29 ` Pawan Gupta
  2019-10-09 23:30 ` [MODERATED] [PATCH v6 9/9] TAAv6 9 Pawan Gupta
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:29 UTC (permalink / raw)
  To: speck

Add the documentation for TSX Async Abort. Include a description of
the issue, how to check the mitigation state, how to control the
mitigation, and guidance for system administrators.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 240 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  36 +++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         |  54 ++++
 6 files changed, 333 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 5f7d7b14fa44..9ef9efa4e717 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -486,6 +486,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 49311f3da6f2..0802b1c67452 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -12,3 +12,4 @@ are configurable at compile, boot or run time.
    spectre
    l1tf
    mds
+   tsx_async_abort
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
new file mode 100644
index 000000000000..58f24db49615
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -0,0 +1,240 @@
+TAA - TSX Asynchronous Abort
+======================================
+
+TAA is a hardware vulnerability that allows unprivileged speculative access to
+data which is available in various CPU internal buffers by using asynchronous
+aborts within an Intel TSX transactional region.
+
+Affected processors
+-------------------
+
+This vulnerability only affects Intel processors that support Intel
+Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
+is 0 in the IA32_ARCH_CAPABILITIES MSR.  On processors where the MDS_NO bit
+(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
+also mitigate against TAA.
+
+Whether a processor is affected or not can be read out from the TAA
+vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entry is related to this TAA issue:
+
+   ==============  =====  ===================================================
+   CVE-2019-11135  TAA    TSX Asynchronous Abort (TAA) condition on some
+                          microprocessors utilizing speculative execution may
+                          allow an authenticated user to potentially enable
+                          information disclosure via a side channel with
+                          local access.
+   ==============  =====  ===================================================
+
+Problem
+-------
+
+When performing store, load or L1 refill operations, processors write data into
+temporary microarchitectural structures (buffers). The data in the buffer can
+be forwarded to load operations as an optimization.
+
+Intel TSX is an extension to the x86 instruction set architecture that adds
+hardware transactional memory support to improve performance of multi-threaded
+software. TSX lets the processor expose and exploit concurrency hidden in an
+application by dynamically avoiding unnecessary synchronization.
+
+TSX supports atomic memory transactions that are either committed (success) or
+aborted. During an abort, operations that happened within the transactional region
+are rolled back. An asynchronous abort takes place, among other options, when a
+different thread accesses a cache line that is also used within the transactional
+region when that access might lead to a data race.
+
+Immediately after an uncompleted asynchronous abort, certain speculatively
+executed loads may read data from those internal buffers and pass it to dependent
+operations. This can then be used to infer the value via a cache side channel
+attack.
+
+Because the buffers are potentially shared between Hyper-Threads, cross
+Hyper-Thread attacks are possible.
+
+The victim of a malicious actor does not need to make use of TSX. Only the
+attacker needs to begin a TSX transaction and raise an asynchronous abort
+to try to leak some of the data stored in the buffers.
+
+Deeper technical information is available in the TAA specific x86 architecture
+section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
+
+
+Attack scenarios
+----------------
+
+Attacks against the TAA vulnerability can be implemented from unprivileged
+applications running on hosts or guests.
+
+As for MDS, the attacker has no control over the memory addresses that can be
+leaked. Only the victim is responsible for bringing data to the CPU. As a
+result, the malicious actor has to first sample as much data as possible and
+then postprocess it to try to infer any useful information from it.
+
+A potential attacker only has read access to the data. Also, there is no direct
+privilege escalation by using this technique.
+
+
+.. _tsx_async_abort_sys_info:
+
+TAA system information
+-----------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current TAA status
+of mitigated systems. The relevant sysfs file is:
+
+/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+
+The possible values in this file are:
+
+.. list-table::
+
+   * - 'Vulnerable'
+     - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
+   * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
+     - The system tries to clear the buffers but the microcode might not support the operation.
+   * - 'Mitigation: Clear CPU buffers'
+     - The microcode has been updated to clear the buffers. TSX is still enabled.
+   * - 'Mitigation: TSX disabled'
+     - TSX is disabled.
+   * - 'Not affected'
+     - The CPU is not affected by this issue.
+
+.. _ucode_needed:
+
+Best effort mitigation mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the processor is vulnerable, but the availability of the microcode-based
+mitigation mechanism is not advertised via CPUID, the kernel selects a best
+effort mitigation mode.  This mode invokes the mitigation instructions
+without a guarantee that they clear the CPU buffers.
+
+This is done to address virtualization scenarios where the host has the
+microcode update applied, but the hypervisor is not yet updated to expose the
+CPUID to the guest. If the host has updated microcode the protection takes
+effect; otherwise a few CPU cycles are wasted pointlessly.
+
+The state in the tsx_async_abort sysfs file reflects this situation
+accordingly.
+
+
+Mitigation mechanism
+--------------------
+
+The kernel detects the affected CPUs and the presence of the microcode which is
+required. If a CPU is affected and the microcode is available, then the kernel
+enables the mitigation by default.
+
+
+The mitigation can be controlled at boot time via a kernel command line option.
+See :ref:`taa_mitigation_control_command_line`. It also provides a sysfs
+interface. See :ref:`taa_mitigation_sysfs`.
+
+.. _virt_mechanism:
+
+Virtualization mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Affected systems where the host has the TAA microcode and the TAA mitigation is
+ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
+
+In all other cases, if the host either does not have the TAA microcode or the
+kernel is not mitigated, the system might be vulnerable.
+
+
+.. _taa_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+The kernel command line allows controlling the TAA mitigations at boot time with
+the option "tsx_async_abort=". The valid arguments for this option are:
+
+  ============  =============================================================
+  off		This option disables the TAA mitigation on affected platforms.
+                If the system has TSX enabled (see next parameter) and the CPU
+                is affected, the system is vulnerable.
+
+  full	        TAA mitigation is enabled. If TSX is enabled on an affected
+                system, CPU buffers are cleared on ring transitions. On
+                systems which are MDS-affected and deploy the MDS mitigation,
+                TAA is also mitigated. Specifying this option on those
+                systems has no additional effect.
+
+  full,nosmt    The same as tsx_async_abort=full, with SMT disabled on
+                vulnerable CPUs that have TSX enabled. This is the complete
+                mitigation. When TSX is disabled, SMT is not disabled because
+                the CPU is not vulnerable to cross-thread TAA attacks.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx_async_abort=full".
+
+The kernel command line also allows controlling the TSX feature using the
+parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+to control the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+The valid options are:
+
+  ============  =============================================================
+  off		Disables TSX.
+
+  on		Enables TSX.
+
+  auto		Disables TSX on affected platforms, otherwise enables TSX.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx=off".
+
+The following combinations of "tsx_async_abort" and "tsx" are possible. For
+affected platforms, tsx=auto is equivalent to tsx=off and the result will be:
+
+  =========  ====================   =========================================
+  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
+                                    buffers.
+  tsx=on     tsx_async_abort=off    The system is vulnerable.
+  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
+  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
+  =========  ====================   =========================================
+
+For unaffected platforms "tsx=on" and "tsx_async_abort=full" does not clear
+CPU buffers.  For platforms without TSX control, the "tsx" command line
+argument has no effect.
+
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If all user space applications are from a trusted source and do not execute
+untrusted code which is supplied externally, then the mitigation can be
+disabled. The same applies to virtualized environments with trusted guests.
+
+
+2. Untrusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If there are untrusted applications or guests on the system, enabling TSX
+might allow a malicious actor to leak data from the host or from other
+processes running on the same physical core.
+
+If the microcode is available and TSX is disabled on the host, attacks
+are prevented in a virtualized environment as well, even if the VMs do not
+explicitly enable the mitigation.
+
+
+.. _taa_default_mitigations:
+
+Default mitigations
+-------------------
+
+The kernel's default action for vulnerable processors is:
+
+  - Deploy TSX disable mitigation (tsx_async_abort=full).
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index d7b48c38c6e5..12aea1711bbf 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2612,6 +2612,7 @@
 					       ssbd=force-off [ARM64]
 					       l1tf=off [X86]
 					       mds=off [X86]
+					       tsx_async_abort=off [X86]
 
 			auto (default)
 				Mitigate all CPU vulnerabilities, but leave SMT
@@ -2627,6 +2628,7 @@
 				be fully mitigated, even if it means losing SMT.
 				Equivalent to: l1tf=flush,nosmt [X86]
 					       mds=full,nosmt [X86]
+					       tsx_async_abort=full,nosmt [X86]
 
 	mminit_loglevel=
 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
@@ -4656,6 +4658,40 @@
 			neutralize any effect of /proc/sys/kernel/sysrq.
 			Useful for debugging.
 
+	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+			Abort (TAA) vulnerability.
+
+			Similar to Micro-architectural Data Sampling (MDS),
+			certain CPUs that support Transactional
+			Synchronization Extensions (TSX) are vulnerable to an
+			exploit against CPU internal buffers which can forward
+			information to a disclosure gadget under certain
+			conditions.
+
+			In vulnerable processors, the speculatively forwarded
+			data can be used in a cache side channel attack, to
+			access data to which the attacker does not have direct
+			access.
+
+			This parameter controls the TAA mitigation.  The
+			options are:
+
+			full       - Enable TAA mitigation on vulnerable CPUs
+			full,nosmt - Enable TAA mitigation and disable SMT on
+				     vulnerable CPUs. If TSX is disabled, SMT
+				     is not disabled because the CPU is not
+				     vulnerable to cross-thread TAA attacks.
+			off        - Unconditionally disable TAA mitigation
+
+			Not specifying this option is equivalent to
+			tsx_async_abort=full.  On CPUs which are MDS affected
+			and deploy MDS mitigation, TAA mitigation is not
+			required and doesn't provide any additional
+			mitigation.
+
+			For details see:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+
 	tcpmhash_entries= [KNL,NET]
 			Set the number of tcp_metrics_hash slots.
 			Default value is 8192 or 16384 depending on total
diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index af64c4bb4447..a8de2fbc1caa 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -27,6 +27,7 @@ x86-specific Documentation
    mds
    microcode
    resctrl_ui
+   tsx_async_abort
    usb-legacy-support
    i386/index
    x86_64/index
diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst
new file mode 100644
index 000000000000..0bcc764c0429
--- /dev/null
+++ b/Documentation/x86/tsx_async_abort.rst
@@ -0,0 +1,54 @@
+TSX Async Abort (TAA) mitigation
+=================================================
+
+.. _tsx_async_abort:
+
+Overview
+--------
+
+TSX Async Abort (TAA) is a side channel attack on internal buffers in some
+Intel processors, similar to Microarchitectural Data Sampling (MDS).  In this
+case certain loads may speculatively pass invalid data to dependent operations
+when an asynchronous abort condition is pending in a Transactional
+Synchronization Extensions (TSX) transaction.  This includes loads with no
+fault or assist condition. Such loads may speculatively expose stale data from
+the same uarch data structures as in MDS, with the same scope of exposure, i.e.
+same-thread and cross-thread. This issue affects all current processors that
+support TSX.
+
+Mitigation strategy
+-------------------
+
+a) TSX disable - One of the mitigations is to disable the TSX feature. A new
+MSR, IA32_TSX_CTRL, is available on future processors, and on current
+processors after a microcode update, which can be used to disable TSX. This
+MSR controls both the TSX feature and the enumeration of the TSX feature
+bits (RTM and HLE) in CPUID.
+
+b) Clear CPU buffers - Similar to MDS, clearing the CPU buffers mitigates this
+vulnerability. More details on this approach can be found here:
+https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
+
+Kernel internal mitigation modes
+--------------------------------
+
+ =============    ============================================================
+ off              Mitigation is disabled. Either the CPU is not affected or
+                  tsx_async_abort=off is supplied on the kernel command line.
+
+ tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
+                  bootup on processors that support TSX control.
+
+ verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
+                  advertised in CPUID.
+
+ ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
+                  advertised in CPUID. That is mainly for virtualization
+                  scenarios where the host has the updated microcode but the
+                  hypervisor does not expose MD_CLEAR in CPUID. It's a best
+                  effort approach without guarantee.
+ =============    ============================================================
+
+If the CPU is affected and the "tsx_async_abort" kernel command line parameter
+is not provided, then the kernel selects an appropriate mitigation depending on
+the status of the RTM and MD_CLEAR CPUID bits.
-- 
2.20.1


* [MODERATED] [PATCH v6 9/9] TAAv6 9
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (7 preceding siblings ...)
  2019-10-09 23:29 ` [MODERATED] [PATCH v6 8/9] TAAv6 8 Pawan Gupta
@ 2019-10-09 23:30 ` Pawan Gupta
  2019-10-09 23:34 ` [MODERATED] Re: [PATCH v6 1/9] TAAv6 1 Pawan Gupta
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:30 UTC (permalink / raw)
  To: speck

Transactional Synchronization Extensions (TSX) is an extension to the
x86 instruction set architecture (ISA) that adds Hardware Transactional
Memory (HTM) support. Changing TSX state currently requires a reboot.
This may not be desirable when rebooting imposes a huge penalty. Add
support to control TSX feature via a new sysfs file:
/sys/devices/system/cpu/hw_tx_mem

- Writing 0|off|N|n to this file disables TSX feature on all the CPUs.
  This is equivalent to boot parameter tsx=off.
- Writing 1|on|Y|y to this file enables TSX feature on all the CPUs.
  This is equivalent to boot parameter tsx=on.
- Reading from this file returns the status of the TSX feature.
- When TSX control is not supported this interface is not visible in
  sysfs.

Changing the TSX state from this interface also updates the CPUID.RTM
feature bit.  From the kernel side, this feature bit doesn't result in
any ALTERNATIVE code patching.  No memory allocations are done to
save/restore user state. No code paths outside of the tests for
vulnerability to TAA depend on the value of the feature bit.  In
general the kernel doesn't care whether RTM is present or not.

Applications typically look at CPUID bits once at startup (or when first
calling into a library that uses the feature).  So we have a couple of
cases to cover:

1) An application started and saw that RTM was enabled, so began
   to use it. Then TSX was disabled.  Net result in this case is that
   the application will keep trying to use RTM, but every xbegin() will
   immediately abort the transaction. This has a performance impact to
   the application, but it doesn't affect correctness because all users
   of RTM must have a fallback path for when the transaction aborts. Note
   that even if an application is in the middle of a transaction when we
   disable RTM, we are safe. The IPI that we use to update the TSX_CTRL
   MSR will abort the transaction (just as any interrupt would abort
   a transaction).

2) An application starts and sees RTM is not available. So it will
   always use alternative paths. Even if TSX is enabled and RTM is set,
   applications in general do not re-evaluate their choice so will
   continue to run in non-TSX mode.

When the TSX state is changed from the sysfs interface, the TSX Async
Abort (TAA) mitigation state also needs to be updated. Set the TAA
mitigation state according to the TSX and VERW static branch state.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |  23 ++++
 .../admin-guide/hw-vuln/tsx_async_abort.rst   |  29 +++++
 arch/x86/kernel/cpu/bugs.c                    |  21 +++-
 arch/x86/kernel/cpu/cpu.h                     |   3 +-
 arch/x86/kernel/cpu/tsx.c                     | 100 +++++++++++++++++-
 drivers/base/cpu.c                            |  32 +++++-
 include/linux/cpu.h                           |   6 ++
 7 files changed, 210 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 9ef9efa4e717..09a90be9f8cc 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -563,3 +563,26 @@ Description:	Umwait control
 			  or C0.2 state. The time is an unsigned 32-bit number.
 			  Note that a value of zero means there is no limit.
 			  Low order two bits must be zero.
+
+What:		/sys/devices/system/cpu/hw_tx_mem
+Date:		August 2019
+Contact:	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+		Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:	Hardware Transactional Memory (HTM) control.
+
+		Read/write interface to control HTM feature for all the CPUs in
+		the system.  This interface is only present on platforms that
+		support HTM control. HTM is a hardware feature to speed up the
+		execution of multi-threaded software through lock elision. An
+		example of HTM implementation is Intel Transactional
+		Synchronization Extensions (TSX).
+
+			Read returns the status of the HTM feature.
+
+			0: HTM is disabled
+			1: HTM is enabled
+
+			Write sets the state of the HTM feature.
+
+			0: Disables HTM
+			1: Enables HTM
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
index 58f24db49615..b62bc749fd8c 100644
--- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -207,6 +207,35 @@ buffers.  For platforms without TSX control "tsx" command line argument has no
 effect.
 
 
+.. _taa_mitigation_sysfs:
+
+Mitigation control using sysfs
+------------------------------
+
+For those affected systems that cannot be frequently rebooted to enable or
+disable TSX, sysfs can be used as an alternative after installing the updates.
+The possible values for the file /sys/devices/system/cpu/hw_tx_mem are:
+
+  ============  =============================================================
+  0		Disable TSX. Any attempt to enter a TSX transactional region
+                immediately aborts, before any instruction in the region
+                executes even speculatively, and execution continues on the
+                fallback path. Equivalent to boot parameter "tsx=off".
+
+  1		Enable TSX. Equivalent to boot parameter "tsx=on".
+
+  ============  =============================================================
+
+Reading from this file returns the status of the TSX feature. This file is only
+present on systems that support TSX control.
+
+When disabling TSX via the sysfs mechanism, applications that are already
+running and use TSX will see their transactional regions aborted, and their
+execution flow will be redirected to the fallback path, losing the benefits
+of the non-blocking path. TSX always requires fallback code to guarantee
+correct execution when transactions abort.
+
+
 Mitigation selection guide
 --------------------------
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 073687ddd06d..3bd8040ef747 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -274,7 +274,7 @@ early_param("mds", mds_cmdline);
 #define pr_fmt(fmt)	"TAA: " fmt
 
 /* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -374,6 +374,25 @@ static int __init tsx_async_abort_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_cmdline);
 
+void taa_update_mitigation(bool tsx_enabled)
+{
+	/*
+	 * When userspace changes the TSX state, update taa_mitigation
+	 * so that the updated mitigation state is shown in:
+	 * /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+	 *
+	 * If TSX is disabled, the mitigation is TSX_DISABLE; else if CPU
+	 * buffer clearing (VERW) is enabled, the mitigation is VERW;
+	 * otherwise the system is vulnerable.
+	 */
+	if (!tsx_enabled)
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+	else if (static_key_count(&mds_user_clear.key))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_OFF;
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 38ab6e115eac..c0e2ae982692 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -51,11 +51,12 @@ enum tsx_ctrl_states {
 	TSX_CTRL_NOT_SUPPORTED,
 };
 
-extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+extern enum tsx_ctrl_states tsx_ctrl_state;
 
 extern void __init tsx_init(void);
 extern void tsx_enable(void);
 extern void tsx_disable(void);
+extern void taa_update_mitigation(bool tsx_enabled);
 #else
 static inline void tsx_init(void) { }
 #endif /* CONFIG_CPU_SUP_INTEL */
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e93abe6f0bb9..96320449abb7 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -10,12 +10,15 @@
 
 #include <linux/processor.h>
 #include <linux/cpufeature.h>
+#include <linux/cpu.h>
 
 #include <asm/cmdline.h>
 
 #include "cpu.h"
 
-enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+/* Serializes tsx_ctrl_state updates and the cross-CPU TSX_CTRL MSR writes */
+static DEFINE_MUTEX(tsx_mutex);
+
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
 
 void tsx_disable(void)
 {
@@ -118,3 +121,98 @@ void __init tsx_init(void)
 		setup_force_cpu_cap(X86_FEATURE_RTM);
 	}
 }
+
+static void tsx_update_this_cpu(void *arg)
+{
+	unsigned long enable = (unsigned long)arg;
+
+	if (enable)
+		tsx_enable();
+	else
+		tsx_disable();
+}
+
+/* The caller must hold tsx_mutex and update tsx_ctrl_state before calling */
+static void tsx_update_on_each_cpu(bool val)
+{
+	get_online_cpus();
+	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
+	put_online_cpus();
+}
+
+ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
+		       char *buf)
+{
+	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
+}
+
+ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
+			const char *buf, size_t count)
+{
+	enum tsx_ctrl_states requested_state;
+	ssize_t ret;
+	bool val;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	mutex_lock(&tsx_mutex);
+
+	if (val)
+		requested_state = TSX_CTRL_ENABLE;
+	else
+		requested_state = TSX_CTRL_DISABLE;
+
+	/* Current state is same as the requested state, do nothing */
+	if (tsx_ctrl_state == requested_state)
+		goto exit;
+
+	tsx_ctrl_state = requested_state;
+
+	/*
+	 * Changing the TSX state from this interface also updates CPUID.RTM
+	 * feature bit. From the kernel side, this feature bit doesn't result
+	 * in any ALTERNATIVE code patching.  No memory allocations are done to
+	 * save/restore user state. No code paths outside of the tests for
+	 * vulnerability to TAA are dependent on the value of the feature bit.
+	 * In general the kernel doesn't care whether RTM is present or not.
+	 *
+	 * From the user side it is a bit fuzzier. Applications typically look
+	 * at CPUID bits once at startup (or when first calling into a library
+	 * that uses the feature).  So we have a couple of cases to cover:
+	 *
+	 * 1) An application started and saw that RTM was enabled, so began
+	 *    to use it. Then TSX was disabled.  Net result in this case is
+	 *    that the application will keep trying to use RTM, but every
+	 *    xbegin() will immediately abort the transaction. This has a
+	 *    performance impact to the application, but it doesn't affect
+	 *    correctness because all users of RTM must have a fallback path
+	 *    for when the transaction aborts. Note that even if an application
+	 *    is in the middle of a transaction when we disable RTM, we are
+	 *    safe. The IPI that we use to update the TSX_CTRL MSR will abort
+	 *    the transaction (just as any interrupt would abort a
+	 *    transaction).
+	 *
+	 * 2) An application starts and sees RTM is not available. So it will
+	 *    always use alternative paths. Even if TSX is enabled and RTM is
+	 *    set, applications in general do not re-evaluate their choice so
+	 *    will continue to run in non-TSX mode.
+	 */
+	tsx_update_on_each_cpu(val);
+
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		taa_update_mitigation(val);
+exit:
+	mutex_unlock(&tsx_mutex);
+
+	return count;
+}
+
+umode_t hw_tx_mem_is_visible(void)
+{
+	if (tsx_ctrl_state == TSX_CTRL_NOT_SUPPORTED)
+		return 0;
+
+	return 0644;
+}
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 0fccd8c0312e..2148e974ab0a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -460,6 +460,34 @@ struct device *cpu_device_create(struct device *parent, void *drvdata,
 }
 EXPORT_SYMBOL_GPL(cpu_device_create);
 
+ssize_t __weak hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf)
+{
+	return -ENODEV;
+}
+
+ssize_t __weak hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count)
+{
+	return -ENODEV;
+}
+
+DEVICE_ATTR_RW(hw_tx_mem);
+
+umode_t __weak hw_tx_mem_is_visible(void)
+{
+	return 0;
+}
+
+static umode_t cpu_root_attrs_is_visible(struct kobject *kobj,
+					 struct attribute *attr, int index)
+{
+	if (attr == &dev_attr_hw_tx_mem.attr)
+		return hw_tx_mem_is_visible();
+
+	return attr->mode;
+}
+
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 static DEVICE_ATTR(modalias, 0444, print_cpu_modalias, NULL);
 #endif
@@ -481,11 +509,13 @@ static struct attribute *cpu_root_attrs[] = {
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 	&dev_attr_modalias.attr,
 #endif
+	&dev_attr_hw_tx_mem.attr,
 	NULL
 };
 
 static struct attribute_group cpu_root_attr_group = {
-	.attrs = cpu_root_attrs,
+	.attrs		= cpu_root_attrs,
+	.is_visible	= cpu_root_attrs_is_visible,
 };
 
 static const struct attribute_group *cpu_root_attr_groups[] = {
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 1872b15bda75..820fda85fc71 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -63,6 +63,12 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
 					struct device_attribute *attr,
 					char *buf);
 
+extern ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf);
+extern ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count);
+extern umode_t hw_tx_mem_is_visible(void);
+
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
 				 const struct attribute_group **groups,
-- 
2.20.1
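As a usage sketch of the proposed knob (hedged: `/sys/devices/system/cpu/hw_tx_mem` only exists on platforms with TSX_CTRL support and with this series applied; the helper names below are illustrative, not part of the patch):

```python
from pathlib import Path

# Knob path from the ABI documentation in this series; absent on systems
# without TSX_CTRL support or without this patch applied.
KNOB = Path("/sys/devices/system/cpu/hw_tx_mem")

def tsx_status() -> str:
    """Read the knob if present; '1' = TSX enabled, '0' = TSX disabled."""
    if KNOB.exists():
        return KNOB.read_text().strip()
    return "unsupported"  # no runtime TSX control on this system

def set_tsx(enable: bool) -> None:
    """Write the knob (needs root); equivalent to tsx=on / tsx=off."""
    KNOB.write_text("1" if enable else "0")

print(tsx_status())
```

On a system without the knob this simply reports "unsupported", mirroring the patch's behaviour of hiding the file when TSX control is not available.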

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (8 preceding siblings ...)
  2019-10-09 23:30 ` [MODERATED] [PATCH v6 9/9] TAAv6 9 Pawan Gupta
@ 2019-10-09 23:34 ` Pawan Gupta
  2019-10-10  1:23   ` Pawan Gupta
  2019-10-09 23:38 ` Andrew Cooper
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 31+ messages in thread
From: Pawan Gupta @ 2019-10-09 23:34 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:22:56PM -0700, speck for Pawan Gupta wrote:
> Transactional Synchronization Extensions (TSX) may be used on certain
> processors as part of a speculative side channel attack.  A microcode
> update for existing processors that are vulnerable to this attack will
> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> option to disable TSX as one of the possible mitigations.  [Note that
> future processors that are not vulnerable will also support the
> IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> bits.

For some reason the "From:" and the "Subject:" lines are not getting
copied to the body. I am using the same "speckify-gitmail" script.

	$ git format-patch --cover-letter -n --thread -v6 -o ../v6 --to speck@linutronix.de v5.3..HEAD
	$ speckify-gitmail -s "TAAv6" v6 speck

Let me see what's wrong and re-send the series. Sorry for the trouble.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (9 preceding siblings ...)
  2019-10-09 23:34 ` [MODERATED] Re: [PATCH v6 1/9] TAAv6 1 Pawan Gupta
@ 2019-10-09 23:38 ` Andrew Cooper
  2019-10-09 23:40   ` Andrew Cooper
       [not found] ` <5d9e6daa.1c69fb81.f84ad.88ceSMTPIN_ADDED_BROKEN@mx.google.com>
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 31+ messages in thread
From: Andrew Cooper @ 2019-10-09 23:38 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2043 bytes --]

On 10/10/2019 00:22, speck for Pawan Gupta wrote:
> Transactional Synchronization Extensions (TSX) may be used on certain
> processors as part of a speculative side channel attack.  A microcode
> update for existing processors that are vulnerable to this attack will
> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> option to disable TSX as one of the possible mitigations.  [Note that
> future processors that are not vulnerable will also support the
> IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> bits.
>
> TSX has two sub-features:
>
> 1. Restricted Transactional Memory (RTM) is an explicitly-used feature
>    where new instructions begin and end TSX transactions.
> 2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
>    "old" style locks are used by software.
>
> Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
> IA32_TSX_CTRL MSR.
>
> There are two control bits in IA32_TSX_CTRL MSR:
>
>   Bit 0: When set it disables the Restricted Transactional Memory (RTM)
>          sub-feature of TSX (will force all transactions to abort on the
> 	 XBEGIN instruction).
>
>   Bit 1: When set it disables the enumeration of the RTM and HLE feature
>          (i.e. it will make CPUID(EAX=7).EBX{bit4} and
>          CPUID(EAX=7).EBX{bit11} read as 0).
>
> The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
> disabled but still enumerated as present by CPUID(EAX=7).EBX{bit4}.

So one paragraph was changed, but not this one it seems.

As for HLE itself, bit 0 is specified to disable it, along with RTM. 
(Or at least, it says so in the latest doc I have on the subject).

I don't know what the enabled status of HLE is on the MDS_NO, TAA parts,
and whether it is statically disabled with the TSX_CTRL microcode, but
if it isn't statically disabled then it needs to be dynamically disabled
by bit 0, or a 'CLFLUSH; XBEGIN ...; MOV secret' can still be used to
exploit TAA.

~Andrew


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:38 ` Andrew Cooper
@ 2019-10-09 23:40   ` Andrew Cooper
  2019-10-09 23:53     ` Luck, Tony
  0 siblings, 1 reply; 31+ messages in thread
From: Andrew Cooper @ 2019-10-09 23:40 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2243 bytes --]

On 10/10/2019 00:38, speck for Andrew Cooper wrote:
> On 10/10/2019 00:22, speck for Pawan Gupta wrote:
>> Transactional Synchronization Extensions (TSX) may be used on certain
>> processors as part of a speculative side channel attack.  A microcode
>> update for existing processors that are vulnerable to this attack will
>> add a new MSR, IA32_TSX_CTRL to allow the system administrator the
>> option to disable TSX as one of the possible mitigations.  [Note that
>> future processors that are not vulnerable will also support the
>> IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
>> bits.
>>
>> TSX has two sub-features:
>>
>> 1. Restricted Transactional Memory (RTM) is an explicitly-used feature
>>    where new instructions begin and end TSX transactions.
>> 2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
>>    "old" style locks are used by software.
>>
>> Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
>> IA32_TSX_CTRL MSR.
>>
>> There are two control bits in IA32_TSX_CTRL MSR:
>>
>>   Bit 0: When set it disables the Restricted Transactional Memory (RTM)
>>          sub-feature of TSX (will force all transactions to abort on the
>> 	 XBEGIN instruction).
>>
>>   Bit 1: When set it disables the enumeration of the RTM and HLE feature
>>          (i.e. it will make CPUID(EAX=7).EBX{bit4} and
>>          CPUID(EAX=7).EBX{bit11} read as 0).
>>
>> The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
>> disabled but still enumerated as present by CPUID(EAX=7).EBX{bit4}.
> So one paragraph was changed, but not this one it seems.
>
> As for HLE itself, bit 0 is specified to disable it, along with RTM. 
> (Or at least, it says so in the latest doc I have on the subject).
>
> I don't know what the enabled status of HLE is on the MDS_NO, TAA parts,
> and whether it is statically disabled with the TSX_CTRL microcode, but
> if it isn't statically disabled then it needs to be dynamically disabled
> by bit 0, or a 'CLFLUSH; XBEGIN ...; MOV secret' can still be used to
> exploit TAA.

Apologies.  That is the RTM sequence.

For HLE, I meant 'CLFLUSH; XACQUIRE ...; MOV secret'.

~Andrew


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:40   ` Andrew Cooper
@ 2019-10-09 23:53     ` Luck, Tony
  2019-10-10  0:01       ` Andrew Cooper
  0 siblings, 1 reply; 31+ messages in thread
From: Luck, Tony @ 2019-10-09 23:53 UTC (permalink / raw)
  To: speck

On Thu, Oct 10, 2019 at 12:40:45AM +0100, speck for Andrew Cooper wrote:
> > I don't know what the enabled status of HLE is on the MDS_NO, TAA parts,
> > and whether it is statically disabled with the TSX_CTRL microcode, but
> > if it isn't statically disabled then it needs to be dynamically disabled
> > by bit 0, or a 'CLFLUSH; XBEGIN ...; MOV secret' can still be used to
> > exploit TAA.
> 
> Apologies.  That is the RTM sequence.
> 
> For HLE, I meant 'CLFLUSH; XAQUIRE ...; MOV secret'.

Did we send out a review copy of the white paper for TAA yet?

HLE is kind of buried, but we do say:

   On processors that enumerate IA32_ARCH_CAPABILITIES[TSX_CTRL]
   (bit 7)=1, HLE prefix hints are always ignored.

Which is to say that HLE is unconditionally disabled by the
new microcode for TAA.

-Tony
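The enumeration Tony quotes is bit 7 of IA32_ARCH_CAPABILITIES (MSR 0x10A), the same bit the series defines for TSX_CTRL presence. A minimal sketch (the helper name is mine, not from the series) of testing that bit in a raw MSR value:

```python
# IA32_ARCH_CAPABILITIES bit 7: the IA32_TSX_CTRL MSR is present, and per
# the quoted white paper text, HLE prefix hints are always ignored.
ARCH_CAP_TSX_CTRL = 1 << 7

def has_tsx_ctrl(arch_capabilities: int) -> bool:
    """True when the CPU enumerates IA32_TSX_CTRL (illustrative helper)."""
    return bool(arch_capabilities & ARCH_CAP_TSX_CTRL)
```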

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:53     ` Luck, Tony
@ 2019-10-10  0:01       ` Andrew Cooper
  2019-10-10 16:51         ` Luck, Tony
  0 siblings, 1 reply; 31+ messages in thread
From: Andrew Cooper @ 2019-10-10  0:01 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1237 bytes --]

On 10/10/2019 00:53, speck for Luck, Tony wrote:
> On Thu, Oct 10, 2019 at 12:40:45AM +0100, speck for Andrew Cooper wrote:
>>> I don't know what the enabled status of HLE is on the MDS_NO, TAA parts,
>>> and whether it is statically disabled with the TSX_CTRL microcode, but
>>> if it isn't statically disabled then it needs to be dynamically disabled
>>> by bit 0, or a 'CLFLUSH; XBEGIN ...; MOV secret' can still be used to
>>> exploit TAA.
>> Apologies.  That is the RTM sequence.
>>
>> For HLE, I meant 'CLFLUSH; XAQUIRE ...; MOV secret'.
> Did we send out a review copy of the white paper for TAA yet?

Not as far as I am aware.  Have I missed something?

I'm still working from the pdf from June 26th, which I seem to recall
was from just after the adjustment of bit 0's behaviour away from
causing #UD's.

> HLE is kind of buried, but we do say:
>
>    On processors that enumerate IA32_ARCH_CAPABILITIES[TSX_CTRL]
>    (bit 7)=1, HLE prefix hints are always ignored.
>
> Which is to say that HLE is unconditionally disabled by the
> new microcode for TAA.

Great.  I look forward to a paper to review.

Is that a firm decision on Ronak's suggestion that HLE is going to be
sunset?

~Andrew


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-09 23:34 ` [MODERATED] Re: [PATCH v6 1/9] TAAv6 1 Pawan Gupta
@ 2019-10-10  1:23   ` Pawan Gupta
  2019-10-15 12:54     ` Thomas Gleixner
  0 siblings, 1 reply; 31+ messages in thread
From: Pawan Gupta @ 2019-10-10  1:23 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:34:01PM -0700, speck for Pawan Gupta wrote:
> On Wed, Oct 09, 2019 at 04:22:56PM -0700, speck for Pawan Gupta wrote:
> > Transactional Synchronization Extensions (TSX) may be used on certain
> > processors as part of a speculative side channel attack.  A microcode
> > update for existing processors that are vulnerable to this attack will
> > add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> > option to disable TSX as one of the possible mitigations.  [Note that
> > future processors that are not vulnerable will also support the
> > IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> > bits.
> 
> For some reason the "From:" and the "Subject:" lines are not getting
> copied to the body. I am using the same "speckify-gitmail" script.
> 
> 	$ git format-patch --cover-letter -n --thread -v6 -o ../v6 --to speck@linutronix.de v5.3..HEAD
> 	$ speckify-gitmail -s "TAAv6" v6 speck
> 
> Let me see what's wrong and re-send the series. Sorry for the trouble.

The speckify-gitmail script is doing the right thing. Dumping the message body
content just before gpg.encrypt() shows the correct "From:" and "Subject:". I
don't know where it is getting stripped off.

Thanks,
Pawan

--- a/speckify-gitmail
+++ b/speckify-gitmail
@@ -105,6 +105,7 @@ for f in sorted(infiles):
         content = 'From: %s\n' %mfrom
         content += 'Subject: %s] %s\n\n' %(prefix, subj.strip())
         content += msg.get_payload().encode()
+        sys.stdout.write(content)
 
         try:
             res = gpg.encrypt(content, [args.mailto])

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 3/9] TAAv6 3
       [not found] ` <5d9e6daa.1c69fb81.f84ad.88ceSMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-10  6:47   ` Greg KH
  2019-10-10 23:44     ` Pawan Gupta
  0 siblings, 1 reply; 31+ messages in thread
From: Greg KH @ 2019-10-10  6:47 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:24:56PM -0700, speck for Pawan Gupta wrote:
> +	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
> +	if (ret >= 0) {
> +		if (!strcmp(arg, "on")) {
> +			tsx_ctrl_state = TSX_CTRL_ENABLE;
> +		} else if (!strcmp(arg, "off")) {
> +			tsx_ctrl_state = TSX_CTRL_DISABLE;
> +		} else {
> +			tsx_ctrl_state = TSX_CTRL_DISABLE;
> +			pr_info("tsx: invalid option, defaulting to off\n");

pr_err()?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 5/9] TAAv6 5
       [not found] ` <5d9e6e22.1c69fb81.6df19.ff55SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-10  6:50   ` Greg KH
  2019-10-10 21:18     ` Pawan Gupta
  2019-10-10  6:50   ` Greg KH
  1 sibling, 1 reply; 31+ messages in thread
From: Greg KH @ 2019-10-10  6:50 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:26:56PM -0700, speck for Pawan Gupta wrote:
> Add the sysfs reporting file for TSX Async Abort. It exposes the
> vulnerability and the mitigation state similar to the existing files for
> the other hardware vulnerabilities.
> 
> sysfs file path is:
> /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
>  drivers/base/cpu.c         |  9 +++++++++
>  include/linux/cpu.h        |  3 +++
>  3 files changed, 35 insertions(+)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 0b7c7a826580..073687ddd06d 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -1451,6 +1451,21 @@ static ssize_t mds_show_state(char *buf)
>  		       sched_smt_active() ? "vulnerable" : "disabled");
>  }
>  
> +static ssize_t tsx_async_abort_show_state(char *buf)
> +{
> +	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
> +	    (taa_mitigation == TAA_MITIGATION_OFF))
> +		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		return sprintf(buf, "%s; SMT Host state unknown\n",
> +			       taa_strings[taa_mitigation]);
> +	}
> +
> +	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],

Shouldn't that be:
	return sprintf(buf, "%s: SMT %s\n", taa_strings[taa_mitigation],

Use a ':' and not a ';'

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 5/9] TAAv6 5
       [not found] ` <5d9e6e22.1c69fb81.6df19.ff55SMTPIN_ADDED_BROKEN@mx.google.com>
  2019-10-10  6:50   ` [MODERATED] Re: [PATCH v6 5/9] TAAv6 5 Greg KH
@ 2019-10-10  6:50   ` Greg KH
  1 sibling, 0 replies; 31+ messages in thread
From: Greg KH @ 2019-10-10  6:50 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:26:56PM -0700, speck for Pawan Gupta wrote:
> +static ssize_t tsx_async_abort_show_state(char *buf)
> +{
> +	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
> +	    (taa_mitigation == TAA_MITIGATION_OFF))
> +		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
> +
> +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> +		return sprintf(buf, "%s; SMT Host state unknown\n",
> +			       taa_strings[taa_mitigation]);

Also a ':' here and not ';'

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 9/9] TAAv6 9
       [not found] ` <5d9e6f13.1c69fb81.d7036.be99SMTPIN_ADDED_BROKEN@mx.google.com>
@ 2019-10-10  6:54   ` Greg KH
  2019-10-12  1:41     ` Pawan Gupta
  0 siblings, 1 reply; 31+ messages in thread
From: Greg KH @ 2019-10-10  6:54 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:30:57PM -0700, speck for Pawan Gupta wrote:
> Transactional Synchronization Extensions (TSX) is an extension to the
> x86 instruction set architecture (ISA) that adds Hardware Transactional
> Memory (HTM) support. Changing TSX state currently requires a reboot.
> This may not be desirable when rebooting imposes a huge penalty. Add
> support to control TSX feature via a new sysfs file:
> /sys/devices/system/cpu/hw_tx_mem
> 
> - Writing 0|off|N|n to this file disables TSX feature on all the CPUs.
>   This is equivalent to boot parameter tsx=off.
> - Writing 1|on|Y|y to this file enables TSX feature on all the CPUs.
>   This is equivalent to boot parameter tsx=on.
> - Reading from this returns the status of TSX feature.
> - When TSX control is not supported this interface is not visible in
>   sysfs.
> 
> Changing the TSX state from this interface also updates CPUID.RTM
> feature bit.  From the kernel side, this feature bit doesn't result in
> any ALTERNATIVE code patching.  No memory allocations are done to
> save/restore user state. No code paths outside of the tests for
> vulnerability to TAA are dependent on the value of the feature bit.  In
> general the kernel doesn't care whether RTM is present or not.
> 
> Applications typically look at CPUID bits once at startup (or when first
> calling into a library that uses the feature).  So we have a couple of
> cases to cover:
> 
> 1) An application started and saw that RTM was enabled, so began
>    to use it. Then TSX was disabled.  Net result in this case is that
>    the application will keep trying to use RTM, but every xbegin() will
>    immediately abort the transaction. This has a performance impact to
>    the application, but it doesn't affect correctness because all users
>    of RTM must have a fallback path for when the transaction aborts. Note
>    that even if an application is in the middle of a transaction when we
>    disable RTM, we are safe. The IPI that we use to update the TSX_CTRL
>    MSR will abort the transaction (just as any interrupt would abort
>    a transaction).
> 
> 2) An application starts and sees RTM is not available. So it will
>    always use alternative paths. Even if TSX is enabled and RTM is set,
>    applications in general do not re-evaluate their choice so will
>    continue to run in non-TSX mode.
> 
> When the TSX state is changed from the sysfs interface, TSX Async Abort
> (TAA) mitigation state also needs to be updated. Set the TAA mitigation
> state as per TSX and VERW static branch state.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> ---
>  .../ABI/testing/sysfs-devices-system-cpu      |  23 ++++
>  .../admin-guide/hw-vuln/tsx_async_abort.rst   |  29 +++++
>  arch/x86/kernel/cpu/bugs.c                    |  21 +++-
>  arch/x86/kernel/cpu/cpu.h                     |   3 +-
>  arch/x86/kernel/cpu/tsx.c                     | 100 +++++++++++++++++-
>  drivers/base/cpu.c                            |  32 +++++-
>  include/linux/cpu.h                           |   6 ++
>  7 files changed, 210 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index 9ef9efa4e717..09a90be9f8cc 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -563,3 +563,26 @@ Description:	Umwait control
>  			  or C0.2 state. The time is an unsigned 32-bit number.
>  			  Note that a value of zero means there is no limit.
>  			  Low order two bits must be zero.
> +
> +What:		/sys/devices/system/cpu/hw_tx_mem
> +Date:		August 2019
> +Contact:	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> +		Linux kernel mailing list <linux-kernel@vger.kernel.org>
> +Description:	Hardware Transactional Memory (HTM) control.
> +
> +		Read/write interface to control HTM feature for all the CPUs in
> +		the system.  This interface is only present on platforms that
> +		support HTM control. HTM is a hardware feature to speed up the
> +		execution of multi-threaded software through lock elision. An
> +		example of HTM implementation is Intel Transactional
> +		Synchronization Extensions (TSX).
> +
> +			Read returns the status of HTM feature.
> +
> +			0: HTM is disabled
> +			1: HTM is enabled
> +
> +			Write sets the state of HTM feature.
> +
> +			0: Disables HTM
> +			1: Enables HTM
> diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
> index 58f24db49615..b62bc749fd8c 100644
> --- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
> +++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
> @@ -207,6 +207,35 @@ buffers.  For platforms without TSX control "tsx" command line argument has no
>  effect.
>  
>  
> +.. _taa_mitigation_sysfs:
> +
> +Mitigation control using sysfs
> +------------------------------
> +
> +For those affected systems that can not be frequently rebooted to enable or
> +disable TSX, sysfs can be used as an alternative after installing the updates.
> +The possible values for the file /sys/devices/system/cpu/hw_tx_mem are:
> +
> +  ============  =============================================================
> +  0		Disable TSX. Upon entering a TSX transactional region, the code
> +                will immediately abort, before any instruction executes within
> +                the transactional region even speculatively, and continue on
> +                the fallback. Equivalent to boot parameter "tsx=off".
> +
> +  1		Enable TSX. Equivalent to boot parameter "tsx=on".
> +
> +  ============  =============================================================
> +
> +Reading from this file returns the status of TSX feature. This file is only
> +present on systems that support TSX control.
> +
> +When disabling TSX by using the sysfs mechanism, applications that are already
> +running and use TSX will see their transactional regions aborted and execution
> +flow will be redirected to the fallback, losing the benefits of the
> +non-blocking path. TSX needs fallback code to guarantee correct execution
> +without transactional regions.
> +
> +
>  Mitigation selection guide
>  --------------------------
>  
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 073687ddd06d..3bd8040ef747 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -274,7 +274,7 @@ early_param("mds", mds_cmdline);
>  #define pr_fmt(fmt)	"TAA: " fmt
>  
>  /* Default mitigation for TAA-affected CPUs */
> -static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
> +static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
>  static bool taa_nosmt __ro_after_init;
>  
>  static const char * const taa_strings[] = {
> @@ -374,6 +374,25 @@ static int __init tsx_async_abort_cmdline(char *str)
>  }
>  early_param("tsx_async_abort", tsx_async_abort_cmdline);
>  
> +void taa_update_mitigation(bool tsx_enabled)
> +{
> +	/*
> +	 * When userspace changes the TSX state, update taa_mitigation
> +	 * so that the updated mitigation state is shown in:
> +	 * /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> +	 *
> +	 * Check if TSX is disabled.
> +	 * Check if CPU buffer clear is enabled.
> +	 * else the system is vulnerable.
> +	 */
> +	if (!tsx_enabled)
> +		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> +	else if (static_key_count(&mds_user_clear.key))
> +		taa_mitigation = TAA_MITIGATION_VERW;
> +	else
> +		taa_mitigation = TAA_MITIGATION_OFF;
> +}
> +
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "Spectre V1 : " fmt
>  
> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 38ab6e115eac..c0e2ae982692 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -51,11 +51,12 @@ enum tsx_ctrl_states {
>  	TSX_CTRL_NOT_SUPPORTED,
>  };
>  
> -extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
> +extern enum tsx_ctrl_states tsx_ctrl_state;
>  
>  extern void __init tsx_init(void);
>  extern void tsx_enable(void);
>  extern void tsx_disable(void);
> +extern void taa_update_mitigation(bool tsx_enabled);
>  #else
>  static inline void tsx_init(void) { }
>  #endif /* CONFIG_CPU_SUP_INTEL */
> diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
> index e93abe6f0bb9..96320449abb7 100644
> --- a/arch/x86/kernel/cpu/tsx.c
> +++ b/arch/x86/kernel/cpu/tsx.c
> @@ -10,12 +10,15 @@
>  
>  #include <linux/processor.h>
>  #include <linux/cpufeature.h>
> +#include <linux/cpu.h>
>  
>  #include <asm/cmdline.h>
>  
>  #include "cpu.h"
>  
> -enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
> +static DEFINE_MUTEX(tsx_mutex);

I still don't know what this is trying to "protect".  Please at least
document it so I have a chance to review it...


> +
> +enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
>  
>  void tsx_disable(void)
>  {
> @@ -118,3 +121,98 @@ void __init tsx_init(void)
>  		setup_force_cpu_cap(X86_FEATURE_RTM);
>  	}
>  }
> +
> +static void tsx_update_this_cpu(void *arg)
> +{
> +	unsigned long enable = (unsigned long)arg;
> +
> +	if (enable)
> +		tsx_enable();
> +	else
> +		tsx_disable();
> +}
> +
> +/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
> +static void tsx_update_on_each_cpu(bool val)
> +{
> +	get_online_cpus();
> +	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
> +	put_online_cpus();
> +}

Why take the lock?  This is only called in one place.

> +ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
> +		       char *buf)
> +{
> +	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
> +}
> +
> +ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
> +			const char *buf, size_t count)
> +{
> +	enum tsx_ctrl_states requested_state;
> +	ssize_t ret;
> +	bool val;
> +
> +	ret = kstrtobool(buf, &val);
> +	if (ret)
> +		return ret;
> +
> +	mutex_lock(&tsx_mutex);
> +
> +	if (val)
> +		requested_state = TSX_CTRL_ENABLE;
> +	else
> +		requested_state = TSX_CTRL_DISABLE;

Why is lock grabbed above and not here?

And again, why a lock at all...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-10  0:01       ` Andrew Cooper
@ 2019-10-10 16:51         ` Luck, Tony
  0 siblings, 0 replies; 31+ messages in thread
From: Luck, Tony @ 2019-10-10 16:51 UTC (permalink / raw)
  To: speck

On Thu, Oct 10, 2019 at 01:01:38AM +0100, speck for Andrew Cooper wrote:
> On 10/10/2019 00:53, speck for Luck, Tony wrote:
> > On Thu, Oct 10, 2019 at 12:40:45AM +0100, speck for Andrew Cooper wrote:
> >>> I don't know what the enabled status of HLE is on the MDS_NO, TAA parts,
> >>> and whether it is statically disabled with the TSX_CTRL microcode, but
> >>> if it isn't statically disabled then it needs to be dynamically disabled
> >>> by bit 0, or a 'CLFLUSH; XBEGIN ...; MOV secret' can still be used to
> >>> exploit TAA.
> >> Apologies.  That is the RTM sequence.
> >>
> >> For HLE, I meant 'CLFLUSH; XACQUIRE ...; MOV secret'.
> > Did we send out a review copy of the white paper for TAA yet?
> 
> Not as far as I am aware.  Have I missed something?

May just be that we haven't shared it yet.

> I'm still working from the pdf from June 26th, which I seem to recall
> was from just after the adjustment of bit 0's behaviour away from
> causing #UD's.
> 
> > HLE is kind of buried, but we do say:
> >
> >    On processors that enumerate IA32_ARCH_CAPABILITIES[TSX_CTRL]
> >    (bit 7)=1, HLE prefix hints are always ignored.
> >
> > Which is to say that HLE is unconditionally disabled by the
> > new microcode for TAA.
> 
> Great.  I look forward to a paper to review.
> 
> Is that a firm decision on Ronak's suggestion that HLE is going to be
> sunset?

I think so ... I believe that is why the wording above is somewhat
awkward (trying to cover the case of existing CPUs that will get a
microcode update implementing TSX_CTRL as well as future ones that have
it from the get-go).

-Tony


* [MODERATED] Re: [PATCH v6 5/9] TAAv6 5
  2019-10-10  6:50   ` [MODERATED] Re: [PATCH v6 5/9] TAAv6 5 Greg KH
@ 2019-10-10 21:18     ` Pawan Gupta
  0 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-10 21:18 UTC (permalink / raw)
  To: speck

On Thu, Oct 10, 2019 at 08:50:11AM +0200, speck for Greg KH wrote:
> On Wed, Oct 09, 2019 at 04:26:56PM -0700, speck for Pawan Gupta wrote:
> > Add the sysfs reporting file for TSX Async Abort. It exposes the
> > vulnerability and the mitigation state similar to the existing files for
> > the other hardware vulnerabilities.
> > 
> > sysfs file path is:
> > /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> > 
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Reviewed-by: Mark Gross <mgross@linux.intel.com>
> > Reviewed-by: Tony Luck <tony.luck@intel.com>
> > Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
> >  drivers/base/cpu.c         |  9 +++++++++
> >  include/linux/cpu.h        |  3 +++
> >  3 files changed, 35 insertions(+)
> > 
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 0b7c7a826580..073687ddd06d 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -1451,6 +1451,21 @@ static ssize_t mds_show_state(char *buf)
> >  		       sched_smt_active() ? "vulnerable" : "disabled");
> >  }
> >  
> > +static ssize_t tsx_async_abort_show_state(char *buf)
> > +{
> > +	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
> > +	    (taa_mitigation == TAA_MITIGATION_OFF))
> > +		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
> > +
> > +	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > +		return sprintf(buf, "%s; SMT Host state unknown\n",
> > +			       taa_strings[taa_mitigation]);
> > +	}
> > +
> > +	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
> 
> Shouldn't that be:
> 	return sprintf(buf, "%s: SMT %s\n", taa_strings[taa_mitigation],
> 
> Use a ':' and not a ';'

Oh, ';' is the separator between the main mitigation and the SMT status.

This is the sample output:

	"Mitigation: Clear CPU buffers; SMT vulnerable"
                                      ^
This follows the MDS nomenclature.

Thanks,
Pawan


* [MODERATED] Re: [PATCH v6 3/9] TAAv6 3
  2019-10-10  6:47   ` [MODERATED] Re: [PATCH v6 3/9] TAAv6 3 Greg KH
@ 2019-10-10 23:44     ` Pawan Gupta
  0 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-10 23:44 UTC (permalink / raw)
  To: speck

On Thu, Oct 10, 2019 at 08:47:48AM +0200, speck for Greg KH wrote:
> On Wed, Oct 09, 2019 at 04:24:56PM -0700, speck for Pawan Gupta wrote:
> > +	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
> > +	if (ret >= 0) {
> > +		if (!strcmp(arg, "on")) {
> > +			tsx_ctrl_state = TSX_CTRL_ENABLE;
> > +		} else if (!strcmp(arg, "off")) {
> > +			tsx_ctrl_state = TSX_CTRL_DISABLE;
> > +		} else {
> > +			tsx_ctrl_state = TSX_CTRL_DISABLE;
> > +			pr_info("tsx: invalid option, defaulting to off\n");
> 
> pr_err()?

Ok, will change to pr_err().

Thanks,
Pawan


* [MODERATED] Re: [PATCH v6 9/9] TAAv6 9
  2019-10-10  6:54   ` [MODERATED] Re: [PATCH v6 9/9] TAAv6 9 Greg KH
@ 2019-10-12  1:41     ` Pawan Gupta
  2019-10-13 20:05       ` Ben Hutchings
  0 siblings, 1 reply; 31+ messages in thread
From: Pawan Gupta @ 2019-10-12  1:41 UTC (permalink / raw)
  To: speck

On Thu, Oct 10, 2019 at 08:54:12AM +0200, speck for Greg KH wrote:
> > +static DEFINE_MUTEX(tsx_mutex);
> 
> I still don't know what this is trying to "protect".  Please at least
> document it so I have a chance to review it...

I will add these comments to the code:

	/*
	 * Protect tsx_ctrl_state and TSX update on_each_cpu() from concurrent
	 * writers.
	 *
	 *  - Serialize TSX_CTRL MSR writes across all CPUs when there are
	 *    concurrent sysfs requests. on_each_cpu() callback execution
	 *    order on other CPUs can be different for multiple calls to
	 *    on_each_cpu(). For conflicting concurrent sysfs requests the
	 *    lock ensures all CPUs have updated the TSX_CTRL MSR before the
	 *    next call to on_each_cpu().
	 *  - Serialize tsx_ctrl_state update so that it doesn't get out of
	 *    sync with TSX_CTRL MSR.
	 *  - Serialize update to taa_mitigation.
	 */

> > +/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
> > +static void tsx_update_on_each_cpu(bool val)
> > +{
> > +	get_online_cpus();
> > +	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
> > +	put_online_cpus();
> > +}
> 
> Why take the lock?  This is only called in one place.

So that the TSX_CTRL MSR state stays consistent across all CPUs between
multiple on_each_cpu() calls. Otherwise overlapping conflicting TSX_CTRL
MSR writes could end up with some CPUs having TSX enabled and others
having it disabled.

> > +ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
> > +		       char *buf)
> > +{
> > +	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
> > +}
> > +
> > +ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
> > +			const char *buf, size_t count)
> > +{
> > +	enum tsx_ctrl_states requested_state;
> > +	ssize_t ret;
> > +	bool val;
> > +
> > +	ret = kstrtobool(buf, &val);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mutex_lock(&tsx_mutex);
> > +
> > +	if (val)
> > +		requested_state = TSX_CTRL_ENABLE;
> > +	else
> > +		requested_state = TSX_CTRL_DISABLE;
> 
> Why is lock grabbed above and not here?

Yes, can be moved here.

Thanks,
Pawan


* [MODERATED] Re: [PATCH v6 9/9] TAAv6 9
  2019-10-12  1:41     ` Pawan Gupta
@ 2019-10-13 20:05       ` Ben Hutchings
  2019-10-13 21:00         ` Ben Hutchings
  0 siblings, 1 reply; 31+ messages in thread
From: Ben Hutchings @ 2019-10-13 20:05 UTC (permalink / raw)
  To: speck


On Fri, 2019-10-11 at 18:41 -0700, speck for Pawan Gupta wrote:
> On Thu, Oct 10, 2019 at 08:54:12AM +0200, speck for Greg KH wrote:
[...]
> > > +/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
> > > +static void tsx_update_on_each_cpu(bool val)
> > > +{
> > > +	get_online_cpus();
> > > +	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
> > > +	put_online_cpus();
> > > +}
> > 
> > Why take the lock?  This is only called in one place.
> 
> So that the TSX_CTRL MSR state stays consistent across all CPUs between
> multiple on_each_cpu() calls. Otherwise overlapping conflicting TSX_CTRL
> MSR writes could end up with some CPUs having TSX enabled and others
> having it disabled.
[...]

get_online_cpus() is a read lock, so it doesn't prevent concurrent
updates.

Ben.

-- 
Ben Hutchings
The obvious mathematical breakthrough [to break modern encryption]
would be development of an easy way to factor large prime numbers.
                                                           - Bill Gates





* [MODERATED] Re: [PATCH v6 9/9] TAAv6 9
  2019-10-13 20:05       ` Ben Hutchings
@ 2019-10-13 21:00         ` Ben Hutchings
  0 siblings, 0 replies; 31+ messages in thread
From: Ben Hutchings @ 2019-10-13 21:00 UTC (permalink / raw)
  To: speck


On Sun, 2019-10-13 at 21:05 +0100, speck for Ben Hutchings wrote:
> On Fri, 2019-10-11 at 18:41 -0700, speck for Pawan Gupta wrote:
> > On Thu, Oct 10, 2019 at 08:54:12AM +0200, speck for Greg KH wrote:
> [...]
> > > > +/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
> > > > +static void tsx_update_on_each_cpu(bool val)
> > > > +{
> > > > +	get_online_cpus();
> > > > +	on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
> > > > +	put_online_cpus();
> > > > +}
> > > 
> > > Why take the lock?  This is only called in one place.
> > 
> > So that the TSX_CTRL MSR state stays consistent across all CPUs between
> > multiple on_each_cpu() calls. Otherwise overlapping conflicting TSX_CTRL
> > MSR writes could end up with some CPUs having TSX enabled and others
> > having it disabled.
> [...]
> 
> get_online_cpus() is a read lock, so it doesn't prevent concurrent
> updates.

Sorry, now I realise "the lock" meant tsx_mutex as mentioned in the
comment.

Ben.

-- 
Ben Hutchings
The obvious mathematical breakthrough [to break modern encryption]
would be development of an easy way to factor large prime numbers.
                                                           - Bill Gates





* Re: [PATCH v6 1/9] TAAv6 1
  2019-10-10  1:23   ` Pawan Gupta
@ 2019-10-15 12:54     ` Thomas Gleixner
  2019-10-21 20:35       ` [MODERATED] " Pawan Gupta
  0 siblings, 1 reply; 31+ messages in thread
From: Thomas Gleixner @ 2019-10-15 12:54 UTC (permalink / raw)
  To: speck

On Wed, 9 Oct 2019, speck for Pawan Gupta wrote:
> On Wed, Oct 09, 2019 at 04:34:01PM -0700, speck for Pawan Gupta wrote:
> > On Wed, Oct 09, 2019 at 04:22:56PM -0700, speck for Pawan Gupta wrote:
> > > Transactional Synchronization Extensions (TSX) may be used on certain
> > > processors as part of a speculative side channel attack.  A microcode
> > > update for existing processors that are vulnerable to this attack will
> > > add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> > > option to disable TSX as one of the possible mitigations.  [Note that
> > > future processors that are not vulnerable will also support the
> > > IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> > > bits.
> > 
> > For some reason the "From:" and the "Subject:" lines are not getting
> > copied to the body. I am using the same "speckify-gitmail" script.
> > 
> > 	$ git format-patch --cover-letter -n --thread -v6 -o ../v6 --to speck@linutronix.de v5.3..HEAD
> > 	$ speckify-gitmail -s "TAAv6" v6 speck
> > 
> > Let me see what's wrong and re-send the series. Sorry for the trouble.
> 
> Speckify-gitmail script is doing the right thing. Dumping the message body
> content just before gpg.encrypt() shows the correct "From:" and "Subject:". I
> don't know where it is getting stripped off.
> 
> Thanks,
> Pawan
> 
> --- a/speckify-gitmail
> +++ b/speckify-gitmail
> @@ -105,6 +105,7 @@ for f in sorted(infiles):
>          content = 'From: %s\n' %mfrom

Can you try to insert a blank newline before the From:?

    	   content = '\nFrom: %s\n' %mfrom

Thanks,

	tglx


* [MODERATED] Re: [PATCH v6 8/9] TAAv6 8
       [not found] ` <4b15283c29b75be3177eb7c4b8601be5644f630e.1570658889.git.pawan.kumar.gupta@linux.intel.com>
@ 2019-10-18  1:21   ` Ben Hutchings
  0 siblings, 0 replies; 31+ messages in thread
From: Ben Hutchings @ 2019-10-18  1:21 UTC (permalink / raw)
  To: speck


On Wed, 2019-10-09 at 16:29 -0700, speck for Pawan Gupta wrote:
> Add the documentation for TSX Async Abort. Include a description of
> the issue, how to check the mitigation state, how to control the
> mitigation, and guidance for system administrators.
[...]
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
[...]
> @@ -4656,6 +4658,40 @@
>  			neutralize any effect of /proc/sys/kernel/sysrq.
>  			Useful for debugging.
>  
> +	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
> +			Abort (TAA) vulnerability.
[...]
>  	tcpmhash_entries= [KNL,NET]
>  			Set the number of tcp_metrics_hash slots.
>  			Default value is 8192 or 16384 depending on total
[...]

A relatively minor issue: the entries in kernel-parameters.txt should
be sorted alphabetically, and "tsx_async_abort" is being inserted out
of order.

Ben.

-- 
Ben Hutchings
It's easier to fight for one's principles than to live up to them.





* [MODERATED] Re: [PATCH v6 0/9] TAAv6 0
  2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
                   ` (14 preceding siblings ...)
       [not found] ` <4b15283c29b75be3177eb7c4b8601be5644f630e.1570658889.git.pawan.kumar.gupta@linux.intel.com>
@ 2019-10-21 20:04 ` Josh Poimboeuf
  2019-10-21 20:09   ` Pawan Gupta
  15 siblings, 1 reply; 31+ messages in thread
From: Josh Poimboeuf @ 2019-10-21 20:04 UTC (permalink / raw)
  To: speck

On Wed, Oct 09, 2019 at 04:21:56PM -0700, speck for Pawan Gupta wrote:
> Changes since v5:
> - Remove unsafe X86_FEATURE_RTM toggles.
> - Have only boot cpu call tsx_init()
> - s/read_ia32_arch_cap/x86_read_arch_cap_msr/
> - Move TSX sysfs knob part to the end after documentation patch.
> - Changelog, comments and documentation update.

Hi Pawan,

We are at the point where we need patches for testing on RHEL.  Do you
plan to release v7 soon?

-- 
Josh


* [MODERATED] Re: [PATCH v6 0/9] TAAv6 0
  2019-10-21 20:04 ` [MODERATED] Re: [PATCH v6 0/9] TAAv6 0 Josh Poimboeuf
@ 2019-10-21 20:09   ` Pawan Gupta
  0 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:09 UTC (permalink / raw)
  To: speck

On Mon, Oct 21, 2019 at 03:04:52PM -0500, speck for Josh Poimboeuf wrote:
> On Wed, Oct 09, 2019 at 04:21:56PM -0700, speck for Pawan Gupta wrote:
> > Changes since v5:
> > - Remove unsafe X86_FEATURE_RTM toggles.
> > - Have only boot cpu call tsx_init()
> > - s/read_ia32_arch_cap/x86_read_arch_cap_msr/
> > - Move TSX sysfs knob part to the end after documentation patch.
> > - Changelog, comments and documentation update.
> 
> Hi Pawan,
> 
> We are at the point where we need patches for testing on RHEL.  Do you
> plan to release v7 soon?

I am mostly done updating the documentation bits. I will post v7 soon.

Thanks,
Pawan


* [MODERATED] Re: [PATCH v6 1/9] TAAv6 1
  2019-10-15 12:54     ` Thomas Gleixner
@ 2019-10-21 20:35       ` Pawan Gupta
  0 siblings, 0 replies; 31+ messages in thread
From: Pawan Gupta @ 2019-10-21 20:35 UTC (permalink / raw)
  To: speck

On Tue, Oct 15, 2019 at 02:54:04PM +0200, speck for Thomas Gleixner wrote:
> On Wed, 9 Oct 2019, speck for Pawan Gupta wrote:
> > On Wed, Oct 09, 2019 at 04:34:01PM -0700, speck for Pawan Gupta wrote:
> > > On Wed, Oct 09, 2019 at 04:22:56PM -0700, speck for Pawan Gupta wrote:
> > > > Transactional Synchronization Extensions (TSX) may be used on certain
> > > > processors as part of a speculative side channel attack.  A microcode
> > > > update for existing processors that are vulnerable to this attack will
> > > > add a new MSR, IA32_TSX_CTRL to allow the system administrator the
> > > > option to disable TSX as one of the possible mitigations.  [Note that
> > > > future processors that are not vulnerable will also support the
> > > > IA32_TSX_CTRL MSR].  Add defines for the new IA32_TSX_CTRL MSR and its
> > > > bits.
> > > 
> > > For some reason the "From:" and the "Subject:" lines are not getting
> > > copied to the body. I am using the same "speckify-gitmail" script.
> > > 
> > > 	$ git format-patch --cover-letter -n --thread -v6 -o ../v6 --to speck@linutronix.de v5.3..HEAD
> > > 	$ speckify-gitmail -s "TAAv6" v6 speck
> > > 
> > > Let me see what's wrong and re-send the series. Sorry for the trouble.
> > 
> > Speckify-gitmail script is doing the right thing. Dumping the message body
> > content just before gpg.encrypt() shows the correct "From:" and "Subject:". I
> > don't know where it is getting stripped off.
> > 
> > Thanks,
> > Pawan
> > 
> > --- a/speckify-gitmail
> > +++ b/speckify-gitmail
> > @@ -105,6 +105,7 @@ for f in sorted(infiles):
> >          content = 'From: %s\n' %mfrom
> 
> Can you try to insert a blank newline before the From:?
> 
>     	   content = '\nFrom: %s\n' %mfrom

Thanks, "\n" fixed it.

-Pawan


end of thread, other threads:[~2019-10-21 20:41 UTC | newest]

Thread overview: 31+ messages
2019-10-09 23:21 [MODERATED] [PATCH v6 0/9] TAAv6 0 Pawan Gupta
2019-10-09 23:22 ` [MODERATED] [PATCH v6 1/9] TAAv6 1 Pawan Gupta
2019-10-09 23:23 ` [MODERATED] [PATCH v6 2/9] TAAv6 2 Pawan Gupta
2019-10-09 23:24 ` [MODERATED] [PATCH v6 3/9] TAAv6 3 Pawan Gupta
2019-10-09 23:25 ` [MODERATED] [PATCH v6 4/9] TAAv6 4 Pawan Gupta
2019-10-09 23:26 ` [MODERATED] [PATCH v6 5/9] TAAv6 5 Pawan Gupta
2019-10-09 23:27 ` [MODERATED] [PATCH v6 6/9] TAAv6 6 Pawan Gupta
2019-10-09 23:28 ` [MODERATED] [PATCH v6 7/9] TAAv6 7 Pawan Gupta
2019-10-09 23:29 ` [MODERATED] [PATCH v6 8/9] TAAv6 8 Pawan Gupta
2019-10-09 23:30 ` [MODERATED] [PATCH v6 9/9] TAAv6 9 Pawan Gupta
2019-10-09 23:34 ` [MODERATED] Re: [PATCH v6 1/9] TAAv6 1 Pawan Gupta
2019-10-10  1:23   ` Pawan Gupta
2019-10-15 12:54     ` Thomas Gleixner
2019-10-21 20:35       ` [MODERATED] " Pawan Gupta
2019-10-09 23:38 ` Andrew Cooper
2019-10-09 23:40   ` Andrew Cooper
2019-10-09 23:53     ` Luck, Tony
2019-10-10  0:01       ` Andrew Cooper
2019-10-10 16:51         ` Luck, Tony
     [not found] ` <5d9e6daa.1c69fb81.f84ad.88ceSMTPIN_ADDED_BROKEN@mx.google.com>
2019-10-10  6:47   ` [MODERATED] Re: [PATCH v6 3/9] TAAv6 3 Greg KH
2019-10-10 23:44     ` Pawan Gupta
     [not found] ` <5d9e6e22.1c69fb81.6df19.ff55SMTPIN_ADDED_BROKEN@mx.google.com>
2019-10-10  6:50   ` [MODERATED] Re: [PATCH v6 5/9] TAAv6 5 Greg KH
2019-10-10 21:18     ` Pawan Gupta
2019-10-10  6:50   ` Greg KH
     [not found] ` <5d9e6f13.1c69fb81.d7036.be99SMTPIN_ADDED_BROKEN@mx.google.com>
2019-10-10  6:54   ` [MODERATED] Re: [PATCH v6 9/9] TAAv6 9 Greg KH
2019-10-12  1:41     ` Pawan Gupta
2019-10-13 20:05       ` Ben Hutchings
2019-10-13 21:00         ` Ben Hutchings
     [not found] ` <4b15283c29b75be3177eb7c4b8601be5644f630e.1570658889.git.pawan.kumar.gupta@linux.intel.com>
2019-10-18  1:21   ` [MODERATED] Re: [PATCH v6 8/9] TAAv6 8 Ben Hutchings
2019-10-21 20:04 ` [MODERATED] Re: [PATCH v6 0/9] TAAv6 0 Josh Poimboeuf
2019-10-21 20:09   ` Pawan Gupta
