historical-speck.lore.kernel.org archive mirror
* [MODERATED] [PATCH 1/9] TAA 1
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
@ 2019-10-23  8:45 ` Pawan Gupta
  2019-10-24 15:22   ` [MODERATED] " Josh Poimboeuf
  2019-10-23  8:52 ` [MODERATED] [PATCH 2/9] TAA 2 Pawan Gupta
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23  8:45 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 1/9] x86/msr: Add the IA32_TSX_CTRL MSR

Transactional Synchronization Extensions (TSX) may be used on certain
processors as part of a speculative side channel attack.  A microcode
update for existing processors that are vulnerable to this attack will
add a new MSR - IA32_TSX_CTRL to allow the system administrator the
option to disable TSX as one of the possible mitigations.

  [ Note that future processors that are not vulnerable will also
    support the IA32_TSX_CTRL MSR. ]

Add defines for the new IA32_TSX_CTRL MSR and its bits.

TSX has two sub-features:

1. Restricted Transactional Memory (RTM) is an explicitly-used feature
   where new instructions begin and end TSX transactions.
2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
   "old" style locks are used by software.

Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
IA32_TSX_CTRL MSR.

There are two control bits in IA32_TSX_CTRL MSR:

  Bit 0: When set, it disables the Restricted Transactional Memory (RTM)
         sub-feature of TSX (will force all transactions to abort on the
         XBEGIN instruction).

  Bit 1: When set, it disables the enumeration of the RTM and HLE features
         (i.e. it will make CPUID(EAX=7).EBX{bit4} and
         CPUID(EAX=7).EBX{bit11} read as 0).

The other TSX sub-feature, Hardware Lock Elision (HLE), is
unconditionally disabled with the updated microcode but still
enumerated as present by CPUID(EAX=7).EBX{bit4}.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/include/asm/msr-index.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 20ce682a2540..da4caf6da739 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -93,6 +93,7 @@
 						  * Microarchitectural Data
 						  * Sampling (MDS) vulnerabilities.
 						  */
+#define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
@@ -103,6 +104,10 @@
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
 
+#define MSR_IA32_TSX_CTRL		0x00000122
+#define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
+#define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
+
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
 #define MSR_IA32_SYSENTER_EIP		0x00000176
-- 
2.21.0


* [MODERATED] [PATCH 2/9] TAA 2
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
  2019-10-23  8:45 ` [MODERATED] [PATCH 1/9] TAA 1 Pawan Gupta
@ 2019-10-23  8:52 ` Pawan Gupta
  2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23  8:52 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 2/9] x86/cpu: Add a helper function x86_read_arch_cap_msr()

Add a helper function to read the IA32_ARCH_CAPABILITIES MSR.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/kernel/cpu/common.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9ae7d1bcd4f4..897c8302d982 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1092,19 +1092,26 @@ static bool __init cpu_matches(unsigned long which)
 	return m && !!(m->driver_data & which);
 }
 
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+u64 x86_read_arch_cap_msr(void)
 {
 	u64 ia32_cap = 0;
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	return ia32_cap;
+}
+
+static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
 	if (cpu_matches(NO_SPECULATION))
 		return;
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
-- 
2.21.0


* [MODERATED] [PATCH 3/9] TAA 3
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
  2019-10-23  8:45 ` [MODERATED] [PATCH 1/9] TAA 1 Pawan Gupta
  2019-10-23  8:52 ` [MODERATED] [PATCH 2/9] TAA 2 Pawan Gupta
@ 2019-10-23  9:01 ` Pawan Gupta
  2019-10-24 15:30   ` [MODERATED] " Josh Poimboeuf
                     ` (2 more replies)
  2019-10-23  9:30 ` [MODERATED] [PATCH 4/9] TAA 4 Pawan Gupta
                   ` (5 subsequent siblings)
  8 siblings, 3 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23  9:01 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
 default

Add a kernel cmdline parameter "tsx" to control the Transactional
Synchronization Extensions (TSX) feature. On CPUs that support TSX
control, use "tsx=on|off" to enable or disable TSX. Not specifying this
option is equivalent to "tsx=off". The default is off because on certain
processors TSX may be used as part of a speculative side channel attack.

Carve out the TSX controlling functionality into a separate compilation
unit because TSX is a CPU feature while the TSX async abort control
machinery will go to cpu/bugs.c.

 [ bp: Massage, shorten and clear the arg buffer. ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Babu Moger <Babu.Moger@amd.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-doc@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Rahul Tanwar <rahul.tanwar@linux.intel.com>
Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Cc: Zhao Yakui <yakui.zhao@intel.com>
---
 .../admin-guide/kernel-parameters.txt         |  11 ++
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/common.c                  |   2 +
 arch/x86/kernel/cpu/cpu.h                     |  18 +++
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 119 ++++++++++++++++++
 6 files changed, 156 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/tsx.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f8881e..ad6b69057bb0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4848,6 +4848,17 @@
 			interruptions from clocksource watchdog are not
 			acceptable).
 
+	tsx=		[X86] Control Transactional Synchronization
+			Extensions (TSX) feature in Intel processors that
+			support TSX control.
+
+			This parameter controls the TSX feature. The options are:
+
+			on	- Enable TSX on the system.
+			off	- Disable TSX on the system.
+
+			Not specifying this option is equivalent to tsx=off.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index d7a1e5a9331c..890f60083eca 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -30,7 +30,7 @@ obj-$(CONFIG_PROC_FS)	+= proc.o
 obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
 
 ifdef CONFIG_CPU_SUP_INTEL
-obj-y			+= intel.o intel_pconfig.o
+obj-y			+= intel.o intel_pconfig.o tsx.o
 obj-$(CONFIG_PM)	+= intel_epb.o
 endif
 obj-$(CONFIG_CPU_SUP_AMD)		+= amd.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 897c8302d982..885d4ac2111a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1561,6 +1561,8 @@ void __init identify_boot_cpu(void)
 #endif
 	cpu_detect_tlb(&boot_cpu_data);
 	setup_cr_pinning();
+
+	tsx_init();
 }
 
 void identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index c0e2407abdd6..38ab6e115eac 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -44,6 +44,22 @@ struct _tlb_table {
 extern const struct cpu_dev *const __x86_cpu_dev_start[],
 			    *const __x86_cpu_dev_end[];
 
+#ifdef CONFIG_CPU_SUP_INTEL
+enum tsx_ctrl_states {
+	TSX_CTRL_ENABLE,
+	TSX_CTRL_DISABLE,
+	TSX_CTRL_NOT_SUPPORTED,
+};
+
+extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+
+extern void __init tsx_init(void);
+extern void tsx_enable(void);
+extern void tsx_disable(void);
+#else
+static inline void tsx_init(void) { }
+#endif /* CONFIG_CPU_SUP_INTEL */
+
 extern void get_cpu_cap(struct cpuinfo_x86 *c);
 extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
 extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
@@ -62,4 +78,6 @@ unsigned int aperfmperf_get_khz(int cpu);
 
 extern void x86_spec_ctrl_setup_ap(void);
 
+extern u64 x86_read_arch_cap_msr(void);
+
 #endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c2fdc00df163..11d5c5950e2d 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -762,6 +762,11 @@ static void init_intel(struct cpuinfo_x86 *c)
 		detect_tme(c);
 
 	init_intel_misc_features(c);
+
+	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+		tsx_enable();
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+		tsx_disable();
 }
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
new file mode 100644
index 000000000000..e5933ef50add
--- /dev/null
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Intel Transactional Synchronization Extensions (TSX) control.
+ *
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * Author:
+ *	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ */
+
+#include <linux/cpufeature.h>
+
+#include <asm/cmdline.h>
+
+#include "cpu.h"
+
+enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+
+void tsx_disable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Force all transactions to immediately abort */
+	tsx |= TSX_CTRL_RTM_DISABLE;
+
+	/*
+	 * Ensure TSX support is not enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * do not waste resources trying TSX transactions that
+	 * will always abort.
+	 */
+	tsx |= TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+void tsx_enable(void)
+{
+	u64 tsx;
+
+	rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+	/* Enable the RTM feature in the cpu */
+	tsx &= ~TSX_CTRL_RTM_DISABLE;
+
+	/*
+	 * Ensure TSX support is enumerated in CPUID.
+	 * This is visible to userspace and will ensure they
+	 * can enumerate and use the TSX feature.
+	 */
+	tsx &= ~TSX_CTRL_CPUID_CLEAR;
+
+	wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+static bool __init tsx_ctrl_is_supported(void)
+{
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * TSX is controlled via MSR_IA32_TSX_CTRL.  However,
+	 * support for this MSR is enumerated by the ARCH_CAP_TSX_CTRL_MSR
+	 * bit in MSR_IA32_ARCH_CAPABILITIES.
+	 */
+	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+}
+
+void __init tsx_init(void)
+{
+	char arg[4] = {};
+	int ret;
+
+	if (!tsx_ctrl_is_supported())
+		return;
+
+	ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+	if (ret >= 0) {
+		if (!strcmp(arg, "on")) {
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
+		} else if (!strcmp(arg, "off")) {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else {
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+			pr_err("tsx: invalid option, defaulting to off\n");
+		}
+	} else {
+		/* tsx= not provided, defaulting to off */
+		tsx_ctrl_state = TSX_CTRL_DISABLE;
+	}
+
+	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
+		tsx_disable();
+
+		/*
+		 * tsx_disable() will change the state of the
+		 * RTM CPUID bit.  Clear it here since it is now
+		 * expected to be not set.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RTM);
+	} else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
+
+		/*
+		 * HW defaults TSX to be enabled at bootup.
+		 * We may still need the TSX enable support
+		 * during init for special cases like
+		 * kexec after TSX is disabled.
+		 */
+		tsx_enable();
+
+		/*
+		 * tsx_enable() will change the state of the
+		 * RTM CPUID bit.  Force it here since it is now
+		 * expected to be set.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RTM);
+	}
+}
-- 
2.21.0


* [MODERATED] [PATCH 4/9] TAA 4
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (2 preceding siblings ...)
  2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
@ 2019-10-23  9:30 ` Pawan Gupta
  2019-10-24 15:32   ` [MODERATED] " Josh Poimboeuf
  2019-10-25  0:49   ` Pawan Gupta
  2019-10-23 10:19 ` [MODERATED] [PATCH 5/9] TAA 5 Pawan Gupta
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23  9:30 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 4/9] x86/speculation/taa: Add mitigation for TSX Async Abort

TSX Async Abort (TAA) is a side channel vulnerability of the internal
buffers in some Intel processors, similar to Microarchitectural Data
Sampling (MDS). In this case, certain loads may speculatively pass
invalid data to dependent operations when an asynchronous abort
condition is pending in a TSX transaction.

This includes loads with no fault or assist condition. Such loads may
speculatively expose stale data from the microarchitectural data
structures, as in MDS. The exposure spans both same-thread and
cross-thread scenarios. This issue affects all current processors that
support TSX but do not have
ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.

On CPUs whose IA32_ARCH_CAPABILITIES MSR has MDS_NO=0 and
CPUID.MD_CLEAR=1, the MDS mitigation already clears the CPU buffers
using VERW or L1D_FLUSH, so no additional mitigation is needed for
TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by
disabling the Transactional Synchronization Extensions (TSX) feature.

A new MSR, IA32_TSX_CTRL, available on future processors and on current
processors after a microcode update, can be used to control the TSX
feature. There are two bits in that MSR:

* TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
Transactional Memory (RTM).

* TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled with updated microcode but still enumerated as present by
CPUID(EAX=7).EBX{bit4}.

The second mitigation approach is similar to MDS: clear the affected
CPU buffers on return to user space and when entering a guest. The
relevant microcode update is required for this mitigation to work. More
details on this approach can be found here:
details on this approach can be found here:

  https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html

The TSX feature can be controlled by the "tsx" command line parameter.
If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
deployed. The effective mitigation state can be read from sysfs.

 [ bp:
   - massage + comments cleanup
   - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
   - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
   - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g
 ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/include/asm/cpufeatures.h   |   1 +
 arch/x86/include/asm/msr-index.h     |   4 +
 arch/x86/include/asm/nospec-branch.h |   4 +-
 arch/x86/include/asm/processor.h     |   7 ++
 arch/x86/kernel/cpu/bugs.c           | 110 +++++++++++++++++++++++++++
 arch/x86/kernel/cpu/common.c         |  15 ++++
 6 files changed, 139 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 0652d3eed9bd..989e03544f18 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -399,5 +399,6 @@
 #define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
 #define X86_BUG_MSBDS_ONLY		X86_BUG(20) /* CPU is only affected by the  MSDBS variant of BUG_MDS */
 #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index da4caf6da739..b3a8bb2af0b6 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -94,6 +94,10 @@
 						  * Sampling (MDS) vulnerabilities.
 						  */
 #define ARCH_CAP_TSX_CTRL_MSR		BIT(7)	/* MSR for TSX control is available. */
+#define ARCH_CAP_TAA_NO			BIT(8)	/*
+						 * Not susceptible to
+						 * TSX Async Abort (TAA) vulnerabilities.
+						 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 80bc209c0708..5c24a7b35166 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -314,7 +314,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 #include <asm/segment.h>
 
 /**
- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * This uses the otherwise unused and obsolete VERW instruction in
  * combination with microcode which triggers a CPU buffer flush when the
@@ -337,7 +337,7 @@ static inline void mds_clear_cpu_buffers(void)
 }
 
 /**
- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 6e0a3b43d027..54f5d54280f6 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -988,4 +988,11 @@ enum mds_mitigations {
 	MDS_MITIGATION_VMWERV,
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLED,
+};
+
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 91c2561b905f..6648e0fbed96 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init taa_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
 u64 x86_spec_ctrl_base;
@@ -105,6 +106,7 @@ void __init check_bugs(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 
 	arch_smt_update();
 
@@ -268,6 +270,100 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"TAA: " fmt
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static bool taa_nosmt __ro_after_init;
+
+static const char * const taa_strings[] = {
+	[TAA_MITIGATION_OFF]		= "Vulnerable",
+	[TAA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[TAA_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
+	[TAA_MITIGATION_TSX_DISABLED]	= "Mitigation: TSX disabled",
+};
+
+static void __init taa_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TSX previously disabled by tsx=off */
+	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
+		goto out;
+	}
+
+	if (cpu_mitigations_off()) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+		return;
+	}
+
+	/* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */
+	if (taa_mitigation == TAA_MITIGATION_OFF)
+		goto out;
+
+	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+	 * A microcode update fixes this behavior to clear CPU buffers. It also
+	 * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
+	 * ARCH_CAP_TSX_CTRL_MSR bit.
+	 *
+	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+	 * update is required.
+	 */
+	ia32_cap = x86_read_arch_cap_msr();
+	if ( (ia32_cap & ARCH_CAP_MDS_NO) &&
+	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+	/*
+	 * TSX is enabled, select alternate mitigation for TAA which is
+	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
+	 *
+	 * For guests that can't determine whether the correct microcode is
+	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+	 */
+	static_branch_enable(&mds_user_clear);
+
+	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+		cpu_smt_disable(false);
+
+out:
+	pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static int __init tsx_async_abort_parse_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		taa_mitigation = TAA_MITIGATION_OFF;
+	} else if (!strcmp(str, "full")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+	} else if (!strcmp(str, "full,nosmt")) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+		taa_nosmt = true;
+	}
+
+	return 0;
+}
+early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
@@ -786,6 +882,7 @@ static void update_mds_branch_idle(void)
 }
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
 
 void cpu_bugs_smt_update(void)
 {
@@ -819,6 +916,19 @@ void cpu_bugs_smt_update(void)
 		break;
 	}
 
+	switch (taa_mitigation) {
+	case TAA_MITIGATION_VERW:
+	case TAA_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(TAA_MSG_SMT);
+		/* TSX is enabled, apply MDS idle buffer clearing. */
+		update_mds_branch_idle();
+		break;
+	case TAA_MITIGATION_TSX_DISABLED:
+	case TAA_MITIGATION_OFF:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 885d4ac2111a..f8b8afc8f5b5 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (!cpu_matches(NO_SWAPGS))
 		setup_force_cpu_bug(X86_BUG_SWAPGS);
 
+	/*
+	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
+	 *	- TSX is supported or
+	 *	- TSX_CTRL is present
+	 *
+	 * TSX_CTRL check is needed for cases when TSX could be disabled before
+	 * the kernel boot e.g. kexec.
+	 * TSX_CTRL check alone is not sufficient for cases when the microcode
+	 * update is not present, or when running as a guest without TSX_CTRL.
+	 */
+	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+	    (cpu_has(c, X86_FEATURE_RTM) ||
+	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+		setup_force_cpu_bug(X86_BUG_TAA);
+
 	if (cpu_matches(NO_MELTDOWN))
 		return;
 
-- 
2.21.0


* [MODERATED] [PATCH 5/9] TAA 5
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (3 preceding siblings ...)
  2019-10-23  9:30 ` [MODERATED] [PATCH 4/9] TAA 4 Pawan Gupta
@ 2019-10-23 10:19 ` Pawan Gupta
  2019-10-24 18:30   ` [MODERATED] " Greg KH
  2019-10-23 10:23 ` [MODERATED] [PATCH 6/9] TAA 6 Pawan Gupta
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23 10:19 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 5/9] x86/speculation/taa: Add sysfs reporting for TSX Async
 Abort

Add the sysfs reporting file for TSX Async Abort. It exposes the
vulnerability and the mitigation state similar to the existing files for
the other hardware vulnerabilities.

Sysfs file path is:
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
 drivers/base/cpu.c         |  9 +++++++++
 include/linux/cpu.h        |  3 +++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6648e0fbed96..07a8f812cf66 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1438,6 +1438,21 @@ static ssize_t mds_show_state(char *buf)
 		       sched_smt_active() ? "vulnerable" : "disabled");
 }
 
+static ssize_t tsx_async_abort_show_state(char *buf)
+{
+	if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLED) ||
+	    (taa_mitigation == TAA_MITIGATION_OFF))
+		return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		return sprintf(buf, "%s; SMT Host state unknown\n",
+			       taa_strings[taa_mitigation]);
+	}
+
+	return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+		       sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1508,6 +1523,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_MDS:
 		return mds_show_state(buf);
 
+	case X86_BUG_TAA:
+		return tsx_async_abort_show_state(buf);
+
 	default:
 		break;
 	}
@@ -1544,4 +1562,9 @@ ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *bu
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
 }
+
+ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
+}
 #endif
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index cc37511de866..0fccd8c0312e 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -554,12 +554,20 @@ ssize_t __weak cpu_show_mds(struct device *dev,
 	return sprintf(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	return sprintf(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
 static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
 static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
 static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -568,6 +576,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_spec_store_bypass.attr,
 	&dev_attr_l1tf.attr,
 	&dev_attr_mds.attr,
+	&dev_attr_tsx_async_abort.attr,
 	NULL
 };
 
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index d0633ebdaa9c..f35369f79771 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -59,6 +59,9 @@ extern ssize_t cpu_show_l1tf(struct device *dev,
 			     struct device_attribute *attr, char *buf);
 extern ssize_t cpu_show_mds(struct device *dev,
 			    struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+					struct device_attribute *attr,
+					char *buf);
 
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
-- 
2.21.0


* [MODERATED] [PATCH 6/9] TAA 6
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (4 preceding siblings ...)
  2019-10-23 10:19 ` [MODERATED] [PATCH 5/9] TAA 5 Pawan Gupta
@ 2019-10-23 10:23 ` Pawan Gupta
  2019-10-23 10:28 ` [MODERATED] [PATCH 7/9] TAA 7 Pawan Gupta
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23 10:23 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 6/9] kvm/x86: Export MDS_NO=0 to guests when TSX is enabled

Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX
Async Abort (TAA) affected hosts that have TSX enabled and updated
microcode. This is required so that the guests don't complain:

  "Vulnerable: Clear CPU buffers attempted, no microcode"

when the host has the updated microcode to clear CPU buffers.

The microcode update also adds support for MSR_IA32_TSX_CTRL, which is
enumerated by the ARCH_CAP_TSX_CTRL bit in the IA32_ARCH_CAPABILITIES
MSR. Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL
bit is not exported to them.

In this case export MDS_NO=0 to the guests. When guests have
CPUID.MD_CLEAR=1, they deploy MDS mitigation which also mitigates TAA.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krcmar" <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/kvm/x86.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 661e2bf38526..fb8d80caf642 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1299,6 +1299,25 @@ static u64 kvm_get_arch_capabilities(void)
 	if (!boot_cpu_has_bug(X86_BUG_MDS))
 		data |= ARCH_CAP_MDS_NO;
 
+	/*
+	 * On TAA affected systems, export MDS_NO=0 when:
+	 *	- TSX is enabled on the host, i.e. X86_FEATURE_RTM=1.
+	 *	- Updated microcode is present. This is detected by
+	 *	  the presence of ARCH_CAP_TSX_CTRL_MSR and ensures
+	 *	  that VERW clears CPU buffers.
+	 *
+	 * When MDS_NO=0 is exported, guests deploy clear CPU buffer
+	 * mitigation and don't complain:
+	 *
+	 *	"Vulnerable: Clear CPU buffers attempted, no microcode"
+	 *
+	 * If TSX is disabled on the system, guests are also mitigated against
+	 * TAA and clear CPU buffer mitigation is not required for guests.
+	 */
+	if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+	    (data & ARCH_CAP_TSX_CTRL_MSR))
+		data &= ~ARCH_CAP_MDS_NO;
+
 	return data;
 }
 
-- 
2.21.0


* [MODERATED] [PATCH 7/9] TAA 7
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (5 preceding siblings ...)
  2019-10-23 10:23 ` [MODERATED] [PATCH 6/9] TAA 6 Pawan Gupta
@ 2019-10-23 10:28 ` Pawan Gupta
  2019-10-24 15:35   ` [MODERATED] " Josh Poimboeuf
  2019-10-23 10:32 ` [MODERATED] [PATCH 8/9] TAA 8 Pawan Gupta
  2019-10-23 10:35 ` [MODERATED] [PATCH 9/9] TAA 9 Michal Hocko
  8 siblings, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23 10:28 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 7/9] x86/tsx: Add "auto" option to the tsx= cmdline parameter

Platforms which are not affected by X86_BUG_TAA may want the TSX feature
enabled. Add an "auto" option to the tsx= cmdline parameter: when
tsx=auto is specified, TSX is disabled if X86_BUG_TAA is present,
otherwise TSX is enabled.

More details on X86_BUG_TAA can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

 [ bp: Extend the arg buffer to accommodate "auto\0". ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: linux-doc@vger.kernel.org
Cc: Mark Gross <mgross@linux.intel.com>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 Documentation/admin-guide/kernel-parameters.txt | 5 +++++
 arch/x86/kernel/cpu/tsx.c                       | 7 ++++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ad6b69057bb0..e1aca10f2a7f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4856,6 +4856,11 @@
 
 			on	- Enable TSX on the system.
 			off	- Disable TSX on the system.
+			auto	- Disable TSX if X86_BUG_TAA is present,
+				  otherwise enable TSX on the system.
+
+			More details on X86_BUG_TAA here:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 
 			Not specifying this option is equivalent to tsx=off.
 
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e5933ef50add..89ab91eacd4f 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -69,7 +69,7 @@ static bool __init tsx_ctrl_is_supported(void)
 
 void __init tsx_init(void)
 {
-	char arg[4] = {};
+	char arg[5] = {};
 	int ret;
 
 	if (!tsx_ctrl_is_supported())
@@ -81,6 +81,11 @@ void __init tsx_init(void)
 			tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		} else if (!strcmp(arg, "auto")) {
+			if (boot_cpu_has_bug(X86_BUG_TAA))
+				tsx_ctrl_state = TSX_CTRL_DISABLE;
+			else
+				tsx_ctrl_state = TSX_CTRL_ENABLE;
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
-- 
2.21.0


* [MODERATED] [PATCH 8/9] TAA 8
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (6 preceding siblings ...)
  2019-10-23 10:28 ` [MODERATED] [PATCH 7/9] TAA 7 Pawan Gupta
@ 2019-10-23 10:32 ` Pawan Gupta
  2019-10-24 16:03   ` [MODERATED] " Josh Poimboeuf
  2019-10-23 10:35 ` [MODERATED] [PATCH 9/9] TAA 9 Michal Hocko
  8 siblings, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-23 10:32 UTC (permalink / raw)
  To: speck

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH 8/9] x86/speculation/taa: Add documentation for TSX Async
 Abort

Add the documentation for TSX Async Abort: a description of the issue,
how to check the mitigation state, how to control the mitigation, and
guidance for system administrators.

 [ bp: Add proper SPDX tags, touch ups. ]

Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: linux-doc@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
---
 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 256 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  36 +++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         | 117 ++++++++
 6 files changed, 412 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 06d0931119cc..0e77569bd5e0 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -486,6 +486,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
 		/sys/devices/system/cpu/vulnerabilities/l1tf
 		/sys/devices/system/cpu/vulnerabilities/mds
+		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:	Information about CPU vulnerabilities
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 49311f3da6f2..0802b1c67452 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -12,3 +12,4 @@ are configurable at compile, boot or run time.
    spectre
    l1tf
    mds
+   tsx_async_abort
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
new file mode 100644
index 000000000000..5210c726771e
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -0,0 +1,256 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+TAA - TSX Asynchronous Abort
+======================================
+
+TAA is a hardware vulnerability that allows unprivileged speculative access to
+data which is available in various CPU internal buffers by using asynchronous
+aborts within an Intel TSX transactional region.
+
+Affected processors
+-------------------
+
+This vulnerability only affects Intel processors that support Intel
+Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
+is 0 in the IA32_ARCH_CAPABILITIES MSR.  On processors where the MDS_NO bit
+(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
+also mitigate against TAA.
+
+Whether a processor is affected or not can be read out from the TAA
+vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entry is related to this TAA issue:
+
+   ==============  =====  ===================================================
+   CVE-2019-11135  TAA    TSX Asynchronous Abort (TAA) condition on some
+                          microprocessors utilizing speculative execution may
+                          allow an authenticated user to potentially enable
+                          information disclosure via a side channel with
+                          local access.
+   ==============  =====  ===================================================
+
+Problem
+-------
+
+When performing store, load or L1 refill operations, processors write
+data into temporary microarchitectural structures (buffers). The data in
+those buffers can be forwarded to load operations as an optimization.
+
+Intel TSX is an extension to the x86 instruction set architecture that adds
+hardware transactional memory support to improve performance of multi-threaded
+software. TSX lets the processor expose and exploit concurrency hidden in an
+application by dynamically avoiding unnecessary synchronization.
+
+TSX supports atomic memory transactions that are either committed (success) or
+aborted. During an abort, operations that happened within the transactional region
+are rolled back. An asynchronous abort takes place, among other reasons, when a
+different thread accesses a cache line that is also used within the transactional
+region and that access might lead to a data race.
+
+Immediately after an uncompleted asynchronous abort, certain speculatively
+executed loads may read data from those internal buffers and pass it to dependent
+operations. This can then be used to infer the value via a cache side channel
+attack.
+
+Because the buffers are potentially shared between Hyper-Threads, cross
+Hyper-Thread attacks are possible.
+
+The victim of a malicious actor does not need to make use of TSX. Only the
+attacker needs to begin a TSX transaction and raise an asynchronous abort
+to try to leak some of the data stored in the buffers.
+
+More detailed technical information is available in the TAA specific x86
+architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
+
+
+Attack scenarios
+----------------
+
+Attacks against the TAA vulnerability can be implemented from unprivileged
+applications running on hosts or guests.
+
+As for MDS, the attacker has no control over the memory addresses that can be
+leaked. Only the victim is responsible for bringing data to the CPU. As a
+result, the malicious actor has to first sample as much data as possible and
+then postprocess it to try to infer any useful information from it.
+
+A potential attacker only has read access to the data. Also, there is no direct
+privilege escalation by using this technique.
+
+
+.. _tsx_async_abort_sys_info:
+
+TAA system information
+-----------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current TAA status
+of mitigated systems. The relevant sysfs file is:
+
+/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+
+The possible values in this file are:
+
+.. list-table::
+
+   * - 'Vulnerable'
+     - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
+   * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
+     - The system tries to clear the buffers but the microcode might not support the operation.
+   * - 'Mitigation: Clear CPU buffers'
+     - The microcode has been updated to clear the buffers. TSX is still enabled.
+   * - 'Mitigation: TSX disabled'
+     - TSX is disabled.
+   * - 'Not affected'
+     - The CPU is not affected by this issue.
+
+.. _ucode_needed:
+
+Best effort mitigation mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the processor is vulnerable, but the availability of the microcode-based
+mitigation mechanism is not advertised via CPUID, the kernel selects a best
+effort mitigation mode.  This mode invokes the mitigation instructions
+without a guarantee that they clear the CPU buffers.
+
+This is done to address virtualization scenarios where the host has the
+microcode update applied, but the hypervisor is not yet updated to expose the
+CPUID to the guest. If the host has updated microcode the protection takes
+effect; otherwise a few CPU cycles are wasted pointlessly.
+
+The state in the tsx_async_abort sysfs file reflects this situation
+accordingly.
+
+
+Mitigation mechanism
+--------------------
+
+The kernel detects the affected CPUs and the presence of the microcode which is
+required. If a CPU is affected and the microcode is available, then the kernel
+enables the mitigation by default.
+
+
+The mitigation can be controlled at boot time via a kernel command line option.
+See :ref:`taa_mitigation_control_command_line`.
+
+.. _virt_mechanism:
+
+Virtualization mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Affected systems where the host has the TAA microcode and the TAA mitigation is
+ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
+
+In all other cases, if the host either does not have the TAA microcode or the
+kernel is not mitigated, the system might be vulnerable.
+
+
+.. _taa_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+The kernel command line allows controlling the TAA mitigations at boot time
+with the "tsx_async_abort=" option. The valid arguments for this option are:
+
+  ============  =============================================================
+  off		This option disables the TAA mitigation on affected platforms.
+                If the system has TSX enabled (see next parameter) and the CPU
+                is affected, the system is vulnerable.
+
+  full	        TAA mitigation is enabled. On an affected system with TSX
+                enabled, it will clear CPU buffers on ring transitions. On
+                systems which are MDS-affected and deploy the MDS mitigation,
+                TAA is also mitigated. Specifying this option on those
+                systems will have no effect.
+
+  full,nosmt    The same as tsx_async_abort=full, with SMT disabled on
+                vulnerable CPUs that have TSX enabled. This is the complete
+                mitigation. When TSX is disabled, SMT is not disabled because
+                the CPU is not vulnerable to cross-thread TAA attacks.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx_async_abort=full".
+
+The kernel command line also allows controlling the TSX feature using the
+"tsx=" parameter on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+to control the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+The valid options are:
+
+  ============  =============================================================
+  off		Disables TSX.
+
+  on		Enables TSX.
+
+  auto		Disables TSX on affected platforms, otherwise enables TSX.
+  ============  =============================================================
+
+Not specifying this option is equivalent to "tsx=off".
+
+The following combinations of the "tsx_async_abort" and "tsx" options are
+possible. For affected platforms, tsx=auto is equivalent to tsx=off and the
+result will be:
+
+  =========  ====================   =========================================
+  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
+                                    buffers.
+  tsx=on     tsx_async_abort=off    The system is vulnerable.
+  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
+  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
+  =========  ====================   =========================================
+
+On unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU
+buffers.  On platforms without TSX control the "tsx" command line argument has
+no effect.
+
+For affected platforms, the table below indicates the mitigation status for
+combinations of the CPUID bit MD_CLEAR and the IA32_ARCH_CAPABILITIES MSR bits
+MDS_NO and TSX_CTRL_MSR.
+
+  =======  =========  =============  ========================================
+  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
+  =======  =========  =============  ========================================
+    0          0            0        Vulnerable (needs ucode)
+    0          1            0        MDS and TAA mitigated via VERW
+    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
+                                     because MD_CLEAR has no meaning and
+                                     VERW is not guaranteed to clear buffers
+    1          X            1        MDS fixed, TAA can be mitigated by
+                                     VERW or TSX_CTRL_MSR
+  =======  =========  =============  ========================================
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If all user space applications are from a trusted source and do not execute
+untrusted code which is supplied externally, then the mitigation can be
+disabled. The same applies to virtualized environments with trusted guests.
+
+
+2. Untrusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If there are untrusted applications or guests on the system, enabling TSX
+might allow a malicious actor to leak data from the host or from other
+processes running on the same physical core.
+
+If the microcode is available and TSX is disabled on the host, attacks
+are prevented in a virtualized environment as well, even if the VMs do not
+explicitly enable the mitigation.
+
+
+.. _taa_default_mitigations:
+
+Default mitigations
+-------------------
+
+The kernel's default action for vulnerable processors is:
+
+  - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index e1aca10f2a7f..6eb1c0c8018c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2636,6 +2636,7 @@
 					       ssbd=force-off [ARM64]
 					       l1tf=off [X86]
 					       mds=off [X86]
+					       tsx_async_abort=off [X86]
 
 			auto (default)
 				Mitigate all CPU vulnerabilities, but leave SMT
@@ -2651,6 +2652,7 @@
 				be fully mitigated, even if it means losing SMT.
 				Equivalent to: l1tf=flush,nosmt [X86]
 					       mds=full,nosmt [X86]
+					       tsx_async_abort=full,nosmt [X86]
 
 	mminit_loglevel=
 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
@@ -4864,6 +4866,40 @@
 
 			Not specifying this option is equivalent to tsx=off.
 
+	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+			Abort (TAA) vulnerability.
+
+			Similar to Micro-architectural Data Sampling (MDS),
+			certain CPUs that support Transactional
+			Synchronization Extensions (TSX) are vulnerable to an
+			exploit against CPU internal buffers which can forward
+			information to a disclosure gadget under certain
+			conditions.
+
+			In vulnerable processors, the speculatively forwarded
+			data can be used in a cache side channel attack, to
+			access data to which the attacker does not have direct
+			access.
+
+			This parameter controls the TAA mitigation.  The
+			options are:
+
+			full       - Enable TAA mitigation on vulnerable CPUs
+			full,nosmt - Enable TAA mitigation and disable SMT on
+				     vulnerable CPUs. If TSX is disabled, SMT
+				     is not disabled because CPU is not
+				     vulnerable to cross-thread TAA attacks.
+			off        - Unconditionally disable TAA mitigation
+
+			Not specifying this option is equivalent to
+			tsx_async_abort=full.  On CPUs which are MDS affected
+			and deploy MDS mitigation, TAA mitigation is not
+			required and doesn't provide any additional
+			mitigation.
+
+			For details see:
+			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index af64c4bb4447..a8de2fbc1caa 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -27,6 +27,7 @@ x86-specific Documentation
    mds
    microcode
    resctrl_ui
+   tsx_async_abort
    usb-legacy-support
    i386/index
    x86_64/index
diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst
new file mode 100644
index 000000000000..583ddc185ba2
--- /dev/null
+++ b/Documentation/x86/tsx_async_abort.rst
@@ -0,0 +1,117 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+TSX Async Abort (TAA) mitigation
+================================
+
+.. _tsx_async_abort:
+
+Overview
+--------
+
+TSX Async Abort (TAA) is a side channel attack on internal buffers in some
+Intel processors similar to Microarchitectural Data Sampling (MDS).  In this
+case certain loads may speculatively pass invalid data to dependent operations
+when an asynchronous abort condition is pending in a Transactional
+Synchronization Extensions (TSX) transaction.  This includes loads with no
+fault or assist condition. Such loads may speculatively expose stale data from
+the same uarch data structures as in MDS, with the same scope of exposure, i.e.
+same-thread and cross-thread. This issue affects all current processors that
+support TSX.
+
+Mitigation strategy
+-------------------
+
+a) TSX disable - one of the mitigations is to disable TSX. A new MSR,
+IA32_TSX_CTRL, available on future processors and on current processors
+after a microcode update, can be used to disable TSX. In addition, it
+controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
+
+b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
+vulnerability. More details on this approach can be found in
+:ref:`Documentation/admin-guide/hw-vuln/mds.rst <mds>`.
+
+Kernel internal mitigation modes
+--------------------------------
+
+ =============    ============================================================
+ off              Mitigation is disabled. Either the CPU is not affected or
+                  tsx_async_abort=off is supplied on the kernel command line.
+
+ tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
+                  bootup on processors that support TSX control.
+
+ verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
+                  advertised in CPUID.
+
+ ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
+                  advertised in CPUID. That is mainly for virtualization
+                  scenarios where the host has the updated microcode but the
+                  hypervisor does not expose MD_CLEAR in CPUID. It's a best
+                  effort approach without guarantee.
+ =============    ============================================================
+
+If the CPU is affected and the "tsx_async_abort" kernel command line parameter
+is not provided, then the kernel selects an appropriate mitigation depending on
+the status of the RTM and MD_CLEAR CPUID bits.
+
+The tables below indicate the impact of the tsx=on|off|auto cmdline options on
+the state of the TAA mitigation, VERW behavior and the TSX feature for various
+combinations of MSR_IA32_ARCH_CAPABILITIES bits.
+
+1. "tsx=off"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=off
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default         Yes           Same as MDS           Same as MDS
+    0          0           1        Invalid case   Invalid case       Invalid case          Invalid case
+    0          1           0         HW default         No         Need ucode update     Need ucode update
+    0          1           1          Disabled          Yes           TSX disabled          TSX disabled
+    1          X           1          Disabled           X             None needed           None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+2. "tsx=on"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=on
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default        Yes            Same as MDS          Same as MDS
+    0          0           1        Invalid case   Invalid case       Invalid case         Invalid case
+    0          1           0         HW default        No          Need ucode update     Need ucode update
+    0          1           1          Enabled          Yes               None              Same as MDS
+    1          X           1          Enabled          X              None needed          None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+3. "tsx=auto"
+
+=========  =========  ============  ============  ==============  ===================  ======================
+MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=auto
+----------------------------------  -------------------------------------------------------------------------
+TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
+                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
+=========  =========  ============  ============  ==============  ===================  ======================
+    0          0           0         HW default    Yes                Same as MDS           Same as MDS
+    0          0           1        Invalid case  Invalid case        Invalid case          Invalid case
+    0          1           0         HW default    No              Need ucode update     Need ucode update
+    0          1           1          Disabled      Yes               TSX disabled          TSX disabled
+    1          X           1          Enabled       X                 None needed           None needed
+=========  =========  ============  ============  ==============  ===================  ======================
+
+In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
+indicates whether MSR_IA32_TSX_CTRL is supported.
+
+There are two control bits in IA32_TSX_CTRL MSR:
+
+      Bit 0: When set it disables the Restricted Transactional Memory (RTM)
+             sub-feature of TSX (will force all transactions to abort on the
+             XBEGIN instruction).
+
+      Bit 1: When set it disables the enumeration of the RTM and HLE feature
+             (i.e. it will make CPUID(EAX=7).EBX{bit4} and
+             CPUID(EAX=7).EBX{bit11} read as 0).
-- 
2.21.0


* [MODERATED] [PATCH 9/9] TAA 9
  2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
                   ` (7 preceding siblings ...)
  2019-10-23 10:32 ` [MODERATED] [PATCH 8/9] TAA 8 Pawan Gupta
@ 2019-10-23 10:35 ` Michal Hocko
  2019-10-24 16:10   ` [MODERATED] " Josh Poimboeuf
  8 siblings, 1 reply; 75+ messages in thread
From: Michal Hocko @ 2019-10-23 10:35 UTC (permalink / raw)
  To: speck

From: Michal Hocko <mhocko@suse.com>
Subject: [PATCH 9/9] x86/tsx: Add config options to set tsx=on|off|auto

There is a general consensus that TSX usage is not widespread, while
history shows there is a non-trivial space for possible side channel
attacks. Therefore TSX is disabled by default, even on platforms that
might have a safe implementation of TSX according to current
knowledge. This is a fair trade-off to make.

There are, however, workloads that really do benefit from using TSX, and
updating to a newer kernel with TSX disabled might introduce noticeable
regressions. This would be especially a problem for Linux distributions
which will provide TAA mitigations.

Introduce config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON
and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config
setting can be overridden by the tsx= cmdline option.

Suggested-by: Borislav Petkov <bpetkov@suse.de>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: x86-ml <x86@kernel.org>
---
 arch/x86/Kconfig          | 45 +++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/tsx.c | 22 +++++++++++++------
 2 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d6e1faa28c58..eebae89726c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1940,6 +1940,51 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
 
 	  If unsure, say y.
 
+choice
+	prompt "TSX enable mode"
+	depends on CPU_SUP_INTEL
+	default X86_INTEL_TSX_MODE_OFF
+	help
+	  Intel's TSX (Transactional Synchronization Extensions) feature
+	  allows optimizing locking protocols through lock elision which
+	  can lead to a noticeable performance boost.
+
+	  On the other hand it has been shown that TSX can be exploited
+	  to form side channel attacks (e.g. TAA) and chances are there
+	  will be more of those attacks discovered in the future.
+
+	  Therefore TSX is not enabled by default (aka tsx=off). An admin
+	  might override this decision with the tsx=on command line parameter.
+	  This carries the risk that TSX will get enabled also on platforms
+	  which are known to be vulnerable to attacks like TAA; a safer option
+	  is the tsx=auto command line parameter.
+
+	  This option allows setting the default tsx mode between tsx=on, off
+	  and auto. See Documentation/admin-guide/kernel-parameters.txt for more
+	  details.
+
+	  Say off if not sure, auto if TSX is in use but should only be used
+	  on safe platforms, or on if TSX is in use and the security aspect of
+	  TSX is not relevant.
+
+config X86_INTEL_TSX_MODE_OFF
+	bool "off"
+	help
+	  TSX is always disabled - equals tsx=off command line parameter.
+
+config X86_INTEL_TSX_MODE_ON
+	bool "on"
+	help
+	  TSX is always enabled on TSX capable HW - equals tsx=on command line
+	  parameter.
+
+config X86_INTEL_TSX_MODE_AUTO
+	bool "auto"
+	help
+	  TSX is enabled on TSX capable HW that is believed to be safe against
+	  side channel attacks - equals tsx=auto command line parameter.
+endchoice
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index 89ab91eacd4f..ab400f8bbfe1 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -67,6 +67,14 @@ static bool __init tsx_ctrl_is_supported(void)
 	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
 }
 
+static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		return TSX_CTRL_DISABLE;
+
+	return TSX_CTRL_ENABLE;
+}
+
 void __init tsx_init(void)
 {
 	char arg[5] = {};
@@ -82,17 +90,19 @@ void __init tsx_init(void)
 		} else if (!strcmp(arg, "off")) {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 		} else if (!strcmp(arg, "auto")) {
-			if (boot_cpu_has_bug(X86_BUG_TAA))
-				tsx_ctrl_state = TSX_CTRL_DISABLE;
-			else
-				tsx_ctrl_state = TSX_CTRL_ENABLE;
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
 		} else {
 			tsx_ctrl_state = TSX_CTRL_DISABLE;
 			pr_err("tsx: invalid option, defaulting to off\n");
 		}
 	} else {
-		/* tsx= not provided, defaulting to off */
-		tsx_ctrl_state = TSX_CTRL_DISABLE;
+		/* tsx= not provided */
+		if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_AUTO))
+			tsx_ctrl_state = x86_get_tsx_auto_mode();
+		else if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_OFF))
+			tsx_ctrl_state = TSX_CTRL_DISABLE;
+		else
+			tsx_ctrl_state = TSX_CTRL_ENABLE;
 	}
 
 	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
-- 
2.21.0
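[ Editorial aside: as a user-space sketch (not kernel code), the default-
selection logic that the tsx.c hunk above adds can be modeled as below.
The enum and the on/off/auto semantics mirror the patch; the function
names and the string-based stand-in for the Kconfig choice are invented
for illustration only. ]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

enum tsx_ctrl_states { TSX_CTRL_ENABLE, TSX_CTRL_DISABLE };

/* Mirrors x86_get_tsx_auto_mode(): disable TSX only on TAA-affected parts. */
static enum tsx_ctrl_states get_tsx_auto_mode(bool cpu_has_taa_bug)
{
	return cpu_has_taa_bug ? TSX_CTRL_DISABLE : TSX_CTRL_ENABLE;
}

/*
 * Mirrors the tsx= handling in tsx_init(): an explicit cmdline option
 * wins; otherwise the Kconfig default decides. 'config_mode' stands in
 * for the CONFIG_X86_INTEL_TSX_MODE_{ON,OFF,AUTO} choice.
 */
static enum tsx_ctrl_states pick_tsx_state(const char *cmdline_arg,
					   const char *config_mode,
					   bool cpu_has_taa_bug)
{
	if (cmdline_arg) {
		if (!strcmp(cmdline_arg, "on"))
			return TSX_CTRL_ENABLE;
		if (!strcmp(cmdline_arg, "auto"))
			return get_tsx_auto_mode(cpu_has_taa_bug);
		/* "off" and unrecognized options both end up as off */
		return TSX_CTRL_DISABLE;
	}
	if (!strcmp(config_mode, "auto"))
		return get_tsx_auto_mode(cpu_has_taa_bug);
	if (!strcmp(config_mode, "off"))
		return TSX_CTRL_DISABLE;
	return TSX_CTRL_ENABLE;
}
```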

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] [PATCH 0/9] TAA 0
@ 2019-10-24  8:20 Borislav Petkov
  2019-10-23  8:45 ` [MODERATED] [PATCH 1/9] TAA 1 Pawan Gupta
                   ` (8 more replies)
  0 siblings, 9 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24  8:20 UTC (permalink / raw)
  To: speck

From: Borislav Petkov <bp@suse.de>
Subject: [PATCH 0/9] TAA

Hi,

here's the pile I've picked up. I've incorporated as much of the
feedback as possible, scream loudly if something's missing. First
testing looks good.

We'll cast them in stone in the coming days because distros and stable
have to start backporting soon. All fixes will go on top, as on previous
occasions.

Thx.

Michal Hocko (1):
  x86/tsx: Add config options to set tsx=on|off|auto

Pawan Gupta (8):
  x86/msr: Add the IA32_TSX_CTRL MSR
  x86/cpu: Add a helper function x86_read_arch_cap_msr()
  x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default
  x86/speculation/taa: Add mitigation for TSX Async Abort
  x86/speculation/taa: Add sysfs reporting for TSX Async Abort
  kvm/x86: Export MDS_NO=0 to guests when TSX is enabled
  x86/tsx: Add "auto" option to the tsx= cmdline parameter
  x86/speculation/taa: Add documentation for TSX Async Abort

 .../ABI/testing/sysfs-devices-system-cpu      |   1 +
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/tsx_async_abort.rst   | 256 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt         |  52 ++++
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/tsx_async_abort.rst         | 117 ++++++++
 arch/x86/Kconfig                              |  45 +++
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/msr-index.h              |   9 +
 arch/x86/include/asm/nospec-branch.h          |   4 +-
 arch/x86/include/asm/processor.h              |   7 +
 arch/x86/kernel/cpu/Makefile                  |   2 +-
 arch/x86/kernel/cpu/bugs.c                    | 133 +++++++++
 arch/x86/kernel/cpu/common.c                  |  32 ++-
 arch/x86/kernel/cpu/cpu.h                     |  18 ++
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 134 +++++++++
 arch/x86/kvm/x86.c                            |  19 ++
 drivers/base/cpu.c                            |   9 +
 include/linux/cpu.h                           |   3 +
 20 files changed, 842 insertions(+), 7 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
 create mode 100644 Documentation/x86/tsx_async_abort.rst
 create mode 100644 arch/x86/kernel/cpu/tsx.c

-- 
2.21.0


* [MODERATED] Re: [PATCH 1/9] TAA 1
  2019-10-23  8:45 ` [MODERATED] [PATCH 1/9] TAA 1 Pawan Gupta
@ 2019-10-24 15:22   ` Josh Poimboeuf
  2019-10-24 16:23     ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 15:22 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 10:45:50AM +0200, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH 1/9] x86/msr: Add the IA32_TSX_CTRL MSR
> 
> Transactional Synchronization Extensions (TSX) may be used on certain
> processors as part of a speculative side channel attack.  A microcode
> update for existing processors that are vulnerable to this attack will
> add a new MSR - IA32_TSX_CTRL to allow the system administrator the
> option to disable TSX as one of the possible mitigations.

This should clarify that not *all* TAA-vulnerable CPUs will get
IA32_TSX_CTRL, instead only the ones which aren't also vulnerable to
MDS.

>   [ Note that future processors that are not vulnerable will also
>     support the IA32_TSX_CTRL MSR. ]
> 
> Add defines for the new IA32_TSX_CTRL MSR and its bits.
> 
> TSX has two sub-features:
> 
> 1. Restricted Transactional Memory (RTM) is an explicitly-used feature
>    where new instructions begin and end TSX transactions.
> 2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
>    "old" style locks are used by software.
> 
> Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
> IA32_TSX_CTRL MSR.
> 
> There are two control bits in IA32_TSX_CTRL MSR:
> 
>   Bit 0: When set, it disables the Restricted Transactional Memory (RTM)
>          sub-feature of TSX (will force all transactions to abort on the
> 	 XBEGIN instruction).
> 
>   Bit 1: When set, it disables the enumeration of the RTM and HLE feature
>          (i.e. it will make CPUID(EAX=7).EBX{bit4} and
> 	  CPUID(EAX=7).EBX{bit11} read as 0).
> 
> The other TSX sub-feature, Hardware Lock Elision (HLE), is
> unconditionally disabled

... by the new microcode ...

> but still enumerated as present by
> CPUID(EAX=7).EBX{bit4}.

... unless disabled by bit 1 of IA32_TSX_CTRL_MSR.

-- 
Josh


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
@ 2019-10-24 15:30   ` Josh Poimboeuf
  2019-10-24 16:33     ` Borislav Petkov
  2019-10-24 17:39   ` Andrew Cooper
  2019-10-30 13:28   ` Greg KH
  2 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 15:30 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4848,6 +4848,17 @@
>  			interruptions from clocksource watchdog are not
>  			acceptable).
>  
> +	tsx=		[X86] Control Transactional Synchronization
> +			Extensions (TSX) feature in Intel processors that
> +			support TSX control.
> +
> +			This parameter controls the TSX feature. The options are:
> +
> +			on	- Enable TSX on the system.
> +			off	- Disable TSX on the system.
> +
> +			Not specifying this option is equivalent to tsx=off.

This still needs details about when 'tsx=off' does and doesn't work.

The above makes it sound like it's off for all CPUs, when in fact it's
only off for newer MDS_NO CPUs.

It should also perhaps describe the risks associated with tsx=on.  While
there are mitigations for all known issues (i.e., the tsx_async_abort=
option), TSX has been known to be an accelerator for several previous
speculation-related CVEs, and so there may be unknown security risks
associated with leaving it enabled.

-- 
Josh


* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-23  9:30 ` [MODERATED] [PATCH 4/9] TAA 4 Pawan Gupta
@ 2019-10-24 15:32   ` Josh Poimboeuf
  2019-10-24 16:43     ` Borislav Petkov
  2019-10-25  0:49   ` Pawan Gupta
  1 sibling, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 15:32 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 11:30:45AM +0200, speck for Pawan Gupta wrote:
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 885d4ac2111a..f8b8afc8f5b5 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>  	if (!cpu_matches(NO_SWAPGS))
>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>  
> +	/*
> +	 * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> +	 *	- TSX is supported or
> +	 *	- TSX_CTRL is present
> +	 *
> +	 * TSX_CTRL check is needed for cases when TSX could be disabled before
> +	 * the kernel boot e.g. kexec.
> +	 * TSX_CTRL check alone is not sufficient for cases when the microcode
> +	 * update is not present or running as guest that don't get TSX_CTRL.
> +	 */
> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> +	    (cpu_has(c, X86_FEATURE_RTM) ||
> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> +		setup_force_cpu_bug(X86_BUG_TAA);
> +

As I said before this would be a lot nicer if we could just add NO_TAA
to the cpu_vuln_whitelist.

-- 
Josh


* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-23 10:28 ` [MODERATED] [PATCH 7/9] TAA 7 Pawan Gupta
@ 2019-10-24 15:35   ` Josh Poimboeuf
  2019-10-24 16:42     ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 15:35 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:28:57PM +0200, speck for Pawan Gupta wrote:
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index ad6b69057bb0..e1aca10f2a7f 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4856,6 +4856,11 @@
>  
>  			on	- Enable TSX on the system.
>  			off	- Disable TSX on the system.
> +			auto	- Disable TSX if X86_BUG_TAA is present,

Not quite true, since MDS-affected parts don't have the new microcode
bits to disable TSX.

> +				  otherwise enable TSX on the system.
> +
> +			More details on X86_BUG_TAA here:
> +			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
>  
>  			Not specifying this option is equivalent to tsx=off.

-- 
Josh


* [MODERATED] Re: [PATCH 8/9] TAA 8
  2019-10-23 10:32 ` [MODERATED] [PATCH 8/9] TAA 8 Pawan Gupta
@ 2019-10-24 16:03   ` Josh Poimboeuf
  2019-10-24 17:35     ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 16:03 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:32:55PM +0200, speck for Pawan Gupta wrote:
> +Virtualization mitigation
> +^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Affected systems where the host has the TAA microcode and the TAA mitigation is
> +ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.

This is confusing: "the TAA mitigation is ON (with TSX disabled)".

Which is it?  Is the TAA mitigation on, or is TSX disabled?

> +
> +In all other cases, if the host either does not have the TAA microcode or the
> +kernel is not mitigated, the system might be vulnerable.
> +
> +
> +.. _taa_mitigation_control_command_line:
> +
> +Mitigation control on the kernel command line
> +---------------------------------------------
> +
> +The kernel command line allows to control the TAA mitigations at boot time with
> +the option "tsx_async_abort=". The valid arguments for this option are:
> +
> +  ============  =============================================================
> +  off		This option disables the TAA mitigation on affected platforms.
> +                If the system has TSX enabled (see next parameter) and the CPU
> +                is affected, the system is vulnerable.
> +
> +  full	        TAA mitigation is enabled. If TSX is enabled, on an affected
> +                system it will clear CPU buffers on ring transitions. On
> +                systems which are MDS-affected and deploy MDS mitigation,
> +                TAA is also mitigated. Specifying this option on those
> +                systems will have no effect.
> +
> +  full,nosmt    The same as tsx_async_abort=full, with SMT disabled on
> +                vulnerable CPUs that have TSX enabled. This is the complete
> +                mitigation. When TSX is disabled, SMT is not disabled because
> +                CPU is not vulnerable to cross-thread TAA attacks.
> +  ============  =============================================================
> +
> +Not specifying this option is equivalent to "tsx_async_abort=full".
> +
> +The kernel command line also allows to control the TSX feature using the
> +parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
> +to control the TSX feature and the enumeration of the TSX feature bits (RTM
> +and HLE) in CPUID.
> +
> +The valid options are:
> +
> +  ============  =============================================================
> +  off		Disables TSX.
> +

This is not universally true.

> +  on		Enables TSX.

This probably needs the same "TSX is fundamentally insecure" caveat I
proposed for kernel-parameters.txt.

> +
> +  auto		Disables TSX on affected platform, otherwise enables TSX.

This is not universally true.

Also, it would be relevant to refer to that table Pawan posted which
shows exactly which CPUs are vulnerable to TAA but not MDS.

> +  ============  =============================================================
> +
> +Not specifying this option is equivalent to "tsx=off".
> +
> +The following combinations of the "tsx_async_abort" and "tsx" are possible. For
> +affected platforms tsx=auto is equivalent to tsx=off and the result will be:
> +
> +  =========  ====================   =========================================
> +  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
> +                                    buffers.

The system may still be vulnerable to SMT-based attacks.

> +  tsx=on     tsx_async_abort=off    The system is vulnerable.
> +  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
> +  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
> +  =========  ====================   =========================================

Combinations with tsx_async_abort=full,nosmt should also be described.

> +
> +For unaffected platforms "tsx=on" and "tsx_async_abort=full" does not clear CPU
> +buffers.  For platforms without TSX control "tsx" command line argument has no
> +effect.

Which platforms are those?

> +For the affected platforms below table indicates the mitigation status for the
> +combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
> +and TSX_CTRL_MSR.
> +
> +  =======  =========  =============  ========================================
> +  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
> +  =======  =========  =============  ========================================
> +    0          0            0        Vulnerable (needs ucode)
> +    0          1            0        MDS and TAA mitigated via VERW
> +    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
> +                                     because MD_CLEAR has no meaning and
> +                                     VERW is not guaranteed to clear buffers

(needs ucode) ?

> +    1          X            1        MDS fixed, TAA can be mitigated by
> +                                     VERW or TSX_CTRL_MSR
> +  =======  =========  =============  ========================================
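
[ Editorial aside: a user-space sketch of how the rows of the quoted
table map to a status decision. The enum names are invented for
illustration; only the MDS_NO/MD_CLEAR/TSX_CTRL_MSR semantics come from
the quoted documentation. ]

```c
#include <assert.h>
#include <stdbool.h>

enum taa_status {
	TAA_NEEDS_UCODE,	 /* MDS_NO=0, MD_CLEAR=0: vulnerable, needs ucode */
	TAA_MITIGATED_VERW,	 /* MDS_NO=0, MD_CLEAR=1: MDS+TAA mitigated via VERW */
	TAA_VULN_IF_TSX_ENABLED, /* MDS_NO=1, no TSX_CTRL: VERW not guaranteed */
	TAA_MITIGATABLE,	 /* MDS_NO=1, TSX_CTRL: VERW or TSX_CTRL_MSR works */
};

/* Illustrative mapping of the four table rows above. */
static enum taa_status taa_status(bool mds_no, bool md_clear, bool tsx_ctrl)
{
	if (mds_no && tsx_ctrl)
		return TAA_MITIGATABLE;
	if (!mds_no)
		return md_clear ? TAA_MITIGATED_VERW : TAA_NEEDS_UCODE;
	return TAA_VULN_IF_TSX_ENABLED;
}
```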
> +
> +Mitigation selection guide
> +--------------------------
> +
> +1. Trusted userspace and guests
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +If all user space applications are from a trusted source and do not execute
> +untrusted code which is supplied externally, then the mitigation can be
> +disabled. The same applies to virtualized environments with trusted guests.
> +
> +
> +2. Untrusted userspace and guests
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +If there are untrusted applications or guests on the system, enabling TSX
> +might allow a malicious actor to leak data from the host or from other
> +processes running on the same physical core.

Unless the mitigation is enabled (which is on by default, BTW...)

This makes it sounds like the only mitigation is to disable TSX.

> +
> +If the microcode is available and the TSX is disabled on the host, attacks
> +are prevented in a virtualized environment as well, even if the VMs do not
> +explicitly enable the mitigation.

What's the effect on VM security if TSX is enabled and the host TAA
mitigation is also enabled?

> +
> +
> +.. _taa_default_mitigations:
> +
> +Default mitigations
> +-------------------
> +
> +The kernel's default action for vulnerable processors is:
> +
> +  - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index e1aca10f2a7f..6eb1c0c8018c 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2636,6 +2636,7 @@
>  					       ssbd=force-off [ARM64]
>  					       l1tf=off [X86]
>  					       mds=off [X86]
> +					       tsx_async_abort=off [X86]
>  
>  			auto (default)
>  				Mitigate all CPU vulnerabilities, but leave SMT
> @@ -2651,6 +2652,7 @@
>  				be fully mitigated, even if it means losing SMT.
>  				Equivalent to: l1tf=flush,nosmt [X86]
>  					       mds=full,nosmt [X86]
> +					       tsx_async_abort=full,nosmt [X86]
>  
>  	mminit_loglevel=
>  			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
> @@ -4864,6 +4866,40 @@
>  
>  			Not specifying this option is equivalent to tsx=off.
>  
> +	tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
> +			Abort (TAA) vulnerability.
> +
> +			Similar to Micro-architectural Data Sampling (MDS)
> +			certain CPUs that support Transactional
> +			Synchronization Extensions (TSX) are vulnerable to an
> +			exploit against CPU internal buffers which can forward
> +			information to a disclosure gadget under certain
> +			conditions.
> +
> +			In vulnerable processors, the speculatively forwarded
> +			data can be used in a cache side channel attack, to
> +			access data to which the attacker does not have direct
> +			access.
> +
> +			This parameter controls the TAA mitigation.  The
> +			options are:
> +
> +			full       - Enable TAA mitigation on vulnerable CPUs

if TSX is disabled

> +			full,nosmt - Enable TAA mitigation and disable SMT on
> +				     vulnerable CPUs. If TSX is disabled, SMT
> +				     is not disabled because CPU is not
> +				     vulnerable to cross-thread TAA attacks.
> +			off        - Unconditionally disable TAA mitigation
> +
> +			Not specifying this option is equivalent to
> +			tsx_async_abort=full.  On CPUs which are MDS affected
> +			and deploy MDS mitigation, TAA mitigation is not
> +			required and doesn't provide any additional
> +			mitigation.
> +
> +			For details see:
> +			Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
> +
>  	turbografx.map[2|3]=	[HW,JOY]
>  			TurboGraFX parallel port interface
>  			Format:
> diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
> index af64c4bb4447..a8de2fbc1caa 100644
> --- a/Documentation/x86/index.rst
> +++ b/Documentation/x86/index.rst
> @@ -27,6 +27,7 @@ x86-specific Documentation
>     mds
>     microcode
>     resctrl_ui
> +   tsx_async_abort
>     usb-legacy-support
>     i386/index
>     x86_64/index
> diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst
> new file mode 100644
> index 000000000000..583ddc185ba2
> --- /dev/null
> +++ b/Documentation/x86/tsx_async_abort.rst
> @@ -0,0 +1,117 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +TSX Async Abort (TAA) mitigation
> +================================
> +
> +.. _tsx_async_abort:
> +
> +Overview
> +--------
> +
> +TSX Async Abort (TAA) is a side channel attack on internal buffers in some
> +Intel processors similar to Microachitectural Data Sampling (MDS).  In this
> +case certain loads may speculatively pass invalid data to dependent operations
> +when an asynchronous abort condition is pending in a Transactional
> +Synchronization Extensions (TSX) transaction.  This includes loads with no
> +fault or assist condition. Such loads may speculatively expose stale data from
> +the same uarch data structures as in MDS, with same scope of exposure i.e.
> +same-thread and cross-thread. This issue affects all current processors that
> +support TSX.
> +
> +Mitigation strategy
> +-------------------
> +
> +a) TSX disable - one of the mitigations is to disable TSX. A new MSR
> +IA32_TSX_CTRL will be available in future and current processors after

which processors?

> +microcode update which can be used to disable TSX. In addition, it
> +controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
> +
> +b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
> +vulnerability. More details on this approach can be found in
> +:ref:`Documentation/admin-guide/hw-vuln/mds.rst <mds>`.

It should be clarified the mitigation is a) OR b), not both.

> +
> +Kernel internal mitigation modes
> +--------------------------------
> +
> + =============    ============================================================
> + off              Mitigation is disabled. Either the CPU is not affected or
> +                  tsx_async_abort=off is supplied on the kernel command line.
> +
> + tsx disabled     Mitigation is enabled. TSX feature is disabled by default at
> +                  bootup on processors that support TSX control.
> +
> + verw             Mitigation is enabled. CPU is affected and MD_CLEAR is
> +                  advertised in CPUID.
> +
> + ucode needed     Mitigation is enabled. CPU is affected and MD_CLEAR is not
> +                  advertised in CPUID. That is mainly for virtualization
> +                  scenarios where the host has the updated microcode but the
> +                  hypervisor does not expose MD_CLEAR in CPUID. It's a best
> +                  effort approach without guarantee.
> + =============    ============================================================
> +
> +If the CPU is affected and the "tsx_async_abort" kernel command line parameter is
> +not provided then the kernel selects an appropriate mitigation depending on the
> +status of RTM and MD_CLEAR CPUID bits.
> +
> +Below tables indicate the impact of tsx=on|off|auto cmdline options on state of
> +TAA mitigation, VERW behavior and TSX feature for various combinations of
> +MSR_IA32_ARCH_CAPABILITIES bits.
> +
> +1. "tsx=off"
> +
> +=========  =========  ============  ============  ==============  ===================  ======================
> +MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=off
> +----------------------------------  -------------------------------------------------------------------------
> +TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
> +                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
> +=========  =========  ============  ============  ==============  ===================  ======================
> +    0          0           0         HW default         Yes           Same as MDS           Same as MDS

Does "HW default" mean "Enabled"?

-- 
Josh


* [MODERATED] Re: [PATCH 9/9] TAA 9
  2019-10-23 10:35 ` [MODERATED] [PATCH 9/9] TAA 9 Michal Hocko
@ 2019-10-24 16:10   ` Josh Poimboeuf
  2019-10-24 16:58     ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 16:10 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:35:50PM +0200, speck for Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> Subject: [PATCH 9/9] x86/tsx: Add config options to set tsx=on|off|auto
> 
> There is a general consensus that TSX usage is not largely spread while
> the history shows there is a non trivial space for side channel attacks
> possible. Therefore the tsx is disabled by default even on platforms
> that might have a safe implementation of TSX according to the current
> knowledge. This is a fair trade off to make.
> 
> There are, however, workloads that really do benefit from using TSX and
> updating to a newer kernel with TSX disabled might introduce a
> noticeable regressions. This would be especially a problem for Linux
> distributions which will provide TAA mitigations.
> 
> Introduce config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON
> and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config
> setting can be overridden by the tsx cmdline options.
> 
> Suggested-by: Borislav Petkov <bpetkov@suse.de>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tony Luck <tony.luck@intel.com>
> Cc: x86-ml <x86@kernel.org>
> ---
>  arch/x86/Kconfig          | 45 +++++++++++++++++++++++++++++++++++++++
>  arch/x86/kernel/cpu/tsx.c | 22 +++++++++++++------
>  2 files changed, 61 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index d6e1faa28c58..eebae89726c4 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1940,6 +1940,51 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
>  
>  	  If unsure, say y.
>  
> +choice
> +	prompt "TSX enable mode"
> +	depends on CPU_SUP_INTEL
> +	default X86_INTEL_TSX_MODE_OFF
> +	help
> +	  Intel's TSX (Transactional Synchronization Extensions) feature
> +	  allows to optimize locking protocols through lock elision which
> +	  can lead to a noticeable performance boost.
> +
> +	  On the other hand it has been shown that TSX can be exploited
> +	  to form side channel attacks (e.g. TAA) and chances are there
> +	  will be more of those attacks discovered in the future.
> +
> +	  Therefore TSX is not enabled by default (aka tsx=off). An admin
> +	  might override this decision by tsx=on command line parameter. This
> +	  has a risk that TSX will get enabled also on platforms which are
> +	  known to be vulnerable to attacks like TAA and a safer option is to
> +	  use tsx=auto command line parameter.

I think this is misleading.  tsx=on doesn't make you vulnerable to TAA,
because we still have the TAA mitigation.

> +
> +	  This options allows to set the default tsx mode between tsx=on, off
> +	  and auto. See Documentation/admin-guide/kernel-parameters.txt for more
> +	  details.
> +
> +	  Say off if not sure, auto if TSX is in use but it should be used on safe
> +	  platforms or on if TSX is in use and the security aspect of tsx is not
> +	  relevant.

tsx=on vs tsx=auto is not a security consideration, but rather a
performance one.  With tsx=auto you disable TSX on some TAA-affected
CPUs so you don't have to pay the performance penalty of the MDS
mitigations.

> +
> +config X86_INTEL_TSX_MODE_OFF
> +	bool "off"
> +	help
> +	  TSX is always disabled - equals tsx=off command line parameter.

Define "always" :-)

> +
> +config X86_INTEL_TSX_MODE_ON
> +	bool "on"
> +	help
> +	  TSX is always enabled on TSX capable HW - equals tsx=on command line
> +	  parameter.
> +
> +config X86_INTEL_TSX_MODE_AUTO
> +	bool "auto"
> +	help
> +	  TSX is enabled on TSX capable HW that is believed to be safe against
> +	  side channel attacks- equals tsx=auto command line parameter.

Not exactly :-)  This also leaves TSX enabled on MDS vulnerable parts.


-- 
Josh


* [MODERATED] Re: [PATCH 1/9] TAA 1
  2019-10-24 15:22   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 16:23     ` Borislav Petkov
  2019-10-24 16:42       ` Josh Poimboeuf
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 16:23 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:22:57AM -0500, speck for Josh Poimboeuf wrote:
> This should clarify that not *all* TAA-vulnerable CPUs will get
> IA32_TSX_CTRL, instead only the ones which aren't also vulnerable to
> MDS.

Added:

"The CPUs which get this new MSR after a microcode upgrade are the ones
which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because those
CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which clears all
CPU buffers takes care of the TAA case as well."

Hopefully that makes it more clear.
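
[ Editorial aside: for reference, the bits being discussed, written as a
user-space sketch. The MSR indices and bit positions match the patch
series / SDM; tsx_ctrl_is_supported() mirrors the helper in the tsx.c
diff quoted earlier, everything else is illustrative. ]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
#define ARCH_CAP_MDS_NO			(1ULL << 5)  /* CPU not affected by MDS */
#define ARCH_CAP_TSX_CTRL_MSR		(1ULL << 7)  /* IA32_TSX_CTRL is present */

#define MSR_IA32_TSX_CTRL		0x00000122
#define TSX_CTRL_RTM_DISABLE		(1ULL << 0)  /* XBEGIN always aborts */
#define TSX_CTRL_CPUID_CLEAR		(1ULL << 1)  /* hide RTM/HLE CPUID bits */

/* Mirrors tsx_ctrl_is_supported(): check bit 7 of IA32_ARCH_CAPABILITIES. */
static bool tsx_ctrl_is_supported(uint64_t ia32_cap)
{
	return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
}
```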

> > The other TSX sub-feature, Hardware Lock Elision (HLE), is
> > unconditionally disabled
> 
> ... by the new microcode ...
> 
> > but still enumerated as present by
> > CPUID(EAX=7).EBX{bit4}.
> 
> ... unless disabled by bit 1 of IA32_TSX_CTRL_MSR.

Changed it to:

"The other TSX sub-feature, Hardware Lock Elision (HLE), is
unconditionally disabled by the new microcode but still enumerated
as present by CPUID(EAX=7).EBX{bit4}, unless disabled by
IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR."

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 15:30   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 16:33     ` Borislav Petkov
  2019-10-24 16:43       ` Josh Poimboeuf
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 16:33 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:30:15AM -0500, speck for Josh Poimboeuf wrote:
> This still needs details about when 'tsx=off' does and doesn't work.
> 
> The above makes it sound like it's off for all CPUs, when in fact it's
> only off for newer MDS_NO CPUs.

How does this sound? (That is mentioned somewhere in all the text
already, but it is important to have it here too):

			off     - Disable TSX on the system. (Note that this
				option takes effect only on newer CPUs which are
				not vulnerable to MDS, i.e., have
				MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
				the new IA32_TSX_CTRL MSR through a microcode
				update. This new MSR allows for the reliable
				deactivation of the TSX functionality.)

> It should also perhaps describe the risks associated with tsx=on.  While
> there are mitigations for all known issues (i.e., the tsx_async_abort=
> option), TSX has been known to be an accelerator for several previous
> speculation-related CVEs, and so there may be unknown security risks
> associated with leaving it enabled.

You've basically said it nicely already:

"Although there are mitigations for all known security vulnerabilities,
TSX has been known to be an accelerator for several previous
speculation-related CVEs, and so there may be unknown security risks
associated with leaving it enabled."

ACK?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-24 15:35   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 16:42     ` Borislav Petkov
  2019-10-24 18:20       ` Jiri Kosina
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 16:42 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:35:17AM -0500, speck for Josh Poimboeuf wrote:
> On Wed, Oct 23, 2019 at 12:28:57PM +0200, speck for Pawan Gupta wrote:
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index ad6b69057bb0..e1aca10f2a7f 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -4856,6 +4856,11 @@
> >  
> >  			on	- Enable TSX on the system.
> >  			off	- Disable TSX on the system.
> > +			auto	- Disable TSX if X86_BUG_TAA is present,
> 
> Not quite true, since MDS-affected parts don't have the new microcode
> bits to disable TSX.

How's this:

"Disable TSX if the CPU is affected by the TSX Async Abort (TAA)
vulnerability and the microcode provides a special MSR - TSX_CTRL_MSR -
with the required TSX control knobs. On MDS-affected parts where VERW
takes care of the TAA vulnerability, that controlling MSR is not present
and thus TSX cannot be disabled there."

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

* [MODERATED] Re: [PATCH 1/9] TAA 1
  2019-10-24 16:23     ` Borislav Petkov
@ 2019-10-24 16:42       ` Josh Poimboeuf
  0 siblings, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 16:42 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 06:23:58PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 10:22:57AM -0500, speck for Josh Poimboeuf wrote:
> > This should clarify that not *all* TAA-vulnerable CPUs will get
> > IA32_TSX_CTRL, instead only the ones which aren't also vulnerable to
> > MDS.
> 
> Added:
> 
> "The CPUs which get this new MSR after a microcode upgrade are the ones
> which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because those
> CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which clears all
> CPU buffers takes care of the TAA case as well."
> 
> Hopefully that makes it more clear.
> 
> > > The other TSX sub-feature, Hardware Lock Elision (HLE), is
> > > unconditionally disabled
> > 
> > ... by the new microcode ...
> > 
> > > but still enumerated as present by
> > > CPUID(EAX=7).EBX{bit4}.
> > 
> > ... unless disabled by bit 1 of IA32_TSX_CTRL_MSR.
> 
> Changed it to:
> 
> "The other TSX sub-feature, Hardware Lock Elision (HLE), is
> unconditionally disabled by the new microcode but still enumerated
> as present by CPUID(EAX=7).EBX{bit4}, unless disabled by
> IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR."

Sounds good.

-- 
Josh

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 16:33     ` Borislav Petkov
@ 2019-10-24 16:43       ` Josh Poimboeuf
  0 siblings, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 16:43 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 06:33:36PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 10:30:15AM -0500, speck for Josh Poimboeuf wrote:
> > This still needs details about when 'tsx=off' does and doesn't work.
> > 
> > The above makes it sound like it's off for all CPUs, when in fact it's
> > only off for newer MDS_NO CPUs.
> 
> How does that sound (and that is being mentioned somewhere in all the
> text but here it is important to have):
> 
> 			off     - Disable TSX on the system. (Note that this
> 				option takes effect only on newer CPUs which are
> 				not vulnerable to MDS, i.e., have
> 				MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
> 				the new IA32_TSX_CTRL MSR through a microcode
> 				update. This new MSR allows for the reliable
> 				deactivation of the TSX functionality.)
> 
> > It should also perhaps describe the risks associated with tsx=on.  While
> > there are mitigations for all known issues (i.e., the tsx_async_abort=
> > option), TSX has been known to be an accelerator for several previous
> > speculation-related CVEs, and so there may be unknown security risks
> > associated with leaving it enabled.
> 
> You've basically said it nicely already:
> 
> "Although there are mitigations for all known security vulnerabilities,
> TSX has been known to be an accelerator for several previous
> speculation-related CVEs, and so there may be unknown security risks
> associated with leaving it enabled."
> 
> ACK?

ACK

-- 
Josh

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 15:32   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 16:43     ` Borislav Petkov
  2019-10-24 17:15       ` Josh Poimboeuf
  2019-10-24 18:23       ` Andrew Cooper
  0 siblings, 2 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 16:43 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> As I said before this would be a lot nicer if we could just add NO_TAA
> to the cpu_vuln_whitelist.

We're waiting for a list of CPUs from Intel here, right?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

* [MODERATED] Re: [PATCH 9/9] TAA 9
  2019-10-24 16:10   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 16:58     ` Borislav Petkov
  2019-10-25 10:47       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  2019-10-25 13:05       ` [MODERATED] " Josh Poimboeuf
  0 siblings, 2 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 16:58 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 11:10:16AM -0500, speck for Josh Poimboeuf wrote:
> I think this is misleading.  tsx=on doesn't make you vulnerable to TAA,
> because we still have the TAA mitigation.

Changed to:

          Therefore TSX is not enabled by default (aka tsx=off). An admin
          might override this decision with the tsx=on command line parameter.
          Even with TSX enabled, the kernel will attempt to enable the best
          possible TAA mitigation setting depending on the microcode available
          for the particular machine.

> tsx=on vs tsx=auto is not a security consideration, but rather a
> performance one.  With tsx=auto you disable TSX on some TAA-affected
> CPUs so you don't have to pay the performance penalty of the MDS
> mitigations.

By performance penalty you mean, when you have TSX disabled on those
parts, you'll save yourself the VERW which should be taking care of TAA
too?

> 
> > +
> > +config X86_INTEL_TSX_MODE_OFF
> > +	bool "off"
> > +	help
> > +	  TSX is always disabled - equals tsx=off command line parameter.
> 
> Define "always" :-)

Changed to:

"TSX is disabled if possible - equivalent to the tsx=off command line parameter."

> Not exactly :-)  This also leaves TSX enabled on MDS vulnerable parts.

Your point being, the MD_CLEAR which takes care of TAA too?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 16:43     ` Borislav Petkov
@ 2019-10-24 17:15       ` Josh Poimboeuf
  2019-10-24 17:23         ` Pawan Gupta
  2019-10-24 18:23       ` Andrew Cooper
  1 sibling, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 17:15 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 06:43:29PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> > As I said before this would be a lot nicer if we could just add NO_TAA
> > to the cpu_vuln_whitelist.
> 
> We're waiting for a list of CPUs from Intel here, right?

I guess so, unless somebody else can deduce that from existing
information...

-- 
Josh

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 17:15       ` Josh Poimboeuf
@ 2019-10-24 17:23         ` Pawan Gupta
  2019-10-24 17:27           ` Pawan Gupta
  2019-10-24 17:34           ` Josh Poimboeuf
  0 siblings, 2 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-24 17:23 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 12:15:43PM -0500, speck for Josh Poimboeuf wrote:
> On Thu, Oct 24, 2019 at 06:43:29PM +0200, speck for Borislav Petkov wrote:
> > On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> > > As I said before this would be a lot nicer if we could just add NO_TAA
> > > to the cpu_vuln_whitelist.
> > 
> > We're waiting for a list of CPUs from Intel here, right?
> 
> I guess so, unless somebody else can deduce that from existing
> information...

This is the list to the best of my knowledge.

+----------------------------+------------------+------------+
|            Name            |  Family / model  |  Stepping  |
+============================+==================+============+
| Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
+----------------------------+------------------+------------+
|    2nd gen Cascade Lake    |      06_55       |    6, 7    |
+----------------------------+------------------+------------+
|       Coffee Lake R        |      06_9E       |    0xD     |
+----------------------------+------------------+------------+

Thanks,
Pawan

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 17:23         ` Pawan Gupta
@ 2019-10-24 17:27           ` Pawan Gupta
  2019-10-24 17:34           ` Josh Poimboeuf
  1 sibling, 0 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-24 17:27 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:23:56AM -0700, speck for Pawan Gupta wrote:
> On Thu, Oct 24, 2019 at 12:15:43PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Oct 24, 2019 at 06:43:29PM +0200, speck for Borislav Petkov wrote:
> > > On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> > > > As I said before this would be a lot nicer if we could just add NO_TAA
> > > > to the cpu_vuln_whitelist.
> > > 
> > > We're waiting for a list of CPUs from Intel here, right?
> > 
> > I guess so, unless somebody else can deduce that from existing
> > information...
> 
> This is the list to the best of my knowledge.
> 
> +----------------------------+------------------+------------+
> |            Name            |  Family / model  |  Stepping  |
> +============================+==================+============+
> | Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
> +----------------------------+------------------+------------+
> |    2nd gen Cascade Lake    |      06_55       |    6, 7    |
> +----------------------------+------------------+------------+
> |       Coffee Lake R        |      06_9E       |    0xD     |
> +----------------------------+------------------+------------+

Please discard this list; it is the opposite of what we are looking for
here.

Thanks,
Pawan

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 17:23         ` Pawan Gupta
  2019-10-24 17:27           ` Pawan Gupta
@ 2019-10-24 17:34           ` Josh Poimboeuf
  1 sibling, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 17:34 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:23:56AM -0700, speck for Pawan Gupta wrote:
> On Thu, Oct 24, 2019 at 12:15:43PM -0500, speck for Josh Poimboeuf wrote:
> > On Thu, Oct 24, 2019 at 06:43:29PM +0200, speck for Borislav Petkov wrote:
> > > On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> > > > As I said before this would be a lot nicer if we could just add NO_TAA
> > > > to the cpu_vuln_whitelist.
> > > 
> > > We're waiting for a list of CPUs from Intel here, right?
> > 
> > I guess so, unless somebody else can deduce that from existing
> > information...
> 
> This is the list to the best of my knowledge.
> 
> +----------------------------+------------------+------------+
> |            Name            |  Family / model  |  Stepping  |
> +============================+==================+============+
> | Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
> +----------------------------+------------------+------------+
> |    2nd gen Cascade Lake    |      06_55       |    6, 7    |
> +----------------------------+------------------+------------+
> |       Coffee Lake R        |      06_9E       |    0xD     |
> +----------------------------+------------------+------------+

That may be helpful, but I think the question is how does that
information translate to the cpu_vuln_whitelist table.

-- 
Josh

* [MODERATED] Re: [PATCH 8/9] TAA 8
  2019-10-24 16:03   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 17:35     ` Borislav Petkov
  2019-10-24 18:11       ` Josh Poimboeuf
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 17:35 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 11:03:12AM -0500, speck for Josh Poimboeuf wrote:
> On Wed, Oct 23, 2019 at 12:32:55PM +0200, speck for Pawan Gupta wrote:
> > +Virtualization mitigation
> > +^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Affected systems where the host has the TAA microcode and the TAA mitigation is
> > +ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
> 
> > This is confusing: "the TAA mitigation is ON (with TSX disabled)".
> 
> Which is it?  Is the TAA mitigation on, or is TSX disabled?

I think it wants to say:

"Affected systems where the host has TAA microcode and TAA is mitigated
by disabling TSX are not vulnerable..."

> This is not universally true.

			"Disables TSX on systems which have the TSX-disable MSR."

> 
> > +  on		Enables TSX.
> 
> This probably needs the same "TSX is fundamentally insecure" caveat I
> proposed for kernel-parameters.txt.

I'm great at copy'n'paste:

			"Enables TSX.

			Although there are mitigations for all known security
			vulnerabilities, TSX has been known to be an accelerator
			for several previous speculation-related CVEs, and so
			there may be unknown security risks associated with leaving
			it enabled."

> 
> > +
> > +  auto		Disables TSX on affected platform, otherwise enables TSX.
> 
> This is not universally true.
> 
> Also, it would be relevant to refer to that table Pawan posted which
> shows exactly which CPUs are vulnerable to TAA but not MDS.

This one?

I don't know whether this one will get more models later or what the
TAA_NO plan is...

+----------------------------+------------------+------------+
|            Name            |  Family / model  |  Stepping  |
+============================+==================+============+
| Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
+----------------------------+------------------+------------+
|    2nd gen Cascade Lake    |      06_55       |    6, 7    |
+----------------------------+------------------+------------+
|       Coffee Lake R        |      06_9E       |    0xD     |
+----------------------------+------------------+------------+


> 
> > +  ============  =============================================================
> > +
> > +Not specifying this option is equivalent to "tsx=off".
> > +
> > +The following combinations of the "tsx_async_abort" and "tsx" are possible. For
> > +affected platforms tsx=auto is equivalent to tsx=off and the result will be:
> > +
> > +  =========  ====================   =========================================
> > +  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
> > +                                    buffers.
> 
> The system may still be vulnerable to SMT-based attacks.
> 
> > +  tsx=on     tsx_async_abort=off    The system is vulnerable.
> > +  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
> > +  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
> > +  =========  ====================   =========================================
> 
> Combinations with tsx_async_abort=full,nosmt should also be described.

  =========  ==========================  =========================================
  tsx=on     tsx_async_abort=full        The system will use VERW to clear CPU
                                         buffers. Cross-thread attacks still
                                         possible on SMT machines.
  tsx=on     tsx_async_abort=full,nosmt  As above, cross-thread attacks on SMT
                                         mitigated.
  tsx=on     tsx_async_abort=off         The system is vulnerable.
  tsx=off    tsx_async_abort=full        TSX is disabled. System is not
                                         vulnerable.
  tsx=off    tsx_async_abort=full,nosmt  ditto
  tsx=off    tsx_async_abort=off         ditto
  =========  ==========================  =========================================


> > +For unaffected platforms "tsx=on" and "tsx_async_abort=full" does not clear CPU
> > +buffers.  For platforms without TSX control "tsx" command line argument has no
> > +effect.
> 
> Which platforms are those?

The MDS_NO=0 ones.

> 
> > +For the affected platforms below table indicates the mitigation status for the
> > +combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
> > +and TSX_CTRL_MSR.
> > +
> > +  =======  =========  =============  ========================================
> > +  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
> > +  =======  =========  =============  ========================================
> > +    0          0            0        Vulnerable (needs ucode)
> > +    0          1            0        MDS and TAA mitigated via VERW
> > +    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
> > +                                     because MD_CLEAR has no meaning and
> > +                                     VERW is not guaranteed to clear buffers
> 
> (needs ucode) ?

Will there even be microcode for those to beef up VERW?

> > +2. Untrusted userspace and guests
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +If there are untrusted applications or guests on the system, enabling TSX
> > +might allow a malicious actor to leak data from the host or from other
> > +processes running on the same physical core.
> 
> Unless the mitigation is enabled (which is on by default, BTW...)
> 
> This makes it sounds like the only mitigation is to disable TSX.

"If there are untrusted applications or guests on the system, enabling
TSX on a machine where the TAA mitigation cannot be enabled (missing
microcode for a particular model, etc.) might allow a malicious actor
to leak data from the host or from other processes running on the same
physical core."

> > +If the microcode is available and the TSX is disabled on the host, attacks
> > +are prevented in a virtualized environment as well, even if the VMs do not
> > +explicitly enable the mitigation.
> 
> What's the effect on VM security if TSX is enabled and the host TAA
> mitigation is also enabled?

Same as in the !VM case, I'd assume. tsx_async_abort=full,nosmt should
give you full mitigation.

> > +			This parameter controls the TAA mitigation.  The
> > +			options are:
> > +
> > +			full       - Enable TAA mitigation on vulnerable CPUs
> 
> if TSX is disabled

No, that's TAA_MITIGATION_VERW which does the buffer clearing.

> > +Mitigation strategy
> > +-------------------
> > +
> > +a) TSX disable - one of the mitigations is to disable TSX. A new MSR
> > +IA32_TSX_CTRL will be available in future and current processors after
> 
> which processors?

The MDS_NO=1 and future parts, I guess.

> 
> > +microcode update which can be used to disable TSX. In addition, it
> > +controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
> > +
> > +b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
> > +vulnerability. More details on this approach can be found in
> > +:ref:`Documentation/admin-guide/hw-vuln/mds.rst <mds>`.
> 
> It should be clarified the mitigation is a) OR b), not both.

Yah, the default one is b).

        if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
                taa_mitigation = TAA_MITIGATION_VERW;
        else
                taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

unless nothing has been specified on the cmdline, in which case it
depends on which CONFIG_X86_INTEL_TSX_MODE* has been enabled in
.config. I know, it is a mouthful.

> > +1. "tsx=off"
> > +
> > +=========  =========  ============  ============  ==============  ===================  ======================
> > +MSR_IA32_ARCH_CAPABILITIES bits     Result with cmdline tsx=off
> > +----------------------------------  -------------------------------------------------------------------------
> > +TAA_NO     MDS_NO     TSX_CTRL_MSR  TSX state     VERW can clear  TAA mitigation       TAA mitigation
> > +                                    after bootup  CPU buffers     tsx_async_abort=off  tsx_async_abort=full
> > +=========  =========  ============  ============  ==============  ===================  ======================
> > +    0          0           0         HW default         Yes           Same as MDS           Same as MDS
> 
> Does "HW default" mean "Enabled"?

I'd assume coming out of reset, TSX is enabled. Question for Intel
folks.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
  2019-10-24 15:30   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-24 17:39   ` Andrew Cooper
  2019-10-24 19:45     ` Borislav Petkov
  2019-10-24 19:47     ` [MODERATED] " Pawan Gupta
  2019-10-30 13:28   ` Greg KH
  2 siblings, 2 replies; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 17:39 UTC (permalink / raw)
  To: speck

On 23/10/2019 10:01, speck for Pawan Gupta wrote:
> +	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
> +		tsx_disable();
> +
> +		/*
> +		 * tsx_disable() will change the state of the
> +		 * RTM CPUID bit.  Clear it here since it is now
> +		 * expected to be not set.
> +		 */
> +		setup_clear_cpu_cap(X86_FEATURE_RTM);

This same argument applies to HLE, and it would be weird for
pre-TSX_CTRL CPUs with tsx=off to report HLE but not RTM in
/proc/cpuinfo.

Furthermore, while grepping through the tree, I found

events/intel/lbr.c-267-static inline bool lbr_from_signext_quirk_needed(void)
events/intel/lbr.c-268-{
events/intel/lbr.c-269-	int lbr_format = x86_pmu.intel_cap.lbr_format;
events/intel/lbr.c:270:	bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
events/intel/lbr.c-271-			   boot_cpu_has(X86_FEATURE_RTM);
events/intel/lbr.c-272-
events/intel/lbr.c-273-	return !tsx_support && (lbr_desc[lbr_format] & LBR_TSX);

which is going to need an adjustment to avoid applying the quirks on
non-broken hardware.

~Andrew


* [MODERATED] Re: [PATCH 8/9] TAA 8
  2019-10-24 17:35     ` Borislav Petkov
@ 2019-10-24 18:11       ` Josh Poimboeuf
  2019-10-24 18:55         ` Pawan Gupta
  2019-10-25  8:04         ` Borislav Petkov
  0 siblings, 2 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 18:11 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 07:35:38PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 11:03:12AM -0500, speck for Josh Poimboeuf wrote:
> > On Wed, Oct 23, 2019 at 12:32:55PM +0200, speck for Pawan Gupta wrote:
> > > +Virtualization mitigation
> > > +^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > > +Affected systems where the host has the TAA microcode and the TAA mitigation is
> > > +ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
> > 
> > > This is confusing: "the TAA mitigation is ON (with TSX disabled)".
> > 
> > Which is it?  Is the TAA mitigation on, or is TSX disabled?
> 
> I think it wants to say:
> 
> "Affected systems where the host has TAA microcode and TAA is mitigated
> by disabling TSX are not vulnerable..."
> 
> > This is not universally true.
> 
> 			"Disables TSX on systems which have the TSX-disable MSR."

Maybe use the same wording from kernel-parameters.txt?

> > > +  on		Enables TSX.
> > 
> > This probably needs the same "TSX is fundamentally insecure" caveat I
> > proposed for kernel-parameters.txt.
> 
> I'm great at copy'n'paste:
> 
> 			"Enables TSX.
> 
> 			Although there are mitigations for all known security
> 			vulnerabilities, TSX has been known to be an accelerator
> 			for several previous speculation-related CVEs, and so
> 			there may be unknown security risks associated with leaving
> 			it enabled."
> 
> > 
> > > +
> > > +  auto		Disables TSX on affected platform, otherwise enables TSX.
> > 
> > This is not universally true.

Ditto, I guess whatever wording is used in kernel-parameters.txt can be
cribbed for here.

> > Also, it would be relevant to refer to that table Pawan posted which
> > shows exactly which CPUs are vulnerable to TAA but not MDS.
> 
> This one?
> 
> I don't know whether this one will get more models later or what the
> TAA_NO plan is...
> 
> +----------------------------+------------------+------------+
> |            Name            |  Family / model  |  Stepping  |
> +============================+==================+============+
> | Whiskey Lake (ULT refresh) |      06_8E       |    0xC     |
> +----------------------------+------------------+------------+
> |    2nd gen Cascade Lake    |      06_55       |    6, 7    |
> +----------------------------+------------------+------------+
> |       Coffee Lake R        |      06_9E       |    0xD     |
> +----------------------------+------------------+------------+

That's the one, but yeah... it might not be future-complete, and even if
it is, Intel probably doesn't want to disclose that anyway.

> > > +  ============  =============================================================
> > > +
> > > +Not specifying this option is equivalent to "tsx=off".
> > > +
> > > +The following combinations of the "tsx_async_abort" and "tsx" are possible. For
> > > +affected platforms tsx=auto is equivalent to tsx=off and the result will be:
> > > +
> > > +  =========  ====================   =========================================
> > > +  tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
> > > +                                    buffers.
> > 
> > The system may still be vulnerable to SMT-based attacks.
> > 
> > > +  tsx=on     tsx_async_abort=off    The system is vulnerable.
> > > +  tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.
> > > +  tsx=off    tsx_async_abort=off    TSX is disabled. System is not vulnerable.
> > > +  =========  ====================   =========================================
> > 
> > Combinations with tsx_async_abort=full,nosmt should also be described.
> 
>   =========  ====================   =========================================
>   tsx=on     tsx_async_abort=full   The system will use VERW to clear CPU
>                                     buffers. Cross-thread attacks still possible
>                                     on SMT machines.
>   tsx=on     tsx_async_abort=full,nosmt As above, cross-thread attacks on SMT
>                                     mitigated.
>   tsx=on     tsx_async_abort=off    The system is vulnerable.
>   tsx=off    tsx_async_abort=full   TSX is disabled. System is not vulnerable.

TSX _might_ be disabled, depending...

>   tsx=off    tsx_async_abort=full,nosmt ditto
>   tsx=off    tsx_async_abort=off    ditto
>   =========  ====================   =========================================
> 
> 
> > > +For unaffected platforms "tsx=on" and "tsx_async_abort=full" does not clear CPU
> > > +buffers.  For platforms without TSX control "tsx" command line argument has no
> > > +effect.
> > 
> > Which platforms are those?
> 
> The MDS_NO=0 ones.

Right, update the wording?

> 
> > 
> > > +For the affected platforms below table indicates the mitigation status for the
> > > +combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
> > > +and TSX_CTRL_MSR.
> > > +
> > > +  =======  =========  =============  ========================================
> > > +  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
> > > +  =======  =========  =============  ========================================
> > > +    0          0            0        Vulnerable (needs ucode)
> > > +    0          1            0        MDS and TAA mitigated via VERW
> > > +    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
> > > +                                     because MD_CLEAR has no meaning and
> > > +                                     VERW is not guaranteed to clear buffers
> > 
> > (needs ucode) ?
> 
> Will there even be microcode for those to beef up VERW?

This might be a question for Intel, but I assumed this is the case where
the new microcode on the MDS_NO parts would enable the VERW buffer
clearing.

> 
> > > +2. Untrusted userspace and guests
> > > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > > +If there are untrusted applications or guests on the system, enabling TSX
> > > +might allow a malicious actor to leak data from the host or from other
> > > +processes running on the same physical core.
> > 
> > Unless the mitigation is enabled (which is on by default, BTW...)
> > 
> > This makes it sounds like the only mitigation is to disable TSX.
> 
> "If there are untrusted applications or guests on the system, enabling
> TSX on a machine where the TAA mitigation cannot be enabled (missing
> microcode for a particular model, etc.) might allow a malicious actor
> to leak data from the host or from other processes running on the same
> physical core."
> 
> > > +If the microcode is available and the TSX is disabled on the host, attacks
> > > +are prevented in a virtualized environment as well, even if the VMs do not
> > > +explicitly enable the mitigation.
> > 
> > What's the effect on VM security if TSX is enabled and the host TAA
> > mitigation is also enabled?
> 
> Same as in the !VM case, I'd assume. tsx_async_abort=full,nosmt should
> give you full mitigation.

Right, the effects of the host mitigation options on the guest would be
useful here.

> > > +			This parameter controls the TAA mitigation.  The
> > > +			options are:
> > > +
> > > +			full       - Enable TAA mitigation on vulnerable CPUs
> > 
> > if TSX is disabled
> 
> No, that's TAA_MITIGATION_VERW which does the buffer clearing.

I meant to say "if TSX is enabled".  Otherwise, if TSX is disabled, this
option doesn't do anything.

> > > +Mitigation strategy
> > > +-------------------
> > > +
> > > +a) TSX disable - one of the mitigations is to disable TSX. A new MSR
> > > +IA32_TSX_CTRL will be available in future and current processors after
> > 
> > which processors?
> 
> The MDS_NO=1 and future parts, I guess.

Right, that should be clarified.

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-24 16:42     ` Borislav Petkov
@ 2019-10-24 18:20       ` Jiri Kosina
  2019-10-24 19:53         ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Jiri Kosina @ 2019-10-24 18:20 UTC (permalink / raw)
  To: speck

On Thu, 24 Oct 2019, speck for Borislav Petkov wrote:

> "Disable TSX if the CPU is affected by the TSX Async Abort (TAA)
> vulnerability and microcode provides a special MSR - TSX_CTRL_MSR -
> which provides the required TSX control knobs. On MDS-affected parts
> where VERW takes care of the TAA vulnerability, that controlling MSR is
> not present and thus TSX cannot be disabled there."

This is true if you ignore hyperthreading.

On SMT systems, TSX disable is a 100% complete mitigation, while VERW
clearing is not.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 16:43     ` Borislav Petkov
  2019-10-24 17:15       ` Josh Poimboeuf
@ 2019-10-24 18:23       ` Andrew Cooper
  2019-10-24 18:56         ` Josh Poimboeuf
  1 sibling, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 18:23 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 598 bytes --]

On 24/10/2019 17:43, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
>> As I said before this would be a lot nicer if we could just add NO_TAA
>> to the cpu_vuln_whitelist.
> We're waiting for a list of CPUs from Intel here, right?
>

There is no model list required.  Vulnerability to TAA is calculable
directly from existing architectural sources.

While the original expression might be ugly, and could probably be
explained more clearly, it is correct AFAICT.  I certainly have a very
similar one in Xen.

~Andrew


^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 5/9] TAA 5
  2019-10-23 10:19 ` [MODERATED] [PATCH 5/9] TAA 5 Pawan Gupta
@ 2019-10-24 18:30   ` Greg KH
  0 siblings, 0 replies; 75+ messages in thread
From: Greg KH @ 2019-10-24 18:30 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 12:19:51PM +0200, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH 5/9] x86/speculation/taa: Add sysfs reporting for TSX Async
>  Abort
> 
> Add the sysfs reporting file for TSX Async Abort. It exposes the
> vulnerability and the mitigation state similar to the existing files for
> the other hardware vulnerabilities.
> 
> Sysfs file path is:
> /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Reviewed-by: Mark Gross <mgross@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Jiri Kosina <jkosina@suse.cz>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: x86-ml <x86@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
>  drivers/base/cpu.c         |  9 +++++++++
>  include/linux/cpu.h        |  3 +++
>  3 files changed, 35 insertions(+)

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 8/9] TAA 8
  2019-10-24 18:11       ` Josh Poimboeuf
@ 2019-10-24 18:55         ` Pawan Gupta
  2019-10-25  8:04         ` Borislav Petkov
  1 sibling, 0 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-24 18:55 UTC (permalink / raw)
  To: speck

> > > > +For the affected platforms below table indicates the mitigation status for the
> > > > +combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
> > > > +and TSX_CTRL_MSR.
> > > > +
> > > > +  =======  =========  =============  ========================================
> > > > +  MDS_NO   MD_CLEAR   TSX_CTRL_MSR   Status
> > > > +  =======  =========  =============  ========================================
> > > > +    0          0            0        Vulnerable (needs ucode)
> > > > +    0          1            0        MDS and TAA mitigated via VERW
> > > > +    1          1            0        MDS fixed, TAA vulnerable if TSX enabled
> > > > +                                     because MD_CLEAR has no meaning and
> > > > +                                     VERW is not guaranteed to clear buffers
> > > 
> > > (needs ucode) ?
> > 
> > Will there even be microcode for those to beef up VERW?
> 
> This might be a question for Intel, but I assumed this is the case where
> the new microcode on the MDS_NO parts would enable the VERW buffer
> clearing.

That is correct.

> > > > +If the microcode is available and the TSX is disabled on the host, attacks
> > > > +are prevented in a virtualized environment as well, even if the VMs do not
> > > > +explicitly enable the mitigation.
> > > 
> > > What's the effect on VM security if TSX is enabled and the host TAA
> > > mitigation is also enabled?
> > 
> > Same as in the !VM case, I'd assume. tsx_async_abort=full,nosmt should
> > give you full mitigation.
> 
> Right, the effects of the host mitigation options on the guest would be
> useful here.

When TSX is enabled on the host, patch 6/9 exports MDS_NO=0 to the VMs,
so that they deploy the MDS mitigation, which also mitigates TAA.

> > > > +Mitigation strategy
> > > > +-------------------
> > > > +
> > > > +a) TSX disable - one of the mitigations is to disable TSX. A new MSR
> > > > +IA32_TSX_CTRL will be available in future and current processors after
> > > 
> > > which processors?
> > 
> > The MDS_NO=1 and future parts, I guess.
> 
> Right, that should be clarified.

This is correct: the list of MDS_NO=1 parts shared earlier, plus the future parts.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 18:23       ` Andrew Cooper
@ 2019-10-24 18:56         ` Josh Poimboeuf
  2019-10-24 18:59           ` Josh Poimboeuf
  2019-10-24 19:13           ` Andrew Cooper
  0 siblings, 2 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 18:56 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
> On 24/10/2019 17:43, speck for Borislav Petkov wrote:
> > On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> >> As I said before this would be a lot nicer if we could just add NO_TAA
> >> to the cpu_vuln_whitelist.
> > We're waiting for a list of CPUs from Intel here, right?
> >
> 
> There is no model list required.  Vulnerability to TAA is calculable
> directly from existing architectural sources.

Can you elaborate?  Earlier I suggested relying on NO_MDS in
cpu_vuln_whitelist, but I believe you said that's not sufficient,
because some of the non-MDS models don't have TSX, in which case we
shouldn't set TAA_BUG.

Which models are those?

Here's the current struct:

static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
	VULNWL(ANY,	4, X86_MODEL_ANY,	NO_SPECULATION),
	VULNWL(CENTAUR,	5, X86_MODEL_ANY,	NO_SPECULATION),
	VULNWL(INTEL,	5, X86_MODEL_ANY,	NO_SPECULATION),
	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),

	/* Intel Family 6 */
	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION),
	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION),
	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION),
	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION),
	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION),

	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(ATOM_SILVERMONT_D,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),

	VULNWL_INTEL(CORE_YONAH,		NO_SSB),

	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
	VULNWL_INTEL(ATOM_AIRMONT_NP,		NO_L1TF | NO_SWAPGS),

	VULNWL_INTEL(ATOM_GOLDMONT,		NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),
	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),
	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),

	/*
	 * Technically, swapgs isn't serializing on AMD (despite it previously
	 * being documented as such in the APM).  But according to AMD, %gs is
	 * updated non-speculatively, and the issuing of %gs-relative memory
	 * operands will be blocked until the %gs update completes, which is
	 * good enough for our purposes.
	 */

	/* AMD Family 0xf - 0x12 */
	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),

	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
	{}
};

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 18:56         ` Josh Poimboeuf
@ 2019-10-24 18:59           ` Josh Poimboeuf
  2019-10-24 19:13           ` Andrew Cooper
  1 sibling, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 18:59 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 01:56:41PM -0500, Josh Poimboeuf wrote:
> On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
> > On 24/10/2019 17:43, speck for Borislav Petkov wrote:
> > > On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> > >> As I said before this would be a lot nicer if we could just add NO_TAA
> > >> to the cpu_vuln_whitelist.
> > > We're waiting for a list of CPUs from Intel here, right?
> > >
> > 
> > There is no model list required.  Vulnerability to TAA is calculable
> > directly from existing architectural sources.
> 
> Can you elaborate?  Earlier I suggested relying on NO_MDS in
> cpu_vuln_whitelist, but I believe you said that's not sufficient,
> because some of the non-MDS models don't have TSX, in which case we

meant to say: "some of the *MDS* models don't have TSX".

> shouldn't set TAA_BUG.
> 
> Which models are those?
> 
> Here's the current struct:
> 
> static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
> 	VULNWL(ANY,	4, X86_MODEL_ANY,	NO_SPECULATION),
> 	VULNWL(CENTAUR,	5, X86_MODEL_ANY,	NO_SPECULATION),
> 	VULNWL(INTEL,	5, X86_MODEL_ANY,	NO_SPECULATION),
> 	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),
> 
> 	/* Intel Family 6 */
> 	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION),
> 	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION),
> 	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION),
> 	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION),
> 	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION),
> 
> 	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_SILVERMONT_D,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 
> 	VULNWL_INTEL(CORE_YONAH,		NO_SSB),
> 
> 	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_AIRMONT_NP,		NO_L1TF | NO_SWAPGS),
> 
> 	VULNWL_INTEL(ATOM_GOLDMONT,		NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),
> 	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_TAA | NO_MDS | NO_L1TF | NO_SWAPGS),
> 
> 	/*
> 	 * Technically, swapgs isn't serializing on AMD (despite it previously
> 	 * being documented as such in the APM).  But according to AMD, %gs is
> 	 * updated non-speculatively, and the issuing of %gs-relative memory
> 	 * operands will be blocked until the %gs update completes, which is
> 	 * good enough for our purposes.
> 	 */
> 
> 	/* AMD Family 0xf - 0x12 */
> 	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
> 	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
> 	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
> 	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
> 
> 	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
> 	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
> 	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
> 	{}
> };
> 
> -- 
> Josh

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 18:56         ` Josh Poimboeuf
  2019-10-24 18:59           ` Josh Poimboeuf
@ 2019-10-24 19:13           ` Andrew Cooper
  2019-10-24 19:49             ` Josh Poimboeuf
  1 sibling, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 19:13 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 2061 bytes --]

On 24/10/2019 19:56, speck for Josh Poimboeuf wrote:
> On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
>> On 24/10/2019 17:43, speck for Borislav Petkov wrote:
>>> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
>>>> As I said before this would be a lot nicer if we could just add NO_TAA
>>>> to the cpu_vuln_whitelist.
>>> We're waiting for a list of CPUs from Intel here, right?
>>>
>> There is no model list required.  Vulnerability to TAA is calculable
>> directly from existing architectural sources.
> Can you elaborate?  Earlier I suggested relying on NO_MDS in
> cpu_vuln_whitelist, but I believe you said that's not sufficient,
> because some of the non-MDS models don't have TSX, in which case we
> shouldn't set TAA_BUG.
>
> Which models are those?

Ok.  First things first.  Do you (and by this, I really mean Linux) want
to consider TAA an overlapping set with MDS, or a disjoint set?

After considering this for ages, and particularly, how to explain it
clearly to non-experts in Xen's security advisory, I chose to go with this:

---8<---
Vulnerability to TAA is a little complicated to quantify.

In the pipeline, it is just another way to get speculative access to
stale load port, store buffer or fill buffer data, and therefore can be
considered a superset of MDS.  On parts which predate MDS_NO, the
existing VERW flushing will mitigate this sidechannel as well.

On parts which contain MDS_NO, the lack of VERW flushing means that an
attacker can still target microarchitectural buffers to leak secrets.
Therefore, we consider TAA to be the set of parts which have MDS_NO but
lack TAA_NO.
---8<---

The simplifying fact is that vulnerability to TAA doesn't matter on CPUs
which don't advertise MDS_NO, because you're already doing VERW and
disabling hyperthreading, *and* can't turn TSX off even if it is actually
available.

People who were not taking MDS mitigations in the first place won't
change their minds because of TAA, either.

~Andrew


^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 17:39   ` Andrew Cooper
@ 2019-10-24 19:45     ` Borislav Petkov
  2019-10-24 19:59       ` Josh Poimboeuf
  2019-10-24 20:07       ` Andrew Cooper
  2019-10-24 19:47     ` [MODERATED] " Pawan Gupta
  1 sibling, 2 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 19:45 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 06:39:35PM +0100, speck for Andrew Cooper wrote:
> On 23/10/2019 10:01, speck for Pawan Gupta wrote:
> > +	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
> > +		tsx_disable();
> > +
> > +		/*
> > +		 * tsx_disable() will change the state of the
> > +		 * RTM CPUID bit.  Clear it here since it is now
> > +		 * expected to be not set.
> > +		 */
> > +		setup_clear_cpu_cap(X86_FEATURE_RTM);
> 
> This same argument applies to HLE, and it would be weird for
> pre-TSX_CTRL CPUs with tsx=off to report HLE but not RTM in /proc/cpuid

Right, the correct fix for that would be to run tsx_init() before we
read the CPUID leaves in get_cpu_cap(), because then it'll load the
proper bits already and we won't have to clear anything - the MSR write
would've cleared both CPUID bits by that point.

I just checked, and it even happens globally:

$ ./cpuid -r | grep -E "^\s+0x00000007" | awk '{ print $6 }' | uniq
edx=0xbc000400

Bits 4 (HLE) and 11 (RTM) are cleared on all CPUs.

I'll try this tomorrow to check whether it would even work that early.

If there's issues with it, then we'll have to do the above thing and
clear HLE too, by hand.

> Furthermore, while grepping through the tree, I found
> 
> events/intel/lbr.c-267-static inline bool
> lbr_from_signext_quirk_needed(void)
> events/intel/lbr.c-268-{
> events/intel/lbr.c-269- int lbr_format = x86_pmu.intel_cap.lbr_format;
> events/intel/lbr.c:270: bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
> events/intel/lbr.c-271-                    boot_cpu_has(X86_FEATURE_RTM);
> events/intel/lbr.c-272-
> events/intel/lbr.c-273- return !tsx_support && (lbr_desc[lbr_format] &
> LBR_TSX);
> 
> which is going to need an adjustment to avoid applying the quirks on
> non-broken hardware.

When both HLE and RTM bits are cleared, that should still be correct,
no? Or am I missing something?

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 17:39   ` Andrew Cooper
  2019-10-24 19:45     ` Borislav Petkov
@ 2019-10-24 19:47     ` Pawan Gupta
  1 sibling, 0 replies; 75+ messages in thread
From: Pawan Gupta @ 2019-10-24 19:47 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 06:39:35PM +0100, speck for Andrew Cooper wrote:
> On 23/10/2019 10:01, speck for Pawan Gupta wrote:
> > +	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
> > +		tsx_disable();
> > +
> > +		/*
> > +		 * tsx_disable() will change the state of the
> > +		 * RTM CPUID bit.  Clear it here since it is now
> > +		 * expected to be not set.
> > +		 */
> > +		setup_clear_cpu_cap(X86_FEATURE_RTM);
> 
> This same argument applies to HLE, and it would be weird for
> pre-TSX_CTRL CPUs with tsx=off to report HLE but not RTM in /proc/cpuid

I guess you mean post-TSX_CTRL CPUs. I think this was done intentionally
to avoid an extra cloud pool scenario. I will confirm whether this is
the case.

> Furthermore, while grepping through the tree, I found
> 
> events/intel/lbr.c-267-static inline bool
> lbr_from_signext_quirk_needed(void)
> events/intel/lbr.c-268-{
> events/intel/lbr.c-269- int lbr_format = x86_pmu.intel_cap.lbr_format;
> events/intel/lbr.c:270: bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
> events/intel/lbr.c-271-                    boot_cpu_has(X86_FEATURE_RTM);
> events/intel/lbr.c-272-
> events/intel/lbr.c-273- return !tsx_support && (lbr_desc[lbr_format] &
> LBR_TSX);
> 
> which is going to need an adjustment to avoid applying the quirks on
> non-broken hardware.

This may need an adjustment. I will get back to you on this one too.

Thanks,
Pawan

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 19:13           ` Andrew Cooper
@ 2019-10-24 19:49             ` Josh Poimboeuf
  2019-10-24 20:48               ` Andrew Cooper
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 19:49 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 08:13:44PM +0100, speck for Andrew Cooper wrote:
> On 24/10/2019 19:56, speck for Josh Poimboeuf wrote:
> > On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
> >> On 24/10/2019 17:43, speck for Borislav Petkov wrote:
> >>> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
> >>>> As I said before this would be a lot nicer if we could just add NO_TAA
> >>>> to the cpu_vuln_whitelist.
> >>> We're waiting for a list of CPUs from Intel here, right?
> >>>
> >> There is no model list required.  Vulnerability to TAA is calculable
> >> directly from existing architectural sources.
> > Can you elaborate?  Earlier I suggested relying on NO_MDS in
> > cpu_vuln_whitelist, but I believe you said that's not sufficient,
> > because some of the non-MDS models don't have TSX, in which case we
> > shouldn't set TAA_BUG.
> >
> > Which models are those?
> 
> Ok.  First things first.  Do you (and by this, I really mean Linux) want
> to consider TAA an overlapping set with MDS, or a disjoint set?
> 
> After considering this for ages, and particularly, how to explain it
> clearly to non-experts in Xen's security advisory, I chose to go with this:
> 
> ---8<---
> Vulnerability to TAA is a little complicated to quantify.
> 
> In the pipeline, it is just another way to get speculative access to
> stale load port, store buffer or fill buffer data, and therefore can be
> considered a superset of MDS.  On parts which predate MDS_NO, the
> existing VERW flushing will mitigate this sidechannel as well.
> 
> On parts which contain MDS_NO, the lack of VERW flushing means that an
> attacker can still target microarchitectural buffers to leak secrets.
> Therefore, we consider TAA to be the set of parts which have MDS_NO but
> lack TAA_NO.
> ---8<---
> 
> The simplifying fact is that vulnerability to TAA doesn't matter on CPUs
> which don't advertise MDS_NO, because you're already doing VERW and
> disabling hyperthreading, *and* can't turn TSX off if it actually available.
> 
> People who were not taking MDS mitigations in the first place won't
> change their minds because of TAA, either.

Good question.

The current Linux patches consider them overlapping.  But it _might_
be easier to communicate if we considered them disjoint.  I don't know
if there's a good answer, but at this point it might be easiest to
stick with our current overlapping approach.

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-24 18:20       ` Jiri Kosina
@ 2019-10-24 19:53         ` Borislav Petkov
  2019-10-24 20:02           ` Josh Poimboeuf
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 19:53 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 08:20:42PM +0200, speck for Jiri Kosina wrote:
> On Thu, 24 Oct 2019, speck for Borislav Petkov wrote:
> 
> > "Disable TSX if the CPU is affected by the TSX Async Abort (TAA)
> > vulnerability and microcode provides a special MSR - TSX_CTRL_MSR -
> > which provides the required TSX control knobs. On MDS-affected parts
> > where VERW takes care of the TAA vulnerability, that controlling MSR is
> > not present and thus TSX cannot be disabled there."
> 
> This is true if you ignore hyperthreading.
> 
> On SMT systems, TSX disable is 100% complete mitigation, while VERW
> clearing is not.

So why is our default this then?

static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;

and we do the TAA_MITIGATION_TSX_DISABLED thing only if TSX has
been disabled earlier?

Because of those MDS_NO=0 machines which don't get the TSX_CTRL MSR so
that TSX cannot be disabled there?

Are some of those machines SMT?

Because if so, we *must* disable SMT unconditionally to mitigate TAA
completely there... methinks.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 19:45     ` Borislav Petkov
@ 2019-10-24 19:59       ` Josh Poimboeuf
  2019-10-24 20:05         ` Borislav Petkov
  2019-10-24 20:07       ` Andrew Cooper
  1 sibling, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 19:59 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 09:45:03PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 06:39:35PM +0100, speck for Andrew Cooper wrote:
> > On 23/10/2019 10:01, speck for Pawan Gupta wrote:
> > > +	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
> > > +		tsx_disable();
> > > +
> > > +		/*
> > > +		 * tsx_disable() will change the state of the
> > > +		 * RTM CPUID bit.  Clear it here since it is now
> > > +		 * expected to be not set.
> > > +		 */
> > > +		setup_clear_cpu_cap(X86_FEATURE_RTM);
> > 
> > This same argument applies to HLE, and it would be weird for
> > pre-TSX_CTRL CPUs with tsx=off to report HLE but not RTM in /proc/cpuid
> 
> Right, the correct fix for that would be to run tsx_init() before we
> read CPUID leafs in get_cpu_cap() because then it'll load the proper
> bits already and we won't have to clear anything - the MSR write
> would've cleared both CPUID bits already.
> 
> I just checked that it happens globally even:
> 
> $ ./cpuid -r | grep -E "^\s+0x00000007" | awk '{ print $6 }' | uniq
> edx=0xbc000400
> 
> Bits 4 (HLE) and 11 (RTM) are cleared on all CPUs.
> 
> I'll try this tomorrow to check whether it would even work that early.
> 
> If there's issues with it, then we'll have to do the above thing and
> clear HLE too, by hand.

But pre-TSX_CTRL CPUs won't reach the above code, because of the
tsx_ctrl_is_supported() check at the beginning of tsx_init().

And post-TSX_CTRL CPUs will have the HLE bit already cleared by
microcode.

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-24 19:53         ` Borislav Petkov
@ 2019-10-24 20:02           ` Josh Poimboeuf
  2019-10-24 20:08             ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 20:02 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 09:53:16PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 08:20:42PM +0200, speck for Jiri Kosina wrote:
> > On Thu, 24 Oct 2019, speck for Borislav Petkov wrote:
> > 
> > > "Disable TSX if the CPU is affected by the TSX Async Abort (TAA)
> > > vulnerability and microcode provides a special MSR - TSX_CTRL_MSR -
> > > which provides the required TSX control knobs. On MDS-affected parts
> > > where VERW takes care of the TAA vulnerability, that controlling MSR is
> > > not present and thus TSX cannot be disabled there."
> > 
> > This is true if you ignore hyperthreading.
> > 
> > On SMT systems, TSX disable is 100% complete mitigation, while VERW
> > clearing is not.
> 
> So why is our default this then?
> 
> static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
> 
> and we only do the TAA_MITIGATION_TSX_DISABLED thing only if TSX has
> been disabled earlier?
> 
> Because of those MDS_NO=0 machines which don't get the TSX_CTRL MSR so
> that TSX cannot be disabled there?
> 
> Are some of those machines SMT?
> 
> Because if so, we *must* disable SMT unconditionally to mitigate TAA
> completely there... methinks.

For the same reasons we kept SMT enabled for L1TF and MDS...  We don't
want to break existing SMT workloads.

-- 
Josh

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 19:59       ` Josh Poimboeuf
@ 2019-10-24 20:05         ` Borislav Petkov
  2019-10-24 20:14           ` Josh Poimboeuf
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 20:05 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 02:59:59PM -0500, speck for Josh Poimboeuf wrote:
> But pre-TSX_CTRL CPUs won't reach the above code, because of the
> tsx_ctrl_is_supported() check at the beginning of tsx_init().

Yes, pre-TSX_CTRL parts won't have those CPUID bits cleared because they
cannot be cleared anyway - the MSR is not there.

And even if we clear the corresponding X86_FEATURE_ bits, nothing is
stopping people from using TSX - they won't query /proc/cpuinfo but
CPUID directly. Hell, they can even try the instructions and see if they
fault or not, without querying any feature bits.

> And post-TSX_CTRL CPUs will have the HLE bit already cleared by
> microcode.

Yes.

In both cases, the X86_FEATURE_* bits mirror what's in CPUID. At least
for the two TSX bits: HLE and RTM.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 19:45     ` Borislav Petkov
  2019-10-24 19:59       ` Josh Poimboeuf
@ 2019-10-24 20:07       ` Andrew Cooper
  2019-10-24 20:17         ` Borislav Petkov
  1 sibling, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 20:07 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 3330 bytes --]

On 24/10/2019 20:45, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 06:39:35PM +0100, speck for Andrew Cooper wrote:
>> On 23/10/2019 10:01, speck for Pawan Gupta wrote:
>>> +	if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
>>> +		tsx_disable();
>>> +
>>> +		/*
>>> +		 * tsx_disable() will change the state of the
>>> +		 * RTM CPUID bit.  Clear it here since it is now
>>> +		 * expected to be not set.
>>> +		 */
>>> +		setup_clear_cpu_cap(X86_FEATURE_RTM);
>> This same argument applies to HLE, and it would be weird for
>> pre-TSX_CTRL CPUs with tsx=off to report HLE but not RTM in /proc/cpuid
> Right, the correct fix for that would be to run tsx_init() before we
> read CPUID leafs in get_cpu_cap() because then it'll load the proper
> bits already and we won't have to clear anything - the MSR write
> would've cleared both CPUID bits already.
>
> I just checked that it happens globally even:
>
> $ ./cpuid -r | grep -E "^\s+0x00000007" | awk '{ print $6 }' | uniq
> edx=0xbc000400
>
> Bits 4 (HLE) and 11 (RTM) are cleared on all CPUs.
>
> I'll try this tomorrow to check whether it would even work that early.
>
> If there's issues with it, then we'll have to do the above thing and
> clear HLE too, by hand.

On Xen, I've juggled things such that we load microcode, then interpret
tsx= if the user has provided it (taking care to always write
MSR_TSX_CTRL if it is available, to discard whatever settings firmware
or kexec left), before querying CPUID.

Later, the spec-ctrl= interpretation happens, which might choose to turn
off TSX due to TAA, which then has to modify MSR_TSX_CTRL and force
clear the bits in the policy.

Given that speculative mitigations rely heavily on CPUID, I can't reason
about a clean way to disentangle this, but the above seems to be the
least complicated algorithm.

>> Furthermore, while grepping through the tree, I found
>>
>> events/intel/lbr.c-267-static inline bool
>> lbr_from_signext_quirk_needed(void)
>> events/intel/lbr.c-268-{
>> events/intel/lbr.c-269- int lbr_format = x86_pmu.intel_cap.lbr_format;
>> events/intel/lbr.c:270: bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
>> events/intel/lbr.c-271-                    boot_cpu_has(X86_FEATURE_RTM);
>> events/intel/lbr.c-272-
>> events/intel/lbr.c-273- return !tsx_support && (lbr_desc[lbr_format] &
>> LBR_TSX);
>>
>> which is going to need an adjustment to avoid applying the quirks on
>> non-broken hardware.
> When both HLE and RTM bits are cleared, that should still be correct,
> no? Or am I missing something?

On Haswell and Broadwell, the microcode which turned HLE/RTM off in the
pipeline left the LBR MSRs in a state where you can't context switch the
value, because they would yield a value via RDMSR which WRMSR faulted
on, because the two operations had an asymmetric view of how the top
bits of metadata should be interpreted, given some TSX-related metadata
and a sign extended linear address.

On Skylake where you can't actually turn RTM off, but we may hide
FEATURE_RTM/HLE, the above quirk is probably not true.

On Cascadelake, who knows?  RTM is being turned off in the pipeline, but
maybe the HSX/BWX bug has been fixed, or maybe it is being turned off in
a different way, or ...

~Andrew


^ permalink raw reply	[flat|nested] 75+ messages in thread

* [MODERATED] Re: [PATCH 7/9] TAA 7
  2019-10-24 20:02           ` Josh Poimboeuf
@ 2019-10-24 20:08             ` Borislav Petkov
  0 siblings, 0 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 20:08 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 03:02:33PM -0500, speck for Josh Poimboeuf wrote:
> For the same reasons we kept SMT enabled for L1TF and MDS...  We don't
> want to break existing SMT workloads.

Should we add that to the above "auto" option text?

I mean, it's not like it is a new decision to keep SMT enabled in the
face of a not fully mitigated vuln...

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:05         ` Borislav Petkov
@ 2019-10-24 20:14           ` Josh Poimboeuf
  2019-10-24 20:36             ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 20:14 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:05:58PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 02:59:59PM -0500, speck for Josh Poimboeuf wrote:
> > But pre-TSX_CTRL CPUs won't reach the above code, because of the
> > tsx_ctrl_is_supported() check at the beginning of tsx_init().
> 
> Yes, pre-TSX_CTRL won't have those CPUID bits cleared because they
> cannot anyway - MSR is not there.
> 
> And even if we clear the corresponding X86_FEATURE_ bits, nothing is
> stopping people from using TSX - they won't query /proc/cpuinfo but
> CPUID directly. Hell, they can even try the instructions and see if they
> fault or not, without querying any feature bits.
> 
> > And post-TSX_CTRL CPUs will have the HLE bit already cleared by
> > microcode.
> 
> Yes.
> 
> In both cases, the X86_FEATURE_* bits mirror what's in CPUID. At least
> for the two TSX bits: HLE and RTM.

Actually, according to the patches, with TSX_CTRL CPUs, HLE is
unconditionally disabled, but still enumerated as present in CPUID
(why???)

If that's the case then it sounds like X86_FEATURE_HLE needs to be
forced clear for TSX_CTRL CPUs?

-- 
Josh


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:07       ` Andrew Cooper
@ 2019-10-24 20:17         ` Borislav Petkov
  2019-10-24 22:38           ` Andrew Cooper
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 20:17 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 09:07:27PM +0100, speck for Andrew Cooper wrote:
> On Xen, I've juggled things such that we load microcode, then interpret
> tsx= if the user has provided it (taking care to always write
> MSR_TSX_CTRL if it is available, to discard whatever settings firmware
> or kexec left), before querying CPUID.

Yap, wanna try the same thing, in that exact order.

> Later, the spec-ctrl= interpretation happens, which might choose to turn
> off TSX due to TAA, which then has to modify MSR_TSX_CTRL and force
> clear the bits in the policy.

Well, the kernel doesn't reeval CPUID feature bits in that case because
it has gone on booting and enabled all kinds of feature supporting code.
This is the reason why the whole late microcode loading is such a bad
thing to do.

> Given that speculative mitigations rely heavily on CPUID, I can't reason
> about a clean way to disentangle this, but the above seems to be the
> least complicated algorithm.

Right, considering our CPU feature handling code doesn't do any
reevaluation of features, nor does it "reload" support for newly
appearing or disappearing features, the only ordering you can do is

1. Load microcode and toggle MSRs or do whatever else modifies CPUID
leaves.

2. Read, cache and act upon those feature leaves.

That's it, feature bits are cast in stone.

> On Haswell and Broadwell, the microcode which turned HLE/RTM off in the
> pipeline left the LBR MSRs in a state where you can't context switch the
> value, because they would yield a value via RDMSR which WRMSR faulted
> on, because the two operations had an asymmetric view of how the top
> bits of metadata should be interpreted, given some TSX-related metadata
> and a sign extended linear address.
> 
> On Skylake where you can't actually turn RTM off, but we may hide
> FEATURE_RTM/HLE, the above quirk is probably not true.

Huh? How is that possible? TSX_CTRL has defined only bit 1 there, the CPUID
enumeration bit, and bit 0 doesn't do any RTM disabling? Srsly?!

> On Cascadelake, who knows?  RTM is being turned off in the pipeline, but
> maybe the HSX/BWX bug has been fixed, or maybe it is being turned off in
> a different way, or ...

Right, I guess we'll deal with any perfcounters fallout in public, when
the stuff releases...

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:14           ` Josh Poimboeuf
@ 2019-10-24 20:36             ` Borislav Petkov
  2019-10-24 20:43               ` Andrew Cooper
  2019-10-24 20:44               ` Josh Poimboeuf
  0 siblings, 2 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 20:36 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 03:14:03PM -0500, speck for Josh Poimboeuf wrote:
> Actually, according to the patches, with TSX_CTRL CPUs, HLE is
> unconditionally disabled, but still enumerated as present in CPUID
> (why???)

I found it: in an older email thread Andrew talks about migration
compatibility and HLE. Dunno if we care but maybe someone else has a
better idea.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:36             ` Borislav Petkov
@ 2019-10-24 20:43               ` Andrew Cooper
  2019-10-24 20:55                 ` Borislav Petkov
  2019-10-24 20:44               ` Josh Poimboeuf
  1 sibling, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 20:43 UTC (permalink / raw)
  To: speck


On 24/10/2019 21:36, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 03:14:03PM -0500, speck for Josh Poimboeuf wrote:
>> Actually, according to the patches, with TSX_CTRL CPUs, HLE is
>> unconditionally disabled, but still enumerated as present in CPUID
>> (why???)
> I found it: in an older email thread Andrew talks about migration
> compatibility and HLE. Dunno if we care but maybe someone else has a
> better idea.

The first TSX problem in Haswell/Broadwell led to chaos when a
microcode update took feature bits out.

When the second problem came around, I understand from IRL conversations
that several vendors firmly expressed their opinion about whether
feature bits should disappear in microcode.

As HLE is a hint bit at best, it was left visible despite being disabled
behind the scenes.

~Andrew



* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:36             ` Borislav Petkov
  2019-10-24 20:43               ` Andrew Cooper
@ 2019-10-24 20:44               ` Josh Poimboeuf
  1 sibling, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-24 20:44 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 10:36:00PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 03:14:03PM -0500, speck for Josh Poimboeuf wrote:
> > Actually, according to the patches, with TSX_CTRL CPUs, HLE is
> > unconditionally disabled, but still enumerated as present in CPUID
> > (why???)
> 
> I found it: in an older email thread Andrew talks about migration
> compatibility and HLE. Dunno if we care but maybe someone else has a
> better idea.

Ok, so a question here is -- on top of all the other questions we seem
to be generating -- should we be unconditionally clearing
X86_FEATURE_HLE in the cases where HLE is disabled but not reported as
such, or will that break virt somehow?

-- 
Josh


* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 19:49             ` Josh Poimboeuf
@ 2019-10-24 20:48               ` Andrew Cooper
  2019-10-25  9:12                 ` Andrew Cooper
  0 siblings, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 20:48 UTC (permalink / raw)
  To: speck


On 24/10/2019 20:49, speck for Josh Poimboeuf wrote:
> On Thu, Oct 24, 2019 at 08:13:44PM +0100, speck for Andrew Cooper wrote:
>> On 24/10/2019 19:56, speck for Josh Poimboeuf wrote:
>>> On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
>>>> On 24/10/2019 17:43, speck for Borislav Petkov wrote:
>>>>> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
>>>>>> As I said before this would be a lot nicer if we could just add NO_TAA
>>>>>> to the cpu_vuln_whitelist.
>>>>> We're waiting for a list of CPUs from Intel here, right?
>>>>>
>>>> There is no model list required.  Vulnerability to TAA is calculable
>>>> directly from existing architectural sources.
>>> Can you elaborate?  Earlier I suggested relying on NO_MDS in
>>> cpu_vuln_whitelist, but I believe you said that's not sufficient,
>>> because some of the non-MDS models don't have TSX, in which case we
>>> shouldn't set TAA_BUG.
>>>
>>> Which models are those?
>> Ok.  First things first.  Do you (and by this, I really mean Linux) want
>> to consider TAA an overlapping set with MDS, or a disjoint set?
>>
>> After considering this for ages, and particularly, how to explain it
>> clearly to non-experts in Xen's security advisory, I chose to go with this:
>>
>> ---8<---
>> Vulnerability to TAA is a little complicated to quantify.
>>
>> In the pipeline, it is just another way to get speculative access to
>> stale load port, store buffer or fill buffer data, and therefore can be
>> considered a superset of MDS.  On parts which predate MDS_NO, the
>> existing VERW flushing will mitigate this sidechannel as well.
>>
>> On parts which contain MDS_NO, the lack of VERW flushing means that an
>> attacker can still target microarchitectural buffers to leak secrets.
>> Therefore, we consider TAA to be the set of parts which have MDS_NO but
>> lack TAA_NO.
>> ---8<---
>>
>> The simplifying fact is that vulnerability to TAA doesn't matter on CPUs
>> which don't advertise MDS_NO, because you're already doing VERW and
>> disabling hyperthreading, *and* can't turn TSX off even if it is actually available.
>>
>> People who were not taking MDS mitigations in the first place won't
>> change their minds because of TAA, either.
> Good question.
>
> The current Linux patches consider them overlapping.  But it _might_
> possibly be easier to communicate if we considered them disjoint.  I
> don't know if there's a good answer, but at this point it might be
> easiest to stick with our current overlapping approach.
>

I'll bring this up with the group.  I bet we are not the only people
wondering the same, and it won't do any downstream users any good if
they see conflicting descriptions from software vendors.

~Andrew



* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:43               ` Andrew Cooper
@ 2019-10-24 20:55                 ` Borislav Petkov
  0 siblings, 0 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-24 20:55 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 09:43:56PM +0100, speck for Andrew Cooper wrote:
> The first TSX problem in Haswell/Broadwell led to chaos when a
> microcode update took feature bits out.

Yeap, not worky with our current non-reentrant feature detection OSes.
:-P

> When the second problem came around, I understand from IRL conversations
> that several vendors firmly expressed their opinion about whether
> feature bits should disappear in microcode.
> 
> As HLE is a hint bit at best, it was left visible despite being disabled
> behind the scenes.

Yeah, and it being a hint bit only, I don't know whether we should care
about a box where RTM=0 but HLE=1.

I guess the relevant use case is big fat hypervisor. But perhaps the
easiest thing would be if the migration code gets told to ignore HLE.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 20:17         ` Borislav Petkov
@ 2019-10-24 22:38           ` Andrew Cooper
  2019-10-25  6:03             ` Pawan Gupta
  2019-10-25  7:17             ` Borislav Petkov
  0 siblings, 2 replies; 75+ messages in thread
From: Andrew Cooper @ 2019-10-24 22:38 UTC (permalink / raw)
  To: speck


On 24/10/2019 21:17, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 09:07:27PM +0100, speck for Andrew Cooper wrote:
>> On Xen, I've juggled things such that we load microcode, then interpret
>> tsx= if the user has provided it (taking care to always write
>> MSR_TSX_CTRL if it is available, to discard whatever settings firmware
>> or kexec left), before querying CPUID.
> Yap, wanna try the same thing, in that exact order.
>
>> Later, the spec-ctrl= interpretation happens, which might choose to turn
>> off TSX due to TAA, which then has to modify MSR_TSX_CTRL and force
>> clear the bits in the policy.
> Well, the kernel doesn't reeval CPUID feature bits in that case because
> it has gone on booting and enabled all kinds of feature supporting code.
> This is the reason why the whole late microcode loading is such a bad
> thing to do.

:)

I don't necessarily disagree, but the customers (who ultimately pay my
salary) want late microcode loading and livepatching, so we've delivered.

>
>> On Haswell and Broadwell, the microcode which turned HLE/RTM off in the
>> pipeline left the LBR MSRs in a state where you can't context switch the
>> value, because they would yield a value via RDMSR which WRMSR faulted
>> on, because the two operations had an asymmetric view of how the top
>> bits of metadata should be interpreted, given some TSX-related metadata
>> and a sign extended linear address.
>>
>> On Skylake where you can't actually turn RTM off, but we may hide
>> FEATURE_RTM/HLE, the above quirk is probably not true.
> Huh? How is that possible? TSX_CTRL has defined only bit 1 there, the CPUID
> enumeration bit, and bit 0 doesn't do any RTM disabling? Srsly?!

Skylake CPUs aren't getting TSX_CTRL, but force setting/clearing bits at
boot will affect later logic.  (Unless I'm being blind while reading the
patches, which is a distinct possibility).

>
>> On Cascadelake, who knows?  RTM is being turned off in the pipeline, but
>> maybe the HSX/BWX bug has been fixed, or maybe it is being turned off in
>> a different way, or ...
> Right, I guess we'll deal with any perfcounters fallout in public, when
> the stuff releases...

So, I remembered that I had already written a test case for this bug.

Initial experimentation shows that using TSX_CTRL to secure Cascade Lake
doesn't result in Haswell/Broadwell style GP faults, which is good
news.  I will be adjusting Xen's logic not to invoke the quirk on more
modern parts.

~Andrew



* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-23  9:30 ` [MODERATED] [PATCH 4/9] TAA 4 Pawan Gupta
  2019-10-24 15:32   ` [MODERATED] " Josh Poimboeuf
@ 2019-10-25  0:49   ` Pawan Gupta
  2019-10-25  7:36     ` Borislav Petkov
  1 sibling, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-25  0:49 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 11:30:45AM +0200, speck for Pawan Gupta wrote:
>  void cpu_bugs_smt_update(void)
>  {
> @@ -819,6 +916,19 @@ void cpu_bugs_smt_update(void)
>  		break;
>  	}
>  
> +	switch (taa_mitigation) {
> +	case TAA_MITIGATION_VERW:
> +	case TAA_MITIGATION_UCODE_NEEDED:
> +		if (sched_smt_active())
> +			pr_warn_once(TAA_MSG_SMT);
> +		/* TSX is enabled, apply MDS idle buffer clearing. */
> +		update_mds_branch_idle();

With partial clearing removed, this call can go away.

Thanks,
Pawan


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 22:38           ` Andrew Cooper
@ 2019-10-25  6:03             ` Pawan Gupta
  2019-10-25  7:25               ` Borislav Petkov
  2019-10-25  7:17             ` Borislav Petkov
  1 sibling, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-25  6:03 UTC (permalink / raw)
  To: speck

> >> On Haswell and Broadwell, the microcode which turned HLE/RTM off in the
> >> pipeline left the LBR MSRs in a state where you can't context switch the
> >> value, because they would yield a value via RDMSR which WRMSR faulted
> >> on, because the two operations had an asymmetric view of how the top
> >> bits of metadata should be interpreted, given some TSX-related metadata
> >> and a sign extended linear address.
> >>
> >> On Skylake where you can't actually turn RTM off, but we may hide
> >> FEATURE_RTM/HLE, the above quirk is probably not true.
> > Huh? How is that possible? TSX_CTRL has defined only bit 1 there, the CPUID
> > enumeration bit, and bit 0 doesn't do any RTM disabling? Srsly?!
> 
> Skylake CPUs aren't getting TSX_CTRL, but force setting/clearing bits at
> boot will affect later logic.  (Unless I'm being blind while reading the
> patches, which is a distinct possibility).

tsx_init() will not force bits at boot on Skylake because of
tsx_ctrl_is_supported() check.

Would adding this comment help?

----
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index 0969e6e9dff3..416dad3b8590 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -78,9 +78,19 @@ static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
 
 void __init tsx_init(void)
 {
 	char arg[5] = {};
 	int ret;
 
+	/*
+	 * On MDS_NO=0 CPUs tsx_init() would do nothing and simply return from
+	 * here.
+	 *
+	 * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a
+	 * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES
+	 * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get
+	 * MSR_IA32_TSX_CTRL even after the microcode update. tsx= cmdline
+	 * requests will do nothing on CPUs without MSR_IA32_TSX_CTRL support.
+	 */
 	if (!tsx_ctrl_is_supported())
 		return;
 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-24 22:38           ` Andrew Cooper
  2019-10-25  6:03             ` Pawan Gupta
@ 2019-10-25  7:17             ` Borislav Petkov
  2019-10-25  9:08               ` Andrew Cooper
  1 sibling, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-25  7:17 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 11:38:21PM +0100, speck for Andrew Cooper wrote:
> I don't necessarily disagree, but the customers (who ultimately pay my
> salary) want late microcode loading and livepatching, so we've delivered.

Yeah, you guys promised too much. How do you deal with userspace using
a feature when you wanna upgrade microcode which disables it? TSX might
not be a good example here because feature bits disappearing is still
ok: it doesn't fault, it would simply start aborting transactions
unconditionally. But what if it is a CPU feature which userspace is
actively using and it disappears underneath its feet all of a sudden?

Just upgrade the microcode and forget about it is not enough. I'm pretty
sure you'll have to "dance". But hey, you can buy almost everything with
money nowadays so... :-)

> Skylake CPUs aren't getting TSX_CTRL, but force setting/clearing bits at
> boot will affect later logic.  (Unless I'm being blind while reading the
> patches, which is a distinct possibility).

Yes, that's why I'm saying we should not blindly force set and clear
bits but mirror what CPUID is telling us. At least wrt TSX.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-25  6:03             ` Pawan Gupta
@ 2019-10-25  7:25               ` Borislav Petkov
  0 siblings, 0 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-25  7:25 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 11:03:38PM -0700, speck for Pawan Gupta wrote:
> tsx_init() will not force bits at boot on Skylake because of
> tsx_ctrl_is_supported() check.
> 
> Would adding this comment help?

Yes, I've added it to tsx_ctrl_is_supported(). Considering how confusing
this whole TAA/TSX situation is to people, having more comments at the
right spot explaining stuff is always prudent.

Thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-25  0:49   ` Pawan Gupta
@ 2019-10-25  7:36     ` Borislav Petkov
  0 siblings, 0 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-25  7:36 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 05:49:42PM -0700, speck for Pawan Gupta wrote:
> On Wed, Oct 23, 2019 at 11:30:45AM +0200, speck for Pawan Gupta wrote:
> >  void cpu_bugs_smt_update(void)
> >  {
> > @@ -819,6 +916,19 @@ void cpu_bugs_smt_update(void)
> >  		break;
> >  	}
> >  
> > +	switch (taa_mitigation) {
> > +	case TAA_MITIGATION_VERW:
> > +	case TAA_MITIGATION_UCODE_NEEDED:
> > +		if (sched_smt_active())
> > +			pr_warn_once(TAA_MSG_SMT);
> > +		/* TSX is enabled, apply MDS idle buffer clearing. */
> > +		update_mds_branch_idle();
> 
> With partial clearing removed, this call can go away.

It is gone now, thx.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 8/9] TAA 8
  2019-10-24 18:11       ` Josh Poimboeuf
  2019-10-24 18:55         ` Pawan Gupta
@ 2019-10-25  8:04         ` Borislav Petkov
  1 sibling, 0 replies; 75+ messages in thread
From: Borislav Petkov @ 2019-10-25  8:04 UTC (permalink / raw)
  To: speck

On Thu, Oct 24, 2019 at 01:11:20PM -0500, speck for Josh Poimboeuf wrote:
> Ditto, I guess whatever wording is used in kernel-parameters.txt can be
> cribbed for here.

All changed.

> That's the one, but yeah... it might not be future-complete, and even if
> it is, Intel probably doesn't want to disclose that anyway.

Right, let's shelve adding that table for later.

> TSX _might_ be disabled, depending...

Done.

> Right, update the wording?

Done.

> Right, the effects of the host mitigation options on the guest would be
> useful here.

Leaving that as is until the virt patch has been clarified with Paolo.

> I meant to say "if TSX is enabled".  Otherwise if TSX is disabled this
> option doesn't do anything

Done.

> > The MDS_NO=1 and future parts, I guess.
> 
> Right, that should be clarified.

Leaving that out for now until we agree on publishing such a table.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
-- 


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-25  7:17             ` Borislav Petkov
@ 2019-10-25  9:08               ` Andrew Cooper
  2019-10-27  7:48                 ` Borislav Petkov
  0 siblings, 1 reply; 75+ messages in thread
From: Andrew Cooper @ 2019-10-25  9:08 UTC (permalink / raw)
  To: speck


On 25/10/2019 08:17, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 11:38:21PM +0100, speck for Andrew Cooper wrote:
>> I don't necessarily disagree, but the customers (who ultimately pay my
>> salary) want late microcode loading and livepatching, so we've delivered.
> Yeah, you guys promised too much. How do you deal with userspace using
> a feature and you wanna upgrade microcode which disables it? TSX might
> not be a good example here

It's the perfect example here.  The answer is to request that Intel
change bit 0's behaviour from causing #UDs to causing aborts.

The first version of this microcode was definitely not safe to late load.

> because feature bits disappearing is still ok,

Some userspace apparently gets confused when CPUID changes behind its
back, which is why the CPUID control in bit 1 was split out from an
otherwise monolithic bit 0.

At late load, choose (or not) to use bit 0 only.
At boot, choose (or not) both bits 0 and 1 in unison.

> it doesn't fault but it would simply start aborting transactions
> unconditionally but what if it is a CPU feature which userspace is
> actively using and it disappears underneath its feet all of a sudden?

No one has guaranteed that all microcode ever in the future is going to
be safe to use on a running system.  If it really can't be made to be
safe, then customers are really going to have to reboot.

However, there is a lot of effort going into trying to make sure that
fixes such as this one are made safe for late loading.

To give a concrete example, we have customers whose elapsed time for a
reboot, conforming to SLAs, is in excess of 9 months, and new microcode
with critical fixes is coming out faster than that.  I bet that I'm not
the only person on this list with this type of customer.

> Just upgrade the microcode and forget about it is not enough. I'm pretty
> sure you'll have to "dance". But hey, you can buy almost everything with
> money nowadays so... :-)

Yeah, you have to dance, but the constituent pieces are already around,
so it's not too bad.

>
>> Skylake CPUs aren't getting TSX_CTRL, but force setting/clearing bits at
>> boot will affect later logic.  (Unless I'm being blind while reading the
>> patches, which is a distinct possibility).
> Yes, that's why I'm saying we should not blindly force set and clear
> bits but mirror what CPUID is telling us. At least wrt TSX.
>

Ah - in which case I agree.  Sorry for the noise.

~Andrew



* [MODERATED] Re: [PATCH 4/9] TAA 4
  2019-10-24 20:48               ` Andrew Cooper
@ 2019-10-25  9:12                 ` Andrew Cooper
  0 siblings, 0 replies; 75+ messages in thread
From: Andrew Cooper @ 2019-10-25  9:12 UTC (permalink / raw)
  To: speck


On 24/10/2019 21:48, speck for Andrew Cooper wrote:
> On 24/10/2019 20:49, speck for Josh Poimboeuf wrote:
>> On Thu, Oct 24, 2019 at 08:13:44PM +0100, speck for Andrew Cooper wrote:
>>> On 24/10/2019 19:56, speck for Josh Poimboeuf wrote:
>>>> On Thu, Oct 24, 2019 at 07:23:57PM +0100, speck for Andrew Cooper wrote:
>>>>> On 24/10/2019 17:43, speck for Borislav Petkov wrote:
>>>>>> On Thu, Oct 24, 2019 at 10:32:40AM -0500, speck for Josh Poimboeuf wrote:
>>>>>>> As I said before this would be a lot nicer if we could just add NO_TAA
>>>>>>> to the cpu_vuln_whitelist.
>>>>>> We're waiting for a list of CPUs from Intel here, right?
>>>>>>
>>>>> There is no model list required.  Vulnerability to TAA is calculable
>>>>> directly from existing architectural sources.
>>>> Can you elaborate?  Earlier I suggested relying on NO_MDS in
>>>> cpu_vuln_whitelist, but I believe you said that's not sufficient,
>>>> because some of the non-MDS models don't have TSX, in which case we
>>>> shouldn't set TAA_BUG.
>>>>
>>>> Which models are those?
>>> Ok.  First things first.  Do you (and by this, I really mean Linux) want
>>> to consider TAA an overlapping set with MDS, or a disjoint set?
>>>
>>> After considering this for ages, and particularly, how to explain it
>>> clearly to non-experts in Xen's security advisory, I chose to go with this:
>>>
>>> ---8<---
>>> Vulnerability to TAA is a little complicated to quantify.
>>>
>>> In the pipeline, it is just another way to get speculative access to
>>> stale load port, store buffer or fill buffer data, and therefore can be
>>> considered a superset of MDS.  On parts which predate MDS_NO, the
>>> existing VERW flushing will mitigate this sidechannel as well.
>>>
>>> On parts which contain MDS_NO, the lack of VERW flushing means that an
>>> attacker can still target microarchitectural buffers to leak secrets.
>>> Therefore, we consider TAA to be the set of parts which have MDS_NO but
>>> lack TAA_NO.
>>> ---8<---
>>>
>>> The simplifying fact is that vulnerability to TAA doesn't matter on CPUs
>>> which don't advertise MDS_NO, because you're already doing VERW and
>>> disabling hyperthreading, *and* can't turn TSX off even if it is actually available.
>>>
>>> People who were not taking MDS mitigations in the first place won't
>>> change their minds because of TAA, either.
>> Good question.
>>
>> The current Linux patches consider them overlapping.  But it _might_
>> possibly be easier to communicate if we considered them disjoint.  I
>> don't know if there's a good answer, but at this point it might be
>> easiest to stick with our current overlapping approach.
>>
> I'll bring this up with the group.  I bet we are not the only people
> wondering the same, and it won't do any downstream users any good if
> they see conflicting descriptions from software vendors.

Preliminary feedback suggests that some vendors are definitely going
with TAA being a disjoint set to MDS, and other vendors are leaning in
that direction.

We should wait a bit longer for more views and opinions, as I think my
question was fairly late US time anyway yesterday.

~Andrew



* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH 9/9] TAA 9
  2019-10-24 16:58     ` Borislav Petkov
@ 2019-10-25 10:47       ` Michal Hocko
  2019-10-25 13:05       ` [MODERATED] " Josh Poimboeuf
  1 sibling, 0 replies; 75+ messages in thread
From: Michal Hocko @ 2019-10-25 10:47 UTC (permalink / raw)
  To: speck

On Thu 24-10-19 18:58:28, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 11:10:16AM -0500, speck for Josh Poimboeuf wrote:
> > I think this is misleading.  tsx=on doesn't make you vulnerable to TAA,
> > because we still the TAA mitigation.
> 
> Changed to:
> 
>           Therefore TSX is not enabled by default (aka tsx=off). An admin
>           might override this decision via the tsx=on command line parameter.
>           Even with TSX enabled, the kernel will attempt to enable the best
>           possible TAA mitigation setting depending on the microcode available
>           for the particular machine.
> 
> > tsx=on vs tsx=auto is not a security consideration, but rather a
> > performance one.  With tsx=auto you disable TSX on some TAA-affected
> > CPUs so you don't have to pay the performance penalty of the MDS
> > mitigations.
> 
> By performance penalty you mean, when you have TSX disabled on those
> parts, you'll save yourself the VERW which should be taking care of TAA
> too?
> 
> > 
> > > +
> > > +config X86_INTEL_TSX_MODE_OFF
> > > +	bool "off"
> > > +	help
> > > +	  TSX is always disabled - equals tsx=off command line parameter.
> > 
> > Define "always" :-)
> 
> Changed to:
> 
> "TSX is disabled if possible - equals to tsx=off command line parameter."

Thanks for the refinements. The changelog and the help text were written at
the time when all this was still clear as mud.
-- 
Michal Hocko
SUSE Labs


* [MODERATED] Re: [PATCH 9/9] TAA 9
  2019-10-24 16:58     ` Borislav Petkov
  2019-10-25 10:47       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
@ 2019-10-25 13:05       ` Josh Poimboeuf
  1 sibling, 0 replies; 75+ messages in thread
From: Josh Poimboeuf @ 2019-10-25 13:05 UTC (permalink / raw)
  To: speck

Missed these questions the first time around...

On Thu, Oct 24, 2019 at 06:58:28PM +0200, speck for Borislav Petkov wrote:
> On Thu, Oct 24, 2019 at 11:10:16AM -0500, speck for Josh Poimboeuf wrote:
> > I think this is misleading.  tsx=on doesn't make you vulnerable to TAA,
> > because we still have the TAA mitigation.
> 
> Changed to:
> 
>           Therefore TSX is not enabled by default (aka tsx=off). An admin
>           might override this decision via the tsx=on command line parameter.
>           Even with TSX enabled, the kernel will attempt to enable the best
>           possible TAA mitigation setting depending on the microcode available
>           for the particular machine.
> 
> > tsx=on vs tsx=auto is not a security consideration, but rather a
> > performance one.  With tsx=auto you disable TSX on some TAA-affected
> > CPUs so you don't have to pay the performance penalty of the MDS
> > mitigations.
> 
> By performance penalty you mean, when you have TSX disabled on those
> parts, you'll save yourself the VERW which should be taking care of TAA
> too?

Right.

> > > +config X86_INTEL_TSX_MODE_OFF
> > > +	bool "off"
> > > +	help
> > > +	  TSX is always disabled - equals tsx=off command line parameter.
> > 
> > Define "always" :-)
> 
> Changed to:
> 
> "TSX is disabled if possible - equals to tsx=off command line parameter."
> 
> > Not exactly :-)  This also leaves TSX enabled on MDS vulnerable parts.
> 
> Your point being, the MD_CLEAR which takes care of TAA too?

You overtrimmed :-) Going back, I believe this comment was about

config X86_INTEL_TSX_MODE_AUTO
	bool "auto"
	help
	  TSX is enabled on TSX capable HW that is believed to be safe against
	  side channel attacks - equals tsx=auto command line parameter.

My point is that this option makes it sound like TSX is *only* enabled
on non-TAA parts, when in fact it's also enabled on those TAA parts
which are also vulnerable to MDS (this of course assumes the
"overlapping" model described by Andrew).

-- 
Josh


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-25  9:08               ` Andrew Cooper
@ 2019-10-27  7:48                 ` Borislav Petkov
  2019-10-27  7:49                   ` [MODERATED] [AUTOREPLY] [MODERATED] [AUTOREPLY] Automatic reply: " James, Hengameh M
  0 siblings, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-10-27  7:48 UTC (permalink / raw)
  To: speck

On Fri, Oct 25, 2019 at 10:08:41AM +0100, speck for Andrew Cooper wrote:
> It's the perfect example here.  The answer is: by requesting that Intel
> change bit 0's behaviour from causing #UDs to causing aborts.
> 
> The first version of this microcode was definitely not safe to late load.

Yap, exactly, and that is a problem: you can't always get the microcode
update to do what you want. IOW, we need some sort of communication
protocol between CPU vendors and the OS which tells the OS what the
microcode provides, so the OS can determine whether late loading would
succeed.

tglx proposed a table of CPUID feature bits changing, for example, and
that is a first step in the right direction.

Changing the kernel to "reload" features is not trivial.

> Some userspace apparently gets confused when CPUID changes behind its
> back, which is why the CPUID control in bit 1 was split out from an
> otherwise monolithic bit 0.
> 
> At late load, choose (or not) to use bit 0 only.
> At boot, choose (or not) both bits 0 and 1 in unison.

Yes, and the fact that there are two bits is practically dictated by the
inability of the OS to handle late loading optimally. Otherwise, you
would've had only one bit.

All I'm saying is, late loading is such a problem that it even dictates
what microcode should look like.

> No one has guaranteed that all microcode ever in the future is going to
> be safe to use on a running system.  If it really can't be made to be
> safe, then customers are really going to have to reboot.
> 
> However, there is a lot of effort going into trying to make sure that
> fixes such as this one are made safe for late loading.
> 
> To give a concrete example, we have customers whose elapsed time for a
> reboot, conforming to SLAs, is in excess of 9 months, and new microcode
> with critical fixes is coming out faster than that.  I bet that I'm not
> the only person on this list with this type of customer.

Oh I bet.

And I really think in the age of containers and live migration and all
that crap, what we should push for is freeing the host for a proper kernel
and microcode update and then rebooting it, while the guests are moved
to another instance which has been rebooted already. I know, that is
hard and it has its warts too, but late loading is a PITA already.
And rebooting the host as part of maintenance downtime would give you a
lot more advantages.

But that's a whole other topic for another time and place.

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg


* [MODERATED] [AUTOREPLY] [MODERATED] [AUTOREPLY] Automatic reply: [PATCH 3/9] TAA 3
  2019-10-27  7:48                 ` Borislav Petkov
@ 2019-10-27  7:49                   ` James, Hengameh M
  0 siblings, 0 replies; 75+ messages in thread
From: James, Hengameh M @ 2019-10-27  7:49 UTC (permalink / raw)
  To: speck




* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
  2019-10-24 15:30   ` [MODERATED] " Josh Poimboeuf
  2019-10-24 17:39   ` Andrew Cooper
@ 2019-10-30 13:28   ` Greg KH
  2019-10-30 14:48     ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  2019-10-30 17:24     ` [MODERATED] " Pawan Gupta
  2 siblings, 2 replies; 75+ messages in thread
From: Greg KH @ 2019-10-30 13:28 UTC (permalink / raw)
  To: speck

On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
>  default
> 

As late as this is, I really would like to change the name here.  We
already have a powerpc command line option to disable this for those
chips, and we should use the same name here as well.  I talked a bit
about this with some powerpc people and they really want to disable this
option as well, much like is being done here, for lots of other,
non-security reasons.

So can we please go back to the generic name so that we have some
unification, making userspace stuff more uniform?

thanks,

greg k-h


* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH 3/9] TAA 3
  2019-10-30 13:28   ` Greg KH
@ 2019-10-30 14:48     ` Michal Hocko
  2019-10-30 17:24     ` [MODERATED] " Pawan Gupta
  1 sibling, 0 replies; 75+ messages in thread
From: Michal Hocko @ 2019-10-30 14:48 UTC (permalink / raw)
  To: speck

On Wed 30-10-19 14:28:58, speck for Greg KH wrote:
> On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> >  default
> > 
> 
> As late as this is,

indeed

> I really would like to change the name here.  We
> already have a powerpc command line option to disable this for those
> chips, and we should use the same name here as well.

Which option are we talking about?

> I talked a bit
> about this with some powerpc people and they really want to disable this
> option as well, much like is being done here, for lots of other,
> non-security reasons.
> 
> So can we please go back to the generic name so that we have some
> unification, making userspace stuff more uniform?

What do you propose?
-- 
Michal Hocko
SUSE Labs


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-30 13:28   ` Greg KH
  2019-10-30 14:48     ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
@ 2019-10-30 17:24     ` Pawan Gupta
  2019-10-30 19:27       ` Greg KH
  1 sibling, 1 reply; 75+ messages in thread
From: Pawan Gupta @ 2019-10-30 17:24 UTC (permalink / raw)
  To: speck

On Wed, Oct 30, 2019 at 02:28:58PM +0100, speck for Greg KH wrote:
> On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> >  default
> > 
> 
> As late as this is, I really would like to change the name here.  We
> already have a powerpc command line option to disable this for those
> chips, and we should use the same name here as well.

I think powerpc calls it "ppc_tm=". We obviously can't change to this.

	ppc_tm=		[PPC]
			Format: {"off"}
			Disable Hardware Transactional Memory

> I talked a bit
> about this with some powerpc people and they really want to disable this
> option as well, much like is being done here, for lots of other,
> non-security reasons.
> 
> So can we please go back to the generic name so that we have some
> unification, making userspace stuff more uniform?

I couldn't find an existing generic name for this.

Thanks,
Pawan


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-10-30 17:24     ` [MODERATED] " Pawan Gupta
@ 2019-10-30 19:27       ` Greg KH
  2019-10-30 19:44         ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  0 siblings, 1 reply; 75+ messages in thread
From: Greg KH @ 2019-10-30 19:27 UTC (permalink / raw)
  To: speck

On Wed, Oct 30, 2019 at 10:24:03AM -0700, speck for Pawan Gupta wrote:
> On Wed, Oct 30, 2019 at 02:28:58PM +0100, speck for Greg KH wrote:
> > On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> > >  default
> > > 
> > 
> > As late as this is, I really would like to change the name here.  We
> > already have a powerpc command line option to disable this for those
> > chips, and we should use the same name here as well.
> 
> I think powerpc calls it "ppc_tm=". We obviously can't change to this.
> 
> 	ppc_tm=		[PPC]
> 			Format: {"off"}
> 			Disable Hardware Transactional Memory

No, but we can use "tm", and PPC said they would drop the "ppc_" prefix
and use the generic flag as well.  "Transactional Memory" is the generic
term for this from what I can tell.

thanks,

greg k-h


* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH 3/9] TAA 3
  2019-10-30 19:27       ` Greg KH
@ 2019-10-30 19:44         ` Michal Hocko
  2019-11-01  9:35           ` Greg KH
  0 siblings, 1 reply; 75+ messages in thread
From: Michal Hocko @ 2019-10-30 19:44 UTC (permalink / raw)
  To: speck

On Wed 30-10-19 20:27:58, speck for Greg KH wrote:
> On Wed, Oct 30, 2019 at 10:24:03AM -0700, speck for Pawan Gupta wrote:
> > On Wed, Oct 30, 2019 at 02:28:58PM +0100, speck for Greg KH wrote:
> > > On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> > > >  default
> > > > 
> > > 
> > > As late as this is, I really would like to change the name here.  We
> > > already have a powerpc command line option to disable this for those
> > > chips, and we should use the same name here as well.
> > 
> > I think powerpc calls it "ppc_tm=". We obviously can't change to this.
> > 
> > 	ppc_tm=		[PPC]
> > 			Format: {"off"}
> > 			Disable Hardware Transactional Memory
> 
> No, but we can use "tm", and PPC said they would drop the "ppc_" prefix
> and use the generic flag as well.  "Transactional Memory" is the generic
> term for this from what I can tell.

Are those two (tsx and tm) going to have the same semantics? I have
my doubts TBH, considering how tricky tsx itself is.
-- 
Michal Hocko
SUSE Labs


* [MODERATED] Re: ***UNCHECKED*** Re: [PATCH 3/9] TAA 3
  2019-10-30 19:44         ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
@ 2019-11-01  9:35           ` Greg KH
  2019-11-01 13:15             ` [MODERATED] " Borislav Petkov
  2019-11-01 18:42             ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  0 siblings, 2 replies; 75+ messages in thread
From: Greg KH @ 2019-11-01  9:35 UTC (permalink / raw)
  To: speck

On Wed, Oct 30, 2019 at 08:44:40PM +0100, speck for Michal Hocko wrote:
> On Wed 30-10-19 20:27:58, speck for Greg KH wrote:
> > On Wed, Oct 30, 2019 at 10:24:03AM -0700, speck for Pawan Gupta wrote:
> > > On Wed, Oct 30, 2019 at 02:28:58PM +0100, speck for Greg KH wrote:
> > > > On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > > > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > > > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> > > > >  default
> > > > > 
> > > > 
> > > > As late as this is, I really would like to change the name here.  We
> > > > already have a powerpc command line option to disable this for those
> > > > chips, and we should use the same name here as well.
> > > 
> > > I think powerpc calls it "ppc_tm=". We can't obviously change to this.
> > > 
> > > 	ppc_tm=		[PPC]
> > > 			Format: {"off"}
> > > 			Disable Hardware Transactional Memory
> > 
> > No, but we can use "tm", and PPC said they would drop the "ppc_" prefix
> > and use the generic flag as well.  "Transactional Memory" is the generic
> > term for this from what I can tell.
> 
> Are those two (tsx and tm) going to have the same semantics? I have
> my doubts TBH, considering how tricky tsx itself is.

Ugh, turns out gmail was marking all of these threads as spam, that
answers my "why is this list so quiet?" question :(

Anyway, yes, TM should have the same semantics.  But, if it's too late at
the moment, we can always unify this later on if PPC really wants it.  I
was just passing on what those developers told me.  As the bundles are
all set to go now, I'm not going to press the issue.

thanks,

greg k-h


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-11-01  9:35           ` Greg KH
@ 2019-11-01 13:15             ` Borislav Petkov
  2019-11-01 14:33               ` Greg KH
  2019-11-01 18:42             ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
  1 sibling, 1 reply; 75+ messages in thread
From: Borislav Petkov @ 2019-11-01 13:15 UTC (permalink / raw)
  To: speck

On Fri, Nov 01, 2019 at 10:35:26AM +0100, speck for Greg KH wrote:
> Anyway, yes, TM should have the same semantics.  But, if it's too late at
> the moment, we can always unify this later on if PPC really wants it.

We can s/unify/add additional/ to minimize confusion. What I mean is,
we can add "tm=" as an additional option which coexists with the "tsx="
option on x86 because the patches would have been released already and we
would have communicated "tsx=" to people by then. So "tsx=" and "tm="
would do the same thing on x86.

Methinks.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG Nürnberg)


* [MODERATED] Re: [PATCH 3/9] TAA 3
  2019-11-01 13:15             ` [MODERATED] " Borislav Petkov
@ 2019-11-01 14:33               ` Greg KH
  0 siblings, 0 replies; 75+ messages in thread
From: Greg KH @ 2019-11-01 14:33 UTC (permalink / raw)
  To: speck

On Fri, Nov 01, 2019 at 02:15:48PM +0100, speck for Borislav Petkov wrote:
> On Fri, Nov 01, 2019 at 10:35:26AM +0100, speck for Greg KH wrote:
> > Anyway, yes, TM should have the same semantics.  But, if it's too late at
> > the moment, we can always unify this later on if PPC really wants it.
> 
> We can s/unify/add additional/ to minimize confusion. What I mean is,
> we can add "tm=" as an additional option which coexists with the "tsx="
> option on x86 because the patches would have been released already and we
> would have communicated "tsx=" to people by then. So "tsx=" and "tm="
> would do the same thing on x86.

Ah, yes, that sounds like a sane plan.

thanks,

greg k-h


* [MODERATED] Re: ***UNCHECKED*** Re: Re: [PATCH 3/9] TAA 3
  2019-11-01  9:35           ` Greg KH
  2019-11-01 13:15             ` [MODERATED] " Borislav Petkov
@ 2019-11-01 18:42             ` Michal Hocko
  1 sibling, 0 replies; 75+ messages in thread
From: Michal Hocko @ 2019-11-01 18:42 UTC (permalink / raw)
  To: speck

On Fri 01-11-19 10:35:26, speck for Greg KH wrote:
> On Wed, Oct 30, 2019 at 08:44:40PM +0100, speck for Michal Hocko wrote:
> > On Wed 30-10-19 20:27:58, speck for Greg KH wrote:
> > > On Wed, Oct 30, 2019 at 10:24:03AM -0700, speck for Pawan Gupta wrote:
> > > > On Wed, Oct 30, 2019 at 02:28:58PM +0100, speck for Greg KH wrote:
> > > > > On Wed, Oct 23, 2019 at 11:01:53AM +0200, speck for Pawan Gupta wrote:
> > > > > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > > > > Subject: [PATCH 3/9] x86/cpu: Add a "tsx=" cmdline option with TSX disabled by
> > > > > >  default
> > > > > > 
> > > > > 
> > > > > As late as this is, I really would like to change the name here.  We
> > > > > already have a powerpc command line option to disable this for those
> > > > > chips, and we should use the same name here as well.
> > > > 
> > > > I think powerpc calls it "ppc_tm=". We can't obviously change to this.
> > > > 
> > > > 	ppc_tm=		[PPC]
> > > > 			Format: {"off"}
> > > > 			Disable Hardware Transactional Memory
> > > 
> > > No, but we can use "tm", and PPC said they would drop the "ppc_" prefix
> > > and use the generic flag as well.  "Transactional Memory" is the generic
> > > term for this from what I can tell.
> > 
> > Are those two (tsx and tm) going to have the same semantics? I have
> > my doubts TBH, considering how tricky tsx itself is.
> 
> Ugh, turns out gmail was marking all of these threads as spam, that
> answers my "why is this list so quiet?" question :(
> 
> Anyway, yes, TM should have the same semantics.

My current understanding is that there is no way to simply disable tsx
via the tsx parameter in general. I have no idea whether the same is
the case on the ppc side of things, but I would be really surprised if
the story was consistent with x86.

Anyway, if there is a way to make this more consistent then I have no
objections, but please keep in mind that last-minute changes this late
are not really healthy and tend to cause more harm than good. This is an
architecture-specific problem and attacking it as such first makes a lot
of sense to me.

Thanks!

-- 
Michal Hocko
SUSE Labs


end of thread, other threads:[~2019-11-01 18:42 UTC | newest]

Thread overview: 75+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-24  8:20 [MODERATED] [PATCH 0/9] TAA 0 Borislav Petkov
2019-10-23  8:45 ` [MODERATED] [PATCH 1/9] TAA 1 Pawan Gupta
2019-10-24 15:22   ` [MODERATED] " Josh Poimboeuf
2019-10-24 16:23     ` Borislav Petkov
2019-10-24 16:42       ` Josh Poimboeuf
2019-10-23  8:52 ` [MODERATED] [PATCH 2/9] TAA 2 Pawan Gupta
2019-10-23  9:01 ` [MODERATED] [PATCH 3/9] TAA 3 Pawan Gupta
2019-10-24 15:30   ` [MODERATED] " Josh Poimboeuf
2019-10-24 16:33     ` Borislav Petkov
2019-10-24 16:43       ` Josh Poimboeuf
2019-10-24 17:39   ` Andrew Cooper
2019-10-24 19:45     ` Borislav Petkov
2019-10-24 19:59       ` Josh Poimboeuf
2019-10-24 20:05         ` Borislav Petkov
2019-10-24 20:14           ` Josh Poimboeuf
2019-10-24 20:36             ` Borislav Petkov
2019-10-24 20:43               ` Andrew Cooper
2019-10-24 20:55                 ` Borislav Petkov
2019-10-24 20:44               ` Josh Poimboeuf
2019-10-24 20:07       ` Andrew Cooper
2019-10-24 20:17         ` Borislav Petkov
2019-10-24 22:38           ` Andrew Cooper
2019-10-25  6:03             ` Pawan Gupta
2019-10-25  7:25               ` Borislav Petkov
2019-10-25  7:17             ` Borislav Petkov
2019-10-25  9:08               ` Andrew Cooper
2019-10-27  7:48                 ` Borislav Petkov
2019-10-27  7:49                   ` [MODERATED] [AUTOREPLY] [MODERATED] [AUTOREPLY] Automatic reply: " James, Hengameh M
2019-10-24 19:47     ` [MODERATED] " Pawan Gupta
2019-10-30 13:28   ` Greg KH
2019-10-30 14:48     ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
2019-10-30 17:24     ` [MODERATED] " Pawan Gupta
2019-10-30 19:27       ` Greg KH
2019-10-30 19:44         ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
2019-11-01  9:35           ` Greg KH
2019-11-01 13:15             ` [MODERATED] " Borislav Petkov
2019-11-01 14:33               ` Greg KH
2019-11-01 18:42             ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
2019-10-23  9:30 ` [MODERATED] [PATCH 4/9] TAA 4 Pawan Gupta
2019-10-24 15:32   ` [MODERATED] " Josh Poimboeuf
2019-10-24 16:43     ` Borislav Petkov
2019-10-24 17:15       ` Josh Poimboeuf
2019-10-24 17:23         ` Pawan Gupta
2019-10-24 17:27           ` Pawan Gupta
2019-10-24 17:34           ` Josh Poimboeuf
2019-10-24 18:23       ` Andrew Cooper
2019-10-24 18:56         ` Josh Poimboeuf
2019-10-24 18:59           ` Josh Poimboeuf
2019-10-24 19:13           ` Andrew Cooper
2019-10-24 19:49             ` Josh Poimboeuf
2019-10-24 20:48               ` Andrew Cooper
2019-10-25  9:12                 ` Andrew Cooper
2019-10-25  0:49   ` Pawan Gupta
2019-10-25  7:36     ` Borislav Petkov
2019-10-23 10:19 ` [MODERATED] [PATCH 5/9] TAA 5 Pawan Gupta
2019-10-24 18:30   ` [MODERATED] " Greg KH
2019-10-23 10:23 ` [MODERATED] [PATCH 6/9] TAA 6 Pawan Gupta
2019-10-23 10:28 ` [MODERATED] [PATCH 7/9] TAA 7 Pawan Gupta
2019-10-24 15:35   ` [MODERATED] " Josh Poimboeuf
2019-10-24 16:42     ` Borislav Petkov
2019-10-24 18:20       ` Jiri Kosina
2019-10-24 19:53         ` Borislav Petkov
2019-10-24 20:02           ` Josh Poimboeuf
2019-10-24 20:08             ` Borislav Petkov
2019-10-23 10:32 ` [MODERATED] [PATCH 8/9] TAA 8 Pawan Gupta
2019-10-24 16:03   ` [MODERATED] " Josh Poimboeuf
2019-10-24 17:35     ` Borislav Petkov
2019-10-24 18:11       ` Josh Poimboeuf
2019-10-24 18:55         ` Pawan Gupta
2019-10-25  8:04         ` Borislav Petkov
2019-10-23 10:35 ` [MODERATED] [PATCH 9/9] TAA 9 Michal Hocko
2019-10-24 16:10   ` [MODERATED] " Josh Poimboeuf
2019-10-24 16:58     ` Borislav Petkov
2019-10-25 10:47       ` [MODERATED] Re: ***UNCHECKED*** " Michal Hocko
2019-10-25 13:05       ` [MODERATED] " Josh Poimboeuf
