From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
To: speck@linutronix.de
Subject: [MODERATED] [PATCH v7 10/10] TAAv7 10
Date: Mon, 21 Oct 2019 13:32:03 -0700	[thread overview]
Message-ID: <eacee918153637948cbf21f44b46e83d60e891b4.1571688957.git.pawan.kumar.gupta@linux.intel.com> (raw)
In-Reply-To: <cover.1571688957.git.pawan.kumar.gupta@linux.intel.com>

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Subject: [PATCH v7 10/10] x86/tsx: Add sysfs interface to control TSX

Transactional Synchronization Extensions (TSX) is an extension to the
x86 instruction set architecture (ISA) that adds Hardware Transactional
Memory (HTM) support. Changing the TSX state currently requires a reboot.
This may not be desirable when rebooting imposes a huge penalty. Add
support to control the TSX feature via a new sysfs file:
/sys/devices/system/cpu/hw_tx_mem

- Writing 0|off|N|n to this file disables the TSX feature on all CPUs.
  This is equivalent to the boot parameter tsx=off.
- Writing 1|on|Y|y to this file enables the TSX feature on all CPUs.
  This is equivalent to the boot parameter tsx=on.
- Reading from this file returns the current status of the TSX feature.
- When TSX control is not supported, this interface is not visible in
  sysfs.
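
For illustration, a minimal userspace sketch of driving this interface
(the set_tsx() helper is made up for the example; error handling is
trimmed, writing requires root, and the file is absent when TSX control
is not supported):

  #include <stdio.h>
  #include <stdlib.h>

  #define HW_TX_MEM "/sys/devices/system/cpu/hw_tx_mem"

  /* Write "0" or "1" (off/on and N/Y also work) to the control file. */
  static void set_tsx(const char *val)
  {
          FILE *f = fopen(HW_TX_MEM, "w");

          if (!f) {
                  perror(HW_TX_MEM);      /* no TSX control on this system */
                  exit(1);
          }
          fprintf(f, "%s\n", val);
          fclose(f);
  }

  int main(void)
  {
          char status[4] = "";
          FILE *f;

          set_tsx("0");                   /* same effect as booting with tsx=off */

          f = fopen(HW_TX_MEM, "r");
          if (f && fgets(status, sizeof(status), f))
                  printf("TSX enabled: %s", status);      /* prints "0" or "1" */
          if (f)
                  fclose(f);
          return 0;
  }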

Changing the TSX state from this interface also updates the CPUID.RTM
feature bit.  From the kernel side, this feature bit doesn't result in
any ALTERNATIVE code patching.  No memory allocations are done to
save/restore user state. No code paths outside of the tests for
vulnerability to TAA are dependent on the value of the feature bit.  In
general the kernel doesn't care whether RTM is present or not.

Applications typically look at CPUID bits once at startup (or when first
calling into a library that uses the feature).  So we have a couple of
cases to cover:

1) An application started and saw that RTM was enabled, so it began
   to use it. Then TSX was disabled.  The net result in this case is that
   the application will keep trying to use RTM, but every xbegin() will
   immediately abort the transaction. This has a performance impact on
   the application, but it doesn't affect correctness because all users
   of RTM must have a fallback path for when the transaction aborts. Note
   that even if an application is in the middle of a transaction when we
   disable RTM, we are safe. The IPI that we use to update the TSX_CTRL
   MSR will abort the transaction (just as any interrupt would abort
   a transaction).

2) An application starts and sees that RTM is not available, so it will
   always use the alternative paths. Even if TSX is later enabled and RTM
   is set, applications in general do not re-evaluate their choice and
   will continue to run in non-TSX mode.
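
For illustration, a minimal userspace sketch of the pattern covered by the
two cases above. _xbegin()/_xend()/_xabort() and __get_cpuid_count() are
GCC intrinsics/builtins (build with gcc -mrtm); fallback_lock, counter and
increment() are made up for the example:

  #include <immintrin.h>
  #include <cpuid.h>

  static volatile int fallback_lock;      /* simple test-and-set lock */
  static long counter;                    /* shared data guarded by the lock */

  /* Case 2: applications typically test CPUID.(EAX=7,ECX=0):EBX[11] (RTM)
   * once at startup and never look again. */
  static int rtm_supported(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                  return 0;
          return !!(ebx & (1u << 11));
  }

  static void lock(void)
  {
          while (__atomic_exchange_n(&fallback_lock, 1, __ATOMIC_ACQUIRE))
                  ;
  }

  static void unlock(void)
  {
          __atomic_store_n(&fallback_lock, 0, __ATOMIC_RELEASE);
  }

  /* Case 1: every RTM user needs a fallback. If TSX is disabled at run
   * time, _xbegin() keeps aborting and only the locked path ever runs. */
  static void increment(void)
  {
          if (_xbegin() == _XBEGIN_STARTED) {
                  if (fallback_lock)      /* subscribe to the fallback lock */
                          _xabort(0xff);
                  counter++;
                  _xend();
                  return;
          }
          lock();
          counter++;
          unlock();
  }

  int main(void)
  {
          /* Checked once at startup (case 2); if RTM looks absent the
           * application never re-evaluates and stays on the locked path. */
          if (!rtm_supported()) {
                  lock();
                  counter++;
                  unlock();
                  return 0;
          }
          increment();                    /* elided path with fallback (case 1) */
          return 0;
  }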

When the TSX state is changed from the sysfs interface, the TSX Async Abort
(TAA) mitigation state also needs to be updated. Set the TAA mitigation
state according to the new TSX state and the VERW static branch state.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
 .../ABI/testing/sysfs-devices-system-cpu      |  23 ++++
 .../admin-guide/hw-vuln/tsx_async_abort.rst   |  30 ++++-
 arch/x86/kernel/cpu/bugs.c                    |  21 +++-
 arch/x86/kernel/cpu/cpu.h                     |   3 +-
 arch/x86/kernel/cpu/intel.c                   |   5 +
 arch/x86/kernel/cpu/tsx.c                     | 114 +++++++++++++++++-
 drivers/base/cpu.c                            |  32 ++++-
 include/linux/cpu.h                           |   6 +
 8 files changed, 229 insertions(+), 5 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 0e77569bd5e0..e4f12162abb8 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -573,3 +573,26 @@ Description:	Secure Virtual Machine
 		If 1, it means the system is using the Protected Execution
 		Facility in POWER9 and newer processors. i.e., it is a Secure
 		Virtual Machine.
+
+What:		/sys/devices/system/cpu/hw_tx_mem
+Date:		August 2019
+Contact:	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+		Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:	Hardware Transactional Memory (HTM) control.
+
+		Read/write interface to control the HTM feature for all CPUs in
+		the system.  This interface is only present on platforms that
+		support HTM control. HTM is a hardware feature to speed up the
+		execution of multi-threaded software through lock elision. An
+		example of HTM implementation is Intel Transactional
+		Synchronization Extensions (TSX).
+
+			Read returns the status of HTM feature.
+
+			0: HTM is disabled
+			1: HTM is enabled
+
+			Write sets the state of HTM feature.
+
+			0: Disables HTM
+			1: Enables HTM
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
index 852707b8748b..7ea0b2ef71d2 100644
--- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -132,7 +132,8 @@ enables the mitigation by default.
 
 
 The mitigation can be controlled at boot time via a kernel command line option.
-See :ref:`taa_mitigation_control_command_line`.
+See :ref:`taa_mitigation_control_command_line`. The kernel also provides a
+sysfs interface. See :ref:`taa_mitigation_sysfs`.
 
 .. _virt_mechanism:
 
@@ -221,6 +222,33 @@ and TSX_CTRL_MSR.
                                      VERW or TSX_CTRL_MSR
   =======  =========  =============  ========================================
 
+.. _taa_mitigation_sysfs:
+
+Mitigation control using sysfs
+------------------------------
+
+For affected systems that cannot be frequently rebooted to enable or
+disable TSX, sysfs can be used as an alternative after installing the updates.
+The possible values for the file /sys/devices/system/cpu/hw_tx_mem are:
+
+  ===  =======================================================================
+  0    Disable TSX. Upon entering a TSX transactional region, the transaction
+       immediately aborts, before any instruction in the region executes
+       (even speculatively), and execution continues on the fallback path.
+       Equivalent to boot parameter "tsx=off".
+  1    Enable TSX. Equivalent to boot parameter "tsx=on".
+  ===  =======================================================================
+
+Reading from this file returns the status of the TSX feature. This file is only
+present on systems that support TSX control.
+
+When disabling TSX via the sysfs mechanism, applications that are already
+running and use TSX will see their transactional regions aborted and their
+execution flow redirected to the fallback, losing the benefits of the
+non-blocking path. TSX applications must provide fallback code to guarantee
+correct execution when transactional regions abort.
+
+
 Mitigation selection guide
 --------------------------
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3220d820159a..fd39c053c060 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -274,7 +274,7 @@ early_param("mds", mds_cmdline);
 #define pr_fmt(fmt)	"TAA: " fmt
 
 /* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -374,6 +374,25 @@ static int __init tsx_async_abort_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_cmdline);
 
+void taa_update_mitigation(bool tsx_enabled)
+{
+	/*
+	 * When userspace changes the TSX state, update taa_mitigation
+	 * so that the updated mitigation state is shown in:
+	 * /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+	 *
+	 * If TSX is disabled, the mitigation is TSX_DISABLE.
+	 * Else, if CPU buffer clearing (VERW) is enabled, the mitigation is VERW.
+	 * Otherwise, the system is vulnerable.
+	 */
+	if (!tsx_enabled)
+		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+	else if (static_key_count(&mds_user_clear.key))
+		taa_mitigation = TAA_MITIGATION_VERW;
+	else
+		taa_mitigation = TAA_MITIGATION_OFF;
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V1 : " fmt
 
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 38ab6e115eac..c0e2ae982692 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -51,11 +51,12 @@ enum tsx_ctrl_states {
 	TSX_CTRL_NOT_SUPPORTED,
 };
 
-extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+extern enum tsx_ctrl_states tsx_ctrl_state;
 
 extern void __init tsx_init(void);
 extern void tsx_enable(void);
 extern void tsx_disable(void);
+extern void taa_update_mitigation(bool tsx_enabled);
 #else
 static inline void tsx_init(void) { }
 #endif /* CONFIG_CPU_SUP_INTEL */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 11d5c5950e2d..e0a39c631d05 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -763,6 +763,11 @@ static void init_intel(struct cpuinfo_x86 *c)
 
 	init_intel_misc_features(c);
 
+	/*
+	 * TSX_CTRL MSR is updated from sysfs also. Writes from sysfs take the
+	 * cpu_hotplug_lock. This ensures CPUs can't come online and race with
+	 * MSR writes from sysfs.
+	 */
 	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
 		tsx_enable();
 	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index bd74f216f026..0969e6e9dff3 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -9,12 +9,15 @@
  */
 
 #include <linux/cpufeature.h>
+#include <linux/cpu.h>
 
 #include <asm/cmdline.h>
 
 #include "cpu.h"
 
-enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+static DEFINE_MUTEX(tsx_mutex);
+
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
 
 void tsx_disable(void)
 {
@@ -127,3 +130,112 @@ void __init tsx_init(void)
 		setup_force_cpu_cap(X86_FEATURE_RTM);
 	}
 }
+
+static void tsx_update_this_cpu(void *arg)
+{
+	unsigned long enable = (unsigned long)arg;
+
+	if (enable)
+		tsx_enable();
+	else
+		tsx_disable();
+}
+
+/* Callers must hold tsx_mutex and update tsx_ctrl_state before calling this */
+static void tsx_update_on_each_cpu(bool state)
+{
+	get_online_cpus();
+	on_each_cpu(tsx_update_this_cpu, (void *)state, 1);
+	put_online_cpus();
+}
+
+ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
+		       char *buf)
+{
+	return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
+}
+
+ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
+			const char *buf, size_t count)
+{
+	enum tsx_ctrl_states requested_state;
+	ssize_t ret;
+	bool state;
+
+	ret = kstrtobool(buf, &state);
+	if (ret)
+		return ret;
+
+	if (state)
+		requested_state = TSX_CTRL_ENABLE;
+	else
+		requested_state = TSX_CTRL_DISABLE;
+
+	/*
+	 * Protect tsx_ctrl_state and TSX update on_each_cpu() from concurrent
+	 * writers.
+	 *
+	 *  - Serialize TSX_CTRL MSR writes across all CPUs when there are
+	 *    concurrent sysfs requests. on_each_cpu() callback execution
+	 *    order on other CPUs can be different for multiple calls to
+	 *    on_each_cpu(). For conflicting concurrent sysfs requests the
+	 *    lock ensures all CPUs have updated the TSX_CTRL MSR before the
+	 *    next call to on_each_cpu() can be processed.
+	 *  - Serialize tsx_ctrl_state update so that it doesn't get out of
+	 *    sync with TSX_CTRL MSR.
+	 *  - Serialize update to taa_mitigation.
+	 */
+	mutex_lock(&tsx_mutex);
+
+	/* Current state is the same as the requested state, do nothing */
+	if (tsx_ctrl_state == requested_state)
+		goto exit;
+
+	tsx_ctrl_state = requested_state;
+
+	/*
+	 * Changing the TSX state from this interface also updates CPUID.RTM
+	 * feature bit. From the kernel side, this feature bit doesn't result
+	 * in any ALTERNATIVE code patching.  No memory allocations are done to
+	 * save/restore user state. No code paths outside of the tests for
+	 * vulnerability to TAA are dependent on the value of the feature bit.
+	 * In general the kernel doesn't care whether RTM is present or not.
+	 *
+	 * From the user side it is a bit fuzzier. Applications typically look
+	 * at CPUID bits once at startup (or when first calling into a library
+	 * that uses the feature).  So we have a couple of cases to cover:
+	 *
+	 * 1) An application started and saw that RTM was enabled, so began
+	 *    to use it. Then TSX was disabled.  Net result in this case is
+	 *    that the application will keep trying to use RTM, but every
+	 *    xbegin() will immediately abort the transaction. This has a
+	 *    performance impact on the application, but it doesn't affect
+	 *    correctness because all users of RTM must have a fallback path
+	 *    for when the transaction aborts. Note that even if an application
+	 *    is in the middle of a transaction when we disable RTM, we are
+	 *    safe. The IPI that we use to update the TSX_CTRL MSR will abort
+	 *    the transaction (just as any interrupt would abort a
+	 *    transaction).
+	 *
+	 * 2) An application starts and sees RTM is not available. So it will
+	 *    always use alternative paths. Even if TSX is enabled and RTM is
+	 *    set, applications in general do not re-evaluate their choice so
+	 *    will continue to run in non-TSX mode.
+	 */
+	tsx_update_on_each_cpu(state);
+
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		taa_update_mitigation(state);
+exit:
+	mutex_unlock(&tsx_mutex);
+
+	return count;
+}
+
+umode_t hw_tx_mem_is_visible(void)
+{
+	if (tsx_ctrl_state == TSX_CTRL_NOT_SUPPORTED)
+		return 0;
+
+	return 0644;
+}
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 0fccd8c0312e..2148e974ab0a 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -460,6 +460,34 @@ struct device *cpu_device_create(struct device *parent, void *drvdata,
 }
 EXPORT_SYMBOL_GPL(cpu_device_create);
 
+ssize_t __weak hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf)
+{
+	return -ENODEV;
+}
+
+ssize_t __weak hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count)
+{
+	return -ENODEV;
+}
+
+DEVICE_ATTR_RW(hw_tx_mem);
+
+umode_t __weak hw_tx_mem_is_visible(void)
+{
+	return 0;
+}
+
+static umode_t cpu_root_attrs_is_visible(struct kobject *kobj,
+					 struct attribute *attr, int index)
+{
+	if (attr == &dev_attr_hw_tx_mem.attr)
+		return hw_tx_mem_is_visible();
+
+	return attr->mode;
+}
+
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 static DEVICE_ATTR(modalias, 0444, print_cpu_modalias, NULL);
 #endif
@@ -481,11 +509,13 @@ static struct attribute *cpu_root_attrs[] = {
 #ifdef CONFIG_GENERIC_CPU_AUTOPROBE
 	&dev_attr_modalias.attr,
 #endif
+	&dev_attr_hw_tx_mem.attr,
 	NULL
 };
 
 static struct attribute_group cpu_root_attr_group = {
-	.attrs = cpu_root_attrs,
+	.attrs		= cpu_root_attrs,
+	.is_visible	= cpu_root_attrs_is_visible,
 };
 
 static const struct attribute_group *cpu_root_attr_groups[] = {
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index f35369f79771..104c293add42 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -63,6 +63,12 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
 					struct device_attribute *attr,
 					char *buf);
 
+extern ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+			      char *buf);
+extern ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+			       const char *buf, size_t count);
+extern umode_t hw_tx_mem_is_visible(void);
+
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
 				 const struct attribute_group **groups,
-- 
2.20.1
