From: Kai Huang <kai.huang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: linux-mm@kvack.org, dave.hansen@intel.com,
	kirill.shutemov@linux.intel.com, tony.luck@intel.com,
	peterz@infradead.org, tglx@linutronix.de, seanjc@google.com,
	pbonzini@redhat.com, david@redhat.com, dan.j.williams@intel.com,
	rafael.j.wysocki@intel.com, ying.huang@intel.com,
	reinette.chatre@intel.com, len.brown@intel.com,
	ak@linux.intel.com, isaku.yamahata@intel.com, chao.gao@intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com, bagasdotme@gmail.com,
	sagis@google.com, imammedo@redhat.com, kai.huang@intel.com
Subject: [PATCH v11 09/20] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory
Date: Mon,  5 Jun 2023 02:27:22 +1200
Message-ID: <468533166590ff5ed11730350c4af8cdb0b99165.1685887183.git.kai.huang@intel.com>
In-Reply-To: <cover.1685887183.git.kai.huang@intel.com>

As a step of initializing the TDX module, the kernel needs to tell the
module which memory regions it can use as TDX guest memory.

TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
kernel which memory is TDX compatible.  The kernel needs to build a
list of memory regions (out of CMRs) as "TDX-usable" memory and pass
them to the TDX module.  Once this is done, those "TDX-usable" memory
regions are fixed for the module's lifetime.
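
As an illustration only (the real implementation is build_tdx_memlist()
in the diff below), the list-building step amounts to walking memblock's
memory ranges and recording each one as "TDX-usable" memory:

	unsigned long start_pfn, end_pfn;
	int i;

	/*
	 * Sketch: record every memblock memory range, skipping the
	 * first 1MB which is not convertible memory.  "Stash" below
	 * stands for add_tdx_memblock() in the actual patch.
	 */
	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
		start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
		if (start_pfn >= end_pfn)
			continue;
		/* stash [start_pfn, end_pfn) in the TDX memory list */
	}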

To keep things simple, assume that all TDX-protected memory will come
from the page allocator.  Make sure all pages in the page allocator
*are* TDX-usable memory.

As TDX-usable memory is a fixed configuration, take a snapshot of the
memory configuration from memblocks at the time of module initialization
(memblocks are modified on memory hotplug).  This snapshot is used to
enable TDX support for *this* memory configuration only.  Use a memory
hotplug notifier to ensure that no other RAM can be added outside of
this configuration.
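
Conceptually, the notifier added by this patch (tdx_memory_notifier()
in the diff below) reduces to the following sketch, where
is_tdx_memory() checks the range being onlined against the snapshot:

	/* Sketch: reject onlining memory outside the fixed TDX config. */
	static int tdx_memory_notifier(struct notifier_block *nb,
				       unsigned long action, void *v)
	{
		struct memory_notify *mn = v;

		if (action != MEM_GOING_ONLINE)
			return NOTIFY_OK;

		/* An empty list means TDX isn't enabled: allow any memory. */
		if (list_empty(&tdx_memlist))
			return NOTIFY_OK;

		return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
			NOTIFY_OK : NOTIFY_BAD;
	}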

For this approach to work, all memblock memory regions present at the
time of module initialization must be TDX convertible memory; otherwise
module initialization will fail in a later SEAMCALL when those regions
are passed to the module.  This works as long as all boot-time "system
RAM" is TDX convertible memory and no non-TDX-convertible memory has
been hot-added to the core-mm before module initialization.

For instance, on the first generation of TDX machines, neither CXL
memory nor NVDIMM is TDX convertible memory.  Using the kmem driver to
hot-add any CXL memory or NVDIMM to the core-mm before module
initialization will cause module initialization to fail.  The SEAMCALL
error code will be available in dmesg to help the user understand the
failure.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
---

v10 -> v11:
 - Added Isaku's Reviewed-by.

v9 -> v10:
 - Moved empty @tdx_memlist check out of is_tdx_memory() to make the
   logic better.
 - Added Ying's Reviewed-by.

v8 -> v9:
 - Replaced "The initial support ..." with a timeless sentence in both
   the changelog and comments (Dave).
 - Fixed a run-on sentence in the changelog, and the sentence explaining
   why memblock is stashed off (Dave).
 - Tried to improve the changelog's explanation of why this approach was
   chosen and how it works, based on Dave's suggestion.
 - Many other comment enhancements (Dave).

v7 -> v8:
 - Trimmed down the changelog (Dave).
 - Changed to use PHYS_PFN() and PFN_PHYS() throughout this series
   (Ying).
 - Moved memory hotplug handling from arch_add_memory() to
   memory_notifier (Dan/David).
 - Removed 'nid' from 'struct tdx_memblock' (moved to a later patch)
   (Dave).
 - {build|free}_tdx_memory() -> {build|free}_tdx_memlist() (Dave).
 - Removed pfn_covered_by_cmr() check as no code to trim CMRs now.
 - Improved the comment around the first 1MB (Dave).
 - Added a comment around reserve_real_mode() to point out that TDX
   code relies on the first 1MB being reserved (Ying).
 - Added comment to explain why the new online memory range cannot
   cross multiple TDX memory blocks (Dave).
 - Improved other comments (Dave).


---
 arch/x86/Kconfig            |   1 +
 arch/x86/kernel/setup.c     |   2 +
 arch/x86/virt/vmx/tdx/tdx.c | 165 +++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |   6 ++
 4 files changed, 172 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f0f3f1a2c8e0..2226d8a4c749 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1958,6 +1958,7 @@ config INTEL_TDX_HOST
 	depends on X86_64
 	depends on KVM_INTEL
 	depends on X86_X2APIC
+	select ARCH_KEEP_MEMBLOCK
 	help
 	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
 	  host and certain physical attacks.  This option enables necessary TDX
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 16babff771bd..fd94f8186b9c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1159,6 +1159,8 @@ void __init setup_arch(char **cmdline_p)
 	 *
 	 * Moreover, on machines with SandyBridge graphics or in setups that use
 	 * crashkernel the entire 1M is reserved anyway.
+	 *
+	 * Note the host kernel TDX code also requires the first 1MB to be reserved.
 	 */
 	x86_platform.realmode_reserve();
 
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 9fde0f71dd8b..1504023f7f63 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -17,6 +17,13 @@
 #include <linux/spinlock.h>
 #include <linux/percpu-defs.h>
 #include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/minmax.h>
+#include <linux/sizes.h>
+#include <linux/pfn.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/archrandom.h>
@@ -40,6 +47,9 @@ static DEFINE_PER_CPU(unsigned int, tdx_lp_init_status);
 static enum tdx_module_status_t tdx_module_status;
 static DEFINE_MUTEX(tdx_module_lock);
 
+/* All TDX-usable memory regions.  Protected by mem_hotplug_lock. */
+static LIST_HEAD(tdx_memlist);
+
 /*
  * Wrapper of __seamcall() to convert SEAMCALL leaf function error code
  * to kernel error code.  @seamcall_ret and @out contain the SEAMCALL
@@ -246,6 +256,79 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
 	return 0;
 }
 
+/*
+ * Add a memory region as a TDX memory block.  The caller must make sure
+ * all memory regions are added in address ascending order and don't
+ * overlap.
+ */
+static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
+			    unsigned long end_pfn)
+{
+	struct tdx_memblock *tmb;
+
+	tmb = kmalloc(sizeof(*tmb), GFP_KERNEL);
+	if (!tmb)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&tmb->list);
+	tmb->start_pfn = start_pfn;
+	tmb->end_pfn = end_pfn;
+
+	/* @tmb_list is protected by mem_hotplug_lock */
+	list_add_tail(&tmb->list, tmb_list);
+	return 0;
+}
+
+static void free_tdx_memlist(struct list_head *tmb_list)
+{
+	/* @tmb_list is protected by mem_hotplug_lock */
+	while (!list_empty(tmb_list)) {
+		struct tdx_memblock *tmb = list_first_entry(tmb_list,
+				struct tdx_memblock, list);
+
+		list_del(&tmb->list);
+		kfree(tmb);
+	}
+}
+
+/*
+ * Ensure that all memblock memory regions are convertible to TDX
+ * memory.  Once this has been established, stash the memblock
+ * ranges off in a secondary structure because memblock is modified
+ * in memory hotplug while TDX memory regions are fixed.
+ */
+static int build_tdx_memlist(struct list_head *tmb_list)
+{
+	unsigned long start_pfn, end_pfn;
+	int i, ret;
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		/*
+		 * The first 1MB is not reported as TDX convertible memory.
+		 * Although the first 1MB is always reserved and won't end up
+		 * in the page allocator, it is still in memblock's memory
+		 * regions.  Skip it manually to exclude it from TDX memory.
+		 */
+		start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
+		if (start_pfn >= end_pfn)
+			continue;
+
+		/*
+		 * Add the memory region as TDX memory.  Memblock has
+		 * already guaranteed the regions are in address-ascending
+		 * order and don't overlap.
+		 */
+		ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+err:
+	free_tdx_memlist(tmb_list);
+	return ret;
+}
+
 static int init_tdx_module(void)
 {
 	static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -259,10 +342,25 @@ static int init_tdx_module(void)
 	if (ret)
 		return ret;
 
+	/*
+	 * To keep things simple, assume that all TDX-protected memory
+	 * will come from the page allocator.  Make sure all pages in the
+	 * page allocator are TDX-usable memory.
+	 *
+	 * Build the list of "TDX-usable" memory regions which cover all
+	 * pages in the page allocator to guarantee that.  Do it while
+	 * holding mem_hotplug_lock read-lock as the memory hotplug code
+	 * path reads the @tdx_memlist to reject any new memory.
+	 */
+	get_online_mems();
+
+	ret = build_tdx_memlist(&tdx_memlist);
+	if (ret)
+		goto out;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Build the list of TDX-usable memory regions.
 	 *  - Construct a list of "TD Memory Regions" (TDMRs) to cover
 	 *    all TDX-usable memory regions.
 	 *  - Configure the TDMRs and the global KeyID to the TDX module.
@@ -271,7 +369,15 @@ static int init_tdx_module(void)
 	 *
 	 *  Return error before all steps are done.
 	 */
-	return -EINVAL;
+	ret = -EINVAL;
+out:
+	/*
+	 * @tdx_memlist is written here and read at memory hotplug time.
+	 * Lock out memory hotplug code while building it.
+	 */
+	put_online_mems();
+
+	return ret;
 }
 
 static int __tdx_enable(void)
@@ -361,6 +467,54 @@ static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 	return 0;
 }
 
+static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn)
+{
+	struct tdx_memblock *tmb;
+
+	/*
+	 * This check assumes that the start_pfn<->end_pfn range does not
+	 * cross multiple @tdx_memlist entries.  A single memory online
+	 * event across multiple memblocks (from which @tdx_memlist
+	 * entries are derived at the time of module initialization) is
+	 * not possible.  This is because memory offline/online is done
+	 * at the granularity of 'struct memory_block', and a hotpluggable
+	 * memory region (one memblock) must be a multiple of memory_block.
+	 */
+	list_for_each_entry(tmb, &tdx_memlist, list) {
+		if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
+			return true;
+	}
+	return false;
+}
+
+static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
+			       void *v)
+{
+	struct memory_notify *mn = v;
+
+	if (action != MEM_GOING_ONLINE)
+		return NOTIFY_OK;
+
+	/*
+	 * Empty list means TDX isn't enabled.  Allow any memory
+	 * to go online.
+	 */
+	if (list_empty(&tdx_memlist))
+		return NOTIFY_OK;
+
+	/*
+	 * The TDX memory configuration is static and cannot be
+	 * changed.  Reject onlining any memory that is outside of
+	 * the static configuration, whether it supports TDX or not.
+	 */
+	return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
+		NOTIFY_OK : NOTIFY_BAD;
+}
+
+static struct notifier_block tdx_memory_nb = {
+	.notifier_call = tdx_memory_notifier,
+};
+
 static int __init tdx_init(void)
 {
 	u32 tdx_keyid_start, nr_tdx_keyids;
@@ -384,6 +538,13 @@ static int __init tdx_init(void)
 		goto no_tdx;
 	}
 
+	err = register_memory_notifier(&tdx_memory_nb);
+	if (err) {
+		pr_info("initialization failed: register_memory_notifier() failed (%d)\n",
+				err);
+		goto no_tdx;
+	}
+
 	/*
 	 * Just use the first TDX KeyID as the 'global KeyID' and
 	 * leave the rest for TDX guests.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 97f4d7e7f1a4..4b6fc0d8b420 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -106,6 +106,12 @@ enum tdx_module_status_t {
 	TDX_MODULE_ERROR
 };
 
+struct tdx_memblock {
+	struct list_head list;
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+};
+
 struct tdx_module_output;
 u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
 	       struct tdx_module_output *out);
-- 
2.40.1

