linux-kernel.vger.kernel.org archive mirror
* [PATCH v15 00/23] TDX host kernel support
@ 2023-11-09 11:55 Kai Huang
  2023-11-09 11:55 ` [PATCH v15 01/23] x86/virt/tdx: Detect TDX during kernel boot Kai Huang
                   ` (23 more replies)
  0 siblings, 24 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Hi all,

(Again I didn't include the full cover letter here to save people's time.
 The full cover letter can be found in v13 [1].)

This version mainly addressed one issue that we (Intel people) discussed
internally: only initialize TDX module 1.5 and later versions.  The
reason is that TDX 1.0 has some incompatibility issues with TDX 1.5 and
later versions (for detailed information please see [2]).  There's no
value in supporting TDX 1.0 when TDX 1.5 is already out.

Hi Kirill, Dave (and all),

Could you help review the new patch mentioned in the detailed changes
below (and the other minor changes due to rebasing on it)?

Appreciate it a lot!

The detailed changes:

(please refer to the individual patches for their specific changes.)

 - v14 -> v15:
  - Rebased to latest (today) master branch of Linus's tree.
  - Removed the patch which uses TDH.SYS.INFO to get TDSYSINFO_STRUCT.
  - Added a new patch to use TDH.SYS.RD (which is the new SEAMCALL to read
    TDX module metadata in TDX 1.5) to read essential metadata for module
    initialization and stop initializing TDX 1.0.
  - Put the new patch after the patch to build the TDX-usable memory
    list because CMRs are no longer read from the TDX module.
  - Very minor rebase changes in a couple of other patches due to the
    new TDH.SYS.RD patch.
  - Addressed the few comments received in v14 (Rafael/Nikolay).
  - Added people's tags -- thanks! (Sathy, Nikolay).

v14: https://lore.kernel.org/lkml/cover.1697532085.git.kai.huang@intel.com/T/

[1] v13: https://lore.kernel.org/lkml/cover.1692962263.git.kai.huang@intel.com/T/
[2] "TDX module ABI incompatibilities" spec:
    https://cdrdv2.intel.com/v1/dl/getContent/773041



Kai Huang (23):
  x86/virt/tdx: Detect TDX during kernel boot
  x86/tdx: Define TDX supported page sizes as macros
  x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC
  x86/cpu: Detect TDX partial write machine check erratum
  x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  x86/virt/tdx: Add SEAMCALL error printing for module initialization
  x86/virt/tdx: Add skeleton to enable TDX on demand
  x86/virt/tdx: Use all system memory when initializing TDX module as
    TDX memory
  x86/virt/tdx: Get module global metadata for module initialization
  x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX
    memory regions
  x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions
  x86/virt/tdx: Allocate and set up PAMTs for TDMRs
  x86/virt/tdx: Designate reserved areas for all TDMRs
  x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID
  x86/virt/tdx: Configure global KeyID on all packages
  x86/virt/tdx: Initialize all TDMRs
  x86/kexec: Flush cache of TDX private memory
  x86/virt/tdx: Keep TDMRs when module initialization is successful
  x86/virt/tdx: Improve readability of module initialization error
    handling
  x86/kexec(): Reset TDX private memory on platforms with TDX erratum
  x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states
  x86/mce: Improve error log of kernel space TDX #MC due to erratum
  Documentation/x86: Add documentation for TDX host support

 Documentation/arch/x86/tdx.rst     |  222 +++-
 arch/x86/Kconfig                   |    3 +
 arch/x86/coco/tdx/tdx-shared.c     |    6 +-
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/include/asm/msr-index.h   |    3 +
 arch/x86/include/asm/shared/tdx.h  |    6 +
 arch/x86/include/asm/tdx.h         |   39 +
 arch/x86/kernel/cpu/intel.c        |   17 +
 arch/x86/kernel/cpu/mce/core.c     |   33 +
 arch/x86/kernel/machine_kexec_64.c |   16 +
 arch/x86/kernel/process.c          |    8 +-
 arch/x86/kernel/reboot.c           |   15 +
 arch/x86/kernel/setup.c            |    2 +
 arch/x86/virt/vmx/tdx/Makefile     |    2 +-
 arch/x86/virt/vmx/tdx/tdx.c        | 1555 ++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h        |  121 +++
 16 files changed, 2033 insertions(+), 16 deletions(-)
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.c
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.h


base-commit: 6bc986ab839c844e78a2333a02e55f02c9e57935
-- 
2.41.0



* [PATCH v15 01/23] x86/virt/tdx: Detect TDX during kernel boot
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 02/23] x86/tdx: Define TDX supported page sizes as macros Kai Huang
                   ` (22 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks.  A CPU-attested software module
called 'the TDX module' runs inside a new isolated memory range as a
trusted hypervisor to manage and run protected VMs.

Pre-TDX Intel hardware has support for a memory encryption architecture
called MKTME.  The memory encryption hardware underpinning MKTME is also
used for Intel TDX.  TDX ends up "stealing" some of the physical address
space from the MKTME architecture for crypto-protection to VMs.  The
BIOS is responsible for partitioning the "KeyID" space between legacy
MKTME and TDX.  The KeyIDs reserved for TDX are called 'TDX private
KeyIDs' or 'TDX KeyIDs' for short.

During machine boot, TDX microcode verifies that the BIOS programmed TDX
private KeyIDs consistently and correctly across all CPU packages.  The
MSRs are locked in this state after verification.  This is why
MSR_IA32_MKTME_KEYID_PARTITIONING gets used for TDX enumeration: it
indicates not just that the hardware supports TDX, but that all the
boot-time security checks passed.

The TDX module is expected to be loaded by the BIOS when it enables TDX,
but the kernel needs to properly initialize it before it can be used to
create and run any TDX guests.  The TDX module will be initialized by
the KVM subsystem when KVM wants to use TDX.

Add a new early_initcall(tdx_init) to detect TDX by detecting TDX
private KeyIDs.  Also add a function to report whether TDX is enabled by
the BIOS.  Similar to AMD SME, kexec() will use it to determine whether
a cache flush is needed.

The TDX module itself requires one TDX KeyID as the 'TDX global KeyID'
to protect its metadata.  Each TDX guest also needs a TDX KeyID for its
own protection.  Just use the first TDX KeyID as the global KeyID and
leave the rest for TDX guests.  If no TDX KeyID is left for TDX guests,
disable TDX as initializing the TDX module alone is useless.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

v14 -> v15:
 - Add Sathy's tag

v13 -> v14:
 - "tdx:" -> "virt/tdx:" (internal)
 - Add Dave's tag
 
---
 arch/x86/include/asm/msr-index.h |  3 ++
 arch/x86/include/asm/tdx.h       |  4 ++
 arch/x86/virt/vmx/tdx/Makefile   |  2 +-
 arch/x86/virt/vmx/tdx/tdx.c      | 90 ++++++++++++++++++++++++++++++++
 4 files changed, 98 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.c

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1d51e1850ed0..66c12d4efa31 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -536,6 +536,9 @@
 #define MSR_RELOAD_PMC0			0x000014c1
 #define MSR_RELOAD_FIXED_CTR0		0x00001309
 
+/* KeyID partitioning between MKTME and TDX */
+#define MSR_IA32_MKTME_KEYID_PARTITIONING	0x00000087
+
 /*
  * AMD64 MSRs. Not complete. See the architecture manual for a more
  * complete list.
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index f3d5305a60fc..ea9a0320b1f8 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -83,6 +83,10 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 u64 __seamcall(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
+
+bool platform_tdx_enabled(void);
+#else
+static inline bool platform_tdx_enabled(void) { return false; }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/Makefile b/arch/x86/virt/vmx/tdx/Makefile
index 46ef8f73aebb..90da47eb85ee 100644
--- a/arch/x86/virt/vmx/tdx/Makefile
+++ b/arch/x86/virt/vmx/tdx/Makefile
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-y += seamcall.o
+obj-y += seamcall.o tdx.o
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
new file mode 100644
index 000000000000..13d22ea2e2d9
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2023 Intel Corporation.
+ *
+ * Intel Trust Domain Extensions (TDX) support
+ */
+
+#define pr_fmt(fmt)	"virt/tdx: " fmt
+
+#include <linux/types.h>
+#include <linux/cache.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/printk.h>
+#include <asm/msr-index.h>
+#include <asm/msr.h>
+#include <asm/tdx.h>
+
+static u32 tdx_global_keyid __ro_after_init;
+static u32 tdx_guest_keyid_start __ro_after_init;
+static u32 tdx_nr_guest_keyids __ro_after_init;
+
+static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
+					    u32 *nr_tdx_keyids)
+{
+	u32 _nr_mktme_keyids, _tdx_keyid_start, _nr_tdx_keyids;
+	int ret;
+
+	/*
+	 * IA32_MKTME_KEYID_PARTITIONING:
+	 *   Bit [31:0]:	Number of MKTME KeyIDs.
+	 *   Bit [63:32]:	Number of TDX private KeyIDs.
+	 */
+	ret = rdmsr_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &_nr_mktme_keyids,
+			&_nr_tdx_keyids);
+	if (ret)
+		return -ENODEV;
+
+	if (!_nr_tdx_keyids)
+		return -ENODEV;
+
+	/* TDX KeyIDs start after the last MKTME KeyID. */
+	_tdx_keyid_start = _nr_mktme_keyids + 1;
+
+	*tdx_keyid_start = _tdx_keyid_start;
+	*nr_tdx_keyids = _nr_tdx_keyids;
+
+	return 0;
+}
+
+static int __init tdx_init(void)
+{
+	u32 tdx_keyid_start, nr_tdx_keyids;
+	int err;
+
+	err = record_keyid_partitioning(&tdx_keyid_start, &nr_tdx_keyids);
+	if (err)
+		return err;
+
+	pr_info("BIOS enabled: private KeyID range [%u, %u)\n",
+			tdx_keyid_start, tdx_keyid_start + nr_tdx_keyids);
+
+	/*
+	 * The TDX module itself requires one 'global KeyID' to protect
+	 * its metadata.  If there's only one TDX KeyID, there won't be
+	 * any left for TDX guests thus there's no point to enable TDX
+	 * at all.
+	 */
+	if (nr_tdx_keyids < 2) {
+		pr_err("initialization failed: too few private KeyIDs available.\n");
+		return -ENODEV;
+	}
+
+	/*
+	 * Just use the first TDX KeyID as the 'global KeyID' and
+	 * leave the rest for TDX guests.
+	 */
+	tdx_global_keyid = tdx_keyid_start;
+	tdx_guest_keyid_start = tdx_keyid_start + 1;
+	tdx_nr_guest_keyids = nr_tdx_keyids - 1;
+
+	return 0;
+}
+early_initcall(tdx_init);
+
+/* Return whether the BIOS has enabled TDX */
+bool platform_tdx_enabled(void)
+{
+	return !!tdx_global_keyid;
+}
-- 
2.41.0



* [PATCH v15 02/23] x86/tdx: Define TDX supported page sizes as macros
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
  2023-11-09 11:55 ` [PATCH v15 01/23] x86/virt/tdx: Detect TDX during kernel boot Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 03/23] x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC Kai Huang
                   ` (21 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

TDX supports 4K, 2M and 1G page sizes.  The corresponding values are
defined by the TDX module spec and used as TDX module ABI.  Currently,
they are used in try_accept_one() when the TDX guest tries to accept a
page.  However currently try_accept_one() uses hard-coded magic values.

Define TDX supported page sizes as macros and get rid of the hard-coded
values in try_accept_one().  TDX host support will need to use them too.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/coco/tdx/tdx-shared.c    | 6 +++---
 arch/x86/include/asm/shared/tdx.h | 5 +++++
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx-shared.c b/arch/x86/coco/tdx/tdx-shared.c
index 78e413269791..1655aa56a0a5 100644
--- a/arch/x86/coco/tdx/tdx-shared.c
+++ b/arch/x86/coco/tdx/tdx-shared.c
@@ -22,13 +22,13 @@ static unsigned long try_accept_one(phys_addr_t start, unsigned long len,
 	 */
 	switch (pg_level) {
 	case PG_LEVEL_4K:
-		page_size = 0;
+		page_size = TDX_PS_4K;
 		break;
 	case PG_LEVEL_2M:
-		page_size = 1;
+		page_size = TDX_PS_2M;
 		break;
 	case PG_LEVEL_1G:
-		page_size = 2;
+		page_size = TDX_PS_1G;
 		break;
 	default:
 		return 0;
diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
index ccce7ebd8677..a4036149c484 100644
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -55,6 +55,11 @@
 	(TDX_RDX | TDX_RBX | TDX_RSI | TDX_RDI | TDX_R8  | TDX_R9  | \
 	 TDX_R10 | TDX_R11 | TDX_R12 | TDX_R13 | TDX_R14 | TDX_R15)
 
+/* TDX supported page sizes from the TDX module ABI. */
+#define TDX_PS_4K	0
+#define TDX_PS_2M	1
+#define TDX_PS_1G	2
+
 #ifndef __ASSEMBLY__
 
 #include <linux/compiler_attributes.h>
-- 
2.41.0



* [PATCH v15 03/23] x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
  2023-11-09 11:55 ` [PATCH v15 01/23] x86/virt/tdx: Detect TDX during kernel boot Kai Huang
  2023-11-09 11:55 ` [PATCH v15 02/23] x86/tdx: Define TDX supported page sizes as macros Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 04/23] x86/cpu: Detect TDX partial write machine check erratum Kai Huang
                   ` (20 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

TDX capable platforms are locked to X2APIC mode and cannot fall back to
the legacy xAPIC mode when TDX is enabled by the BIOS.  TDX host support
requires x2APIC.  Make INTEL_TDX_HOST depend on X86_X2APIC.

Link: https://lore.kernel.org/lkml/ba80b303-31bf-d44a-b05d-5c0f83038798@intel.com/
Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..eb6e63956d51 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1970,6 +1970,7 @@ config INTEL_TDX_HOST
 	depends on CPU_SUP_INTEL
 	depends on X86_64
 	depends on KVM_INTEL
+	depends on X86_X2APIC
 	help
 	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
 	  host and certain physical attacks.  This option enables necessary TDX
-- 
2.41.0



* [PATCH v15 04/23] x86/cpu: Detect TDX partial write machine check erratum
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (2 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 03/23] x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code Kai Huang
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

TDX memory has integrity and confidentiality protections.  Violations of
this integrity protection are supposed to only affect TDX operations and
are never supposed to affect the host kernel itself.  In other words,
the host kernel should never, itself, see machine checks induced by the
TDX integrity hardware.

Alas, the first few generations of TDX hardware have an erratum.  A
partial write to a TDX private memory cacheline will silently "poison"
the line.  Subsequent reads will consume the poison and generate a
machine check.  According to the TDX hardware spec, neither of these
things should have happened.

Virtually all kernel memory access operations happen in full
cachelines.  In practice, writing a "byte" of memory usually reads a 64
byte cacheline of memory, modifies it, then writes the whole line back.
Those operations do not trigger this problem.

This problem is triggered by "partial" writes, where a write transaction
of less than a cacheline lands at the memory controller.  The CPU does
these via non-temporal write instructions (like MOVNTI), or through
UC/WC memory mappings.  The issue can also be triggered away from the
CPU by devices doing partial writes via DMA.
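
As an illustration only (this sketch is not part of the patch; it
assumes a userspace build with SSE2 intrinsics), the instruction
pattern that distinguishes a partial write from a full-line write:

  #include <emmintrin.h>

  /*
   * A 4-byte non-temporal store (MOVNTI under the hood): the write
   * transaction reaching the memory controller is smaller than a
   * cacheline -- the "partial write" this erratum is about.
   */
  static void partial_write(int *p, int val)
  {
          _mm_stream_si32(p, val);
  }

  /*
   * An ordinary store: the CPU reads the whole 64-byte cacheline,
   * modifies it, and writes the full line back.  Not affected.
   */
  static void full_line_write(int *p, int val)
  {
          *p = val;
  }

Only such writes landing on TDX private memory can trigger the erratum;
the sketch just shows the access pattern.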

With this erratum, there are additional things that need to be done.
To prepare for those changes, add a CPU bug bit to indicate this
erratum.  Note this bug reflects the hardware, thus it is detected
regardless of whether the kernel is built with TDX support or not.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/kernel/cpu/intel.c        | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4af140cf5719..d097e558e079 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -495,6 +495,7 @@
 #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
 #define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
 #define X86_BUG_GDS			X86_BUG(30) /* CPU is affected by Gather Data Sampling */
+#define X86_BUG_TDX_PW_MCE		X86_BUG(31) /* CPU may incur #MC if non-TD software does partial write to TDX private memory */
 
 /* BUG word 2 */
 #define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index a927a8fc9624..1304d29c0660 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -184,6 +184,21 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
 	return false;
 }
 
+static void check_tdx_erratum(struct cpuinfo_x86 *c)
+{
+	/*
+	 * These CPUs have an erratum.  A partial write from non-TD
+	 * software (e.g. via MOVNTI variants or UC/WC mapping) to TDX
+	 * private memory poisons that memory, and a subsequent read of
+	 * that memory triggers #MC.
+	 */
+	switch (c->x86_model) {
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+	case INTEL_FAM6_EMERALDRAPIDS_X:
+		setup_force_cpu_bug(X86_BUG_TDX_PW_MCE);
+	}
+}
+
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
 	u64 misc_enable;
@@ -322,6 +337,8 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 	 */
 	if (detect_extended_topology_early(c) < 0)
 		detect_ht_early(c);
+
+	check_tdx_erratum(c);
 }
 
 static void bsp_init_intel(struct cpuinfo_x86 *c)
-- 
2.41.0



* [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (3 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 04/23] x86/cpu: Detect TDX partial write machine check erratum Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 16:38   ` Dave Hansen
  2023-11-14 19:24   ` Isaku Yamahata
  2023-11-09 11:55 ` [PATCH v15 06/23] x86/virt/tdx: Add SEAMCALL error printing for module initialization Kai Huang
                   ` (18 subsequent siblings)
  23 siblings, 2 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Some SEAMCALLs use the RDRAND hardware and can fail for the same reasons
as RDRAND.  Use the kernel RDRAND retry logic for them.

There are three __seamcall*() variants.  Do the SEAMCALL retry in common
code and add a wrapper for each of them.
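
A minimal usage sketch of the wrappers below (the leaf number is made
up purely for illustration; real callers appear in later patches):

  #define TDH_EXAMPLE_LEAF	12345	/* hypothetical leaf number */

  static int do_example_seamcall(void)
  {
          struct tdx_module_args args = {};
          u64 err;

          /* sc_retry() retries internally on TDX_RND_NO_ENTROPY */
          err = seamcall(TDH_EXAMPLE_LEAF, &args);
          if (err)
                  return -EIO;

          return 0;
  }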

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

v14 -> v15:
 - Added Sathy's tag.

v13 -> v14:
 - Use real function sc_retry() instead of using macros. (Dave)
 - Added Kirill's tag.

v12 -> v13:
 - New implementation due to TDCALL assembly series.

---
 arch/x86/include/asm/tdx.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index ea9a0320b1f8..f1c0c15469f8 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -24,6 +24,11 @@
 #define TDX_SEAMCALL_GP			(TDX_SW_ERROR | X86_TRAP_GP)
 #define TDX_SEAMCALL_UD			(TDX_SW_ERROR | X86_TRAP_UD)
 
+/*
+ * TDX module SEAMCALL leaf function error codes
+ */
+#define TDX_RND_NO_ENTROPY	0x8000020300000000ULL
+
 #ifndef __ASSEMBLY__
 
 /*
@@ -84,6 +89,27 @@ u64 __seamcall(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
 u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
 
+#include <asm/archrandom.h>
+
+typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
+
+static inline u64 sc_retry(sc_func_t func, u64 fn,
+			   struct tdx_module_args *args)
+{
+	int retry = RDRAND_RETRY_LOOPS;
+	u64 ret;
+
+	do {
+		ret = func(fn, args);
+	} while (ret == TDX_RND_NO_ENTROPY && --retry);
+
+	return ret;
+}
+
+#define seamcall(_fn, _args)		sc_retry(__seamcall, (_fn), (_args))
+#define seamcall_ret(_fn, _args)	sc_retry(__seamcall_ret, (_fn), (_args))
+#define seamcall_saved_ret(_fn, _args)	sc_retry(__seamcall_saved_ret, (_fn), (_args))
+
 bool platform_tdx_enabled(void);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
-- 
2.41.0



* [PATCH v15 06/23] x86/virt/tdx: Add SEAMCALL error printing for module initialization
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (4 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 07/23] x86/virt/tdx: Add skeleton to enable TDX on demand Kai Huang
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The SEAMCALLs involved during the TDX module initialization are not
expected to fail.  In fact, they are not expected to return any non-zero
code (except the "running out of entropy" error, which can already be
handled internally).

Add yet another set of SEAMCALL wrappers, which treat all non-zero
return codes as errors, to support printing SEAMCALL errors upon
failure during module initialization.  Note the TDX module
initialization doesn't use the _saved_ret() variant, thus no wrapper is
added for it.

SEAMCALL assembly can also return kernel-defined error codes for three
special cases: 1) TDX isn't enabled by the BIOS; 2) the TDX module isn't
loaded; 3) the CPU isn't in VMX operation.  Whether they can legally
happen depends on the caller, so leave it to the caller to print an
error message when desired.

Also convert the SEAMCALL error codes to the kernel error codes in the
new wrappers so that each SEAMCALL caller doesn't have to repeat the
conversion.
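
A usage sketch to show the contract (the leaf name and number are
placeholders): unlike the raw seamcall() wrapper, which returns the u64
SEAMCALL status, the _prerr variants return a kernel errno and have
already printed the failure:

  #define TDH_EXAMPLE_INIT	54321	/* hypothetical leaf number */

  static int example_init_step(void)
  {
          struct tdx_module_args args = {};
          int ret;

          /*
           * On failure the wrapper has already printed the SEAMCALL
           * status (plus output registers for the _ret variant) and
           * mapped it to -ENODEV/-EOPNOTSUPP/-EACCES/-EIO.
           */
          ret = seamcall_prerr(TDH_EXAMPLE_INIT, &args);
          if (ret)
                  return ret;

          return 0;
  }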

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

v14 -> v15:
 - Remove unneeded seamcall_err_saved_ret() -- Nikolay
 - Added Sathy's tag

v13 -> v14:
 - Use real functions to replace macros. (Dave)
 - Moved printing error message for special error code to the caller
   (internal)
 - Added Kirill's tag

v12 -> v13:
 - New implementation due to TDCALL assembly series.

---
 arch/x86/include/asm/tdx.h  |  1 +
 arch/x86/virt/vmx/tdx/tdx.c | 43 +++++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index f1c0c15469f8..9c35cd4ae0dc 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -27,6 +27,7 @@
 /*
  * TDX module SEAMCALL leaf function error codes
  */
+#define TDX_SUCCESS		0ULL
 #define TDX_RND_NO_ENTROPY	0x8000020300000000ULL
 
 #ifndef __ASSEMBLY__
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 13d22ea2e2d9..12e519c5c45c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -20,6 +20,49 @@ static u32 tdx_global_keyid __ro_after_init;
 static u32 tdx_guest_keyid_start __ro_after_init;
 static u32 tdx_nr_guest_keyids __ro_after_init;
 
+typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
+
+static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
+{
+	pr_err("SEAMCALL (0x%llx) failed: 0x%llx\n", fn, err);
+}
+
+static inline void seamcall_err_ret(u64 fn, u64 err,
+				    struct tdx_module_args *args)
+{
+	seamcall_err(fn, err, args);
+	pr_err("RCX 0x%llx RDX 0x%llx R8 0x%llx R9 0x%llx R10 0x%llx R11 0x%llx\n",
+			args->rcx, args->rdx, args->r8, args->r9,
+			args->r10, args->r11);
+}
+
+static inline int sc_retry_prerr(sc_func_t func, sc_err_func_t err_func,
+				 u64 fn, struct tdx_module_args *args)
+{
+	u64 sret = sc_retry(func, fn, args);
+
+	if (sret == TDX_SUCCESS)
+		return 0;
+
+	if (sret == TDX_SEAMCALL_VMFAILINVALID)
+		return -ENODEV;
+
+	if (sret == TDX_SEAMCALL_GP)
+		return -EOPNOTSUPP;
+
+	if (sret == TDX_SEAMCALL_UD)
+		return -EACCES;
+
+	err_func(fn, sret, args);
+	return -EIO;
+}
+
+#define seamcall_prerr(__fn, __args)						\
+	sc_retry_prerr(__seamcall, seamcall_err, (__fn), (__args))
+
+#define seamcall_prerr_ret(__fn, __args)					\
+	sc_retry_prerr(__seamcall_ret, seamcall_err_ret, (__fn), (__args))
+
 static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 					    u32 *nr_tdx_keyids)
 {
-- 
2.41.0



* [PATCH v15 07/23] x86/virt/tdx: Add skeleton to enable TDX on demand
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (5 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 06/23] x86/virt/tdx: Add SEAMCALL error printing for module initialization Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 08/23] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory Kai Huang
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

To enable TDX the kernel needs to initialize TDX from two perspectives:
1) Do a set of SEAMCALLs to initialize the TDX module to make it ready
to create and run TDX guests; 2) Do the per-cpu initialization SEAMCALL
on one logical cpu before the kernel wants to make any other SEAMCALLs
on that cpu (including those involved during module initialization and
running TDX guests).

The TDX module can be initialized only once in its lifetime.  Instead
of always initializing it at boot time, this implementation chooses an
"on demand" approach: don't initialize TDX until there is a real need
(e.g. when requested by KVM).  This approach has the following pros:

1) It avoids consuming the memory that must be allocated by the kernel
and given to the TDX module as metadata (~1/256th of the TDX-usable
memory; e.g., a host with 1TB of TDX-usable memory would hand roughly
4GB to the module), and also saves the CPU cycles of initializing the
TDX module (and the metadata) when TDX is not used at all.

2) The TDX module design allows it to be updated while the system is
running.  The update procedure shares quite a few steps with this "on
demand" initialization mechanism.  The hope is that much of "on demand"
mechanism can be shared with a future "update" mechanism.  A boot-time
TDX module implementation would not be able to share much code with the
update mechanism.

3) Making SEAMCALL requires VMX to be enabled.  Currently, only the KVM
code mucks with VMX enabling.  If the TDX module were to be initialized
separately from KVM (like at boot), the boot code would need to be
taught how to muck with VMX enabling and KVM would need to be taught how
to cope with that.  Making KVM itself responsible for TDX initialization
lets the rest of the kernel stay blissfully unaware of VMX.

Similar to module initialization, also make the per-cpu initialization
"on demand" as it also depends on VMX being enabled.

Add two functions, tdx_enable() and tdx_cpu_enable(), to enable the TDX
module and enable TDX on the local cpu respectively.  For now tdx_enable()
is a placeholder.  The TODO list will be pared down as functionality is
added.

Export both tdx_cpu_enable() and tdx_enable() for KVM use.

In tdx_enable() use a state machine protected by a mutex to make sure
the initialization will only be done once, as tdx_enable() can be
called multiple times (i.e. the KVM module can be reloaded) and may be
called concurrently by other kernel components in the future.

The per-cpu initialization on each cpu can only be done once during the
module's lifetime.  Use a per-cpu variable to track its status to make
sure it is only done once in tdx_cpu_enable().

Also, a SEAMCALL to do TDX module global initialization must be done
once on any logical cpu before any per-cpu initialization SEAMCALL.  Do
it inside tdx_cpu_enable() too (if it hasn't been done).

tdx_enable() can potentially invoke SEAMCALLs on any online cpu.  The
per-cpu initialization must be done before those SEAMCALLs are invoked
on some cpu.  To keep things simple, in tdx_cpu_enable(), always do the
per-cpu initialization regardless of whether the TDX module has been
initialized or not.  And in tdx_enable(), don't call tdx_cpu_enable()
but assume the caller has disabled CPU hotplug, done VMXON and
tdx_cpu_enable() on all online cpus before calling tdx_enable().
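
To make that calling contract concrete, a minimal sketch of a would-be
user such as KVM (the function names are made up, and the VMXON step is
hand-waved as it is KVM-internal):

  static void vmxon_and_tdx_cpu_enable(void *err)
  {
          /* KVM would do VMXON on this cpu first ... */
          if (tdx_cpu_enable())
                  atomic_inc((atomic_t *)err);
  }

  static int example_enable_tdx(void)
  {
          atomic_t err = ATOMIC_INIT(0);
          int ret;

          cpus_read_lock();	/* no new cpu can become online */
          /* IPI function call: tdx_cpu_enable() runs with IRQs off */
          on_each_cpu(vmxon_and_tdx_cpu_enable, &err, 1);
          ret = atomic_read(&err) ? -EIO : tdx_enable();
          cpus_read_unlock();

          return ret;
  }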

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
---

v14 -> v15:
 - Added Nikolay's tag.

v13 -> v14:
 - Use lockdep_assert_irqs_off() in try_init_module_global() (Nikolay),
   but still keep the comment (Kirill).
 - Add code to print "module not loaded" in the first SEAMCALL.
 - If SYS.INIT fails, stop calling LP.INIT in other tdx_cpu_enable()s.
 - Added Kirill's tag

v12 -> v13:
 - Made tdx_cpu_enable() always be called with IRQ disabled via IPI
   function call (Peter, Kirill).

v11 -> v12:
 - Simplified TDX module global init and lp init status tracking (David).
 - Added comment around try_init_module_global() for using
   raw_spin_lock() (Dave).
 - Added one sentence to changelog to explain why to expose tdx_enable()
   and tdx_cpu_enable() (Dave).
 - Simplified comments around tdx_enable() and tdx_cpu_enable() to use
   lockdep_assert_*() instead. (Dave)
 - Removed redundant "TDX" in error message (Dave).

v10 -> v11:
 - Return -ENODEV instead of -EINVAL when CONFIG_INTEL_TDX_HOST is off.
 - Return the actual error code for tdx_enable() instead of -EINVAL.
 - Added Isaku's Reviewed-by.

v9 -> v10:
 - Merged the patch to handle per-cpu initialization to this patch to
   tell the story better.
 - Changed how to handle the per-cpu initialization to only provide a
   tdx_cpu_enable() function to let the user of TDX to do it when the
   user wants to run TDX code on a certain cpu.
 - Changed tdx_enable() to not call cpus_read_lock() explicitly, but
   call lockdep_assert_cpus_held() to assume the caller has done that.
 - Improved comments around tdx_enable() and tdx_cpu_enable().
 - Improved changelog to tell the story better accordingly.

v8 -> v9:
 - Removed detailed TODO list in the changelog (Dave).
 - Added back steps to do module global initialization and per-cpu
   initialization in the TODO list comment.
 - Moved the 'enum tdx_module_status_t' from tdx.c to local tdx.h

v7 -> v8:
 - Refined changelog (Dave).
 - Removed "all BIOS-enabled cpus" related code (Peter/Thomas/Dave).
 - Add a "TODO list" comment in init_tdx_module() to list all steps of
   initializing the TDX Module to tell the story (Dave).
 - Made tdx_enable() universally return -EINVAL, and removed nonsense
   comments (Dave).
 - Simplified __tdx_enable() to only handle success or failure.
 - TDX_MODULE_SHUTDOWN -> TDX_MODULE_ERROR
 - Removed TDX_MODULE_NONE (not loaded) as it is not necessary.
 - Improved comments (Dave).
 - Pointed out 'tdx_module_status' is software thing (Dave).

 ...


---
 arch/x86/include/asm/tdx.h  |   4 +
 arch/x86/virt/vmx/tdx/tdx.c | 167 ++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h |  30 +++++++
 3 files changed, 201 insertions(+)
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.h

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 9c35cd4ae0dc..26b7fdbcbdb3 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -112,8 +112,12 @@ static inline u64 sc_retry(sc_func_t func, u64 fn,
 #define seamcall_saved_ret(_fn, _args)	sc_retry(__seamcall_saved_ret, (_fn), (_args))
 
 bool platform_tdx_enabled(void);
+int tdx_cpu_enable(void);
+int tdx_enable(void);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
+static inline int tdx_cpu_enable(void) { return -ENODEV; }
+static inline int tdx_enable(void)  { return -ENODEV; }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 12e519c5c45c..e7739e15d47a 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -12,14 +12,24 @@
 #include <linux/init.h>
 #include <linux/errno.h>
 #include <linux/printk.h>
+#include <linux/cpu.h>
+#include <linux/spinlock.h>
+#include <linux/percpu-defs.h>
+#include <linux/mutex.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/tdx.h>
+#include "tdx.h"
 
 static u32 tdx_global_keyid __ro_after_init;
 static u32 tdx_guest_keyid_start __ro_after_init;
 static u32 tdx_nr_guest_keyids __ro_after_init;
 
+static DEFINE_PER_CPU(bool, tdx_lp_initialized);
+
+static enum tdx_module_status_t tdx_module_status;
+static DEFINE_MUTEX(tdx_module_lock);
+
 typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
 
 static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -63,6 +73,163 @@ static inline int sc_retry_prerr(sc_func_t func, sc_err_func_t err_func,
 #define seamcall_prerr_ret(__fn, __args)					\
 	sc_retry_prerr(__seamcall_ret, seamcall_err_ret, (__fn), (__args))
 
+/*
+ * Do the module global initialization once and return its result.
+ * It can be done on any cpu.  It's always called with interrupts
+ * disabled.
+ */
+static int try_init_module_global(void)
+{
+	struct tdx_module_args args = {};
+	static DEFINE_RAW_SPINLOCK(sysinit_lock);
+	static bool sysinit_done;
+	static int sysinit_ret;
+
+	lockdep_assert_irqs_disabled();
+
+	raw_spin_lock(&sysinit_lock);
+
+	if (sysinit_done)
+		goto out;
+
+	/* RCX is module attributes and all bits are reserved */
+	args.rcx = 0;
+	sysinit_ret = seamcall_prerr(TDH_SYS_INIT, &args);
+
+	/*
+	 * The first SEAMCALL also detects the TDX module, thus
+	 * it can fail if the TDX module is not loaded.  Dump a
+	 * message to let the user know.
+	 */
+	if (sysinit_ret == -ENODEV)
+		pr_err("module not loaded\n");
+
+	sysinit_done = true;
+out:
+	raw_spin_unlock(&sysinit_lock);
+	return sysinit_ret;
+}
+
+/**
+ * tdx_cpu_enable - Enable TDX on local cpu
+ *
+ * Do one-time TDX module per-cpu initialization SEAMCALL (and TDX module
+ * global initialization SEAMCALL if not done) on local cpu to make this
+ * cpu be ready to run any other SEAMCALLs.
+ *
+ * Always call this function via IPI function calls.
+ *
+ * Return 0 on success, otherwise errors.
+ */
+int tdx_cpu_enable(void)
+{
+	struct tdx_module_args args = {};
+	int ret;
+
+	if (!platform_tdx_enabled())
+		return -ENODEV;
+
+	lockdep_assert_irqs_disabled();
+
+	if (__this_cpu_read(tdx_lp_initialized))
+		return 0;
+
+	/*
+	 * The TDX module global initialization is the very first step
+	 * to enable TDX.  Need to do it first (if it hasn't been done)
+	 * before the per-cpu initialization.
+	 */
+	ret = try_init_module_global();
+	if (ret)
+		return ret;
+
+	ret = seamcall_prerr(TDH_SYS_LP_INIT, &args);
+	if (ret)
+		return ret;
+
+	__this_cpu_write(tdx_lp_initialized, true);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(tdx_cpu_enable);
+
+static int init_tdx_module(void)
+{
+	/*
+	 * TODO:
+	 *
+	 *  - Build the list of TDX-usable memory regions.
+	 *  - Get TDX module "TD Memory Region" (TDMR) global metadata.
+	 *  - Construct a list of TDMRs to cover all TDX-usable memory
+	 *    regions.
+	 *  - Configure the TDMRs and the global KeyID to the TDX module.
+	 *  - Configure the global KeyID on all packages.
+	 *  - Initialize all TDMRs.
+	 *
+	 *  Return error before all steps are done.
+	 */
+	return -EINVAL;
+}
+
+static int __tdx_enable(void)
+{
+	int ret;
+
+	ret = init_tdx_module();
+	if (ret) {
+		pr_err("module initialization failed (%d)\n", ret);
+		tdx_module_status = TDX_MODULE_ERROR;
+		return ret;
+	}
+
+	pr_info("module initialized\n");
+	tdx_module_status = TDX_MODULE_INITIALIZED;
+
+	return 0;
+}
+
+/**
+ * tdx_enable - Enable TDX module to make it ready to run TDX guests
+ *
+ * This function assumes the caller has: 1) held read lock of CPU hotplug
+ * lock to prevent any new cpu from becoming online; 2) done both VMXON
+ * and tdx_cpu_enable() on all online cpus.
+ *
+ * This function can be called in parallel by multiple callers.
+ *
+ * Return 0 if TDX is enabled successfully, otherwise error.
+ */
+int tdx_enable(void)
+{
+	int ret;
+
+	if (!platform_tdx_enabled())
+		return -ENODEV;
+
+	lockdep_assert_cpus_held();
+
+	mutex_lock(&tdx_module_lock);
+
+	switch (tdx_module_status) {
+	case TDX_MODULE_UNINITIALIZED:
+		ret = __tdx_enable();
+		break;
+	case TDX_MODULE_INITIALIZED:
+		/* Already initialized, great, tell the caller. */
+		ret = 0;
+		break;
+	default:
+		/* Failed to initialize in the previous attempts */
+		ret = -EINVAL;
+		break;
+	}
+
+	mutex_unlock(&tdx_module_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(tdx_enable);
+
 static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 					    u32 *nr_tdx_keyids)
 {
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
new file mode 100644
index 000000000000..a3c52270df5b
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_VIRT_TDX_H
+#define _X86_VIRT_TDX_H
+
+/*
+ * This file contains both macros and data structures defined by the TDX
+ * architecture and Linux defined software data structures and functions.
+ * The two should not be mixed together for better readability.  The
+ * architectural definitions come first.
+ */
+
+/*
+ * TDX module SEAMCALL leaf functions
+ */
+#define TDH_SYS_INIT		33
+#define TDH_SYS_LP_INIT		35
+
+/*
+ * Do not put any hardware-defined TDX structure representations below
+ * this comment!
+ */
+
+/* Kernel defined TDX module status during module initialization. */
+enum tdx_module_status_t {
+	TDX_MODULE_UNINITIALIZED,
+	TDX_MODULE_INITIALIZED,
+	TDX_MODULE_ERROR
+};
+
+#endif
-- 
2.41.0



* [PATCH v15 08/23] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (6 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 07/23] x86/virt/tdx: Add skeleton to enable TDX on demand Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization Kai Huang
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Start to flesh out the multi-step process of initializing the TDX module.

TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums.  Not all memory
satisfies these requirements.

As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR).  During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees.  The list of these
ranges is available to the kernel by querying the TDX module.

CMRs tell the kernel which memory is TDX compatible.  The kernel needs
to build a list of memory regions (out of CMRs) as "TDX-usable" memory
and pass them to the TDX module.  Once this is done, those "TDX-usable"
memory regions are fixed for the module's lifetime.

To keep things simple, assume that all TDX-protected memory will come
from the page allocator.  Make sure all pages in the page allocator
*are* TDX-usable memory.

As TDX-usable memory is a fixed configuration, take a snapshot of the
memory configuration from memblocks at the time of module initialization
(memblocks are modified on memory hotplug).  This snapshot is used to
enable TDX support for *this* memory configuration only.  Use a memory
hotplug notifier to ensure that no other RAM can be added outside of
this configuration.

This approach requires all memblock memory regions at the time of
module initialization to be TDX convertible memory; otherwise module
initialization will fail in a later SEAMCALL when passing those regions
to the module.  This approach works when all boot-time "system RAM" is
TDX convertible memory and no non-TDX-convertible memory is hot-added
to the core-mm before module initialization.

For instance, on the first generation of TDX machines, neither CXL
memory nor NVDIMM is TDX convertible memory.  Using the kmem driver to
hot-add any CXL memory or NVDIMM to the core-mm before module
initialization will result in failure to initialize the module.  The
SEAMCALL error code will be available in the dmesg to help the user
understand the failure.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - Rebase due to removal of TDH.SYS.INFO patch.

v13 -> v14:
 - Added Kirill's tag.

v12 -> v13:
 - Allocate TDSYSINFO and CMR array separately. (Kirill)
 - Added comment around TDH.SYS.INFO. (Peter)

v11 -> v12:
 - Changed to use dynamic allocation for TDSYSINFO_STRUCT and CMR array
   (Kirill).
 - Keep SEAMCALL leaf macro definitions in order (Kirill)
 - Removed is_cmr_empty() but open code directly (David)
 - 'atribute' -> 'attribute' (David)

v10 -> v11:
 - No change.

v9 -> v10:
 - Added back "start to transit out..." as now per-cpu init has been
   moved out from tdx_enable().

v8 -> v9:
 - Removed "start to transit out ..." part in changelog since this patch
   is no longer the first step.
 - Changed to declare 'tdsysinfo' and 'cmr_array' as local static, and
   changed changelog accordingly (Dave).
 - Improved changelog to explain why to declare 'tdsysinfo_struct' in
   full but only use a few members of it (Dave).

v7 -> v8: (Dave)
 - Improved changelog to tell this is the first patch to transit out the
   "multi-steps" init_tdx_module().
 - Removed all CMR check/trim code but to depend on later SEAMCALL.
 - Variable 'vertical alignment' in print TDX module information.
 - Added DECLARE_PADDED_STRUCT() for padded structure.
 - Made tdx_sysinfo and tdx_cmr_array[] to be function local variable
   (and rename them accordingly), and added -Wframe-larger-than=4096 flag
   to silence the build warning.

 ...

---
 arch/x86/Kconfig            |   1 +
 arch/x86/kernel/setup.c     |   2 +
 arch/x86/virt/vmx/tdx/tdx.c | 167 +++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |   6 ++
 4 files changed, 174 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index eb6e63956d51..2c69ef844805 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1971,6 +1971,7 @@ config INTEL_TDX_HOST
 	depends on X86_64
 	depends on KVM_INTEL
 	depends on X86_X2APIC
+	select ARCH_KEEP_MEMBLOCK
 	help
 	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
 	  host and certain physical attacks.  This option enables necessary TDX
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1526747bedf2..9597c002b3c4 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1033,6 +1033,8 @@ void __init setup_arch(char **cmdline_p)
 	 *
 	 * Moreover, on machines with SandyBridge graphics or in setups that use
 	 * crashkernel the entire 1M is reserved anyway.
+	 *
+	 * Note host TDX support also requires the first 1MB to be reserved.
 	 */
 	x86_platform.realmode_reserve();
 
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index e7739e15d47a..d1affb30f74d 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -16,6 +16,12 @@
 #include <linux/spinlock.h>
 #include <linux/percpu-defs.h>
 #include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/minmax.h>
+#include <linux/sizes.h>
+#include <linux/pfn.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/tdx.h>
@@ -30,6 +36,9 @@ static DEFINE_PER_CPU(bool, tdx_lp_initialized);
 static enum tdx_module_status_t tdx_module_status;
 static DEFINE_MUTEX(tdx_module_lock);
 
+/* All TDX-usable memory regions.  Protected by mem_hotplug_lock. */
+static LIST_HEAD(tdx_memlist);
+
 typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
 
 static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -153,12 +162,102 @@ int tdx_cpu_enable(void)
 }
 EXPORT_SYMBOL_GPL(tdx_cpu_enable);
 
+/*
+ * Add a memory region as a TDX memory block.  The caller must make sure
+ * all memory regions are added in address ascending order and don't
+ * overlap.
+ */
+static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
+			    unsigned long end_pfn)
+{
+	struct tdx_memblock *tmb;
+
+	tmb = kmalloc(sizeof(*tmb), GFP_KERNEL);
+	if (!tmb)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&tmb->list);
+	tmb->start_pfn = start_pfn;
+	tmb->end_pfn = end_pfn;
+
+	/* @tmb_list is protected by mem_hotplug_lock */
+	list_add_tail(&tmb->list, tmb_list);
+	return 0;
+}
+
+static void free_tdx_memlist(struct list_head *tmb_list)
+{
+	/* @tmb_list is protected by mem_hotplug_lock */
+	while (!list_empty(tmb_list)) {
+		struct tdx_memblock *tmb = list_first_entry(tmb_list,
+				struct tdx_memblock, list);
+
+		list_del(&tmb->list);
+		kfree(tmb);
+	}
+}
+
+/*
+ * Ensure that all memblock memory regions are convertible to TDX
+ * memory.  Once this has been established, stash the memblock
+ * ranges off in a secondary structure because memblock is modified
+ * in memory hotplug while TDX memory regions are fixed.
+ */
+static int build_tdx_memlist(struct list_head *tmb_list)
+{
+	unsigned long start_pfn, end_pfn;
+	int i, ret;
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		/*
+		 * The first 1MB is not reported as TDX convertible memory.
+		 * Although the first 1MB is always reserved and won't end up
+		 * in the page allocator, it is still in memblock's memory
+		 * regions.  Skip it manually to exclude it as TDX memory.
+		 */
+		start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
+		if (start_pfn >= end_pfn)
+			continue;
+
+		/*
+		 * Add the memory regions as TDX memory.  The regions in
+		 * memblock are already guaranteed to be in address
+		 * ascending order and don't overlap.
+		 */
+		ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+err:
+	free_tdx_memlist(tmb_list);
+	return ret;
+}
+
 static int init_tdx_module(void)
 {
+	int ret;
+
+	/*
+	 * To keep things simple, assume that all TDX-protected memory
+	 * will come from the page allocator.  Make sure all pages in the
+	 * page allocator are TDX-usable memory.
+	 *
+	 * Build the list of "TDX-usable" memory regions which cover all
+	 * pages in the page allocator to guarantee that.  Do it while
+	 * holding mem_hotplug_lock read-lock as the memory hotplug code
+	 * path reads the @tdx_memlist to reject any new memory.
+	 */
+	get_online_mems();
+
+	ret = build_tdx_memlist(&tdx_memlist);
+	if (ret)
+		goto out_put_tdxmem;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Build the list of TDX-usable memory regions.
 	 *  - Get TDX module "TD Memory Region" (TDMR) global metadata.
 	 *  - Construct a list of TDMRs to cover all TDX-usable memory
 	 *    regions.
@@ -168,7 +267,14 @@ static int init_tdx_module(void)
 	 *
 	 *  Return error before all steps are done.
 	 */
-	return -EINVAL;
+	ret = -EINVAL;
+out_put_tdxmem:
+	/*
+	 * @tdx_memlist is written here and read at memory hotplug time.
+	 * Lock out memory hotplug code while building it.
+	 */
+	put_online_mems();
+	return ret;
 }
 
 static int __tdx_enable(void)
@@ -258,6 +364,56 @@ static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 	return 0;
 }
 
+static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn)
+{
+	struct tdx_memblock *tmb;
+
+	/*
+	 * This check assumes that the start_pfn<->end_pfn range does not
+	 * cross multiple @tdx_memlist entries.  A single memory online
+	 * event across multiple memblocks (from which @tdx_memlist
+	 * entries are derived at the time of module initialization) is
+	 * not possible.  This is because memory offline/online is done
+	 * on granularity of 'struct memory_block', and the hotpluggable
+	 * memory region (one memblock) must be multiple of memory_block.
+	 */
+	list_for_each_entry(tmb, &tdx_memlist, list) {
+		if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
+			return true;
+	}
+	return false;
+}
+
+static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
+			       void *v)
+{
+	struct memory_notify *mn = v;
+
+	if (action != MEM_GOING_ONLINE)
+		return NOTIFY_OK;
+
+	/*
+	 * Empty list means TDX isn't enabled.  Allow any memory
+	 * to go online.
+	 */
+	if (list_empty(&tdx_memlist))
+		return NOTIFY_OK;
+
+	/*
+	 * The TDX memory configuration is static and can not be
+	 * changed.  Reject onlining any memory which is outside of
+	 * the static configuration whether it supports TDX or not.
+	 */
+	if (is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages))
+		return NOTIFY_OK;
+
+	return NOTIFY_BAD;
+}
+
+static struct notifier_block tdx_memory_nb = {
+	.notifier_call = tdx_memory_notifier,
+};
+
 static int __init tdx_init(void)
 {
 	u32 tdx_keyid_start, nr_tdx_keyids;
@@ -281,6 +437,13 @@ static int __init tdx_init(void)
 		return -ENODEV;
 	}
 
+	err = register_memory_notifier(&tdx_memory_nb);
+	if (err) {
+		pr_err("initialization failed: register_memory_notifier() failed (%d)\n",
+				err);
+		return -ENODEV;
+	}
+
 	/*
 	 * Just use the first TDX KeyID as the 'global KeyID' and
 	 * leave the rest for TDX guests.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index a3c52270df5b..c11e0a7ca664 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -27,4 +27,10 @@ enum tdx_module_status_t {
 	TDX_MODULE_ERROR
 };
 
+struct tdx_memblock {
+	struct list_head list;
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+};
+
 #endif
-- 
2.41.0



* [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (7 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 08/23] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 23:29   ` Dave Hansen
  2023-11-15 19:35   ` Isaku Yamahata
  2023-11-09 11:55 ` [PATCH v15 10/23] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions Kai Huang
                   ` (14 subsequent siblings)
  23 siblings, 2 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The TDX module global metadata provides system-wide information about
the module.  The TDX module provides SEAMCALLs to allow the kernel to
query one specific global metadata field (entry) or all fields.

TL;DR:

Use the TDH.SYS.RD SEAMCALL to read the essential global metadata for
module initialization and, at the same time, only initialize TDX module
versions 1.5 and later.

Long Version:

1) Only initialize TDX module with version 1.5 and later

TDX module 1.0 has some compatibility issues with later versions of the
module, as documented in the "Intel TDX module ABI incompatibilities
between TDX1.0 and TDX1.5" spec.  Basically there's no value in using
TDX module 1.0 when TDX module 1.5 and later versions are already
available.  To keep things simple, just support initializing TDX module
1.5 and later.

2) Get the essential global metadata for module initialization

TDX reports a list of "Convertible Memory Region" (CMR) to tell the
kernel which memory is TDX compatible.  The kernel needs to build a list
of memory regions (out of CMRs) as "TDX-usable" memory and pass them to
the TDX module.  The kernel does this by constructing a list of "TD
Memory Regions" (TDMRs) to cover all these memory regions and passing
them to the TDX module.

Each TDMR is a TDX architectural data structure containing the memory
region that the TDMR covers, plus the information to track (within this
TDMR): a) the "Physical Address Metadata Table" (PAMT) to track each TDX
memory page's status (such as which TDX guest "owns" a given page), and
b) the "reserved areas" to mark memory holes that cannot be used as TDX
memory.

The kernel needs to get the following metadata from the TDX module to
build the list of TDMRs: a) the maximum number of supported TDMRs,
b) the maximum number of supported reserved areas per TDMR, and c) the
PAMT entry size for each TDX-supported page size.

Note the TDX module internally checks whether the "TDX-usable" memory
regions passed via TDMRs are truly convertible.  Just skip reading the
CMRs and manually checking memory regions against them, and let the TDX
module do the check.

== Implementation ==

TDX module 1.0 uses the TDH.SYS.INFO SEAMCALL to report the global
metadata in a fixed-size (1024-byte) structure 'TDSYSINFO_STRUCT'.  TDX
module 1.5 adds more metadata fields, and introduces the new
TDH.SYS.{RD|RDALL} SEAMCALLs for reading the metadata.  The new metadata
mechanism removes the fixed-size limitation of the structure
'TDSYSINFO_STRUCT' and allows the TDX module to support an unlimited
number of metadata fields.

TDX module 1.5 and later versions still support TDH.SYS.INFO for
compatibility with TDX module 1.0, but it may only report part of the
metadata via the 'TDSYSINFO_STRUCT'.  Any new metadata must be read by
the kernel using TDH.SYS.{RD|RDALL}.

To achieve the two goals mentioned in 1) and 2) above, just use
TDH.SYS.RD to read the essential metadata fields related to the TDMRs.

TDH.SYS.RD returns *one* metadata field for a given "Metadata Field ID".
It is sufficient for getting these few fields for module initialization.
On the other hand, TDH.SYS.RDALL reports all metadata fields into a 4KB
buffer provided by the kernel, which is a little bit overkill here.

It may be beneficial to get all metadata fields at once here so they can
also be used by KVM (some are essential for creating basic TDX guests),
but it is unknown up front how many 4K pages are needed to hold all the
metadata.  Thus it's better to read metadata only when needed.
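
As a rough standalone sketch (not kernel code; it only mirrors the
MD_FIELD_ID_ELE_SIZE_CODE() logic added to tdx.h below), bits 33:32 of
a field ID can be decoded like this:

    #include <stdio.h>
    #include <stdint.h>

    #define MD_FIELD_ID_MAX_TDMRS	0x9100000100000008ULL

    /* Same math as MD_FIELD_ID_ELE_SIZE_CODE(): extract bits 33:32. */
    static unsigned int ele_size_code(uint64_t field_id)
    {
        return (field_id >> 32) & 0x3;
    }

    int main(void)
    {
        /* Prints 1, i.e. 16-bit elements, hence the u16 reads below. */
        printf("%u\n", ele_size_code(MD_FIELD_ID_MAX_TDMRS));
        return 0;
    }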

Signed-off-by: Kai Huang <kai.huang@intel.com>
---

v14 -> v15:
 - New patch to use TDH.SYS.RD to read TDX module global metadata for
   module initialization and stop initializing 1.0 module.

---
 arch/x86/include/asm/shared/tdx.h |  1 +
 arch/x86/virt/vmx/tdx/tdx.c       | 75 ++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h       | 39 ++++++++++++++++
 3 files changed, 114 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
index a4036149c484..fdfd41511b02 100644
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -59,6 +59,7 @@
 #define TDX_PS_4K	0
 #define TDX_PS_2M	1
 #define TDX_PS_1G	2
+#define TDX_PS_NR	(TDX_PS_1G + 1)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index d1affb30f74d..d24027993983 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -235,8 +235,75 @@ static int build_tdx_memlist(struct list_head *tmb_list)
 	return ret;
 }
 
+static int read_sys_metadata_field(u64 field_id, u64 *data)
+{
+	struct tdx_module_args args = {};
+	int ret;
+
+	/*
+	 * TDH.SYS.RD -- reads one global metadata field
+	 *  - RDX (in): the field to read
+	 *  - R8 (out): the field data
+	 */
+	args.rdx = field_id;
+	ret = seamcall_prerr_ret(TDH_SYS_RD, &args);
+	if (ret)
+		return ret;
+
+	*data = args.r8;
+
+	return 0;
+}
+
+static int read_sys_metadata_field16(u64 field_id, u16 *data)
+{
+	u64 _data;
+	int ret;
+
+	if (WARN_ON_ONCE(MD_FIELD_ID_ELE_SIZE_CODE(field_id) !=
+			MD_FIELD_ID_ELE_SIZE_16BIT))
+		return -EINVAL;
+
+	ret = read_sys_metadata_field(field_id, &_data);
+	if (ret)
+		return ret;
+
+	*data = (u16)_data;
+
+	return 0;
+}
+
+static int get_tdx_tdmr_sysinfo(struct tdx_tdmr_sysinfo *tdmr_sysinfo)
+{
+	int ret;
+
+	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_TDMRS,
+			&tdmr_sysinfo->max_tdmrs);
+	if (ret)
+		return ret;
+
+	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_RESERVED_PER_TDMR,
+			&tdmr_sysinfo->max_reserved_per_tdmr);
+	if (ret)
+		return ret;
+
+	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_4K_ENTRY_SIZE,
+			&tdmr_sysinfo->pamt_entry_size[TDX_PS_4K]);
+	if (ret)
+		return ret;
+
+	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_2M_ENTRY_SIZE,
+			&tdmr_sysinfo->pamt_entry_size[TDX_PS_2M]);
+	if (ret)
+		return ret;
+
+	return read_sys_metadata_field16(MD_FIELD_ID_PAMT_1G_ENTRY_SIZE,
+			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
+}
+
 static int init_tdx_module(void)
 {
+	struct tdx_tdmr_sysinfo tdmr_sysinfo;
 	int ret;
 
 	/*
@@ -255,10 +322,13 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_put_tdxmem;
 
+	ret = get_tdx_tdmr_sysinfo(&tdmr_sysinfo);
+	if (ret)
+		goto out_free_tdxmem;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Get TDX module "TD Memory Region" (TDMR) global metadata.
 	 *  - Construct a list of TDMRs to cover all TDX-usable memory
 	 *    regions.
 	 *  - Configure the TDMRs and the global KeyID to the TDX module.
@@ -268,6 +338,9 @@ static int init_tdx_module(void)
 	 *  Return error before all steps are done.
 	 */
 	ret = -EINVAL;
+out_free_tdxmem:
+	if (ret)
+		free_tdx_memlist(&tdx_memlist);
 out_put_tdxmem:
 	/*
 	 * @tdx_memlist is written here and read at memory hotplug time.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index c11e0a7ca664..29cdf5ea5544 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -2,6 +2,8 @@
 #ifndef _X86_VIRT_TDX_H
 #define _X86_VIRT_TDX_H
 
+#include <linux/bits.h>
+
 /*
  * This file contains both macros and data structures defined by the TDX
  * architecture and Linux defined software data structures and functions.
@@ -13,8 +15,38 @@
  * TDX module SEAMCALL leaf functions
  */
 #define TDH_SYS_INIT		33
+#define TDH_SYS_RD		34
 #define TDH_SYS_LP_INIT		35
 
+/*
+ * Global scope metadata field ID.
+ *
+ * See Table "Global Scope Metadata", TDX module 1.5 ABI spec.
+ */
+#define MD_FIELD_ID_MAX_TDMRS			0x9100000100000008ULL
+#define MD_FIELD_ID_MAX_RESERVED_PER_TDMR	0x9100000100000009ULL
+#define MD_FIELD_ID_PAMT_4K_ENTRY_SIZE		0x9100000100000010ULL
+#define MD_FIELD_ID_PAMT_2M_ENTRY_SIZE		0x9100000100000011ULL
+#define MD_FIELD_ID_PAMT_1G_ENTRY_SIZE		0x9100000100000012ULL
+
+/*
+ * Sub-field definition of metadata field ID.
+ *
+ * See Table "MD_FIELD_ID (Metadata Field Identifier / Sequence Header)
+ * Definition", TDX module 1.5 ABI spec.
+ *
+ *  - Bit 33:32: ELEMENT_SIZE_CODE -- size of a single element of metadata
+ *
+ *	0: 8 bits
+ *	1: 16 bits
+ *	2: 32 bits
+ *	3: 64 bits
+ */
+#define MD_FIELD_ID_ELE_SIZE_CODE(_field_id)	\
+		(((_field_id) & GENMASK_ULL(33, 32)) >> 32)
+
+#define MD_FIELD_ID_ELE_SIZE_16BIT	1
+
 /*
  * Do not put any hardware-defined TDX structure representations below
  * this comment!
@@ -33,4 +65,11 @@ struct tdx_memblock {
 	unsigned long end_pfn;
 };
 
+/* "TDMR info" part of "Global Scope Metadata" for constructing TDMRs */
+struct tdx_tdmr_sysinfo {
+	u16 max_tdmrs;
+	u16 max_reserved_per_tdmr;
+	u16 pamt_entry_size[TDX_PS_NR];
+};
+
 #endif
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 10/23] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (8 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 11/23] x86/virt/tdx: Fill out " Kai Huang
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

After the kernel selects all TDX-usable memory regions, the kernel needs
to pass those regions to the TDX module via data structure "TD Memory
Region" (TDMR).

Add a placeholder to construct a list of TDMRs (in multiple steps) to
cover all TDX-usable memory regions.

=== Long Version ===

TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums.  Not all memory
satisfies these requirements.

As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR).  During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees.  The list of these
ranges is available to the kernel by querying the TDX module.

The TDX architecture needs additional metadata to record things like
which TD guest "owns" a given page of memory.  This metadata essentially
serves as the 'struct page' for the TDX module.  The space for this
metadata is not reserved by the hardware up front and must be allocated
by the kernel and given to the TDX module.

Since this metadata consumes space, the VMM can choose whether or not to
allocate it for a given area of convertible memory.  If it chooses not
to, the memory cannot receive TDX protections and cannot be used by TDX
guests as private memory.

For every memory region that the VMM wants to use as TDX memory, it sets
up a "TD Memory Region" (TDMR).  Each TDMR represents a physically
contiguous convertible range and must also have its own physically
contiguous metadata table, referred to as a Physical Address Metadata
Table (PAMT), to track status for each page in the TDMR range.

Unlike a CMR, each TDMR requires 1G granularity and alignment.  To
support physical RAM areas that don't meet those strict requirements,
each TDMR permits a number of internal "reserved areas" which can be
placed over memory holes.  If PAMT metadata is placed within a TDMR it
must be covered by one of these reserved areas.

Let's summarize the concepts:

 CMR - Firmware-enumerated physical ranges that support TDX.  CMRs are
       4K aligned.
TDMR - Physical address range which is chosen by the kernel to support
       TDX.  1G granularity and alignment required.  Each TDMR has
       reserved areas where TDX memory holes and overlapping PAMTs can
       be represented.
PAMT - Physically contiguous TDX metadata.  One table for each page size
       per TDMR.  Roughly 1/256th of TDMR in size.  256G TDMR = ~1G
       PAMT.
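
To make the "1/256th" rule of thumb above concrete, here is a
back-of-the-envelope sketch (assuming a 16-byte PAMT entry per 4K page;
the real entry size is a module metadata field, not a fixed constant):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long tdmr_sz = 256ULL << 30;  /* 256G TDMR  */
        unsigned long long entries = tdmr_sz >> 12; /* # 4K pages */
        unsigned long long pamt_sz = entries * 16;  /* PAMT bytes */

        /* 16/4096 == 1/256th of the TDMR: prints 1073741824 (1G). */
        printf("%llu\n", pamt_sz);
        return 0;
    }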

As one step of initializing the TDX module, the kernel configures
TDX-usable memory regions by passing a list of TDMRs to the TDX module.

Constructing the list of TDMRs consists of the following steps:

1) Fill out TDMRs to cover all memory regions that the TDX module will
   use for TD memory.
2) Allocate and set up PAMT for each TDMR.
3) Designate reserved areas for each TDMR.

Add a placeholder to construct TDMRs to do the above steps.  To keep
things simple, just allocate enough space to hold the maximum number of
TDMRs up front.  Always free the buffer of TDMRs since they are only
used during module initialization.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - Rebase due to the new TDH.SYS.RD patch (minor)
  - 'struct tdsysinfo_struct' -> 'struct tdx_tdmr_sysinfo'.

v13 -> v14:
 - No change.

v12 -> v13:
 - No change.

v11 -> v12:
 - Added tags from Dave/Kirill.

v10 -> v11:
 - Changed to keep TDMRs after module initialization to deal with TDX
   erratum in future patches. 

v9 -> v10:
 - Changed the TDMR list from static variable back to local variable as
   now TDX module isn't disabled when tdx_cpu_enable() fails.

v8 -> v9:
 - Changes around 'struct tdmr_info_list' (Dave):
   - Moved the declaration from tdx.c to tdx.h.
   - Renamed 'first_tdmr' to 'tdmrs'.
   - 'nr_tdmrs' -> 'nr_consumed_tdmrs'.
   - Changed 'tdmrs' to 'void *'.
   - Improved comments for all structure members.
 - Added a missing empty line in alloc_tdmr_list() (Dave).

v7 -> v8:
 - Improved changelog to tell this is one step of "TODO list" in
   init_tdx_module().
 - Other changelog improvement suggested by Dave (with "Create TDMRs" to
   "Fill out TDMRs" to align with the code).
 - Added a "TODO list" comment to lay out the steps to construct TDMRs,
   following the same idea of "TODO list" in tdx_module_init().
 - Introduced 'struct tdmr_info_list' (Dave)
 - Further added additional members (tdmr_sz/max_tdmrs/nr_tdmrs) to
   simplify getting TDMR by given index, and reduce passing arguments
   around functions.
 - Added alloc_tdmr_list()/free_tdmr_list() accordingly, which internally
   uses tdmr_size_single() (Dave).
 - tdmr_num -> nr_tdmrs (Dave).

v6 -> v7:
 - Improved commit message to explain 'int' overflow cannot happen
   in cal_tdmr_size() and alloc_tdmr_array(). -- Andy/Dave.

  ...


---
 arch/x86/virt/vmx/tdx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h | 33 +++++++++++++
 2 files changed, 125 insertions(+), 2 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index d24027993983..99f3b3958681 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -22,6 +22,7 @@
 #include <linux/minmax.h>
 #include <linux/sizes.h>
 #include <linux/pfn.h>
+#include <linux/align.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/tdx.h>
@@ -301,9 +302,84 @@ static int get_tdx_tdmr_sysinfo(struct tdx_tdmr_sysinfo *tdmr_sysinfo)
 			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
 }
 
+/* Calculate the actual TDMR size */
+static int tdmr_size_single(u16 max_reserved_per_tdmr)
+{
+	int tdmr_sz;
+
+	/*
+	 * The actual size of TDMR depends on the maximum
+	 * number of reserved areas.
+	 */
+	tdmr_sz = sizeof(struct tdmr_info);
+	tdmr_sz += sizeof(struct tdmr_reserved_area) * max_reserved_per_tdmr;
+
+	return ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT);
+}
+
+static int alloc_tdmr_list(struct tdmr_info_list *tdmr_list,
+			   struct tdx_tdmr_sysinfo *tdmr_sysinfo)
+{
+	size_t tdmr_sz, tdmr_array_sz;
+	void *tdmr_array;
+
+	tdmr_sz = tdmr_size_single(tdmr_sysinfo->max_reserved_per_tdmr);
+	tdmr_array_sz = tdmr_sz * tdmr_sysinfo->max_tdmrs;
+
+	/*
+	 * To keep things simple, allocate all TDMRs together.
+	 * The buffer needs to be physically contiguous to make
+	 * sure each TDMR is physically contiguous.
+	 */
+	tdmr_array = alloc_pages_exact(tdmr_array_sz,
+			GFP_KERNEL | __GFP_ZERO);
+	if (!tdmr_array)
+		return -ENOMEM;
+
+	tdmr_list->tdmrs = tdmr_array;
+
+	/*
+	 * Keep the size of one TDMR to help find the target TDMR
+	 * at a given index in the TDMR list.
+	 */
+	tdmr_list->tdmr_sz = tdmr_sz;
+	tdmr_list->max_tdmrs = tdmr_sysinfo->max_tdmrs;
+	tdmr_list->nr_consumed_tdmrs = 0;
+
+	return 0;
+}
+
+static void free_tdmr_list(struct tdmr_info_list *tdmr_list)
+{
+	free_pages_exact(tdmr_list->tdmrs,
+			tdmr_list->max_tdmrs * tdmr_list->tdmr_sz);
+}
+
+/*
+ * Construct a list of TDMRs on the preallocated space in @tdmr_list
+ * to cover all TDX memory regions in @tmb_list based on the TDX module
+ * TDMR global information in @tdmr_sysinfo.
+ */
+static int construct_tdmrs(struct list_head *tmb_list,
+			   struct tdmr_info_list *tdmr_list,
+			   struct tdx_tdmr_sysinfo *tdmr_sysinfo)
+{
+	/*
+	 * TODO:
+	 *
+	 *  - Fill out TDMRs to cover all TDX memory regions.
+	 *  - Allocate and set up PAMTs for each TDMR.
+	 *  - Designate reserved areas for each TDMR.
+	 *
+	 * Return -EINVAL until constructing TDMRs is done
+	 */
+	return -EINVAL;
+}
+
 static int init_tdx_module(void)
 {
 	struct tdx_tdmr_sysinfo tdmr_sysinfo;
+	struct tdmr_info_list tdmr_list;
 	int ret;
 
 	/*
@@ -326,11 +402,19 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_free_tdxmem;
 
+	/* Allocate enough space for constructing TDMRs */
+	ret = alloc_tdmr_list(&tdmr_list, &tdmr_sysinfo);
+	if (ret)
+		goto out_free_tdxmem;
+
+	/* Cover all TDX-usable memory regions in TDMRs */
+	ret = construct_tdmrs(&tdx_memlist, &tdmr_list, &tdmr_sysinfo);
+	if (ret)
+		goto out_free_tdmrs;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Construct a list of TDMRs to cover all TDX-usable memory
-	 *    regions.
 	 *  - Configure the TDMRs and the global KeyID to the TDX module.
 	 *  - Configure the global KeyID on all packages.
 	 *  - Initialize all TDMRs.
@@ -338,6 +422,12 @@ static int init_tdx_module(void)
 	 *  Return error before all steps are done.
 	 */
 	ret = -EINVAL;
+out_free_tdmrs:
+	/*
+	 * Always free the buffer of TDMRs as they are only used during
+	 * module initialization.
+	 */
+	free_tdmr_list(&tdmr_list);
 out_free_tdxmem:
 	if (ret)
 		free_tdx_memlist(&tdx_memlist);
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 29cdf5ea5544..9b6b5d70804f 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -47,6 +47,30 @@
 
 #define MD_FIELD_ID_ELE_SIZE_16BIT	1
 
+struct tdmr_reserved_area {
+	u64 offset;
+	u64 size;
+} __packed;
+
+#define TDMR_INFO_ALIGNMENT	512
+
+struct tdmr_info {
+	u64 base;
+	u64 size;
+	u64 pamt_1g_base;
+	u64 pamt_1g_size;
+	u64 pamt_2m_base;
+	u64 pamt_2m_size;
+	u64 pamt_4k_base;
+	u64 pamt_4k_size;
+	/*
+	 * The actual number of reserved areas depends on the value of
+	 * field MD_FIELD_ID_MAX_RESERVED_PER_TDMR in the TDX module
+	 * global metadata.
+	 */
+	DECLARE_FLEX_ARRAY(struct tdmr_reserved_area, reserved_areas);
+} __packed __aligned(TDMR_INFO_ALIGNMENT);
+
 /*
  * Do not put any hardware-defined TDX structure representations below
  * this comment!
@@ -72,4 +96,13 @@ struct tdx_tdmr_sysinfo {
 	u16 pamt_entry_size[TDX_PS_NR];
 };
 
+struct tdmr_info_list {
+	void *tdmrs;	/* Flexible array to hold 'tdmr_info's */
+	int nr_consumed_tdmrs;	/* How many 'tdmr_info's are in use */
+
+	/* Metadata for finding target 'tdmr_info' and freeing @tdmrs */
+	int tdmr_sz;	/* Size of one 'tdmr_info' */
+	int max_tdmrs;	/* How many 'tdmr_info's are allocated */
+};
+
 #endif
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 11/23] x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (9 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 10/23] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 12/23] x86/virt/tdx: Allocate and set up PAMTs for TDMRs Kai Huang
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Start to carry out the multiple steps needed to construct a list of "TD
Memory Regions" (TDMRs) to cover all TDX-usable memory regions.

The kernel configures TDX-usable memory regions by passing a list of
TDMRs to the TDX module.  Each TDMR contains the base/size of a memory
region, the base/size of the associated Physical Address Metadata Table
(PAMT) and a list of reserved areas in the region.

Do the first step to fill out a number of TDMRs to cover all TDX memory
regions.  To keep it simple, always try to use one TDMR for each memory
region.  As the first step only set up the base/size for each TDMR.

Each TDMR must be 1G aligned and its size must be a multiple of 1G.
This implies that one TDMR could cover multiple memory regions.  If a
memory region spans a 1GB boundary and the former part is already
covered by the previous TDMR, just use a new TDMR for the remaining
part.

TDX only supports a limited number of TDMRs.  Disable TDX if all TDMRs
are consumed but there are more memory regions to cover.

There are fancier things that could be done like trying to merge
adjacent TDMRs.  This would allow more pathological memory layouts to be
supported.  But, current systems are not even close to exhausting the
existing TDMR resources in practice.  For now, keep it simple.
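
For illustration, a tiny standalone sketch of the alignment rules with
hypothetical addresses (it mirrors the TDMR_ALIGN_DOWN()/TDMR_ALIGN_UP()
macros added below):

    #include <stdio.h>

    #define SZ_1G		(1ULL << 30)
    #define TDMR_ALIGN_DOWN(x)	((x) & ~(SZ_1G - 1))
    #define TDMR_ALIGN_UP(x)	(((x) + SZ_1G - 1) & ~(SZ_1G - 1))

    int main(void)
    {
        /* Hypothetical memory region [1G + 2M, 3G + 16M). */
        unsigned long long start = SZ_1G + (2ULL << 20);
        unsigned long long end   = 3 * SZ_1G + (16ULL << 20);

        /* The covering TDMR becomes [1G, 4G): 1G-aligned base/size. */
        printf("[0x%llx, 0x%llx)\n", TDMR_ALIGN_DOWN(start),
                TDMR_ALIGN_UP(end));
        return 0;
    }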

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - No change

v13 -> v14: 
 - No change

v12 -> v13:
 - Added Yuan's tag.

v11 -> v12:
 - Improved comments around looping over TDX memblock to create TDMRs.
   (Dave).
 - Added code to pr_warn() when consumed TDMRs reaching maximum TDMRs
   (Dave).
 - BIT_ULL(30) -> SZ_1G (Kirill)
 - Removed unused TDMR_PFN_ALIGNMENT (Sathy)
 - Added tags from Kirill/Sathy

v10 -> v11:
 - No update

v9 -> v10:
 - No change.

v8 -> v9:

 - Added the last paragraph in the changelog (Dave).
 - Removed unnecessary type cast in tdmr_entry() (Dave).


---
 arch/x86/virt/vmx/tdx/tdx.c | 103 +++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |   3 ++
 2 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 99f3b3958681..569822da8685 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -355,6 +355,102 @@ static void free_tdmr_list(struct tdmr_info_list *tdmr_list)
 			tdmr_list->max_tdmrs * tdmr_list->tdmr_sz);
 }
 
+/* Get the TDMR from the list at the given index. */
+static struct tdmr_info *tdmr_entry(struct tdmr_info_list *tdmr_list,
+				    int idx)
+{
+	int tdmr_info_offset = tdmr_list->tdmr_sz * idx;
+
+	return (void *)tdmr_list->tdmrs + tdmr_info_offset;
+}
+
+#define TDMR_ALIGNMENT		SZ_1G
+#define TDMR_ALIGN_DOWN(_addr)	ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr)	ALIGN((_addr), TDMR_ALIGNMENT)
+
+static inline u64 tdmr_end(struct tdmr_info *tdmr)
+{
+	return tdmr->base + tdmr->size;
+}
+
+/*
+ * Take the memory referenced in @tmb_list and populate the
+ * preallocated @tdmr_list, following all the special alignment
+ * and size rules for TDMR.
+ */
+static int fill_out_tdmrs(struct list_head *tmb_list,
+			  struct tdmr_info_list *tdmr_list)
+{
+	struct tdx_memblock *tmb;
+	int tdmr_idx = 0;
+
+	/*
+	 * Loop over TDX memory regions and fill out TDMRs to cover them.
+	 * To keep it simple, always try to use one TDMR to cover one
+	 * memory region.
+	 *
+	 * In practice TDX supports at least 64 TDMRs.  A 2-socket system
+	 * typically consumes fewer than 10 of those.  This code is
+	 * dumb and simple and may use more TDMRs than is strictly
+	 * required.
+	 */
+	list_for_each_entry(tmb, tmb_list, list) {
+		struct tdmr_info *tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+		u64 start, end;
+
+		start = TDMR_ALIGN_DOWN(PFN_PHYS(tmb->start_pfn));
+		end   = TDMR_ALIGN_UP(PFN_PHYS(tmb->end_pfn));
+
+		/*
+		 * A valid size indicates the current TDMR has already
+		 * been filled out to cover the previous memory region(s).
+		 */
+		if (tdmr->size) {
+			/*
+			 * Loop to the next if the current memory region
+			 * has already been fully covered.
+			 */
+			if (end <= tdmr_end(tdmr))
+				continue;
+
+			/* Otherwise, skip the already covered part. */
+			if (start < tdmr_end(tdmr))
+				start = tdmr_end(tdmr);
+
+			/*
+			 * Create a new TDMR to cover the current memory
+			 * region, or the remaining part of it.
+			 */
+			tdmr_idx++;
+			if (tdmr_idx >= tdmr_list->max_tdmrs) {
+				pr_warn("initialization failed: TDMRs exhausted.\n");
+				return -ENOSPC;
+			}
+
+			tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+		}
+
+		tdmr->base = start;
+		tdmr->size = end - start;
+	}
+
+	/* @tdmr_idx is always the index of the last valid TDMR. */
+	tdmr_list->nr_consumed_tdmrs = tdmr_idx + 1;
+
+	/*
+	 * Warn early that kernel is about to run out of TDMRs.
+	 *
+	 * This is an indication that TDMR allocation has to be
+	 * reworked to be smarter to not run into an issue.
+	 */
+	if (tdmr_list->max_tdmrs - tdmr_list->nr_consumed_tdmrs < TDMR_NR_WARN)
+		pr_warn("consumed TDMRs reaching limit: %d used out of %d\n",
+				tdmr_list->nr_consumed_tdmrs,
+				tdmr_list->max_tdmrs);
+
+	return 0;
+}
+
 /*
  * Construct a list of TDMRs on the preallocated space in @tdmr_list
  * to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -364,10 +460,15 @@ static int construct_tdmrs(struct list_head *tmb_list,
 			   struct tdmr_info_list *tdmr_list,
 			   struct tdx_tdmr_sysinfo *tdmr_sysinfo)
 {
+	int ret;
+
+	ret = fill_out_tdmrs(tmb_list, tdmr_list);
+	if (ret)
+		return ret;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Fill out TDMRs to cover all TDX memory regions.
 	 *  - Allocate and set up PAMTs for each TDMR.
 	 *  - Designate reserved areas for each TDMR.
 	 *
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 9b6b5d70804f..f18ce1b88b0a 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -96,6 +96,9 @@ struct tdx_tdmr_sysinfo {
 	u16 pamt_entry_size[TDX_PS_NR];
 };
 
+/* Warn if kernel has less than TDMR_NR_WARN TDMRs after allocation */
+#define TDMR_NR_WARN 4
+
 struct tdmr_info_list {
 	void *tdmrs;	/* Flexible array to hold 'tdmr_info's */
 	int nr_consumed_tdmrs;	/* How many 'tdmr_info's are in use */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 12/23] x86/virt/tdx: Allocate and set up PAMTs for TDMRs
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (10 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 11/23] x86/virt/tdx: Fill out " Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 13/23] x86/virt/tdx: Designate reserved areas for all TDMRs Kai Huang
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The TDX module uses additional metadata to record things like which
guest "owns" a given page of memory.  This metadata, referred as
Physical Address Metadata Table (PAMT), essentially serves as the
'struct page' for the TDX module.  PAMTs are not reserved by hardware
up front.  They must be allocated by the kernel and then given to the
TDX module during module initialization.

TDX supports 3 page sizes: 4K, 2M, and 1G.  Each "TD Memory Region"
(TDMR) has 3 PAMTs to track the 3 supported page sizes.  Each PAMT must
be a physically contiguous area from a Convertible Memory Region (CMR).
However, the PAMTs which track pages in one TDMR do not need to reside
within that TDMR but can be anywhere in CMRs.  If one PAMT overlaps with
any TDMR, the overlapping part must be reported as a reserved area in
that particular TDMR.

Use alloc_contig_pages() since PAMT must be a physically contiguous area
and it may be potentially large (~1/256th of the size of the given TDMR).
The downside is alloc_contig_pages() may fail at runtime.  One (bad)
mitigation is to launch a TDX guest early during system boot to get
those PAMTs allocated early, but the only real fix is to add a boot
option to allocate or reserve PAMTs during kernel boot.

It is imperfect but will be improved later.

TDX only supports a limited number of reserved areas per TDMR to cover
both PAMTs and memory holes within the given TDMR.  If many PAMTs are
allocated within a single TDMR, the reserved areas may not be sufficient
to cover all of them.

Adopt the following policies when allocating PAMTs for a given TDMR:

  - Allocate three PAMTs of the TDMR in one contiguous chunk to minimize
    the total number of reserved areas consumed for PAMTs.
  - Try to first allocate PAMT from the local node of the TDMR for better
    NUMA locality.

Also dump out how many pages are allocated for PAMTs when the TDX module
is initialized successfully.  This helps answer the eternal "where did
all my memory go?" questions.
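
As a rough sketch of the per-page-size breakdown (again assuming a
hypothetical 16-byte PAMT entry for every page size; the real sizes
come from the 'struct tdx_tdmr_sysinfo' metadata):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long tdmr_sz = 256ULL << 30;         /* 256G TDMR */
        unsigned long long pamt_4k = (tdmr_sz >> 12) * 16; /* 1G        */
        unsigned long long pamt_2m = (tdmr_sz >> 21) * 16; /* 2M        */
        unsigned long long pamt_1g = (tdmr_sz >> 30) * 16; /* 4K        */

        /* One contiguous chunk would hold all three PAMTs. */
        printf("%llu %llu %llu\n", pamt_4k, pamt_2m, pamt_1g);
        return 0;
    }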

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - Rebase due to the new TDH.SYS.RD patch (minor)
  - 'struct tdsysinfo_struct' -> 'struct tdx_tdmr_sysinfo'.
  - support each page size to have its own PAMT entry size.

v13 -> v14:
 - No change

v12 -> v13:
 - Added Kirill and Yuan's tag.
 - Removed unintended space. (Yuan)

v11 -> v12:
 - Moved TDX_PS_NUM from tdx.c to <asm/tdx.h> (Kirill)
 - "<= TDX_PS_1G" -> "< TDX_PS_NUM" (Kirill)
 - Changed tdmr_get_pamt() to return base and size instead of base_pfn
   and npages and related code directly (Dave).
 - Simplified PAMT kb counting. (Dave)
 - tdmrs_count_pamt_pages() -> tdmr_count_pamt_kb() (Kirill/Dave)

v10 -> v11:
 - No update

v9 -> v10:
 - Removed code change in disable_tdx_module() as it doesn't exist
   anymore.

v8 -> v9:
 - Added TDX_PS_NR macro instead of open-coding (Dave).
 - Better alignment of 'pamt_entry_size' in tdmr_set_up_pamt() (Dave).
 - Changed to print out PAMTs in "KBs" instead of "pages" (Dave).
 - Added Dave's Reviewed-by.

v7 -> v8: (Dave)
 - Changelog:
  - Added a sentence to state PAMT allocation will be improved.
  - Others suggested by Dave.
 - Moved 'nid' of 'struct tdx_memblock' to this patch.
 - Improved comments around tdmr_get_nid().
 - WARN_ON_ONCE() -> pr_warn() in tdmr_get_nid().
 - Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
 - Changes due to using macros instead of 'enum' for TDX supported page
   sizes.

v5 -> v6:
 - Rebase due to using 'tdx_memblock' instead of memblock.
 - 'int pamt_entry_nr' -> 'unsigned long nr_pamt_entries' (Dave/Sagis).
 - Improved comment around tdmr_get_nid() (Dave).
 - Improved comment in tdmr_set_up_pamt() around breaking the PAMT
   into PAMTs for 4K/2M/1G (Dave).
 - tdmrs_get_pamt_pages() -> tdmrs_count_pamt_pages() (Dave).   


---
 arch/x86/Kconfig            |   1 +
 arch/x86/virt/vmx/tdx/tdx.c | 215 +++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |   1 +
 3 files changed, 212 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2c69ef844805..e255d8ae5e77 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1972,6 +1972,7 @@ config INTEL_TDX_HOST
 	depends on KVM_INTEL
 	depends on X86_X2APIC
 	select ARCH_KEEP_MEMBLOCK
+	depends on CONTIG_ALLOC
 	help
 	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
 	  host and certain physical attacks.  This option enables necessary TDX
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 569822da8685..0f3149f23544 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -169,7 +169,7 @@ EXPORT_SYMBOL_GPL(tdx_cpu_enable);
  * overlap.
  */
 static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
-			    unsigned long end_pfn)
+			    unsigned long end_pfn, int nid)
 {
 	struct tdx_memblock *tmb;
 
@@ -180,6 +180,7 @@ static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
 	INIT_LIST_HEAD(&tmb->list);
 	tmb->start_pfn = start_pfn;
 	tmb->end_pfn = end_pfn;
+	tmb->nid = nid;
 
 	/* @tmb_list is protected by mem_hotplug_lock */
 	list_add_tail(&tmb->list, tmb_list);
@@ -207,9 +208,9 @@ static void free_tdx_memlist(struct list_head *tmb_list)
 static int build_tdx_memlist(struct list_head *tmb_list)
 {
 	unsigned long start_pfn, end_pfn;
-	int i, ret;
+	int i, nid, ret;
 
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
 		/*
 		 * The first 1MB is not reported as TDX convertible memory.
 		 * Although the first 1MB is always reserved and won't end up
@@ -225,7 +226,7 @@ static int build_tdx_memlist(struct list_head *tmb_list)
 		 * memblock has already guaranteed they are in address
 		 * ascending order and don't overlap.
 		 */
-		ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
+		ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn, nid);
 		if (ret)
 			goto err;
 	}
@@ -451,6 +452,202 @@ static int fill_out_tdmrs(struct list_head *tmb_list,
 	return 0;
 }
 
+/*
+ * Calculate PAMT size given a TDMR and a page size.  The returned
+ * PAMT size is always aligned up to a 4K page boundary.
+ */
+static unsigned long tdmr_get_pamt_sz(struct tdmr_info *tdmr, int pgsz,
+				      u16 pamt_entry_size)
+{
+	unsigned long pamt_sz, nr_pamt_entries;
+
+	switch (pgsz) {
+	case TDX_PS_4K:
+		nr_pamt_entries = tdmr->size >> PAGE_SHIFT;
+		break;
+	case TDX_PS_2M:
+		nr_pamt_entries = tdmr->size >> PMD_SHIFT;
+		break;
+	case TDX_PS_1G:
+		nr_pamt_entries = tdmr->size >> PUD_SHIFT;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+
+	pamt_sz = nr_pamt_entries * pamt_entry_size;
+	/* TDX requires the PAMT size to be 4K aligned */
+	pamt_sz = ALIGN(pamt_sz, PAGE_SIZE);
+
+	return pamt_sz;
+}
+
+/*
+ * Locate a NUMA node which should hold the allocation of the @tdmr
+ * PAMT.  This node will have some memory covered by the TDMR.  The
+ * relative amount of memory covered is not considered.
+ */
+static int tdmr_get_nid(struct tdmr_info *tdmr, struct list_head *tmb_list)
+{
+	struct tdx_memblock *tmb;
+
+	/*
+	 * A TDMR must cover at least part of one TMB.  That TMB will end
+	 * after the TDMR begins.  But, that TMB may have started before
+	 * the TDMR.  Find the next 'tmb' that _ends_ after this TDMR
+	 * begins.  Ignore 'tmb' start addresses.  They are irrelevant.
+	 */
+	list_for_each_entry(tmb, tmb_list, list) {
+		if (tmb->end_pfn > PHYS_PFN(tdmr->base))
+			return tmb->nid;
+	}
+
+	/*
+	 * Fall back to allocating the TDMR's metadata from node 0 when
+	 * no TDX memory block can be found.  This should never happen
+	 * since TDMRs originate from TDX memory blocks.
+	 */
+	pr_warn("TDMR [0x%llx, 0x%llx): unable to find local NUMA node for PAMT allocation, fallback to use node 0.\n",
+			tdmr->base, tdmr_end(tdmr));
+	return 0;
+}
+
+/*
+ * Allocate PAMTs from the local NUMA node of some memory in @tmb_list
+ * within @tdmr, and set up PAMTs for @tdmr.
+ */
+static int tdmr_set_up_pamt(struct tdmr_info *tdmr,
+			    struct list_head *tmb_list,
+			    u16 pamt_entry_size[])
+{
+	unsigned long pamt_base[TDX_PS_NR];
+	unsigned long pamt_size[TDX_PS_NR];
+	unsigned long tdmr_pamt_base;
+	unsigned long tdmr_pamt_size;
+	struct page *pamt;
+	int pgsz, nid;
+
+	nid = tdmr_get_nid(tdmr, tmb_list);
+
+	/*
+	 * Calculate the PAMT size for each TDX supported page size
+	 * and the total PAMT size.
+	 */
+	tdmr_pamt_size = 0;
+	for (pgsz = TDX_PS_4K; pgsz < TDX_PS_NR; pgsz++) {
+		pamt_size[pgsz] = tdmr_get_pamt_sz(tdmr, pgsz,
+					pamt_entry_size[pgsz]);
+		tdmr_pamt_size += pamt_size[pgsz];
+	}
+
+	/*
+	 * Allocate one chunk of physically contiguous memory for all
+	 * PAMTs.  This helps minimize the PAMT's use of reserved areas
+	 * in overlapped TDMRs.
+	 */
+	pamt = alloc_contig_pages(tdmr_pamt_size >> PAGE_SHIFT, GFP_KERNEL,
+			nid, &node_online_map);
+	if (!pamt)
+		return -ENOMEM;
+
+	/*
+	 * Break the contiguous allocation back up into the
+	 * individual PAMTs for each page size.
+	 */
+	tdmr_pamt_base = page_to_pfn(pamt) << PAGE_SHIFT;
+	for (pgsz = TDX_PS_4K; pgsz < TDX_PS_NR; pgsz++) {
+		pamt_base[pgsz] = tdmr_pamt_base;
+		tdmr_pamt_base += pamt_size[pgsz];
+	}
+
+	tdmr->pamt_4k_base = pamt_base[TDX_PS_4K];
+	tdmr->pamt_4k_size = pamt_size[TDX_PS_4K];
+	tdmr->pamt_2m_base = pamt_base[TDX_PS_2M];
+	tdmr->pamt_2m_size = pamt_size[TDX_PS_2M];
+	tdmr->pamt_1g_base = pamt_base[TDX_PS_1G];
+	tdmr->pamt_1g_size = pamt_size[TDX_PS_1G];
+
+	return 0;
+}
+
+static void tdmr_get_pamt(struct tdmr_info *tdmr, unsigned long *pamt_base,
+			  unsigned long *pamt_size)
+{
+	unsigned long pamt_bs, pamt_sz;
+
+	/*
+	 * The PAMT was allocated in one contiguous unit.  The 4K PAMT
+	 * should always point to the beginning of that allocation.
+	 */
+	pamt_bs = tdmr->pamt_4k_base;
+	pamt_sz = tdmr->pamt_4k_size + tdmr->pamt_2m_size + tdmr->pamt_1g_size;
+
+	WARN_ON_ONCE((pamt_bs & ~PAGE_MASK) || (pamt_sz & ~PAGE_MASK));
+
+	*pamt_base = pamt_bs;
+	*pamt_size = pamt_sz;
+}
+
+static void tdmr_free_pamt(struct tdmr_info *tdmr)
+{
+	unsigned long pamt_base, pamt_size;
+
+	tdmr_get_pamt(tdmr, &pamt_base, &pamt_size);
+
+	/* Do nothing if PAMT hasn't been allocated for this TDMR */
+	if (!pamt_size)
+		return;
+
+	if (WARN_ON_ONCE(!pamt_base))
+		return;
+
+	free_contig_range(pamt_base >> PAGE_SHIFT, pamt_size >> PAGE_SHIFT);
+}
+
+static void tdmrs_free_pamt_all(struct tdmr_info_list *tdmr_list)
+{
+	int i;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++)
+		tdmr_free_pamt(tdmr_entry(tdmr_list, i));
+}
+
+/* Allocate and set up PAMTs for all TDMRs */
+static int tdmrs_set_up_pamt_all(struct tdmr_info_list *tdmr_list,
+				 struct list_head *tmb_list,
+				 u16 pamt_entry_size[])
+{
+	int i, ret = 0;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		ret = tdmr_set_up_pamt(tdmr_entry(tdmr_list, i), tmb_list,
+				pamt_entry_size);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+err:
+	tdmrs_free_pamt_all(tdmr_list);
+	return ret;
+}
+
+static unsigned long tdmrs_count_pamt_kb(struct tdmr_info_list *tdmr_list)
+{
+	unsigned long pamt_size = 0;
+	int i;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		unsigned long base, size;
+
+		tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
+		pamt_size += size;
+	}
+
+	return pamt_size / 1024;
+}
+
 /*
  * Construct a list of TDMRs on the preallocated space in @tdmr_list
  * to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -466,10 +663,13 @@ static int construct_tdmrs(struct list_head *tmb_list,
 	if (ret)
 		return ret;
 
+	ret = tdmrs_set_up_pamt_all(tdmr_list, tmb_list,
+			tdmr_sysinfo->pamt_entry_size);
+	if (ret)
+		return ret;
 	/*
 	 * TODO:
 	 *
-	 *  - Allocate and set up PAMTs for each TDMR.
 	 *  - Designate reserved areas for each TDMR.
 	 *
 	 * Return -EINVAL until constructing TDMRs is done
@@ -523,6 +723,11 @@ static int init_tdx_module(void)
 	 *  Return error before all steps are done.
 	 */
 	ret = -EINVAL;
+	if (ret)
+		tdmrs_free_pamt_all(&tdmr_list);
+	else
+		pr_info("%lu KBs allocated for PAMT\n",
+				tdmrs_count_pamt_kb(&tdmr_list));
 out_free_tdmrs:
 	/*
 	 * Always free the buffer of TDMRs as they are only used during
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index f18ce1b88b0a..1b04efece9db 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -87,6 +87,7 @@ struct tdx_memblock {
 	struct list_head list;
 	unsigned long start_pfn;
 	unsigned long end_pfn;
+	int nid;
 };
 
 /* "TDMR info" part of "Global Scope Metadata" for constructing TDMRs */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 13/23] x86/virt/tdx: Designate reserved areas for all TDMRs
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (11 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 12/23] x86/virt/tdx: Allocate and set up PAMTs for TDMRs Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 14/23] x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID Kai Huang
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

As the last step of constructing TDMRs, populate reserved areas for all
TDMRs.  For each TDMR, put all memory holes within this TDMR into its
reserved areas.  For all PAMTs which overlap with this TDMR, put all the
overlapping parts into reserved areas too, as illustrated in the sketch
below.
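
A minimal sketch with hypothetical numbers (the layout mirrors
'struct tdmr_reserved_area' from tdx.h):

    #include <stdio.h>

    struct tdmr_reserved_area {
        unsigned long long offset;
        unsigned long long size;
    };

    int main(void)
    {
        /*
         * Hypothetical example: a TDMR covering [1G, 4G) with a
         * memory hole at [2G, 2G + 16M).  The hole becomes one
         * reserved area, expressed as an offset from the TDMR base.
         */
        unsigned long long tdmr_base  = 1ULL << 30;
        unsigned long long hole_start = 2ULL << 30;
        unsigned long long hole_size  = 16ULL << 20;
        struct tdmr_reserved_area r = {
            .offset = hole_start - tdmr_base,   /* 1G  */
            .size   = hole_size,                /* 16M */
        };

        printf("offset=0x%llx size=0x%llx\n", r.offset, r.size);
        return 0;
    }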

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
  - 'struct tdsysinfo_struct' -> 'struct tdx_tdmr_sysinfo'.

v13 -> v14:
 - No change

v12 -> v13:
 - Added Yuan's tag.

v11 -> v12:
 - Code change due to tdmr_get_pamt() change from returning pfn/npages to
   base/size
 - Added Kirill's tag

v10 -> v11:
 - No update

v9 -> v10:
 - No change.

v8 -> v9:
 - Added comment around 'tdmr_add_rsvd_area()' to point out it doesn't do
   optimization to save reserved areas. (Dave).

v7 -> v8: (Dave)
 - "set_up" -> "populate" in function name change (Dave).
 - Improved comment suggested by Dave.
 - Other changes due to 'struct tdmr_info_list'.


---
 arch/x86/virt/vmx/tdx/tdx.c | 217 ++++++++++++++++++++++++++++++++++--
 1 file changed, 209 insertions(+), 8 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 0f3149f23544..a3340a6e23c5 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -23,6 +23,7 @@
 #include <linux/sizes.h>
 #include <linux/pfn.h>
 #include <linux/align.h>
+#include <linux/sort.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/tdx.h>
@@ -648,6 +649,207 @@ static unsigned long tdmrs_count_pamt_kb(struct tdmr_info_list *tdmr_list)
 	return pamt_size / 1024;
 }
 
+static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, u64 addr,
+			      u64 size, u16 max_reserved_per_tdmr)
+{
+	struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
+	int idx = *p_idx;
+
+	/* Reserved area must be 4K aligned in offset and size */
+	if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
+		return -EINVAL;
+
+	if (idx >= max_reserved_per_tdmr) {
+		pr_warn("initialization failed: TDMR [0x%llx, 0x%llx): reserved areas exhausted.\n",
+				tdmr->base, tdmr_end(tdmr));
+		return -ENOSPC;
+	}
+
+	/*
+	 * Consume one reserved area per call.  Make no effort to
+	 * optimize, e.g., by merging contiguous reserved areas to
+	 * reduce the number consumed.
+	 */
+	rsvd_areas[idx].offset = addr - tdmr->base;
+	rsvd_areas[idx].size = size;
+
+	*p_idx = idx + 1;
+
+	return 0;
+}
+
+/*
+ * Go through @tmb_list to find holes between memory areas.  If any of
+ * those holes fall within @tdmr, set up a TDMR reserved area to cover
+ * the hole.
+ */
+static int tdmr_populate_rsvd_holes(struct list_head *tmb_list,
+				    struct tdmr_info *tdmr,
+				    int *rsvd_idx,
+				    u16 max_reserved_per_tdmr)
+{
+	struct tdx_memblock *tmb;
+	u64 prev_end;
+	int ret;
+
+	/*
+	 * Start looking for reserved blocks at the
+	 * beginning of the TDMR.
+	 */
+	prev_end = tdmr->base;
+	list_for_each_entry(tmb, tmb_list, list) {
+		u64 start, end;
+
+		start = PFN_PHYS(tmb->start_pfn);
+		end   = PFN_PHYS(tmb->end_pfn);
+
+		/* Break if this region is after the TDMR */
+		if (start >= tdmr_end(tdmr))
+			break;
+
+		/* Exclude regions before this TDMR */
+		if (end < tdmr->base)
+			continue;
+
+		/*
+		 * Skip over memory areas that
+		 * have already been dealt with.
+		 */
+		if (start <= prev_end) {
+			prev_end = end;
+			continue;
+		}
+
+		/* Add the hole before this region */
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+				start - prev_end,
+				max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+
+		prev_end = end;
+	}
+
+	/* Add the hole after the last region if it exists. */
+	if (prev_end < tdmr_end(tdmr)) {
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+				tdmr_end(tdmr) - prev_end,
+				max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Go through @tdmr_list to find all PAMTs.  If any of those PAMTs
+ * overlaps with @tdmr, set up a TDMR reserved area to cover the
+ * overlapping part.
+ */
+static int tdmr_populate_rsvd_pamts(struct tdmr_info_list *tdmr_list,
+				    struct tdmr_info *tdmr,
+				    int *rsvd_idx,
+				    u16 max_reserved_per_tdmr)
+{
+	int i, ret;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		struct tdmr_info *tmp = tdmr_entry(tdmr_list, i);
+		unsigned long pamt_base, pamt_size, pamt_end;
+
+		tdmr_get_pamt(tmp, &pamt_base, &pamt_size);
+		/* Each TDMR must already have PAMT allocated */
+		WARN_ON_ONCE(!pamt_size || !pamt_base);
+
+		pamt_end = pamt_base + pamt_size;
+		/* Skip PAMTs outside of the given TDMR */
+		if ((pamt_end <= tdmr->base) ||
+				(pamt_base >= tdmr_end(tdmr)))
+			continue;
+
+		/* Only mark the part within the TDMR as reserved */
+		if (pamt_base < tdmr->base)
+			pamt_base = tdmr->base;
+		if (pamt_end > tdmr_end(tdmr))
+			pamt_end = tdmr_end(tdmr);
+
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, pamt_base,
+				pamt_end - pamt_base,
+				max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/* Compare function called by sort() for TDMR reserved areas */
+static int rsvd_area_cmp_func(const void *a, const void *b)
+{
+	struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
+	struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
+
+	if (r1->offset + r1->size <= r2->offset)
+		return -1;
+	if (r1->offset >= r2->offset + r2->size)
+		return 1;
+
+	/* Reserved areas cannot overlap; the caller must guarantee that. */
+	WARN_ON_ONCE(1);
+	return -1;
+}
+
+/*
+ * Populate reserved areas for the given @tdmr, including memory holes
+ * (via @tmb_list) and PAMTs (via @tdmr_list).
+ */
+static int tdmr_populate_rsvd_areas(struct tdmr_info *tdmr,
+				    struct list_head *tmb_list,
+				    struct tdmr_info_list *tdmr_list,
+				    u16 max_reserved_per_tdmr)
+{
+	int ret, rsvd_idx = 0;
+
+	ret = tdmr_populate_rsvd_holes(tmb_list, tdmr, &rsvd_idx,
+			max_reserved_per_tdmr);
+	if (ret)
+		return ret;
+
+	ret = tdmr_populate_rsvd_pamts(tdmr_list, tdmr, &rsvd_idx,
+			max_reserved_per_tdmr);
+	if (ret)
+		return ret;
+
+	/* TDX requires reserved areas listed in address ascending order */
+	sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area),
+			rsvd_area_cmp_func, NULL);
+
+	return 0;
+}
+
+/*
+ * Populate reserved areas for all TDMRs in @tdmr_list, including memory
+ * holes (via @tmb_list) and PAMTs.
+ */
+static int tdmrs_populate_rsvd_areas_all(struct tdmr_info_list *tdmr_list,
+					 struct list_head *tmb_list,
+					 u16 max_reserved_per_tdmr)
+{
+	int i;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		int ret;
+
+		ret = tdmr_populate_rsvd_areas(tdmr_entry(tdmr_list, i),
+				tmb_list, tdmr_list, max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 /*
  * Construct a list of TDMRs on the preallocated space in @tdmr_list
  * to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -667,14 +869,13 @@ static int construct_tdmrs(struct list_head *tmb_list,
 			tdmr_sysinfo->pamt_entry_size);
 	if (ret)
 		return ret;
-	/*
-	 * TODO:
-	 *
-	 *  - Designate reserved areas for each TDMR.
-	 *
-	 * Return -EINVAL until constructing TDMRs is done
-	 */
-	return -EINVAL;
+
+	ret = tdmrs_populate_rsvd_areas_all(tdmr_list, tmb_list,
+			tdmr_sysinfo->max_reserved_per_tdmr);
+	if (ret)
+		tdmrs_free_pamt_all(tdmr_list);
+
+	return ret;
 }
 
 static int init_tdx_module(void)
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 14/23] x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (12 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 13/23] x86/virt/tdx: Designate reserved areas for all TDMRs Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 15/23] x86/virt/tdx: Configure global KeyID on all packages Kai Huang
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The TDX module uses a private KeyID as the "global KeyID" for mapping
things like the PAMT and other TDX metadata.  This KeyID has already
been reserved when detecting TDX during the kernel early boot.

After the list of "TD Memory Regions" (TDMRs) has been constructed to
cover all TDX-usable memory regions, the next step is to pass them to
the TDX module together with the global KeyID.
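
For illustration, the size rounding done for the TDMR physical-address
array in config_tdx_module() below can be sketched standalone (the
512-byte minimum is TDMR_INFO_PA_ARRAY_ALIGNMENT):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetically 10 consumed TDMRs: 10 * 8 = 80 bytes. */
        unsigned int sz = 10 * sizeof(unsigned long long);
        unsigned int p = 1;

        while (p < sz)          /* roundup_pow_of_two() */
            p <<= 1;
        if (p < 512)            /* array alignment minimum */
            p = 512;

        printf("%u\n", p);      /* prints 512 */
        return 0;
    }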

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - No change

v12 -> v13:
 - Added Yuan's tag.

v11 -> v12:
 - Added Kirill's tag

v10 -> v11:
 - No update

v9 -> v10:
 - Code change due to change static 'tdx_tdmr_list' to local 'tdmr_list'.

v8 -> v9:
 - Improved changelog to explain why initializing TDMRs can take a long
   time (Dave).
 - Improved comments around 'next-to-initialize' address (Dave).

v7 -> v8: (Dave)
 - Changelog:
   - explicitly call out this is the last step of TDX module initialization.
   - Trimed down changelog by removing SEAMCALL name and details.
 - Removed/trimmed down unnecessary comments.
 - Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
 - Removed need_resched() check. -- Andi.

---
 arch/x86/virt/vmx/tdx/tdx.c | 44 ++++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |  2 ++
 2 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index a3340a6e23c5..aba851e11c72 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -24,8 +24,10 @@
 #include <linux/pfn.h>
 #include <linux/align.h>
 #include <linux/sort.h>
+#include <linux/log2.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
+#include <asm/page.h>
 #include <asm/tdx.h>
 #include "tdx.h"
 
@@ -878,6 +880,41 @@ static int construct_tdmrs(struct list_head *tmb_list,
 	return ret;
 }
 
+static int config_tdx_module(struct tdmr_info_list *tdmr_list, u64 global_keyid)
+{
+	struct tdx_module_args args = {};
+	u64 *tdmr_pa_array;
+	size_t array_sz;
+	int i, ret;
+
+	/*
+	 * TDMRs are passed to the TDX module via an array of physical
+	 * addresses of each TDMR.  The array itself also has certain
+	 * alignment requirement.
+	 */
+	array_sz = tdmr_list->nr_consumed_tdmrs * sizeof(u64);
+	array_sz = roundup_pow_of_two(array_sz);
+	if (array_sz < TDMR_INFO_PA_ARRAY_ALIGNMENT)
+		array_sz = TDMR_INFO_PA_ARRAY_ALIGNMENT;
+
+	tdmr_pa_array = kzalloc(array_sz, GFP_KERNEL);
+	if (!tdmr_pa_array)
+		return -ENOMEM;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++)
+		tdmr_pa_array[i] = __pa(tdmr_entry(tdmr_list, i));
+
+	args.rcx = __pa(tdmr_pa_array);
+	args.rdx = tdmr_list->nr_consumed_tdmrs;
+	args.r8 = global_keyid;
+	ret = seamcall_prerr(TDH_SYS_CONFIG, &args);
+
+	/* Free the array as it is not required anymore. */
+	kfree(tdmr_pa_array);
+
+	return ret;
+}
+
 static int init_tdx_module(void)
 {
 	struct tdx_tdmr_sysinfo tdmr_sysinfo;
@@ -914,16 +951,21 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_free_tdmrs;
 
+	/* Pass the TDMRs and the global KeyID to the TDX module */
+	ret = config_tdx_module(&tdmr_list, tdx_global_keyid);
+	if (ret)
+		goto out_free_pamts;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Configure the TDMRs and the global KeyID to the TDX module.
 	 *  - Configure the global KeyID on all packages.
 	 *  - Initialize all TDMRs.
 	 *
 	 *  Return error before all steps are done.
 	 */
 	ret = -EINVAL;
+out_free_pamts:
 	if (ret)
 		tdmrs_free_pamt_all(&tdmr_list);
 	else
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 1b04efece9db..fa5bcf8b5a9c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -17,6 +17,7 @@
 #define TDH_SYS_INIT		33
 #define TDH_SYS_RD		34
 #define TDH_SYS_LP_INIT		35
+#define TDH_SYS_CONFIG		45
 
 /*
  * Global scope metadata field ID.
@@ -53,6 +54,7 @@ struct tdmr_reserved_area {
 } __packed;
 
 #define TDMR_INFO_ALIGNMENT	512
+#define TDMR_INFO_PA_ARRAY_ALIGNMENT	512
 
 struct tdmr_info {
 	u64 base;
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 15/23] x86/virt/tdx: Configure global KeyID on all packages
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (13 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 14/23] x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 16/23] x86/virt/tdx: Initialize all TDMRs Kai Huang
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

After the list of TDMRs and the global KeyID are configured to the TDX
module, the kernel needs to configure the key of the global KeyID on all
packages using TDH.SYS.KEY.CONFIG.

This SEAMCALL cannot run in parallel on different CPUs.  Loop over all
online CPUs and use smp_call_on_cpu() to call this SEAMCALL on the first
CPU of each package.

To keep things simple, this implementation takes no affirmative steps to
online CPUs to make sure there's at least one CPU for each package.  The
callers (e.g., KVM) can ensure success by making sure sufficient CPUs
are online for this to succeed.

Intel hardware doesn't guarantee cache coherency across different
KeyIDs.  The PAMTs are transitioning from being used by the kernel
mapping (KeyId 0) to the TDX module's "global KeyID" mapping.

This means that the kernel must flush any dirty KeyID-0 PAMT cachelines
before the TDX module uses the global KeyID to access the PAMTs.
Otherwise, if those dirty cachelines were written back, they would
corrupt the TDX module's metadata.  Aside: This corruption would be
detected by the memory integrity hardware on the next read of the memory
with the global KeyID.  The result would likely be fatal to the system
but would not impact TDX security.

Following the TDX module specification, flush the cache before
configuring the global KeyID on all packages.  Given that the PAMT size
can be large (~1/256th of system RAM), just use WBINVD on all CPUs to
flush.
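
For example, on a host with 1 TiB of system RAM the PAMTs occupy roughly
1 TiB / 256 = 4 GiB, which makes flushing them line by line (e.g., with
CLFLUSH) unattractive compared to a single WBINVD on each cpu.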

If TDH.SYS.KEY.CONFIG fails, the TDX module may already have used the
global KeyID to write the PAMTs.  Therefore, use WBINVD to flush cache
before returning the PAMTs back to the kernel.  Also convert all PAMTs
back to normal by using MOVDIR64B as suggested by the TDX module spec,
although on platforms without the "partial write machine check" erratum
it's OK to leave the PAMTs as-is.
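
Aside: MOVDIR64B performs a single 64-byte direct store, i.e., it always
overwrites a full cacheline at once, which makes it suitable for
re-initializing lines that were last written with a different KeyID.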

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - No change

v12 -> v13:
 - Added Yuan's tag.

v11 -> v12:
 - Added Kirill's tag
 - Improved changelog (Nikolay)

v10 -> v11:
 - Convert PAMTs back to normal when module initialization fails.
 - Fixed an error in changelog

v9 -> v10:
 - Changed to use 'smp_call_on_cpu()' directly to do key configuration.

v8 -> v9:
 - Improved changelog (Dave).
 - Improved comments to explain the function to configure global KeyID
   "takes no affirmative action to online any cpu". (Dave).
 - Improved other comments suggested by Dave.

v7 -> v8: (Dave)
 - Changelog changes:
  - Point out this is the step of "multi-steps" of init_tdx_module().
  - Removed MOVDIR64B part.
  - Other changes due to removing TDH.SYS.SHUTDOWN and TDH.SYS.LP.INIT.
 - Changed to loop over online cpus and use smp_call_function_single()
   directly as the patch to shut down TDX module has been removed.
 - Removed MOVDIR64B part in comment.


---
 arch/x86/virt/vmx/tdx/tdx.c | 130 +++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx/tdx.h |   1 +
 2 files changed, 129 insertions(+), 2 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index aba851e11c72..329d233c11da 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -28,6 +28,7 @@
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/page.h>
+#include <asm/special_insns.h>
 #include <asm/tdx.h>
 #include "tdx.h"
 
@@ -592,7 +593,8 @@ static void tdmr_get_pamt(struct tdmr_info *tdmr, unsigned long *pamt_base,
 	*pamt_size = pamt_sz;
 }
 
-static void tdmr_free_pamt(struct tdmr_info *tdmr)
+static void tdmr_do_pamt_func(struct tdmr_info *tdmr,
+		void (*pamt_func)(unsigned long base, unsigned long size))
 {
 	unsigned long pamt_base, pamt_size;
 
@@ -605,9 +607,19 @@ static void tdmr_free_pamt(struct tdmr_info *tdmr)
 	if (WARN_ON_ONCE(!pamt_base))
 		return;
 
+	(*pamt_func)(pamt_base, pamt_size);
+}
+
+static void free_pamt(unsigned long pamt_base, unsigned long pamt_size)
+{
 	free_contig_range(pamt_base >> PAGE_SHIFT, pamt_size >> PAGE_SHIFT);
 }
 
+static void tdmr_free_pamt(struct tdmr_info *tdmr)
+{
+	tdmr_do_pamt_func(tdmr, free_pamt);
+}
+
 static void tdmrs_free_pamt_all(struct tdmr_info_list *tdmr_list)
 {
 	int i;
@@ -636,6 +648,41 @@ static int tdmrs_set_up_pamt_all(struct tdmr_info_list *tdmr_list,
 	return ret;
 }
 
+/*
+ * Convert TDX private pages back to normal by using MOVDIR64B to
+ * clear these pages.  Note this function doesn't flush cache of
+ * these TDX private pages.  The caller should make sure of that.
+ */
+static void reset_tdx_pages(unsigned long base, unsigned long size)
+{
+	const void *zero_page = (const void *)page_address(ZERO_PAGE(0));
+	unsigned long phys, end;
+
+	end = base + size;
+	for (phys = base; phys < end; phys += 64)
+		movdir64b(__va(phys), zero_page);
+
+	/*
+	 * MOVDIR64B uses WC protocol.  Use memory barrier to
+	 * make sure any later user of these pages sees the
+	 * updated data.
+	 */
+	mb();
+}
+
+static void tdmr_reset_pamt(struct tdmr_info *tdmr)
+{
+	tdmr_do_pamt_func(tdmr, reset_tdx_pages);
+}
+
+static void tdmrs_reset_pamt_all(struct tdmr_info_list *tdmr_list)
+{
+	int i;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++)
+		tdmr_reset_pamt(tdmr_entry(tdmr_list, i));
+}
+
 static unsigned long tdmrs_count_pamt_kb(struct tdmr_info_list *tdmr_list)
 {
 	unsigned long pamt_size = 0;
@@ -915,6 +962,50 @@ static int config_tdx_module(struct tdmr_info_list *tdmr_list, u64 global_keyid)
 	return ret;
 }
 
+static int do_global_key_config(void *data)
+{
+	struct tdx_module_args args = {};
+
+	return seamcall_prerr(TDH_SYS_KEY_CONFIG, &args);
+}
+
+/*
+ * Attempt to configure the global KeyID on all physical packages.
+ *
+ * This requires running code on at least one CPU in each package.  If a
+ * package has no online CPUs, that code will not run and TDX module
+ * initialization (TDMR initialization) will fail.
+ *
+ * This code takes no affirmative steps to online CPUs.  Callers (e.g.,
+ * KVM) can ensure success by making sure sufficient CPUs are online
+ * before attempting this.
+ */
+static int config_global_keyid(void)
+{
+	cpumask_var_t packages;
+	int cpu, ret = -EINVAL;
+
+	if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
+		return -ENOMEM;
+
+	for_each_online_cpu(cpu) {
+		if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
+					packages))
+			continue;
+
+		/*
+		 * TDH.SYS.KEY.CONFIG cannot run concurrently on
+		 * different cpus, so just do it one by one.
+		 */
+		ret = smp_call_on_cpu(cpu, do_global_key_config, NULL, true);
+		if (ret)
+			break;
+	}
+
+	free_cpumask_var(packages);
+	return ret;
+}
+
 static int init_tdx_module(void)
 {
 	struct tdx_tdmr_sysinfo tdmr_sysinfo;
@@ -956,15 +1047,47 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_free_pamts;
 
+	/*
+	 * Hardware doesn't guarantee cache coherency across different
+	 * KeyIDs.  The kernel needs to flush PAMT's dirty cachelines
+	 * (associated with KeyID 0) before the TDX module can use the
+	 * global KeyID to access the PAMT.  Given PAMTs are potentially
+	 * large (~1/256th of system RAM), just use WBINVD on all cpus
+	 * to flush the cache.
+	 */
+	wbinvd_on_all_cpus();
+
+	/* Config the key of global KeyID on all packages */
+	ret = config_global_keyid();
+	if (ret)
+		goto out_reset_pamts;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Configure the global KeyID on all packages.
 	 *  - Initialize all TDMRs.
 	 *
 	 *  Return error before all steps are done.
 	 */
 	ret = -EINVAL;
+out_reset_pamts:
+	if (ret) {
+		/*
+		 * Part of PAMTs may already have been initialized by the
+		 * TDX module.  Flush cache before returning PAMTs back
+		 * to the kernel.
+		 */
+		wbinvd_on_all_cpus();
+		/*
+		 * According to the TDX hardware spec, if the platform
+		 * doesn't have the "partial write machine check"
+		 * erratum, any kernel read/write will never cause #MC
+		 * in kernel space, thus it's OK to not convert PAMTs
+		 * back to normal.  But do the conversion anyway here
+		 * as suggested by the TDX spec.
+		 */
+		tdmrs_reset_pamt_all(&tdmr_list);
+	}
 out_free_pamts:
 	if (ret)
 		tdmrs_free_pamt_all(&tdmr_list);
@@ -1013,6 +1136,9 @@ static int __tdx_enable(void)
  * lock to prevent any new cpu from becoming online; 2) done both VMXON
  * and tdx_cpu_enable() on all online cpus.
  *
+ * This function requires there's at least one online cpu for each CPU
+ * package to succeed.
+ *
  * This function can be called in parallel by multiple callers.
  *
  * Return 0 if TDX is enabled successfully, otherwise error.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index fa5bcf8b5a9c..dd35baf756b8 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -14,6 +14,7 @@
 /*
  * TDX module SEAMCALL leaf functions
  */
+#define TDH_SYS_KEY_CONFIG	31
 #define TDH_SYS_INIT		33
 #define TDH_SYS_RD		34
 #define TDH_SYS_LP_INIT		35
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 16/23] x86/virt/tdx: Initialize all TDMRs
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (14 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 15/23] x86/virt/tdx: Configure global KeyID on all packages Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory Kai Huang
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

After the global KeyID has been configured on all packages, initialize
all TDMRs to make the TDX-usable memory regions that were passed to the
TDX module actually usable.

This is the last step of initializing the TDX module.

Initializing TDMRs can be time consuming on large memory systems as it
involves initializing all metadata entries for all pages that can be
used by TDX guests.  Initializing different TDMRs can be parallelized.
For now, to keep it simple, just initialize all TDMRs one by one.  It can
be enhanced in the future.
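
To see the scale: with 4 KiB base pages, a host with 1 TiB of TDX-usable
memory has 1 TiB / 4 KiB = 268,435,456 pages whose metadata entries must
be initialized.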

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - No change

v12 -> v13:
 - Added Yuan's tag.

v11 -> v12:
 - Added Kirill's tag

v10 -> v11:
 - No update

v9 -> v10:
 - Code change due to change static 'tdx_tdmr_list' to local 'tdmr_list'.

v8 -> v9:
 - Improved changelog to explain why initializing TDMRs can take a long
   time (Dave).
 - Improved comments around 'next-to-initialize' address (Dave).

v7 -> v8: (Dave)
 - Changelog:
   - explicitly call out this is the last step of TDX module initialization.
   - Trimmed down changelog by removing SEAMCALL name and details.
 - Removed/trimmed down unnecessary comments.
 - Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
 - Removed need_resched() check. -- Andi.


---
 arch/x86/virt/vmx/tdx/tdx.c | 60 ++++++++++++++++++++++++++++++++-----
 arch/x86/virt/vmx/tdx/tdx.h |  1 +
 2 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 329d233c11da..ac47d58f8c74 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1006,6 +1006,56 @@ static int config_global_keyid(void)
 	return ret;
 }
 
+static int init_tdmr(struct tdmr_info *tdmr)
+{
+	u64 next;
+
+	/*
+	 * Initializing a TDMR can be time consuming.  To avoid long
+	 * SEAMCALLs, the TDX module may only initialize a part of the
+	 * TDMR in each call.
+	 */
+	do {
+		struct tdx_module_args args = {
+			.rcx = tdmr->base,
+		};
+		int ret;
+
+		ret = seamcall_prerr_ret(TDH_SYS_TDMR_INIT, &args);
+		if (ret)
+			return ret;
+		/*
+		 * RDX contains 'next-to-initialize' address if
+		 * TDH.SYS.TDMR.INIT did not fully complete and
+		 * should be retried.
+		 */
+		next = args.rdx;
+		cond_resched();
+		/* Keep making SEAMCALLs until the TDMR is done */
+	} while (next < tdmr->base + tdmr->size);
+
+	return 0;
+}
+
+static int init_tdmrs(struct tdmr_info_list *tdmr_list)
+{
+	int i;
+
+	/*
+	 * This operation is costly.  It can be parallelized,
+	 * but keep it simple for now.
+	 */
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		int ret;
+
+		ret = init_tdmr(tdmr_entry(tdmr_list, i));
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int init_tdx_module(void)
 {
 	struct tdx_tdmr_sysinfo tdmr_sysinfo;
@@ -1062,14 +1112,8 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_reset_pamts;
 
-	/*
-	 * TODO:
-	 *
-	 *  - Initialize all TDMRs.
-	 *
-	 *  Return error before all steps are done.
-	 */
-	ret = -EINVAL;
+	/* Initialize TDMRs to complete the TDX module initialization */
+	ret = init_tdmrs(&tdmr_list);
 out_reset_pamts:
 	if (ret) {
 		/*
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index dd35baf756b8..c0610f0bb88c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -18,6 +18,7 @@
 #define TDH_SYS_INIT		33
 #define TDH_SYS_RD		34
 #define TDH_SYS_LP_INIT		35
+#define TDH_SYS_TDMR_INIT	36
 #define TDH_SYS_CONFIG		45
 
 /*
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (15 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 16/23] x86/virt/tdx: Initialize all TDMRs Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-27 18:13   ` Dave Hansen
  2023-11-09 11:55 ` [PATCH v15 18/23] x86/virt/tdx: Keep TDMRs when module initialization is successful Kai Huang
                   ` (6 subsequent siblings)
  23 siblings, 1 reply; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

There are two problems with using kexec() to boot to a new kernel when
the old kernel has enabled TDX: 1) part of the memory pages are still
TDX private pages; 2) there might be dirty cachelines associated with
TDX private pages.

The first problem doesn't matter on platforms without the "partial write
machine check" erratum, since KeyID 0 has no integrity checking.  If the
new kernel wants to use any non-zero KeyID, it needs to convert the
memory to that KeyID, and such conversion works from any KeyID.

However the old kernel needs to guarantee there's no dirty cacheline
left behind before booting to the new kernel to avoid silent corruption
from later cacheline writeback (Intel hardware doesn't guarantee cache
coherency across different KeyIDs).

There are two things that the old kernel needs to do to achieve that:

1) Stop accessing TDX private memory mappings:
   a. Stop making TDX module SEAMCALLs (TDX global KeyID);
   b. Stop TDX guests from running (per-guest TDX KeyID).
2) Flush any cachelines from previous TDX private KeyID writes.

For 2), use wbinvd() to flush cache in stop_this_cpu(), following SME
support.  And in this way 1) happens for free as there's no TDX activity
between wbinvd() and the native_halt().

Flushing cache in stop_this_cpu() only flushes cache on remote cpus.  On
the rebooting cpu which does kexec(), unlike SME which does the cache
flush in relocate_kernel(), flush the cache right after stopping remote
cpus in machine_shutdown().

There are two reasons to do so: 1) For TDX there's no need to defer
cache flush to relocate_kernel() because all TDX activities have been
stopped.  2) On the platforms with the above erratum the kernel must
convert all TDX private pages back to normal before booting to the new
kernel in kexec(), and flushing cache early allows the kernel to convert
memory early rather than having to muck with the relocate_kernel()
assembly.

Theoretically, the cache flush is only needed when the TDX module has
been initialized.  However, initializing the TDX module is done on
demand at runtime, and reading the module status requires taking a
mutex.  Just flush the cache whenever TDX is enabled by the BIOS
instead.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - No change


---
 arch/x86/kernel/process.c |  8 +++++++-
 arch/x86/kernel/reboot.c  | 15 +++++++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b6f4e8399fca..8e3cf0f8d7f9 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -823,8 +823,14 @@ void __noreturn stop_this_cpu(void *dummy)
 	 *
 	 * Test the CPUID bit directly because the machine might've cleared
 	 * X86_FEATURE_SME due to cmdline options.
+	 *
+	 * The TDX module or guests might have left dirty cachelines
+	 * behind.  Flush them to avoid corruption from later writeback.
+	 * Note that this flushes on all systems where TDX is possible,
+	 * but does not actually check that TDX was in use.
 	 */
-	if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+	if ((c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+			|| platform_tdx_enabled())
 		native_wbinvd();
 
 	/*
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 830425e6d38e..e1a4fa8de11d 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -31,6 +31,7 @@
 #include <asm/realmode.h>
 #include <asm/x86_init.h>
 #include <asm/efi.h>
+#include <asm/tdx.h>
 
 /*
  * Power off function, if any
@@ -741,6 +742,20 @@ void native_machine_shutdown(void)
 	local_irq_disable();
 	stop_other_cpus();
 #endif
+	/*
+	 * stop_other_cpus() has flushed all dirty cachelines of TDX
+	 * private memory on remote cpus.  Unlike SME, which does the
+	 * cache flush on _this_ cpu in the relocate_kernel(), flush
+	 * the cache for _this_ cpu here.  This is because on the
+	 * platforms with "partial write machine check" erratum the
+	 * kernel needs to convert all TDX private pages back to normal
+	 * before booting to the new kernel in kexec(), and the cache
+	 * flush must be done before that.  If the kernel took SME's way,
+	 * it would have to muck with the relocate_kernel() assembly to
+	 * do memory conversion.
+	 */
+	if (platform_tdx_enabled())
+		native_wbinvd();
 
 	lapic_shutdown();
 	restore_boot_irq_mode();
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 18/23] x86/virt/tdx: Keep TDMRs when module initialization is successful
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (16 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 19/23] x86/virt/tdx: Improve readability of module initialization error handling Kai Huang
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

On platforms with the "partial write machine check" erratum, kexec()
needs to convert all TDX private pages back to normal before booting to
the new kernel.  Otherwise, the new kernel may get an unexpected machine
check.

There's no existing infrastructure to track TDX private pages.  Keep
TDMRs when module initialization is successful so that they can be used
to find PAMTs.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - "Change to keep" -> "Keep" (Kirill)
 - Add Kirill/Rick's tags

v12 -> v13:
  - Split "improve error handling" part out as a separate patch.

v11 -> v12 (new patch):
  - Defer keeping TDMRs logic to this patch for better review
  - Improved error handling logic (Nikolay/Kirill in patch 15)

---
 arch/x86/virt/vmx/tdx/tdx.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ac47d58f8c74..753e435a3040 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -44,6 +44,8 @@ static DEFINE_MUTEX(tdx_module_lock);
 /* All TDX-usable memory regions.  Protected by mem_hotplug_lock. */
 static LIST_HEAD(tdx_memlist);
 
+static struct tdmr_info_list tdx_tdmr_list;
+
 typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
 
 static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -1059,7 +1061,6 @@ static int init_tdmrs(struct tdmr_info_list *tdmr_list)
 static int init_tdx_module(void)
 {
 	struct tdx_tdmr_sysinfo tdmr_sysinfo;
-	struct tdmr_info_list tdmr_list;
 	int ret;
 
 	/*
@@ -1083,17 +1084,17 @@ static int init_tdx_module(void)
 		goto out_free_tdxmem;
 
 	/* Allocate enough space for constructing TDMRs */
-	ret = alloc_tdmr_list(&tdmr_list, &tdmr_sysinfo);
+	ret = alloc_tdmr_list(&tdx_tdmr_list, &tdmr_sysinfo);
 	if (ret)
 		goto out_free_tdxmem;
 
 	/* Cover all TDX-usable memory regions in TDMRs */
-	ret = construct_tdmrs(&tdx_memlist, &tdmr_list, &tdmr_sysinfo);
+	ret = construct_tdmrs(&tdx_memlist, &tdx_tdmr_list, &tdmr_sysinfo);
 	if (ret)
 		goto out_free_tdmrs;
 
 	/* Pass the TDMRs and the global KeyID to the TDX module */
-	ret = config_tdx_module(&tdmr_list, tdx_global_keyid);
+	ret = config_tdx_module(&tdx_tdmr_list, tdx_global_keyid);
 	if (ret)
 		goto out_free_pamts;
 
@@ -1113,7 +1114,7 @@ static int init_tdx_module(void)
 		goto out_reset_pamts;
 
 	/* Initialize TDMRs to complete the TDX module initialization */
-	ret = init_tdmrs(&tdmr_list);
+	ret = init_tdmrs(&tdx_tdmr_list);
 out_reset_pamts:
 	if (ret) {
 		/*
@@ -1130,20 +1131,17 @@ static int init_tdx_module(void)
 		 * back to normal.  But do the conversion anyway here
 		 * as suggested by the TDX spec.
 		 */
-		tdmrs_reset_pamt_all(&tdmr_list);
+		tdmrs_reset_pamt_all(&tdx_tdmr_list);
 	}
 out_free_pamts:
 	if (ret)
-		tdmrs_free_pamt_all(&tdmr_list);
+		tdmrs_free_pamt_all(&tdx_tdmr_list);
 	else
 		pr_info("%lu KBs allocated for PAMT\n",
-				tdmrs_count_pamt_kb(&tdmr_list));
+				tdmrs_count_pamt_kb(&tdx_tdmr_list));
 out_free_tdmrs:
-	/*
-	 * Always free the buffer of TDMRs as they are only used during
-	 * module initialization.
-	 */
-	free_tdmr_list(&tdmr_list);
+	if (ret)
+		free_tdmr_list(&tdx_tdmr_list);
 out_free_tdxmem:
 	if (ret)
 		free_tdx_memlist(&tdx_memlist);
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 19/23] x86/virt/tdx: Improve readability of module initialization error handling
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (17 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 18/23] x86/virt/tdx: Keep TDMRs when module initialization is successful Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 20/23] x86/kexec(): Reset TDX private memory on platforms with TDX erratum Kai Huang
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Now that TDMRs are kept upon successful TDX module initialization, only
put_online_mems() needs to be done when module initialization succeeds.
On the other hand, the four other "out_*" labels before it explicitly
check the return value and only clean up when module initialization
fails.

This isn't ideal.  Make those four "out_*" labels reachable only when
module initialization fails to improve the readability of the error
handling, and rename them from "out_*" to "err_*" to reflect that.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - Rebase due to TDH.SYS.RD patch (minor)

v13 -> v14:
 - Fix spell typo (Rick)
 - Add Kirill/Rick's tags

v12 -> v13:
  - New patch to improve error handling. (Kirill, Nikolay)


---
 arch/x86/virt/vmx/tdx/tdx.c | 69 +++++++++++++++++++------------------
 1 file changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 753e435a3040..e8cd91692ccf 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1081,22 +1081,22 @@ static int init_tdx_module(void)
 
 	ret = get_tdx_tdmr_sysinfo(&tdmr_sysinfo);
 	if (ret)
-		goto out_free_tdxmem;
+		goto err_free_tdxmem;
 
 	/* Allocate enough space for constructing TDMRs */
 	ret = alloc_tdmr_list(&tdx_tdmr_list, &tdmr_sysinfo);
 	if (ret)
-		goto out_free_tdxmem;
+		goto err_free_tdxmem;
 
 	/* Cover all TDX-usable memory regions in TDMRs */
 	ret = construct_tdmrs(&tdx_memlist, &tdx_tdmr_list, &tdmr_sysinfo);
 	if (ret)
-		goto out_free_tdmrs;
+		goto err_free_tdmrs;
 
 	/* Pass the TDMRs and the global KeyID to the TDX module */
 	ret = config_tdx_module(&tdx_tdmr_list, tdx_global_keyid);
 	if (ret)
-		goto out_free_pamts;
+		goto err_free_pamts;
 
 	/*
 	 * Hardware doesn't guarantee cache coherency across different
@@ -1111,40 +1111,16 @@ static int init_tdx_module(void)
 	/* Config the key of global KeyID on all packages */
 	ret = config_global_keyid();
 	if (ret)
-		goto out_reset_pamts;
+		goto err_reset_pamts;
 
 	/* Initialize TDMRs to complete the TDX module initialization */
 	ret = init_tdmrs(&tdx_tdmr_list);
-out_reset_pamts:
-	if (ret) {
-		/*
-		 * Part of PAMTs may already have been initialized by the
-		 * TDX module.  Flush cache before returning PAMTs back
-		 * to the kernel.
-		 */
-		wbinvd_on_all_cpus();
-		/*
-		 * According to the TDX hardware spec, if the platform
-		 * doesn't have the "partial write machine check"
-		 * erratum, any kernel read/write will never cause #MC
-		 * in kernel space, thus it's OK to not convert PAMTs
-		 * back to normal.  But do the conversion anyway here
-		 * as suggested by the TDX spec.
-		 */
-		tdmrs_reset_pamt_all(&tdx_tdmr_list);
-	}
-out_free_pamts:
 	if (ret)
-		tdmrs_free_pamt_all(&tdx_tdmr_list);
-	else
-		pr_info("%lu KBs allocated for PAMT\n",
-				tdmrs_count_pamt_kb(&tdx_tdmr_list));
-out_free_tdmrs:
-	if (ret)
-		free_tdmr_list(&tdx_tdmr_list);
-out_free_tdxmem:
-	if (ret)
-		free_tdx_memlist(&tdx_memlist);
+		goto err_reset_pamts;
+
+	pr_info("%lu KBs allocated for PAMT\n",
+			tdmrs_count_pamt_kb(&tdx_tdmr_list));
+
 out_put_tdxmem:
 	/*
 	 * @tdx_memlist is written here and read at memory hotplug time.
@@ -1152,6 +1128,31 @@ static int init_tdx_module(void)
 	 */
 	put_online_mems();
 	return ret;
+
+err_reset_pamts:
+	/*
+	 * Part of PAMTs may already have been initialized by the
+	 * TDX module.  Flush cache before returning PAMTs back
+	 * to the kernel.
+	 */
+	wbinvd_on_all_cpus();
+	/*
+	 * According to the TDX hardware spec, if the platform
+	 * doesn't have the "partial write machine check"
+	 * erratum, any kernel read/write will never cause #MC
+	 * in kernel space, thus it's OK to not convert PAMTs
+	 * back to normal.  But do the conversion anyway here
+	 * as suggested by the TDX spec.
+	 */
+	tdmrs_reset_pamt_all(&tdx_tdmr_list);
+err_free_pamts:
+	tdmrs_free_pamt_all(&tdx_tdmr_list);
+err_free_tdmrs:
+	free_tdmr_list(&tdx_tdmr_list);
+err_free_tdxmem:
+	free_tdx_memlist(&tdx_memlist);
+	/* Do things that don't depend on the module initialization result */
+	goto out_put_tdxmem;
 }
 
 static int __tdx_enable(void)
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 20/23] x86/kexec(): Reset TDX private memory on platforms with TDX erratum
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (18 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 19/23] x86/virt/tdx: Improve readability of module initialization error handling Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-09 11:55 ` [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states Kai Huang
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The first few generations of TDX hardware have an erratum.  A partial
write to a TDX private memory cacheline will silently "poison" the
line.  Subsequent reads will consume the poison and generate a machine
check.  According to the TDX hardware spec, neither of these things
should have happened.

== Background ==

Virtually all kernel memory access operations happen in full
cachelines.  In practice, writing a "byte" of memory usually reads a 64
byte cacheline of memory, modifies it, then writes the whole line back.
Those operations do not trigger this problem.

This problem is triggered by "partial" writes where a write transaction
of less than a cacheline lands at the memory controller.  The CPU does
these via non-temporal write instructions (like MOVNTI), or through
UC/WC memory mappings.  The issue can also be triggered away from the
CPU by devices doing partial writes via DMA.
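
As a concrete, hypothetical illustration (not part of this series; the
helper name is made up), a non-temporal store like the following is
exactly the kind of "partial" write described above:

  /*
   * MOVNTI stores 8 bytes directly to memory, bypassing the cache,
   * so less than a full cacheline lands at the memory controller.
   * If such a store ever hit TDX private memory (a kernel bug), it
   * would silently poison the line on affected platforms.
   */
  static inline void nt_store_example(unsigned long *dst, unsigned long val)
  {
  	asm volatile("movnti %1, %0" : "=m" (*dst) : "r" (val));
  }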

== Problem ==

A fast warm reset doesn't reset TDX private memory.  Kexec() can also
boot into the new kernel directly.  Thus, if the old kernel has enabled
TDX on a platform with this erratum, the new kernel may get an
unexpected machine check.

Note that without this erratum any kernel read/write on TDX private
memory should never cause a machine check, thus it's OK for the old
kernel to leave TDX private pages as-is.

== Solution ==

In short, with this erratum, the kernel needs to explicitly convert all
TDX private pages back to normal to give the new kernel a clean slate
after kexec().  The BIOS is also expected to disable fast warm reset as
a workaround for this erratum, so this implementation doesn't try to
reset TDX private memory for the reboot case in the kernel but depends
on the BIOS to enable the workaround.

Convert TDX private pages back to normal after all remote cpus have been
stopped and the cache flush has been done on all cpus, when no further
TDX activity can happen.  Do it in machine_kexec() to avoid the
additional overhead to the normal reboot/shutdown as the kernel depends
on the BIOS to disable fast warm reset for the reboot case.

For now TDX private memory can only be PAMT pages.  It would be ideal to
cover all types of TDX private memory here, but there are practical
problems to do so:

1) There's no existing infrastructure to track TDX private pages;
2) It's not feasible to query the TDX module about page type because VMX
   has already been stopped when KVM receives the reboot notifier, plus
   the result from the TDX module may not be accurate (e.g., the remote
   CPU could be stopped right before MOVDIR64B).

One temporary solution is to blindly convert all memory pages, but it's
problematic to do so too, because not all pages are mapped as writable
in the direct mapping.  It can be done by switching to the identity
mapping created for kexec() or a new page table, but the complexity
looks overkill.

Therefore, rather than doing something dramatic, only reset PAMT pages
here.  Other kernel components which use TDX need to do the conversion
on their own by intercepting the rebooting/shutdown notifier (KVM
already does that).

Note kexec() can happen at any time, including when the TDX module is
being initialized.  Register a TDX reboot notifier callback to stop further TDX
module initialization.  If there's any ongoing module initialization,
wait until it finishes.  This makes sure the TDX module status is stable
after the reboot notifier callback, and the later kexec() code can read
module status to decide whether PAMTs are stable and available.

Also stop further TDX module initialization on machine shutdown and
halt, not just kexec(), as there's no reason to continue initializing
the module in those cases either.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - Skip resetting TDX private memory when preserve_context is true (Rick)
 - Use reboot notifier to stop TDX module initialization at early time of
   kexec() to make module status stable, to avoid using a new variable
   and memory barrier (which is tricky to review).
 - Added Kirill's tag

v12 -> v13:
 - Improve comments to explain why barrier is needed and ignore WBINVD.
   (Dave)
 - Improve comments to document memory ordering. (Nikolay)
 - Made comments/changelog slightly more concise.

v11 -> v12:
 - Changed comment/changelog to say kernel doesn't try to handle fast
   warm reset but depends on BIOS to enable workaround (Kirill)
 - Added a new tdx_may_has_private_mem to indicate system may have TDX
   private memory and PAMTs/TDMRs are stable to access. (Dave).
 - Use atomic_t for tdx_may_has_private_mem for build-in memory barrier
   (Dave)
 - Changed calling x86_platform.memory_shutdown() to calling
   tdx_reset_memory() directly from machine_kexec() to avoid overhead to
   normal reboot case.

v10 -> v11:
 - New patch


---
 arch/x86/include/asm/tdx.h         |  2 +
 arch/x86/kernel/machine_kexec_64.c | 16 ++++++
 arch/x86/virt/vmx/tdx/tdx.c        | 92 ++++++++++++++++++++++++++++++
 3 files changed, 110 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 26b7fdbcbdb3..caca139e7022 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -114,10 +114,12 @@ static inline u64 sc_retry(sc_func_t func, u64 fn,
 bool platform_tdx_enabled(void);
 int tdx_cpu_enable(void);
 int tdx_enable(void);
+void tdx_reset_memory(void);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_cpu_enable(void) { return -ENODEV; }
 static inline int tdx_enable(void)  { return -ENODEV; }
+static inline void tdx_reset_memory(void) { }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 1a3e2c05a8a5..d55522902aa1 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -28,6 +28,7 @@
 #include <asm/setup.h>
 #include <asm/set_memory.h>
 #include <asm/cpu.h>
+#include <asm/tdx.h>
 
 #ifdef CONFIG_ACPI
 /*
@@ -301,9 +302,24 @@ void machine_kexec(struct kimage *image)
 	void *control_page;
 	int save_ftrace_enabled;
 
+	/*
+	 * For platforms with TDX "partial write machine check" erratum,
+	 * all TDX private pages need to be converted back to normal
+	 * before booting to the new kernel, otherwise the new kernel
+	 * may get unexpected machine check.
+	 *
+	 * But skip this when preserve_context is on.  The second kernel
+	 * shouldn't write to the first kernel's memory anyway.  Skipping
+	 * this also avoids killing TDX in the first kernel, which would
+	 * require more complicated handling.
+	 */
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
 		save_processor_state();
+	else
+		tdx_reset_memory();
+#else
+	tdx_reset_memory();
 #endif
 
 	save_ftrace_enabled = __ftrace_enabled_save();
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index e8cd91692ccf..53a87034ad59 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -25,6 +25,7 @@
 #include <linux/align.h>
 #include <linux/sort.h>
 #include <linux/log2.h>
+#include <linux/reboot.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/page.h>
@@ -46,6 +47,8 @@ static LIST_HEAD(tdx_memlist);
 
 static struct tdmr_info_list tdx_tdmr_list;
 
+static bool tdx_rebooting;
+
 typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
 
 static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -1159,6 +1162,9 @@ static int __tdx_enable(void)
 {
 	int ret;
 
+	if (tdx_rebooting)
+		return -EAGAIN;
+
 	ret = init_tdx_module();
 	if (ret) {
 		pr_err("module initialization failed (%d)\n", ret);
@@ -1217,6 +1223,69 @@ int tdx_enable(void)
 }
 EXPORT_SYMBOL_GPL(tdx_enable);
 
+/*
+ * Convert TDX private pages back to normal on platforms with
+ * "partial write machine check" erratum.
+ *
+ * Called from machine_kexec() before booting to the new kernel.
+ */
+void tdx_reset_memory(void)
+{
+	if (!platform_tdx_enabled())
+		return;
+
+	/*
+	 * Kernel read/write to TDX private memory doesn't
+	 * cause machine check on hardware w/o this erratum.
+	 */
+	if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+		return;
+
+	/* Called from kexec() when only the rebooting cpu is alive */
+	WARN_ON_ONCE(num_online_cpus() != 1);
+
+	/*
+	 * tdx_reboot_notifier() waits for any ongoing TDX module
+	 * initialization to finish, and module initialization is
+	 * rejected after that.  Therefore @tdx_module_status is
+	 * stable here and can be read w/o holding lock.
+	 */
+	if (tdx_module_status != TDX_MODULE_INITIALIZED)
+		return;
+
+	/*
+	 * Convert PAMTs back to normal.  All other cpus are already
+	 * dead and TDMRs/PAMTs are stable.
+	 *
+	 * Ideally it's better to cover all types of TDX private pages
+	 * here, but it's impractical:
+	 *
+	 *  - There's no existing infrastructure to tell whether a page
+	 *    is TDX private memory or not.
+	 *
+	 *  - Using SEAMCALL to query TDX module isn't feasible either:
+	 *    - VMX has been turned off by reaching here so SEAMCALL
+	 *      cannot be made;
+	 *    - Even SEAMCALL can be made the result from TDX module may
+	 *      not be accurate (e.g., remote CPU can be stopped while
+	 *      the kernel is in the middle of reclaiming TDX private
+	 *      page and doing MOVDIR64B).
+	 *
+	 * One temporary solution could be just converting all memory
+	 * pages, but it's problematic too, because not all pages are
+	 * mapped as writable in the direct mapping.  It can be done by
+	 * switching to the identity mapping for kexec() or a new page
+	 * table which maps all pages as writable, but the complexity is
+	 * overkill.
+	 *
+	 * Thus instead of doing something dramatic to convert all pages,
+	 * only convert PAMTs here.  Other kernel components which use
+	 * TDX need to do the conversion on their own by intercepting the
+	 * rebooting/shutdown notifier (KVM already does that).
+	 */
+	tdmrs_reset_pamt_all(&tdx_tdmr_list);
+}
+
 static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 					    u32 *nr_tdx_keyids)
 {
@@ -1295,6 +1364,21 @@ static struct notifier_block tdx_memory_nb = {
 	.notifier_call = tdx_memory_notifier,
 };
 
+static int tdx_reboot_notifier(struct notifier_block *nb, unsigned long mode,
+			       void *unused)
+{
+	/* Wait for any ongoing TDX initialization to finish */
+	mutex_lock(&tdx_module_lock);
+	tdx_rebooting = true;
+	mutex_unlock(&tdx_module_lock);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block tdx_reboot_nb = {
+	.notifier_call = tdx_reboot_notifier,
+};
+
 static int __init tdx_init(void)
 {
 	u32 tdx_keyid_start, nr_tdx_keyids;
@@ -1325,6 +1409,14 @@ static int __init tdx_init(void)
 		return -ENODEV;
 	}
 
+	err = register_reboot_notifier(&tdx_reboot_nb);
+	if (err) {
+		pr_err("initialization failed: register_reboot_notifier() failed (%d)\n",
+				err);
+		unregister_memory_notifier(&tdx_memory_nb);
+		return -ENODEV;
+	}
+
 	/*
 	 * Just use the first TDX KeyID as the 'global KeyID' and
 	 * leave the rest for TDX guests.
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (19 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 20/23] x86/kexec(): Reset TDX private memory on platforms with TDX erratum Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-30 17:20   ` Dave Hansen
  2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
                   ` (2 subsequent siblings)
  23 siblings, 1 reply; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

TDX cannot survive S3 and deeper states.  The hardware resets and
disables TDX completely when the platform goes to S3 or deeper.  Both
TDX guests and the TDX module get destroyed permanently.

The kernel uses S3 to support suspend-to-ram, and S4 or deeper states to
support hibernation.  The kernel also maintains TDX states to track
whether it has been initialized and its metadata resource, etc.  After
resuming from S3 or hibernation, these TDX states won't be correct
anymore.

Theoretically, the kernel could do more complicated things like
resetting TDX internal states and TDX module metadata before going to S3
or deeper, and re-initializing the TDX module after resuming, etc., but
there is no way to save/restore TDX guests for now.

Until TDX supports full save and restore of TDX guests, there is little
value in handling the TDX module alone for suspend and hibernation.  To
keep things simple, just make TDX mutually exclusive with S3 and
hibernation.

Note the TDX module is initialized at runtime.  To avoid having to deal
with the fuss of determining TDX state at runtime, just choose between
TDX and S3/hibernation at early kernel boot.  Making the choice at
runtime would be a bad user experience anyway, i.e., the user could find
S3/hibernation working at first but becoming unavailable later because
TDX got enabled.

Disable TDX at early kernel boot when hibernation support is available.
Currently there's no mechanism exposed by the hibernation code to allow
other kernel code to disable hibernation once and for all.

Disable ACPI S3 when TDX is enabled by the BIOS.  For now the user needs
to disable TDX in the BIOS to use ACPI S3.  A new kernel command line
option can be added in the future if there's a need to let the user
disable TDX host support via the kernel command line.

Alternatively, the kernel could disable TDX when ACPI S3 is supported
and ask the user to disable S3 to use TDX.  But there's no existing
kernel command line option to do that, and the BIOS doesn't always have
an option to disable S3.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---

v14 -> v15:
 - Simplify the error message when hibernation_available() returns true
   by removing "Use 'nohibernate' kernel command line part".  Instead,
   explain how to resolve in the Documentation patch. (Rafael)
 - Simplify the comment around hibernation_available(). (Rafael)
 - Also guard acpi_suspend_lowlevel with CONFIG_SUSPEND.

v13 -> v14:
 - New patch

---
 arch/x86/virt/vmx/tdx/tdx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 53a87034ad59..cc21a0f25bee 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -26,6 +26,8 @@
 #include <linux/sort.h>
 #include <linux/log2.h>
 #include <linux/reboot.h>
+#include <linux/acpi.h>
+#include <linux/suspend.h>
 #include <asm/msr-index.h>
 #include <asm/msr.h>
 #include <asm/page.h>
@@ -1402,6 +1404,15 @@ static int __init tdx_init(void)
 		return -ENODEV;
 	}
 
+	/*
+	 * At this point, hibernation_available() indicates whether or
+	 * not hibernation support has been permanently disabled.
+	 */
+	if (hibernation_available()) {
+		pr_err("initialization failed: Hibernation support is enabled\n");
+		return -ENODEV;
+	}
+
 	err = register_memory_notifier(&tdx_memory_nb);
 	if (err) {
 		pr_err("initialization failed: register_memory_notifier() failed (%d)\n",
@@ -1417,6 +1428,11 @@ static int __init tdx_init(void)
 		return -ENODEV;
 	}
 
+#if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
+	pr_info("Disable ACPI S3. Turn off TDX in the BIOS to use ACPI S3.\n");
+	acpi_suspend_lowlevel = NULL;
+#endif
+
 	/*
 	 * Just use the first TDX KeyID as the 'global KeyID' and
 	 * leave the rest for TDX guests.
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (20 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states Kai Huang
@ 2023-11-09 11:55 ` Kai Huang
  2023-11-30 18:01   ` Tony Luck
                     ` (2 more replies)
  2023-11-09 11:56 ` [PATCH v15 23/23] Documentation/x86: Add documentation for TDX host support Kai Huang
  2023-11-13  8:40 ` [PATCH v15 00/23] TDX host kernel support Nikolay Borisov
  23 siblings, 3 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:55 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

The first few generations of TDX hardware have an erratum.  Triggering
it in Linux requires some kind of kernel bug involving relatively exotic
memory writes to TDX private memory and will manifest via
spurious-looking machine checks when reading the affected memory.

== Background ==

Virtually all kernel memory access operations happen in full
cachelines.  In practice, writing a "byte" of memory usually reads a 64
byte cacheline of memory, modifies it, then writes the whole line back.
Those operations do not trigger this problem.

This problem is triggered by "partial" writes where a write transaction
of less than a cacheline lands at the memory controller.  The CPU does
these via non-temporal write instructions (like MOVNTI), or through
UC/WC memory mappings.  The issue can also be triggered away from the
CPU by devices doing partial writes via DMA.

== Problem ==

A partial write to a TDX private memory cacheline will silently "poison"
the line.  Subsequent reads will consume the poison and generate a
machine check.  According to the TDX hardware spec, neither of these
things should have happened.

To add insult to injury, the Linux machine check code will present these
as a literal "Hardware Error" when they are, in fact, a
software-triggered issue.

== Solution ==

In the end, this issue is hard to trigger.  Rather than do something
rash (and incomplete) like unmap TDX private memory from the direct map,
improve the machine check handler.

Currently, the #MC handler doesn't distinguish whether the memory is
TDX private memory or not, but just dumps, for instance, the message
below:

 [...] mce: [Hardware Error]: CPU 147: Machine Check Exception: f Bank 1: bd80000000100134
 [...] mce: [Hardware Error]: RIP 10:<ffffffffadb69870> {__tlb_remove_page_size+0x10/0xa0}
 	...
 [...] mce: [Hardware Error]: Run the above through 'mcelog --ascii'
 [...] mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
 [...] Kernel panic - not syncing: Fatal local machine check

Which says "Hardware Error" and "Data load in unrecoverable area of
kernel".

Ideally, it's better for the log to say "software bug around TDX private
memory" instead of "Hardware Error".  But in reality real hardware memory
errors can happen, and sadly such a software-triggered #MC cannot be
distinguished from a real hardware error.  Also, the error message is
parsed by the userspace tool 'mcelog', so changing the output may break
userspace.

So keep the "Hardware Error".  The "Data load in unrecoverable area of
kernel" is also helpful, so keep it too.

Instead of modifying the above error log, improve it by printing an
additional TDX-related message to make the log look like:

  ...
 [...] mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
 [...] mce: [Hardware Error]: Machine Check: TDX private memory error. Possible kernel bug.

Adding this additional message requires determining whether the memory
page is TDX private memory.  There is no existing infrastructure to do
that.  Add an interface to query the TDX module to fill this gap.

== Impact ==

This issue requires some kind of kernel bug to trigger.

TDX private memory should never be mapped UC/WC.  A partial write
originating from these mappings would require *two* bugs, first mapping
the wrong page, then writing the wrong memory.  It would also be
detectable using traditional memory corruption detection techniques like
DEBUG_PAGEALLOC.

MOVNTI (and friends) could cause this issue with something like a simple
buffer overrun or use-after-free on the direct map.  It should also be
detectable with normal debug techniques.

The one place where this might get nasty would be if the CPU read data
then wrote back the same data.  That would trigger this problem but
would not, for instance, set off mechanisms like slab redzoning because
it doesn't actually corrupt data.

With an IOMMU at least, the DMA exposure is similar to the UC/WC issue.
TDX private memory would first need to be incorrectly mapped into the
I/O space and then a later DMA to that mapping would actually cause the
poisoning event.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
---

v14 -> v15:
 - No change

v13 -> v14:
 - No change

v12 -> v13:
 - Added Kirill and Yuan's tag.

v11 -> v12:
 - Simplified #MC message (Dave/Kirill)
 - Slightly improved some comments.

v10 -> v11:
 - New patch


---
 arch/x86/include/asm/tdx.h     |   2 +
 arch/x86/kernel/cpu/mce/core.c |  33 +++++++++++
 arch/x86/virt/vmx/tdx/tdx.c    | 103 +++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h    |   5 ++
 4 files changed, 143 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index caca139e7022..a621721f63dd 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -115,11 +115,13 @@ bool platform_tdx_enabled(void);
 int tdx_cpu_enable(void);
 int tdx_enable(void);
 void tdx_reset_memory(void);
+bool tdx_is_private_mem(unsigned long phys);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_cpu_enable(void) { return -ENODEV; }
 static inline int tdx_enable(void)  { return -ENODEV; }
 static inline void tdx_reset_memory(void) { }
+static inline bool tdx_is_private_mem(unsigned long phys) { return false; }
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 7b397370b4d6..e33537cfc507 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -52,6 +52,7 @@
 #include <asm/mce.h>
 #include <asm/msr.h>
 #include <asm/reboot.h>
+#include <asm/tdx.h>
 
 #include "internal.h"
 
@@ -228,11 +229,34 @@ static void wait_for_panic(void)
 	panic("Panicing machine check CPU died");
 }
 
+static const char *mce_memory_info(struct mce *m)
+{
+	if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
+		return NULL;
+
+	/*
+	 * Certain initial generations of TDX-capable CPUs have an
+	 * erratum.  A kernel non-temporal partial write to TDX private
+	 * memory poisons that memory, and a subsequent read of that
+	 * memory triggers #MC.
+	 *
+	 * However such #MC caused by software cannot be distinguished
+	 * from the real hardware #MC.  Just print additional message
+	 * to show such #MC may be result of the CPU erratum.
+	 */
+	if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+		return NULL;
+
+	return !tdx_is_private_mem(m->addr) ? NULL :
+		"TDX private memory error. Possible kernel bug.";
+}
+
 static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
 {
 	struct llist_node *pending;
 	struct mce_evt_llist *l;
 	int apei_err = 0;
+	const char *memmsg;
 
 	/*
 	 * Allow instrumentation around external facilities usage. Not that it
@@ -283,6 +307,15 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
 	}
 	if (exp)
 		pr_emerg(HW_ERR "Machine check: %s\n", exp);
+	/*
+	 * Machine checks on confidential computing platforms such as
+	 * TDX can be caused by incorrect access to confidential
+	 * memory.  Print additional information for such errors.
+	 */
+	memmsg = mce_memory_info(final);
+	if (memmsg)
+		pr_emerg(HW_ERR "Machine check: %s\n", memmsg);
+
 	if (!fake_panic) {
 		if (panic_timeout == 0)
 			panic_timeout = mca_cfg.panic_timeout;
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index cc21a0f25bee..1b84dcdf63cb 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1288,6 +1288,109 @@ void tdx_reset_memory(void)
 	tdmrs_reset_pamt_all(&tdx_tdmr_list);
 }
 
+static bool is_pamt_page(unsigned long phys)
+{
+	struct tdmr_info_list *tdmr_list = &tdx_tdmr_list;
+	int i;
+
+	/*
+	 * This function is called from the #MC handler, and theoretically
+	 * it could run in parallel with the TDX module initialization
+	 * on other logical cpus.  But it's not OK to hold a mutex here,
+	 * so just blindly check the module status to make sure PAMTs/TDMRs
+	 * are stable to access.
+	 *
+	 * This may return inaccurate result in rare cases, e.g., when
+	 * #MC happens on a PAMT page during module initialization, but
+	 * this is fine as #MC handler doesn't need a 100% accurate
+	 * result.
+	 */
+	if (tdx_module_status != TDX_MODULE_INITIALIZED)
+		return false;
+
+	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+		unsigned long base, size;
+
+		tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
+
+		if (phys >= base && phys < (base + size))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Return whether the memory page at the given physical address is TDX
+ * private memory or not.  Called from #MC handler do_machine_check().
+ *
+ * Note this function may not return an accurate result in rare cases.
+ * This is fine as the #MC handler doesn't need a 100% accurate result,
+ * because it cannot distinguish #MC between software bug and real
+ * hardware error anyway.
+ */
+bool tdx_is_private_mem(unsigned long phys)
+{
+	struct tdx_module_args args = {
+		.rcx = phys & PAGE_MASK,
+	};
+	u64 sret;
+
+	if (!platform_tdx_enabled())
+		return false;
+
+	/* Get page type from the TDX module */
+	sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
+	/*
+	 * Handle the case that CPU isn't in VMX operation.
+	 *
+	 * KVM guarantees no VM is running (thus no TDX guest)
+	 * when there's any online CPU isn't in VMX operation.
+	 * This means there will be no TDX guest private memory
+	 * and Secure-EPT pages.  However the TDX module may have
+	 * been initialized and the memory page could be PAMT.
+	 */
+	if (sret == TDX_SEAMCALL_UD)
+		return is_pamt_page(phys);
+
+	/*
+	 * Any other failure means:
+	 *
+	 * 1) TDX module not loaded; or
+	 * 2) Memory page isn't managed by the TDX module.
+	 *
+	 * In either case, the memory page cannot be a TDX
+	 * private page.
+	 */
+	if (sret)
+		return false;
+
+	/*
+	 * SEAMCALL was successful -- read page type (via RCX):
+	 *
+	 *  - PT_NDA:	Page is not used by the TDX module
+	 *  - PT_RSVD:	Reserved for Non-TDX use
+	 *  - Others:	Page is used by the TDX module
+	 *
+	 * Note PAMT pages are marked as PT_RSVD but they are also TDX
+	 * private memory.
+	 *
+	 * Note: Even if the page type is PT_NDA, the memory page could
+	 * still be associated with a TDX private KeyID if the kernel
+	 * hasn't explicitly used MOVDIR64B to clear the page.  Assume
+	 * KVM always does that after reclaiming any private page from
+	 * TDX guests.
+	 */
+	switch (args.rcx) {
+	case PT_NDA:
+		return false;
+	case PT_RSVD:
+		return is_pamt_page(phys);
+	default:
+		return true;
+	}
+}
+
 static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
 					    u32 *nr_tdx_keyids)
 {
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index c0610f0bb88c..b701f69485d3 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -14,6 +14,7 @@
 /*
  * TDX module SEAMCALL leaf functions
  */
+#define TDH_PHYMEM_PAGE_RDMD	24
 #define TDH_SYS_KEY_CONFIG	31
 #define TDH_SYS_INIT		33
 #define TDH_SYS_RD		34
@@ -21,6 +22,10 @@
 #define TDH_SYS_TDMR_INIT	36
 #define TDH_SYS_CONFIG		45
 
+/* TDX page types */
+#define	PT_NDA		0x0
+#define	PT_RSVD		0x1
+
 /*
  * Global scope metadata field ID.
  *
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH v15 23/23] Documentation/x86: Add documentation for TDX host support
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (21 preceding siblings ...)
  2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
@ 2023-11-09 11:56 ` Kai Huang
  2023-11-13  8:40 ` [PATCH v15 00/23] TDX host kernel support Nikolay Borisov
  23 siblings, 0 replies; 66+ messages in thread
From: Kai Huang @ 2023-11-09 11:56 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme, sagis,
	imammedo, kai.huang

Add documentation for TDX host kernel support.  There is already one
file, Documentation/arch/x86/tdx.rst, containing documentation for TDX
guest internals.  Reuse it for TDX host kernel support as well.

Introduce a new top-level menu "TDX Guest Support", move the existing
material under it, and add a new menu for TDX host kernel support.

Signed-off-by: Kai Huang <kai.huang@intel.com>
---

v14 -> v15:
 - Removed the dmesg showing the TDX module version (it is not printed
   anymore).

v13 -> v14:
 - Added new sections for "Erratum" and "TDX vs S3/hibernation"


---
 Documentation/arch/x86/tdx.rst | 222 +++++++++++++++++++++++++++++++--
 1 file changed, 211 insertions(+), 11 deletions(-)

diff --git a/Documentation/arch/x86/tdx.rst b/Documentation/arch/x86/tdx.rst
index dc8d9fd2c3f7..8969675568d0 100644
--- a/Documentation/arch/x86/tdx.rst
+++ b/Documentation/arch/x86/tdx.rst
@@ -10,6 +10,206 @@ encrypting the guest memory. In TDX, a special module running in a special
 mode sits between the host and the guest and manages the guest/host
 separation.
 
+TDX Host Kernel Support
+=======================
+
+TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and
+a new isolated range pointed to by the SEAM Range Register (SEAMRR).  A
+CPU-attested software module called 'the TDX module' runs inside the new
+isolated range to provide the functionality to manage and run protected
+VMs.
+
+TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
+provide crypto-protection to the VMs.  TDX reserves part of the MKTME
+KeyIDs as TDX private KeyIDs, which are only accessible within the SEAM
+mode.  The BIOS is responsible for partitioning legacy MKTME KeyIDs and
+TDX KeyIDs.
+
+Before the TDX module can be used to create and run protected VMs, it
+must be loaded into the isolated range and properly initialized.  The TDX
+architecture doesn't require the BIOS to load the TDX module, but the
+kernel assumes it is loaded by the BIOS.
+
+TDX boot-time detection
+-----------------------
+
+The kernel detects TDX by detecting TDX private KeyIDs during kernel
+boot.  The dmesg below shows when TDX is enabled by the BIOS::
+
+  [..] virt/tdx: BIOS enabled: private KeyID range: [16, 64)
+
+TDX module initialization
+-------------------------
+
+The kernel talks to the TDX module via the new SEAMCALL instruction.  The
+TDX module implements SEAMCALL leaf functions to allow the kernel to
+initialize it.
+
+If the TDX module isn't loaded, the SEAMCALL instruction fails with a
+special error.  In this case, the kernel fails the module initialization
+and reports that the module isn't loaded::
+
+  [..] virt/tdx: module not loaded
+
+Initializing the TDX module consumes roughly ~1/256th of system RAM for
+use as 'metadata' for the TDX memory (for example, about 4GB on a
+machine with 1TB of RAM).  It also takes additional CPU time to
+initialize that metadata along with the TDX module itself.  Neither is
+trivial.  The kernel initializes the TDX module at runtime on demand.
+
+Besides initializing the TDX module, a per-cpu initialization SEAMCALL
+must be done on a given cpu before any other SEAMCALLs can be made on
+that cpu.
+
+The kernel provides two functions, tdx_enable() and tdx_cpu_enable(),
+to allow the user of TDX to enable the TDX module and to enable TDX on
+the local cpu, respectively.
+
+Making a SEAMCALL requires VMXON to have been done on that CPU.
+Currently only KVM implements VMXON.  For now, neither tdx_enable() nor
+tdx_cpu_enable() does VMXON internally (it is not trivial); both depend
+on the caller to guarantee that.
+
+To enable TDX, the caller of TDX should: 1) temporarily disable CPU
+hotplug; 2) do VMXON and tdx_cpu_enable() on all online cpus; 3) call
+tdx_enable().  For example::
+
+        cpus_read_lock();
+        on_each_cpu(vmxon_and_tdx_cpu_enable, NULL, 1);
+        ret = tdx_enable();
+        cpus_read_unlock();
+        if (ret)
+                goto no_tdx;
+        // TDX is ready to use
+
+And the caller of TDX must guarantee tdx_cpu_enable() has been done
+successfully on a cpu before running any other SEAMCALL on that cpu.
+A typical usage is to do both VMXON and tdx_cpu_enable() in the CPU
+hotplug online callback, and to refuse to online the cpu if
+tdx_cpu_enable() fails, as sketched below.
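+
+This is illustrative only: vmxon_on_this_cpu() and vmxoff_on_this_cpu()
+are placeholders for the caller's own VMXON/VMXOFF helpers (e.g., in
+KVM)::
+
+        static int tdx_user_cpu_online(unsigned int cpu)
+        {
+                int ret;
+
+                ret = vmxon_on_this_cpu();
+                if (ret)
+                        return ret;
+
+                ret = tdx_cpu_enable();
+                if (ret) {
+                        vmxoff_on_this_cpu();
+                        /* Returning an error refuses to online this cpu */
+                        return ret;
+                }
+
+                return 0;
+        }
+
+Such a callback can be registered via cpuhp_setup_state(); returning an
+error from it prevents the cpu from going online.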
+
+The user can consult dmesg to see whether the TDX module has been
+initialized.
+
+If the TDX module is initialized successfully, dmesg shows something
+like below::
+
+  [..] virt/tdx: 262668 KBs allocated for PAMT
+  [..] virt/tdx: module initialized
+
+If the TDX module fails to initialize, dmesg likewise reports the
+failure::
+
+  [..] virt/tdx: module initialization failed ...
+
+TDX Interaction with Other Kernel Components
+--------------------------------------------
+
+TDX Memory Policy
+~~~~~~~~~~~~~~~~~
+
+TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
+kernel which memory is TDX compatible.  The kernel needs to build a list
+of memory regions (out of CMRs) as "TDX-usable" memory and pass those
+regions to the TDX module.  Once this is done, those "TDX-usable" memory
+regions are fixed for the module's lifetime.
+
+To keep things simple, currently the kernel simply guarantees all pages
+in the page allocator are TDX memory.  Specifically, the kernel uses all
+system memory in the core-mm "at the time of TDX module initialization"
+as TDX memory, and from then on refuses to online any non-TDX memory
+via memory hotplug, as sketched below.
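+
+As a rough sketch, the memory hotplug check can be thought of as the
+memory notifier below.  This is illustrative only; tdx_memlist_contains()
+stands in for the actual TDX memory list lookup::
+
+        static int tdx_memory_notifier(struct notifier_block *nb,
+                                       unsigned long action, void *v)
+        {
+                struct memory_notify *mn = v;
+
+                if (action != MEM_GOING_ONLINE)
+                        return NOTIFY_OK;
+
+                /* Refuse to online memory that is not TDX-usable */
+                if (!tdx_memlist_contains(mn->start_pfn, mn->nr_pages))
+                        return NOTIFY_BAD;
+
+                return NOTIFY_OK;
+        }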
+
+Physical Memory Hotplug
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Note TDX assumes convertible memory is always physically present during
+the machine's runtime.  A non-buggy BIOS should never support
+hot-removal of any convertible memory.  This implementation doesn't
+handle ACPI memory removal but depends on the BIOS to behave correctly.
+
+CPU Hotplug
+~~~~~~~~~~~
+
+The TDX module requires that the per-cpu initialization SEAMCALL be
+done on a cpu before any other SEAMCALLs can be made on that cpu.  The
+kernel provides tdx_cpu_enable() to let the user of TDX do it when the
+user wants to use a new cpu for a TDX task.
+
+TDX doesn't support physical (ACPI) CPU hotplug.  During machine boot,
+TDX verifies all boot-time present logical CPUs are TDX compatible
+before enabling TDX.  A non-buggy BIOS should never support
+hot-add/removal of physical CPUs.  Currently the kernel doesn't handle
+physical CPU hotplug, but depends on the BIOS to behave correctly.
+
+Note TDX works with logical CPU online/offline; thus the kernel still
+allows offlining a logical CPU and onlining it again.
+
+Kexec()
+~~~~~~~
+
+There are two problems in terms of using kexec() to boot to a new kernel
+when the old kernel has enabled TDX: 1) Part of the memory pages are
+still TDX private pages; 2) There might be dirty cachelines associated
+with TDX private pages.
+
+The first problem doesn't matter much.  KeyID 0 doesn't have an
+integrity check.  Even if the new kernel wants to use a non-zero KeyID,
+it needs to convert the memory to that KeyID first, and such conversion
+works from any KeyID.
+
+However, the old kernel needs to guarantee no dirty cachelines are
+left behind before booting the new kernel, to avoid silent corruption
+from later cacheline writeback (Intel hardware doesn't guarantee cache
+coherency across different KeyIDs).
+
+Similar to AMD SME, the kernel uses wbinvd() to flush the cache before
+booting the new kernel, as sketched below.
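+
+Illustrative sketch, based on this series' changes to the kexec
+shutdown path::
+
+        stop_other_cpus();              /* flushes caches on remote cpus */
+        if (platform_tdx_enabled())
+                native_wbinvd();        /* flush the cache on this cpu */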
+
+Erratum
+~~~~~~~
+
+The first few generations of TDX hardware have an erratum.  A partial
+write to a TDX private memory cacheline will silently "poison" the
+line.  Subsequent reads will consume the poison and generate a machine
+check.
+
+A partial write is a memory write where a write transaction of less
+than a cacheline lands at the memory controller.  The CPU does these
+via non-temporal write instructions (like MOVNTI), or through UC/WC
+memory mappings.  Devices can also do partial writes via DMA.
+
+Theoretically, a kernel bug could do a partial write to TDX private
+memory and trigger an unexpected machine check.  What's more, the
+machine check code will present these as "Hardware error" when they
+were, in fact, a software-triggered issue.  In practice, though, this
+issue is hard to trigger.
+
+If the platform has this erratum, the kernel does two additional
+things: 1) it resets TDX private pages using MOVDIR64B in kexec()
+before booting the new kernel (see the sketch after this list); 2) it
+prints an additional message in the machine check handler to tell the
+user the machine check may be caused by a kernel bug on TDX private
+memory.
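+
+Illustratively, resetting one private page with MOVDIR64B could look
+like below.  This is a sketch only: MOVDIR64B requires a 64-byte-aligned
+destination, and 'page_va' / 'zero_buf' are placeholder names for the
+page's kernel address and a zeroed 64-byte source buffer::
+
+        /*
+         * MOVDIR64B does a full-cacheline direct store, which avoids
+         * the partial-write erratum and clears the page's TDX state.
+         */
+        for (i = 0; i < PAGE_SIZE; i += 64)
+                movdir64b(page_va + i, zero_buf);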
+
+Interaction vs S3 and deeper states
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TDX cannot survive S3 and deeper states.  The hardware resets and
+disables TDX completely when the platform goes to S3 or deeper.  Both
+TDX guests and the TDX module get destroyed permanently.
+
+The kernel uses S3 for suspend-to-ram, and S4 and deeper states for
+hibernation.  Currently, for simplicity, the kernel chooses to make TDX
+mutually exclusive with S3 and hibernation.
+
+The kernel disables TDX during early boot when hibernation support is
+available::
+
+  [..] virt/tdx: initialization failed: Hibernation support is enabled
+
+Add 'nohibernate' to the kernel command line to disable hibernation in
+order to use TDX.
+
+ACPI S3 is disabled during kernel early boot if TDX is enabled.  The user
+needs to turn off TDX in the BIOS in order to use S3.
+
+TDX Guest Support
+=================
 Since the host cannot directly access guest registers or memory, much
 normal functionality of a hypervisor must be moved into the guest. This is
 implemented using a Virtualization Exception (#VE) that is handled by the
@@ -20,7 +220,7 @@ TDX includes new hypercall-like mechanisms for communicating from the
 guest to the hypervisor or the TDX module.
 
 New TDX Exceptions
-==================
+------------------
 
 TDX guests behave differently from bare-metal and traditional VMX guests.
 In TDX guests, otherwise normal instructions or memory accesses can cause
@@ -30,7 +230,7 @@ Instructions marked with an '*' conditionally cause exceptions.  The
 details for these instructions are discussed below.
 
 Instruction-based #VE
----------------------
+~~~~~~~~~~~~~~~~~~~~~
 
 - Port I/O (INS, OUTS, IN, OUT)
 - HLT
@@ -41,7 +241,7 @@ Instruction-based #VE
 - CPUID*
 
 Instruction-based #GP
----------------------
+~~~~~~~~~~~~~~~~~~~~~
 
 - All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
   VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
@@ -52,7 +252,7 @@ Instruction-based #GP
 - RDMSR*,WRMSR*
 
 RDMSR/WRMSR Behavior
---------------------
+~~~~~~~~~~~~~~~~~~~~
 
 MSR access behavior falls into three categories:
 
@@ -73,7 +273,7 @@ trapping and handling in the TDX module.  Other than possibly being slow,
 these MSRs appear to function just as they would on bare metal.
 
 CPUID Behavior
---------------
+~~~~~~~~~~~~~~
 
 For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
 return values (in guest EAX/EBX/ECX/EDX) are configurable by the
@@ -93,7 +293,7 @@ not know how to handle. The guest kernel may ask the hypervisor for the
 value with a hypercall.
 
 #VE on Memory Accesses
-======================
+----------------------
 
 There are essentially two classes of TDX memory: private and shared.
 Private memory receives full TDX protections.  Its content is protected
@@ -107,7 +307,7 @@ entries.  This helps ensure that a guest does not place sensitive
 information in shared memory, exposing it to the untrusted hypervisor.
 
 #VE on Shared Memory
---------------------
+~~~~~~~~~~~~~~~~~~~~
 
 Access to shared mappings can cause a #VE.  The hypervisor ultimately
 controls whether a shared memory access causes a #VE, so the guest must be
@@ -127,7 +327,7 @@ be careful not to access device MMIO regions unless it is also prepared to
 handle a #VE.
 
 #VE on Private Pages
---------------------
+~~~~~~~~~~~~~~~~~~~~
 
 An access to private mappings can also cause a #VE.  Since all kernel
 memory is also private memory, the kernel might theoretically need to
@@ -145,7 +345,7 @@ The hypervisor is permitted to unilaterally move accepted pages to a
 to handle the exception.
 
 Linux #VE handler
-=================
+-----------------
 
 Just like page faults or #GP's, #VE exceptions can be either handled or be
 fatal.  Typically, an unhandled userspace #VE results in a SIGSEGV.
@@ -167,7 +367,7 @@ While the block is in place, any #VE is elevated to a double fault (#DF)
 which is not recoverable.
 
 MMIO handling
-=============
+-------------
 
 In non-TDX VMs, MMIO is usually implemented by giving a guest access to a
 mapping which will cause a VMEXIT on access, and then the hypervisor
@@ -189,7 +389,7 @@ MMIO access via other means (like structure overlays) may result in an
 oops.
 
 Shared Memory Conversions
-=========================
+-------------------------
 
 All TDX guest memory starts out as private at boot.  This memory can not
 be accessed by the hypervisor.  However, some kernel users like device
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  2023-11-09 11:55 ` [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code Kai Huang
@ 2023-11-09 16:38   ` Dave Hansen
  2023-11-14 19:24   ` Isaku Yamahata
  1 sibling, 0 replies; 66+ messages in thread
From: Dave Hansen @ 2023-11-09 16:38 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, kirill.shutemov, peterz, tony.luck, tglx, bp, mingo, hpa,
	seanjc, pbonzini, rafael, david, dan.j.williams, len.brown, ak,
	isaku.yamahata, ying.huang, chao.gao, sathyanarayanan.kuppuswamy,
	nik.borisov, bagasdotme, sagis, imammedo

On 11/9/23 03:55, Kai Huang wrote:
> Some SEAMCALLs use the RDRAND hardware and can fail for the same reasons
> as RDRAND.  Use the kernel RDRAND retry logic for them.
> 
> There are three __seamcall*() variants.  Do the SEAMCALL retry in common
> code and add a wrapper for each of them.

The new common wrapper looks great, thanks:

Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization
  2023-11-09 11:55 ` [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization Kai Huang
@ 2023-11-09 23:29   ` Dave Hansen
  2023-11-10  2:23     ` Huang, Kai
  2023-11-15 19:35   ` Isaku Yamahata
  1 sibling, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-11-09 23:29 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, kirill.shutemov, peterz, tony.luck, tglx, bp, mingo, hpa,
	seanjc, pbonzini, rafael, david, dan.j.williams, len.brown, ak,
	isaku.yamahata, ying.huang, chao.gao, sathyanarayanan.kuppuswamy,
	nik.borisov, bagasdotme, sagis, imammedo

[-- Attachment #1: Type: text/plain, Size: 1101 bytes --]

On 11/9/23 03:55, Kai Huang wrote:
...> +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_TDMRS,
> +			&tdmr_sysinfo->max_tdmrs);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_RESERVED_PER_TDMR,
> +			&tdmr_sysinfo->max_reserved_per_tdmr);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_4K_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_4K]);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_2M_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_2M]);
> +	if (ret)
> +		return ret;
> +
> +	return read_sys_metadata_field16(MD_FIELD_ID_PAMT_1G_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
> +}

I kinda despise how this looks.  It's impossible to read.

I'd much rather do something like the attached where you just map the
field number to a structure member.  Note that this kind of structure
could also be converted to leverage the bulk metadata query in the future.

Any objections to doing something more like the attached completely
untested patch?

[-- Attachment #2: cleaner-tdx-metadata-0.patch --]
[-- Type: text/x-patch, Size: 2588 bytes --]



---

 b/arch/x86/virt/vmx/tdx/tdx.c |   59 ++++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 25 deletions(-)

diff -puN arch/x86/virt/vmx/tdx/tdx.c~cleaner-tdx-metadata-0 arch/x86/virt/vmx/tdx/tdx.c
--- a/arch/x86/virt/vmx/tdx/tdx.c~cleaner-tdx-metadata-0	2023-11-09 14:58:06.504531884 -0800
+++ b/arch/x86/virt/vmx/tdx/tdx.c	2023-11-09 15:22:46.895941908 -0800
@@ -256,50 +256,59 @@ static int read_sys_metadata_field(u64 f
 	return 0;
 }
 
-static int read_sys_metadata_field16(u64 field_id, u16 *data)
+static int read_sys_metadata_field16(u64 field_id,
+				     int offset,
+				     struct tdx_tdmr_sysinfo *ts)
 {
-	u64 _data;
+	u16 *ts_member = ((void *)ts) + offset;
+	u64 tmp;
 	int ret;
 
 	if (WARN_ON_ONCE(MD_FIELD_ID_ELE_SIZE_CODE(field_id) !=
 			MD_FIELD_ID_ELE_SIZE_16BIT))
 		return -EINVAL;
 
-	ret = read_sys_metadata_field(field_id, &_data);
+	ret = read_sys_metadata_field(field_id, &tmp);
 	if (ret)
 		return ret;
 
-	*data = (u16)_data;
+	*ts_member = tmp;
 
 	return 0;
 }
 
+struct field_mapping {
+	u64 field_id;
+	int offset;
+};
+
+#define TD_SYSINFO_MAP(_field_id, _offset) \
+	{ .field_id = MD_FIELD_ID_##_field_id,	   \
+	  .offset   = offsetof(struct tdx_tdmr_sysinfo, _offset) }
+
+static const struct field_mapping fields[] = {
+	TD_SYSINFO_MAP(MAX_TDMRS,	      max_tdmrs),
+	TD_SYSINFO_MAP(MAX_RESERVED_PER_TDMR, max_reserved_per_tdmr),
+	TD_SYSINFO_MAP(PAMT_4K_ENTRY_SIZE,    pamt_entry_size[TDX_PS_4K]),
+	TD_SYSINFO_MAP(PAMT_2M_ENTRY_SIZE,    pamt_entry_size[TDX_PS_2M]),
+	TD_SYSINFO_MAP(PAMT_1G_ENTRY_SIZE,    pamt_entry_size[TDX_PS_1G]),
+};
+
 static int get_tdx_tdmr_sysinfo(struct tdx_tdmr_sysinfo *tdmr_sysinfo)
 {
 	int ret;
+	int i;
 
-	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_TDMRS,
-			&tdmr_sysinfo->max_tdmrs);
-	if (ret)
-		return ret;
-
-	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_RESERVED_PER_TDMR,
-			&tdmr_sysinfo->max_reserved_per_tdmr);
-	if (ret)
-		return ret;
-
-	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_4K_ENTRY_SIZE,
-			&tdmr_sysinfo->pamt_entry_size[TDX_PS_4K]);
-	if (ret)
-		return ret;
+	for (i = 0; i < ARRAY_SIZE(fields); i++) {
+		ret = read_sys_metadata_field16(fields[i].field_id,
+						fields[i].offset,
+						tdmr_sysinfo);
+		if (ret)
+			return ret;
+	}
 
-	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_2M_ENTRY_SIZE,
-			&tdmr_sysinfo->pamt_entry_size[TDX_PS_2M]);
-	if (ret)
-		return ret;
-
-	return read_sys_metadata_field16(MD_FIELD_ID_PAMT_1G_ENTRY_SIZE,
-			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
+	return 0;
 }
 
 static int init_tdx_module(void)
_

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization
  2023-11-09 23:29   ` Dave Hansen
@ 2023-11-10  2:23     ` Huang, Kai
  0 siblings, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-10  2:23 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, peterz, Shahar, Sagi, imammedo, bp, Gao, Chao,
	Brown, Len, rafael, Huang, Ying, Williams, Dan J, x86

On Thu, 2023-11-09 at 15:29 -0800, Dave Hansen wrote:
> On 11/9/23 03:55, Kai Huang wrote:
> ...> +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_TDMRS,
> > +			&tdmr_sysinfo->max_tdmrs);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_RESERVED_PER_TDMR,
> > +			&tdmr_sysinfo->max_reserved_per_tdmr);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_4K_ENTRY_SIZE,
> > +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_4K]);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_2M_ENTRY_SIZE,
> > +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_2M]);
> > +	if (ret)
> > +		return ret;
> > +
> > +	return read_sys_metadata_field16(MD_FIELD_ID_PAMT_1G_ENTRY_SIZE,
> > +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
> > +}
> 
> I kinda despise how this looks.  It's impossible to read.
> 
> I'd much rather do something like the attached where you just map the
> field number to a structure member.  Note that this kind of structure
> could also be converted to leverage the bulk metadata query in the future.
> 
> Any objections to doing something more like the attached completely
> untested patch?

Hi Dave,

No objection, and thanks!  I've just tested with your diff and I can
successfully initialize the TDX module.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 00/23] TDX host kernel support
  2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
                   ` (22 preceding siblings ...)
  2023-11-09 11:56 ` [PATCH v15 23/23] Documentation/x86: Add documentation for TDX host support Kai Huang
@ 2023-11-13  8:40 ` Nikolay Borisov
  2023-11-13  9:11   ` Huang, Kai
  23 siblings, 1 reply; 66+ messages in thread
From: Nikolay Borisov @ 2023-11-13  8:40 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, dave.hansen, kirill.shutemov, peterz, tony.luck, tglx, bp,
	mingo, hpa, seanjc, pbonzini, rafael, david, dan.j.williams,
	len.brown, ak, isaku.yamahata, ying.huang, chao.gao,
	sathyanarayanan.kuppuswamy, bagasdotme, sagis, imammedo



On 9.11.23 г. 13:55 ч., Kai Huang wrote:
> Hi all,
> 
> (Again I didn't include the full cover letter here to save people's time.
>   The full coverletter can be found in the v13 [1]).
> 
> This version mainly addressed one issue that we (Intel people) discussed
> internally: to only initialize TDX module 1.5 and later versions.  The
> reason is TDX 1.0 has some incompatibility issues to the TDX 1.5 and
> later version (for detailed information please see [2]).  There's no
> value to support TDX 1.0 when the TDX 1.5 are already out.
> 
> Hi Kirill, Dave (and all),
> 
> Could you help to review the new patch mentioned in the detailed
> changes below (and other minor changes due to rebase to it)?
> 
> Appreciate a lot!
> 

It looks good as a foundation to build on.  Apart from Dave's comment
about the read-out of metadata fields, are there any outstanding issues
impeding the merge of this series - Dave?


FWIW:

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 00/23] TDX host kernel support
  2023-11-13  8:40 ` [PATCH v15 00/23] TDX host kernel support Nikolay Borisov
@ 2023-11-13  9:11   ` Huang, Kai
  0 siblings, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-13  9:11 UTC (permalink / raw)
  To: kvm, nik.borisov, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Hansen, Dave, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	Luck, Tony, hpa, peterz, Shahar, Sagi, imammedo, bp, Gao, Chao,
	rafael, Brown, Len, Huang, Ying, Williams, Dan J, x86

On Mon, 2023-11-13 at 10:40 +0200, Nikolay Borisov wrote:
> 
> On 9.11.23 г. 13:55 ч., Kai Huang wrote:
> > Hi all,
> > 
> > (Again I didn't include the full cover letter here to save people's time.
> >   The full coverletter can be found in the v13 [1]).
> > 
> > This version mainly addressed one issue that we (Intel people) discussed
> > internally: to only initialize TDX module 1.5 and later versions.  The
> > reason is TDX 1.0 has some incompatibility issues to the TDX 1.5 and
> > later version (for detailed information please see [2]).  There's no
> > value to support TDX 1.0 when the TDX 1.5 are already out.
> > 
> > Hi Kirill, Dave (and all),
> > 
> > Could you help to review the new patch mentioned in the detailed
> > changes below (and other minor changes due to rebase to it)?
> > 
> > Appreciate a lot!
> > 
> 
> It looks good as a foundation to build on apart from Dave's comment 
> about the read out of metadata fields are there any outstanding issues 
> impending the merge of this series - Dave?

I believe many people are attending Linux plumber this week. :-)

> 
> 
> FWIW:
> 
> Reviewed-by: Nikolay Borisov <nborisov@suse.com>

Thanks!


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  2023-11-09 11:55 ` [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code Kai Huang
  2023-11-09 16:38   ` Dave Hansen
@ 2023-11-14 19:24   ` Isaku Yamahata
  2023-11-15 10:41     ` Huang, Kai
  1 sibling, 1 reply; 66+ messages in thread
From: Isaku Yamahata @ 2023-11-14 19:24 UTC (permalink / raw)
  To: Kai Huang
  Cc: linux-kernel, kvm, x86, dave.hansen, kirill.shutemov, peterz,
	tony.luck, tglx, bp, mingo, hpa, seanjc, pbonzini, rafael, david,
	dan.j.williams, len.brown, ak, isaku.yamahata, ying.huang,
	chao.gao, sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme,
	sagis, imammedo, isaku.yamahata

On Fri, Nov 10, 2023 at 12:55:42AM +1300,
Kai Huang <kai.huang@intel.com> wrote:

> Some SEAMCALLs use the RDRAND hardware and can fail for the same reasons
> as RDRAND.  Use the kernel RDRAND retry logic for them.
> 
> There are three __seamcall*() variants.  Do the SEAMCALL retry in common
> code and add a wrapper for each of them.
> 
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Reviewed-by: Kirill A. Shutemov <kirll.shutemov@linux.intel.com>
> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> ---
> 
> v14 -> v15:
>  - Added Sathy's tag.
> 
> v13 -> v14:
>  - Use real function sc_retry() instead of using macros. (Dave)
>  - Added Kirill's tag.
> 
> v12 -> v13:
>  - New implementation due to TDCALL assembly series.
> 
> ---
>  arch/x86/include/asm/tdx.h | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index ea9a0320b1f8..f1c0c15469f8 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -24,6 +24,11 @@
>  #define TDX_SEAMCALL_GP			(TDX_SW_ERROR | X86_TRAP_GP)
>  #define TDX_SEAMCALL_UD			(TDX_SW_ERROR | X86_TRAP_UD)
>  
> +/*
> + * TDX module SEAMCALL leaf function error codes
> + */
> +#define TDX_RND_NO_ENTROPY	0x8000020300000000ULL
> +
>  #ifndef __ASSEMBLY__
>  
>  /*
> @@ -84,6 +89,27 @@ u64 __seamcall(u64 fn, struct tdx_module_args *args);
>  u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
>  u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
>  
> +#include <asm/archrandom.h>
> +
> +typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
> +
> +static inline u64 sc_retry(sc_func_t func, u64 fn,
> +			   struct tdx_module_args *args)
> +{
> +	int retry = RDRAND_RETRY_LOOPS;
> +	u64 ret;
> +
> +	do {
> +		ret = func(fn, args);
> +	} while (ret == TDX_RND_NO_ENTROPY && --retry);

This loop assumes that args isn't touched when TDX_RND_NO_ENTROPY is returned.
That's not true.  TDH.SYS.INIT() and TDH.SYS.LP.INIT() clear RCX, RDX, etc. on
error, including TDX_RND_NO_ENTROPY.  Because TDH.SYS.INIT() takes RCX as input,
this wrapper doesn't work for it.  TDH.SYS.LP.INIT() doesn't use RCX, RDX ... as
input, so it doesn't matter there.

Other SEAMCALLs don't touch registers on the no entropy error:
TDH.EXPORT.STATE.IMMUTABLE(), TDH.IMPORT.STATE.IMMUTABLE(), TDH.MNG.ADDCX(),
and TDH.MNG.CREATE().  TDH.SYS.INIT() is an exception.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  2023-11-14 19:24   ` Isaku Yamahata
@ 2023-11-15 10:41     ` Huang, Kai
  2023-11-15 19:26       ` Isaku Yamahata
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-11-15 10:41 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, sathyanarayanan.kuppuswamy, Hansen, Dave, david, bagasdotme,
	Luck, Tony, ak, kirill.shutemov, seanjc, mingo, pbonzini, tglx,
	Yamahata, Isaku, linux-kernel, nik.borisov, hpa, peterz, Shahar,
	Sagi, imammedo, bp, Gao, Chao, rafael, Brown, Len, Huang, Ying,
	Williams, Dan J, x86


> > +#include <asm/archrandom.h>
> > +
> > +typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
> > +
> > +static inline u64 sc_retry(sc_func_t func, u64 fn,
> > +			   struct tdx_module_args *args)
> > +{
> > +	int retry = RDRAND_RETRY_LOOPS;
> > +	u64 ret;
> > +
> > +	do {
> > +		ret = func(fn, args);
> > +	} while (ret == TDX_RND_NO_ENTROPY && --retry);
> 
> This loop assumes that args isn't touched when TDX_RND_NO_ENTRYPOY is returned.
> It's not true.  TDH.SYS.INIT() and TDH.SYS.LP.INIT() clear RCX, RDX, etc on
> error including TDX_RND_NO_ENTRY.  Because TDH.SYS.INIT() takes RCX as input,
> this wrapper doesn't work.  TDH.SYS.LP.INIT() doesn't use RCX, RDX ... as
> input. So it doesn't matter.
> 
> Other SEAMCALLs doesn't touch registers on the no entropy error.
> TDH.EXPORTS.STATE.IMMUTABLE(), TDH.IMPORTS.STATE.IMMUTABLE(), TDH.MNG.ADDCX(),
> and TDX.MNG.CREATE().  TDH.SYS.INIT() is an exception.

If I am reading the spec (TDX module 1.5 ABI) correctly, TDH.SYS.INIT doesn't
return TDX_RND_NO_ENTROPY.  TDH.SYS.LP.INIT indeed can return NO_ENTROPY, but
as you said it doesn't take any register as input.  So technically the code
works fine.  (Even if TDH.SYS.INIT could return NO_ENTROPY the code would still
work fine because RCX must be 0 for TDH.SYS.INIT.)

Also, I can hardly think of any reason why the TDX module would need to clobber
input registers in case of NO_ENTROPY for *ANY* SEAMCALL.  But despite that, I
am not opposing the idea that it *MIGHT* be better to "not assume" NO_ENTROPY
will never clobber registers, e.g., for the sake of future extensibility.  In
that case, the below diff should address it:

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index a621721f63dd..962a7a6be721 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -97,12 +97,23 @@ typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
 static inline u64 sc_retry(sc_func_t func, u64 fn,
                           struct tdx_module_args *args)
 {
+       struct tdx_module_args _args = *args;
        int retry = RDRAND_RETRY_LOOPS;
        u64 ret;
 
-       do {
-               ret = func(fn, args);
-       } while (ret == TDX_RND_NO_ENTROPY && --retry);
+again:
+       ret = func(fn, args);
+       if (ret == TDX_RND_NO_ENTROPY && --retry) {
+               /*
+                * Do not assume TDX module will never clobber the input
+                * registers when any SEAMCALL fails with out of entropy.
+                * In this case the original input registers in @args
+                * are clobbered.  Always restore the input registers
+                * before retrying the SEAMCALL.
+                */
+               *args = _args;
+               goto again;
+       }
 
        return ret;
 }


The downside is we will have an additional memory copy of 'struct
tdx_module_args' for each SEAMCALL, but I don't believe this will make any
measurable difference in practice.
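
(For reference, callers are unaffected by this change.  A wrapper in the
style of this series keeps working as-is, e.g.:

	static inline u64 seamcall_ret(u64 fn, struct tdx_module_args *args)
	{
		return sc_retry(__seamcall_ret, fn, args);
	}

only the retry path inside sc_retry() restores @args before each retry.)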

Or, we can go and ask TDX module guys to promise no input registers will be
clobbered in case of NO_ENTROPY.

Hi Dave,

Do you have any opinion?

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code
  2023-11-15 10:41     ` Huang, Kai
@ 2023-11-15 19:26       ` Isaku Yamahata
  0 siblings, 0 replies; 66+ messages in thread
From: Isaku Yamahata @ 2023-11-15 19:26 UTC (permalink / raw)
  To: Huang, Kai
  Cc: isaku.yamahata, kvm, sathyanarayanan.kuppuswamy, Hansen, Dave,
	david, bagasdotme, Luck, Tony, ak, kirill.shutemov, seanjc,
	mingo, pbonzini, tglx, Yamahata, Isaku, linux-kernel,
	nik.borisov, hpa, peterz, Shahar, Sagi, imammedo, bp, Gao, Chao,
	rafael, Brown, Len, Huang, Ying, Williams, Dan J, x86

On Wed, Nov 15, 2023 at 10:41:46AM +0000,
"Huang, Kai" <kai.huang@intel.com> wrote:

> 
> > > +#include <asm/archrandom.h>
> > > +
> > > +typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
> > > +
> > > +static inline u64 sc_retry(sc_func_t func, u64 fn,
> > > +			   struct tdx_module_args *args)
> > > +{
> > > +	int retry = RDRAND_RETRY_LOOPS;
> > > +	u64 ret;
> > > +
> > > +	do {
> > > +		ret = func(fn, args);
> > > +	} while (ret == TDX_RND_NO_ENTROPY && --retry);
> > 
> > This loop assumes that args isn't touched when TDX_RND_NO_ENTRYPOY is returned.
> > It's not true.  TDH.SYS.INIT() and TDH.SYS.LP.INIT() clear RCX, RDX, etc on
> > error including TDX_RND_NO_ENTRY.  Because TDH.SYS.INIT() takes RCX as input,
> > this wrapper doesn't work.  TDH.SYS.LP.INIT() doesn't use RCX, RDX ... as
> > input. So it doesn't matter.
> > 
> > Other SEAMCALLs doesn't touch registers on the no entropy error.
> > TDH.EXPORTS.STATE.IMMUTABLE(), TDH.IMPORTS.STATE.IMMUTABLE(), TDH.MNG.ADDCX(),
> > and TDX.MNG.CREATE().  TDH.SYS.INIT() is an exception.
> 
> If I am reading the spec (TDX module 1.5 ABI) correctly the TDH.SYS.INIT doesn't
> return TDX_RND_NO_ENTROPY.

The next updated spec will fix it.
                                  

> TDH.SYS.LP.INIT indeed can return NO_ENTROPY but as
> you said it doesn't take any register as input.  So technically the code works
> fine.  (Even the TDH.SYS.INIT can return NO_ENTROPY the code still works fine
> because the RCX must be 0 for TDH.SYS.INIT.)

Ah yes, I agree with you. So it doesn't matter.


> Also, I can hardly think out of any reason why TDX module needs to clobber input
> registers in case of NO_ENTROPY for *ANY* SEAMCALL.  But despite that, I am not
> opposing the idea that it *MIGHT* be better to "not assume" NO_ENTROPY will
> never clobber registers either, e.g., for the sake of future extendibility.  In
> this case, the below diff should address:

Now that we agree TDH.SYS.INIT() and TDH.SYS.LP.INIT() don't matter,
I'm fine with this patch.  (TDX KVM handles the other SEAMCALLs itself.)

Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization
  2023-11-09 11:55 ` [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization Kai Huang
  2023-11-09 23:29   ` Dave Hansen
@ 2023-11-15 19:35   ` Isaku Yamahata
  2023-11-16  3:19     ` Huang, Kai
  1 sibling, 1 reply; 66+ messages in thread
From: Isaku Yamahata @ 2023-11-15 19:35 UTC (permalink / raw)
  To: Kai Huang
  Cc: linux-kernel, kvm, x86, dave.hansen, kirill.shutemov, peterz,
	tony.luck, tglx, bp, mingo, hpa, seanjc, pbonzini, rafael, david,
	dan.j.williams, len.brown, ak, isaku.yamahata, ying.huang,
	chao.gao, sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme,
	sagis, imammedo, isaku.yamahata

On Fri, Nov 10, 2023 at 12:55:46AM +1300,
Kai Huang <kai.huang@intel.com> wrote:

> The TDX module global metadata provides system-wide information about
> the module.  The TDX module provides SEAMCALLs to allow the kernel to
> query one specific global metadata field (entry) or all fields.
> 
> TL;DR:
> 
> Use the TDH.SYS.RD SEAMCALL to read the essential global metadata for
> module initialization, and at the same time, to only initialize TDX
> module with version 1.5 and later.
> 
> Long Version:
> 
> 1) Only initialize TDX module with version 1.5 and later
> 
> TDX module 1.0 has some compatibility issues with the later versions of
> module, as documented in the "Intel TDX module ABI incompatibilities
> between TDX1.0 and TDX1.5" spec.  Basically there's no value to use TDX
> module 1.0 when TDX module 1.5 and later versions are already available.
> To keep things simple, just support initializing the TDX module 1.5 and
> later.
> 
> 2) Get the essential global metadata for module initialization
> 
> TDX reports a list of "Convertible Memory Region" (CMR) to tell the
> kernel which memory is TDX compatible.  The kernel needs to build a list
> of memory regions (out of CMRs) as "TDX-usable" memory and pass them to
> the TDX module.  The kernel does this by constructing a list of "TD
> Memory Regions" (TDMRs) to cover all these memory regions and passing
> them to the TDX module.
> 
> Each TDMR is a TDX architectural data structure containing the memory
> region that the TDMR covers, plus the information to track (within this
> TDMR): a) the "Physical Address Metadata Table" (PAMT) to track each TDX
> memory page's status (such as which TDX guest "owns" a given page), and
> b) the "reserved areas" to mark memory holes that cannot be used as TDX
> memory.
> 
> The kernel needs to get below metadata from the TDX module to build the
> list of TDMRs: a) the maximum number of supported TDMRs, b) the maximum
> number of supported reserved areas per TDMR and, c) the PAMT entry size
> for each TDX-supported page size.
> 
> Note the TDX module internally checks whether the "TDX-usable" memory
> regions passed via TDMRs are truly convertible.  Just skip reading the
> CMRs and manually checking memory regions against them; let the TDX
> module do the check.
> 
> == Implementation ==
> 
> TDX module 1.0 uses TDH.SYS.INFO SEAMCALL to report the global metadata
> in a fixed-size (1024-bytes) structure 'TDSYSINFO_STRUCT'.  TDX module
> 1.5 adds more metadata fields, and introduces the new TDH.SYS.{RD|RDALL}
> SEAMCALLs for reading the metadata.  The new metadata mechanism removes
> the fixed-size limitation of the structure 'TDSYSINFO_STRUCT' and allows
> the TDX module to support an unlimited number of metadata fields.
> 
> TDX module 1.5 and later versions still support TDH.SYS.INFO for
> compatibility with TDX module 1.0, but it may only report part of the
> metadata via 'TDSYSINFO_STRUCT'.  For any new metadata, the kernel
> must use TDH.SYS.{RD|RDALL}.
> 
> To achieve the above two goals mentioned in 1) and 2), just use the
> TDH.SYS.RD to read the essential metadata fields related to the TDMRs.
> 
> TDH.SYS.RD returns *one* metadata field for a given "Metadata Field ID".
> That is enough for getting the few fields needed for module
> initialization.  On the other hand, TDH.SYS.RDALL reports all metadata
> fields to a 4KB buffer provided by the kernel, which is a bit of
> overkill here.
> 
> It may be beneficial to get all metadata fields at once here so they can
> also be used by KVM (some are essential for creating basic TDX guests),
> but technically it's unknown how many 4K pages are needed to fill all
> the metadata.  Thus it's better to read metadata when needed.
> 
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> ---
> 
> v14 -> v15:
>  - New patch to use TDH.SYS.RD to read TDX module global metadata for
>    module initialization and stop initializing 1.0 module.
> 
> ---
>  arch/x86/include/asm/shared/tdx.h |  1 +
>  arch/x86/virt/vmx/tdx/tdx.c       | 75 ++++++++++++++++++++++++++++++-
>  arch/x86/virt/vmx/tdx/tdx.h       | 39 ++++++++++++++++
>  3 files changed, 114 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
> index a4036149c484..fdfd41511b02 100644
> --- a/arch/x86/include/asm/shared/tdx.h
> +++ b/arch/x86/include/asm/shared/tdx.h
> @@ -59,6 +59,7 @@
>  #define TDX_PS_4K	0
>  #define TDX_PS_2M	1
>  #define TDX_PS_1G	2
> +#define TDX_PS_NR	(TDX_PS_1G + 1)
>  
>  #ifndef __ASSEMBLY__
>  
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index d1affb30f74d..d24027993983 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -235,8 +235,75 @@ static int build_tdx_memlist(struct list_head *tmb_list)
>  	return ret;
>  }
>  
> +static int read_sys_metadata_field(u64 field_id, u64 *data)
> +{
> +	struct tdx_module_args args = {};
> +	int ret;
> +
> +	/*
> +	 * TDH.SYS.RD -- reads one global metadata field
> +	 *  - RDX (in): the field to read
> +	 *  - R8 (out): the field data
> +	 */
> +	args.rdx = field_id;
> +	ret = seamcall_prerr_ret(TDH_SYS_RD, &args);
> +	if (ret)
> +		return ret;
> +
> +	*data = args.r8;
> +
> +	return 0;
> +}
> +
> +static int read_sys_metadata_field16(u64 field_id, u16 *data)
> +{
> +	u64 _data;
> +	int ret;
> +
> +	if (WARN_ON_ONCE(MD_FIELD_ID_ELE_SIZE_CODE(field_id) !=
> +			MD_FIELD_ID_ELE_SIZE_16BIT))
> +		return -EINVAL;
> +
> +	ret = read_sys_metadata_field(field_id, &_data);
> +	if (ret)
> +		return ret;
> +
> +	*data = (u16)_data;
> +
> +	return 0;
> +}
> +
> +static int get_tdx_tdmr_sysinfo(struct tdx_tdmr_sysinfo *tdmr_sysinfo)
> +{
> +	int ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_TDMRS,
> +			&tdmr_sysinfo->max_tdmrs);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_MAX_RESERVED_PER_TDMR,
> +			&tdmr_sysinfo->max_reserved_per_tdmr);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_4K_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_4K]);
> +	if (ret)
> +		return ret;
> +
> +	ret = read_sys_metadata_field16(MD_FIELD_ID_PAMT_2M_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_2M]);
> +	if (ret)
> +		return ret;
> +
> +	return read_sys_metadata_field16(MD_FIELD_ID_PAMT_1G_ENTRY_SIZE,
> +			&tdmr_sysinfo->pamt_entry_size[TDX_PS_1G]);
> +}
> +

Now we don't query the version, build info, attributes, etc.  Because
it's important to know the module's version/attributes, can we query and
print them as before?  Maybe in another patch.
In the long term, that info would be exported via sysfs, though.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization
  2023-11-15 19:35   ` Isaku Yamahata
@ 2023-11-16  3:19     ` Huang, Kai
  0 siblings, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-16  3:19 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, sathyanarayanan.kuppuswamy, Hansen, Dave, david, bagasdotme,
	Luck, Tony, ak, kirill.shutemov, seanjc, mingo, pbonzini, tglx,
	Yamahata, Isaku, linux-kernel, nik.borisov, hpa, peterz, Shahar,
	Sagi, imammedo, bp, Gao, Chao, rafael, Brown, Len, Huang, Ying,
	Williams, Dan J, x86

On Wed, 2023-11-15 at 11:35 -0800, Isaku Yamahata wrote:
> Now we don't query the versions, build info, attributes, and etc.  Because it's
> important to know its version/attributes, can we query and print them
> as before? Maybe with another path.
> In long term, those info would be exported via sysfs, though.

I am planning to do the sysfs part soon (not long term) after the basic
TDX functionality is merged.  The TDX guest side also has such a
requirement, so we can do it together.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-09 11:55 ` [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory Kai Huang
@ 2023-11-27 18:13   ` Dave Hansen
  2023-11-27 19:33     ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-11-27 18:13 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, kirill.shutemov, peterz, tony.luck, tglx, bp, mingo, hpa,
	seanjc, pbonzini, rafael, david, dan.j.williams, len.brown, ak,
	isaku.yamahata, ying.huang, chao.gao, sathyanarayanan.kuppuswamy,
	nik.borisov, bagasdotme, sagis, imammedo

On 11/9/23 03:55, Kai Huang wrote:
...
> --- a/arch/x86/kernel/reboot.c
> +++ b/arch/x86/kernel/reboot.c
> @@ -31,6 +31,7 @@
>  #include <asm/realmode.h>
>  #include <asm/x86_init.h>
>  #include <asm/efi.h>
> +#include <asm/tdx.h>
>  
>  /*
>   * Power off function, if any
> @@ -741,6 +742,20 @@ void native_machine_shutdown(void)
>  	local_irq_disable();
>  	stop_other_cpus();
>  #endif
> +	/*
> +	 * stop_other_cpus() has flushed all dirty cachelines of TDX
> +	 * private memory on remote cpus.  Unlike SME, which does the
> +	 * cache flush on _this_ cpu in the relocate_kernel(), flush
> +	 * the cache for _this_ cpu here.  This is because on the
> +	 * platforms with "partial write machine check" erratum the
> +	 * kernel needs to convert all TDX private pages back to normal
> +	 * before booting to the new kernel in kexec(), and the cache
> +	 * flush must be done before that.  If the kernel took SME's way,
> +	 * it would have to muck with the relocate_kernel() assembly to
> +	 * do memory conversion.
> +	 */
> +	if (platform_tdx_enabled())
> +		native_wbinvd();

Why can't the TDX host code just set host_mem_enc_active=1?

Sure, you'll end up *using* the SME WBINVD support, but then you don't
have two different WBINVD call sites.  You also don't have to mess with
a single line of assembly.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 18:13   ` Dave Hansen
@ 2023-11-27 19:33     ` Huang, Kai
  2023-11-27 20:02       ` Huang, Kai
  2023-11-27 20:05       ` Dave Hansen
  0 siblings, 2 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-27 19:33 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, peterz, sagis, imammedo, bp, Gao, Chao, Brown,
	Len, rafael, Huang, Ying, Williams, Dan J, x86

On Mon, 2023-11-27 at 10:13 -0800, Hansen, Dave wrote:
> On 11/9/23 03:55, Kai Huang wrote:
> ...
> > --- a/arch/x86/kernel/reboot.c
> > +++ b/arch/x86/kernel/reboot.c
> > @@ -31,6 +31,7 @@
> >  #include <asm/realmode.h>
> >  #include <asm/x86_init.h>
> >  #include <asm/efi.h>
> > +#include <asm/tdx.h>
> >  
> >  /*
> >   * Power off function, if any
> > @@ -741,6 +742,20 @@ void native_machine_shutdown(void)
> >  	local_irq_disable();
> >  	stop_other_cpus();
> >  #endif
> > +	/*
> > +	 * stop_other_cpus() has flushed all dirty cachelines of TDX
> > +	 * private memory on remote cpus.  Unlike SME, which does the
> > +	 * cache flush on _this_ cpu in the relocate_kernel(), flush
> > +	 * the cache for _this_ cpu here.  This is because on the
> > +	 * platforms with "partial write machine check" erratum the
> > +	 * kernel needs to convert all TDX private pages back to normal
> > +	 * before booting to the new kernel in kexec(), and the cache
> > +	 * flush must be done before that.  If the kernel took SME's way,
> > +	 * it would have to muck with the relocate_kernel() assembly to
> > +	 * do memory conversion.
> > +	 */
> > +	if (platform_tdx_enabled())
> > +		native_wbinvd();
> 
> Why can't the TDX host code just set host_mem_enc_active=1?
> 
> Sure, you'll end up *using* the SME WBINVD support, but then you don't
> have two different WBINVD call sites.  You also don't have to mess with
> a single line of assembly.

I wanted to avoid changing the assembly.

Perhaps the comment isn't very clear.  Flushing the cache (on the CPU running
kexec) when host_mem_enc_active=1 is handled in the relocate_kernel() assembly,
which happens at a very late stage, right before jumping to the new kernel.
However, for TDX, when the platform has the erratum we need to convert TDX
private pages back to normal, which must be done after flushing the cache.  If
we reuse host_mem_enc_active=1, then we will need to change the assembly code
to do that.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 19:33     ` Huang, Kai
@ 2023-11-27 20:02       ` Huang, Kai
  2023-11-27 20:05       ` Dave Hansen
  1 sibling, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-27 20:02 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: Williams, Dan J, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, rafael, Gao, Chao

On Mon, 2023-11-27 at 19:33 +0000, Huang, Kai wrote:
> On Mon, 2023-11-27 at 10:13 -0800, Hansen, Dave wrote:
> > On 11/9/23 03:55, Kai Huang wrote:
> > ...
> > > --- a/arch/x86/kernel/reboot.c
> > > +++ b/arch/x86/kernel/reboot.c
> > > @@ -31,6 +31,7 @@
> > >  #include <asm/realmode.h>
> > >  #include <asm/x86_init.h>
> > >  #include <asm/efi.h>
> > > +#include <asm/tdx.h>
> > >  
> > >  /*
> > >   * Power off function, if any
> > > @@ -741,6 +742,20 @@ void native_machine_shutdown(void)
> > >  	local_irq_disable();
> > >  	stop_other_cpus();
> > >  #endif
> > > +	/*
> > > +	 * stop_other_cpus() has flushed all dirty cachelines of TDX
> > > +	 * private memory on remote cpus.  Unlike SME, which does the
> > > +	 * cache flush on _this_ cpu in the relocate_kernel(), flush
> > > +	 * the cache for _this_ cpu here.  This is because on the
> > > +	 * platforms with "partial write machine check" erratum the
> > > +	 * kernel needs to convert all TDX private pages back to normal
> > > +	 * before booting to the new kernel in kexec(), and the cache
> > > +	 * flush must be done before that.  If the kernel took SME's way,
> > > +	 * it would have to muck with the relocate_kernel() assembly to
> > > +	 * do memory conversion.
> > > +	 */
> > > +	if (platform_tdx_enabled())
> > > +		native_wbinvd();
> > 
> > Why can't the TDX host code just set host_mem_enc_active=1?
> > 
> > Sure, you'll end up *using* the SME WBINVD support, but then you don't
> > have two different WBINVD call sites.  You also don't have to mess with
> > a single line of assembly.
> 
> I wanted to avoid changing the assembly.
> 
> Perhaps the comment isn't very clear.  Flushing cache (on the CPU running kexec)
> when the host_mem_enc_active=1 is handled in the relocate_kernel() assembly,
> which happens at very late stage right before jumping to the new kernel. 
> However for TDX when the platform has erratum we need to convert TDX private
> pages back to normal, which must be done after flushing cache.  If we reuse
> host_mem_enc_active=1, then we will need to change the assembly code to do that.
> 

Forgot to say: doing the TDX page conversion in the relocate_kernel() assembly
isn't easy, because the cache flushing when host_mem_enc_active=1 happens after
the kernel has switched to the identity mapping page table, so we would need
hacks like fixing up symbol addresses etc.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 19:33     ` Huang, Kai
  2023-11-27 20:02       ` Huang, Kai
@ 2023-11-27 20:05       ` Dave Hansen
  2023-11-27 20:52         ` Huang, Kai
  1 sibling, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-11-27 20:05 UTC (permalink / raw)
  To: Huang, Kai, kvm, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, peterz, sagis, imammedo, bp, Gao, Chao, Brown,
	Len, rafael, Huang, Ying, Williams, Dan J, x86

On 11/27/23 11:33, Huang, Kai wrote:
> On Mon, 2023-11-27 at 10:13 -0800, Hansen, Dave wrote:
>> On 11/9/23 03:55, Kai Huang wrote:
>> ...
>>> --- a/arch/x86/kernel/reboot.c
>>> +++ b/arch/x86/kernel/reboot.c
>>> @@ -31,6 +31,7 @@
>>>  #include <asm/realmode.h>
>>>  #include <asm/x86_init.h>
>>>  #include <asm/efi.h>
>>> +#include <asm/tdx.h>
>>>
>>>  /*
>>>   * Power off function, if any
>>> @@ -741,6 +742,20 @@ void native_machine_shutdown(void)
>>>     local_irq_disable();
>>>     stop_other_cpus();
>>>  #endif
>>> +   /*
>>> +    * stop_other_cpus() has flushed all dirty cachelines of TDX
>>> +    * private memory on remote cpus.  Unlike SME, which does the
>>> +    * cache flush on _this_ cpu in the relocate_kernel(), flush
>>> +    * the cache for _this_ cpu here.  This is because on the
>>> +    * platforms with "partial write machine check" erratum the
>>> +    * kernel needs to convert all TDX private pages back to normal
>>> +    * before booting to the new kernel in kexec(), and the cache
>>> +    * flush must be done before that.  If the kernel took SME's way,
>>> +    * it would have to muck with the relocate_kernel() assembly to
>>> +    * do memory conversion.
>>> +    */
>>> +   if (platform_tdx_enabled())
>>> +           native_wbinvd();
>>
>> Why can't the TDX host code just set host_mem_enc_active=1?
>>
>> Sure, you'll end up *using* the SME WBINVD support, but then you don't
>> have two different WBINVD call sites.  You also don't have to mess with
>> a single line of assembly.
> 
> I wanted to avoid changing the assembly.
> 
> Perhaps the comment isn't very clear.  Flushing cache (on the CPU running kexec)
> when the host_mem_enc_active=1 is handled in the relocate_kernel() assembly,
> which happens at very late stage right before jumping to the new kernel.
> However for TDX when the platform has erratum we need to convert TDX private
> pages back to normal, which must be done after flushing cache.  If we reuse
> host_mem_enc_active=1, then we will need to change the assembly code to do that.

I honestly think you need to stop thinking about the partial write issue
at this point in the series.  It's really causing a horrible amount of
unnecessary confusion.

Here's the golden rule:

	The boot CPU needs to run WBINVD sometime after it stops writing
	to private memory but before it starts treating the memory as
	shared.

On SME kernels, that key point is evidently in early boot when it's
enabling SME.  I _think_ that point is also a valid place to do WBINVD
on no-TDX-erratum systems.

On TDX systems with the erratum, there's a *second* point before the
private=>shared conversion occurs.  I think what you're trying to do
here is prematurely optimize the erratum-affected systems so that they
don't do two WBINVDs.  Please stop trying to do that.

This WBINVD is only _needed_ for the erratum.  It should be closer to
the actual erratum handling.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 20:05       ` Dave Hansen
@ 2023-11-27 20:52         ` Huang, Kai
  2023-11-27 21:06           ` Dave Hansen
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-11-27 20:52 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: Williams, Dan J, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, rafael, Gao, Chao

On Mon, 2023-11-27 at 12:05 -0800, Dave Hansen wrote:
> On 11/27/23 11:33, Huang, Kai wrote:
> > On Mon, 2023-11-27 at 10:13 -0800, Hansen, Dave wrote:
> > > On 11/9/23 03:55, Kai Huang wrote:
> > > ...
> > > > --- a/arch/x86/kernel/reboot.c
> > > > +++ b/arch/x86/kernel/reboot.c
> > > > @@ -31,6 +31,7 @@
> > > >  #include <asm/realmode.h>
> > > >  #include <asm/x86_init.h>
> > > >  #include <asm/efi.h>
> > > > +#include <asm/tdx.h>
> > > > 
> > > >  /*
> > > >   * Power off function, if any
> > > > @@ -741,6 +742,20 @@ void native_machine_shutdown(void)
> > > >     local_irq_disable();
> > > >     stop_other_cpus();
> > > >  #endif
> > > > +   /*
> > > > +    * stop_other_cpus() has flushed all dirty cachelines of TDX
> > > > +    * private memory on remote cpus.  Unlike SME, which does the
> > > > +    * cache flush on _this_ cpu in the relocate_kernel(), flush
> > > > +    * the cache for _this_ cpu here.  This is because on the
> > > > +    * platforms with "partial write machine check" erratum the
> > > > +    * kernel needs to convert all TDX private pages back to normal
> > > > +    * before booting to the new kernel in kexec(), and the cache
> > > > +    * flush must be done before that.  If the kernel took SME's way,
> > > > +    * it would have to muck with the relocate_kernel() assembly to
> > > > +    * do memory conversion.
> > > > +    */
> > > > +   if (platform_tdx_enabled())
> > > > +           native_wbinvd();
> > > 
> > > Why can't the TDX host code just set host_mem_enc_active=1?
> > > 
> > > Sure, you'll end up *using* the SME WBINVD support, but then you don't
> > > have two different WBINVD call sites.  You also don't have to mess with
> > > a single line of assembly.
> > 
> > I wanted to avoid changing the assembly.
> > 
> > Perhaps the comment isn't very clear.  Flushing the cache (on the CPU running
> > kexec) when host_mem_enc_active=1 is handled in the relocate_kernel() assembly,
> > which happens at a very late stage right before jumping to the new kernel.
> > However for TDX, when the platform has the erratum we need to convert TDX
> > private pages back to normal, which must be done after flushing the cache.  If
> > we reuse host_mem_enc_active=1, then we will need to change the assembly code
> > to do that.
> 
> I honestly think you need to stop thinking about the partial write issue
> at this point in the series.  It's really causing a horrible amount of
> unnecessary confusion.
> 
> Here's the golden rule:
> 
> 	The boot CPU needs to run WBINVD sometime after it stops writing
> 	to private memory but before it starts treating the memory as
> 	shared.
> 
> On SME kernels, that key point is evidently in early boot when it's
> enabling SME.  I _think_ that point is also a valid place to do WBINVD
> on no-TDX-erratum systems.

You mean we could advertise cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) as true
for the TDX host?  We could, but IMHO it doesn't perfectly match.

The SME kernel sets _PAGE_ENC on by default for all memory mappings IIUC, but
the TDX host never actually sets any encryption bits in the page tables
managed by the kernel.

So I think we can just do the below, but not advertise CC_ATTR_HOST_MEM_ENCRYPT
for the TDX host?

--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -377,7 +377,8 @@ void machine_kexec(struct kimage *image)
                                       (unsigned long)page_list,
                                       image->start,
                                       image->preserve_context,
-                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
+                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) ||
+                                      platform_tdx_enabled());


> 
> On TDX systems with the erratum, there's a *second* point before the
> private=>shared conversion occurs.  I think what you're trying to do
> here is prematurely optimize the erratum-affected systems so that
> they don't do two WBINVDs.  Please stop trying to do that.
> 
> This WBINVD is only _needed_ for the erratum.  It should be closer to
> the actual erratum handling.

If we do WBINVD early here, then the second one isn't needed.  But 100% agreed
this handling/optimization should be done later, closer to the erratum handling.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 20:52         ` Huang, Kai
@ 2023-11-27 21:06           ` Dave Hansen
  2023-11-27 22:09             ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-11-27 21:06 UTC (permalink / raw)
  To: Huang, Kai, kvm, linux-kernel
  Cc: Williams, Dan J, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, rafael, Gao, Chao

[-- Attachment #1: Type: text/plain, Size: 679 bytes --]

On 11/27/23 12:52, Huang, Kai wrote:
> --- a/arch/x86/kernel/machine_kexec_64.c
> +++ b/arch/x86/kernel/machine_kexec_64.c
> @@ -377,7 +377,8 @@ void machine_kexec(struct kimage *image)
>                                        (unsigned long)page_list,
>                                        image->start,
>                                        image->preserve_context,
> -                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
> +                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) ||
> +                                      platform_tdx_enabled());

Well, something more like the attached would be preferable, but you've
got the right idea logically.

[-- Attachment #2: cc-host-mem-incoherent.patch --]
[-- Type: text/x-patch, Size: 2319 bytes --]



---

 b/arch/x86/coco/core.c               |    1 +
 b/arch/x86/kernel/machine_kexec_64.c |    2 +-
 b/include/linux/cc_platform.h        |   16 ++++++++++++++++
 3 files changed, 18 insertions(+), 1 deletion(-)

diff -puN include/linux/cc_platform.h~cc-host-mem-incoherent include/linux/cc_platform.h
--- a/include/linux/cc_platform.h~cc-host-mem-incoherent	2023-11-27 12:20:44.217381008 -0800
+++ b/include/linux/cc_platform.h	2023-11-27 12:25:05.771073193 -0800
@@ -43,6 +43,22 @@ enum cc_attr {
 	CC_ATTR_HOST_MEM_ENCRYPT,
 
 	/**
+	 * @CC_ATTR_HOST_MEM_INCOHERENT: Host memory encryption can be
+	 * incoherent
+	 *
+	 * The platform/OS is running as a bare-metal system or a hypervisor.
+	 * The memory encryption engine might have left non-cache-coherent
+	 * data in the caches that needs to be flushed.
+	 *
+	 * Use this in places where the cache coherency of the memory matters
+	 * but the encryption status does not.
+	 *
+	 * Includes all systems that set CC_ATTR_HOST_MEM_ENCRYPT, but
+	 * additionally adds TDX hosts.
+	 */
+	CC_ATTR_HOST_MEM_INCOHERENT,
+
+	/**
 	 * @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
 	 *
 	 * The platform/OS is running as a guest/virtual machine and actively
diff -puN arch/x86/kernel/machine_kexec_64.c~cc-host-mem-incoherent arch/x86/kernel/machine_kexec_64.c
--- a/arch/x86/kernel/machine_kexec_64.c~cc-host-mem-incoherent	2023-11-27 12:25:13.527115260 -0800
+++ b/arch/x86/kernel/machine_kexec_64.c	2023-11-27 13:04:19.732959001 -0800
@@ -361,7 +361,7 @@ void machine_kexec(struct kimage *image)
 				       (unsigned long)page_list,
 				       image->start,
 				       image->preserve_context,
-				       cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
+				       cc_platform_has(CC_ATTR_HOST_MEM_INCOHERENT));
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (image->preserve_context)
diff -puN arch/x86/coco/core.c~cc-host-mem-incoherent arch/x86/coco/core.c
--- a/arch/x86/coco/core.c~cc-host-mem-incoherent	2023-11-27 12:26:02.535372377 -0800
+++ b/arch/x86/coco/core.c	2023-11-27 12:26:12.371422241 -0800
@@ -70,6 +70,7 @@ static bool noinstr amd_cc_platform_has(
 		return sme_me_mask;
 
 	case CC_ATTR_HOST_MEM_ENCRYPT:
+	case CC_ATTR_HOST_MEM_INCOHERENT:
 		return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
 
 	case CC_ATTR_GUEST_MEM_ENCRYPT:
_

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory
  2023-11-27 21:06           ` Dave Hansen
@ 2023-11-27 22:09             ` Huang, Kai
  0 siblings, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-11-27 22:09 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: rafael, Gao, Chao, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, Williams, Dan J, x86

[-- Attachment #1: Type: text/plain, Size: 2446 bytes --]

On Mon, 2023-11-27 at 13:06 -0800, Hansen, Dave wrote:
> On 11/27/23 12:52, Huang, Kai wrote:
> > --- a/arch/x86/kernel/machine_kexec_64.c
> > +++ b/arch/x86/kernel/machine_kexec_64.c
> > @@ -377,7 +377,8 @@ void machine_kexec(struct kimage *image)
> >                                        (unsigned long)page_list,
> >                                        image->start,
> >                                        image->preserve_context,
> > -                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
> > +                                      cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) ||
> > +                                      platform_tdx_enabled());
> 
> Well, something more like the attached would be preferable, but you've
> got the right idea logically.

Thanks!

On top of that, I think the below code (diff also attached) should advertise
CC_ATTR_HOST_MEM_INCOHERENT for the TDX host?

diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index 2e2d559169a8..bec70b967504 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -12,6 +12,8 @@
 
 #include <asm/coco.h>
 #include <asm/processor.h>
+#include <asm/cpufeatures.h>
+#include <asm/tdx.h>
 
 enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
 static u64 cc_mask __ro_after_init;
@@ -23,7 +25,9 @@ static bool noinstr intel_cc_platform_has(enum cc_attr attr)
        case CC_ATTR_HOTPLUG_DISABLED:
        case CC_ATTR_GUEST_MEM_ENCRYPT:
        case CC_ATTR_MEM_ENCRYPT:
-               return true;
+               return cpu_feature_enabled(X86_FEATURE_TDX_GUEST);
+       case CC_ATTR_HOST_MEM_INCOHERENT:
+               return platform_tdx_enabled();
        default:
                return false;
        }
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 73cd2f7b7d87..1ae21348edc1 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1634,6 +1634,13 @@ static int __init tdx_init(void)
        tdx_guest_keyid_start = tdx_keyid_start + 1;
        tdx_nr_guest_keyids = nr_tdx_keyids - 1;
 
+       /*
+        * TDX doesn't guarantee cache coherency among different
+        * KeyIDs.  Advertise the CC_ATTR_HOST_MEM_INCOHERENT
+        * attribute for TDX host.
+        */
+       cc_vendor = CC_VENDOR_INTEL;
+
        return 0;
 }
 early_initcall(tdx_init);


I'll do some test with your code and the above code.

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: tdx-host-mem-incoherent.diff --]
[-- Type: text/x-patch; name="tdx-host-mem-incoherent.diff", Size: 1276 bytes --]

diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index 2e2d559169a8..bec70b967504 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -12,6 +12,8 @@
 
 #include <asm/coco.h>
 #include <asm/processor.h>
+#include <asm/cpufeatures.h>
+#include <asm/tdx.h>
 
 enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
 static u64 cc_mask __ro_after_init;
@@ -23,7 +25,9 @@ static bool noinstr intel_cc_platform_has(enum cc_attr attr)
 	case CC_ATTR_HOTPLUG_DISABLED:
 	case CC_ATTR_GUEST_MEM_ENCRYPT:
 	case CC_ATTR_MEM_ENCRYPT:
-		return true;
+		return cpu_feature_enabled(X86_FEATURE_TDX_GUEST);
+	case CC_ATTR_HOST_MEM_INCOHERENT:
+		return platform_tdx_enabled();
 	default:
 		return false;
 	}
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 73cd2f7b7d87..1ae21348edc1 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1634,6 +1634,13 @@ static int __init tdx_init(void)
 	tdx_guest_keyid_start = tdx_keyid_start + 1;
 	tdx_nr_guest_keyids = nr_tdx_keyids - 1;
 
+	/*
+	 * TDX doesn't guarantee cache coherency among different
+	 * KeyIDs.  Advertise the CC_ATTR_HOST_MEM_INCOHERENT
+	 * attribute for TDX host.
+	 */
+	cc_vendor = CC_VENDOR_INTEL;
+
 	return 0;
 }
 early_initcall(tdx_init);

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states
  2023-11-09 11:55 ` [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states Kai Huang
@ 2023-11-30 17:20   ` Dave Hansen
  0 siblings, 0 replies; 66+ messages in thread
From: Dave Hansen @ 2023-11-30 17:20 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, kirill.shutemov, peterz, tony.luck, tglx, bp, mingo, hpa,
	seanjc, pbonzini, rafael, david, dan.j.williams, len.brown, ak,
	isaku.yamahata, ying.huang, chao.gao, sathyanarayanan.kuppuswamy,
	nik.borisov, bagasdotme, sagis, imammedo

On 11/9/23 03:55, Kai Huang wrote:
>  #include <asm/page.h>
> @@ -1402,6 +1404,15 @@ static int __init tdx_init(void)
>  		return -ENODEV;
>  	}
>  
> +	/*
> +	 * At this point, hibernation_available() indicates whether or
> +	 * not hibernation support has been permanently disabled.
> +	 */
> +	if (hibernation_available()) {
> +		pr_err("initialization failed: Hibernation support is enabled\n");
> +		return -ENODEV;
> +	}
> +
>  	err = register_memory_notifier(&tdx_memory_nb);
>  	if (err) {
>  		pr_err("initialization failed: register_memory_notifier() failed (%d)\n",
> @@ -1417,6 +1428,11 @@ static int __init tdx_init(void)
>  		return -ENODEV;
>  	}
>  
> +#if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
> +	pr_info("Disable ACPI S3. Turn off TDX in the BIOS to use ACPI S3.\n");
> +	acpi_suspend_lowlevel = NULL;
> +#endif

Rafael, are you OK with how this patch ended up?  An ack would be much
appreciated if so.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
@ 2023-11-30 18:01   ` Tony Luck
  2023-12-01 20:35   ` Dave Hansen
  2023-12-05 14:25   ` Borislav Petkov
  2 siblings, 0 replies; 66+ messages in thread
From: Tony Luck @ 2023-11-30 18:01 UTC (permalink / raw)
  To: Kai Huang
  Cc: linux-kernel, kvm, x86, dave.hansen, kirill.shutemov, peterz,
	tglx, bp, mingo, hpa, seanjc, pbonzini, rafael, david,
	dan.j.williams, len.brown, ak, isaku.yamahata, ying.huang,
	chao.gao, sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme,
	sagis, imammedo

On Fri, Nov 10, 2023 at 12:55:59AM +1300, Kai Huang wrote:
> Instead of modifying the above error log, improve it by printing an
> additional TDX-related message to make the log look like:
> 
>   ...
>  [...] mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
>  [...] mce: [Hardware Error]: Machine Check: TDX private memory error. Possible kernel bug.

This seems a reasonable addition.

>  arch/x86/kernel/cpu/mce/core.c |  33 +++++++++++

Reviewed-by: Tony Luck <tony.luck@intel.com>

[I only reviewed the hooks into mce/core.c; I don't feel qualified
to dig through the TDX bits that determine this is a TD private page]

-Tony

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
  2023-11-30 18:01   ` Tony Luck
@ 2023-12-01 20:35   ` Dave Hansen
  2023-12-03 11:44     ` Huang, Kai
  2023-12-05 14:25   ` Borislav Petkov
  2 siblings, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-12-01 20:35 UTC (permalink / raw)
  To: Kai Huang, linux-kernel, kvm
  Cc: x86, kirill.shutemov, peterz, tony.luck, tglx, bp, mingo, hpa,
	seanjc, pbonzini, rafael, david, dan.j.williams, len.brown, ak,
	isaku.yamahata, ying.huang, chao.gao, sathyanarayanan.kuppuswamy,
	nik.borisov, bagasdotme, sagis, imammedo

On 11/9/23 03:55, Kai Huang wrote:
> +static bool is_pamt_page(unsigned long phys)
> +{
> +	struct tdmr_info_list *tdmr_list = &tdx_tdmr_list;
> +	int i;
> +
> +	/*
> +	 * This function is called from #MC handler, and theoretically
> +	 * it could run in parallel with the TDX module initialization
> +	 * on other logical cpus.  But it's not OK to hold mutex here
> +	 * so just blindly check module status to make sure PAMTs/TDMRs
> +	 * are stable to access.
> +	 *
> +	 * This may return inaccurate result in rare cases, e.g., when
> +	 * #MC happens on a PAMT page during module initialization, but
> +	 * this is fine as #MC handler doesn't need a 100% accurate
> +	 * result.
> +	 */

It doesn't need perfect accuracy.  But how do we know it's not going to
go, for instance, chase a bad pointer?

> +	if (tdx_module_status != TDX_MODULE_INITIALIZED)
> +		return false;

As an example, what prevents this CPU from observing
tdx_module_status==TDX_MODULE_INITIALIZED while the PAMT structure is
being assembled?

> +	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> +		unsigned long base, size;
> +
> +		tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
> +
> +		if (phys >= base && phys < (base + size))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +/*
> + * Return whether the memory page at the given physical address is TDX
> + * private memory or not.  Called from #MC handler do_machine_check().
> + *
> + * Note this function may not return an accurate result in rare cases.
> + * This is fine as the #MC handler doesn't need a 100% accurate result,
> + * because it cannot distinguish #MC between software bug and real
> + * hardware error anyway.
> + */
> +bool tdx_is_private_mem(unsigned long phys)
> +{
> +	struct tdx_module_args args = {
> +		.rcx = phys & PAGE_MASK,
> +	};
> +	u64 sret;
> +
> +	if (!platform_tdx_enabled())
> +		return false;
> +
> +	/* Get page type from the TDX module */
> +	sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
> +	/*
> +	 * Handle the case that CPU isn't in VMX operation.
> +	 *
> +	 * KVM guarantees no VM is running (thus no TDX guest)
> +	 * when any online CPU isn't in VMX operation.
> +	 * This means there will be no TDX guest private memory
> +	 * and Secure-EPT pages.  However the TDX module may have
> +	 * been initialized and the memory page could be PAMT.
> +	 */
> +	if (sret == TDX_SEAMCALL_UD)
> +		return is_pamt_page(phys);

Either this comment is wonky or the module initialization is buggy.

config_global_keyid() goes and does SEAMCALLs on all CPUs.  There are
zero checks or special handling in there for whether the CPU has done
VMXON.  So, by the time we've started initializing the TDX module
(including the PAMT), all online CPUs must be able to do SEAMCALLs.  Right?

So how can we have a working PAMT here when this CPU can't do SEAMCALLs?

I don't think we should even bother with this complexity.  I think we
can just fix the whole thing by saying that unless you can make a
non-init SEAMCALL, we just assume the memory can't be private.

The transition to being able to make non-init SEAMCALLs is also #MC safe
*and* it's at a point when the tdmr_list is stable.

Can anyone shoot any holes in that? :)
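
In code, the simpler rule would be roughly this (a sketch, not the final
patch):

	sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
	/* Can't make a non-init SEAMCALL => assume the memory isn't private. */
	if (sret == TDX_SEAMCALL_UD)
		return false;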

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-01 20:35   ` Dave Hansen
@ 2023-12-03 11:44     ` Huang, Kai
  2023-12-04 17:07       ` Dave Hansen
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-12-03 11:44 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, peterz, sagis, imammedo, bp, Gao, Chao, Brown,
	Len, rafael, Huang, Ying, Williams, Dan J, x86

On Fri, 2023-12-01 at 12:35 -0800, Hansen, Dave wrote:
> On 11/9/23 03:55, Kai Huang wrote:
> > +static bool is_pamt_page(unsigned long phys)
> > +{
> > +	struct tdmr_info_list *tdmr_list = &tdx_tdmr_list;
> > +	int i;
> > +
> > +	/*
> > +	 * This function is called from #MC handler, and theoretically
> > +	 * it could run in parallel with the TDX module initialization
> > +	 * on other logical cpus.  But it's not OK to hold mutex here
> > +	 * so just blindly check module status to make sure PAMTs/TDMRs
> > +	 * are stable to access.
> > +	 *
> > +	 * This may return inaccurate result in rare cases, e.g., when
> > +	 * #MC happens on a PAMT page during module initialization, but
> > +	 * this is fine as #MC handler doesn't need a 100% accurate
> > +	 * result.
> > +	 */
> 
> It doesn't need perfect accuracy.  But how do we know it's not going to
> go, for instance, chase a bad pointer?
> 
> > +	if (tdx_module_status != TDX_MODULE_INITIALIZED)
> > +		return false;
> 
> As an example, what prevents this CPU from observing
> tdx_module_status==TDX_MODULE_INITIALIZED while the PAMT structure is
> being assembled?

There are two types of memory order serializing operations between assembling
the TDMR/PAMT structure and setting the tdx_module_status to
TDX_MODULE_INITIALIZED: 1) wbinvd_on_all_cpus(); 2) a bunch of SEAMCALLs.

WBINVD is a serializing instruction.  SEAMCALL is a VMEXIT to the TDX module,
which involves a GDT/LDT/control registers/MSRs switch, so it is also a
serializing operation.

But perhaps we can explicitly add a smp_wmb() between assembling TDMR/PAMT
structure and setting tdx_module_status if that's better.
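
Something like the below (untested sketch; the point is the barrier pairing):

	/* Writer, at the end of module initialization: */
	/* ... assemble tdx_tdmr_list and the PAMTs ... */
	smp_wmb();	/* order TDMR/PAMT stores before the status store */
	tdx_module_status = TDX_MODULE_INITIALIZED;

	/* Reader, in is_pamt_page(): */
	if (tdx_module_status != TDX_MODULE_INITIALIZED)
		return false;
	smp_rmb();	/* pairs with the smp_wmb() above */
	/* ... now safe to walk tdx_tdmr_list ... */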

> 
> > +	for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> > +		unsigned long base, size;
> > +
> > +		tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
> > +
> > +		if (phys >= base && phys < (base + size))
> > +			return true;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> > +/*
> > + * Return whether the memory page at the given physical address is TDX
> > + * private memory or not.  Called from #MC handler do_machine_check().
> > + *
> > + * Note this function may not return an accurate result in rare cases.
> > + * This is fine as the #MC handler doesn't need a 100% accurate result,
> > + * because it cannot distinguish #MC between software bug and real
> > + * hardware error anyway.
> > + */
> > +bool tdx_is_private_mem(unsigned long phys)
> > +{
> > +	struct tdx_module_args args = {
> > +		.rcx = phys & PAGE_MASK,
> > +	};
> > +	u64 sret;
> > +
> > +	if (!platform_tdx_enabled())
> > +		return false;
> > +
> > +	/* Get page type from the TDX module */
> > +	sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
> > +	/*
> > +	 * Handle the case that CPU isn't in VMX operation.
> > +	 *
> > +	 * KVM guarantees no VM is running (thus no TDX guest)
> > +	 * when any online CPU isn't in VMX operation.
> > +	 * This means there will be no TDX guest private memory
> > +	 * and Secure-EPT pages.  However the TDX module may have
> > +	 * been initialized and the memory page could be PAMT.
> > +	 */
> > +	if (sret == TDX_SEAMCALL_UD)
> > +		return is_pamt_page(phys);
> 
> Either this comment is wonky or the module initialization is buggy.
> 
> config_global_keyid() goes and does SEAMCALLs on all CPUs.  There are
> zero checks or special handling in there for whether the CPU has done
> VMXON.  So, by the time we've started initializing the TDX module
> (including the PAMT), all online CPUs must be able to do SEAMCALLs.  Right?
> 
> So how can we have a working PAMT here when this CPU can't do SEAMCALLs?

The corner case is that KVM can enable VMX on all cpus, initialize the TDX
module, and then disable VMX on all cpus.  One example is that KVM can be
unloaded after it initializes the TDX module.

In this case the CPU cannot do SEAMCALL but the PAMTs are already working :-)

However, if a SEAMCALL cannot be made (due to being out of VMX operation), then
the module is either already initialized or the initialization hasn't been
tried, so both tdx_module_status and the tdx_tdmr_list are stable to access.

> 
> I don't think we should even bother with this complexity.  I think we
> can just fix the whole thing by saying that unless you can make a
> non-init SEAMCALL, we just assume the memory can't be private.
> 
> The transition to being able to make non-init SEAMCALLs is also #MC safe
> *and* it's at a point when the tdmr_list is stable.
> 
> Can anyone shoot any holes in that? :)



^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-03 11:44     ` Huang, Kai
@ 2023-12-04 17:07       ` Dave Hansen
  2023-12-04 21:00         ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-12-04 17:07 UTC (permalink / raw)
  To: Huang, Kai, kvm, linux-kernel
  Cc: sathyanarayanan.kuppuswamy, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, seanjc, mingo, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, peterz, sagis, imammedo, bp, Gao, Chao, Brown,
	Len, rafael, Huang, Ying, Williams, Dan J, x86

On 12/3/23 03:44, Huang, Kai wrote:
...
>> It doesn't need perfect accuracy.  But how do we know it's not going to
>> go, for instance, chase a bad pointer?
>>
>>> +   if (tdx_module_status != TDX_MODULE_INITIALIZED)
>>> +           return false;
>>
>> As an example, what prevents this CPU from observing
>> tdx_module_status==TDX_MODULE_INITIALIZED while the PAMT structure is
>> being assembled?
> 
> There are two types of memory order serializing operations between assembling
> the TDMR/PAMT structure and setting the tdx_module_status to
> TDX_MODULE_INITIALIZED: 1) wbinvd_on_all_cpus(); 2) a bunch of SEAMCALLs.
> 
> WBINVD is a serializing instruction.  SEAMCALL is a VMEXIT to the TDX module,
> which involves a GDT/LDT/control registers/MSRs switch, so it is also a
> serializing operation.
> 
> But perhaps we can explicitly add a smp_wmb() between assembling TDMR/PAMT
> structure and setting tdx_module_status if that's better.

... and there's zero documentation of this dependency because ... ?

I suspect it's because it was never looked at until Tony made a comment
about it and we started looking at it.  In other words, it worked by
coincidence.

>>> +   for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
>>> +           unsigned long base, size;
>>> +
>>> +           tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
>>> +
>>> +           if (phys >= base && phys < (base + size))
>>> +                   return true;
>>> +   }
>>> +
>>> +   return false;
>>> +}
>>> +
>>> +/*
>>> + * Return whether the memory page at the given physical address is TDX
>>> + * private memory or not.  Called from #MC handler do_machine_check().
>>> + *
>>> + * Note this function may not return an accurate result in rare cases.
>>> + * This is fine as the #MC handler doesn't need a 100% accurate result,
>>> + * because it cannot distinguish #MC between software bug and real
>>> + * hardware error anyway.
>>> + */
>>> +bool tdx_is_private_mem(unsigned long phys)
>>> +{
>>> +   struct tdx_module_args args = {
>>> +           .rcx = phys & PAGE_MASK,
>>> +   };
>>> +   u64 sret;
>>> +
>>> +   if (!platform_tdx_enabled())
>>> +           return false;
>>> +
>>> +   /* Get page type from the TDX module */
>>> +   sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
>>> +   /*
>>> +    * Handle the case that CPU isn't in VMX operation.
>>> +    *
>>> +    * KVM guarantees no VM is running (thus no TDX guest)
>>> +    * when any online CPU isn't in VMX operation.
>>> +    * This means there will be no TDX guest private memory
>>> +    * and Secure-EPT pages.  However the TDX module may have
>>> +    * been initialized and the memory page could be PAMT.
>>> +    */
>>> +   if (sret == TDX_SEAMCALL_UD)
>>> +           return is_pamt_page(phys);
>>
>> Either this comment is wonky or the module initialization is buggy.
>>
>> config_global_keyid() goes and does SEAMCALLs on all CPUs.  There are
>> zero checks or special handling in there for whether the CPU has done
>> VMXON.  So, by the time we've started initializing the TDX module
>> (including the PAMT), all online CPUs must be able to do SEAMCALLs.  Right?
>>
>> So how can we have a working PAMT here when this CPU can't do SEAMCALLs?
> 
> The corner case is that KVM can enable VMX on all cpus, initialize the TDX
> module, and then disable VMX on all cpus.  One example is that KVM can be
> unloaded after it initializes the TDX module.
> 
> In this case the CPU cannot do SEAMCALL but the PAMTs are already working :-)
> 
> However, if a SEAMCALL cannot be made (due to being out of VMX operation), then
> the module is either already initialized or the initialization hasn't been
> tried, so both tdx_module_status and the tdx_tdmr_list are stable to access.

None of this even matters.  Let's remind ourselves how unbelievably
unlikely this is:

1. You're on an affected system that has the erratum
2. The KVM module gets unloaded, runs vmxoff
3. A kernel bug using a very rare partial write corrupts the PAMT
4. A second bug reads the PAMT consuming poison, #MC is generated
5. Enter #MC handler, SEAMCALL fails
6. #MC handler just reports a plain hardware error

The only thing even remotely wrong with this situation is that the
report won't pin the #MC on TDX.  Play stupid games (removing modules),
win stupid prizes (worse error message).

Can we dynamically mark a module as unsafe to remove?  If so, I'd
happily just say that we should make kvm_intel.ko unsafe to remove when
TDX is supported and move on with life.

tl;dr: I think even looking at a #MC on the PAMT after the kvm module is
removed is a fool's errand.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 17:07       ` Dave Hansen
@ 2023-12-04 21:00         ` Huang, Kai
  2023-12-04 22:04           ` Dave Hansen
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-12-04 21:00 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: Williams, Dan J, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, rafael, Gao, Chao

On Mon, 2023-12-04 at 09:07 -0800, Dave Hansen wrote:
> On 12/3/23 03:44, Huang, Kai wrote:
> ...
> > > It doesn't need perfect accuracy.  But how do we know it's not going to
> > > go, for instance, chase a bad pointer?
> > > 
> > > > +   if (tdx_module_status != TDX_MODULE_INITIALIZED)
> > > > +           return false;
> > > 
> > > As an example, what prevents this CPU from observing
> > > tdx_module_status==TDX_MODULE_INITIALIZED while the PAMT structure is
> > > being assembled?
> > 
> > There are two types of memory order serializing operations between assembling
> > the TDMR/PAMT structure and setting the tdx_module_status to
> > TDX_MODULE_INITIALIZED: 1) wbinvd_on_all_cpus(); 2) a bunch of SEAMCALLs.
> > 
> > WBINVD is a serializing instruction.  SEAMCALL is a VMEXIT to the TDX module,
> > which involves a GDT/LDT/control registers/MSRs switch, so it is also a
> > serializing operation.
> > 
> > But perhaps we can explicitly add a smp_wmb() between assembling TDMR/PAMT
> > structure and setting tdx_module_status if that's better.
> 
> ... and there's zero documentation of this dependency because ... ?
> 
> I suspect it's because it was never looked at until Tony made a comment
> about it and we started looking at it.  In other words, it worked by
> coincidence.

I should have put a comment around here.  My bad.

Kirill also helped to look at the code.

> 
> > > > +   for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> > > > +           unsigned long base, size;
> > > > +
> > > > +           tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
> > > > +
> > > > +           if (phys >= base && phys < (base + size))
> > > > +                   return true;
> > > > +   }
> > > > +
> > > > +   return false;
> > > > +}
> > > > +
> > > > +/*
> > > > + * Return whether the memory page at the given physical address is TDX
> > > > + * private memory or not.  Called from #MC handler do_machine_check().
> > > > + *
> > > > + * Note this function may not return an accurate result in rare cases.
> > > > + * This is fine as the #MC handler doesn't need a 100% accurate result,
> > > > + * because it cannot distinguish #MC between software bug and real
> > > > + * hardware error anyway.
> > > > + */
> > > > +bool tdx_is_private_mem(unsigned long phys)
> > > > +{
> > > > +   struct tdx_module_args args = {
> > > > +           .rcx = phys & PAGE_MASK,
> > > > +   };
> > > > +   u64 sret;
> > > > +
> > > > +   if (!platform_tdx_enabled())
> > > > +           return false;
> > > > +
> > > > +   /* Get page type from the TDX module */
> > > > +   sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
> > > > +   /*
> > > > +    * Handle the case that CPU isn't in VMX operation.
> > > > +    *
> > > > +    * KVM guarantees no VM is running (thus no TDX guest)
> > > > +    * when any online CPU isn't in VMX operation.
> > > > +    * This means there will be no TDX guest private memory
> > > > +    * and Secure-EPT pages.  However the TDX module may have
> > > > +    * been initialized and the memory page could be PAMT.
> > > > +    */
> > > > +   if (sret == TDX_SEAMCALL_UD)
> > > > +           return is_pamt_page(phys);
> > > 
> > > Either this comment is wonky or the module initialization is buggy.
> > > 
> > > config_global_keyid() goes and does SEAMCALLs on all CPUs.  There are
> > > zero checks or special handling in there for whether the CPU has done
> > > VMXON.  So, by the time we've started initializing the TDX module
> > > (including the PAMT), all online CPUs must be able to do SEAMCALLs.  Right?
> > > 
> > > So how can we have a working PAMT here when this CPU can't do SEAMCALLs?
> > 
> > The corner case is that KVM can enable VMX on all cpus, initialize the TDX
> > module, and then disable VMX on all cpus.  One example is that KVM can be
> > unloaded after it initializes the TDX module.
> > 
> > In this case the CPU cannot do SEAMCALL but the PAMTs are already working :-)
> > 
> > However, if a SEAMCALL cannot be made (due to being out of VMX operation), then
> > the module is either already initialized or the initialization hasn't been
> > tried, so both tdx_module_status and the tdx_tdmr_list are stable to access.
> 
> None of this even matters.  Let's remind ourselves how unbelievably
> unlikely this is:
> 
> 1. You're on an affected system that has the erratum
> 2. The KVM module gets unloaded, runs vmxoff
> 3. A kernel bug using a very rare partial write corrupts the PAMT
> 4. A second bug reads the PAMT consuming poison, #MC is generated
> 5. Enter #MC handler, SEAMCALL fails
> 6. #MC handler just reports a plain hardware error

Yes totally agree it is very unlikely to happen.  

> 
> The only thing even remotely wrong with this situation is that the
> report won't pin the #MC on TDX.  Play stupid games (removing modules),
> win stupid prizes (worse error message).
> 
> Can we dynamically mark a module as unsafe to remove?  If so, I'd
> happily just say that we should make kvm_intel.ko unsafe to remove when
> TDX is supported and move on with life.
> 
> tl;dr: I think even looking a #MC on the PAMT after the kvm module is
> removed is a fool's errand.

Sorry I wasn't clear enough.  KVM actually turns off VMX when it destroys the
last VM, so the KVM module doesn't need to be removed to turn off VMX.  I used
"KVM can be unloaded" as an example to explain the PAMT can be working when VMX
is off.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 21:00         ` Huang, Kai
@ 2023-12-04 22:04           ` Dave Hansen
  2023-12-04 23:24             ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-12-04 22:04 UTC (permalink / raw)
  To: Huang, Kai, kvm, linux-kernel
  Cc: Williams, Dan J, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, rafael, Gao, Chao

On 12/4/23 13:00, Huang, Kai wrote:
>> tl;dr: I think even looking at a #MC on the PAMT after the kvm module is
>> removed is a fool's errand.
> Sorry I wasn't clear enough.  KVM actually turns off VMX when it destroys the
> last VM, so the KVM module doesn't need to be removed to turn off VMX.  I used
> "KVM can be unloaded" as an example to explain the PAMT can be working when VMX
> is off.

Can't we just fix this by having KVM do an "extra" hardware_enable_all()
before initializing the TDX module?  It's not wrong to say that TDX is a
KVM user.  If KVM wants 'kvm_usage_count' to go back to 0, it can shut
down the TDX module.  Then there's no PAMT to worry about.

The shutdown would be something like:

	1. TDX module shutdown
	2. Deallocate/Convert PAMT
	3. vmxoff

Then, no SEAMCALL failure because of vmxoff can cause a PAMT-induced #MC
to be missed.
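
Roughly, with hypothetical function names just to show the ordering:

	static void tdx_module_teardown(void)
	{
		tdx_module_shutdown();	/* 1. shut down the TDX module */
		tdx_reset_pamts();	/* 2. deallocate/convert PAMT */
		vmxoff_all_cpus();	/* 3. only now leave VMX operation */
	}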

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 22:04           ` Dave Hansen
@ 2023-12-04 23:24             ` Huang, Kai
  2023-12-04 23:39               ` Dave Hansen
  2023-12-04 23:41               ` Huang, Kai
  0 siblings, 2 replies; 66+ messages in thread
From: Huang, Kai @ 2023-12-04 23:24 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: rafael, Gao, Chao, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, Williams, Dan J, x86

On Mon, 2023-12-04 at 14:04 -0800, Hansen, Dave wrote:
> On 12/4/23 13:00, Huang, Kai wrote:
> > > tl;dr: I think even looking at a #MC on the PAMT after the kvm module is
> > > removed is a fool's errand.
> > Sorry I wasn't clear enough.  KVM actually turns off VMX when it destroys the
> > last VM, so the KVM module doesn't need to be removed to turn off VMX.  I used
> > "KVM can be unloaded" as an example to explain the PAMT can be working when VMX
> > is off.
> 
> Can't we just fix this by having KVM do an "extra" hardware_enable_all()
> before initializing the TDX module?  
> 

Yes KVM needs to do hardware_enable_all() anyway before initializing the TDX
module.  

I believe you mean we can keep VMX enabled after initializing the TDX module,
i.e., not calling hardware_disable_all() after that, so that kvm_usage_count
will remain non-zero even when the last VM is destroyed?

The current behaviour that KVM only enables VMX when there's an active VM is
because it (or the kernel) wants to allow a third-party VMX module (yes,
VirtualBox) to be loaded and run while the KVM module is loaded.  Only one of
them can actually use the VMX hardware, but they can both be loaded.

In ancient times KVM used to immediately enable VMX when it was loaded, but
later it was changed to only enable VMX when there's an active VM, for the
above reason.

See commit 10474ae8945ce ("KVM: Activate Virtualization On Demand").
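
The on-demand model is basically a refcount, roughly like the below
(simplified from what KVM does, locking omitted; not the exact code):

	static int kvm_usage_count;

	static int hardware_enable_all(void)
	{
		int r = 0;

		if (++kvm_usage_count == 1)
			r = vmxon_all_cpus();	/* hypothetical helper */
		return r;
	}

	static void hardware_disable_all(void)
	{
		if (--kvm_usage_count == 0)
			vmxoff_all_cpus();	/* hypothetical helper */
	}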

> It's not wrong to say that TDX is a
> KVM user.  If KVM wants 'kvm_usage_count' to go back to 0, it can shut
> down the TDX module.  Then there's no PAMT to worry about.
> 
> The shutdown would be something like:
> 
> 	1. TDX module shutdown
> 	2. Deallocate/Convert PAMT
> 	3. vmxoff
> 
> Then, no SEAMCALL failure because of vmxoff can cause a PAMT-induced #MC
> to be missed.

The limitation is that once the TDX module is shut down, it cannot be
initialized again unless it is updated at runtime.

Longer term, if we go with this design then there might be other problems when
other kernel components are using TDX.  For example, the VT-d driver will need
to be changed to support TDX-IO, and it will need to enable the TDX module much
earlier than KVM to do some initialization.  It might need to do some TDX work
(e.g., cleanup) while KVM is unloaded.  I am not super familiar with TDX-IO,
but it looks like we might have a problem here if we go with such a design.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 23:24             ` Huang, Kai
@ 2023-12-04 23:39               ` Dave Hansen
  2023-12-04 23:56                 ` Huang, Kai
  2023-12-05  2:04                 ` Sean Christopherson
  2023-12-04 23:41               ` Huang, Kai
  1 sibling, 2 replies; 66+ messages in thread
From: Dave Hansen @ 2023-12-04 23:39 UTC (permalink / raw)
  To: Huang, Kai, kvm, linux-kernel
  Cc: rafael, Gao, Chao, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, Williams, Dan J, x86

On 12/4/23 15:24, Huang, Kai wrote:
> On Mon, 2023-12-04 at 14:04 -0800, Hansen, Dave wrote:
...
> In ancient times KVM used to immediately enable VMX when it was loaded, but
> later it was changed to only enable VMX when there's an active VM, for the
> above reason.
> 
> See commit 10474ae8945ce ("KVM: Activate Virtualization On Demand").

Fine.  This doesn't need to change ... until you load TDX.  Once you
initialize the TDX module, no more out-of-tree VMMs for you.

That doesn't seem too insane.  This is yet *ANOTHER* reason that doing
dynamic TDX module initialization is a good idea.

>> It's not wrong to say that TDX is a
>> KVM user.  If KVM wants 'kvm_usage_count' to go back to 0, it can shut
>> down the TDX module.  Then there's no PAMT to worry about.
>>
>> The shutdown would be something like:
>>
>>       1. TDX module shutdown
>>       2. Deallocate/Convert PAMT
>>       3. vmxoff
>>
>> Then, no SEAMCALL failure because of vmxoff can cause a PAMT-induced #MC
>> to be missed.
> 
> The limitation is that once the TDX module is shut down, it cannot be
> initialized again unless it is updated at runtime.
> 
> Longer term, if we go with this design then there might be other problems when
> other kernel components are using TDX.  For example, the VT-d driver will need
> to be changed to support TDX-IO, and it will need to enable the TDX module much
> earlier than KVM to do some initialization.  It might need to do some TDX work
> (e.g., cleanup) while KVM is unloaded.  I am not super familiar with TDX-IO,
> but it looks like we might have a problem here if we go with such a design.

The burden for who does vmxon will simply need to change from KVM itself
to some common code that KVM depends on.  Probably not dissimilar to
those nutty (sorry folks, just calling it as I see 'em) multi-KVM module
patches that are floating around.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 23:24             ` Huang, Kai
  2023-12-04 23:39               ` Dave Hansen
@ 2023-12-04 23:41               ` Huang, Kai
  1 sibling, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-12-04 23:41 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: Huang, Ying, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, Gao, Chao, bp, rafael, peterz,
	sathyanarayanan.kuppuswamy, Brown, Len, Williams, Dan J

On Mon, 2023-12-04 at 23:24 +0000, Huang, Kai wrote:
> Longer term, if we go with this design then there might be other problems when
> other kernel components are using TDX.  For example, the VT-d driver will need
> to be changed to support TDX-IO, and it will need to enable the TDX module much
> earlier than KVM to do some initialization.  It might need to do some TDX work
> (e.g., cleanup) while KVM is unloaded.  I am not super familiar with TDX-IO,
> but it looks like we might have a problem here if we go with such a design.

Perhaps I shouldn't use the future feature as an argument, e.g., with multiple
TDX users we are likely to have a refcount to see whether we can truly shut
down TDX.

And VMX on/off will also need to be moved out of KVM for that work.

But the point is it's better not to assume how these kernel components will use
VMX on/off.  E.g., one may just choose to simply turn on VMX, do the SEAMCALL,
and then turn off VMX immediately, while the TDX module stays alive all the
time.

Keeping VMX on will suppress INIT; I guess that's another reason we prefer
turning VMX on only when needed.

/*      
 * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
 * reboot.  VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
 * GIF=0, i.e. if the crash occurred between CLGI and STGI.
 */
void cpu_emergency_disable_virtualization(void)
{
	...
}

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 23:39               ` Dave Hansen
@ 2023-12-04 23:56                 ` Huang, Kai
  2023-12-05  2:04                 ` Sean Christopherson
  1 sibling, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-12-04 23:56 UTC (permalink / raw)
  To: kvm, Hansen, Dave, linux-kernel
  Cc: Huang, Ying, x86, Luck, Tony, david, bagasdotme, ak,
	kirill.shutemov, mingo, seanjc, pbonzini, tglx, Yamahata, Isaku,
	nik.borisov, hpa, sagis, imammedo, Gao, Chao, bp, rafael, peterz,
	sathyanarayanan.kuppuswamy, Brown, Len, Williams, Dan J

On Mon, 2023-12-04 at 15:39 -0800, Dave Hansen wrote:
> On 12/4/23 15:24, Huang, Kai wrote:
> > On Mon, 2023-12-04 at 14:04 -0800, Hansen, Dave wrote:
> ...
> > In ancient times KVM used to immediately enable VMX when it was loaded, but
> > later it was changed to only enable VMX when there's an active VM, for the
> > above reason.
> > 
> > See commit 10474ae8945ce ("KVM: Activate Virtualization On Demand").
> 
> Fine.  This doesn't need to change ... until you load TDX.  Once you
> initialize the TDX module, no more out-of-tree VMMs for you.
> 
> That doesn't seem too insane.  This is yet *ANOTHER* reason that doing
> dynamic TDX module initialization is a good idea.

I don't have objection to this.

> 
> > > It's not wrong to say that TDX is a
> > > KVM user.  If KVM wants 'kvm_usage_count' to go back to 0, it can shut
> > > down the TDX module.  Then there's no PAMT to worry about.
> > > 
> > > The shutdown would be something like:
> > > 
> > >       1. TDX module shutdown
> > >       2. Deallocate/Convert PAMT
> > >       3. vmxoff
> > > 
> > > Then, no SEAMCALL failure because of vmxoff can cause a PAMT-induced #MC
> > > to be missed.
> > 
> > The limitation is that once the TDX module is shut down, it cannot be
> > initialized again unless it is updated at runtime.
> > 
> > Longer term, if we go with this design then there might be other problems when
> > other kernel components are using TDX.  For example, the VT-d driver will need
> > to be changed to support TDX-IO, and it will need to enable the TDX module much
> > earlier than KVM to do some initialization.  It might need to do some TDX work
> > (e.g., cleanup) while KVM is unloaded.  I am not super familiar with TDX-IO,
> > but it looks like we might have a problem here if we go with such a design.
> 
> The burden for who does vmxon will simply need to change from KVM itself
> to some common code that KVM depends on.  Probably not dissimilar to
> those nutty (sorry folks, just calling it as I see 'em) multi-KVM module
> patches that are floating around.
> 

Right, we will need to move VMX on/off out of KVM for that purpose.  I think
the point is it's better not to assume how these kernel components will use
VMX on/off.  E.g., one may just choose to simply turn on VMX, do the SEAMCALL,
and then turn off VMX immediately, while the TDX module stays alive all the
time.

Or we require that they all: 1) enable VMX; 2) enable/use TDX; 3) disable TDX
when no longer needed; 4) disable VMX.

But I don't have a strong opinion here either.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-04 23:39               ` Dave Hansen
  2023-12-04 23:56                 ` Huang, Kai
@ 2023-12-05  2:04                 ` Sean Christopherson
  2023-12-05 16:36                   ` Dave Hansen
  2023-12-05 16:36                   ` Luck, Tony
  1 sibling, 2 replies; 66+ messages in thread
From: Sean Christopherson @ 2023-12-05  2:04 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Kai Huang, kvm, linux-kernel, rafael, Chao Gao, Tony Luck, david,
	bagasdotme, ak, kirill.shutemov, mingo, pbonzini, tglx,
	Isaku Yamahata, nik.borisov, hpa, sagis, imammedo, peterz, bp,
	Len Brown, sathyanarayanan.kuppuswamy, Ying Huang,
	Dan J Williams, x86

On Mon, Dec 04, 2023, Dave Hansen wrote:
> On 12/4/23 15:24, Huang, Kai wrote:
> > On Mon, 2023-12-04 at 14:04 -0800, Hansen, Dave wrote:
> ...
> > In ancient times KVM used to immediately enable VMX when it was loaded, but
> > later it was changed to only enable VMX when there's an active VM, for the
> > above reason.
> > 
> > See commit 10474ae8945ce ("KVM: Activate Virtualization On Demand").

Huh, I always just assumed it was some backwards thinking about enabling VMX/SVM
being "dangerous" or something.

> Fine.  This doesn't need to change ... until you load TDX.  Once you
> initialize the TDX module, no more out-of-tree VMMs for you.

It's not just out-of-tree hypervisors, which IMO should be little more than an
afterthought.  The other more important issue is that being post-VMXON blocks INIT,
i.e. VMX needs to be disabled before reboot, suspend, etc.  Forcing kvm_usage_count
would work, but it would essentially do away with "graceful" reboots, i.e.
reboots where the host isn't running VMs and thus VMX is already disabled.
Having VMX be enabled so long as KVM is loaded would turn all reboots into
the "oh crap, the system is rebooting, quick do VMXOFF!" variety.

> That doesn't seem too insane.  This is yet *ANOTHER* reason that doing
> dynamic TDX module initialization is a good idea.
> 
> >> It's not wrong to say that TDX is a KVM user.  If KVM wants
> >> 'kvm_usage_count' to go back to 0, it can shut down the TDX module.  Then
> >> there's no PAMT to worry about.
> >>
> >> The shutdown would be something like:
> >>
> >>       1. TDX module shutdown
> >>       2. Deallocate/Convert PAMT
> >>       3. vmxoff
> >>
> >> Then, no SEAMCALL failure because of vmxoff can cause a PAMT-induced #MC
> >> to be missed.
> > 
> > The limitation is that once the TDX module is shut down, it cannot be
> > initialized again unless it is updated at runtime.
> > 
> > Longer term, if we go with this design then there might be other problems when
> > other kernel components are using TDX.  For example, the VT-d driver will need
> > to be changed to support TDX-IO, and it will need to enable the TDX module much
> > earlier than KVM to do some initialization.  It might need to do some TDX work
> > (e.g., cleanup) while KVM is unloaded.  I am not super familiar with TDX-IO,
> > but it looks like we might have a problem here if we go with such a design.
> 
> The burden for who does vmxon will simply need to change from KVM itself
> to some common code that KVM depends on.  Probably not dissimilar to
> those nutty (sorry folks, just calling it as I see 'em) multi-KVM module

You misspelled "amazing" ;-)

> patches that are floating around.

Joking aside, why shove TDX module ownership into KVM?  It honestly sounds like
a terrible fit, even without the whole TDX-IO mess.  KVM state is largely ephemeral,
in the sense that loading and unloading kvm.ko doesn't allocate/free much memory
or do all that much initialization or teardown.

TDX on the other hand is quite different.  IIRC the PAMT is hundreds of MiB, maybe
over a GiB in most expected use cases?  And also IIRC, TDH.SYS.INIT is a rather
long-running operation that blocks IRQs, NMIs, (SMIs?), etc.

So rather than shove TDX ownership into KVM and force KVM to figure out how to
manage the TDX module, why not do what us nutty people are suggesting and move
hardware enabling and TDX-module management into a dedicated base module (bonus
points if you call it vac.ko ;-) ).

Alternatively, we could have a dedicated kernel module for TDX, e.g. tdx.ko, and
then have tdx.ko and kvm.ko depend on vac.ko.  But I think that ends up being
quite gross and unnecessary, e.g. in such a setup, kvm-intel.ko ideally wouldn't
take a hard dependency on tdx.ko, as auto-loading tdx.ko would defeat some of the
purpose of the split, and KVM shouldn't fail to load just because TDX isn't supported.
But that'd mean conditionally doing request_module("tdx") or whatever and would
create other conundrums.

(Oof, typing that out made me realize that KVM depends on the PSP driver if
CONFIG_KVM_AMD_SEV=y, even if the platform owner has no intention of ever using
SEV/SEV-ES.  IIUC, it works because sp_mod_init() just registers a driver, i.e.
doesn't fail out if there's no PSP.  That's kinda gross).

Anyways, vac.ko provides an API to grab a reference to the TDX module, e.g. the
"create a VM" API gets extended to say "create a VM of the TDX variety", and then
vac.ko manages its refcounts to VMX and TDX accordingly.  And KVM obviously keeps
its existing behavior of getting and putting references for each VM.

That way userspace gets to decide when to (un)load tdx.ko without needing to add
a KVM module param or whatever to allow forcefully unloading tdx.ko (which would
be bizarre and probably quite difficult to implement correctly), and unloading
kvm-intel.ko wouldn't require unloading the TDX module.

The end behavior might not be all that different in the short term, but it would
give us more options, e.g. for this erratum, it would be quite easy for vac.ko to
let userspace choose between keeping VMX "on" (while the TDX module is loaded)
and potentially having imperfect #MC messages.

And out-of-tree hypervisors could even use vac.ko's exported APIs to manage hardware
enabling if they so choose.
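
A rough sketch of what such an API could look like (all names hypothetical,
nothing like this exists today):

	/* vac.ko: refcounted ownership of VMX and the TDX module. */
	int vac_get_vmx(void);	/* VMXON on all CPUs on the 0=>1 transition */
	void vac_put_vmx(void);	/* VMXOFF on all CPUs on the 1=>0 transition */
	int vac_get_tdx(void);	/* implies vac_get_vmx(); init module on first use */
	void vac_put_tdx(void);	/* module may be shut down on the last put */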

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
  2023-11-30 18:01   ` Tony Luck
  2023-12-01 20:35   ` Dave Hansen
@ 2023-12-05 14:25   ` Borislav Petkov
  2023-12-05 19:41     ` Huang, Kai
  2 siblings, 1 reply; 66+ messages in thread
From: Borislav Petkov @ 2023-12-05 14:25 UTC (permalink / raw)
  To: Kai Huang
  Cc: linux-kernel, kvm, x86, dave.hansen, kirill.shutemov, peterz,
	tony.luck, tglx, mingo, hpa, seanjc, pbonzini, rafael, david,
	dan.j.williams, len.brown, ak, isaku.yamahata, ying.huang,
	chao.gao, sathyanarayanan.kuppuswamy, nik.borisov, bagasdotme,
	sagis, imammedo

On Fri, Nov 10, 2023 at 12:55:59AM +1300, Kai Huang wrote:
> +static const char *mce_memory_info(struct mce *m)
> +{
> +	if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
> +		return NULL;
> +
> +	/*
> +	 * Certain initial generations of TDX-capable CPUs have an
> +	 * erratum.  A kernel non-temporal partial write to TDX private
> +	 * memory poisons that memory, and a subsequent read of that
> +	 * memory triggers #MC.
> +	 *
> +	 * However such #MC caused by software cannot be distinguished
> +	 * from the real hardware #MC.  Just print additional message
> +	 * to show such #MC may be result of the CPU erratum.
> +	 */
> +	if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
> +		return NULL;
> +
> +	return !tdx_is_private_mem(m->addr) ? NULL :
> +		"TDX private memory error. Possible kernel bug.";
> +}
> +
>  static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
>  {
>  	struct llist_node *pending;
>  	struct mce_evt_llist *l;
>  	int apei_err = 0;
> +	const char *memmsg;
>  
>  	/*
>  	 * Allow instrumentation around external facilities usage. Not that it
> @@ -283,6 +307,15 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
>  	}
>  	if (exp)
>  		pr_emerg(HW_ERR "Machine check: %s\n", exp);
> +	/*
> +	 * On confidential computing platforms such as TDX, an MCE
> +	 * may occur due to incorrect access to confidential memory.
> +	 * Print additional information for such errors.
> +	 */
> +	memmsg = mce_memory_info(final);
> +	if (memmsg)
> +		pr_emerg(HW_ERR "Machine check: %s\n", memmsg);
> +

No, this is not how this is done. First of all, this function should be
called something like

	mce_dump_aux_info()

or so to state that it is dumping some auxiliary info.

Then, it does:

	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
		return tdx_get_mce_info();

or so and you put that tdx_get_mce_info() function in TDX code and there
you do all your picking apart of things, what needs to be dumped or what
not, checking whether it is a memory error and so on.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05  2:04                 ` Sean Christopherson
@ 2023-12-05 16:36                   ` Dave Hansen
  2023-12-05 16:53                     ` Sean Christopherson
  2023-12-05 16:36                   ` Luck, Tony
  1 sibling, 1 reply; 66+ messages in thread
From: Dave Hansen @ 2023-12-05 16:36 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Kai Huang, kvm, linux-kernel, rafael, Chao Gao, Tony Luck, david,
	bagasdotme, ak, kirill.shutemov, mingo, pbonzini, tglx,
	Isaku Yamahata, nik.borisov, hpa, sagis, imammedo, peterz, bp,
	Len Brown, sathyanarayanan.kuppuswamy, Ying Huang,
	Dan J Williams, x86

On 12/4/23 18:04, Sean Christopherson wrote:
> Joking aside, why shove TDX module ownership into KVM?  It honestly sounds like
> a terrible fit, even without the whole TDX-IO mess.  KVM state is largely ephemeral,
> in the sense that loading and unloading kvm.ko doesn't allocate/free much memory
> or do all that much initialization or teardown.

Yeah, you have a good point there.  We really do need some core code to
manage VMXON/OFF now that there is increased interest outside of
_purely_ running VMs.

For the purposes of _this_ patch, I think I'm happy to leave open the
possibility that SEAMCALL can simply fail due to VMXOFF.  For now, it
means that we can't attribute #MCs to the PAMT unless a VM is running,
but that seems like a reasonable compromise for the moment.

Once TDX gains the ability to "pin" VMXON, the added precision here will
be appreciated.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* RE: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05  2:04                 ` Sean Christopherson
  2023-12-05 16:36                   ` Dave Hansen
@ 2023-12-05 16:36                   ` Luck, Tony
  2023-12-05 16:57                     ` Sean Christopherson
  1 sibling, 1 reply; 66+ messages in thread
From: Luck, Tony @ 2023-12-05 16:36 UTC (permalink / raw)
  To: Sean Christopherson, Hansen, Dave
  Cc: Huang, Kai, kvm, linux-kernel, rafael, Gao, Chao, david,
	bagasdotme, ak, kirill.shutemov, mingo, pbonzini, tglx, Yamahata,
	Isaku, nik.borisov, hpa, sagis, imammedo, peterz, bp, Brown, Len,
	sathyanarayanan.kuppuswamy, Huang, Ying, Williams, Dan J, x86

>> Fine.  This doesn't need to change ... until you load TDX.  Once you
>> initialize the TDX module, no more out-of-tree VMMs for you.
>
> It's not just out-of-tree hypervisors, which IMO should be little more than an
> afterthought.  The other more important issue is that being post-VMXON blocks INIT,

Does that make CPU offline a one-way process? Linux uses INIT to bring a CPU back
online again.

-Tony

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 16:36                   ` Dave Hansen
@ 2023-12-05 16:53                     ` Sean Christopherson
  0 siblings, 0 replies; 66+ messages in thread
From: Sean Christopherson @ 2023-12-05 16:53 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Kai Huang, kvm, linux-kernel, rafael, Chao Gao, Tony Luck, david,
	bagasdotme, ak, kirill.shutemov, mingo, pbonzini, tglx,
	Isaku Yamahata, nik.borisov, hpa, sagis, imammedo, peterz, bp,
	Len Brown, sathyanarayanan.kuppuswamy, Ying Huang,
	Dan J Williams, x86

On Tue, Dec 05, 2023, Dave Hansen wrote:
> On 12/4/23 18:04, Sean Christopherson wrote:
> > Joking aside, why shove TDX module ownership into KVM?  It honestly sounds like
> > a terrible fit, even without the whole TDX-IO mess.  KVM state is largely ephemeral,
> > in the sense that loading and unloading kvm.ko doesn't allocate/free much memory
> > or do all that much initialization or teardown.
> 
> Yeah, you have a good point there.  We really do need some core code to
> manage VMXON/OFF now that there is increased interest outside of
> _purely_ running VMs.
> 
> For the purposes of _this_ patch, I think I'm happy to leave open the
> possibility that SEAMCALL can simply fail due to VMXOFF.  For now, it
> means that we can't attribute #MCs to the PAMT unless a VM is running,
> but that seems like a reasonable compromise for the moment.

+1

> Once TDX gains the ability to "pin" VMXON, the added precision here will
> be appreciated.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 16:36                   ` Luck, Tony
@ 2023-12-05 16:57                     ` Sean Christopherson
  0 siblings, 0 replies; 66+ messages in thread
From: Sean Christopherson @ 2023-12-05 16:57 UTC (permalink / raw)
  To: Tony Luck
  Cc: Dave Hansen, Kai Huang, kvm, linux-kernel, rafael, Chao Gao,
	david, bagasdotme, ak, kirill.shutemov, mingo, pbonzini, tglx,
	Isaku Yamahata, nik.borisov, hpa, sagis, imammedo, peterz, bp,
	Len Brown, sathyanarayanan.kuppuswamy, Ying Huang,
	Dan J Williams, x86

On Tue, Dec 05, 2023, Tony Luck wrote:
> >> Fine.  This doesn't need to change ... until you load TDX.  Once you
> >> initialize the TDX module, no more out-of-tree VMMs for you.
> >
> > It's not just out-of-tree hypervisors, which IMO should be little more than an
> > afterthought.  The other more important issue is that being post-VMXON blocks INIT,
> 
> Does that make CPU offline a one-way process? Linux uses INIT to bring a CPU back
> online again.

No, KVM does VMXOFF on the CPU being offlined, and then VMXON if/when the CPU is
onlined again.  This also handles secondary CPUs for suspend/resume (the primary
CPU hooks .suspend() and .resume()).

static int kvm_offline_cpu(unsigned int cpu)
{
	mutex_lock(&kvm_lock);
	if (kvm_usage_count)
		hardware_disable_nolock(NULL);
	mutex_unlock(&kvm_lock);
	return 0;
}


static int kvm_online_cpu(unsigned int cpu)
{
	int ret = 0;

	/*
	 * Abort the CPU online process if hardware virtualization cannot
	 * be enabled. Otherwise running VMs would encounter unrecoverable
	 * errors when scheduled to this CPU.
	 */
	mutex_lock(&kvm_lock);
	if (kvm_usage_count)
		ret = __hardware_enable_nolock();
	mutex_unlock(&kvm_lock);
	return ret;
}

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 14:25   ` Borislav Petkov
@ 2023-12-05 19:41     ` Huang, Kai
  2023-12-05 19:56       ` Borislav Petkov
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-12-05 19:41 UTC (permalink / raw)
  To: bp
  Cc: kvm, sathyanarayanan.kuppuswamy, Hansen, Dave, david, bagasdotme,
	Luck, Tony, ak, kirill.shutemov, seanjc, mingo, pbonzini, tglx,
	Yamahata, Isaku, linux-kernel, nik.borisov, hpa, peterz, sagis,
	imammedo, Gao, Chao, rafael, Brown, Len, Huang, Ying, Williams,
	Dan J, x86

On Tue, 2023-12-05 at 15:25 +0100, Borislav Petkov wrote:
> On Fri, Nov 10, 2023 at 12:55:59AM +1300, Kai Huang wrote:
> > +static const char *mce_memory_info(struct mce *m)
> > +{
> > +	if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
> > +		return NULL;
> > +
> > +	/*
> > +	 * Certain initial generations of TDX-capable CPUs have an
> > +	 * erratum.  A kernel non-temporal partial write to TDX private
> > +	 * memory poisons that memory, and a subsequent read of that
> > +	 * memory triggers #MC.
> > +	 *
> > +	 * However, such a software-induced #MC cannot be distinguished
> > +	 * from a real hardware #MC.  Just print an additional message
> > +	 * noting that the #MC may be the result of the CPU erratum.
> > +	 */
> > +	if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
> > +		return NULL;
> > +
> > +	return !tdx_is_private_mem(m->addr) ? NULL :
> > +		"TDX private memory error. Possible kernel bug.";
> > +}
> > +
> >  static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
> >  {
> >  	struct llist_node *pending;
> >  	struct mce_evt_llist *l;
> >  	int apei_err = 0;
> > +	const char *memmsg;
> >  
> >  	/*
> >  	 * Allow instrumentation around external facilities usage. Not that it
> > @@ -283,6 +307,15 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
> >  	}
> >  	if (exp)
> >  		pr_emerg(HW_ERR "Machine check: %s\n", exp);
> > +	/*
> > +	 * Confidential computing platforms such as TDX may
> > +	 * raise an MCE due to incorrect access to confidential
> > +	 * memory.  Print additional information for such errors.
> > +	 */
> > +	memmsg = mce_memory_info(final);
> > +	if (memmsg)
> > +		pr_emerg(HW_ERR "Machine check: %s\n", memmsg);
> > +
> 
> No, this is not how this is done. First of all, this function should be
> called something like
> 
> 	mce_dump_aux_info()
> 
> or so to state that it is dumping some auxiliary info.
> 
> Then, it does:
> 
> 	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> 		return tdx_get_mce_info();
> 
> or so and you put that tdx_get_mce_info() function in TDX code and there
> you do all your picking apart of things, what needs to be dumped or what
> not, checking whether it is a memory error and so on.
> 
> Thx.
> 

Thanks Boris.  Looks good to me, with one exception: this is actually the TDX
host, not a TDX guest, so the above X86_FEATURE_TDX_GUEST CPU feature check
needs to be replaced with a host-side check.

Full incremental diff below.  Could you take a look and see whether this is
what you want?

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index a621721f63dd..0c02b66dcc41 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -115,13 +115,13 @@ bool platform_tdx_enabled(void);
 int tdx_cpu_enable(void);
 int tdx_enable(void);
 void tdx_reset_memory(void);
-bool tdx_is_private_mem(unsigned long phys);
+const char *tdx_get_mce_info(unsigned long phys);
 #else
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_cpu_enable(void) { return -ENODEV; }
 static inline int tdx_enable(void)  { return -ENODEV; }
 static inline void tdx_reset_memory(void) { }
-static inline bool tdx_is_private_mem(unsigned long phys) { return false; }
+static inline const char *tdx_get_mce_info(unsigned long phys) { return NULL; }
 #endif /* CONFIG_INTEL_TDX_HOST */

 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index e33537cfc507..b7e650b5f7ef 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -229,26 +229,20 @@ static void wait_for_panic(void)
        panic("Panicing machine check CPU died");
 }

-static const char *mce_memory_info(struct mce *m)
+static const char *mce_dump_aux_info(struct mce *m)
 {
-       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
-               return NULL;
-
        /*
-        * Certain initial generations of TDX-capable CPUs have an
-        * erratum.  A kernel non-temporal partial write to TDX private
-        * memory poisons that memory, and a subsequent read of that
-        * memory triggers #MC.
-        *
-        * However, such a software-induced #MC cannot be distinguished
-        * from a real hardware #MC.  Just print an additional message
-        * noting that the #MC may be the result of the CPU erratum.
+        * Confidential computing platforms such as TDX may
+        * raise an MCE due to incorrect access to confidential
+        * memory.  Print additional information for such errors.
         */
-       if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
                return NULL;

-       return !tdx_is_private_mem(m->addr) ? NULL :
-               "TDX private memory error. Possible kernel bug.";
+       if (platform_tdx_enabled())
+               return tdx_get_mce_info(m->addr);
+
+       return NULL;
 }

 static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
@@ -256,7 +250,7 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
        struct llist_node *pending;
        struct mce_evt_llist *l;
        int apei_err = 0;
-       const char *memmsg;
+       const char *auxinfo;

        /*
         * Allow instrumentation around external facilities usage. Not that it
@@ -307,14 +301,10 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
        }
        if (exp)
                pr_emerg(HW_ERR "Machine check: %s\n", exp);
-       /*
-        * Confidential computing platforms such as TDX may
-        * raise an MCE due to incorrect access to confidential
-        * memory.  Print additional information for such errors.
-        */
-       memmsg = mce_memory_info(final);
-       if (memmsg)
-               pr_emerg(HW_ERR "Machine check: %s\n", memmsg);
+
+       auxinfo = mce_dump_aux_info(final);
+       if (auxinfo)
+               pr_emerg(HW_ERR "Machine check: %s\n", auxinfo);

        if (!fake_panic) {
                if (panic_timeout == 0)
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 1b84dcdf63cb..cfbaec0f43b2 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1329,7 +1329,7 @@ static bool is_pamt_page(unsigned long phys)
  * because it cannot distinguish whether the #MC was caused by a
  * software bug or a real hardware error anyway.
  */
-bool tdx_is_private_mem(unsigned long phys)
+static bool tdx_is_private_mem(unsigned long phys)
 {
        struct tdx_module_args args = {
                .rcx = phys & PAGE_MASK,
@@ -1391,6 +1391,25 @@ bool tdx_is_private_mem(unsigned long phys)
        }
 }

+const char *tdx_get_mce_info(unsigned long phys)
+{
+       /*
+        * Certain initial generations of TDX-capable CPUs have an
+        * erratum.  A kernel non-temporal partial write to TDX private
+        * memory poisons that memory, and a subsequent read of that
+        * memory triggers #MC.
+        *
+        * However, such a software-induced #MC cannot be distinguished
+        * from a real hardware #MC.  Just print an additional message
+        * noting that the #MC may be the result of the CPU erratum.
+        */
+       if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+               return NULL;
+
+       return !tdx_is_private_mem(phys) ? NULL :
+               "TDX private memory error. Possible kernel bug.";
+}
+
 static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
                                            u32 *nr_tdx_keyids)
 {


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 19:41     ` Huang, Kai
@ 2023-12-05 19:56       ` Borislav Petkov
  2023-12-05 20:08         ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Borislav Petkov @ 2023-12-05 19:56 UTC (permalink / raw)
  To: Huang, Kai
  Cc: kvm, sathyanarayanan.kuppuswamy, Hansen, Dave, david, bagasdotme,
	Luck, Tony, ak, kirill.shutemov, seanjc, mingo, pbonzini, tglx,
	Yamahata, Isaku, linux-kernel, nik.borisov, hpa, peterz, sagis,
	imammedo, Gao, Chao, rafael, Brown, Len, Huang, Ying, Williams,
	Dan J, x86

On Tue, Dec 05, 2023 at 07:41:41PM +0000, Huang, Kai wrote:
> -static const char *mce_memory_info(struct mce *m)
> +static const char *mce_dump_aux_info(struct mce *m)
>  {
> -       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
> -               return NULL;
> -
>         /*
> -        * Certain initial generations of TDX-capable CPUs have an
> -        * erratum.  A kernel non-temporal partial write to TDX private
> -        * memory poisons that memory, and a subsequent read of that
> -        * memory triggers #MC.
> -        *
> -        * However, such a software-induced #MC cannot be distinguished
> -        * from a real hardware #MC.  Just print an additional message
> -        * noting that the #MC may be the result of the CPU erratum.
> +        * Confidential computing platforms such as TDX may
> +        * raise an MCE due to incorrect access to confidential
> +        * memory.  Print additional information for such errors.
>          */
> -       if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
> +       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
>                 return NULL;

What's the point of doing this on !TDX? None.

> -       return !tdx_is_private_mem(m->addr) ? NULL :
> -               "TDX private memory error. Possible kernel bug.";
> +       if (platform_tdx_enabled())

So is this the "host is TDX" check?

Not an X86_FEATURE flag but something homegrown. And Kirill is trying to
switch the CC_ATTRs to X86_FEATURE_ flags for SEV, but here you guys are
using something homegrown.

Why not an X86_FEATURE_ flag?

The CC_ATTR things are for guests, I guess, but the host feature checks
should be X86_FEATURE_ flags.

Hmmm.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 19:56       ` Borislav Petkov
@ 2023-12-05 20:08         ` Huang, Kai
  2023-12-05 20:29           ` Borislav Petkov
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-12-05 20:08 UTC (permalink / raw)
  To: bp
  Cc: kvm, x86, Hansen, Dave, david, bagasdotme, ak, linux-kernel,
	seanjc, mingo, pbonzini, tglx, Yamahata, Isaku, nik.borisov,
	Luck, Tony, kirill.shutemov, hpa, peterz, imammedo, sagis, Gao,
	Chao, rafael, sathyanarayanan.kuppuswamy, Huang, Ying, Brown,
	Len, Williams, Dan J

On Tue, 2023-12-05 at 20:56 +0100, Borislav Petkov wrote:
> On Tue, Dec 05, 2023 at 07:41:41PM +0000, Huang, Kai wrote:
> > -static const char *mce_memory_info(struct mce *m)
> > +static const char *mce_dump_aux_info(struct mce *m)
> >  {
> > -       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
> > -               return NULL;
> > -
> >         /*
> > -        * Certain initial generations of TDX-capable CPUs have an
> > -        * erratum.  A kernel non-temporal partial write to TDX private
> > -        * memory poisons that memory, and a subsequent read of that
> > -        * memory triggers #MC.
> > -        *
> > -        * However, such a software-induced #MC cannot be distinguished
> > -        * from a real hardware #MC.  Just print an additional message
> > -        * noting that the #MC may be the result of the CPU erratum.
> > +        * Confidential computing platforms such as TDX may
> > +        * raise an MCE due to incorrect access to confidential
> > +        * memory.  Print additional information for such errors.
> >          */
> > -       if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
> > +       if (!m || !mce_is_memory_error(m) || !mce_usable_address(m))
> >                 return NULL;
> 
> What's the point of doing this on !TDX? None.

OK. I'll move this inside tdx_get_mce_info(). 

> 
> > -       return !tdx_is_private_mem(m->addr) ? NULL :
> > -               "TDX private memory error. Possible kernel bug.";
> > +       if (platform_tdx_enabled())
> 
> So is this the "host is TDX" check?
> 
> Not an X86_FEATURE flag but something homegrown. And Kirill is trying to
> switch the CC_ATTRs to X86_FEATURE_ flags for SEV, but here you guys are
> using something homegrown.
> 
> Why not an X86_FEATURE_ flag?
> 

The difference is that for the TDX host the kernel needs to initialize the
TDX module first before TDX can be used.  The module initialization is done
at runtime, and platform_tdx_enabled() here only returns whether the BIOS
has enabled TDX.

IIUC an X86_FEATURE_ flag doesn't suit this purpose, because the flag being
present means the kernel has done some enabling work and the feature is
ready to use.


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 20:08         ` Huang, Kai
@ 2023-12-05 20:29           ` Borislav Petkov
  2023-12-05 20:33             ` Huang, Kai
  0 siblings, 1 reply; 66+ messages in thread
From: Borislav Petkov @ 2023-12-05 20:29 UTC (permalink / raw)
  To: Huang, Kai
  Cc: kvm, x86, Hansen, Dave, david, bagasdotme, ak, linux-kernel,
	seanjc, mingo, pbonzini, tglx, Yamahata, Isaku, nik.borisov,
	Luck, Tony, kirill.shutemov, hpa, peterz, imammedo, sagis, Gao,
	Chao, rafael, sathyanarayanan.kuppuswamy, Huang, Ying, Brown,
	Len, Williams, Dan J

On Tue, Dec 05, 2023 at 08:08:34PM +0000, Huang, Kai wrote:
> The difference is that for the TDX host the kernel needs to initialize the
> TDX module first before TDX can be used.  The module initialization is done
> at runtime, and platform_tdx_enabled() here only returns whether the BIOS
> has enabled TDX.
> 
> IIUC an X86_FEATURE_ flag doesn't suit this purpose, because the flag being
> present means the kernel has done some enabling work and the feature is
> ready to use.

Which flag do you mean? X86_FEATURE_TDX_GUEST?

I mean, you would set a separate X86_FEATURE_TDX or so flag to denote
that the BIOS has enabled it, at the end of that tdx_init() in the first
patch.
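
Untested sketch of what I mean - the flag name and the bit slot are purely
illustrative, pick whatever free synthetic bit fits:

	/* arch/x86/include/asm/cpufeatures.h: add a synthetic (software) bit */
	#define X86_FEATURE_TDX_HOST_PLATFORM	( 7*32+31) /* illustrative slot */

	/* at the end of tdx_init(), once detection has succeeded: */
	setup_force_cpu_cap(X86_FEATURE_TDX_HOST_PLATFORM);

The query side is then simply cpu_feature_enabled(X86_FEATURE_TDX_HOST_PLATFORM).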

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 20:29           ` Borislav Petkov
@ 2023-12-05 20:33             ` Huang, Kai
  2023-12-05 20:41               ` Borislav Petkov
  0 siblings, 1 reply; 66+ messages in thread
From: Huang, Kai @ 2023-12-05 20:33 UTC (permalink / raw)
  To: bp
  Cc: kvm, Brown, Len, Hansen, Dave, david, bagasdotme, ak,
	linux-kernel, kirill.shutemov, mingo, pbonzini, seanjc, Yamahata,
	Isaku, nik.borisov, tglx, Luck, Tony, hpa, peterz, imammedo,
	sagis, Gao, Chao, rafael, sathyanarayanan.kuppuswamy, Huang,
	Ying, x86, Williams, Dan J

On Tue, 2023-12-05 at 21:29 +0100, Borislav Petkov wrote:
> On Tue, Dec 05, 2023 at 08:08:34PM +0000, Huang, Kai wrote:
> > The difference is that for the TDX host the kernel needs to initialize the
> > TDX module first before TDX can be used.  The module initialization is done
> > at runtime, and platform_tdx_enabled() here only returns whether the BIOS
> > has enabled TDX.
> > 
> > IIUC an X86_FEATURE_ flag doesn't suit this purpose, because the flag being
> > present means the kernel has done some enabling work and the feature is
> > ready to use.
> 
> Which flag do you mean? X86_FEATURE_TDX_GUEST?
> 
> I mean, you would set a separate X86_FEATURE_TDX or so flag to denote
> that the BIOS has enabled it, at the end of that tdx_init() in the first
> patch.
> 

Yes, I understand what you said.  My point is that X86_FEATURE_TDX doesn't
suit, because when it is set the kernel hasn't actually done any enabling
work yet, so TDX is not available even though X86_FEATURE_TDX is set.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 20:33             ` Huang, Kai
@ 2023-12-05 20:41               ` Borislav Petkov
  2023-12-05 20:49                 ` Dave Hansen
  2023-12-05 20:58                 ` Huang, Kai
  0 siblings, 2 replies; 66+ messages in thread
From: Borislav Petkov @ 2023-12-05 20:41 UTC (permalink / raw)
  To: Huang, Kai
  Cc: kvm, Brown, Len, Hansen, Dave, david, bagasdotme, ak,
	linux-kernel, kirill.shutemov, mingo, pbonzini, seanjc, Yamahata,
	Isaku, nik.borisov, tglx, Luck, Tony, hpa, peterz, imammedo,
	sagis, Gao, Chao, rafael, sathyanarayanan.kuppuswamy, Huang,
	Ying, x86, Williams, Dan J

On Tue, Dec 05, 2023 at 08:33:14PM +0000, Huang, Kai wrote:
> Yes, I understand what you said.  My point is that X86_FEATURE_TDX doesn't
> suit, because when it is set the kernel hasn't actually done any enabling
> work yet, so TDX is not available even though X86_FEATURE_TDX is set.

You define an X86_FEATURE flag. You set it *when* TDX is available and
enabled. Then you query that flag. This is how synthetic flags work.

In your patchset, when do you know that TDX is enabled? Point me to the
code place pls.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 20:41               ` Borislav Petkov
@ 2023-12-05 20:49                 ` Dave Hansen
  2023-12-05 20:58                 ` Huang, Kai
  1 sibling, 0 replies; 66+ messages in thread
From: Dave Hansen @ 2023-12-05 20:49 UTC (permalink / raw)
  To: Borislav Petkov, Huang, Kai
  Cc: kvm, Brown, Len, david, bagasdotme, ak, linux-kernel,
	kirill.shutemov, mingo, pbonzini, seanjc, Yamahata, Isaku,
	nik.borisov, tglx, Luck, Tony, hpa, peterz, imammedo, sagis, Gao,
	Chao, rafael, sathyanarayanan.kuppuswamy, Huang, Ying, x86,
	Williams, Dan J

On 12/5/23 12:41, Borislav Petkov wrote:
> On Tue, Dec 05, 2023 at 08:33:14PM +0000, Huang, Kai wrote:
>> Yes, I understand what you said.  My point is that X86_FEATURE_TDX doesn't
>> suit, because when it is set the kernel hasn't actually done any enabling
>> work yet, so TDX is not available even though X86_FEATURE_TDX is set.
> You define an X86_FEATURE flag. You set it *when* TDX is available and
> enabled. Then you query that flag. This is how synthetic flags work.
> 
> In your patchset, when do you know that TDX is enabled? Point me to the
> code place pls.

TDX can be "ready" in a couple of different ways:

1. The module is there and running SEAMCALLs (platform_tdx_enabled())
2. The module is initialized and ready to run guests.  This happens
   after init_tdmrs() and init_tdx_module() return success.

#1 is known at boot.
#2 doesn't happen until just before KVM runs the first TDX guest.
Here's the patch for #2:

> https://lore.kernel.org/all/566ff8b05090c935d980d5ace3389d31c7cce7df.1699527082.git.kai.huang@intel.com/
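
In code terms, a rough sketch of the two checks (names from this series,
error handling omitted):

	/* #1: BIOS enabled TDX; the module is there to take SEAMCALLs. */
	if (!platform_tdx_enabled())
		return -ENODEV;

	/* #2: initialize the module; only done on demand, today by KVM. */
	ret = tdx_enable();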



^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum
  2023-12-05 20:41               ` Borislav Petkov
  2023-12-05 20:49                 ` Dave Hansen
@ 2023-12-05 20:58                 ` Huang, Kai
  1 sibling, 0 replies; 66+ messages in thread
From: Huang, Kai @ 2023-12-05 20:58 UTC (permalink / raw)
  To: bp
  Cc: kvm, Williams, Dan J, Hansen, Dave, Luck, Tony, bagasdotme, ak,
	david, linux-kernel, kirill.shutemov, seanjc, pbonzini, mingo,
	Yamahata, Isaku, nik.borisov, tglx, hpa, peterz, imammedo, sagis,
	Gao, Chao, Brown, Len, rafael, sathyanarayanan.kuppuswamy, Huang,
	Ying, x86

On Tue, 2023-12-05 at 21:41 +0100, Borislav Petkov wrote:
> On Tue, Dec 05, 2023 at 08:33:14PM +0000, Huang, Kai wrote:
> > Yes, I understand what you said.  My point is that X86_FEATURE_TDX doesn't
> > suit, because when it is set the kernel hasn't actually done any enabling
> > work yet, so TDX is not available even though X86_FEATURE_TDX is set.
> 
> You define an X86_FEATURE flag. You set it *when* TDX is available and
> enabled. Then you query that flag. This is how synthetic flags work.
> 
> In your patchset, when do you know that TDX is enabled? Point me to the
> code place pls.
> 

This patchset provides two functions to allow the user of TDX to enable TDX at
runtime when needed: tdx_cpu_enable() and tdx_enable().

Please see patch:

https://lore.kernel.org/lkml/cover.1699527082.git.kai.huang@intel.com/T/#m96cb9aaa4e323d4e29f7ff6c532f7d33a01995a7

So TDX will be available once tdx_enable() has completed successfully.

For now KVM is the only user of TDX, and tdx_enable() will be called by KVM on
demand at runtime.
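
For reference, a rough sketch of the expected calling convention (not the
actual KVM patch; tdx_enable() assumes the CPU hotplug read lock is held
and that VMXON and tdx_cpu_enable() have been done on all online CPUs, so
a caller would do something like the below, with VMXON handling omitted):

static atomic_t tdx_cpu_err;

static void do_tdx_cpu_enable(void *unused)
{
	/* tdx_cpu_enable() must run on the CPU being enabled; here via IPI. */
	if (tdx_cpu_enable())
		atomic_inc(&tdx_cpu_err);
}

static int enable_tdx(void)
{
	int ret;

	cpus_read_lock();
	on_each_cpu(do_tdx_cpu_enable, NULL, 1);
	ret = atomic_read(&tdx_cpu_err) ? -EIO : tdx_enable();
	cpus_read_unlock();

	return ret;
}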

Hope I've made this clear.  Thanks.



^ permalink raw reply	[flat|nested] 66+ messages in thread

end of thread, other threads:[~2023-12-05 20:59 UTC | newest]

Thread overview: 66+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-09 11:55 [PATCH v15 00/23] TDX host kernel support Kai Huang
2023-11-09 11:55 ` [PATCH v15 01/23] x86/virt/tdx: Detect TDX during kernel boot Kai Huang
2023-11-09 11:55 ` [PATCH v15 02/23] x86/tdx: Define TDX supported page sizes as macros Kai Huang
2023-11-09 11:55 ` [PATCH v15 03/23] x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC Kai Huang
2023-11-09 11:55 ` [PATCH v15 04/23] x86/cpu: Detect TDX partial write machine check erratum Kai Huang
2023-11-09 11:55 ` [PATCH v15 05/23] x86/virt/tdx: Handle SEAMCALL no entropy error in common code Kai Huang
2023-11-09 16:38   ` Dave Hansen
2023-11-14 19:24   ` Isaku Yamahata
2023-11-15 10:41     ` Huang, Kai
2023-11-15 19:26       ` Isaku Yamahata
2023-11-09 11:55 ` [PATCH v15 06/23] x86/virt/tdx: Add SEAMCALL error printing for module initialization Kai Huang
2023-11-09 11:55 ` [PATCH v15 07/23] x86/virt/tdx: Add skeleton to enable TDX on demand Kai Huang
2023-11-09 11:55 ` [PATCH v15 08/23] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory Kai Huang
2023-11-09 11:55 ` [PATCH v15 09/23] x86/virt/tdx: Get module global metadata for module initialization Kai Huang
2023-11-09 23:29   ` Dave Hansen
2023-11-10  2:23     ` Huang, Kai
2023-11-15 19:35   ` Isaku Yamahata
2023-11-16  3:19     ` Huang, Kai
2023-11-09 11:55 ` [PATCH v15 10/23] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions Kai Huang
2023-11-09 11:55 ` [PATCH v15 11/23] x86/virt/tdx: Fill out " Kai Huang
2023-11-09 11:55 ` [PATCH v15 12/23] x86/virt/tdx: Allocate and set up PAMTs for TDMRs Kai Huang
2023-11-09 11:55 ` [PATCH v15 13/23] x86/virt/tdx: Designate reserved areas for all TDMRs Kai Huang
2023-11-09 11:55 ` [PATCH v15 14/23] x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID Kai Huang
2023-11-09 11:55 ` [PATCH v15 15/23] x86/virt/tdx: Configure global KeyID on all packages Kai Huang
2023-11-09 11:55 ` [PATCH v15 16/23] x86/virt/tdx: Initialize all TDMRs Kai Huang
2023-11-09 11:55 ` [PATCH v15 17/23] x86/kexec: Flush cache of TDX private memory Kai Huang
2023-11-27 18:13   ` Dave Hansen
2023-11-27 19:33     ` Huang, Kai
2023-11-27 20:02       ` Huang, Kai
2023-11-27 20:05       ` Dave Hansen
2023-11-27 20:52         ` Huang, Kai
2023-11-27 21:06           ` Dave Hansen
2023-11-27 22:09             ` Huang, Kai
2023-11-09 11:55 ` [PATCH v15 18/23] x86/virt/tdx: Keep TDMRs when module initialization is successful Kai Huang
2023-11-09 11:55 ` [PATCH v15 19/23] x86/virt/tdx: Improve readability of module initialization error handling Kai Huang
2023-11-09 11:55 ` [PATCH v15 20/23] x86/kexec(): Reset TDX private memory on platforms with TDX erratum Kai Huang
2023-11-09 11:55 ` [PATCH v15 21/23] x86/virt/tdx: Handle TDX interaction with ACPI S3 and deeper states Kai Huang
2023-11-30 17:20   ` Dave Hansen
2023-11-09 11:55 ` [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum Kai Huang
2023-11-30 18:01   ` Tony Luck
2023-12-01 20:35   ` Dave Hansen
2023-12-03 11:44     ` Huang, Kai
2023-12-04 17:07       ` Dave Hansen
2023-12-04 21:00         ` Huang, Kai
2023-12-04 22:04           ` Dave Hansen
2023-12-04 23:24             ` Huang, Kai
2023-12-04 23:39               ` Dave Hansen
2023-12-04 23:56                 ` Huang, Kai
2023-12-05  2:04                 ` Sean Christopherson
2023-12-05 16:36                   ` Dave Hansen
2023-12-05 16:53                     ` Sean Christopherson
2023-12-05 16:36                   ` Luck, Tony
2023-12-05 16:57                     ` Sean Christopherson
2023-12-04 23:41               ` Huang, Kai
2023-12-05 14:25   ` Borislav Petkov
2023-12-05 19:41     ` Huang, Kai
2023-12-05 19:56       ` Borislav Petkov
2023-12-05 20:08         ` Huang, Kai
2023-12-05 20:29           ` Borislav Petkov
2023-12-05 20:33             ` Huang, Kai
2023-12-05 20:41               ` Borislav Petkov
2023-12-05 20:49                 ` Dave Hansen
2023-12-05 20:58                 ` Huang, Kai
2023-11-09 11:56 ` [PATCH v15 23/23] Documentation/x86: Add documentation for TDX host support Kai Huang
2023-11-13  8:40 ` [PATCH v15 00/23] TDX host kernel support Nikolay Borisov
2023-11-13  9:11   ` Huang, Kai
