* [PATCH v3 00/11] Add TDX Guest Support (Initial support)
@ 2021-06-18 22:57 Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 01/11] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT Kuppuswamy Sathyanarayanan
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

Hi All,

Intel's Trust Domain Extensions (TDX) protect guest VMs from malicious
hosts and some physical attacks. This series adds the basic TDX guest
infrastructure support (including #VE handler support, and #VE support
for halt and CPUID). This is just a subset of the bare-minimum TDX support
patch list that is required for a minimally functional TDX guest. Other
basic features like #VE support for IO and MMIO, boot optimization fixes
and shared-mm support will be submitted in separate patch sets. To make
reviewing easier we split the work into smaller series. This series alone
is not necessarily fully functional.

The host-side support patches, and support for advanced TD guest features
like attestation or debug mode, will be submitted at a later time. Note
that at this point the code is not secure, with some known holes in
drivers, and it has not yet been fully audited and fuzzed.

TDX has a lot of similarities to SEV. It enhances the confidentiality
of guest memory and state (like registers) and includes a new exception
(#VE) for the same basic reasons as SEV-ES. Like SEV-SNP (not merged
yet), TDX limits the host's ability to effect changes in the guest
physical address space. With TDX the host cannot access the guest memory,
so various functionality that would normally be done in KVM has moved
into a (paravirtualized) guest. Partially this is done using the
Virtualization Exception (#VE) and partially with direct paravirtual hooks.

The TDX architecture also includes a new CPU mode called
Secure-Arbitration Mode (SEAM). The software (TDX module) running in this
mode arbitrates interactions between host and guest and implements many of
the guarantees of the TDX architecture.

Some of the key differences between a TD and a regular VM are:

1. Multi-CPU bring-up is done using the ACPI MADT wake-up table.
2. A new #VE exception handler is added. The TDX module injects a #VE
   exception into the guest TD for instructions that need to be emulated,
   disallowed MSR accesses, etc.
3. By default memory is marked as private, and the TD will selectively
   share it with the VMM as needed.
   
Note that the kernel will also need to be hardened against low-level inputs
from the now-untrusted host. This will be done in follow-on patches.

You can find TDX-related documents at the following link:

https://software.intel.com/content/www/br/pt/develop/articles/intel-trust-domain-extensions.html

Changes since v1 (v2 was a partial-set submission):
 * The patch titled "x86/x86: Add early_is_tdx_guest() interface" has been
   moved out of this series.
 * The rest of the change log is added per patch.

Kirill A. Shutemov (7):
  x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT
  x86/tdx: Get TD execution environment information via TDINFO
  x86/traps: Add #VE support for TDX guest
  x86/tdx: Add HLT support for TDX guest
  x86/tdx: Wire up KVM hypercalls
  x86/tdx: Add MSR support for TDX guest
  x86/tdx: Handle CPUID via #VE

Kuppuswamy Sathyanarayanan (4):
  x86/tdx: Introduce INTEL_TDX_GUEST config option
  x86/cpufeatures: Add TDX Guest CPU feature
  x86: Introduce generic protected guest abstraction
  x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper
    functions

 arch/Kconfig                           |   3 +
 arch/x86/Kconfig                       |  22 ++
 arch/x86/include/asm/cpufeatures.h     |   1 +
 arch/x86/include/asm/idtentry.h        |   4 +
 arch/x86/include/asm/irqflags.h        |  40 ++--
 arch/x86/include/asm/kvm_para.h        |  22 ++
 arch/x86/include/asm/paravirt.h        |  20 +-
 arch/x86/include/asm/paravirt_types.h  |   3 +-
 arch/x86/include/asm/protected_guest.h |  20 ++
 arch/x86/include/asm/sev.h             |   3 +
 arch/x86/include/asm/tdx.h             | 109 ++++++++++
 arch/x86/kernel/Makefile               |   1 +
 arch/x86/kernel/asm-offsets.c          |  23 ++
 arch/x86/kernel/head64.c               |   3 +
 arch/x86/kernel/idt.c                  |   6 +
 arch/x86/kernel/paravirt.c             |   4 +-
 arch/x86/kernel/sev.c                  |  17 ++
 arch/x86/kernel/tdcall.S               | 283 +++++++++++++++++++++++++
 arch/x86/kernel/tdx.c                  | 246 +++++++++++++++++++++
 arch/x86/kernel/traps.c                |  69 ++++++
 include/linux/protected_guest.h        |  30 +++
 21 files changed, 898 insertions(+), 31 deletions(-)
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 arch/x86/include/asm/tdx.h
 create mode 100644 arch/x86/kernel/tdcall.S
 create mode 100644 arch/x86/kernel/tdx.c
 create mode 100644 include/linux/protected_guest.h

-- 
2.25.1



* [PATCH v3 01/11] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option Kuppuswamy Sathyanarayanan
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

CONFIG_PARAVIRT_XXL is mainly defined/used by Xen PV guests. For
other VM guest types, the features supported under CONFIG_PARAVIRT
are self-sufficient. CONFIG_PARAVIRT mainly provides support for
TLB flush operations and time-related operations.

For a TDX guest as well, the paravirt calls under CONFIG_PARAVIRT meet
most of its requirements, except for the HLT and SAFE_HLT paravirt
calls, which are currently defined under CONFIG_PARAVIRT_XXL.

Since enabling CONFIG_PARAVIRT_XXL is too much bloat for TDX-guest-like
platforms, move the HLT and SAFE_HLT paravirt calls under
CONFIG_PARAVIRT.

Moving the HLT and SAFE_HLT paravirt calls is not fatal and should not
break any functionality for current users of CONFIG_PARAVIRT.

Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/include/asm/irqflags.h       | 40 +++++++++++++++------------
 arch/x86/include/asm/paravirt.h       | 20 +++++++-------
 arch/x86/include/asm/paravirt_types.h |  3 +-
 arch/x86/kernel/paravirt.c            |  4 ++-
 4 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index c5ce9845c999..f3bb33b1715d 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)
 
 #endif
 
+#ifndef CONFIG_PARAVIRT
+#ifndef __ASSEMBLY__
+/*
+ * Used in the idle loop; sti takes one instruction cycle
+ * to complete:
+ */
+static inline __cpuidle void arch_safe_halt(void)
+{
+	native_safe_halt();
+}
+
+/*
+ * Used when interrupts are already enabled or to
+ * shutdown the processor:
+ */
+static inline __cpuidle void halt(void)
+{
+	native_halt();
+}
+#endif /* __ASSEMBLY__ */
+#endif /* CONFIG_PARAVIRT */
+
 #ifdef CONFIG_PARAVIRT_XXL
 #include <asm/paravirt.h>
 #else
@@ -80,24 +102,6 @@ static __always_inline void arch_local_irq_enable(void)
 	native_irq_enable();
 }
 
-/*
- * Used in the idle loop; sti takes one instruction cycle
- * to complete:
- */
-static inline __cpuidle void arch_safe_halt(void)
-{
-	native_safe_halt();
-}
-
-/*
- * Used when interrupts are already enabled or to
- * shutdown the processor:
- */
-static inline __cpuidle void halt(void)
-{
-	native_halt();
-}
-
 /*
  * For spinlocks, etc:
  */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index da3a1ac82be5..d323a626c7a8 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -97,6 +97,16 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
 	PVOP_VCALL1(mmu.exit_mmap, mm);
 }
 
+static inline void arch_safe_halt(void)
+{
+	PVOP_VCALL0(irq.safe_halt);
+}
+
+static inline void halt(void)
+{
+	PVOP_VCALL0(irq.halt);
+}
+
 #ifdef CONFIG_PARAVIRT_XXL
 static inline void load_sp0(unsigned long sp0)
 {
@@ -162,16 +172,6 @@ static inline void __write_cr4(unsigned long x)
 	PVOP_VCALL1(cpu.write_cr4, x);
 }
 
-static inline void arch_safe_halt(void)
-{
-	PVOP_VCALL0(irq.safe_halt);
-}
-
-static inline void halt(void)
-{
-	PVOP_VCALL0(irq.halt);
-}
-
 static inline void wbinvd(void)
 {
 	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d9d6b0203ec4..40082847f314 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -150,10 +150,9 @@ struct pv_irq_ops {
 	struct paravirt_callee_save save_fl;
 	struct paravirt_callee_save irq_disable;
 	struct paravirt_callee_save irq_enable;
-
+#endif
 	void (*safe_halt)(void);
 	void (*halt)(void);
-#endif
 } __no_randomize_layout;
 
 struct pv_mmu_ops {
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 04cafc057bed..124e0f6c5d1c 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -283,9 +283,11 @@ struct paravirt_patch_template pv_ops = {
 	.irq.save_fl		= __PV_IS_CALLEE_SAVE(native_save_fl),
 	.irq.irq_disable	= __PV_IS_CALLEE_SAVE(native_irq_disable),
 	.irq.irq_enable		= __PV_IS_CALLEE_SAVE(native_irq_enable),
+#endif /* CONFIG_PARAVIRT_XXL */
+
+	/* Irq HLT ops. */
 	.irq.safe_halt		= native_safe_halt,
 	.irq.halt		= native_halt,
-#endif /* CONFIG_PARAVIRT_XXL */
 
 	/* Mmu ops. */
 	.mmu.flush_tlb_user	= native_flush_tlb_local,
-- 
2.25.1



* [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 01/11] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-19 11:59   ` Juergen Gross
  2021-06-18 22:57 ` [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature Kuppuswamy Sathyanarayanan
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

Add INTEL_TDX_GUEST config option to selectively compile
TDX guest support.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/Kconfig | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0045e1b44190..ff79263aebd1 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -876,6 +876,21 @@ config ACRN_GUEST
 	  IOT with small footprint and real-time features. More details can be
 	  found in https://projectacrn.org/.
 
+config INTEL_TDX_GUEST
+	bool "Intel Trusted Domain eXtensions Guest Support"
+	depends on X86_64 && CPU_SUP_INTEL && PARAVIRT
+	depends on SECURITY
+	select PARAVIRT_XL
+	select X86_X2APIC
+	select SECURITY_LOCKDOWN_LSM
+	help
+	  Provide support for running in a trusted domain on Intel processors
+	  equipped with Trusted Domain eXtensions. TDX is a new Intel
+	  technology that extends VMX and Memory Encryption with a new kind of
+	  virtual machine guest called Trust Domain (TD). A TD is designed to
+	  run in a CPU mode that protects the confidentiality of TD memory
+	  contents and the TD’s CPU state from other software, including VMM.
+
 endif #HYPERVISOR_GUEST
 
 source "arch/x86/Kconfig.cpu"
-- 
2.25.1



* [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 01/11] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 23:39   ` Borislav Petkov
  2021-07-15 11:56   ` Xiaoyao Li
  2021-06-18 22:57 ` [PATCH v3 04/11] x86: Introduce generic protected guest abstraction Kuppuswamy Sathyanarayanan
                   ` (8 subsequent siblings)
  11 siblings, 2 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

Add CPU feature detection for Trusted Domain Extensions support. The TDX
feature adds capabilities to keep guest register state and memory
isolated from the hypervisor.

On TDX guest platforms, executing CPUID(eax=0x21, ecx=0) will return the
following values in EAX, EBX, ECX and EDX.

EAX:  Maximum sub-leaf number:  0
EBX/EDX/ECX:  Vendor string:

EBX =  "Inte"
EDX =  "lTDX"
ECX =  "    "

So when the above condition is true, set the X86_FEATURE_TDX_GUEST feature
cap bit.
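
For illustration only (not part of this patch), once the cap bit is set,
other code can gate TDX-specific paths with the usual cpufeature helpers.
A minimal sketch, where in_tdx_guest() is just a hypothetical wrapper name:

	#include <asm/cpufeature.h>

	/* Hedged sketch: gate TDX-specific code on the new feature bit */
	static inline bool in_tdx_guest(void)
	{
		return boot_cpu_has(X86_FEATURE_TDX_GUEST);
	}

A later patch in this series does the equivalent check with
static_cpu_has(X86_FEATURE_TDX_GUEST).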

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---

Changes since v2:
 * Fixed debug prints as per Borislav suggestion.

Changes since v1:
 * Fixed commit log issues reported by Borislav.
 * Moved header file include to the start of tdx.h.
 * Added pr_fmt for TDX.
 * Simplified cpuid_has_tdx_guest() implementation as per
   Borislav comments.

 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/tdx.h         | 20 ++++++++++++++++++++
 arch/x86/kernel/Makefile           |  1 +
 arch/x86/kernel/head64.c           |  3 +++
 arch/x86/kernel/tdx.c              | 29 +++++++++++++++++++++++++++++
 5 files changed, 54 insertions(+)
 create mode 100644 arch/x86/include/asm/tdx.h
 create mode 100644 arch/x86/kernel/tdx.c

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ac37830ae941..dddc3a27cc8a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -238,6 +238,7 @@
 #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
 #define X86_FEATURE_PVUNLOCK		( 8*32+20) /* "" PV unlock function */
 #define X86_FEATURE_VCPUPREEMPT		( 8*32+21) /* "" PV vcpu_is_preempted function */
+#define X86_FEATURE_TDX_GUEST		( 8*32+22) /* Trusted Domain Extensions Guest */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
 #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
new file mode 100644
index 000000000000..c738bde944d1
--- /dev/null
+++ b/arch/x86/include/asm/tdx.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2020 Intel Corporation */
+#ifndef _ASM_X86_TDX_H
+#define _ASM_X86_TDX_H
+
+#include <linux/cpufeature.h>
+
+#define TDX_CPUID_LEAF_ID	0x21
+
+#ifdef CONFIG_INTEL_TDX_GUEST
+
+void __init tdx_early_init(void);
+
+#else
+
+static inline void tdx_early_init(void) { };
+
+#endif /* CONFIG_INTEL_TDX_GUEST */
+
+#endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 0f66682ac02a..af09ce93a641 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -126,6 +126,7 @@ obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o
 
 obj-$(CONFIG_JAILHOUSE_GUEST)	+= jailhouse.o
+obj-$(CONFIG_INTEL_TDX_GUEST)	+= tdx.o
 
 obj-$(CONFIG_EISA)		+= eisa.o
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..d1a4942ae160 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -40,6 +40,7 @@
 #include <asm/extable.h>
 #include <asm/trapnr.h>
 #include <asm/sev.h>
+#include <asm/tdx.h>
 
 /*
  * Manage page tables very early on.
@@ -491,6 +492,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	kasan_early_init();
 
+	tdx_early_init();
+
 	idt_setup_early_handler();
 
 	copy_bootdata(__va(real_mode_data));
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
new file mode 100644
index 000000000000..b1492e076168
--- /dev/null
+++ b/arch/x86/kernel/tdx.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2020 Intel Corporation */
+
+#undef pr_fmt
+#define pr_fmt(fmt)     "x86/tdx: " fmt
+
+#include <asm/tdx.h>
+
+static inline bool cpuid_has_tdx_guest(void)
+{
+	u32 eax, sig[3];
+
+	if (cpuid_eax(0) < TDX_CPUID_LEAF_ID)
+		return false;
+
+	cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[2], &sig[1]);
+
+	return !memcmp("IntelTDX    ", sig, 12);
+}
+
+void __init tdx_early_init(void)
+{
+	if (!cpuid_has_tdx_guest())
+		return;
+
+	setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
+
+	pr_info("Guest initialized\n");
+}
-- 
2.25.1



* [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (2 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-24 15:01   ` Borislav Petkov
  2021-06-28 17:52   ` Tom Lendacky
  2021-06-18 22:57 ` [PATCH v3 05/11] x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper functions Kuppuswamy Sathyanarayanan
                   ` (7 subsequent siblings)
  11 siblings, 2 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

Add a generic way to check if we run with an encrypted guest, without
requiring x86-specific ifdefs. This can then be used in
non-architecture-specific code.

prot_guest_has() is used to check for protected guest feature
flags.
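
As an illustration (not part of this patch), a caller in generic code would
look roughly like the sketch below; need_unencrypted_dma() is a hypothetical
name used purely for the example:

	#include <linux/protected_guest.h>

	/* Hedged sketch: no x86-specific #ifdefs needed in the caller */
	static bool need_unencrypted_dma(void)
	{
		/* True for TDX and SEV guests, per the flag handlers below */
		return prot_guest_has(PR_GUEST_MEM_ENCRYPT);
	}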

Originally-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

Change since v1:
 * Introduced PR_GUEST_TDX and PR_GUEST_SEV vendor flags as per
   Boris suggestion.
 * Replaced is_tdx_guest() with if (boot_cpu_data.x86_vendor ==
   X86_VENDOR_INTEL) in prot_guest_has().
 * Modified tdx_protected_guest_has() and sev_protected_guest_has()
   to support vendor flags.

 arch/Kconfig                           |  3 +++
 arch/x86/Kconfig                       |  2 ++
 arch/x86/include/asm/protected_guest.h | 20 +++++++++++++++++
 arch/x86/include/asm/sev.h             |  3 +++
 arch/x86/include/asm/tdx.h             |  4 ++++
 arch/x86/kernel/sev.c                  | 17 +++++++++++++++
 arch/x86/kernel/tdx.c                  | 17 +++++++++++++++
 include/linux/protected_guest.h        | 30 ++++++++++++++++++++++++++
 8 files changed, 96 insertions(+)
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 include/linux/protected_guest.h

diff --git a/arch/Kconfig b/arch/Kconfig
index c45b770d3579..3c5bf55ee752 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1011,6 +1011,9 @@ config HAVE_ARCH_NVRAM_OPS
 config ISA_BUS_API
 	def_bool ISA
 
+config ARCH_HAS_PROTECTED_GUEST
+	bool
+
 #
 # ABI hall of shame
 #
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ff79263aebd1..d506aae29dd9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -883,6 +883,7 @@ config INTEL_TDX_GUEST
 	select PARAVIRT_XL
 	select X86_X2APIC
 	select SECURITY_LOCKDOWN_LSM
+	select ARCH_HAS_PROTECTED_GUEST
 	help
 	  Provide support for running in a trusted domain on Intel processors
 	  equipped with Trusted Domain eXtensions. TDX is a new Intel
@@ -1539,6 +1540,7 @@ config AMD_MEM_ENCRYPT
 	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
 	select INSTRUCTION_DECODER
 	select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+	select ARCH_HAS_PROTECTED_GUEST
 	help
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index 000000000000..d47668dee6c2
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2020 Intel Corporation */
+#ifndef _ASM_PROTECTED_GUEST_H
+#define _ASM_PROTECTED_GUEST_H 1
+
+#include <asm/processor.h>
+#include <asm/tdx.h>
+#include <asm/sev.h>
+
+static inline bool prot_guest_has(unsigned long flag)
+{
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+		return tdx_protected_guest_has(flag);
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+		return sev_protected_guest_has(flag);
+
+	return false;
+}
+
+#endif /* _ASM_PROTECTED_GUEST_H */
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index fa5cd05d3b5b..e9b0b93a3157 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -81,12 +81,15 @@ static __always_inline void sev_es_nmi_complete(void)
 		__sev_es_nmi_complete();
 }
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
+bool sev_protected_guest_has(unsigned long flag);
+
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
 static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
+static inline bool sev_protected_guest_has(unsigned long flag) { return false; }
 #endif
 
 #endif
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index c738bde944d1..1c17c9080a2c 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -11,10 +11,14 @@
 
 void __init tdx_early_init(void);
 
+bool tdx_protected_guest_has(unsigned long flag);
+
 #else
 
 static inline void tdx_early_init(void) { };
 
+static inline bool tdx_protected_guest_has(unsigned long flag) { return false; }
+
 #endif /* CONFIG_INTEL_TDX_GUEST */
 
 #endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 651b81cd648e..3e88576555d2 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -19,6 +19,7 @@
 #include <linux/memblock.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/protected_guest.h>
 
 #include <asm/cpu_entry_area.h>
 #include <asm/stacktrace.h>
@@ -1493,3 +1494,19 @@ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
 	while (true)
 		halt();
 }
+
+bool sev_protected_guest_has(unsigned long flag)
+{
+	switch (flag) {
+	case PR_GUEST_MEM_ENCRYPT:
+	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
+	case PR_GUEST_UNROLL_STRING_IO:
+	case PR_GUEST_HOST_MEM_ENCRYPT:
+		return true;
+	case PR_GUEST_SEV:
+		return sev_active();
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(sev_protected_guest_has);
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index b1492e076168..ae3334a2b29d 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -4,6 +4,8 @@
 #undef pr_fmt
 #define pr_fmt(fmt)     "x86/tdx: " fmt
 
+#include <linux/protected_guest.h>
+
 #include <asm/tdx.h>
 
 static inline bool cpuid_has_tdx_guest(void)
@@ -18,6 +20,21 @@ static inline bool cpuid_has_tdx_guest(void)
 	return !memcmp("IntelTDX    ", sig, 12);
 }
 
+bool tdx_protected_guest_has(unsigned long flag)
+{
+	switch (flag) {
+	case PR_GUEST_MEM_ENCRYPT:
+	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
+	case PR_GUEST_UNROLL_STRING_IO:
+	case PR_GUEST_SHARED_MAPPING_INIT:
+	case PR_GUEST_TDX:
+		return static_cpu_has(X86_FEATURE_TDX_GUEST);
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(tdx_protected_guest_has);
+
 void __init tdx_early_init(void)
 {
 	if (!cpuid_has_tdx_guest())
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index 000000000000..c5b7547e5a68
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _LINUX_PROTECTED_GUEST_H
+#define _LINUX_PROTECTED_GUEST_H 1
+
+/* Protected Guest Feature Flags (leave 0-0xfff for vendor specific flags) */
+
+/* 0-ff is reserved for Intel specific flags */
+#define PR_GUEST_TDX				0x0000
+
+/* 100-1ff is reserved for AMD specific flags */
+#define PR_GUEST_SEV				0x0100
+
+/* Support for guest encryption */
+#define PR_GUEST_MEM_ENCRYPT			0x1000
+/* Encryption support is active */
+#define PR_GUEST_MEM_ENCRYPT_ACTIVE		0x1001
+/* Support for unrolled string IO */
+#define PR_GUEST_UNROLL_STRING_IO		0x1002
+/* Support for host memory encryption */
+#define PR_GUEST_HOST_MEM_ENCRYPT		0x1003
+/* Support for shared mapping initialization (after early init) */
+#define PR_GUEST_SHARED_MAPPING_INIT		0x1004
+
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+#include <asm/protected_guest.h>
+#else
+static inline bool prot_guest_has(unsigned long flag) { return false; }
+#endif
+
+#endif /* _LINUX_PROTECTED_GUEST_H */
-- 
2.25.1



* [PATCH v3 05/11] x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper functions
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (3 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 04/11] x86: Introduce generic protected guest abstraction Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 06/11] x86/tdx: Get TD execution environment information via TDINFO Kuppuswamy Sathyanarayanan
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

Guests communicate with VMMs using hypercalls. Historically, these
are implemented using instructions that are known to cause VMEXITs,
like vmcall, vmlaunch, etc. However, with TDX, VMEXITs no longer
expose guest state to the host. This prevents the old hypercall
mechanisms from working. So, to communicate with the VMM, the TDX
specification defines a new instruction called TDCALL.

In a TDX-based VM, since the VMM is an untrusted entity, an intermediary
layer (the TDX module) exists between the host and the guest to
facilitate secure communication. TDX guests communicate with the TDX
module using the TDCALL instruction.

Both TDX module and VMM communication use the TDCALL instruction. The
value of the RAX register when executing TDCALL is used to determine
the TDCALL type. If TDCALL is executed with the value "0" in RAX, it
is a service request to the VMM. Any other value in RAX means it is a
TDX-module-related call.

Implement common helper functions to communicate with the TDX Module
and VMM (using TDCALL instruction).
   
__tdx_hypercall()    - request services from the VMM.
__tdx_module_call()  - communicate with the TDX Module.

Also define an additional wrapper, _tdx_hypercall(), to cover the common
use case of the __tdx_hypercall() function with a BUG_ON() check on
TDCALL failure. Since each use case of __tdx_module_call() is
different, it does not need such a wrapper.
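
For a feel of how these helpers are used, here is a rough sketch of the
call sites added later in this series (TDINFO and EXIT_REASON_HLT are
defined by those later patches; this snippet is illustrative only):

	struct tdx_module_output out = {0};
	u64 ret;

	/* Ask the TDX module for TD metadata (TDINFO leaf) */
	ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);

	/* Ask the VMM to halt the vCPU via TDVMCALL[Instruction.HLT] */
	ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_disabled(), 0, 0, 0, NULL);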

Implement __tdx_module_call() and __tdx_hypercall() helper functions
in assembly.

The rationale for choosing assembly over inline assembly is that the
__tdx_hypercall() implementation is over 70 lines of instructions
(with comments); implementing it with inline assembly would make it
hard to read.
   
Also, just like syscalls, not all TDVMCALL/TDCALL use cases need to
use the same set of argument registers. The implementation here picks
the current worst-case scenario for TDCALL (4 registers). For TDCALLs
with fewer than 4 arguments, there will end up being a few superfluous
(cheap) instructions. But this approach maximizes code reuse. The
same argument applies to the __tdx_hypercall() function as well.

For the registers used by the TDCALL instruction, please check the TDX
GHCI specification, sections 2.4 and 3.

https://software.intel.com/content/dam/develop/external/us/en/documents/intel-tdx-guest-hypervisor-communication-interface.pdf

Originally-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

Change since v1:
 * Fixed commit comments format issues as per review comments.
 * Fixed empty line and other typo issues found in review.
 * Removed do_tdx_hypercall() helper function and modified
   __tdx_hypercall() to include do_tdx_hypercall() implementation
   and to accept TDVMCALL type as argument.
 * Since the number of arguments in __tdx_hypercall() is more than
   6, it has been modified to get the 7th argument from the caller's stack.
 * Instead of leaving the output pointer in a register before making
   the TDCALL, store it on the stack.
 * Instead of triggering ud2 for TDCALL failures in __tdx_hypercall(),
   it is modified to return the TDCALL status as the return value. We will
   let the user add appropriate error info before triggering a fatal error.
   Also, extended struct tdx_hypercall_output to store the r10 register,
   which contains the hypercall error code.
 * Included TDCALL ABI details in __tdx_module_call() and __tdx_hypercall.
 * Removed tdx_hypercall_out_r11() helper function. Since it is not really
   useful.
 * Added _tdx_hypercall() as a wrapper for __tdx_hypercall() with BUG_ON
   check for TDCALL failure.

 arch/x86/include/asm/tdx.h    |  40 +++++
 arch/x86/kernel/Makefile      |   2 +-
 arch/x86/kernel/asm-offsets.c |  23 +++
 arch/x86/kernel/tdcall.S      | 281 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/tdx.c         |  23 +++
 5 files changed, 368 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/tdcall.S

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 1c17c9080a2c..6eea835694c0 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -4,8 +4,40 @@
 #define _ASM_X86_TDX_H
 
 #include <linux/cpufeature.h>
+#include <linux/types.h>
 
 #define TDX_CPUID_LEAF_ID	0x21
+#define TDX_HYPERCALL_STANDARD  0
+
+/*
+ * Used in __tdx_module_call() helper function to gather the
+ * output registers' values of TDCALL instruction when requesting
+ * services from the TDX module. This is software only structure
+ * and not related to TDX module/VMM.
+ */
+struct tdx_module_output {
+	u64 rcx;
+	u64 rdx;
+	u64 r8;
+	u64 r9;
+	u64 r10;
+	u64 r11;
+};
+
+/*
+ * Used in __tdx_hypercall() helper function to gather the
+ * output registers' values of TDCALL instruction when requesting
+ * services from the VMM. This is software only structure
+ * and not related to TDX module/VMM.
+ */
+struct tdx_hypercall_output {
+	u64 r10;
+	u64 r11;
+	u64 r12;
+	u64 r13;
+	u64 r14;
+	u64 r15;
+};
 
 #ifdef CONFIG_INTEL_TDX_GUEST
 
@@ -13,6 +45,14 @@ void __init tdx_early_init(void);
 
 bool tdx_protected_guest_has(unsigned long flag);
 
+/* Helper function used to communicate with the TDX module */
+u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+		      struct tdx_module_output *out);
+
+/* Helper function used to request services from VMM */
+u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14,
+		    u64 r15, struct tdx_hypercall_output *out);
+
 #else
 
 static inline void tdx_early_init(void) { };
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index af09ce93a641..3410f03ef7aa 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -126,7 +126,7 @@ obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o
 
 obj-$(CONFIG_JAILHOUSE_GUEST)	+= jailhouse.o
-obj-$(CONFIG_INTEL_TDX_GUEST)	+= tdx.o
+obj-$(CONFIG_INTEL_TDX_GUEST)	+= tdcall.o tdx.o
 
 obj-$(CONFIG_EISA)		+= eisa.o
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index ecd3fd6993d1..0494ec01218d 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -23,6 +23,10 @@
 #include <xen/interface/xen.h>
 #endif
 
+#ifdef CONFIG_INTEL_TDX_GUEST
+#include <asm/tdx.h>
+#endif
+
 #ifdef CONFIG_X86_32
 # include "asm-offsets_32.c"
 #else
@@ -68,6 +72,25 @@ static void __used common(void)
 	OFFSET(XEN_vcpu_info_arch_cr2, vcpu_info, arch.cr2);
 #endif
 
+#ifdef CONFIG_INTEL_TDX_GUEST
+	BLANK();
+	/* Offset for fields in tdx_module_output */
+	OFFSET(TDX_MODULE_rcx, tdx_module_output, rcx);
+	OFFSET(TDX_MODULE_rdx, tdx_module_output, rdx);
+	OFFSET(TDX_MODULE_r8,  tdx_module_output, r8);
+	OFFSET(TDX_MODULE_r9,  tdx_module_output, r9);
+	OFFSET(TDX_MODULE_r10, tdx_module_output, r10);
+	OFFSET(TDX_MODULE_r11, tdx_module_output, r11);
+
+	/* Offset for fields in tdx_hypercall_output */
+	OFFSET(TDX_HYPERCALL_r10, tdx_hypercall_output, r10);
+	OFFSET(TDX_HYPERCALL_r11, tdx_hypercall_output, r11);
+	OFFSET(TDX_HYPERCALL_r12, tdx_hypercall_output, r12);
+	OFFSET(TDX_HYPERCALL_r13, tdx_hypercall_output, r13);
+	OFFSET(TDX_HYPERCALL_r14, tdx_hypercall_output, r14);
+	OFFSET(TDX_HYPERCALL_r15, tdx_hypercall_output, r15);
+#endif
+
 	BLANK();
 	OFFSET(BP_scratch, boot_params, scratch);
 	OFFSET(BP_secure_boot, boot_params, secure_boot);
diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
new file mode 100644
index 000000000000..3cf1d1af7889
--- /dev/null
+++ b/arch/x86/kernel/tdcall.S
@@ -0,0 +1,281 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <asm/asm-offsets.h>
+#include <asm/asm.h>
+#include <asm/frame.h>
+#include <asm/unwind_hints.h>
+
+#include <linux/linkage.h>
+#include <linux/bits.h>
+#include <linux/errno.h>
+
+#define TDG_R10		BIT(10)
+#define TDG_R11		BIT(11)
+#define TDG_R12		BIT(12)
+#define TDG_R13		BIT(13)
+#define TDG_R14		BIT(14)
+#define TDG_R15		BIT(15)
+
+#ifdef CONFIG_FRAME_POINTER
+/* Frame offset + Return address + 8 (for arg1) */
+#define ARG7_SP_OFFSET		(FRAME_OFFSET + 0x10)
+#else
+#define ARG7_SP_OFFSET		(0x08)
+#endif
+
+/*
+ * Expose registers R10-R15 to VMM. It is passed via RCX register
+ * to the TDX Module, which will be used by the TDX module to
+ * identify the list of registers exposed to VMM. Each bit in this
+ * mask represents a register ID. You can find the bit field details
+ * in TDX GHCI specification.
+ */
+#define TDVMCALL_EXPOSE_REGS_MASK	( TDG_R10 | TDG_R11 | \
+					  TDG_R12 | TDG_R13 | \
+					  TDG_R14 | TDG_R15 )
+
+/*
+ * TDX guests use the TDCALL instruction to make requests to the
+ * TDX module and hypercalls to the VMM. It is supported in
+ * Binutils >= 2.36.
+ */
+#define tdcall .byte 0x66,0x0f,0x01,0xcc
+
+/*
+ * __tdx_module_call()  - Helper function used by TDX guests to request
+ * services from the TDX module (does not include VMM services).
+ *
+ * This function serves as a wrapper to move user call arguments to the
+ * correct registers as specified by TDCALL ABI and share it with the
+ * TDX module. If the TDCALL operation is successful and a valid
+ * "struct tdx_module_output" pointer is available (in "out" argument),
+ * output from the TDX module is saved to the memory specified in the
+ * "out" pointer. Also the status of the TDCALL operation is returned
+ * back to the user as a function return value.
+ *
+ *-------------------------------------------------------------------------
+ * TDCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX                 - TDCALL Leaf number.
+ * RCX,RDX,R8-R9       - TDCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX                 - TDCALL instruction error code.
+ * RCX,RDX,R8-R11      - TDCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __tdx_module_call() function ABI:
+ *
+ * @fn  (RDI)          - TDCALL Leaf ID,    moved to RAX
+ * @rcx (RSI)          - Input parameter 1, moved to RCX
+ * @rdx (RDX)          - Input parameter 2, moved to RDX
+ * @r8  (RCX)          - Input parameter 3, moved to R8
+ * @r9  (R8)           - Input parameter 4, moved to R9
+ *
+ * @out (R9)           - struct tdx_module_output pointer
+ *                       stored temporarily in R12 (not
+ *                       shared with the TDX module). It
+ *                       can be NULL.
+ *
+ * Return status of TDCALL via RAX.
+ */
+SYM_FUNC_START(__tdx_module_call)
+	FRAME_BEGIN
+
+	/*
+	 * R12 will be used as temporary storage for
+	 * struct tdx_module_output pointer. You can
+	 * find struct tdx_module_output details in
+	 * arch/x86/include/asm/tdx.h. Also note that
+	 * registers R12-R15 are not used by TDCALL
+	 * services supported by this helper function.
+	 */
+
+	/* Callee saved, so preserve it */
+	push %r12
+
+	/*
+	 * Push output pointer to stack, after TDCALL operation,
+	 * it will be fetched into R12 register.
+	 */
+	push %r9
+
+	/* Mangle function call ABI into TDCALL ABI: */
+	/* Move TDCALL Leaf ID to RAX */
+	mov %rdi, %rax
+	/* Move input 4 to R9 */
+	mov %r8,  %r9
+	/* Move input 3 to R8 */
+	mov %rcx, %r8
+	/* Move input 1 to RCX */
+	mov %rsi, %rcx
+	/* Leave input param 2 in RDX */
+
+	tdcall
+
+	/* Fetch output pointer from stack to R12 */
+	pop %r12
+
+	/* Check for TDCALL success: 0 - Successful, otherwise failed */
+	test %rax, %rax
+	jnz 1f
+
+	/*
+	 * __tdx_module_call() can be initiated without an output pointer.
+	 * So, check if caller provided an output struct before storing
+	 * output registers.
+	 */
+	test %r12, %r12
+	jz 1f
+
+	/* Copy TDCALL result registers to output struct: */
+	movq %rcx, TDX_MODULE_rcx(%r12)
+	movq %rdx, TDX_MODULE_rdx(%r12)
+	movq %r8,  TDX_MODULE_r8(%r12)
+	movq %r9,  TDX_MODULE_r9(%r12)
+	movq %r10, TDX_MODULE_r10(%r12)
+	movq %r11, TDX_MODULE_r11(%r12)
+1:
+	/* Restore the state of R12 register */
+	pop %r12
+
+	FRAME_END
+	ret
+SYM_FUNC_END(__tdx_module_call)
+
+/*
+ * __tdx_hypercall()  - Helper function used by TDX guests to request
+ * services from the VMM. All requests are made via the TDX module
+ * using TDCALL instruction.
+ *
+ * This function serves as a wrapper to move user call arguments to the
+ * correct registers as specified by TDCALL ABI and share it with VMM
+ * via the TDX module. After TDCALL operation, output from the VMM is
+ * saved to the memory specified in the "out" (struct tdx_hypercall_output)
+ * pointer. 
+ *
+ *-------------------------------------------------------------------------
+ * TD VMCALL ABI:
+ *-------------------------------------------------------------------------
+ *
+ * Input Registers:
+ *
+ * RAX                 - TDCALL instruction leaf number (0 - TDG.VP.VMCALL)
+ * RCX                 - BITMAP which controls which part of TD Guest GPR
+ *                       is passed as-is to VMM and back.
+ * R10                 - Set 0 to indicate TDCALL follows standard TDX ABI
+ *                       specification. Non zero value indicates vendor
+ *                       specific ABI.
+ * R11                 - VMCALL sub function number
+ * RBX, RBP, RDI, RSI  - Used to pass VMCALL sub function specific arguments.
+ * R8-R9, R12-R15      - Same as above.
+ *
+ * Output Registers:
+ *
+ * RAX                 - TDCALL instruction status (Not related to hypercall
+ *                        output).
+ * R10                 - Hypercall output error code.
+ * R11-R15             - Hypercall sub function specific output values.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __tdx_hypercall() function ABI:
+ *
+ * @type  (RDI)        - TD VMCALL type, moved to R10
+ * @fn    (RSI)        - TD VMCALL sub function, moved to R11
+ * @r12   (RDX)        - Input parameter 1, moved to R12
+ * @r13   (RCX)        - Input parameter 2, moved to R13
+ * @r14   (R8)         - Input parameter 3, moved to R14
+ * @r15   (R9)         - Input parameter 4, moved to R15
+ *
+ * @out   (stack)      - struct tdx_hypercall_output pointer (cannot be NULL)
+ *
+ * On successful completion, return TDCALL status or -EINVAL for invalid
+ * inputs.
+ */
+SYM_FUNC_START(__tdx_hypercall)
+	FRAME_BEGIN
+
+	/* Move argument 7 from caller stack to RAX */
+	movq ARG7_SP_OFFSET(%rsp), %rax
+
+	/* Check if caller provided an output struct */
+	test %rax, %rax
+	/* If out pointer is NULL, return -EINVAL */
+	jz 1f
+
+	/* Save callee-saved GPRs as mandated by the x86_64 ABI */
+	push %r15
+	push %r14
+	push %r13
+	push %r12
+
+	/*
+	 * Save R9 and output pointer (rax) in stack, it will be used
+	 * again when storing the output registers after TDCALL
+	 * operation.
+	 */
+	push %r9
+	push %rax
+
+	/* Mangle function call ABI into TDCALL ABI: */
+	/* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
+	xor %eax, %eax
+	/* Move TDVMCALL type (standard vs vendor) in R10 */
+	mov %rdi, %r10
+	/* Move TDVMCALL sub function id to R11 */
+	mov %rsi, %r11
+	/* Move input 1 to R12 */
+	mov %rdx, %r12
+	/* Move input 2 to R13 */
+	mov %rcx, %r13
+	/* Move input 3 to R14 */
+	mov %r8,  %r14
+	/* Move input 4 to R15 */
+	mov %r9,  %r15
+
+	movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
+
+	tdcall
+
+	/* Restore output pointer to R9 */
+	pop  %r9
+
+	/* Copy hypercall result registers to output struct: */
+	movq %r10, TDX_HYPERCALL_r10(%r9)
+	movq %r11, TDX_HYPERCALL_r11(%r9)
+	movq %r12, TDX_HYPERCALL_r12(%r9)
+	movq %r13, TDX_HYPERCALL_r13(%r9)
+	movq %r14, TDX_HYPERCALL_r14(%r9)
+	movq %r15, TDX_HYPERCALL_r15(%r9)
+
+	/*
+	 * Zero out registers exposed to the VMM to avoid
+	 * speculative execution with VMM-controlled values.
+	 * This needs to include all registers present in
+	 * TDVMCALL_EXPOSE_REGS_MASK (except R12-R15).
+	 * R12-R15 context will be restored.
+	 */
+	xor %r10d, %r10d
+	xor %r11d, %r11d
+
+	/* Restore state of R9 register */
+	pop %r9
+
+	/* Restore callee-saved GPRs as mandated by the x86_64 ABI */
+	pop %r12
+	pop %r13
+	pop %r14
+	pop %r15
+
+	jmp 2f
+1:
+	movq $(-EINVAL), %rax
+2:
+	FRAME_END
+
+	retq
+SYM_FUNC_END(__tdx_hypercall)
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index ae3334a2b29d..9e846b0d7353 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -8,6 +8,29 @@
 
 #include <asm/tdx.h>
 
+/*
+ * Wrapper for standard use of __tdx_hypercall with BUG_ON() check
+ * for TDCALL error.
+ */
+static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14,
+				 u64 r15, struct tdx_hypercall_output *out)
+{
+	struct tdx_hypercall_output outl = {0};
+	u64 err;
+
+	/* __tdx_hypercall() does not accept NULL output pointer */
+	if (!out)
+		out = &outl;
+
+	err = __tdx_hypercall(TDX_HYPERCALL_STANDARD, fn, r12, r13, r14,
+			      r15, out);
+
+	/* Non zero return value indicates buggy TDX module, so panic */
+	BUG_ON(err);
+
+	return out->r10;
+}
+
 static inline bool cpuid_has_tdx_guest(void)
 {
 	u32 eax, sig[3];
-- 
2.25.1



* [PATCH v3 06/11] x86/tdx: Get TD execution environment information via TDINFO
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (4 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 05/11] x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper functions Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 07/11] x86/traps: Add #VE support for TDX guest Kuppuswamy Sathyanarayanan
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

Per Guest-Host-Communication Interface (GHCI) for Intel Trust
Domain Extensions (Intel TDX) specification, sec 2.4.2,
TDCALL[TDINFO] provides basic TD execution environment information, not
provided by CPUID.

Call TDINFO during early boot; its output will be used for subsequent
system initialization.

The call provides info on which bit in the pfn is used to indicate that a
page is shared with the host, and on attributes of the TD, such as debug.

Information about the number of CPUs need not be saved because there are
no users so far for it.
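
As a sketch of where gpa_width is headed (not part of this patch, and
assuming the "shared" GPA bit is the topmost guest-physical-address bit,
i.e. bit gpa_width - 1), a later helper could look roughly like:

	#include <linux/bits.h>

	/* Hedged sketch: derive the shared-page mask from TDINFO output */
	static u64 tdg_shared_mask(void)
	{
		return BIT_ULL(td_info.gpa_width - 1);
	}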

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/kernel/tdx.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 9e846b0d7353..98778f44bbc6 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -8,6 +8,14 @@
 
 #include <asm/tdx.h>
 
+/* TDX Module call Leaf IDs */
+#define TDINFO				1
+
+static struct {
+	unsigned int gpa_width;
+	unsigned long attributes;
+} td_info __ro_after_init;
+
 /*
  * Wrapper for standard use of __tdx_hypercall with BUG_ON() check
  * for TDCALL error.
@@ -58,6 +66,19 @@ bool tdx_protected_guest_has(unsigned long flag)
 }
 EXPORT_SYMBOL_GPL(tdx_protected_guest_has);
 
+static void tdg_get_info(void)
+{
+	u64 ret;
+	struct tdx_module_output out = {0};
+
+	ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);
+
+	BUG_ON(ret);
+
+	td_info.gpa_width = out.rcx & GENMASK(5, 0);
+	td_info.attributes = out.rdx;
+}
+
 void __init tdx_early_init(void)
 {
 	if (!cpuid_has_tdx_guest())
@@ -65,5 +86,7 @@ void __init tdx_early_init(void)
 
 	setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
 
+	tdg_get_info();
+
 	pr_info("Guest initialized\n");
 }
-- 
2.25.1



* [PATCH v3 07/11] x86/traps: Add #VE support for TDX guest
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (5 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 06/11] x86/tdx: Get TD execution environment information via TDINFO Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 08/11] x86/tdx: Add HLT " Kuppuswamy Sathyanarayanan
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

Virtualization Exceptions (#VE) are delivered to TDX guests due to
specific guest actions which may happen in either user space or the kernel:

 * Specific instructions (WBINVD, for example)
 * Specific MSR accesses
 * Specific CPUID leaf accesses
 * Access to TD-shared memory, which includes MMIO

In the settings that Linux will run in, virtual exceptions are never
generated on accesses to normal, TD-private memory that has been
accepted.

The entry paths do not access TD-shared memory, MMIO regions or use
those specific MSRs, instructions, CPUID leaves that might generate #VE.
In addition, all interrupts including NMIs are blocked by the hardware
starting with #VE delivery until TDGETVEINFO is called.  This eliminates
the chance of a #VE during the syscall gap or paranoid entry paths and
simplifies #VE handling.

After TDGETVEINFO, a #VE could happen in theory (e.g. through an NMI),
but it is expected not to happen because TDX expects NMIs not to
trigger #VEs. Another case where one could happen is if the #VE
exception handler panics, but in that case there are no guarantees on
anything anyway.

If a guest kernel action which would normally cause a #VE occurs in the
interrupt-disabled region before TDGETVEINFO, a #DF is delivered to the
guest which will result in an oops (and should eventually be a panic, as
we would like to set panic_on_oops to 1 for TDX guests).

Add basic infrastructure to handle any #VE which occurs in the kernel or
userspace.  Later patches will add handling for specific #VE scenarios.
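
As a hedged sketch of where this is headed (not part of this patch), the
tdg_handle_virtualization_exception() stub added below is expected to grow
into a dispatch on ve->exit_reason. The exit-reason constants are the usual
ones from asm/vmx.h; tdg_handle_halt()/tdg_handle_msr() are hypothetical
names used only for illustration:

	/* Illustrative only; the real handlers arrive in later patches */
	int tdg_handle_virtualization_exception(struct pt_regs *regs,
						struct ve_info *ve)
	{
		switch (ve->exit_reason) {
		case EXIT_REASON_HLT:
			return tdg_handle_halt(regs);		/* hypothetical */
		case EXIT_REASON_MSR_READ:
		case EXIT_REASON_MSR_WRITE:
			return tdg_handle_msr(regs, ve);	/* hypothetical */
		default:
			pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
			return -EFAULT;
		}
	}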

Convert unhandled #VEs (everything, until later in this series) so that
they appear just like a #GP, by calling ve_raise_fault() directly.
ve_raise_fault() is similar to the #GP handler: it is responsible for
sending SIGSEGV to userspace, for the CPU die path, and for notifying
debuggers and other die-chain users.

Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/include/asm/idtentry.h |  4 ++
 arch/x86/include/asm/tdx.h      | 19 +++++++++
 arch/x86/kernel/idt.c           |  6 +++
 arch/x86/kernel/tdx.c           | 36 +++++++++++++++++
 arch/x86/kernel/traps.c         | 69 +++++++++++++++++++++++++++++++++
 5 files changed, 134 insertions(+)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 73d45b0dfff2..d3c779abbc78 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -634,6 +634,10 @@ DECLARE_IDTENTRY_XENCB(X86_TRAP_OTHER,	exc_xen_hypervisor_callback);
 DECLARE_IDTENTRY_RAW(X86_TRAP_OTHER,	exc_xen_unknown_trap);
 #endif
 
+#ifdef CONFIG_INTEL_TDX_GUEST
+DECLARE_IDTENTRY(X86_TRAP_VE,		exc_virtualization_exception);
+#endif
+
 /* Device interrupts common/spurious */
 DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	common_interrupt);
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6eea835694c0..5b07f01a0f99 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -39,6 +39,20 @@ struct tdx_hypercall_output {
 	u64 r15;
 };
 
+/*
+ * Used by #VE exception handler to gather the #VE exception
+ * info from the TDX module. This is software only structure
+ * and not related to TDX module/VMM.
+ */
+struct ve_info {
+	u64 exit_reason;
+	u64 exit_qual;
+	u64 gla;	/* Guest Linear (virtual) Address */
+	u64 gpa;	/* Guest Physical Address */
+	u32 instr_len;
+	u32 instr_info;
+};
+
 #ifdef CONFIG_INTEL_TDX_GUEST
 
 void __init tdx_early_init(void);
@@ -53,6 +67,11 @@ u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
 u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14,
 		    u64 r15, struct tdx_hypercall_output *out);
 
+unsigned long tdg_get_ve_info(struct ve_info *ve);
+
+int tdg_handle_virtualization_exception(struct pt_regs *regs,
+					struct ve_info *ve);
+
 #else
 
 static inline void tdx_early_init(void) { };
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index d552f177eca0..39390761d9c8 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -64,6 +64,9 @@ static const __initconst struct idt_data early_idts[] = {
 	 */
 	INTG(X86_TRAP_PF,		asm_exc_page_fault),
 #endif
+#ifdef CONFIG_INTEL_TDX_GUEST
+	INTG(X86_TRAP_VE,		asm_exc_virtualization_exception),
+#endif
 };
 
 /*
@@ -87,6 +90,9 @@ static const __initconst struct idt_data def_idts[] = {
 	INTG(X86_TRAP_MF,		asm_exc_coprocessor_error),
 	INTG(X86_TRAP_AC,		asm_exc_alignment_check),
 	INTG(X86_TRAP_XF,		asm_exc_simd_coprocessor_error),
+#ifdef CONFIG_INTEL_TDX_GUEST
+	INTG(X86_TRAP_VE,		asm_exc_virtualization_exception),
+#endif
 
 #ifdef CONFIG_X86_32
 	TSKG(X86_TRAP_DF,		GDT_ENTRY_DOUBLEFAULT_TSS),
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 98778f44bbc6..21d0c9e78b0c 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -10,6 +10,7 @@
 
 /* TDX Module call Leaf IDs */
 #define TDINFO				1
+#define TDGETVEINFO			3
 
 static struct {
 	unsigned int gpa_width;
@@ -79,6 +80,41 @@ static void tdg_get_info(void)
 	td_info.attributes = out.rdx;
 }
 
+unsigned long tdg_get_ve_info(struct ve_info *ve)
+{
+	u64 ret;
+	struct tdx_module_output out = {0};
+
+	/*
+	 * NMIs and machine checks are suppressed. Before this point any
+	 * #VE is fatal. After this point (TDGETVEINFO call), NMIs and
+	 * additional #VEs are permitted (but we don't expect them to
+	 * happen unless you panic).
+	 */
+	ret = __tdx_module_call(TDGETVEINFO, 0, 0, 0, 0, &out);
+
+	ve->exit_reason = out.rcx;
+	ve->exit_qual   = out.rdx;
+	ve->gla         = out.r8;
+	ve->gpa         = out.r9;
+	ve->instr_len   = out.r10 & UINT_MAX;
+	ve->instr_info  = out.r10 >> 32;
+
+	return ret;
+}
+
+int tdg_handle_virtualization_exception(struct pt_regs *regs,
+					struct ve_info *ve)
+{
+	/*
+	 * TODO: Add handler support for various #VE exit
+	 * reasons. It will be added by other patches in
+	 * the series.
+	 */
+	pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
+	return -EFAULT;
+}
+
 void __init tdx_early_init(void)
 {
 	if (!cpuid_has_tdx_guest())
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 853ea7a80806..d860fdee9cfe 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -61,6 +61,7 @@
 #include <asm/insn.h>
 #include <asm/insn-eval.h>
 #include <asm/vdso.h>
+#include <asm/tdx.h>
 
 #ifdef CONFIG_X86_64
 #include <asm/x86_init.h>
@@ -1139,6 +1140,74 @@ DEFINE_IDTENTRY(exc_device_not_available)
 	}
 }
 
+#define VEFSTR "VE fault"
+static void ve_raise_fault(struct pt_regs *regs, long error_code)
+{
+	struct task_struct *tsk = current;
+
+	if (user_mode(regs)) {
+		tsk->thread.error_code = error_code;
+		tsk->thread.trap_nr = X86_TRAP_VE;
+
+		/*
+		 * Not fixing up VDSO exceptions similar to #GP handler
+		 * because we don't expect the VDSO to trigger #VE.
+		 */
+		show_signal(tsk, SIGSEGV, "", VEFSTR, regs, error_code);
+		force_sig(SIGSEGV);
+		return;
+	}
+
+	if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
+		return;
+
+	tsk->thread.error_code = error_code;
+	tsk->thread.trap_nr = X86_TRAP_VE;
+
+	/*
+	 * To be potentially processing a kprobe fault and to trust the result
+	 * from kprobe_running(), we have to be non-preemptible.
+	 */
+	if (!preemptible() &&
+	    kprobe_running() &&
+	    kprobe_fault_handler(regs, X86_TRAP_VE))
+		return;
+
+	notify_die(DIE_GPF, VEFSTR, regs, error_code, X86_TRAP_VE, SIGSEGV);
+
+	die_addr(VEFSTR, regs, error_code, 0);
+}
+
+#ifdef CONFIG_INTEL_TDX_GUEST
+DEFINE_IDTENTRY(exc_virtualization_exception)
+{
+	struct ve_info ve;
+	int ret;
+
+	RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
+
+	/*
+	 * NMIs/Machine-checks/Interrupts will be in a disabled state
+	 * till TDGETVEINFO TDCALL is executed. This prevents #VE
+	 * nesting issue.
+	 */
+	ret = tdg_get_ve_info(&ve);
+
+	cond_local_irq_enable(regs);
+
+	if (!ret)
+		ret = tdg_handle_virtualization_exception(regs, &ve);
+	/*
+	 * If tdg_handle_virtualization_exception() could not process
+	 * it successfully, treat it as #GP(0) and handle it.
+	 */
+	if (ret)
+		ve_raise_fault(regs, 0);
+
+	cond_local_irq_disable(regs);
+}
+#endif
+
 #ifdef CONFIG_X86_32
 DEFINE_IDTENTRY_SW(iret_error)
 {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 08/11] x86/tdx: Add HLT support for TDX guest
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (6 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 07/11] x86/traps: Add #VE support for TDX guest Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 09/11] x86/tdx: Wire up KVM hypercalls Kuppuswamy Sathyanarayanan
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

Per the Guest-Host-Communication Interface (GHCI) for Intel Trust
Domain Extensions (Intel TDX) specification, sec 3.8,
TDVMCALL[Instruction.HLT] provides the HLT operation. Use it to
implement the halt() and safe_halt() paravirtualization calls.

The same TDX hypercall is used to handle the #VE exception raised for
EXIT_REASON_HLT.
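
As an aside, here is a tiny standalone model (plain userspace C, not
kernel code; the model_*() names are made up for illustration) of the
only real difference between the two hooks added below: what is passed
as the "interrupts disabled" argument (R12) of TDVMCALL[Instruction.HLT].

	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for _tdx_hypercall(EXIT_REASON_HLT, irq_disabled, ...) */
	static void model_hlt_tdvmcall(bool irqs_disabled)
	{
		printf("TDVMCALL[HLT], R12 (irqs disabled) = %d\n", irqs_disabled);
	}

	static void model_halt(bool irqs_disabled)
	{
		/* halt(): pass along the caller's current interrupt state */
		model_hlt_tdvmcall(irqs_disabled);
	}

	static void model_safe_halt(void)
	{
		/* safe_halt(): interrupts are enabled right before halting */
		model_hlt_tdvmcall(false);
	}

	int main(void)
	{
		model_halt(true);	/* e.g. called with IRQs already off */
		model_safe_halt();
		return 0;
	}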

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/kernel/tdx.c | 51 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 44 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 21d0c9e78b0c..1ce528f8fc95 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -7,6 +7,7 @@
 #include <linux/protected_guest.h>
 
 #include <asm/tdx.h>
+#include <asm/vmx.h>
 
 /* TDX Module call Leaf IDs */
 #define TDINFO				1
@@ -80,6 +81,33 @@ static void tdg_get_info(void)
 	td_info.attributes = out.rdx;
 }
 
+static __cpuidle void tdg_halt(void)
+{
+	u64 ret;
+
+	ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_disabled(), 0, 0, 0, NULL);
+
+	/* It should never fail */
+	BUG_ON(ret);
+}
+
+static __cpuidle void tdg_safe_halt(void)
+{
+	u64 ret;
+
+	/*
+	 * Enable interrupts next to the TDVMCALL to avoid
+	 * performance degradation.
+	 */
+	local_irq_enable();
+
+	/* IRQs are enabled, so set R12 to 0 */
+	ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 0, NULL);
+
+	/* It should never fail */
+	BUG_ON(ret);
+}
+
 unsigned long tdg_get_ve_info(struct ve_info *ve)
 {
 	u64 ret;
@@ -106,13 +134,19 @@ unsigned long tdg_get_ve_info(struct ve_info *ve)
 int tdg_handle_virtualization_exception(struct pt_regs *regs,
 					struct ve_info *ve)
 {
-	/*
-	 * TODO: Add handler support for various #VE exit
-	 * reasons. It will be added by other patches in
-	 * the series.
-	 */
-	pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
-	return -EFAULT;
+	switch (ve->exit_reason) {
+	case EXIT_REASON_HLT:
+		tdg_halt();
+		break;
+	default:
+		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
+		return -EFAULT;
+	}
+
+	/* After successful #VE handling, move the IP */
+	regs->ip += ve->instr_len;
+
+	return 0;
 }
 
 void __init tdx_early_init(void)
@@ -124,5 +158,8 @@ void __init tdx_early_init(void)
 
 	tdg_get_info();
 
+	pv_ops.irq.safe_halt = tdg_safe_halt;
+	pv_ops.irq.halt = tdg_halt;
+
 	pr_info("Guest initialized\n");
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 09/11] x86/tdx: Wire up KVM hypercalls
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (7 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 08/11] x86/tdx: Add HLT " Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 10/11] x86/tdx: Add MSR support for TDX guest Kuppuswamy Sathyanarayanan
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

KVM hypercalls use the "vmcall" or "vmmcall" instructions.
Although the ABI is similar, those instructions no longer
function for TDX guests. Make vendor-specific TDVMCALLs
instead of VMCALL. This enables TDX guests to run with KVM
acting as the hypervisor. TDX guests running under other
hypervisors will continue to use those hypervisors'
hypercalls.

Since the KVM driver can be built as a kernel module, export
__tdx_hypercall() so that the tdx_kvm_hypercall() wrapper is usable
from kvm.ko.
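
As a side note, the vendor value used for these calls
(TDX_HYPERCALL_VENDOR_KVM = 0x4d564b2e584454, added below) appears to be
just the ASCII string "TDX.KVM" packed little-endian into the 64-bit
TDVMCALL type argument. A standalone snippet (plain userspace C, not
kernel code) to check the encoding:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		uint64_t vendor = 0x4d564b2e584454ULL;	/* TDX_HYPERCALL_VENDOR_KVM */
		char buf[9] = { 0 };

		/* assumes a little-endian host, as on x86 */
		memcpy(buf, &vendor, sizeof(vendor));
		printf("%s\n", buf);	/* prints "TDX.KVM" */
		return 0;
	}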

[Isaku Yamahata: proposed KVM VENDOR string]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

Changes since v1:
 * Replaced is_tdx_guest() with prot_guest_has(PR_GUEST_TDX).
 * Replaced tdx_kvm_hypercall{1-4} with single generic 
   function tdx_kvm_hypercall().
 * Removed __tdx_hypercall_vendor_kvm() and re-used __tdx_hypercall().

 arch/x86/Kconfig                |  5 +++++
 arch/x86/include/asm/kvm_para.h | 22 ++++++++++++++++++++++
 arch/x86/include/asm/tdx.h      | 30 ++++++++++++++++++++++++++++--
 arch/x86/kernel/tdcall.S        |  2 ++
 4 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d506aae29dd9..fc51579e54ad 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -892,6 +892,11 @@ config INTEL_TDX_GUEST
 	  run in a CPU mode that protects the confidentiality of TD memory
 	  contents and the TD’s CPU state from other software, including VMM.
 
+# This option enables KVM specific hypercalls in TDX guest.
+config INTEL_TDX_GUEST_KVM
+	def_bool y
+	depends on KVM_GUEST && INTEL_TDX_GUEST
+
 endif #HYPERVISOR_GUEST
 
 source "arch/x86/Kconfig.cpu"
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 69299878b200..bc0e70734053 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -4,7 +4,9 @@
 
 #include <asm/processor.h>
 #include <asm/alternative.h>
+#include <asm/tdx.h>
 #include <linux/interrupt.h>
+#include <linux/protected_guest.h>
 #include <uapi/asm/kvm_para.h>
 
 #ifdef CONFIG_KVM_GUEST
@@ -32,6 +34,10 @@ static inline bool kvm_check_and_clear_guest_paused(void)
 static inline long kvm_hypercall0(unsigned int nr)
 {
 	long ret;
+
+	if (prot_guest_has(PR_GUEST_TDX))
+		return tdx_kvm_hypercall(nr, 0, 0, 0, 0);
+
 	asm volatile(KVM_HYPERCALL
 		     : "=a"(ret)
 		     : "a"(nr)
@@ -42,6 +48,10 @@ static inline long kvm_hypercall0(unsigned int nr)
 static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
 {
 	long ret;
+
+	if (prot_guest_has(PR_GUEST_TDX))
+		return tdx_kvm_hypercall(nr, p1, 0, 0, 0);
+
 	asm volatile(KVM_HYPERCALL
 		     : "=a"(ret)
 		     : "a"(nr), "b"(p1)
@@ -53,6 +63,10 @@ static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
 				  unsigned long p2)
 {
 	long ret;
+
+	if (prot_guest_has(PR_GUEST_TDX))
+		return tdx_kvm_hypercall(nr, p1, p2, 0, 0);
+
 	asm volatile(KVM_HYPERCALL
 		     : "=a"(ret)
 		     : "a"(nr), "b"(p1), "c"(p2)
@@ -64,6 +78,10 @@ static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
 				  unsigned long p2, unsigned long p3)
 {
 	long ret;
+
+	if (prot_guest_has(PR_GUEST_TDX))
+		return tdx_kvm_hypercall(nr, p1, p2, p3, 0);
+
 	asm volatile(KVM_HYPERCALL
 		     : "=a"(ret)
 		     : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
@@ -76,6 +94,10 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
 				  unsigned long p4)
 {
 	long ret;
+
+	if (prot_guest_has(PR_GUEST_TDX))
+		return tdx_kvm_hypercall(nr, p1, p2, p3, p4);
+
 	asm volatile(KVM_HYPERCALL
 		     : "=a"(ret)
 		     : "a"(nr), "b"(p1), "c"(p2), "d"(p3), "S"(p4)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 5b07f01a0f99..f9d8c2036348 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -6,8 +6,9 @@
 #include <linux/cpufeature.h>
 #include <linux/types.h>
 
-#define TDX_CPUID_LEAF_ID	0x21
-#define TDX_HYPERCALL_STANDARD  0
+#define TDX_CPUID_LEAF_ID			0x21
+#define TDX_HYPERCALL_STANDARD			0
+#define TDX_HYPERCALL_VENDOR_KVM		0x4d564b2e584454
 
 /*
  * Used in __tdx_module_call() helper function to gather the
@@ -80,4 +81,29 @@ static inline bool tdx_protected_guest_has(unsigned long flag) { return false; }
 
 #endif /* CONFIG_INTEL_TDX_GUEST */
 
+#ifdef CONFIG_INTEL_TDX_GUEST_KVM
+
+static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
+				     unsigned long p2, unsigned long p3,
+				     unsigned long p4)
+{
+	struct tdx_hypercall_output out;
+	u64 err;
+
+	err = __tdx_hypercall(TDX_HYPERCALL_VENDOR_KVM, nr, p1, p2,
+			      p3, p4, &out);
+
+	BUG_ON(err);
+
+	return out.r10;
+}
+#else
+static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
+				     unsigned long p2, unsigned long p3,
+				     unsigned long p4)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_INTEL_TDX_GUEST_KVM */
+
 #endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
index 3cf1d1af7889..0c7739559b05 100644
--- a/arch/x86/kernel/tdcall.S
+++ b/arch/x86/kernel/tdcall.S
@@ -3,6 +3,7 @@
 #include <asm/asm.h>
 #include <asm/frame.h>
 #include <asm/unwind_hints.h>
+#include <asm/export.h>
 
 #include <linux/linkage.h>
 #include <linux/bits.h>
@@ -279,3 +280,4 @@ SYM_FUNC_START(__tdx_hypercall)
 
        retq
 SYM_FUNC_END(__tdx_hypercall)
+EXPORT_SYMBOL(__tdx_hypercall);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 10/11] x86/tdx: Add MSR support for TDX guest
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (8 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 09/11] x86/tdx: Wire up KVM hypercalls Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-18 22:57 ` [PATCH v3 11/11] x86/tdx: Handle CPUID via #VE Kuppuswamy Sathyanarayanan
  2021-06-30 23:22 ` [PATCH v3 00/11] Add TDX Guest Support (Initial support) Sathyanarayanan Kuppuswamy Natarajan
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

Operations on context-switched MSRs can be run natively. The rest of
the MSRs should be handled through TDVMCALLs.

TDVMCALL[Instruction.RDMSR] and TDVMCALL[Instruction.WRMSR] provide the
MSR operations.

You can find the RDMSR and WRMSR details in the Guest-Host-Communication
Interface (GHCI) for Intel Trust Domain Extensions (Intel TDX)
specification, sec 3.10 and 3.11.
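
As a side note, here is a standalone model (plain userspace C, not
kernel code) of the EDX:EAX <-> 64-bit value conversion that the #VE
handler below performs around the MSR TDVMCALLs:

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t msr_val = 0x1122334455667788ULL;

		uint32_t lo = msr_val & UINT32_MAX;	/* what lands in regs->ax */
		uint32_t hi = msr_val >> 32;		/* what lands in regs->dx */

		/* the WRMSR path rebuilds the value as (u64)high << 32 | low */
		uint64_t rebuilt = (uint64_t)hi << 32 | lo;

		printf("%#" PRIx64 " -> lo %#x, hi %#x -> %#" PRIx64 "\n",
		       msr_val, lo, hi, rebuilt);
		return 0;
	}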

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/kernel/tdx.c | 67 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 65 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 1ce528f8fc95..a02ee45695e6 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -108,6 +108,55 @@ static __cpuidle void tdg_safe_halt(void)
 	BUG_ON(ret);
 }
 
+static bool tdg_is_context_switched_msr(unsigned int msr)
+{
+	switch (msr) {
+	case MSR_EFER:
+	case MSR_IA32_CR_PAT:
+	case MSR_FS_BASE:
+	case MSR_GS_BASE:
+	case MSR_KERNEL_GS_BASE:
+	case MSR_IA32_SYSENTER_CS:
+	case MSR_IA32_SYSENTER_EIP:
+	case MSR_IA32_SYSENTER_ESP:
+	case MSR_STAR:
+	case MSR_LSTAR:
+	case MSR_SYSCALL_MASK:
+	case MSR_IA32_XSS:
+	case MSR_TSC_AUX:
+	case MSR_IA32_BNDCFGS:
+		return true;
+	}
+	return false;
+}
+
+static u64 tdg_read_msr_safe(unsigned int msr, int *err)
+{
+	u64 ret;
+	struct tdx_hypercall_output out = {0};
+
+	WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+	ret = _tdx_hypercall(EXIT_REASON_MSR_READ, msr, 0, 0, 0, &out);
+
+	*err = ret ? -EIO : 0;
+
+	return out.r11;
+}
+
+static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
+			      unsigned int high)
+{
+	u64 ret;
+
+	WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+	ret = _tdx_hypercall(EXIT_REASON_MSR_WRITE, msr, (u64)high << 32 | low,
+			     0, 0, NULL);
+
+	return ret ? -EIO : 0;
+}
+
 unsigned long tdg_get_ve_info(struct ve_info *ve)
 {
 	u64 ret;
@@ -134,19 +183,33 @@ unsigned long tdg_get_ve_info(struct ve_info *ve)
 int tdg_handle_virtualization_exception(struct pt_regs *regs,
 					struct ve_info *ve)
 {
+	unsigned long val;
+	int ret = 0;
+
 	switch (ve->exit_reason) {
 	case EXIT_REASON_HLT:
 		tdg_halt();
 		break;
+	case EXIT_REASON_MSR_READ:
+		val = tdg_read_msr_safe(regs->cx, (unsigned int *)&ret);
+		if (!ret) {
+			regs->ax = val & UINT_MAX;
+			regs->dx = val >> 32;
+		}
+		break;
+	case EXIT_REASON_MSR_WRITE:
+		ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
+		break;
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
 		return -EFAULT;
 	}
 
 	/* After successful #VE handling, move the IP */
-	regs->ip += ve->instr_len;
+	if (!ret)
+		regs->ip += ve->instr_len;
 
-	return 0;
+	return ret;
 }
 
 void __init tdx_early_init(void)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 11/11] x86/tdx: Handle CPUID via #VE
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (9 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 10/11] x86/tdx: Add MSR support for TDX guest Kuppuswamy Sathyanarayanan
@ 2021-06-18 22:57 ` Kuppuswamy Sathyanarayanan
  2021-06-30 23:22 ` [PATCH v3 00/11] Add TDX Guest Support (Initial support) Sathyanarayanan Kuppuswamy Natarajan
  11 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy Sathyanarayanan @ 2021-06-18 22:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	Kuppuswamy Sathyanarayanan, x86, linux-kernel

From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>

TDX has three classes of CPUID leaves: some CPUID leaves are always
handled by the CPU, others are handled by the TDX module, and the rest
are handled by the VMM. Since the VMM cannot directly intercept the
instruction, these leaves are reflected with a #VE exception to the
guest, which then either converts them into hypercalls to the VMM or
handles them directly.

The TDX module EAS has a full list of the CPUID leaves which are handled
natively or by the TDX module in section 16.2. Only unknown CPUID leaves
are handled by the #VE method. In practice this typically only applies
to the hypervisor-specific CPUID leaves that are unknown to the native
CPU.

Therefore there is no risk of triggering this in early CPUID code, which
runs before the #VE handler is set up, because it never accesses those
exotic CPUID leaves.
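
For reference, the leaves that take this path are typically the
hypervisor-defined ones starting at 0x40000000. A standalone userspace
sketch (not kernel code; assumes GCC's <cpuid.h>) that queries that base
leaf, which is exactly the kind of leaf a TDX guest sees reflected as
#VE and forwards to the VMM with the hypercall added below:

	#include <cpuid.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;
		char sig[13] = { 0 };

		/* hypervisor base leaf; unknown to the bare CPU */
		__cpuid_count(0x40000000, 0, eax, ebx, ecx, edx);

		memcpy(sig + 0, &ebx, 4);
		memcpy(sig + 4, &ecx, 4);
		memcpy(sig + 8, &edx, 4);
		printf("hypervisor signature: \"%s\"\n", sig);	/* e.g. "KVMKVMKVM" */
		return 0;
	}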

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 arch/x86/kernel/tdx.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index a02ee45695e6..f48567dce97a 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -157,6 +157,21 @@ static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
 	return ret ? -EIO : 0;
 }
 
+static void tdg_handle_cpuid(struct pt_regs *regs)
+{
+	u64 ret;
+	struct tdx_hypercall_output out = {0};
+
+	ret = _tdx_hypercall(EXIT_REASON_CPUID, regs->ax, regs->cx, 0, 0, &out);
+
+	WARN_ON(ret);
+
+	regs->ax = out.r12;
+	regs->bx = out.r13;
+	regs->cx = out.r14;
+	regs->dx = out.r15;
+}
+
 unsigned long tdg_get_ve_info(struct ve_info *ve)
 {
 	u64 ret;
@@ -200,6 +215,9 @@ int tdg_handle_virtualization_exception(struct pt_regs *regs,
 	case EXIT_REASON_MSR_WRITE:
 		ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
 		break;
+	case EXIT_REASON_CPUID:
+		tdg_handle_cpuid(regs);
+		break;
 	default:
 		pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
 		return -EFAULT;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-06-18 22:57 ` [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature Kuppuswamy Sathyanarayanan
@ 2021-06-18 23:39   ` Borislav Petkov
  2021-06-19  0:13     ` Kuppuswamy, Sathyanarayanan
  2021-07-15 11:56   ` Xiaoyao Li
  1 sibling, 1 reply; 26+ messages in thread
From: Borislav Petkov @ 2021-06-18 23:39 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On Fri, Jun 18, 2021 at 03:57:47PM -0700, Kuppuswamy Sathyanarayanan wrote:
> Add CPU feature detection for Trusted Domain Extensions support. TDX
> feature adds capabilities to keep guest register state and memory
> isolated from hypervisor.
> 
> For TDX guest platforms, executing CPUID(eax=0x21, ecx=0) will return
> following values in EAX, EBX, ECX and EDX.
> 
> EAX:  Maximum sub-leaf number:  0
> EBX/EDX/ECX:  Vendor string:
> 
> EBX =  "Inte"
> EDX =  "lTDX"
> ECX =  "    "
> 
> So when above condition is true, set X86_FEATURE_TDX_GUEST feature cap
> bit.
> 
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> Reviewed-by: Andi Kleen <ak@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> ---
> 
> Changes since v2:
>  * Fixed debug prints as per Borislav suggestion.
> 
> Changes since v1:
>  * Fixed commit log issues reported by Borislav.
>  * Moved header file include to the start of tdx.h.
>  * Added pr_fmt for TDX.
>  * Simplified cpuid_has_tdx_guest() implementation as per
>    Borislav comments.

From Documentation/process/submitting-patches.rst:

"Both Tested-by and Reviewed-by tags, once received on mailing list from tester
or reviewer, should be added by author to the applicable patches when sending
next versions.  However if the patch has changed substantially in following
version, these tags might not be applicable anymore and thus should be removed.
Usually removal of someone's Tested-by or Reviewed-by tags should be mentioned
in the patch changelog (after the '---' separator)."

IOW, for the next revisions of your patchsets, you should drop
Reviewed-by: tags on patches when they've changed more than trivially
because otherwise those tags have no meaning at all.

Also, please take the time to peruse the above document on the kernel
process while waiting.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-06-18 23:39   ` Borislav Petkov
@ 2021-06-19  0:13     ` Kuppuswamy, Sathyanarayanan
  2021-06-19  6:38       ` Borislav Petkov
  0 siblings, 1 reply; 26+ messages in thread
From: Kuppuswamy, Sathyanarayanan @ 2021-06-19  0:13 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel



On 6/18/21 4:39 PM, Borislav Petkov wrote:
>  From Documentation/process/submitting-patches.rst:
> 
> "Both Tested-by and Reviewed-by tags, once received on mailing list from tester
> or reviewer, should be added by author to the applicable patches when sending
> next versions.  However if the patch has changed substantially in following
> version, these tags might not be applicable anymore and thus should be removed.
> Usually removal of someone's Tested-by or Reviewed-by tags should be mentioned
> in the patch changelog (after the '---' separator)."
> 
> IOW, for the next revisions of your patchsets, you should drop
> Reviewed-by: tags on patches when they've changed more than trivially
> because otherwise those tags have no meaning at all.
> 
> Also, please take the time to peruse the above document on the kernel
> process while waiting.

I will make sure to remove the Reviewed-by/Tested-by tags for the changed patches
in the next submission. But, IMO, the changes made to this patch are minimal.
Nothing changed functionally. So, do we still need to remove the tags for this patch?

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-06-19  0:13     ` Kuppuswamy, Sathyanarayanan
@ 2021-06-19  6:38       ` Borislav Petkov
  0 siblings, 0 replies; 26+ messages in thread
From: Borislav Petkov @ 2021-06-19  6:38 UTC (permalink / raw)
  To: Kuppuswamy, Sathyanarayanan
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On Fri, Jun 18, 2021 at 05:13:39PM -0700, Kuppuswamy, Sathyanarayanan wrote:
> On 6/18/21 4:39 PM, Borislav Petkov wrote:
> >  From Documentation/process/submitting-patches.rst:
> > 
> > "Both Tested-by and Reviewed-by tags, once received on mailing list from tester
> > or reviewer, should be added by author to the applicable patches when sending
> > next versions.  However if the patch has changed substantially in following
> > version, these tags might not be applicable anymore and thus should be removed.
> > Usually removal of someone's Tested-by or Reviewed-by tags should be mentioned
> > in the patch changelog (after the '---' separator)."
> > 
> > IOW, for the next revisions of your patchsets, you should drop
> > Reviewed-by: tags on patches when they've changed more than trivially
> > because otherwise those tags have no meaning at all.
> > 
> > Also, please take the time to peruse the above document on the kernel
> > process while waiting.
> 
> I will make sure to remove the Reviewed-by/Tested-by tags for the changed patches
> in the next submission. But, IMO, changes made in this patch is minimal. Nothing
> changed functionally. So, do we still need to remove the tags for this patch?

My note was more of a general reminder: "for the next revisions of
your patchsets" above. I simply replied to the first mail with a patch
changelog.

Also, maybe our documentation text is not really clear. It says "changed
substantially", you understood that as "changed functionally" and I've
seen people complain about smaller things. But ok, let's agree on
functional changes here.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option
  2021-06-18 22:57 ` [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option Kuppuswamy Sathyanarayanan
@ 2021-06-19 11:59   ` Juergen Gross
  2021-06-19 17:11     ` Kuppuswamy, Sathyanarayanan
  0 siblings, 1 reply; 26+ messages in thread
From: Juergen Gross @ 2021-06-19 11:59 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel


On 19.06.21 00:57, Kuppuswamy Sathyanarayanan wrote:
> Add INTEL_TDX_GUEST config option to selectively compile
> TDX guest support.
> 
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> Reviewed-by: Andi Kleen <ak@linux.intel.com>
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> ---
>   arch/x86/Kconfig | 15 +++++++++++++++
>   1 file changed, 15 insertions(+)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 0045e1b44190..ff79263aebd1 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -876,6 +876,21 @@ config ACRN_GUEST
>   	  IOT with small footprint and real-time features. More details can be
>   	  found in https://projectacrn.org/.
>   
> +config INTEL_TDX_GUEST
> +	bool "Intel Trusted Domain eXtensions Guest Support"
> +	depends on X86_64 && CPU_SUP_INTEL && PARAVIRT
> +	depends on SECURITY
> +	select PARAVIRT_XL

PARAVIRT_XL? Didn't you drop that?


Juergen

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option
  2021-06-19 11:59   ` Juergen Gross
@ 2021-06-19 17:11     ` Kuppuswamy, Sathyanarayanan
  0 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy, Sathyanarayanan @ 2021-06-19 17:11 UTC (permalink / raw)
  To: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel



On 6/19/21 4:59 AM, Juergen Gross wrote:
> PARAVIRT_XL? Didn't you drop that?

Yes. I removed it from the previous patch but forgot to remove it here.

I will fix this in next version.

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-18 22:57 ` [PATCH v3 04/11] x86: Introduce generic protected guest abstraction Kuppuswamy Sathyanarayanan
@ 2021-06-24 15:01   ` Borislav Petkov
  2021-06-24 17:58     ` Kuppuswamy, Sathyanarayanan
  2021-06-28 17:52   ` Tom Lendacky
  1 sibling, 1 reply; 26+ messages in thread
From: Borislav Petkov @ 2021-06-24 15:01 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On Fri, Jun 18, 2021 at 03:57:48PM -0700, Kuppuswamy Sathyanarayanan wrote:
> Add a generic way to check if we run with an encrypted guest,

Please use passive voice in your commit message: no "we" or "I", etc,
and describe your changes in imperative mood.

Also, pls read section "2) Describe your changes" in
Documentation/process/submitting-patches.rst for more details.

Bottom line is: personal pronouns are ambiguous in text, especially with
so many parties/companies/etc developing the kernel so let's avoid them
please.

> without requiring x86 specific ifdefs. This can then be used in
> non architecture specific code. 

"... in arch-independent code." or so.

> prot_guest_has() is used to check for protected guest feature
> flags.
> 
> Originally-by: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> ---
> 
> Change since v1:
>  * Introduced PR_GUEST_TDX and PR_GUEST_SEV vendor flags as per
>    Boris suggestion.
>  * Replaced is_tdx_guest() with if (boot_cpu_data.x86_vendor ==
>    X86_VENDOR_INTEL) in prot_guest_has().
>  * Modified tdx_protected_guest_has() and sev_protected_guest_has()
>    to support vendor flags.

...

> diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
> new file mode 100644
> index 000000000000..d47668dee6c2
> --- /dev/null
> +++ b/arch/x86/include/asm/protected_guest.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright (C) 2020 Intel Corporation */
> +#ifndef _ASM_PROTECTED_GUEST_H
> +#define _ASM_PROTECTED_GUEST_H 1

#define _ASM_X86_PROTECTED_GUEST_H

> +
> +#include <asm/processor.h>
> +#include <asm/tdx.h>
> +#include <asm/sev.h>
> +
> +static inline bool prot_guest_has(unsigned long flag)
> +{
> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> +		return tdx_protected_guest_has(flag);
> +	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
> +		return sev_protected_guest_has(flag);

s/protected/prot/

tdx_prot_guest_has
sev_prot_guest_has

...

> @@ -18,6 +20,21 @@ static inline bool cpuid_has_tdx_guest(void)
>  	return !memcmp("IntelTDX    ", sig, 12);
>  }
>  
> +bool tdx_protected_guest_has(unsigned long flag)
> +{
> +	switch (flag) {
> +	case PR_GUEST_MEM_ENCRYPT:
> +	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
> +	case PR_GUEST_UNROLL_STRING_IO:
> +	case PR_GUEST_SHARED_MAPPING_INIT:
> +	case PR_GUEST_TDX:
> +		return static_cpu_has(X86_FEATURE_TDX_GUEST);

		return cpu_feature_enabled(...)
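
Or, with both of the above folded in, the whole helper would end up
looking something like this (completely untested sketch):

	bool tdx_prot_guest_has(unsigned long flag)
	{
		switch (flag) {
		case PR_GUEST_MEM_ENCRYPT:
		case PR_GUEST_MEM_ENCRYPT_ACTIVE:
		case PR_GUEST_UNROLL_STRING_IO:
		case PR_GUEST_SHARED_MAPPING_INIT:
		case PR_GUEST_TDX:
			return cpu_feature_enabled(X86_FEATURE_TDX_GUEST);
		}

		return false;
	}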


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-24 15:01   ` Borislav Petkov
@ 2021-06-24 17:58     ` Kuppuswamy, Sathyanarayanan
  0 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy, Sathyanarayanan @ 2021-06-24 17:58 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel



On 6/24/21 8:01 AM, Borislav Petkov wrote:
> On Fri, Jun 18, 2021 at 03:57:48PM -0700, Kuppuswamy Sathyanarayanan wrote:
>> Add a generic way to check if we run with an encrypted guest,
> 
> Please use passive voice in your commit message: no "we" or "I", etc,
> and describe your changes in imperative mood.
> 
> Also, pls read section "2) Describe your changes" in
> Documentation/process/submitting-patches.rst for more details.
> 
> Bottom line is: personal pronouns are ambiguous in text, especially with
> so many parties/companies/etc developing the kernel so let's avoid them
> please.

I will fix this in next version. I will make sure to follow it in future
submissions.

> 
>> without requiring x86 specific ifdefs. This can then be used in
>> non architecture specific code.
> 
> "... in arch-independent code." or so.

I will fix this in next version.

> 
>> prot_guest_has() is used to check for protected guest feature
>> flags.
>>
>> Originally-by: Andi Kleen <ak@linux.intel.com>
>> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
>> ---
>>
>> Change since v1:
>>   * Introduced PR_GUEST_TDX and PR_GUEST_SEV vendor flags as per
>>     Boris suggestion.
>>   * Replaced is_tdx_guest() with if (boot_cpu_data.x86_vendor ==
>>     X86_VENDOR_INTEL) in prot_guest_has().
>>   * Modified tdx_protected_guest_has() and sev_protected_guest_has()
>>     to support vendor flags.
> 
> ...
> 
>> diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
>> new file mode 100644
>> index 000000000000..d47668dee6c2
>> --- /dev/null
>> +++ b/arch/x86/include/asm/protected_guest.h
>> @@ -0,0 +1,20 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Copyright (C) 2020 Intel Corporation */
>> +#ifndef _ASM_PROTECTED_GUEST_H
>> +#define _ASM_PROTECTED_GUEST_H 1
> 
> #define _ASM_X86_PROTECTED_GUEST_H
> 
>> +
>> +#include <asm/processor.h>
>> +#include <asm/tdx.h>
>> +#include <asm/sev.h>
>> +
>> +static inline bool prot_guest_has(unsigned long flag)
>> +{
>> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>> +		return tdx_protected_guest_has(flag);
>> +	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
>> +		return sev_protected_guest_has(flag);
> 
> s/protected/prot/
> 
> tdx_prot_guest_has
> sev_prot_guest_has

Ok. I will make this change in next version.

> 
> ...
> 
>> @@ -18,6 +20,21 @@ static inline bool cpuid_has_tdx_guest(void)
>>   	return !memcmp("IntelTDX    ", sig, 12);
>>   }
>>   
>> +bool tdx_protected_guest_has(unsigned long flag)
>> +{
>> +	switch (flag) {
>> +	case PR_GUEST_MEM_ENCRYPT:
>> +	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
>> +	case PR_GUEST_UNROLL_STRING_IO:
>> +	case PR_GUEST_SHARED_MAPPING_INIT:
>> +	case PR_GUEST_TDX:
>> +		return static_cpu_has(X86_FEATURE_TDX_GUEST);
> 
> 		return cpu_feature_enabled(...)

I will use it in next version.

> 
> 

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-18 22:57 ` [PATCH v3 04/11] x86: Introduce generic protected guest abstraction Kuppuswamy Sathyanarayanan
  2021-06-24 15:01   ` Borislav Petkov
@ 2021-06-28 17:52   ` Tom Lendacky
  2021-06-28 18:59     ` Tom Lendacky
  2021-06-28 19:14     ` Kuppuswamy, Sathyanarayanan
  1 sibling, 2 replies; 26+ messages in thread
From: Tom Lendacky @ 2021-06-28 17:52 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On 6/18/21 5:57 PM, Kuppuswamy Sathyanarayanan wrote:
> Add a generic way to check if we run with an encrypted guest,
> without requiring x86 specific ifdefs. This can then be used in
> non architecture specific code. 
> 
> prot_guest_has() is used to check for protected guest feature
> flags.
> 
> Originally-by: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> ---
> 
> Change since v1:
>  * Introduced PR_GUEST_TDX and PR_GUEST_SEV vendor flags as per
>    Boris suggestion.
>  * Replaced is_tdx_guest() with if (boot_cpu_data.x86_vendor ==
>    X86_VENDOR_INTEL) in prot_guest_has().
>  * Modified tdx_protected_guest_has() and sev_protected_guest_has()
>    to support vendor flags.
> 
>  arch/Kconfig                           |  3 +++
>  arch/x86/Kconfig                       |  2 ++
>  arch/x86/include/asm/protected_guest.h | 20 +++++++++++++++++
>  arch/x86/include/asm/sev.h             |  3 +++
>  arch/x86/include/asm/tdx.h             |  4 ++++
>  arch/x86/kernel/sev.c                  | 17 +++++++++++++++
>  arch/x86/kernel/tdx.c                  | 17 +++++++++++++++
>  include/linux/protected_guest.h        | 30 ++++++++++++++++++++++++++
>  8 files changed, 96 insertions(+)
>  create mode 100644 arch/x86/include/asm/protected_guest.h
>  create mode 100644 include/linux/protected_guest.h
> 
> diff --git a/arch/Kconfig b/arch/Kconfig
> index c45b770d3579..3c5bf55ee752 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -1011,6 +1011,9 @@ config HAVE_ARCH_NVRAM_OPS
>  config ISA_BUS_API
>  	def_bool ISA
>  
> +config ARCH_HAS_PROTECTED_GUEST
> +	bool
> +
>  #
>  # ABI hall of shame
>  #
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index ff79263aebd1..d506aae29dd9 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -883,6 +883,7 @@ config INTEL_TDX_GUEST
>  	select PARAVIRT_XL
>  	select X86_X2APIC
>  	select SECURITY_LOCKDOWN_LSM
> +	select ARCH_HAS_PROTECTED_GUEST
>  	help
>  	  Provide support for running in a trusted domain on Intel processors
>  	  equipped with Trusted Domain eXtenstions. TDX is a new Intel
> @@ -1539,6 +1540,7 @@ config AMD_MEM_ENCRYPT
>  	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
>  	select INSTRUCTION_DECODER
>  	select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
> +	select ARCH_HAS_PROTECTED_GUEST
>  	help
>  	  Say yes to enable support for the encryption of system memory.
>  	  This requires an AMD processor that supports Secure Memory
> diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
> new file mode 100644
> index 000000000000..d47668dee6c2
> --- /dev/null
> +++ b/arch/x86/include/asm/protected_guest.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright (C) 2020 Intel Corporation */
> +#ifndef _ASM_PROTECTED_GUEST_H
> +#define _ASM_PROTECTED_GUEST_H 1
> +
> +#include <asm/processor.h>
> +#include <asm/tdx.h>
> +#include <asm/sev.h>
> +
> +static inline bool prot_guest_has(unsigned long flag)
> +{
> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> +		return tdx_protected_guest_has(flag);
> +	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
> +		return sev_protected_guest_has(flag);

So as I think about this, I don't think this will work if the hypervisor
decides to change the vendor name, right?

And doesn't TDX supply "IntelTDX    " as a signature? I don't see where
the signature is used to set the CPU vendor to X86_VENDOR_INTEL.

The current SEV checks to set sev_status (which is used by sme_active(),
sev_active(), etc.) are based on the max leaf and CPUID bits, but not a
CPUID vendor check.

So maybe we can keep the prot_guest_has() but I think it will have to be a
common routine, with a "switch" statement that has supporting case elements
that check for "sev_active() || static_cpu_has(X86_FEATURE_TDX_GUEST)", etc.

> +
> +	return false;
> +}
> +
> +#endif /* _ASM_PROTECTED_GUEST_H */
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index fa5cd05d3b5b..e9b0b93a3157 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -81,12 +81,15 @@ static __always_inline void sev_es_nmi_complete(void)
>  		__sev_es_nmi_complete();
>  }
>  extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
> +bool sev_protected_guest_has(unsigned long flag);
> +
>  #else
>  static inline void sev_es_ist_enter(struct pt_regs *regs) { }
>  static inline void sev_es_ist_exit(void) { }
>  static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
>  static inline void sev_es_nmi_complete(void) { }
>  static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
> +static inline bool sev_protected_guest_has(unsigned long flag) { return false; }
>  #endif
>  
>  #endif
> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index c738bde944d1..1c17c9080a2c 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -11,10 +11,14 @@
>  
>  void __init tdx_early_init(void);
>  
> +bool tdx_protected_guest_has(unsigned long flag);
> +
>  #else
>  
>  static inline void tdx_early_init(void) { };
>  
> +static inline bool tdx_protected_guest_has(unsigned long flag) { return false; }
> +
>  #endif /* CONFIG_INTEL_TDX_GUEST */
>  
>  #endif /* _ASM_X86_TDX_H */
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 651b81cd648e..3e88576555d2 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -19,6 +19,7 @@
>  #include <linux/memblock.h>
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
> +#include <linux/protected_guest.h>
>  
>  #include <asm/cpu_entry_area.h>
>  #include <asm/stacktrace.h>
> @@ -1493,3 +1494,19 @@ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
>  	while (true)
>  		halt();
>  }
> +
> +bool sev_protected_guest_has(unsigned long flag)
> +{
> +	switch (flag) {
> +	case PR_GUEST_MEM_ENCRYPT:
> +	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
> +	case PR_GUEST_UNROLL_STRING_IO:
> +	case PR_GUEST_HOST_MEM_ENCRYPT:
> +		return true;

This will need to be fixed up because this function will be called for
baremetal and legacy guests and those properties aren't true for those
situations. Something like (although I'm unsure of the difference between
PR_GUEST_MEM_ENCRYPT and PR_GUEST_MEM_ENCRYPT_ACTIVE):

	case PR_GUEST_MEM_ENCRYPT:
	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
		return sev_active();
	case PR_GUEST_UNROLL_STRING_IO:
		return sev_active() && !sev_es_active();
	case PR_GUEST_HOST_MEM_ENCRYPT:
		return sme_active();

But you (or I) would have to audit all of the locations where
mem_encrypt_active(), sme_active(), sev_active() and sev_es_active() are
used, to be sure the right thing is being done. And for bisectability,
that should probably be the first patch if you will be invoking
prot_guest_has() in the same location as any of the identified functions.

Create the new helper and fixup the locations should be one (or more)
patches. Then add the TDX support to the helper function as a follow-on patch.

> +	case PR_GUEST_SEV:
> +		return sev_active();
> +	}
> +
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(sev_protected_guest_has);
> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
> index b1492e076168..ae3334a2b29d 100644
> --- a/arch/x86/kernel/tdx.c
> +++ b/arch/x86/kernel/tdx.c
> @@ -4,6 +4,8 @@
>  #undef pr_fmt
>  #define pr_fmt(fmt)     "x86/tdx: " fmt
>  
> +#include <linux/protected_guest.h>
> +
>  #include <asm/tdx.h>
>  
>  static inline bool cpuid_has_tdx_guest(void)
> @@ -18,6 +20,21 @@ static inline bool cpuid_has_tdx_guest(void)
>  	return !memcmp("IntelTDX    ", sig, 12);
>  }
>  
> +bool tdx_protected_guest_has(unsigned long flag)
> +{
> +	switch (flag) {
> +	case PR_GUEST_MEM_ENCRYPT:
> +	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
> +	case PR_GUEST_UNROLL_STRING_IO:
> +	case PR_GUEST_SHARED_MAPPING_INIT:
> +	case PR_GUEST_TDX:
> +		return static_cpu_has(X86_FEATURE_TDX_GUEST);
> +	}
> +
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(tdx_protected_guest_has);
> +
>  void __init tdx_early_init(void)
>  {
>  	if (!cpuid_has_tdx_guest())
> diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
> new file mode 100644
> index 000000000000..c5b7547e5a68
> --- /dev/null
> +++ b/include/linux/protected_guest.h
> @@ -0,0 +1,30 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef _LINUX_PROTECTED_GUEST_H
> +#define _LINUX_PROTECTED_GUEST_H 1
> +
> +/* Protected Guest Feature Flags (leave 0-0xfff for vendor specific flags) */
> +
> +/* 0-ff is reserved for Intel specific flags */
> +#define PR_GUEST_TDX				0x0000
> +
> +/* 100-1ff is reserved for AMD specific flags */
> +#define PR_GUEST_SEV				0x0100
> +
> +/* Support for guest encryption */
> +#define PR_GUEST_MEM_ENCRYPT			0x1000

I'm not sure I follow the difference between this and
PR_GUEST_MEM_ENCRYPT_ACTIVE. Is this saying that the host has support for
starting guests that support memory encryption or the guest has support
for memory encryption but it hasn't been activated yet (which doesn't seem
possible)?

Thanks,
Tom

> +/* Encryption support is active */
> +#define PR_GUEST_MEM_ENCRYPT_ACTIVE		0x1001
> +/* Support for unrolled string IO */
> +#define PR_GUEST_UNROLL_STRING_IO		0x1002
> +/* Support for host memory encryption */
> +#define PR_GUEST_HOST_MEM_ENCRYPT		0x1003
> +/* Support for shared mapping initialization (after early init) */
> +#define PR_GUEST_SHARED_MAPPING_INIT		0x1004
> +
> +#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
> +#include <asm/protected_guest.h>
> +#else
> +static inline bool prot_guest_has(unsigned long flag) { return false; }
> +#endif
> +
> +#endif /* _LINUX_PROTECTED_GUEST_H */
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-28 17:52   ` Tom Lendacky
@ 2021-06-28 18:59     ` Tom Lendacky
  2021-06-28 19:14     ` Kuppuswamy, Sathyanarayanan
  1 sibling, 0 replies; 26+ messages in thread
From: Tom Lendacky @ 2021-06-28 18:59 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On 6/28/21 12:52 PM, Tom Lendacky wrote:
> On 6/18/21 5:57 PM, Kuppuswamy Sathyanarayanan wrote:
>> +
>> +static inline bool prot_guest_has(unsigned long flag)
>> +{
>> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>> +		return tdx_protected_guest_has(flag);
>> +	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
>> +		return sev_protected_guest_has(flag);
> 
> So as I think about this, I don't think this will work if the hypervisor
> decides to change the vendor name, right?
> 
> And doesn't TDX supply "IntelTDX    " as a signature. I don't see where
> the signature is used to set the CPU vendor to X86_VENDOR_INTEL.
> 
> The current SEV checks to set sev_status, which is used by sme_active(),
> sev_active, etc.) are based on the max leaf and CPUID bits, but not a
> CPUID vendor check.
> 
> So maybe we can keep the prot_guest_has() but I think it will have to be a
> common routine, with a "switch" statement that has supporting case element
> that check for "sev_active() || static_cpu_has(X86_FEATURE_TDX_GUEST)", etc.
> 

Or keep the separate vendor routines for separation and easier testing
but, instead, they would have to key off of the support:

	if (static_cpu_has(X86_FEATURE_TDX_GUEST))
		return tdx_prot_guest_has(flag);
	else if (sme_active() || sev_active())
		return sev_prot_guest_has(flag);

Thanks,
Tom

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-28 17:52   ` Tom Lendacky
  2021-06-28 18:59     ` Tom Lendacky
@ 2021-06-28 19:14     ` Kuppuswamy, Sathyanarayanan
  2021-06-29 19:47       ` Tom Lendacky
  1 sibling, 1 reply; 26+ messages in thread
From: Kuppuswamy, Sathyanarayanan @ 2021-06-28 19:14 UTC (permalink / raw)
  To: Tom Lendacky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel



On 6/28/21 10:52 AM, Tom Lendacky wrote:
> On 6/18/21 5:57 PM, Kuppuswamy Sathyanarayanan wrote:
>> Add a generic way to check if we run with an encrypted guest,
>> without requiring x86 specific ifdefs. This can then be used in
>> non architecture specific code.
>>
>> prot_guest_has() is used to check for protected guest feature
>> flags.
>>
>> Originally-by: Andi Kleen <ak@linux.intel.com>
>> Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
>> ---
>>
>> Change since v1:
>>   * Introduced PR_GUEST_TDX and PR_GUEST_SEV vendor flags as per
>>     Boris suggestion.
>>   * Replaced is_tdx_guest() with if (boot_cpu_data.x86_vendor ==
>>     X86_VENDOR_INTEL) in prot_guest_has().
>>   * Modified tdx_protected_guest_has() and sev_protected_guest_has()
>>     to support vendor flags.
>>
>>   arch/Kconfig                           |  3 +++
>>   arch/x86/Kconfig                       |  2 ++
>>   arch/x86/include/asm/protected_guest.h | 20 +++++++++++++++++
>>   arch/x86/include/asm/sev.h             |  3 +++
>>   arch/x86/include/asm/tdx.h             |  4 ++++
>>   arch/x86/kernel/sev.c                  | 17 +++++++++++++++
>>   arch/x86/kernel/tdx.c                  | 17 +++++++++++++++
>>   include/linux/protected_guest.h        | 30 ++++++++++++++++++++++++++
>>   8 files changed, 96 insertions(+)
>>   create mode 100644 arch/x86/include/asm/protected_guest.h
>>   create mode 100644 include/linux/protected_guest.h
>>

>> +static inline bool prot_guest_has(unsigned long flag)
>> +{
>> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>> +		return tdx_protected_guest_has(flag);
>> +	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
>> +		return sev_protected_guest_has(flag);
> 
> So as I think about this, I don't think this will work if the hypervisor
> decides to change the vendor name, right?

For a TDX guest, the vendor name cannot be changed. It is set by the TDX
module and is fixed per the TDX module spec.

> 
> And doesn't TDX supply "IntelTDX    " as a signature. I don't see where
> the signature is used to set the CPU vendor to X86_VENDOR_INTEL.

We don't need to handle it specially for TDX. The generic
early_identify_cpu() will set boot_cpu_data.x86_vendor to
X86_VENDOR_INTEL for a TDX guest. I think it is based on the Intel CPUID
vendor string.
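
For example, a quick illustrative userspace check (assumes GCC's
<cpuid.h>; not kernel code): leaf 0 carries the standard vendor string
that early_identify_cpu() keys off, which is still "GenuineIntel" inside
a TD, while leaf 0x21 carries the separate "IntelTDX    " signature that
only cpuid_has_tdx_guest() looks at.

	#include <cpuid.h>
	#include <stdio.h>
	#include <string.h>

	static void print_sig(unsigned int leaf)
	{
		unsigned int eax, ebx, ecx, edx;
		char sig[13] = { 0 };

		__cpuid_count(leaf, 0, eax, ebx, ecx, edx);
		memcpy(sig + 0, &ebx, 4);
		memcpy(sig + 4, &edx, 4);
		memcpy(sig + 8, &ecx, 4);
		printf("leaf %#x: \"%s\"\n", leaf, sig);
	}

	int main(void)
	{
		print_sig(0);		/* "GenuineIntel" on Intel hardware */
		print_sig(0x21);	/* "IntelTDX    " only inside a TD */
		return 0;
	}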

> 
> The current SEV checks to set sev_status, which is used by sme_active(),
> sev_active, etc.) are based on the max leaf and CPUID bits, but not a
> CPUID vendor check.
> 

Do you also set the x86_vendor ID to AMD based on the SEV checks?

> So maybe we can keep the prot_guest_has() but I think it will have to be a
> common routine, with a "switch" statement that has supporting case element
> that check for "sev_active() || static_cpu_has(X86_FEATURE_TDX_GUEST)", etc.

>>   }
>> +
>> +bool sev_protected_guest_has(unsigned long flag)
>> +{
>> +	switch (flag) {
>> +	case PR_GUEST_MEM_ENCRYPT:
>> +	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
>> +	case PR_GUEST_UNROLL_STRING_IO:
>> +	case PR_GUEST_HOST_MEM_ENCRYPT:
>> +		return true;
> 
> This will need to be fixed up because this function will be called for
> baremetal and legacy guests and those properties aren't true for those
> situations. Something like (although I'm unsure of the difference between
> PR_GUEST_MEM_ENCRYPT and PR_GUEST_MEM_ENCRYPT_ACTIVE):

PR_GUEST_MEM_ENCRYPT_ACTIVE is suggested for the mem_encrypt_active()
case (I think it means some sort of memory encryption is active).

PR_GUEST_MEM_ENCRYPT means the guest supports memory encryption (the
sev_active() case).

Yes, I can include following changes in next version.

> 
> 	case PR_GUEST_MEM_ENCRYPT:
> 	case PR_GUEST_MEM_ENCRYPT_ACTIVE:
> 		return sev_active();
> 	case PR_GUEST_UNROLL_STRING_IO:
> 		return sev_active() && !sev_es_active();
> 	case PR_GUEST_HOST_MEM_ENCRYPT:
> 		return sme_active();
> 
> But you (or I) would have to audit all of the locations where
> mem_encrypt_active(), sme_active(), sev_active() and sev_es_active() are
> used, to be sure the right thing is being done. And for bisectability,
> that should probably be the first patch if you will be invoking
> prot_guest_has() in the same location as any of the identified functions.
> 
> Create the new helper and fixup the locations should be one (or more)
> patches. Then add the TDX support to the helper function as a follow-on patch.

Can you submit a patch to replace all existing use cases of mem_encrypt_active(),
sme_active(), sev_active() and sev_es_active() with prot_guest_has() calls? Since
I cannot test any of these changes for AMD, it would be better if you could do it.

Once you submit a tested version, I can enable these features for TDX and test
and submit it separately.

This patch can be split as below:

1. x86: Introduce generic protected guest abstraction patch (with below changes).
    - Remove all PR_GUEST flags in sev_protected_guest_has() and
      tdx_protected_guest_has().
2. Patch from you to use prot_guest_has() for AMD code and enable relevant
    PR_GUEST flags in sev_protected_guest_has().
3. Patch from me to use prot_guest_has() for TDX cases and enable relevant
    PR_GUEST flags in tdx_protected_guest_has().

Agree?


>> diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
>> new file mode 100644
>> index 000000000000..c5b7547e5a68
>> --- /dev/null
>> +++ b/include/linux/protected_guest.h
>> @@ -0,0 +1,30 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +#ifndef _LINUX_PROTECTED_GUEST_H
>> +#define _LINUX_PROTECTED_GUEST_H 1
>> +
>> +/* Protected Guest Feature Flags (leave 0-0xfff for vendor specific flags) */
>> +
>> +/* 0-ff is reserved for Intel specific flags */
>> +#define PR_GUEST_TDX				0x0000
>> +
>> +/* 100-1ff is reserved for AMD specific flags */
>> +#define PR_GUEST_SEV				0x0100
>> +
>> +/* Support for guest encryption */
>> +#define PR_GUEST_MEM_ENCRYPT			0x1000
> 
> I'm not sure I follow the difference between this and
> PR_GUEST_MEM_ENCRYPT_ACTIVE. Is this saying that the host has support for
> starting guests that support memory encryption or the guest has support
> for memory encryption but it hasn't been activated yet (which doesn't seem
> possible)?

Explained it above.

> 
> Thanks,
> Tom
> 

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 04/11] x86: Introduce generic protected guest abstraction
  2021-06-28 19:14     ` Kuppuswamy, Sathyanarayanan
@ 2021-06-29 19:47       ` Tom Lendacky
  0 siblings, 0 replies; 26+ messages in thread
From: Tom Lendacky @ 2021-06-29 19:47 UTC (permalink / raw)
  To: Kuppuswamy, Sathyanarayanan, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On 6/28/21 2:14 PM, Kuppuswamy, Sathyanarayanan wrote:
> 
> 
> On 6/28/21 10:52 AM, Tom Lendacky wrote:
>> On 6/18/21 5:57 PM, Kuppuswamy Sathyanarayanan wrote:

>>> +static inline bool prot_guest_has(unsigned long flag)
>>> +{
>>> +    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>>> +        return tdx_protected_guest_has(flag);
>>> +    else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
>>> +        return sev_protected_guest_has(flag);
>>
>> So as I think about this, I don't think this will work if the hypervisor
>> decides to change the vendor name, right?
> 
> For TDX guest, vendor name cannot be changed. It is set by TDX module and
> it is fixed as per TDX module spec.
> 
>>
>> And doesn't TDX supply "IntelTDX    " as a signature. I don't see where
>> the signature is used to set the CPU vendor to X86_VENDOR_INTEL.>
> We don't need to specially handle it for TDX. Generic early_identify_cpu()
> will
> set boot_cpu_data.x86_vendor as X86_VENDOR_INTEL for TDX guest. I think it is
> based on Intel in vendor string.

Hmmm... I must be missing something then. I thought early_identify_cpu()
would read the signature, which would be "IntelTDX    ", right? Then that
would be compared against the structs that register via cpu_dev_register(),
which would contain the x86_vendor value. I don't see anything registering
with the "IntelTDX    " signature, so I don't know how you'll get an
x86_vendor value of X86_VENDOR_INTEL.

I'm probably missing something there, but it shouldn't matter for this
routine going forward.

> 
>>
>> The current SEV checks to set sev_status, which is used by sme_active(),
>> sev_active, etc.) are based on the max leaf and CPUID bits, but not a
>> CPUID vendor check.
>>
> 
> You also set x86_vendor id as AMD based on SEV checks?

No, we don't.

> 
>> So maybe we can keep the prot_guest_has() but I think it will have to be a
>> common routine, with a "switch" statement that has supporting case element
>> that check for "sev_active() || static_cpu_has(X86_FEATURE_TDX_GUEST)",
>> etc.
> 
>>>   }
>>> +
>>> +bool sev_protected_guest_has(unsigned long flag)
>>> +{
>>> +    switch (flag) {
>>> +    case PR_GUEST_MEM_ENCRYPT:
>>> +    case PR_GUEST_MEM_ENCRYPT_ACTIVE:
>>> +    case PR_GUEST_UNROLL_STRING_IO:
>>> +    case PR_GUEST_HOST_MEM_ENCRYPT:
>>> +        return true;
>>
>> This will need to be fixed up because this function will be called for
>> baremetal and legacy guests and those properties aren't true for those
>> situations. Something like (although I'm unsure of the difference between
>> PR_GUEST_MEM_ENCRYPT and PR_GUEST_MEM_ENCRYPT_ACTIVE):
> 
> MEM_ENCRYPT_ACTIVE is suggested for the mem_encrypt_active() case (I think
> it means some sort of encryption is active).
> 
> PR_GUEST_MEM_ENCRYPT means the guest supports memory encryption (the
> sev_active() case).

Yeah, this is the problem with the name having guest in everything when
there are host and guest scenarios for AMD.

We have PR_GUEST_HOST_MEM_ENCRYPT but it would look strange to have
PR_GUEST_GUEST_MEM_ENCRYPT.

> 
> Yes, I can include the following changes in the next version.
> 
>>
>>     case PR_GUEST_MEM_ENCRYPT:
>>     case PR_GUEST_MEM_ENCRYPT_ACTIVE:
>>         return sev_active();
>>     case PR_GUEST_UNROLL_STRING_IO:
>>         return sev_active() && !sev_es_active();
>>     case PR_GUEST_HOST_MEM_ENCRYPT:
>>         return sme_active();
>>
>> But you (or I) would have to audit all of the locations where
>> mem_encrypt_active(), sme_active(), sev_active() and sev_es_active() are
>> used, to be sure the right thing is being done. And for bisectability,
>> that should probably be the first patch if you will be invoking
>> prot_guest_has() in the same location as any of the identified functions.
>>
>> Creating the new helper and fixing up the locations should be one (or more)
>> patches. Then add the TDX support to the helper function as a follow-on
>> patch.
> 
> Can you submit a patch to replace all existing use cases of
> mem_encrypt_active(), sme_active(), sev_active() and sev_es_active() with
> prot_guest_has() calls? Since I cannot test any of these changes for AMD,
> it would be better if you could do it.
> 
> Once you submit a tested version, I can enable these features for TDX,
> test them, and submit that separately.
> 
> This patch can be split as below:
> 
> 1. x86: Introduce generic protected guest abstraction patch (with the
>    changes below).
>    - Remove all PR_GUEST flags in sev_protected_guest_has() and
>      tdx_protected_guest_has().
> 2. Patch from you to use prot_guest_has() for AMD code and enable relevant
>    PR_GUEST flags in sev_protected_guest_has().
> 3. Patch from me to use prot_guest_has() for TDX cases and enable relevant
>    PR_GUEST flags in tdx_protected_guest_has().
> 
> Agree?

So I can work on a pre-patch series. It will be purely a replacement for
the current SME/SEV calls. You'll need to add all of the TDX support in a
subsequent patch in the TDX series. Given this is a pre-patch, I will
probably reset the flag values slightly and work on the names to be less
confusing.
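
To make the plan above concrete, here is a minimal sketch of what such a
combined helper could look like, using the existing sme_active(),
sev_active() and sev_es_active() helpers plus the proposed
X86_FEATURE_TDX_GUEST bit. The flag names follow this thread, and which
flags TDX actually sets is illustrative only, since settling that split is
exactly what the pre-patch series is meant to do:

	/*
	 * Illustrative sketch, not the final implementation: a common
	 * prot_guest_has() that folds the SME/SEV helpers and a TDX
	 * feature-bit check into one switch, as discussed above.
	 */
	bool prot_guest_has(unsigned long flag)
	{
		bool tdx = static_cpu_has(X86_FEATURE_TDX_GUEST);

		switch (flag) {
		case PR_GUEST_MEM_ENCRYPT:		/* guest memory encryption supported */
		case PR_GUEST_MEM_ENCRYPT_ACTIVE:	/* some form of encryption is active */
			return sev_active() || tdx;
		case PR_GUEST_UNROLL_STRING_IO:		/* string I/O needs unrolling */
			return (sev_active() && !sev_es_active()) || tdx;
		case PR_GUEST_HOST_MEM_ENCRYPT:		/* host-side (SME) encryption */
			return sme_active();
		default:
			return false;
		}
	}

Call sites would then switch from mem_encrypt_active(), sme_active(),
sev_active() or sev_es_active() to the matching prot_guest_has(PR_GUEST_*)
check, which is the audit/bisectability work described above.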

Thanks,
Tom


* Re: [PATCH v3 00/11] Add TDX Guest Support (Initial support)
  2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
                   ` (10 preceding siblings ...)
  2021-06-18 22:57 ` [PATCH v3 11/11] x86/tdx: Handle CPUID via #VE Kuppuswamy Sathyanarayanan
@ 2021-06-30 23:22 ` Sathyanarayanan Kuppuswamy Natarajan
  11 siblings, 0 replies; 26+ messages in thread
From: Sathyanarayanan Kuppuswamy Natarajan @ 2021-06-30 23:22 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
	Andy Lutomirski, Peter H Anvin, Dave Hansen, Tony Luck,
	Dan Williams, Andi Kleen, Kirill Shutemov, Sean Christopherson,
	x86, Linux Kernel Mailing List

Hi x86 maintainers,

On Fri, Jun 18, 2021 at 3:58 PM Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
>
> Hi All,
>
> Intel's Trust Domain Extensions (TDX) protect guest VMs from malicious
> hosts and some physical attacks. This series adds the basic TDX guest
> infrastructure support (including #VE handler support, and #VE support
> for halt and CPUID). This is just a subset of patches in the bare minimum
> TDX support patch list which is required for supporting minimal
> functional TDX guest. Other basic feature features like #VE support for
> IO, MMIO, boot optimization fixes and shared-mm support will be submitted
> in a separate patch set. To make reviewing easier we split it into smaller
> series. This series alone is not necessarily fully functional.
>
> Also, the host-side support patches, and support for advanced TD guest
> features like attestation or debug-mode will be submitted at a later time.
> Also, at this point it is not secure with some known holes in drivers, and
> also hasn’t been fully audited and fuzzed yet.
>
> TDX has a lot of similarities to SEV. It enhances confidentiality and
> of guest memory and state (like registers) and includes a new exception
> (#VE) for the same basic reasons as SEV-ES. Like SEV-SNP (not merged
> yet), TDX limits the host's ability to effect changes in the guest
> physical address space. With TDX the host cannot access the guest memory,
> so various functionality that would normally be done in KVM has moved
> into a (paravirtualized) guest. Partially this is done using the
> Virtualization Exception (#VE) and partially with direct paravirtual hooks.
>
> The TDX architecture also includes a new CPU mode called
> Secure-Arbitration Mode (SEAM). The software (TDX module) running in this
> mode arbitrates interactions between host and guest and implements many of
> the guarantees of the TDX architecture.
>
> Some of the key differences between TD and regular VM is,
>
> 1. Multi CPU bring-up is done using the ACPI MADT wake-up table.
> 2. A new #VE exception handler is added. The TDX module injects #VE exception
>    to the guest TD in cases of instructions that need to be emulated, disallowed
>    MSR accesses, etc.
> 3. By default memory is marked as private, and TD will selectively share it with
>    VMM based on need.
>
> Note that the kernel will also need to be hardened against low level inputs from
> the now untrusted hosts. This will be done in follow on patches.
>
> You can find TDX related documents in the following link.
>
> https://software.intel.com/content/www/br/pt/develop/articles/intel-trust-domain-extensions.html
>
> Changes since v1 (v2 is partial set submission):
>  * Patch titled "x86/x86: Add early_is_tdx_guest() interface" is moved
>    out of this series.
>  * Rest of the change log is added per patch.

I submitted the following TDX patch series 2-3 weeks ago, and so far we have
only received feedback on a few patches in sets 1 and 4 (from Boris and
Tom Lendacky). Are you planning to take a look at the other series in this
submission, or are you waiting for new revisions? Please let me know.

Sets 1-4 are the core patches that add TDX guest support.
Sets 5 and 6 add support for extra TDX features.

Add TDX Guest Support (Initial support) [set 1] (currently v3 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=505232

Add TDX Guest Support (#VE handler support) [set 2] (currently v2 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=506230

Add TDX Guest Support (boot fixes) [set 3] (currently v2 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=506231

Add TDX Guest Support (shared-mm support) [set 4] (currently v2 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=506232

Add TDX Guest Support (Debug support) [set 5] (currently v1 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=506233

Add TDX Guest Support (Attestation support) [set 6] (currently v1 version)
 - https://lore.kernel.org/patchwork/project/lkml/list/?series=506234


-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer


* Re: [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-06-18 22:57 ` [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature Kuppuswamy Sathyanarayanan
  2021-06-18 23:39   ` Borislav Petkov
@ 2021-07-15 11:56   ` Xiaoyao Li
  2021-07-19  5:10     ` Kuppuswamy, Sathyanarayanan
  1 sibling, 1 reply; 26+ messages in thread
From: Xiaoyao Li @ 2021-07-15 11:56 UTC (permalink / raw)
  To: Kuppuswamy Sathyanarayanan, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel

On 6/19/2021 6:57 AM, Kuppuswamy Sathyanarayanan wrote:
> Add CPU feature detection for Trust Domain Extensions support. The TDX
> feature adds capabilities to keep guest register state and memory
> isolated from the hypervisor.
> 
> For TDX guest platforms, executing CPUID(eax=0x21, ecx=0) will return
> following values in EAX, EBX, ECX and EDX.
> 
> EAX:  Maximum sub-leaf number:  0
> EBX/EDX/ECX:  Vendor string:
> 
> EBX =  "Inte"
> EDX =  "lTDX"
> ECX =  "    "
> 
> So when above condition is true, set X86_FEATURE_TDX_GUEST feature cap
> bit.
> 

...

> +static inline bool cpuid_has_tdx_guest(void)
> +{
> +	u32 eax, sig[3];
> +
> +	if (cpuid_eax(0) < TDX_CPUID_LEAF_ID)
> +		return false;
> +
> +	cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[1], &sig[2]);

As the change log describes, EBX + EDX + ECX is "IntelTDX    ", not
EBX + ECX + EDX. So it should be

	cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[2], &sig[1]);

Please also correct early_cpuid_has_tdx_guest()

> +
> +	return !memcmp("IntelTDX    ", sig, 12);
> +}
> +
> +void __init tdx_early_init(void)
> +{
> +	if (!cpuid_has_tdx_guest())
> +		return;
> +
> +	setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
> +
> +	pr_info("Guest initialized\n");
> +}
> 
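
For clarity, a sketch of the corrected detection with the register order
spelled out; TDX_CPUID_LEAF_ID is assumed to be 0x21 per the change log,
and this is meant to illustrate the fix rather than reproduce the final
patch:

	/*
	 * cpuid_count(leaf, subleaf, &eax, &ebx, &ecx, &edx) returns the
	 * registers in EAX/EBX/ECX/EDX order, so EDX must land in sig[1]
	 * and ECX in sig[2] to rebuild "Inte" + "lTDX" + "    ".
	 */
	static inline bool cpuid_has_tdx_guest(void)
	{
		u32 eax, sig[3];

		if (cpuid_eax(0) < TDX_CPUID_LEAF_ID)
			return false;

		/* EBX -> sig[0], EDX -> sig[1], ECX -> sig[2] */
		cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[2], &sig[1]);

		return !memcmp("IntelTDX    ", sig, 12);
	}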



* Re: [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature
  2021-07-15 11:56   ` Xiaoyao Li
@ 2021-07-19  5:10     ` Kuppuswamy, Sathyanarayanan
  0 siblings, 0 replies; 26+ messages in thread
From: Kuppuswamy, Sathyanarayanan @ 2021-07-19  5:10 UTC (permalink / raw)
  To: Xiaoyao Li, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Peter Zijlstra, Andy Lutomirski
  Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
	Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
	x86, linux-kernel



On 7/15/21 4:56 AM, Xiaoyao Li wrote:
>>
> 
> As the change log describes, EBX + EDX + ECX is "IntelTDX    ", not EBX + ECX + EDX. So it should be
> 
>      cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, &sig[0], &sig[2], &sig[1]);
> 
> Please also correct early_cpuid_has_tdx_guest()

Good catch. I will fix this in the next submission.

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer



Thread overview: 26+ messages
2021-06-18 22:57 [PATCH v3 00/11] Add TDX Guest Support (Initial support) Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 01/11] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 02/11] x86/tdx: Introduce INTEL_TDX_GUEST config option Kuppuswamy Sathyanarayanan
2021-06-19 11:59   ` Juergen Gross
2021-06-19 17:11     ` Kuppuswamy, Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 03/11] x86/cpufeatures: Add TDX Guest CPU feature Kuppuswamy Sathyanarayanan
2021-06-18 23:39   ` Borislav Petkov
2021-06-19  0:13     ` Kuppuswamy, Sathyanarayanan
2021-06-19  6:38       ` Borislav Petkov
2021-07-15 11:56   ` Xiaoyao Li
2021-07-19  5:10     ` Kuppuswamy, Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 04/11] x86: Introduce generic protected guest abstraction Kuppuswamy Sathyanarayanan
2021-06-24 15:01   ` Borislav Petkov
2021-06-24 17:58     ` Kuppuswamy, Sathyanarayanan
2021-06-28 17:52   ` Tom Lendacky
2021-06-28 18:59     ` Tom Lendacky
2021-06-28 19:14     ` Kuppuswamy, Sathyanarayanan
2021-06-29 19:47       ` Tom Lendacky
2021-06-18 22:57 ` [PATCH v3 05/11] x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper functions Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 06/11] x86/tdx: Get TD execution environment information via TDINFO Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 07/11] x86/traps: Add #VE support for TDX guest Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 08/11] x86/tdx: Add HLT " Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 09/11] x86/tdx: Wire up KVM hypercalls Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 10/11] x86/tdx: Add MSR support for TDX guest Kuppuswamy Sathyanarayanan
2021-06-18 22:57 ` [PATCH v3 11/11] x86/tdx: Handle CPUID via #VE Kuppuswamy Sathyanarayanan
2021-06-30 23:22 ` [PATCH v3 00/11] Add TDX Guest Support (Initial support) Sathyanarayanan Kuppuswamy Natarajan
