From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com,
	Paolo Bonzini <pbonzini@redhat.com>,
	Jim Mattson <jmattson@google.com>,
	erdemaktas@google.com, Connor Kuehl <ckuehl@redhat.com>,
	Sean Christopherson <seanjc@google.com>
Subject: [RFC PATCH v5 042/104] KVM: x86/mmu: Track shadow MMIO value/mask on a per-VM basis
Date: Fri,  4 Mar 2022 11:48:58 -0800	[thread overview]
Message-ID: <b494b94bf2d6a5d841cb76e63e255d4cff906d83.1646422845.git.isaku.yamahata@intel.com> (raw)
In-Reply-To: <cover.1646422845.git.isaku.yamahata@intel.com>

From: Sean Christopherson <sean.j.christopherson@intel.com>

Define the EPT Violation #VE control bit, #VE info VMCS fields, and the
suppress #VE bit for EPT entries.

TDX will use a shadow PTE value for MMIO that differs from VMX.  Add members
to struct kvm_arch and track the MMIO value/mask per-VM.  Using a per-VM EPT
entry value for MMIO keeps the existing VMX logic working unchanged.
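
The gist of the per-VM tracking, condensed from the hunks below (a sketch,
not the complete change):

	/* struct kvm_arch gains per-VM copies of the MMIO value/mask. */
	u64 shadow_mmio_value;
	u64 shadow_mmio_mask;

	/* MMIO SPTE detection consults the VM instead of global masks. */
	static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
	{
		return (spte & kvm->arch.shadow_mmio_mask) ==
			kvm->arch.shadow_mmio_value &&
			likely(kvm->arch.shadow_mmio_value ||
			       kvm_gfn_stolen_mask(kvm));
	}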

For a VMX VM, the EPT entry for MMIO is either a non-present PTE (present bit
cleared) with no backing guest page (on EPT violation, KVM searches the
backing guest memory and finds there is no backing guest page), or a value
that triggers EPT misconfiguration as a fast path.  Once an MMIO access hits
such an EPT entry, the entry is updated with the MMIO value so that future
accesses are recognized as MMIO without searching the backing guest pages
again.  KVM then parses the guest instruction to figure out the
address/value/width of the MMIO access.
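
With the per-VM mask in place, VMX sets its EPT-misconfiguration MMIO value
at VM-init time rather than globally; the relevant piece of the vmx.c hunk
below is:

	/*
	 * EPT misconfigurations are generated if bits 2:0 of an EPT
	 * paging-structure entry are 110b (write/execute).
	 */
	if (enable_ept)
		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
					   VMX_EPT_MISCONFIG_WX_VALUE, 0);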

For a guest TD, guest memory is protected, so the VMM cannot parse the guest
instruction that triggered the EPT violation.  Instead, the VMM sets up the
(shared) EPT to trigger #VE.  When the guest TD issues MMIO, #VE is injected;
the guest #VE handler converts the MMIO access into an MMIO hypercall that
passes the address/value/width to the VMM (or the guest directly
paravirtualizes the MMIO into a hypercall).  The VMM can then handle the MMIO
hypercall without parsing the guest instruction.
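
For illustration only, the guest-side flow is roughly the following; the
handler and helper names are hypothetical (the guest implementation is not
part of this patch):

	/* Hypothetical TD guest #VE handler path for an MMIO write. */
	static void handle_mmio_ve(struct ve_info *ve)
	{
		u64 gpa  = ve->gpa;               /* faulting MMIO GPA     */
		int size = mmio_insn_size(ve);    /* decoded in the guest  */
		u64 val  = mmio_insn_value(ve);

		/* Hand address/value/width to the VMM; no VMM-side decode. */
		tdvmcall_mmio_write(gpa, size, val);
	}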

When the guest accesses a GPA, #VE (virtualization exception) is injected
into the guest if the "EPT-violation #VE" VM-execution control is set and the
suppress VE bit in the EPT entry is cleared.  Because TDX guest vCPU state
and memory are protected, the VMM cannot emulate MMIO for the TDX guest on
EPT violation by snooping vCPU state and parsing the instruction to figure
out the MMIO address and value.  Instead, PV MMIO (the MMIO hypercall) is
adopted: on EPT violation, the CPU injects #VE into the guest and the guest
converts the MMIO instruction into a PV MMIO hypercall, or the guest directly
issues the MMIO hypercall.

The existing VMX code uses zero as the initial value for EPT entries.  TDX
will enable the EPT-violation #VE VM-execution control and requires the
suppress VE bit to be cleared in shared EPT entries so that #VE is injected
into the TDX guest.  To keep the same behavior for VMX, the suppress VE bit
needs to be set.  Allow an initial value for EPT entries to be specified; if
TDX is enabled, set the initial EPT entry value with the suppress VE bit set.
The EPT-violation #VE VM-execution control will be enabled, and for TDX the
suppress VE bit will be cleared in shared EPT entries.
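
For reference, the resulting setup in vt_hardware_setup() (see the
vmx/main.c hunk below) is:

	if (enable_ept) {
		const u64 init_value = enable_tdx ? VMX_EPT_SUPPRESS_VE_BIT : 0ull;

		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
				      cpu_has_vmx_ept_execute_only(), init_value);
		kvm_mmu_set_spte_init_value(init_value);
	}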

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/include/asm/vmx.h      |  1 +
 arch/x86/kvm/mmu.h              |  6 ++++--
 arch/x86/kvm/mmu/mmu.c          | 19 +++++++++++------
 arch/x86/kvm/mmu/spte.c         | 38 ++++++++++++++++-----------------
 arch/x86/kvm/mmu/spte.h         |  9 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 +++---
 arch/x86/kvm/svm/svm.c          |  2 +-
 arch/x86/kvm/vmx/main.c         |  7 ++++--
 arch/x86/kvm/vmx/tdx.c          |  2 +-
 arch/x86/kvm/vmx/tdx.h          |  2 ++
 arch/x86/kvm/vmx/vmx.c          |  8 +++++++
 12 files changed, 63 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d33d79f2af2d..fcab2337819c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1069,6 +1069,9 @@ struct kvm_arch {
 	 */
 	spinlock_t mmu_unsync_pages_lock;
 
+	u64 shadow_mmio_value;
+	u64 shadow_mmio_mask;
+
 	struct list_head assigned_dev_head;
 	struct iommu_domain *iommu_domain;
 	bool iommu_noncoherent;
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 0ffaa3156a4e..88d9b8cc7dde 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -498,6 +498,7 @@ enum vmcs_field {
 #define VMX_EPT_IPAT_BIT    			(1ull << 6)
 #define VMX_EPT_ACCESS_BIT			(1ull << 8)
 #define VMX_EPT_DIRTY_BIT			(1ull << 9)
+#define VMX_EPT_SUPPRESS_VE_BIT			(1ull << 63)
 #define VMX_EPT_RWX_MASK                        (VMX_EPT_READABLE_MASK |       \
 						 VMX_EPT_WRITABLE_MASK |       \
 						 VMX_EPT_EXECUTABLE_MASK)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 650989c37f2e..b49841e4faaa 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -64,8 +64,10 @@ static __always_inline u64 rsvd_bits(int s, int e)
 	return ((2ULL << (e - s)) - 1) << s;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
+				u64 access_mask);
+void kvm_mmu_set_default_mmio_spte_mask(u64 mask);
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only, u64 init_value);
 void kvm_mmu_set_spte_init_value(u64 init_value);
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d8c1505155b0..6e9847b1124b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2336,7 +2336,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 				return kvm_mmu_prepare_zap_page(kvm, child,
 								invalid_list);
 		}
-	} else if (is_mmio_spte(pte)) {
+	} else if (is_mmio_spte(kvm, pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
 	return 0;
@@ -3069,9 +3069,12 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fa
 		/*
 		 * If MMIO caching is disabled, emulate immediately without
 		 * touching the shadow page tables as attempting to install an
-		 * MMIO SPTE will just be an expensive nop.
+		 * MMIO SPTE will just be an expensive nop.  The exception is
+		 * a TDX guest, which also uses shadow_mmio_value == 0 to
+		 * emulate MMIO accesses.
 		 */
-		if (unlikely(!shadow_mmio_value)) {
+		if (unlikely(!vcpu->kvm->arch.shadow_mmio_value)
+		    && !kvm_gfn_stolen_mask(vcpu->kvm)) {
 			*ret_val = RET_PF_EMULATE;
 			return true;
 		}
@@ -3209,7 +3212,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			break;
 
 		sp = sptep_to_sp(sptep);
-		if (!is_last_spte(spte, sp->role.level) || is_mmio_spte(spte))
+		if (!is_last_spte(spte, sp->role.level) ||
+			is_mmio_spte(vcpu->kvm, spte))
 			break;
 
 		/*
@@ -3892,7 +3896,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (WARN_ON(reserved))
 		return -EINVAL;
 
-	if (is_mmio_spte(spte)) {
+	if (is_mmio_spte(vcpu->kvm, spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned int access = get_mmio_spte_access(spte);
 
@@ -4294,7 +4298,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
-	if (unlikely(is_mmio_spte(*sptep))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
 			mmu_spte_clear_no_track(sptep);
 			return true;
@@ -5791,6 +5795,9 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 	kvm_page_track_register_notifier(kvm, node);
 
 	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
+	kvm_mmu_set_mmio_spte_mask(kvm, shadow_default_mmio_mask,
+				   shadow_default_mmio_mask,
+				   ACC_WRITE_MASK | ACC_USER_MASK);
 }
 
 void kvm_mmu_uninit_vm(struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 5071e8332db2..ea83927b9231 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -29,8 +29,7 @@ u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 u64 __read_mostly shadow_user_mask;
 u64 __read_mostly shadow_accessed_mask;
 u64 __read_mostly shadow_dirty_mask;
-u64 __read_mostly shadow_mmio_value;
-u64 __read_mostly shadow_mmio_mask;
+u64 __read_mostly shadow_default_mmio_mask;
 u64 __read_mostly shadow_mmio_access_mask;
 u64 __read_mostly shadow_present_mask;
 u64 __read_mostly shadow_me_mask;
@@ -59,10 +58,11 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 spte = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	WARN_ON_ONCE(!shadow_mmio_value);
+	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value &&
+		     !kvm_gfn_stolen_mask(vcpu->kvm));
 
 	access &= shadow_mmio_access_mask;
-	spte |= shadow_mmio_value | access;
+	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
 	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
 	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
 		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
@@ -279,7 +279,8 @@ u64 mark_spte_for_access_track(u64 spte)
 	return spte;
 }
 
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
+void kvm_mmu_set_mmio_spte_mask(struct kvm *kvm, u64 mmio_value, u64 mmio_mask,
+				u64 access_mask)
 {
 	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
@@ -308,39 +309,32 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
-	shadow_mmio_value = mmio_value;
-	shadow_mmio_mask  = mmio_mask;
+	kvm->arch.shadow_mmio_value = mmio_value;
+	kvm->arch.shadow_mmio_mask = mmio_mask;
 	shadow_mmio_access_mask = access_mask;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only, u64 init_value)
 {
 	shadow_user_mask	= VMX_EPT_READABLE_MASK;
 	shadow_accessed_mask	= has_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull;
 	shadow_dirty_mask	= has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull;
 	shadow_nx_mask		= 0ull;
 	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
-	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
+	shadow_present_mask	=
+		(has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | init_value;
 	shadow_acc_track_mask	= VMX_EPT_RWX_MASK;
 	shadow_me_mask		= 0ull;
 
 	shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
 	shadow_mmu_writable_mask  = EPT_SPTE_MMU_WRITABLE;
-
-	/*
-	 * EPT Misconfigurations are generated if the value of bits 2:0
-	 * of an EPT paging-structure entry is 110b (write/execute).
-	 */
-	kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE,
-				   VMX_EPT_RWX_MASK, 0);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_ept_masks);
 
 void kvm_mmu_reset_all_pte_masks(void)
 {
 	u8 low_phys_bits;
-	u64 mask;
 
 	shadow_phys_bits = kvm_get_shadow_phys_bits();
 
@@ -389,9 +383,13 @@ void kvm_mmu_reset_all_pte_masks(void)
 	 * PTEs and so the reserved PA approach must be disabled.
 	 */
 	if (shadow_phys_bits < 52)
-		mask = BIT_ULL(51) | PT_PRESENT_MASK;
+		shadow_default_mmio_mask = BIT_ULL(51) | PT_PRESENT_MASK;
 	else
-		mask = 0;
+		shadow_default_mmio_mask = 0;
+}
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
+void kvm_mmu_set_default_mmio_spte_mask(u64 mask)
+{
+	shadow_default_mmio_mask = mask;
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_set_default_mmio_spte_mask);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 8e13a35ab8c9..bde843bce878 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -165,8 +165,7 @@ extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 extern u64 __read_mostly shadow_user_mask;
 extern u64 __read_mostly shadow_accessed_mask;
 extern u64 __read_mostly shadow_dirty_mask;
-extern u64 __read_mostly shadow_mmio_value;
-extern u64 __read_mostly shadow_mmio_mask;
+extern u64 __read_mostly shadow_default_mmio_mask;
 extern u64 __read_mostly shadow_mmio_access_mask;
 extern u64 __read_mostly shadow_present_mask;
 extern u64 __read_mostly shadow_me_mask;
@@ -229,10 +228,10 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
  */
 extern u8 __read_mostly shadow_phys_bits;
 
-static inline bool is_mmio_spte(u64 spte)
+static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
-	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
-	       likely(shadow_mmio_value);
+	return (spte & kvm->arch.shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
+		likely(kvm->arch.shadow_mmio_value || kvm_gfn_stolen_mask(kvm));
 }
 
 static inline bool is_shadow_present_pte(u64 pte)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bc9e3553fba2..ebd0a02620e8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -447,8 +447,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		 * impact the guest since both the former and current SPTEs
 		 * are nonpresent.
 		 */
-		if (WARN_ON(!is_mmio_spte(old_spte) &&
-			    !is_mmio_spte(new_spte) &&
+		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
+			    !is_mmio_spte(kvm, new_spte) &&
 			    !is_removed_spte(new_spte)))
 			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 			       "should not be replaced with another,\n"
@@ -927,7 +927,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
-	if (unlikely(is_mmio_spte(new_spte))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
 		ret = RET_PF_EMULATE;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 778075b71dc3..c7eec23e9ebe 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4704,7 +4704,7 @@ static __init void svm_adjust_mmio_mask(void)
 	 */
 	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
+	kvm_mmu_set_default_mmio_spte_mask(mask);
 }
 
 static __init void svm_set_cpu_caps(void)
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 51aaafe6b432..b242a9dc9e29 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -23,9 +23,12 @@ static __init int vt_hardware_setup(void)
 
 	tdx_hardware_setup(&vt_x86_ops);
 
-	if (enable_ept)
+	if (enable_ept) {
+		const u64 init_value = enable_tdx ? VMX_EPT_SUPPRESS_VE_BIT : 0ull;
 		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
-				      cpu_has_vmx_ept_execute_only());
+				      cpu_has_vmx_ept_execute_only(), init_value);
+		kvm_mmu_set_spte_init_value(init_value);
+	}
 
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f86a257dd71b..c3434b33c452 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -11,7 +11,7 @@
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
 
-static bool __read_mostly enable_tdx = true;
+bool __read_mostly enable_tdx = true;
 module_param_named(tdx, enable_tdx, bool, 0644);
 
 #define TDX_MAX_NR_CPUID_CONFIGS					\
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 4ce7fcab6f64..b32e068c51b4 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -6,6 +6,7 @@
 
 #include "tdx_ops.h"
 
+extern bool __read_mostly enable_tdx;
 int tdx_module_setup(void);
 
 struct tdx_td_page {
@@ -166,6 +167,7 @@ static __always_inline u64 td_tdcs_exec_read64(struct kvm_tdx *kvm_tdx, u32 fiel
 }
 
 #else
+#define enable_tdx false
 static inline int tdx_module_setup(void) { return -ENODEV; };
 
 struct kvm_tdx;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 07fd892768be..00f88aa25047 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7065,6 +7065,14 @@ int vmx_vm_init(struct kvm *kvm)
 	if (!ple_gap)
 		kvm->arch.pause_in_guest = true;
 
+	/*
+	 * EPT Misconfigurations can be generated if the value of bits 2:0
+	 * of an EPT paging-structure entry is 110b (write/execute).
+	 */
+	if (enable_ept)
+		kvm_mmu_set_mmio_spte_mask(kvm, VMX_EPT_MISCONFIG_WX_VALUE,
+					   VMX_EPT_MISCONFIG_WX_VALUE, 0);
+
 	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
 		switch (l1tf_mitigation) {
 		case L1TF_MITIGATION_OFF:
-- 
2.25.1

