From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com,
	Paolo Bonzini <pbonzini@redhat.com>,
	erdemaktas@google.com, Sean Christopherson <seanjc@google.com>,
	Sagi Shahar <sagis@google.com>,
	David Matlack <dmatlack@google.com>,
	Kai Huang <kai.huang@intel.com>,
	Zhi Wang <zhi.wang.linux@gmail.com>,
	chen.bo@intel.com
Subject: [PATCH v14 077/113] KVM: TDX: Implement interrupt injection
Date: Sun, 28 May 2023 21:19:59 -0700
Message-ID: <27f69eee1d6dc2d6d7fca02f6f437a2ce9e5e6cd.1685333728.git.isaku.yamahata@intel.com>
In-Reply-To: <cover.1685333727.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX supports interrupt injection into a vCPU only via posted interrupts.  Wire
up the corresponding kvm x86 operations to the posted-interrupt machinery, and
move kvm_vcpu_trigger_posted_interrupt() from vmx.c to common.h so that the
code can be shared with TDX.

VMX can inject an interrupt by setting the interrupt-information field,
VM_ENTRY_INTR_INFO_FIELD, of the VMCS.  TDX supports interrupt injection only
via posted interrupts, so make the execution paths that would access
VM_ENTRY_INTR_INFO_FIELD no-ops for TDX guests.

Because the vCPU state is protected and APICv is always enabled for a TDX
guest, the VMM injects an interrupt by updating the posted-interrupt
descriptor.  Report that an interrupt can always be injected.
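
For reference, the delivery protocol used here boils down to three steps: set
the vector's bit in the PIR, set the outstanding-notification (ON) bit, and
send the notification IPI only if ON was previously clear.  The stand-alone
user-space model below sketches that flow; struct pi_desc_model, pi_set_pir(),
pi_set_on() and deliver_posted_interrupt() are illustrative names only, the
real helpers being pi_test_and_set_pir(), pi_test_and_set_on() and
__vmx_deliver_posted_interrupt() in the diff below.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pi_desc_model {
	_Atomic uint64_t pir[4];	/* 256-bit posted-interrupt request bitmap */
	_Atomic uint32_t control;	/* bit 0: ON (notification outstanding) */
};

/* Returns true if the vector was already pending in the PIR. */
static bool pi_set_pir(struct pi_desc_model *pi, int vector)
{
	uint64_t mask = 1ULL << (vector & 63);

	return atomic_fetch_or(&pi->pir[vector >> 6], mask) & mask;
}

/* Returns true if a notification was already outstanding (ON already set). */
static bool pi_set_on(struct pi_desc_model *pi)
{
	return atomic_fetch_or(&pi->control, 1u) & 1u;
}

static void deliver_posted_interrupt(struct pi_desc_model *pi, int vector)
{
	if (pi_set_pir(pi, vector))
		return;		/* vector already posted */

	if (pi_set_on(pi))
		return;		/* a notification IPI is already in flight */

	/* Here KVM would send the notification IPI or wake the blocked vCPU. */
	printf("notify: send POSTED_INTR_VECTOR for vector 0x%x\n", vector);
}

int main(void)
{
	struct pi_desc_model pi = { .control = 0 };

	deliver_posted_interrupt(&pi, 0x31);	/* first call sends a notification */
	deliver_posted_interrupt(&pi, 0x31);	/* duplicate is a silent no-op */
	return 0;
}

Built with a plain C11 compiler, the second call for the same vector is
absorbed by the PIR/ON checks and no second notification is printed.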

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/common.h      | 71 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/main.c        | 93 ++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/posted_intr.c |  2 +-
 arch/x86/kvm/vmx/posted_intr.h |  2 +
 arch/x86/kvm/vmx/tdx.c         | 25 +++++++++
 arch/x86/kvm/vmx/vmx.c         | 67 +-----------------------
 arch/x86/kvm/vmx/x86_ops.h     |  7 ++-
 7 files changed, 190 insertions(+), 77 deletions(-)

diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 235908f3e044..747f993cf7de 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -4,6 +4,7 @@
 
 #include <linux/kvm_host.h>
 
+#include "posted_intr.h"
 #include "mmu.h"
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
@@ -30,4 +31,74 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
+static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
+						     int pi_vec)
+{
+#ifdef CONFIG_SMP
+	if (vcpu->mode == IN_GUEST_MODE) {
+		/*
+		 * The vector of the virtual has already been set in the PIR.
+		 * Send a notification event to deliver the virtual interrupt
+		 * unless the vCPU is the currently running vCPU, i.e. the
+		 * event is being sent from a fastpath VM-Exit handler, in
+		 * which case the PIR will be synced to the vIRR before
+		 * re-entering the guest.
+		 *
+		 * When the target is not the running vCPU, the following
+		 * possibilities emerge:
+		 *
+		 * Case 1: vCPU stays in non-root mode. Sending a notification
+		 * event posts the interrupt to the vCPU.
+		 *
+		 * Case 2: vCPU exits to root mode and is still runnable. The
+		 * PIR will be synced to the vIRR before re-entering the guest.
+		 * Sending a notification event is ok as the host IRQ handler
+		 * will ignore the spurious event.
+		 *
+		 * Case 3: vCPU exits to root mode and is blocked. vcpu_block()
+		 * has already synced PIR to vIRR and never blocks the vCPU if
+		 * the vIRR is not empty. Therefore, a blocked vCPU here does
+		 * not wait for any requested interrupts in PIR, and sending a
+		 * notification event also results in a benign, spurious event.
+		 */
+
+		if (vcpu != kvm_get_running_vcpu())
+			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
+		return;
+	}
+#endif
+	/*
+	 * The vCPU isn't in the guest; wake the vCPU in case it is blocking,
+	 * otherwise do nothing as KVM will grab the highest priority pending
+	 * IRQ via ->sync_pir_to_irr() in vcpu_enter_guest().
+	 */
+	kvm_vcpu_wake_up(vcpu);
+}
+
+/*
+ * Send an interrupt to a vCPU via posted interrupt:
+ * 1. If the target vCPU is running (non-root mode), send a posted-interrupt
+ * notification and the hardware will sync the PIR to the vIRR atomically.
+ * 2. If the target vCPU isn't running (root mode), kick it to pick up the
+ * interrupt from the PIR on the next VM-Enter.
+ */
+static inline void __vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu,
+						  struct pi_desc *pi_desc, int vector)
+{
+	if (pi_test_and_set_pir(vector, pi_desc))
+		return;
+
+	/* If a previous notification has sent the IPI, nothing to do.  */
+	if (pi_test_and_set_on(pi_desc))
+		return;
+
+	/*
+	 * The implied barrier in pi_test_and_set_on() pairs with the smp_mb_*()
+	 * after setting vcpu->mode in vcpu_enter_guest(), thus the vCPU is
+	 * guaranteed to see PID.ON=1 and sync the PIR to IRR if triggering a
+	 * posted interrupt "fails" because vcpu->mode != IN_GUEST_MODE.
+	 */
+	kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR);
+}
+
 #endif /* __KVM_X86_VMX_COMMON_H */
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 4f3beccebee1..c86c5e3f9ea3 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -239,6 +239,34 @@ static bool vt_protected_apic_has_interrupt(struct kvm_vcpu *vcpu)
 	return tdx_protected_apic_has_interrupt(vcpu);
 }
 
+static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+{
+	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
+
+	pi_clear_on(pi);
+	memset(pi->pir, 0, sizeof(pi->pir));
+}
+
+static int vt_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return -1;
+
+	return vmx_sync_pir_to_irr(vcpu);
+}
+
+static void vt_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector)
+{
+	if (is_td_vcpu(apic->vcpu)) {
+		tdx_deliver_interrupt(apic, delivery_mode, trig_mode,
+					     vector);
+		return;
+	}
+
+	vmx_deliver_interrupt(apic, delivery_mode, trig_mode, vector);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu)) {
@@ -306,6 +334,53 @@ static void vt_sched_in(struct kvm_vcpu *vcpu, int cpu)
 	vmx_sched_in(vcpu, cpu);
 }
 
+static void vt_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+	vmx_set_interrupt_shadow(vcpu, mask);
+}
+
+static u32 vt_get_interrupt_shadow(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return 0;
+
+	return vmx_get_interrupt_shadow(vcpu);
+}
+
+static void vt_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_inject_irq(vcpu, reinjected);
+}
+
+static void vt_cancel_injection(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_cancel_injection(vcpu);
+}
+
+static int vt_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_interrupt_allowed(vcpu, for_injection);
+}
+
+static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_enable_irq_window(vcpu);
+}
+
 static u8 vt_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
 	if (is_td_vcpu(vcpu))
@@ -405,31 +480,31 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
-	.set_interrupt_shadow = vmx_set_interrupt_shadow,
-	.get_interrupt_shadow = vmx_get_interrupt_shadow,
+	.set_interrupt_shadow = vt_set_interrupt_shadow,
+	.get_interrupt_shadow = vt_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
-	.inject_irq = vmx_inject_irq,
+	.inject_irq = vt_inject_irq,
 	.inject_nmi = vmx_inject_nmi,
 	.inject_exception = vmx_inject_exception,
-	.cancel_injection = vmx_cancel_injection,
-	.interrupt_allowed = vmx_interrupt_allowed,
+	.cancel_injection = vt_cancel_injection,
+	.interrupt_allowed = vt_interrupt_allowed,
 	.nmi_allowed = vmx_nmi_allowed,
 	.get_nmi_mask = vmx_get_nmi_mask,
 	.set_nmi_mask = vmx_set_nmi_mask,
 	.enable_nmi_window = vmx_enable_nmi_window,
-	.enable_irq_window = vmx_enable_irq_window,
+	.enable_irq_window = vt_enable_irq_window,
 	.update_cr8_intercept = vmx_update_cr8_intercept,
 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,
 	.set_apic_access_page_addr = vmx_set_apic_access_page_addr,
 	.refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl,
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
-	.apicv_post_state_restore = vmx_apicv_post_state_restore,
+	.apicv_post_state_restore = vt_apicv_post_state_restore,
 	.required_apicv_inhibits = VMX_REQUIRED_APICV_INHIBITS,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
 	.hwapic_isr_update = vmx_hwapic_isr_update,
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
-	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_interrupt = vmx_deliver_interrupt,
+	.sync_pir_to_irr = vt_sync_pir_to_irr,
+	.deliver_interrupt = vt_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
 	.protected_apic_has_interrupt = vt_protected_apic_has_interrupt,
 
diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 92de016852ca..2b2da6c18504 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -52,7 +52,7 @@ static inline struct vcpu_pi *vcpu_to_pi(struct kvm_vcpu *vcpu)
 	return (struct vcpu_pi *)vcpu;
 }
 
-static inline struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
+struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu)
 {
 	return &vcpu_to_pi(vcpu)->pi_desc;
 }
diff --git a/arch/x86/kvm/vmx/posted_intr.h b/arch/x86/kvm/vmx/posted_intr.h
index 2fe8222308b2..0f9983b6910b 100644
--- a/arch/x86/kvm/vmx/posted_intr.h
+++ b/arch/x86/kvm/vmx/posted_intr.h
@@ -105,6 +105,8 @@ struct vcpu_pi {
 	/* Until here common layout betwwn vcpu_vmx and vcpu_tdx. */
 };
 
+struct pi_desc *vcpu_to_pi_desc(struct kvm_vcpu *vcpu);
+
 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu);
 void pi_wakeup_handler(void);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 599e6cfefaab..2406db9047d5 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -7,6 +7,7 @@
 
 #include "capabilities.h"
 #include "x86_ops.h"
+#include "common.h"
 #include "tdx.h"
 #include "vmx.h"
 #include "x86.h"
@@ -534,6 +535,9 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_state_protected =
 		!(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
 
+	tdx->pi_desc.nv = POSTED_INTR_VECTOR;
+	tdx->pi_desc.sn = 1;
+
 	tdx->host_state_need_save = true;
 	tdx->host_state_need_restore = false;
 
@@ -544,6 +548,7 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
 
+	vmx_vcpu_pi_load(vcpu, cpu);
 	if (vcpu->cpu == cpu)
 		return;
 
@@ -732,6 +737,12 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	trace_kvm_entry(vcpu);
 
+	if (pi_test_on(&tdx->pi_desc)) {
+		apic->send_IPI_self(POSTED_INTR_VECTOR);
+
+		kvm_wait_lapic_expire(vcpu);
+	}
+
 	tdx_vcpu_enter_exit(vcpu, tdx);
 
 	tdx_user_return_update_cache(vcpu);
@@ -1063,6 +1074,16 @@ static int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	return tdx_sept_drop_private_spte(kvm, gfn, level, pfn);
 }
 
+void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	/* TDX supports only posted interrupts.  No lapic emulation. */
+	__vmx_deliver_posted_interrupt(vcpu, &tdx->pi_desc, vector);
+}
+
 static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
@@ -1812,6 +1833,10 @@ int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	if (ret)
 		return ret;
 
+	td_vmcs_write16(tdx, POSTED_INTR_NV, POSTED_INTR_VECTOR);
+	td_vmcs_write64(tdx, POSTED_INTR_DESC_ADDR, __pa(&tdx->pi_desc));
+	td_vmcs_setbit32(tdx, PIN_BASED_VM_EXEC_CONTROL, PIN_BASED_POSTED_INTR);
+
 	tdx->initialized = true;
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e1d7c6d01e83..3186c702100e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4122,50 +4122,6 @@ void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
 		pt_update_intercept_for_msr(vcpu);
 }
 
-static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
-						     int pi_vec)
-{
-#ifdef CONFIG_SMP
-	if (vcpu->mode == IN_GUEST_MODE) {
-		/*
-		 * The vector of the virtual has already been set in the PIR.
-		 * Send a notification event to deliver the virtual interrupt
-		 * unless the vCPU is the currently running vCPU, i.e. the
-		 * event is being sent from a fastpath VM-Exit handler, in
-		 * which case the PIR will be synced to the vIRR before
-		 * re-entering the guest.
-		 *
-		 * When the target is not the running vCPU, the following
-		 * possibilities emerge:
-		 *
-		 * Case 1: vCPU stays in non-root mode. Sending a notification
-		 * event posts the interrupt to the vCPU.
-		 *
-		 * Case 2: vCPU exits to root mode and is still runnable. The
-		 * PIR will be synced to the vIRR before re-entering the guest.
-		 * Sending a notification event is ok as the host IRQ handler
-		 * will ignore the spurious event.
-		 *
-		 * Case 3: vCPU exits to root mode and is blocked. vcpu_block()
-		 * has already synced PIR to vIRR and never blocks the vCPU if
-		 * the vIRR is not empty. Therefore, a blocked vCPU here does
-		 * not wait for any requested interrupts in PIR, and sending a
-		 * notification event also results in a benign, spurious event.
-		 */
-
-		if (vcpu != kvm_get_running_vcpu())
-			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
-		return;
-	}
-#endif
-	/*
-	 * The vCPU isn't in the guest; wake the vCPU in case it is blocking,
-	 * otherwise do nothing as KVM will grab the highest priority pending
-	 * IRQ via ->sync_pir_to_irr() in vcpu_enter_guest().
-	 */
-	kvm_vcpu_wake_up(vcpu);
-}
-
 static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
 						int vector)
 {
@@ -4218,20 +4174,7 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	if (!vcpu->arch.apic->apicv_active)
 		return -1;
 
-	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
-		return 0;
-
-	/* If a previous notification has sent the IPI, nothing to do.  */
-	if (pi_test_and_set_on(&vmx->pi_desc))
-		return 0;
-
-	/*
-	 * The implied barrier in pi_test_and_set_on() pairs with the smp_mb_*()
-	 * after setting vcpu->mode in vcpu_enter_guest(), thus the vCPU is
-	 * guaranteed to see PID.ON=1 and sync the PIR to IRR if triggering a
-	 * posted interrupt "fails" because vcpu->mode != IN_GUEST_MODE.
-	 */
-	kvm_vcpu_trigger_posted_interrupt(vcpu, POSTED_INTR_VECTOR);
+	__vmx_deliver_posted_interrupt(vcpu, &vmx->pi_desc, vector);
 	return 0;
 }
 
@@ -6875,14 +6818,6 @@ void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]);
 }
 
-void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-	pi_clear_on(&vmx->pi_desc);
-	memset(vmx->pi_desc.pir, 0, sizeof(vmx->pi_desc.pir));
-}
-
 void vmx_do_interrupt_irqoff(unsigned long entry);
 void vmx_do_nmi_irqoff(void);
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 72d7356dcddc..efe6f41a51a6 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -60,7 +60,6 @@ int vmx_check_intercept(struct kvm_vcpu *vcpu,
 bool vmx_apic_init_signal_blocked(struct kvm_vcpu *vcpu);
 void vmx_migrate_timers(struct kvm_vcpu *vcpu);
 void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu);
 bool vmx_check_apicv_inhibit_reasons(enum kvm_apicv_inhibit reason);
 void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
 void vmx_hwapic_isr_update(int max_isr);
@@ -159,6 +158,9 @@ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
 u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
 
+void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+			   int trig_mode, int vector);
+
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 
 void tdx_flush_tlb(struct kvm_vcpu *vcpu);
@@ -188,6 +190,9 @@ static inline void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { return false; }
 static inline u8 tdx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) { return 0; }
 
+static inline void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+					 int trig_mode, int vector) {}
+
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 
 static inline void tdx_flush_tlb(struct kvm_vcpu *vcpu) {}
-- 
2.25.1


