* [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting
@ 2013-04-10 13:22 Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 1/7] KVM: VMX: Enable acknowledge interrupt on vmexit Yang Zhang
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

The following patches add Posted Interrupt support to KVM:
The first patch enables the 'acknowledge interrupt on vmexit' feature. Since
it is required by posted interrupts, we need to enable it first.

The subsequent patches add the posted interrupt support itself:
posted interrupts allow APIC interrupts to be injected into the guest
directly, without any vmexit.

- When delivering an interrupt to the guest, if the target vcpu is running,
  update the Posted-interrupt Requests bitmap and send a notification event
  to the vcpu. The vcpu will then handle the interrupt automatically,
  without any software involvement.

- If the target vcpu is not running, or a notification event is already
  pending in the vcpu, do nothing. The interrupt will be handled at the
  next vm entry (see the sketch below).
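
For reference, the two cases above boil down to roughly the following. This is
a simplified sketch of what patches 6 and 7 implement
(vmx_deliver_posted_interrupt() and vmx_sync_pir_to_irr()); tracing and the
KVM_REQ_EVENT bookkeeping are omitted here:

	/* Sender side: runs in the context that wants to inject 'vector'. */
	static void deliver_posted_interrupt_sketch(struct kvm_vcpu *vcpu, int vector)
	{
		struct pi_desc *pi = &to_vmx(vcpu)->pi_desc;

		if (pi_test_and_set_pir(vector, pi))
			return;		/* already pending in the PIR */

		if (!pi_test_and_set_on(pi) && vcpu->mode == IN_GUEST_MODE)
			/* vcpu is in non-root mode: hardware syncs PIR to vIRR */
			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
					POSTED_INTR_VECTOR);
		else
			/* not running, or a notification is already outstanding:
			 * the PIR is picked up on the next vm entry */
			kvm_vcpu_kick(vcpu);
	}

On the next vm entry, sync_pir_to_irr() moves any bits set in the PIR into
the vIRR.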

Changes from v8 to v9:
* Add tracing to the posted interrupt path when delivering an interrupt.
* Scan the ioapic when updating the SPIV register.
* Rebase on top of KVM upstream + RTC eoi tracking patch.

Changes from v7 to v8:
* Remove the unused member 'on' from struct pi_desc.
* Register a dummy function for sync_pir_to_irr when apicv is disabled.
* Minor fixup.
* Rebase on top of KVM upstream + RTC eoi tracking patch.

Yang Zhang (7):
  KVM: VMX: Enable acknowledge interrupt on vmexit
  KVM: VMX: Register a new IPI for posted interrupt
  KVM: VMX: Check the posted interrupt capability
  KVM: Call common update function when ioapic entry changed.
  KVM: Set TMR when programming ioapic entry
  KVM: VMX: Add the algorithm of deliver posted interrupt
  KVM: VMX: Use posted interrupt to deliver virtual interrupt

 arch/ia64/kvm/lapic.h              |    6 -
 arch/x86/include/asm/entry_arch.h  |    4 +
 arch/x86/include/asm/hardirq.h     |    3 +
 arch/x86/include/asm/hw_irq.h      |    1 +
 arch/x86/include/asm/irq_vectors.h |    5 +
 arch/x86/include/asm/kvm_host.h    |    3 +
 arch/x86/include/asm/vmx.h         |    4 +
 arch/x86/kernel/entry_64.S         |    5 +
 arch/x86/kernel/irq.c              |   22 ++++
 arch/x86/kernel/irqinit.c          |    4 +
 arch/x86/kvm/lapic.c               |   66 ++++++++----
 arch/x86/kvm/lapic.h               |    7 ++
 arch/x86/kvm/svm.c                 |   12 ++
 arch/x86/kvm/vmx.c                 |  207 +++++++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c                 |   19 +++-
 include/linux/kvm_host.h           |    4 +-
 virt/kvm/ioapic.c                  |   32 ++++--
 virt/kvm/ioapic.h                  |    7 +-
 virt/kvm/irq_comm.c                |    4 +-
 virt/kvm/kvm_main.c                |    5 +-
 20 files changed, 341 insertions(+), 79 deletions(-)



* [PATCH v9 1/7] KVM: VMX: Enable acknowledge interrupt on vmexit
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 2/7] KVM: VMX: Register a new IPI for posted interrupt Yang Zhang
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

The "acknowledge interrupt on exit" feature controls processor behavior
for external interrupt acknowledgement. When this control is set, the
processor acknowledges the interrupt controller to acquire the
interrupt vector on VM exit.

With this feature enabled, an interrupt that arrives while the target cpu is
running in vmx non-root mode is handled by the vmx handler instead of the
handler in the idt. Currently, the vmx handler only fakes an interrupt stack
and jumps to the idt table to let the real handler handle it. Later, we will
recognize the interrupt and deliver through the idt table only the interrupts
that do not belong to the current vcpu; interrupts that belong to the current
vcpu will be handled inside the vmx handler. This will reduce KVM's interrupt
handling cost.

Also, the interrupt-enable logic changes when this feature is turned on:
before this patch, the hypervisor called local_irq_enable() to enable
interrupts directly. Now the IF bit is set on the interrupt stack frame, so
interrupts are re-enabled on return from the interrupt handler when an
external interrupt exists. If there is no external interrupt,
local_irq_enable() is still called to enable them.

Refer to Intel SDM volume 3, chapter 33.2.
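
In simplified form, the new external-interrupt handling looks roughly like the
sketch below. This is illustrative only: the real vmx_handle_external_intr()
in this patch builds the interrupt stack frame in inline assembly, and the two
*_sketch helpers here are hypothetical stand-ins:

	u32 exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

	if (is_external_intr_sketch(exit_intr_info)) {
		/* hypothetical predicate for the VALID + EXT_INTR type check */
		unsigned int vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
		gate_desc *desc = (gate_desc *)vmx->host_idt_base + vector;

		/*
		 * Fake an interrupt stack frame with the IF bit set and call
		 * the host IDT handler directly; interrupts are re-enabled
		 * when the handler returns.
		 */
		call_on_fake_intr_frame_sketch(gate_offset(*desc));
	} else {
		/* No external interrupt pending: enable interrupts as before. */
		local_irq_enable();
	}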

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/svm.c              |    6 ++++
 arch/x86/kvm/vmx.c              |   58 ++++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/x86.c              |    4 ++-
 4 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b5a6462..8e95512 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -730,6 +730,7 @@ struct kvm_x86_ops {
 	int (*check_intercept)(struct kvm_vcpu *vcpu,
 			       struct x86_instruction_info *info,
 			       enum x86_intercept_stage stage);
+	void (*handle_external_intr)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7a46c1f..2f8fe3f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4233,6 +4233,11 @@ out:
 	return ret;
 }
 
+static void svm_handle_external_intr(struct kvm_vcpu *vcpu)
+{
+	local_irq_enable();
+}
+
 static struct kvm_x86_ops svm_x86_ops = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -4328,6 +4333,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.set_tdp_cr3 = set_tdp_cr3,
 
 	.check_intercept = svm_check_intercept,
+	.handle_external_intr = svm_handle_external_intr,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 03f5746..7408d93 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -378,6 +378,7 @@ struct vcpu_vmx {
 	struct shared_msr_entry *guest_msrs;
 	int                   nmsrs;
 	int                   save_nmsrs;
+	unsigned long	      host_idt_base;
 #ifdef CONFIG_X86_64
 	u64 		      msr_host_kernel_gs_base;
 	u64 		      msr_guest_kernel_gs_base;
@@ -2627,7 +2628,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 #ifdef CONFIG_X86_64
 	min |= VM_EXIT_HOST_ADDR_SPACE_SIZE;
 #endif
-	opt = VM_EXIT_SAVE_IA32_PAT | VM_EXIT_LOAD_IA32_PAT;
+	opt = VM_EXIT_SAVE_IA32_PAT | VM_EXIT_LOAD_IA32_PAT |
+		VM_EXIT_ACK_INTR_ON_EXIT;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
 				&_vmexit_control) < 0)
 		return -EIO;
@@ -3879,7 +3881,7 @@ static void vmx_disable_intercept_msr_write_x2apic(u32 msr)
  * Note that host-state that does change is set elsewhere. E.g., host-state
  * that is set differently for each CPU is set in vmx_vcpu_load(), not here.
  */
-static void vmx_set_constant_host_state(void)
+static void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 {
 	u32 low32, high32;
 	unsigned long tmpl;
@@ -3907,6 +3909,7 @@ static void vmx_set_constant_host_state(void)
 
 	native_store_idt(&dt);
 	vmcs_writel(HOST_IDTR_BASE, dt.address);   /* 22.2.4 */
+	vmx->host_idt_base = dt.address;
 
 	vmcs_writel(HOST_RIP, vmx_return); /* 22.2.5 */
 
@@ -4039,7 +4042,7 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 
 	vmcs_write16(HOST_FS_SELECTOR, 0);            /* 22.2.4 */
 	vmcs_write16(HOST_GS_SELECTOR, 0);            /* 22.2.4 */
-	vmx_set_constant_host_state();
+	vmx_set_constant_host_state(vmx);
 #ifdef CONFIG_X86_64
 	rdmsrl(MSR_FS_BASE, a);
 	vmcs_writel(HOST_FS_BASE, a); /* 22.2.4 */
@@ -6400,6 +6403,52 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
 	}
 }
 
+static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
+{
+	u32 exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+
+	/*
+	 * If external interrupt exists, IF bit is set in rflags/eflags on the
+	 * interrupt stack frame, and interrupt will be enabled on a return
+	 * from interrupt handler.
+	 */
+	if ((exit_intr_info & (INTR_INFO_VALID_MASK | INTR_INFO_INTR_TYPE_MASK))
+			== (INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR)) {
+		unsigned int vector;
+		unsigned long entry;
+		gate_desc *desc;
+		struct vcpu_vmx *vmx = to_vmx(vcpu);
+#ifdef CONFIG_X86_64
+		unsigned long tmp;
+#endif
+
+		vector =  exit_intr_info & INTR_INFO_VECTOR_MASK;
+		desc = (gate_desc *)vmx->host_idt_base + vector;
+		entry = gate_offset(*desc);
+		asm volatile(
+#ifdef CONFIG_X86_64
+			"mov %%" _ASM_SP ", %[sp]\n\t"
+			"and $0xfffffffffffffff0, %%" _ASM_SP "\n\t"
+			"push $%c[ss]\n\t"
+			"push %[sp]\n\t"
+#endif
+			"pushf\n\t"
+			"orl $0x200, (%%" _ASM_SP ")\n\t"
+			__ASM_SIZE(push) " $%c[cs]\n\t"
+			"call *%[entry]\n\t"
+			:
+#ifdef CONFIG_X86_64
+			[sp]"=&r"(tmp)
+#endif
+			:
+			[entry]"r"(entry),
+			[ss]"i"(__KERNEL_DS),
+			[cs]"i"(__KERNEL_CS)
+			);
+	} else
+		local_irq_enable();
+}
+
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 {
 	u32 exit_intr_info;
@@ -7071,7 +7120,7 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	 * Other fields are different per CPU, and will be set later when
 	 * vmx_vcpu_load() is called, and when vmx_save_host_state() is called.
 	 */
-	vmx_set_constant_host_state();
+	vmx_set_constant_host_state(vmx);
 
 	/*
 	 * HOST_RSP is normally set correctly in vmx_vcpu_run() just before
@@ -7685,6 +7734,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.set_tdp_cr3 = vmx_set_cr3,
 
 	.check_intercept = vmx_check_intercept,
+	.handle_external_intr = vmx_handle_external_intr,
 };
 
 static int __init vmx_init(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5e85d8d..5b146d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5808,7 +5808,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	vcpu->mode = OUTSIDE_GUEST_MODE;
 	smp_wmb();
-	local_irq_enable();
+
+	/* Interrupt is enabled by handle_external_intr() */
+	kvm_x86_ops->handle_external_intr(vcpu);
 
 	++vcpu->stat.exits;
 
-- 
1.7.1



* [PATCH v9 2/7] KVM: VMX: Register a new IPI for posted interrupt
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 1/7] KVM: VMX: Enable acknowledge interrupt on vmexit Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 3/7] KVM: VMX: Check the posted interrupt capability Yang Zhang
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

The posted interrupt feature requires a special IPI to deliver posted
interrupts to the guest, and that IPI must have a high priority so it will
not be blocked by other interrupts.
Normally the posted interrupt is consumed by the vcpu while the target vcpu
is running, transparently to the OS. But in some cases the interrupt arrives
when the target vcpu is scheduled out, and the host will see it. So we need
to register a dummy handler for it.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/x86/include/asm/entry_arch.h  |    4 ++++
 arch/x86/include/asm/hardirq.h     |    3 +++
 arch/x86/include/asm/hw_irq.h      |    1 +
 arch/x86/include/asm/irq_vectors.h |    5 +++++
 arch/x86/kernel/entry_64.S         |    5 +++++
 arch/x86/kernel/irq.c              |   22 ++++++++++++++++++++++
 arch/x86/kernel/irqinit.c          |    4 ++++
 7 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/entry_arch.h b/arch/x86/include/asm/entry_arch.h
index 40afa00..9bd4eca 100644
--- a/arch/x86/include/asm/entry_arch.h
+++ b/arch/x86/include/asm/entry_arch.h
@@ -19,6 +19,10 @@ BUILD_INTERRUPT(reboot_interrupt,REBOOT_VECTOR)
 
 BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
 
+#ifdef CONFIG_HAVE_KVM
+BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
+#endif
+
 /*
  * every pentium local APIC has two 'local interrupts', with a
  * soft-definable vector attached to both interrupts, one of
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index 81f04ce..ab0ae1a 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -12,6 +12,9 @@ typedef struct {
 	unsigned int irq_spurious_count;
 	unsigned int icr_read_retry_count;
 #endif
+#ifdef CONFIG_HAVE_KVM
+	unsigned int kvm_posted_intr_ipis;
+#endif
 	unsigned int x86_platform_ipis;	/* arch dependent */
 	unsigned int apic_perf_irqs;
 	unsigned int apic_irq_work_irqs;
diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
index 10a78c3..1da97ef 100644
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -28,6 +28,7 @@
 /* Interrupt handlers registered during init_IRQ */
 extern void apic_timer_interrupt(void);
 extern void x86_platform_ipi(void);
+extern void kvm_posted_intr_ipi(void);
 extern void error_interrupt(void);
 extern void irq_work_interrupt(void);
 
diff --git a/arch/x86/include/asm/irq_vectors.h b/arch/x86/include/asm/irq_vectors.h
index aac5fa6..5702d7e 100644
--- a/arch/x86/include/asm/irq_vectors.h
+++ b/arch/x86/include/asm/irq_vectors.h
@@ -102,6 +102,11 @@
  */
 #define X86_PLATFORM_IPI_VECTOR		0xf7
 
+/* Vector for KVM to deliver posted interrupt IPI */
+#ifdef CONFIG_HAVE_KVM
+#define POSTED_INTR_VECTOR		0xf2
+#endif
+
 /*
  * IRQ work vector:
  */
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index c1d01e6..7272089 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1166,6 +1166,11 @@ apicinterrupt LOCAL_TIMER_VECTOR \
 apicinterrupt X86_PLATFORM_IPI_VECTOR \
 	x86_platform_ipi smp_x86_platform_ipi
 
+#ifdef CONFIG_HAVE_KVM
+apicinterrupt POSTED_INTR_VECTOR \
+	kvm_posted_intr_ipi smp_kvm_posted_intr_ipi
+#endif
+
 apicinterrupt THRESHOLD_APIC_VECTOR \
 	threshold_interrupt smp_threshold_interrupt
 apicinterrupt THERMAL_APIC_VECTOR \
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index e4595f1..6ae6ea1 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -228,6 +228,28 @@ void smp_x86_platform_ipi(struct pt_regs *regs)
 	set_irq_regs(old_regs);
 }
 
+#ifdef CONFIG_HAVE_KVM
+/*
+ * Handler for POSTED_INTERRUPT_VECTOR.
+ */
+void smp_kvm_posted_intr_ipi(struct pt_regs *regs)
+{
+	struct pt_regs *old_regs = set_irq_regs(regs);
+
+	ack_APIC_irq();
+
+	irq_enter();
+
+	exit_idle();
+
+	inc_irq_stat(kvm_posted_intr_ipis);
+
+	irq_exit();
+
+	set_irq_regs(old_regs);
+}
+#endif
+
 EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
 
 #ifdef CONFIG_HOTPLUG_CPU
diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
index 7dc4e45..a2a1fbc 100644
--- a/arch/x86/kernel/irqinit.c
+++ b/arch/x86/kernel/irqinit.c
@@ -172,6 +172,10 @@ static void __init apic_intr_init(void)
 
 	/* IPI for X86 platform specific use */
 	alloc_intr_gate(X86_PLATFORM_IPI_VECTOR, x86_platform_ipi);
+#ifdef CONFIG_HAVE_KVM
+	/* IPI for KVM to deliver posted interrupt */
+	alloc_intr_gate(POSTED_INTR_VECTOR, kvm_posted_intr_ipi);
+#endif
 
 	/* IPI vectors for APIC spurious and error interrupts */
 	alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
-- 
1.7.1



* [PATCH v9 3/7] KVM: VMX: Check the posted interrupt capability
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 1/7] KVM: VMX: Enable acknowledge interrupt on vmexit Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 2/7] KVM: VMX: Register a new IPI for posted interrupt Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 4/7] KVM: Call common update function when ioapic entry changed Yang Zhang
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

Detect the posted interrupt feature. If it is available, set it in vmcs_config.
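
For reference, the detection amounts to checking the allowed-1 bit for posted
interrupts in the pin-based controls capability MSR. A minimal sketch of what
adjust_vmx_controls() effectively tests for this bit (the helper name is
illustrative):

	static bool host_has_posted_intr_sketch(void)
	{
		u32 low, high;

		/* allowed-1 settings live in the high dword of the capability MSR */
		rdmsr(MSR_IA32_VMX_PINBASED_CTLS, low, high);
		return high & PIN_BASED_POSTED_INTR;
	}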

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/x86/include/asm/vmx.h |    4 ++
 arch/x86/kvm/vmx.c         |   82 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index fc1c313..6f07f19 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -71,6 +71,7 @@
 #define PIN_BASED_NMI_EXITING                   0x00000008
 #define PIN_BASED_VIRTUAL_NMIS                  0x00000020
 #define PIN_BASED_VMX_PREEMPTION_TIMER          0x00000040
+#define PIN_BASED_POSTED_INTR                   0x00000080
 
 #define PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR	0x00000016
 
@@ -102,6 +103,7 @@
 /* VMCS Encodings */
 enum vmcs_field {
 	VIRTUAL_PROCESSOR_ID            = 0x00000000,
+	POSTED_INTR_NV                  = 0x00000002,
 	GUEST_ES_SELECTOR               = 0x00000800,
 	GUEST_CS_SELECTOR               = 0x00000802,
 	GUEST_SS_SELECTOR               = 0x00000804,
@@ -136,6 +138,8 @@ enum vmcs_field {
 	VIRTUAL_APIC_PAGE_ADDR_HIGH     = 0x00002013,
 	APIC_ACCESS_ADDR		= 0x00002014,
 	APIC_ACCESS_ADDR_HIGH		= 0x00002015,
+	POSTED_INTR_DESC_ADDR           = 0x00002016,
+	POSTED_INTR_DESC_ADDR_HIGH      = 0x00002017,
 	EPT_POINTER                     = 0x0000201a,
 	EPT_POINTER_HIGH                = 0x0000201b,
 	EOI_EXIT_BITMAP0                = 0x0000201c,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 7408d93..05da991 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -84,7 +84,8 @@ module_param(vmm_exclusive, bool, S_IRUGO);
 static bool __read_mostly fasteoi = 1;
 module_param(fasteoi, bool, S_IRUGO);
 
-static bool __read_mostly enable_apicv_reg_vid;
+static bool __read_mostly enable_apicv;
+module_param(enable_apicv, bool, S_IRUGO);
 
 /*
  * If nested=1, nested virtualization is supported, i.e., guests may use
@@ -366,6 +367,14 @@ struct nested_vmx {
 	struct page *apic_access_page;
 };
 
+#define POSTED_INTR_ON  0
+/* Posted-Interrupt Descriptor */
+struct pi_desc {
+	u32 pir[8];     /* Posted interrupt requested */
+	u32 control;	/* bit 0 of control is outstanding notification bit */
+	u32 rsvd[7];
+} __aligned(64);
+
 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
 	unsigned long         host_rsp;
@@ -430,6 +439,9 @@ struct vcpu_vmx {
 
 	bool rdtscp_enabled;
 
+	/* Posted interrupt descriptor */
+	struct pi_desc pi_desc;
+
 	/* Support for a guest hypervisor (nested VMX) */
 	struct nested_vmx nested;
 };
@@ -785,6 +797,18 @@ static inline bool cpu_has_vmx_virtual_intr_delivery(void)
 		SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
 }
 
+static inline bool cpu_has_vmx_posted_intr(void)
+{
+	return vmcs_config.pin_based_exec_ctrl & PIN_BASED_POSTED_INTR;
+}
+
+static inline bool cpu_has_vmx_apicv(void)
+{
+	return cpu_has_vmx_apic_register_virt() &&
+		cpu_has_vmx_virtual_intr_delivery() &&
+		cpu_has_vmx_posted_intr();
+}
+
 static inline bool cpu_has_vmx_flexpriority(void)
 {
 	return cpu_has_vmx_tpr_shadow() &&
@@ -2552,12 +2576,6 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 	u32 _vmexit_control = 0;
 	u32 _vmentry_control = 0;
 
-	min = PIN_BASED_EXT_INTR_MASK | PIN_BASED_NMI_EXITING;
-	opt = PIN_BASED_VIRTUAL_NMIS;
-	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PINBASED_CTLS,
-				&_pin_based_exec_control) < 0)
-		return -EIO;
-
 	min = CPU_BASED_HLT_EXITING |
 #ifdef CONFIG_X86_64
 	      CPU_BASED_CR8_LOAD_EXITING |
@@ -2634,6 +2652,17 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 				&_vmexit_control) < 0)
 		return -EIO;
 
+	min = PIN_BASED_EXT_INTR_MASK | PIN_BASED_NMI_EXITING;
+	opt = PIN_BASED_VIRTUAL_NMIS | PIN_BASED_POSTED_INTR;
+	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PINBASED_CTLS,
+				&_pin_based_exec_control) < 0)
+		return -EIO;
+
+	if (!(_cpu_based_2nd_exec_control &
+		SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) ||
+		!(_vmexit_control & VM_EXIT_ACK_INTR_ON_EXIT))
+		_pin_based_exec_control &= ~PIN_BASED_POSTED_INTR;
+
 	min = 0;
 	opt = VM_ENTRY_LOAD_IA32_PAT;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
@@ -2812,11 +2841,10 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_ple())
 		ple_gap = 0;
 
-	if (!cpu_has_vmx_apic_register_virt() ||
-				!cpu_has_vmx_virtual_intr_delivery())
-		enable_apicv_reg_vid = 0;
+	if (!cpu_has_vmx_apicv())
+		enable_apicv = 0;
 
-	if (enable_apicv_reg_vid)
+	if (enable_apicv)
 		kvm_x86_ops->update_cr8_intercept = NULL;
 	else
 		kvm_x86_ops->hwapic_irr_update = NULL;
@@ -3875,6 +3903,11 @@ static void vmx_disable_intercept_msr_write_x2apic(u32 msr)
 			msr, MSR_TYPE_W);
 }
 
+static int vmx_vm_has_apicv(struct kvm *kvm)
+{
+	return enable_apicv && irqchip_in_kernel(kvm);
+}
+
 /*
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
@@ -3935,6 +3968,15 @@ static void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
 	vmcs_writel(CR4_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr4_guest_owned_bits);
 }
 
+static u32 vmx_pin_based_exec_ctrl(struct vcpu_vmx *vmx)
+{
+	u32 pin_based_exec_ctrl = vmcs_config.pin_based_exec_ctrl;
+
+	if (!vmx_vm_has_apicv(vmx->vcpu.kvm))
+		pin_based_exec_ctrl &= ~PIN_BASED_POSTED_INTR;
+	return pin_based_exec_ctrl;
+}
+
 static u32 vmx_exec_control(struct vcpu_vmx *vmx)
 {
 	u32 exec_control = vmcs_config.cpu_based_exec_ctrl;
@@ -3952,11 +3994,6 @@ static u32 vmx_exec_control(struct vcpu_vmx *vmx)
 	return exec_control;
 }
 
-static int vmx_vm_has_apicv(struct kvm *kvm)
-{
-	return enable_apicv_reg_vid && irqchip_in_kernel(kvm);
-}
-
 static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 {
 	u32 exec_control = vmcs_config.cpu_based_2nd_exec_ctrl;
@@ -4012,8 +4049,7 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 	vmcs_write64(VMCS_LINK_POINTER, -1ull); /* 22.3.1.5 */
 
 	/* Control */
-	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL,
-		vmcs_config.pin_based_exec_ctrl);
+	vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, vmx_pin_based_exec_ctrl(vmx));
 
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, vmx_exec_control(vmx));
 
@@ -4022,13 +4058,16 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 				vmx_secondary_exec_control(vmx));
 	}
 
-	if (enable_apicv_reg_vid) {
+	if (vmx_vm_has_apicv(vmx->vcpu.kvm)) {
 		vmcs_write64(EOI_EXIT_BITMAP0, 0);
 		vmcs_write64(EOI_EXIT_BITMAP1, 0);
 		vmcs_write64(EOI_EXIT_BITMAP2, 0);
 		vmcs_write64(EOI_EXIT_BITMAP3, 0);
 
 		vmcs_write16(GUEST_INTR_STATUS, 0);
+
+		vmcs_write64(POSTED_INTR_NV, POSTED_INTR_VECTOR);
+		vmcs_write64(POSTED_INTR_DESC_ADDR, __pa((&vmx->pi_desc)));
 	}
 
 	if (ple_gap) {
@@ -4170,6 +4209,9 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu)
 		vmcs_write64(APIC_ACCESS_ADDR,
 			     page_to_phys(vmx->vcpu.kvm->arch.apic_access_page));
 
+	if (vmx_vm_has_apicv(vcpu->kvm))
+		memset(&vmx->pi_desc, 0, sizeof(struct pi_desc));
+
 	if (vmx->vpid != 0)
 		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
 
@@ -7809,7 +7851,7 @@ static int __init vmx_init(void)
 	memcpy(vmx_msr_bitmap_longmode_x2apic,
 			vmx_msr_bitmap_longmode, PAGE_SIZE);
 
-	if (enable_apicv_reg_vid) {
+	if (enable_apicv) {
 		for (msr = 0x800; msr <= 0x8ff; msr++)
 			vmx_disable_intercept_msr_read_x2apic(msr);
 
-- 
1.7.1



* [PATCH v9 4/7] KVM: Call common update function when ioapic entry changed.
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
                   ` (2 preceding siblings ...)
  2013-04-10 13:22 ` [PATCH v9 3/7] KVM: VMX: Check the posted interrupt capability Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 5/7] KVM: Set TMR when programming ioapic entry Yang Zhang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

Both the TMR and the EOI exit bitmap need to be updated when the ioapic
changes or a vcpu's id/ldr/dfr changes. So use a common update function
instead of the EOI-exit-bitmap-specific one.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/ia64/kvm/lapic.h    |    6 ------
 arch/x86/kvm/lapic.c     |    8 ++------
 arch/x86/kvm/lapic.h     |    5 +++++
 arch/x86/kvm/vmx.c       |    3 +++
 arch/x86/kvm/x86.c       |   11 +++++++----
 include/linux/kvm_host.h |    4 ++--
 virt/kvm/ioapic.c        |   22 +++++++++++++---------
 virt/kvm/ioapic.h        |    6 ++----
 virt/kvm/irq_comm.c      |    4 ++--
 virt/kvm/kvm_main.c      |    4 ++--
 10 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/arch/ia64/kvm/lapic.h b/arch/ia64/kvm/lapic.h
index c3e2935..c5f92a9 100644
--- a/arch/ia64/kvm/lapic.h
+++ b/arch/ia64/kvm/lapic.h
@@ -27,10 +27,4 @@ int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq);
 #define kvm_apic_present(x) (true)
 #define kvm_lapic_enabled(x) (true)
 
-static inline bool kvm_apic_vid_enabled(void)
-{
-	/* IA64 has no apicv supporting, do nothing here */
-	return false;
-}
-
 #endif
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 6796218..4ccdc94 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -134,11 +134,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
 			static_key_slow_inc(&apic_sw_disabled.key);
 	}
 	apic_set_reg(apic, APIC_SPIV, val);
-}
-
-static inline int apic_enabled(struct kvm_lapic *apic)
-{
-	return kvm_apic_sw_enabled(apic) &&	kvm_apic_hw_enabled(apic);
+	kvm_make_request(KVM_REQ_SCAN_IOAPIC, apic->vcpu);
 }
 
 #define LVT_MASK	\
@@ -217,7 +213,7 @@ out:
 	if (old)
 		kfree_rcu(old, rcu);
 
-	kvm_ioapic_make_eoibitmap_request(kvm);
+	kvm_vcpu_request_scan_ioapic(kvm);
 }
 
 static inline void kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 16304b1..201f79a 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -166,6 +166,11 @@ static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
 	return vcpu->arch.apic->pending_events;
 }
 
+static inline int apic_enabled(struct kvm_lapic *apic)
+{
+	return kvm_apic_sw_enabled(apic) && kvm_apic_hw_enabled(apic);
+}
+
 bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector);
 
 #endif
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 05da991..5637a8a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6415,6 +6415,9 @@ static void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
 
 static void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
+	if (!vmx_vm_has_apicv(vcpu->kvm))
+		return;
+
 	vmcs_write64(EOI_EXIT_BITMAP0, eoi_exit_bitmap[0]);
 	vmcs_write64(EOI_EXIT_BITMAP1, eoi_exit_bitmap[1]);
 	vmcs_write64(EOI_EXIT_BITMAP2, eoi_exit_bitmap[2]);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5b146d2..53dc96f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5649,13 +5649,16 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
 #endif
 }
 
-static void update_eoi_exitmap(struct kvm_vcpu *vcpu)
+static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 {
 	u64 eoi_exit_bitmap[4];
 
+	if (!apic_enabled(vcpu->arch.apic))
+		return;
+
 	memset(eoi_exit_bitmap, 0, 32);
 
-	kvm_ioapic_calculate_eoi_exitmap(vcpu, eoi_exit_bitmap);
+	kvm_ioapic_scan_entry(vcpu, eoi_exit_bitmap);
 	kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
 }
 
@@ -5712,8 +5715,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_handle_pmu_event(vcpu);
 		if (kvm_check_request(KVM_REQ_PMI, vcpu))
 			kvm_deliver_pmi(vcpu);
-		if (kvm_check_request(KVM_REQ_EOIBITMAP, vcpu))
-			update_eoi_exitmap(vcpu);
+		if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
+			vcpu_scan_ioapic(vcpu);
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7bcdb6b..6f49d9d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -126,7 +126,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_MASTERCLOCK_UPDATE 19
 #define KVM_REQ_MCLOCK_INPROGRESS 20
 #define KVM_REQ_EPR_EXIT          21
-#define KVM_REQ_EOIBITMAP         22
+#define KVM_REQ_SCAN_IOAPIC       22
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID		0
 #define KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID	1
@@ -572,7 +572,7 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_reload_remote_mmus(struct kvm *kvm);
 void kvm_make_mclock_inprogress_request(struct kvm *kvm);
-void kvm_make_update_eoibitmap_request(struct kvm *kvm);
+void kvm_make_scan_ioapic_request(struct kvm *kvm);
 
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 704a273..47dba95 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -197,15 +197,13 @@ static void update_handled_vectors(struct kvm_ioapic *ioapic)
 	smp_wmb();
 }
 
-void kvm_ioapic_calculate_eoi_exitmap(struct kvm_vcpu *vcpu,
-					u64 *eoi_exit_bitmap)
+void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 {
 	struct kvm_ioapic *ioapic = vcpu->kvm->arch.vioapic;
 	union kvm_ioapic_redirect_entry *e;
 	int index;
 
 	spin_lock(&ioapic->lock);
-	/* traverse ioapic entry to set eoi exit bitmap*/
 	for (index = 0; index < IOAPIC_NUM_PINS; index++) {
 		e = &ioapic->redirtbl[index];
 		if (!e->fields.mask &&
@@ -219,16 +217,22 @@ void kvm_ioapic_calculate_eoi_exitmap(struct kvm_vcpu *vcpu,
 	}
 	spin_unlock(&ioapic->lock);
 }
-EXPORT_SYMBOL_GPL(kvm_ioapic_calculate_eoi_exitmap);
 
-void kvm_ioapic_make_eoibitmap_request(struct kvm *kvm)
+#ifdef CONFIG_X86
+void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
 {
 	struct kvm_ioapic *ioapic = kvm->arch.vioapic;
 
-	if (!kvm_apic_vid_enabled(kvm) || !ioapic)
+	if (!ioapic)
 		return;
-	kvm_make_update_eoibitmap_request(kvm);
+	kvm_make_scan_ioapic_request(kvm);
 }
+#else
+void kvm_vcpu_request_scan_ioapic(struct kvm *kvm)
+{
+	return;
+}
+#endif
 
 static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 {
@@ -271,7 +275,7 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 		if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG
 		    && ioapic->irr & (1 << index))
 			ioapic_service(ioapic, index, false);
-		kvm_ioapic_make_eoibitmap_request(ioapic->kvm);
+		kvm_vcpu_request_scan_ioapic(ioapic->kvm);
 		break;
 	}
 }
@@ -591,7 +595,7 @@ int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state)
 	spin_lock(&ioapic->lock);
 	memcpy(ioapic, state, sizeof(struct kvm_ioapic_state));
 	update_handled_vectors(ioapic);
-	kvm_ioapic_make_eoibitmap_request(kvm);
+	kvm_vcpu_request_scan_ioapic(kvm);
 	kvm_rtc_eoi_tracking_restore_all(ioapic);
 	spin_unlock(&ioapic->lock);
 	return 0;
diff --git a/virt/kvm/ioapic.h b/virt/kvm/ioapic.h
index 554157b..674a388 100644
--- a/virt/kvm/ioapic.h
+++ b/virt/kvm/ioapic.h
@@ -96,9 +96,7 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
 		struct kvm_lapic_irq *irq, unsigned long *dest_map);
 int kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
 int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
-void kvm_ioapic_make_eoibitmap_request(struct kvm *kvm);
-void kvm_ioapic_calculate_eoi_exitmap(struct kvm_vcpu *vcpu,
-					u64 *eoi_exit_bitmap);
-
+void kvm_vcpu_request_scan_ioapic(struct kvm *kvm);
+void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 
 #endif
diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
index 8efb580..25ab480 100644
--- a/virt/kvm/irq_comm.c
+++ b/virt/kvm/irq_comm.c
@@ -285,7 +285,7 @@ void kvm_register_irq_ack_notifier(struct kvm *kvm,
 	mutex_lock(&kvm->irq_lock);
 	hlist_add_head_rcu(&kian->link, &kvm->irq_ack_notifier_list);
 	mutex_unlock(&kvm->irq_lock);
-	kvm_ioapic_make_eoibitmap_request(kvm);
+	kvm_vcpu_request_scan_ioapic(kvm);
 }
 
 void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
@@ -295,7 +295,7 @@ void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
 	hlist_del_init_rcu(&kian->link);
 	mutex_unlock(&kvm->irq_lock);
 	synchronize_rcu();
-	kvm_ioapic_make_eoibitmap_request(kvm);
+	kvm_vcpu_request_scan_ioapic(kvm);
 }
 
 int kvm_request_irq_source_id(struct kvm *kvm)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6c4842a..8b5c6a8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -217,9 +217,9 @@ void kvm_make_mclock_inprogress_request(struct kvm *kvm)
 	make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
 }
 
-void kvm_make_update_eoibitmap_request(struct kvm *kvm)
+void kvm_make_scan_ioapic_request(struct kvm *kvm)
 {
-	make_all_cpus_request(kvm, KVM_REQ_EOIBITMAP);
+	make_all_cpus_request(kvm, KVM_REQ_SCAN_IOAPIC);
 }
 
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
-- 
1.7.1



* [PATCH v9 5/7] KVM: Set TMR when programming ioapic entry
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
                   ` (3 preceding siblings ...)
  2013-04-10 13:22 ` [PATCH v9 4/7] KVM: Call common update function when ioapic entry changed Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 6/7] KVM: VMX: Add the algorithm of deliver posted interrupt Yang Zhang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

We already know the trigger mode of a given interrupt when programming
the ioapic entry, so it is not necessary to set it on every interrupt
delivery.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/x86/kvm/lapic.c |   15 +++++++++------
 arch/x86/kvm/lapic.h |    1 +
 arch/x86/kvm/x86.c   |    5 ++++-
 virt/kvm/ioapic.c    |   12 +++++++++---
 virt/kvm/ioapic.h    |    3 ++-
 5 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 4ccdc94..ba56bc7 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -464,6 +464,15 @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic)
 	return result;
 }
 
+void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+	int i;
+
+	for (i = 0; i < 8; i++)
+		apic_set_reg(apic, APIC_TMR + 0x10 * i, tmr[i]);
+}
+
 static void apic_update_ppr(struct kvm_lapic *apic)
 {
 	u32 tpr, isrv, ppr, old_ppr;
@@ -657,12 +666,6 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 		if (dest_map)
 			__set_bit(vcpu->vcpu_id, dest_map);
 
-		if (trig_mode) {
-			apic_debug("level trig mode for vector %d", vector);
-			apic_set_vector(vector, apic->regs + APIC_TMR);
-		} else
-			apic_clear_vector(vector, apic->regs + APIC_TMR);
-
 		result = !apic_test_and_set_irr(vector, apic);
 		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
 					  trig_mode, vector, !result);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 201f79a..c7b2cb2 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -53,6 +53,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value);
 u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);
 void kvm_apic_set_version(struct kvm_vcpu *vcpu);
 
+void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr);
 int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
 int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
 int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 53dc96f..72be079 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5652,14 +5652,17 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
 static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 {
 	u64 eoi_exit_bitmap[4];
+	u32 tmr[8];
 
 	if (!apic_enabled(vcpu->arch.apic))
 		return;
 
 	memset(eoi_exit_bitmap, 0, 32);
+	memset(tmr, 0, 32);
 
-	kvm_ioapic_scan_entry(vcpu, eoi_exit_bitmap);
+	kvm_ioapic_scan_entry(vcpu, eoi_exit_bitmap, tmr);
 	kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+	kvm_apic_update_tmr(vcpu, tmr);
 }
 
 static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 47dba95..fdeaa5e 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -197,7 +197,8 @@ static void update_handled_vectors(struct kvm_ioapic *ioapic)
 	smp_wmb();
 }
 
-void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
+void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap,
+			u32 *tmr)
 {
 	struct kvm_ioapic *ioapic = vcpu->kvm->arch.vioapic;
 	union kvm_ioapic_redirect_entry *e;
@@ -211,8 +212,13 @@ void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 			 kvm_irq_has_notifier(ioapic->kvm, KVM_IRQCHIP_IOAPIC,
 				 index) || index == RTC_GSI)) {
 			if (kvm_apic_match_dest(vcpu, NULL, 0,
-				e->fields.dest_id, e->fields.dest_mode))
-				__set_bit(e->fields.vector, (unsigned long *)eoi_exit_bitmap);
+				e->fields.dest_id, e->fields.dest_mode)) {
+				__set_bit(e->fields.vector,
+					(unsigned long *)eoi_exit_bitmap);
+				if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG)
+					__set_bit(e->fields.vector,
+						(unsigned long *)tmr);
+			}
 		}
 	}
 	spin_unlock(&ioapic->lock);
diff --git a/virt/kvm/ioapic.h b/virt/kvm/ioapic.h
index 674a388..615d8c9 100644
--- a/virt/kvm/ioapic.h
+++ b/virt/kvm/ioapic.h
@@ -97,6 +97,7 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
 int kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
 int kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
 void kvm_vcpu_request_scan_ioapic(struct kvm *kvm);
-void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap,
+			u32 *tmr);
 
 #endif
-- 
1.7.1



* [PATCH v9 6/7] KVM: VMX: Add the algorithm of deliver posted interrupt
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
                   ` (4 preceding siblings ...)
  2013-04-10 13:22 ` [PATCH v9 5/7] KVM: Set TMR when programming ioapic entry Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 13:22 ` [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt Yang Zhang
  2013-04-10 15:33 ` [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Gleb Natapov
  7 siblings, 0 replies; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

Only deliver the posted interrupt when the target vcpu is running
and there is no previous interrupt pending in the PIR.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/x86/include/asm/kvm_host.h |    2 +
 arch/x86/kvm/lapic.c            |   13 ++++++++
 arch/x86/kvm/lapic.h            |    1 +
 arch/x86/kvm/svm.c              |    6 ++++
 arch/x86/kvm/vmx.c              |   64 ++++++++++++++++++++++++++++++++++++++-
 virt/kvm/kvm_main.c             |    1 +
 6 files changed, 86 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8e95512..842ea5a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -704,6 +704,8 @@ struct kvm_x86_ops {
 	void (*hwapic_isr_update)(struct kvm *kvm, int isr);
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
+	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
+	void (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*get_tdp_level)(void);
 	u64 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index ba56bc7..42a87ac 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -314,6 +314,19 @@ static u8 count_vectors(void *bitmap)
 	return count;
 }
 
+void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir)
+{
+	u32 i, pir_val;
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	for (i = 0; i <= 7; i++) {
+		pir_val = xchg(&pir[i], 0);
+		if (pir_val)
+			*((u32 *)(apic->regs + APIC_IRR + i * 0x10)) |= pir_val;
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
+
 static inline int apic_test_and_set_irr(int vec, struct kvm_lapic *apic)
 {
 	apic->irr_pending = true;
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index c7b2cb2..62537eb 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -54,6 +54,7 @@ u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);
 void kvm_apic_set_version(struct kvm_vcpu *vcpu);
 
 void kvm_apic_update_tmr(struct kvm_vcpu *vcpu, u32 *tmr);
+void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir);
 int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
 int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
 int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2f8fe3f..d6713e1 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3577,6 +3577,11 @@ static void svm_hwapic_isr_update(struct kvm *kvm, int isr)
 	return;
 }
 
+static void svm_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+{
+	return;
+}
+
 static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4305,6 +4310,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.vm_has_apicv = svm_vm_has_apicv,
 	.load_eoi_exitmap = svm_load_eoi_exitmap,
 	.hwapic_isr_update = svm_hwapic_isr_update,
+	.sync_pir_to_irr = svm_sync_pir_to_irr,
 
 	.set_tss_addr = svm_set_tss_addr,
 	.get_tdp_level = get_npt_level,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 5637a8a..314b2ed 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -375,6 +375,23 @@ struct pi_desc {
 	u32 rsvd[7];
 } __aligned(64);
 
+static bool pi_test_and_set_on(struct pi_desc *pi_desc)
+{
+	return test_and_set_bit(POSTED_INTR_ON,
+			(unsigned long *)&pi_desc->control);
+}
+
+static bool pi_test_and_clear_on(struct pi_desc *pi_desc)
+{
+	return test_and_clear_bit(POSTED_INTR_ON,
+			(unsigned long *)&pi_desc->control);
+}
+
+static int pi_test_and_set_pir(int vector, struct pi_desc *pi_desc)
+{
+	return test_and_set_bit(vector, (unsigned long *)pi_desc->pir);
+}
+
 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
 	unsigned long         host_rsp;
@@ -639,6 +656,7 @@ static void vmx_get_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
 static bool guest_state_valid(struct kvm_vcpu *vcpu);
 static u32 vmx_segment_access_rights(struct kvm_segment *var);
+static void vmx_sync_pir_to_irr_dummy(struct kvm_vcpu *vcpu);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -2846,8 +2864,11 @@ static __init int hardware_setup(void)
 
 	if (enable_apicv)
 		kvm_x86_ops->update_cr8_intercept = NULL;
-	else
+	else {
 		kvm_x86_ops->hwapic_irr_update = NULL;
+		kvm_x86_ops->deliver_posted_interrupt = NULL;
+		kvm_x86_ops->sync_pir_to_irr = vmx_sync_pir_to_irr_dummy;
+	}
 
 	if (nested)
 		nested_vmx_setup_ctls_msrs();
@@ -3909,6 +3930,45 @@ static int vmx_vm_has_apicv(struct kvm *kvm)
 }
 
 /*
+ * Send interrupt to vcpu via posted interrupt way.
+ * 1. If target vcpu is running(non-root mode), send posted interrupt
+ * notification to vcpu and hardware will sync PIR to vIRR atomically.
+ * 2. If target vcpu isn't running(root mode), kick it to pick up the
+ * interrupt from PIR in next vmentry.
+ */
+static void vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	int r;
+
+	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
+		return;
+
+	r = pi_test_and_set_on(&vmx->pi_desc);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+	if (!r && (vcpu->mode == IN_GUEST_MODE))
+		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
+				POSTED_INTR_VECTOR);
+	else
+		kvm_vcpu_kick(vcpu);
+}
+
+static void vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (!pi_test_and_clear_on(&vmx->pi_desc))
+		return;
+
+	kvm_apic_update_irr(vcpu, vmx->pi_desc.pir);
+}
+
+static void vmx_sync_pir_to_irr_dummy(struct kvm_vcpu *vcpu)
+{
+	return;
+}
+
+/*
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
  * Note that host-state that does change is set elsewhere. E.g., host-state
@@ -7751,6 +7811,8 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.load_eoi_exitmap = vmx_load_eoi_exitmap,
 	.hwapic_irr_update = vmx_hwapic_irr_update,
 	.hwapic_isr_update = vmx_hwapic_isr_update,
+	.sync_pir_to_irr = vmx_sync_pir_to_irr,
+	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
 
 	.set_tss_addr = vmx_set_tss_addr,
 	.get_tdp_level = get_ept_level,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8b5c6a8..97c2264 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1671,6 +1671,7 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 			smp_send_reschedule(cpu);
 	put_cpu();
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */
 
 void kvm_resched(struct kvm_vcpu *vcpu)
-- 
1.7.1



* [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
                   ` (5 preceding siblings ...)
  2013-04-10 13:22 ` [PATCH v9 6/7] KVM: VMX: Add the algorithm of deliver posted interrupt Yang Zhang
@ 2013-04-10 13:22 ` Yang Zhang
  2013-04-10 15:32   ` Gleb Natapov
  2013-04-10 15:33 ` [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Gleb Natapov
  7 siblings, 1 reply; 14+ messages in thread
From: Yang Zhang @ 2013-04-10 13:22 UTC (permalink / raw)
  To: kvm; +Cc: gleb, mtosatti, xiantao.zhang, jun.nakajima, Yang Zhang

From: Yang Zhang <yang.z.zhang@Intel.com>

If posted interrupts are available, use them to inject virtual
interrupts into the guest.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 arch/x86/kvm/lapic.c |   32 +++++++++++++++++++++-----------
 arch/x86/kvm/vmx.c   |    2 +-
 arch/x86/kvm/x86.c   |    1 +
 3 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 42a87ac..4fdb984 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -349,6 +349,7 @@ static inline int apic_find_highest_irr(struct kvm_lapic *apic)
 	if (!apic->irr_pending)
 		return -1;
 
+	kvm_x86_ops->sync_pir_to_irr(apic->vcpu);
 	result = apic_search_irr(apic);
 	ASSERT(result == -1 || result >= 16);
 
@@ -679,18 +680,27 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 		if (dest_map)
 			__set_bit(vcpu->vcpu_id, dest_map);
 
-		result = !apic_test_and_set_irr(vector, apic);
-		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
-					  trig_mode, vector, !result);
-		if (!result) {
-			if (trig_mode)
-				apic_debug("level trig mode repeatedly for "
-						"vector %d", vector);
-			break;
-		}
+		if (kvm_x86_ops->deliver_posted_interrupt) {
+			result = 1;
+			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
+		} else {
+			result = !apic_test_and_set_irr(vector, apic);
+
+			trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
+					trig_mode, vector, !result);
+			if (!result) {
+				if (trig_mode)
+					apic_debug("level trig mode repeatedly "
+						"for vector %d", vector);
+				goto out;
+			}
 
-		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		kvm_vcpu_kick(vcpu);
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
+out:
+		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
+				trig_mode, vector, !result);
 		break;
 
 	case APIC_DM_REMRD:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 314b2ed..52b21da 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -84,7 +84,7 @@ module_param(vmm_exclusive, bool, S_IRUGO);
 static bool __read_mostly fasteoi = 1;
 module_param(fasteoi, bool, S_IRUGO);
 
-static bool __read_mostly enable_apicv;
+static bool __read_mostly enable_apicv = 1;
 module_param(enable_apicv, bool, S_IRUGO);
 
 /*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 72be079..486f627 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2685,6 +2685,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
 				    struct kvm_lapic_state *s)
 {
+	kvm_x86_ops->sync_pir_to_irr(vcpu);
 	memcpy(s->regs, vcpu->arch.apic->regs, sizeof *s);
 
 	return 0;
-- 
1.7.1



* Re: [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt
  2013-04-10 13:22 ` [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt Yang Zhang
@ 2013-04-10 15:32   ` Gleb Natapov
  2013-04-11  0:56     ` Zhang, Yang Z
  0 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2013-04-10 15:32 UTC (permalink / raw)
  To: Yang Zhang; +Cc: kvm, mtosatti, xiantao.zhang, jun.nakajima

On Wed, Apr 10, 2013 at 09:22:57PM +0800, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> If posted interrupts are available, use them to inject virtual
> interrupts into the guest.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  arch/x86/kvm/lapic.c |   32 +++++++++++++++++++++-----------
>  arch/x86/kvm/vmx.c   |    2 +-
>  arch/x86/kvm/x86.c   |    1 +
>  3 files changed, 23 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 42a87ac..4fdb984 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -349,6 +349,7 @@ static inline int apic_find_highest_irr(struct kvm_lapic *apic)
>  	if (!apic->irr_pending)
>  		return -1;
>  
> +	kvm_x86_ops->sync_pir_to_irr(apic->vcpu);
>  	result = apic_search_irr(apic);
>  	ASSERT(result == -1 || result >= 16);
>  
> @@ -679,18 +680,27 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
>  		if (dest_map)
>  			__set_bit(vcpu->vcpu_id, dest_map);
>  
> -		result = !apic_test_and_set_irr(vector, apic);
> -		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
> -					  trig_mode, vector, !result);
> -		if (!result) {
> -			if (trig_mode)
> -				apic_debug("level trig mode repeatedly for "
> -						"vector %d", vector);
> -			break;
> -		}
> +		if (kvm_x86_ops->deliver_posted_interrupt) {
> +			result = 1;
> +			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
> +		} else {
> +			result = !apic_test_and_set_irr(vector, apic);
> +
> +			trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
> +					trig_mode, vector, !result);
> +			if (!result) {
> +				if (trig_mode)
> +					apic_debug("level trig mode repeatedly "
> +						"for vector %d", vector);
> +				goto out;
> +			}
>  
> -		kvm_make_request(KVM_REQ_EVENT, vcpu);
> -		kvm_vcpu_kick(vcpu);
> +			kvm_make_request(KVM_REQ_EVENT, vcpu);
> +			kvm_vcpu_kick(vcpu);
> +		}
> +out:
> +		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
> +				trig_mode, vector, !result);
Sigh, now you trace it twice.

>  		break;
>  
>  	case APIC_DM_REMRD:
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 314b2ed..52b21da 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -84,7 +84,7 @@ module_param(vmm_exclusive, bool, S_IRUGO);
>  static bool __read_mostly fasteoi = 1;
>  module_param(fasteoi, bool, S_IRUGO);
>  
> -static bool __read_mostly enable_apicv;
> +static bool __read_mostly enable_apicv = 1;
>  module_param(enable_apicv, bool, S_IRUGO);
>  
>  /*
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 72be079..486f627 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2685,6 +2685,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
>  				    struct kvm_lapic_state *s)
>  {
> +	kvm_x86_ops->sync_pir_to_irr(vcpu);
>  	memcpy(s->regs, vcpu->arch.apic->regs, sizeof *s);
>  
>  	return 0;
> -- 
> 1.7.1

--
			Gleb.


* Re: [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting
  2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
                   ` (6 preceding siblings ...)
  2013-04-10 13:22 ` [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt Yang Zhang
@ 2013-04-10 15:33 ` Gleb Natapov
  2013-04-11  1:03   ` Zhang, Yang Z
  7 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2013-04-10 15:33 UTC (permalink / raw)
  To: Yang Zhang; +Cc: kvm, mtosatti, xiantao.zhang, jun.nakajima

On Wed, Apr 10, 2013 at 09:22:50PM +0800, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> The following patches add Posted Interrupt support to KVM:
> The first patch enables the 'acknowledge interrupt on vmexit' feature. Since
> it is required by posted interrupts, we need to enable it first.
> 
> The subsequent patches add the posted interrupt support itself:
> posted interrupts allow APIC interrupts to be injected into the guest
> directly, without any vmexit.
> 
> - When delivering an interrupt to the guest, if the target vcpu is running,
>   update the Posted-interrupt Requests bitmap and send a notification event
>   to the vcpu. The vcpu will then handle the interrupt automatically,
>   without any software involvement.
> 
> - If the target vcpu is not running, or a notification event is already
>   pending in the vcpu, do nothing. The interrupt will be handled at the
>   next vm entry.
> 
> Changes from v8 to v9:
> * Add tracing to the posted interrupt path when delivering an interrupt.
> * Scan the ioapic when updating the SPIV register.
I do not see it in the patch series. Have I missed it?

> * Rebase on top of KVM upstream + RTC eoi tracking patch.
> 
> Changes from v7 to v8:
> * Remove the unused member 'on' from struct pi_desc.
> * Register a dummy function for sync_pir_to_irr when apicv is disabled.
> * Minor fixup.
> * Rebase on top of KVM upstream + RTC eoi tracking patch.
> 
> Yang Zhang (7):
>   KVM: VMX: Enable acknowledge interrupt on vmexit
>   KVM: VMX: Register a new IPI for posted interrupt
>   KVM: VMX: Check the posted interrupt capability
>   KVM: Call common update function when ioapic entry changed.
>   KVM: Set TMR when programming ioapic entry
>   KVM: VMX: Add the algorithm of deliver posted interrupt
>   KVM: VMX: Use posted interrupt to deliver virtual interrupt
> 
>  arch/ia64/kvm/lapic.h              |    6 -
>  arch/x86/include/asm/entry_arch.h  |    4 +
>  arch/x86/include/asm/hardirq.h     |    3 +
>  arch/x86/include/asm/hw_irq.h      |    1 +
>  arch/x86/include/asm/irq_vectors.h |    5 +
>  arch/x86/include/asm/kvm_host.h    |    3 +
>  arch/x86/include/asm/vmx.h         |    4 +
>  arch/x86/kernel/entry_64.S         |    5 +
>  arch/x86/kernel/irq.c              |   22 ++++
>  arch/x86/kernel/irqinit.c          |    4 +
>  arch/x86/kvm/lapic.c               |   66 ++++++++----
>  arch/x86/kvm/lapic.h               |    7 ++
>  arch/x86/kvm/svm.c                 |   12 ++
>  arch/x86/kvm/vmx.c                 |  207 +++++++++++++++++++++++++++++++-----
>  arch/x86/kvm/x86.c                 |   19 +++-
>  include/linux/kvm_host.h           |    4 +-
>  virt/kvm/ioapic.c                  |   32 ++++--
>  virt/kvm/ioapic.h                  |    7 +-
>  virt/kvm/irq_comm.c                |    4 +-
>  virt/kvm/kvm_main.c                |    5 +-
>  20 files changed, 341 insertions(+), 79 deletions(-)

--
			Gleb.


* RE: [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt
  2013-04-10 15:32   ` Gleb Natapov
@ 2013-04-11  0:56     ` Zhang, Yang Z
  0 siblings, 0 replies; 14+ messages in thread
From: Zhang, Yang Z @ 2013-04-11  0:56 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, mtosatti, Zhang, Xiantao, Nakajima, Jun

Gleb Natapov wrote on 2013-04-10:
> On Wed, Apr 10, 2013 at 09:22:57PM +0800, Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> If posted interrupt is avaliable, then uses it to inject virtual
>> interrupt to guest.
>> 
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>>  arch/x86/kvm/lapic.c |   32 +++++++++++++++++++++-----------
>>  arch/x86/kvm/vmx.c   |    2 +-
>>  arch/x86/kvm/x86.c   |    1 +
>>  3 files changed, 23 insertions(+), 12 deletions(-)
>> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
>> index 42a87ac..4fdb984 100644
>> --- a/arch/x86/kvm/lapic.c
>> +++ b/arch/x86/kvm/lapic.c
>> @@ -349,6 +349,7 @@ static inline int apic_find_highest_irr(struct kvm_lapic *apic)
>>  	if (!apic->irr_pending)
>>  		return -1;
>> +	kvm_x86_ops->sync_pir_to_irr(apic->vcpu);
>>  	result = apic_search_irr(apic);
>>  	ASSERT(result == -1 || result >= 16);
>> @@ -679,18 +680,27 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
>>  		if (dest_map)
>>  			__set_bit(vcpu->vcpu_id, dest_map);
>> -		result = !apic_test_and_set_irr(vector, apic);
>> -		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
>> -					  trig_mode, vector, !result);
>> -		if (!result) {
>> -			if (trig_mode)
>> -				apic_debug("level trig mode repeatedly for "
>> -						"vector %d", vector);
>> -			break;
>> -		}
>> +		if (kvm_x86_ops->deliver_posted_interrupt) {
>> +			result = 1;
>> +			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
>> +		} else {
>> +			result = !apic_test_and_set_irr(vector, apic);
>> +
>> +			trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
>> +					trig_mode, vector, !result);
>> +			if (!result) {
>> +				if (trig_mode)
>> +					apic_debug("level trig mode repeatedly "
>> +						"for vector %d", vector);
>> +				goto out;
>> +			}
>> 
>> -		kvm_make_request(KVM_REQ_EVENT, vcpu);
>> -		kvm_vcpu_kick(vcpu);
>> +			kvm_make_request(KVM_REQ_EVENT, vcpu);
>> +			kvm_vcpu_kick(vcpu);
>> +		}
>> +out:
>> +		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
>> +				trig_mode, vector, !result);
> Sigh, now you trace it twice.
Yes. I forgot to remove the old one. :(
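
For the respin the intent is simply to drop the first trace call and keep only the one after the out: label, roughly like this (an illustrative sketch of the corrected flow, not the final patch):

		if (kvm_x86_ops->deliver_posted_interrupt) {
			/* PIR bit and notification handled by the callback */
			result = 1;
			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
		} else {
			result = !apic_test_and_set_irr(vector, apic);
			if (!result) {
				if (trig_mode)
					apic_debug("level trig mode repeatedly "
							"for vector %d", vector);
				goto out;
			}

			kvm_make_request(KVM_REQ_EVENT, vcpu);
			kvm_vcpu_kick(vcpu);
		}
out:
		/* single trace point covers both the PI and the IRR path */
		trace_kvm_apic_accept_irq(vcpu->vcpu_id, delivery_mode,
					  trig_mode, vector, !result);
		break;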

> 
>>  		break;
>>  
>>  	case APIC_DM_REMRD:
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 314b2ed..52b21da 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -84,7 +84,7 @@ module_param(vmm_exclusive, bool, S_IRUGO);
>>  static bool __read_mostly fasteoi = 1;
>>  module_param(fasteoi, bool, S_IRUGO);
>> -static bool __read_mostly enable_apicv;
>> +static bool __read_mostly enable_apicv = 1;
>>  module_param(enable_apicv, bool, S_IRUGO);
>>  
>>  /*
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 72be079..486f627 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -2685,6 +2685,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>  static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
>>  				    struct kvm_lapic_state *s)
>>  {
>> +	kvm_x86_ops->sync_pir_to_irr(vcpu);
>>  	memcpy(s->regs, vcpu->arch.apic->regs, sizeof *s);
>>  
>>  	return 0;
>> --
>> 1.7.1
> 
> --
> 			Gleb.


Best regards,
Yang


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting
  2013-04-10 15:33 ` [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Gleb Natapov
@ 2013-04-11  1:03   ` Zhang, Yang Z
  2013-04-11  5:44     ` Gleb Natapov
  0 siblings, 1 reply; 14+ messages in thread
From: Zhang, Yang Z @ 2013-04-11  1:03 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, mtosatti, Zhang, Xiantao, Nakajima, Jun

Gleb Natapov wrote on 2013-04-10:
> On Wed, Apr 10, 2013 at 09:22:50PM +0800, Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> The follwoing patches are adding the Posted Interrupt supporting to KVM:
>> The first patch enables the feature 'acknowledge interrupt on vmexit'.Since
>> it is required by Posted interrupt, we need to enable it firstly.
>> 
>> And the subsequent patches are adding the posted interrupt supporting:
>> Posted Interrupt allows APIC interrupts to inject into guest directly
>> without any vmexit.
>> 
>> - When delivering a interrupt to guest, if target vcpu is running,
>>   update Posted-interrupt requests bitmap and send a notification event
>>   to the vcpu. Then the vcpu will handle this interrupt automatically,
>>   without any software involvemnt.
>> - If target vcpu is not running or there already a notification event
>>   pending in the vcpu, do nothing. The interrupt will be handled by
>>   next vm entry
>> Changes from v8 to v9:
>> * Add tracing in PI case when deliver interrupt.
>> * Scan ioapic when updating SPIV register.
> Do not see it at the patch series. Have I missed it?
The change is in the fourth patch:

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 6796218..4ccdc94 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -134,11 +134,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
 			static_key_slow_inc(&apic_sw_disabled.key);
 	}
 	apic_set_reg(apic, APIC_SPIV, val);
-}
-
-static inline int apic_enabled(struct kvm_lapic *apic)
-{
-	return kvm_apic_sw_enabled(apic) &&	kvm_apic_hw_enabled(apic);
+	kvm_make_request(KVM_REQ_SCAN_IOAPIC, apic->vcpu);
 }

As you mentioned, vcpu_scan_ioapic() calls apic_enabled() to check whether the apic is enabled, so we must make sure the ioapic is rescanned whenever the apic state changes.
I also found that recalculate_apic_map() does not track enabling/disabling the apic by software, so the make_scan_ioapic_request in recalculate_apic_map() is not enough.
We should also force an ioapic rescan when the apic state is changed by software, i.e. when the SPIV register is updated.
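
As a rough illustration only (a sketch of the flow, not a hunk from this mail), the request set in apic_set_spiv() is meant to be picked up on the next guest entry, where the ioapic is rescanned:

	/* in vcpu_enter_guest(), sketch of how the request is consumed */
	if (kvm_check_request(KVM_REQ_SCAN_IOAPIC, vcpu))
		vcpu_scan_ioapic(vcpu);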

> 
>> * Rebase on top of KVM upstream + RTC eoi tracking patch.
>> 
>> Changes from v7 to v8:
>> * Remove unused memeber 'on' from struct pi_desc.
>> * Register a dummy function to sync_pir_to_irr is apicv is disabled.
>> * Minor fixup.
>> * Rebase on top of KVM upstream + RTC eoi tracking patch.
>> 
>> Yang Zhang (7):
>>   KVM: VMX: Enable acknowledge interupt on vmexit
>>   KVM: VMX: Register a new IPI for posted interrupt
>>   KVM: VMX: Check the posted interrupt capability
>>   KVM: Call common update function when ioapic entry changed.
>>   KVM: Set TMR when programming ioapic entry
>>   KVM: VMX: Add the algorithm of deliver posted interrupt
>>   KVM: VMX: Use posted interrupt to deliver virtual interrupt
>>  arch/ia64/kvm/lapic.h              |    6 -
>>  arch/x86/include/asm/entry_arch.h  |    4 +
>>  arch/x86/include/asm/hardirq.h     |    3 +
>>  arch/x86/include/asm/hw_irq.h      |    1 +
>>  arch/x86/include/asm/irq_vectors.h |    5 +
>>  arch/x86/include/asm/kvm_host.h    |    3 +
>>  arch/x86/include/asm/vmx.h         |    4 +
>>  arch/x86/kernel/entry_64.S         |    5 +
>>  arch/x86/kernel/irq.c              |   22 ++++
>>  arch/x86/kernel/irqinit.c          |    4 +
>>  arch/x86/kvm/lapic.c               |   66 ++++++++----
>>  arch/x86/kvm/lapic.h               |    7 ++
>>  arch/x86/kvm/svm.c                 |   12 ++
>>  arch/x86/kvm/vmx.c                 |  207 +++++++++++++++++++++++++++++++-----
>>  arch/x86/kvm/x86.c                 |   19 +++-
>>  include/linux/kvm_host.h           |    4 +-
>>  virt/kvm/ioapic.c                  |   32 ++++--
>>  virt/kvm/ioapic.h                  |    7 +-
>>  virt/kvm/irq_comm.c                |    4 +-
>>  virt/kvm/kvm_main.c                |    5 +-
>>  20 files changed, 341 insertions(+), 79 deletions(-)
> 
> --
> 			Gleb.


Best regards,
Yang



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting
  2013-04-11  1:03   ` Zhang, Yang Z
@ 2013-04-11  5:44     ` Gleb Natapov
  2013-04-11  6:05       ` Zhang, Yang Z
  0 siblings, 1 reply; 14+ messages in thread
From: Gleb Natapov @ 2013-04-11  5:44 UTC (permalink / raw)
  To: Zhang, Yang Z; +Cc: kvm, mtosatti, Zhang, Xiantao, Nakajima, Jun

On Thu, Apr 11, 2013 at 01:03:30AM +0000, Zhang, Yang Z wrote:
> Gleb Natapov wrote on 2013-04-10:
> > On Wed, Apr 10, 2013 at 09:22:50PM +0800, Yang Zhang wrote:
> >> From: Yang Zhang <yang.z.zhang@Intel.com>
> >> 
> >> The follwoing patches are adding the Posted Interrupt supporting to KVM:
> >> The first patch enables the feature 'acknowledge interrupt on vmexit'.Since
> >> it is required by Posted interrupt, we need to enable it firstly.
> >> 
> >> And the subsequent patches are adding the posted interrupt supporting:
> >> Posted Interrupt allows APIC interrupts to inject into guest directly
> >> without any vmexit.
> >> 
> >> - When delivering a interrupt to guest, if target vcpu is running,
> >>   update Posted-interrupt requests bitmap and send a notification event
> >>   to the vcpu. Then the vcpu will handle this interrupt automatically,
> >>   without any software involvemnt.
> >> - If target vcpu is not running or there already a notification event
> >>   pending in the vcpu, do nothing. The interrupt will be handled by
> >>   next vm entry
> >> Changes from v8 to v9:
> >> * Add tracing in PI case when deliver interrupt.
> >> * Scan ioapic when updating SPIV register.
> > Do not see it at the patch series. Have I missed it?
> The change is in forth patch:
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 6796218..4ccdc94 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -134,11 +134,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
>  			static_key_slow_inc(&apic_sw_disabled.key);
>  	}
>  	apic_set_reg(apic, APIC_SPIV, val);
> -}
> -
> -static inline int apic_enabled(struct kvm_lapic *apic)
> -{
> -	return kvm_apic_sw_enabled(apic) &&	kvm_apic_hw_enabled(apic);
> +	kvm_make_request(KVM_REQ_SCAN_IOAPIC, apic->vcpu);
>  }
> 
OK, I see it now. Thanks.

> As you mentioned, vcpu_scan_ioapic() calls apic_enabled() to check whether the apic is enabled, so we must make sure the ioapic is rescanned whenever the apic state changes.
> I also found that recalculate_apic_map() does not track enabling/disabling the apic by software, so the make_scan_ioapic_request in recalculate_apic_map() is not enough.
> We should also force an ioapic rescan when the apic state is changed by software, i.e. when the SPIV register is updated.
> 
SDM section 10.4.7.2, "Local APIC State After It Has Been Software Disabled", says:

  Pending interrupts in the IRR and ISR registers are held and require
  masking or handling by the CPU.

My understanding is that we should treat a software-disabled APIC as a
valid target for an interrupt. vcpu_scan_ioapic() should check
kvm_apic_hw_enabled() only.
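
In other words, the gate would look roughly like this (a sketch of the suggested check only, not a patch):

static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
{
	/*
	 * A software-disabled APIC still holds pending IRR/ISR bits, so
	 * only the hardware enable (APIC base MSR) should gate the scan.
	 */
	if (!kvm_apic_hw_enabled(vcpu->arch.apic))
		return;

	/* ... rebuild eoi exit bitmap/TMR from the ioapic entries ... */
}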

--
			Gleb.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting
  2013-04-11  5:44     ` Gleb Natapov
@ 2013-04-11  6:05       ` Zhang, Yang Z
  0 siblings, 0 replies; 14+ messages in thread
From: Zhang, Yang Z @ 2013-04-11  6:05 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: kvm, mtosatti, Zhang, Xiantao, Nakajima, Jun

Gleb Natapov wrote on 2013-04-11:
> On Thu, Apr 11, 2013 at 01:03:30AM +0000, Zhang, Yang Z wrote:
>> Gleb Natapov wrote on 2013-04-10:
>>> On Wed, Apr 10, 2013 at 09:22:50PM +0800, Yang Zhang wrote:
>>>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>>> 
>>>> The follwoing patches are adding the Posted Interrupt supporting to KVM:
>>>> The first patch enables the feature 'acknowledge interrupt on vmexit'.Since
>>>> it is required by Posted interrupt, we need to enable it firstly.
>>>> 
>>>> And the subsequent patches are adding the posted interrupt supporting:
>>>> Posted Interrupt allows APIC interrupts to inject into guest directly
>>>> without any vmexit.
>>>> 
> >> - When delivering a interrupt to guest, if target vcpu is running,
> >>   update Posted-interrupt requests bitmap and send a notification event
> >>   to the vcpu. Then the vcpu will handle this interrupt automatically,
> >>   without any software involvemnt.
> >> - If target vcpu is not running or there already a notification event
> >>   pending in the vcpu, do nothing. The interrupt will be handled by
> >>   next vm entry
>>>> Changes from v8 to v9:
>>>> * Add tracing in PI case when deliver interrupt.
>>>> * Scan ioapic when updating SPIV register.
>>> Do not see it at the patch series. Have I missed it?
>> The change is in forth patch:
>> 
>> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
>> index 6796218..4ccdc94 100644
>> --- a/arch/x86/kvm/lapic.c
>> +++ b/arch/x86/kvm/lapic.c
>> @@ -134,11 +134,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
>>  			static_key_slow_inc(&apic_sw_disabled.key);
>>  	}
>>  	apic_set_reg(apic, APIC_SPIV, val);
>> -}
>> -
>> -static inline int apic_enabled(struct kvm_lapic *apic)
>> -{
>> -	return kvm_apic_sw_enabled(apic) &&	kvm_apic_hw_enabled(apic);
>> +	kvm_make_request(KVM_REQ_SCAN_IOAPIC, apic->vcpu);
>>  }
> OK, see it now. Thanks.
> 
>> As you mentioned, vcpu_scan_ioapic() calls apic_enabled() to check whether
>> the apic is enabled, so we must make sure the ioapic is rescanned whenever
>> the apic state changes.
>> I also found that recalculate_apic_map() does not track enabling/disabling
>> the apic by software, so the make_scan_ioapic_request in
>> recalculate_apic_map() is not enough.
>> We should also force an ioapic rescan when the apic state is changed by
>> software, i.e. when the SPIV register is updated.
>> 
> 10.4.7.2 Local APIC State After It Has Been Software Disabled says:
> 
>   Pending interrupts in the IRR and ISR registers are held and require
>   masking or handling by the CPU.
> My understanding is that we should treat software disabled APIC as a
> valid target for an interrupt. vcpu_scan_ioapic() should check
> kvm_apic_hw_enabled() only.
Indeed. kvm_apic_hw_enabled() is the right one.

Best regards,
Yang



^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2013-04-11  6:05 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-10 13:22 [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Yang Zhang
2013-04-10 13:22 ` [PATCH v9 1/7] KVM: VMX: Enable acknowledge interupt on vmexit Yang Zhang
2013-04-10 13:22 ` [PATCH v9 2/7] KVM: VMX: Register a new IPI for posted interrupt Yang Zhang
2013-04-10 13:22 ` [PATCH v9 3/7] KVM: VMX: Check the posted interrupt capability Yang Zhang
2013-04-10 13:22 ` [PATCH v9 4/7] KVM: Call common update function when ioapic entry changed Yang Zhang
2013-04-10 13:22 ` [PATCH v9 5/7] KVM: Set TMR when programming ioapic entry Yang Zhang
2013-04-10 13:22 ` [PATCH v9 6/7] KVM: VMX: Add the algorithm of deliver posted interrupt Yang Zhang
2013-04-10 13:22 ` [PATCH v9 7/7] KVM: VMX: Use posted interrupt to deliver virtual interrupt Yang Zhang
2013-04-10 15:32   ` Gleb Natapov
2013-04-11  0:56     ` Zhang, Yang Z
2013-04-10 15:33 ` [PATCH v9 0/7] KVM: VMX: Add Posted Interrupt supporting Gleb Natapov
2013-04-11  1:03   ` Zhang, Yang Z
2013-04-11  5:44     ` Gleb Natapov
2013-04-11  6:05       ` Zhang, Yang Z
