kvm.vger.kernel.org archive mirror
* [PATCH v4 0/6] Nonatomic interrupt injection
@ 2010-08-30 11:36 Avi Kivity
  2010-08-30 11:36 ` [PATCH v4 1/6] KVM: Check for pending events before attempting injection Avi Kivity
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:36 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

This patchset changes interrupt injection to be done from normal process
context instead of interrupts-disabled context.  This is useful for real-mode
interrupt injection on Intel without the current hacks (injecting as
a software interrupt of a vm86 task), for reducing latencies, and later, for
allowing nested virtualization code to use kvm_read_guest()/kvm_write_guest()
instead of kmap() to access the guest vmcb/vmcs.

The series survives a stress hack that cancels every 16th entry after
injection has already taken place.

With the PIC reset fix posted earlier, this passes autotest on both AMD and
Intel, with in-kernel irqchip.  I'll run -no-kvm-irqchip tests shortly.

Please review carefully, especially the first patch.  Any missing
kvm_make_request() there may result in a hung guest.

v4: add KVM_REQ_EVENT after lapic restore

v3: close new race between injection and entry
    fix Intel real-mode injection cancellation

v2: svm support (easier than expected)
    fix silly vmx warning

Avi Kivity (6):
  KVM: Check for pending events before attempting injection
  KVM: VMX: Split up vmx_complete_interrupts()
  KVM: VMX: Move real-mode interrupt injection fixup to
    vmx_complete_interrupts()
  KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and
    entry
  KVM: Non-atomic interrupt injection
  KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration

 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/i8259.c            |    1 +
 arch/x86/kvm/lapic.c            |   13 ++++-
 arch/x86/kvm/svm.c              |   20 ++++++-
 arch/x86/kvm/vmx.c              |  116 ++++++++++++++++++++++++++------------
 arch/x86/kvm/x86.c              |   44 ++++++++++----
 include/linux/kvm_host.h        |    1 +
 7 files changed, 143 insertions(+), 53 deletions(-)


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v4 1/6] KVM: Check for pending events before attempting injection
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
@ 2010-08-30 11:36 ` Avi Kivity
  2010-08-30 17:41   ` Marcelo Tosatti
  2010-08-30 11:36 ` [PATCH v4 2/6] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:36 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

Instead of blindly attempting to inject an event before each guest entry,
check for a possible event first in vcpu->requests.  Sites that can trigger
event injection are modified to set KVM_REQ_EVENT:

- interrupt, nmi window opening
- ppr updates
- i8259 output changes
- local apic irr changes
- rflags updates
- gif flag set
- event set on exit

This improves non-injecting entry performance, and sets the stage for
non-atomic injection.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/i8259.c     |    1 +
 arch/x86/kvm/lapic.c     |   13 +++++++++++--
 arch/x86/kvm/svm.c       |    8 +++++++-
 arch/x86/kvm/vmx.c       |    6 ++++++
 arch/x86/kvm/x86.c       |   35 ++++++++++++++++++++++++++---------
 include/linux/kvm_host.h |    1 +
 6 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index 5662ff6..6281a32 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -67,6 +67,7 @@ static void pic_unlock(struct kvm_pic *s)
 		if (!found)
 			return;
 
+		kvm_make_request(KVM_REQ_EVENT, found);
 		kvm_vcpu_kick(found);
 	}
 }
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 77d8c0f..c6f2f15 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -259,9 +259,10 @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic)
 
 static void apic_update_ppr(struct kvm_lapic *apic)
 {
-	u32 tpr, isrv, ppr;
+	u32 tpr, isrv, ppr, old_ppr;
 	int isr;
 
+	old_ppr = apic_get_reg(apic, APIC_PROCPRI);
 	tpr = apic_get_reg(apic, APIC_TASKPRI);
 	isr = apic_find_highest_isr(apic);
 	isrv = (isr != -1) ? isr : 0;
@@ -274,7 +275,10 @@ static void apic_update_ppr(struct kvm_lapic *apic)
 	apic_debug("vlapic %p, ppr 0x%x, isr 0x%x, isrv 0x%x",
 		   apic, ppr, isr, isrv);
 
-	apic_set_reg(apic, APIC_PROCPRI, ppr);
+	if (old_ppr != ppr) {
+		apic_set_reg(apic, APIC_PROCPRI, ppr);
+		kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
+	}
 }
 
 static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
@@ -391,6 +395,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			break;
 		}
 
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
 		kvm_vcpu_kick(vcpu);
 		break;
 
@@ -416,6 +421,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 				       "INIT on a runnable vcpu %d\n",
 				       vcpu->vcpu_id);
 			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
 			kvm_vcpu_kick(vcpu);
 		} else {
 			apic_debug("Ignoring de-assert INIT to vcpu %d\n",
@@ -430,6 +436,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			result = 1;
 			vcpu->arch.sipi_vector = vector;
 			vcpu->arch.mp_state = KVM_MP_STATE_SIPI_RECEIVED;
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
 			kvm_vcpu_kick(vcpu);
 		}
 		break;
@@ -475,6 +482,7 @@ static void apic_set_eoi(struct kvm_lapic *apic)
 		trigger_mode = IOAPIC_EDGE_TRIG;
 	if (!(apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI))
 		kvm_ioapic_update_eoi(apic->vcpu->kvm, vector, trigger_mode);
+	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
 }
 
 static void apic_send_ipi(struct kvm_lapic *apic)
@@ -1152,6 +1160,7 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu)
 	update_divide_count(apic);
 	start_apic_timer(apic);
 	apic->irr_pending = true;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 
 void __kvm_migrate_apic_timer(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 69a32bb..8884e51 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2288,6 +2288,7 @@ static int stgi_interception(struct vcpu_svm *svm)
 
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	skip_emulated_instruction(&svm->vcpu);
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
 	enable_gif(svm);
 
@@ -2663,6 +2664,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
 {
 	struct kvm_run *kvm_run = svm->vcpu.run;
 
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 	svm_clear_vintr(svm);
 	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
 	/*
@@ -3108,8 +3110,10 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 
 	svm->int3_injected = 0;
 
-	if (svm->vcpu.arch.hflags & HF_IRET_MASK)
+	if (svm->vcpu.arch.hflags & HF_IRET_MASK) {
 		svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK);
+		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+	}
 
 	svm->vcpu.arch.nmi_injected = false;
 	kvm_clear_exception_queue(&svm->vcpu);
@@ -3118,6 +3122,8 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 	if (!(exitintinfo & SVM_EXITINTINFO_VALID))
 		return;
 
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+
 	vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
 	type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 676555c..4255856 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3333,6 +3333,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu)
 
 static int handle_tpr_below_threshold(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 1;
 }
 
@@ -3345,6 +3346,8 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu)
 	cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	++vcpu->stat.irq_window_exits;
 
 	/*
@@ -3601,6 +3604,7 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu)
 	cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
 	++vcpu->stat.nmi_window_exits;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	return 1;
 }
@@ -3834,6 +3838,8 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 	if (!idtv_info_valid)
 		return;
 
+	kvm_make_request(KVM_REQ_EVENT, &vmx->vcpu);
+
 	vector = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
 	type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cc6d6cd..907ea7f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -284,6 +284,8 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 	u32 prev_nr;
 	int class1, class2;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	if (!vcpu->arch.exception.pending) {
 	queue:
 		vcpu->arch.exception.pending = true;
@@ -339,6 +341,7 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 
 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	vcpu->arch.nmi_pending = 1;
 }
 EXPORT_SYMBOL_GPL(kvm_inject_nmi);
@@ -2470,6 +2473,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
 		vcpu->arch.sipi_vector = events->sipi_vector;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -4201,6 +4206,7 @@ restart:
 
 	toggle_interruptibility(vcpu, vcpu->arch.emulate_ctxt.interruptibility);
 	kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
 	kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
 
@@ -4927,17 +4933,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	inject_pending_event(vcpu);
+	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+		inject_pending_event(vcpu);
 
-	/* enable NMI/IRQ window open exits if needed */
-	if (vcpu->arch.nmi_pending)
-		kvm_x86_ops->enable_nmi_window(vcpu);
-	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
-		kvm_x86_ops->enable_irq_window(vcpu);
+		/* enable NMI/IRQ window open exits if needed */
+		if (vcpu->arch.nmi_pending)
+			kvm_x86_ops->enable_nmi_window(vcpu);
+		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+			kvm_x86_ops->enable_irq_window(vcpu);
 
-	if (kvm_lapic_enabled(vcpu)) {
-		update_cr8_intercept(vcpu);
-		kvm_lapic_sync_to_vapic(vcpu);
+		if (kvm_lapic_enabled(vcpu)) {
+			update_cr8_intercept(vcpu);
+			kvm_lapic_sync_to_vapic(vcpu);
+		}
 	}
 
 	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
@@ -5177,6 +5185,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu->arch.exception.pending = false;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -5240,6 +5250,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
 	vcpu->arch.mp_state = mp_state->mp_state;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 0;
 }
 
@@ -5261,6 +5272,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason,
 	memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
 	kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
 	kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return EMULATE_DONE;
 }
 EXPORT_SYMBOL_GPL(kvm_task_switch);
@@ -5331,6 +5343,8 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 	    !is_protmode(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -5563,6 +5577,8 @@ int kvm_arch_vcpu_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.dr6 = DR6_FIXED_1;
 	vcpu->arch.dr7 = DR7_FIXED_1;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return kvm_x86_ops->vcpu_reset(vcpu);
 }
 
@@ -5870,6 +5886,7 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	    kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip))
 		rflags |= X86_EFLAGS_TF;
 	kvm_x86_ops->set_rflags(vcpu, rflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_set_rflags);
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f2ecdd5..a08d267 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -39,6 +39,7 @@
 #define KVM_REQ_KVMCLOCK_UPDATE    8
 #define KVM_REQ_KICK               9
 #define KVM_REQ_DEACTIVATE_FPU    10
+#define KVM_REQ_EVENT             11
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID	0
 
-- 
1.7.1



* [PATCH v4 2/6] KVM: VMX: Split up vmx_complete_interrupts()
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
  2010-08-30 11:36 ` [PATCH v4 1/6] KVM: Check for pending events before attempting injection Avi Kivity
@ 2010-08-30 11:36 ` Avi Kivity
  2010-08-30 11:36 ` [PATCH v4 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() Avi Kivity
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:36 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

vmx_complete_interrupts() does too much, split it up:
 - vmx_vcpu_run() gets the "cache important vmcs fields" part
 - a new vmx_complete_atomic_exit() gets the parts that must be done atomically
 - a new vmx_recover_nmi_blocking() does what its name says
 - vmx_complete_interrupts() retains the event injection recovery code

This helps in reducing the work done in atomic context.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/vmx.c |   39 +++++++++++++++++++++++++++------------
 1 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4255856..521df28 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -125,6 +125,7 @@ struct vcpu_vmx {
 	unsigned long         host_rsp;
 	int                   launched;
 	u8                    fail;
+	u32                   exit_intr_info;
 	u32                   idt_vectoring_info;
 	struct shared_msr_entry *guest_msrs;
 	int                   nmsrs;
@@ -3781,18 +3782,9 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 	vmcs_write32(TPR_THRESHOLD, irr);
 }
 
-static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
 {
-	u32 exit_intr_info;
-	u32 idt_vectoring_info = vmx->idt_vectoring_info;
-	bool unblock_nmi;
-	u8 vector;
-	int type;
-	bool idtv_info_valid;
-
-	exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
-
-	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+	u32 exit_intr_info = vmx->exit_intr_info;
 
 	/* Handle machine checks before interrupts are enabled */
 	if ((vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
@@ -3807,8 +3799,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 		asm("int $2");
 		kvm_after_handle_nmi(&vmx->vcpu);
 	}
+}
 
-	idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
+static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
+{
+	u32 exit_intr_info = vmx->exit_intr_info;
+	bool unblock_nmi;
+	u8 vector;
+	bool idtv_info_valid;
+
+	idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK;
 
 	if (cpu_has_virtual_nmis()) {
 		unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0;
@@ -3830,6 +3830,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 	} else if (unlikely(vmx->soft_vnmi_blocked))
 		vmx->vnmi_blocked_time +=
 			ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
+}
+
+static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+{
+	u32 idt_vectoring_info = vmx->idt_vectoring_info;
+	u8 vector;
+	int type;
+	bool idtv_info_valid;
+
+	idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
 
 	vmx->vcpu.arch.nmi_injected = false;
 	kvm_clear_exception_queue(&vmx->vcpu);
@@ -4042,6 +4052,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
 	vmx->launched = 1;
 
+	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+	vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+
+	vmx_complete_atomic_exit(vmx);
+	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
 }
 
-- 
1.7.1



* [PATCH v4 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts()
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
  2010-08-30 11:36 ` [PATCH v4 1/6] KVM: Check for pending events before attempting injection Avi Kivity
  2010-08-30 11:36 ` [PATCH v4 2/6] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
@ 2010-08-30 11:36 ` Avi Kivity
  2010-08-30 11:37 ` [PATCH v4 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:36 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

This allows reuse of vmx_complete_interrupts() for cancelling injections.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/vmx.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 521df28..39dd627 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -182,6 +182,7 @@ static int init_rmode(struct kvm *kvm);
 static u64 construct_eptp(unsigned long root_hpa);
 static void kvm_cpu_vmxon(u64 addr);
 static void kvm_cpu_vmxoff(void);
+static void fixup_rmode_irq(struct vcpu_vmx *vmx);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3834,11 +3835,15 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 
 static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 {
-	u32 idt_vectoring_info = vmx->idt_vectoring_info;
+	u32 idt_vectoring_info;
 	u8 vector;
 	int type;
 	bool idtv_info_valid;
 
+	if (vmx->rmode.irq.pending)
+		fixup_rmode_irq(vmx);
+
+	idt_vectoring_info = vmx->idt_vectoring_info;
 	idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
 
 	vmx->vcpu.arch.nmi_injected = false;
@@ -4046,8 +4051,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_dirty = 0;
 
 	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
-	if (vmx->rmode.irq.pending)
-		fixup_rmode_irq(vmx);
 
 	asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
 	vmx->launched = 1;
-- 
1.7.1



* [PATCH v4 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
                   ` (2 preceding siblings ...)
  2010-08-30 11:36 ` [PATCH v4 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() Avi Kivity
@ 2010-08-30 11:37 ` Avi Kivity
  2010-08-30 11:37 ` [PATCH v4 5/6] KVM: Non-atomic interrupt injection Avi Kivity
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:37 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

Currently vmx_complete_interrupts() can decode event information from the vmx
exit fields into the generic kvm event queues.  Make it able to decode
the information from the entry fields as well by parameterizing it.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/vmx.c |   34 +++++++++++++++++++++-------------
 1 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 39dd627..c6d37df 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -182,7 +182,7 @@ static int init_rmode(struct kvm *kvm);
 static u64 construct_eptp(unsigned long root_hpa);
 static void kvm_cpu_vmxon(u64 addr);
 static void kvm_cpu_vmxoff(void);
-static void fixup_rmode_irq(struct vcpu_vmx *vmx);
+static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3833,17 +3833,18 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 			ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
 }
 
-static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
+				      u32 idt_vectoring_info,
+				      int instr_len_field,
+				      int error_code_field)
 {
-	u32 idt_vectoring_info;
 	u8 vector;
 	int type;
 	bool idtv_info_valid;
 
 	if (vmx->rmode.irq.pending)
-		fixup_rmode_irq(vmx);
+		fixup_rmode_irq(vmx, &idt_vectoring_info);
 
-	idt_vectoring_info = vmx->idt_vectoring_info;
 	idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
 
 	vmx->vcpu.arch.nmi_injected = false;
@@ -3871,18 +3872,18 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 		break;
 	case INTR_TYPE_SOFT_EXCEPTION:
 		vmx->vcpu.arch.event_exit_inst_len =
-			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+			vmcs_read32(instr_len_field);
 		/* fall through */
 	case INTR_TYPE_HARD_EXCEPTION:
 		if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK) {
-			u32 err = vmcs_read32(IDT_VECTORING_ERROR_CODE);
+			u32 err = vmcs_read32(error_code_field);
 			kvm_queue_exception_e(&vmx->vcpu, vector, err);
 		} else
 			kvm_queue_exception(&vmx->vcpu, vector);
 		break;
 	case INTR_TYPE_SOFT_INTR:
 		vmx->vcpu.arch.event_exit_inst_len =
-			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+			vmcs_read32(instr_len_field);
 		/* fall through */
 	case INTR_TYPE_EXT_INTR:
 		kvm_queue_interrupt(&vmx->vcpu, vector,
@@ -3893,24 +3894,31 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 	}
 }
 
+static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
+{
+	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
+				  VM_EXIT_INSTRUCTION_LEN,
+				  IDT_VECTORING_ERROR_CODE);
+}
+
 /*
  * Failure to inject an interrupt should give us the information
  * in IDT_VECTORING_INFO_FIELD.  However, if the failure occurs
  * when fetching the interrupt redirection bitmap in the real-mode
  * tss, this doesn't happen.  So we do it ourselves.
  */
-static void fixup_rmode_irq(struct vcpu_vmx *vmx)
+static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info)
 {
 	vmx->rmode.irq.pending = 0;
 	if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip)
 		return;
 	kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip);
-	if (vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK) {
-		vmx->idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK;
-		vmx->idt_vectoring_info |= INTR_TYPE_EXT_INTR;
+	if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) {
+		*idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK;
+		*idt_vectoring_info |= INTR_TYPE_EXT_INTR;
 		return;
 	}
-	vmx->idt_vectoring_info =
+	*idt_vectoring_info =
 		VECTORING_INFO_VALID_MASK
 		| INTR_TYPE_EXT_INTR
 		| vmx->rmode.irq.vector;
-- 
1.7.1



* [PATCH v4 5/6] KVM: Non-atomic interrupt injection
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
                   ` (3 preceding siblings ...)
  2010-08-30 11:37 ` [PATCH v4 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity
@ 2010-08-30 11:37 ` Avi Kivity
  2010-08-30 11:37 ` [PATCH v4 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration Avi Kivity
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:37 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

Change the interrupt injection code to work from preemptible,
interrupts-enabled context.  This works by adding a ->cancel_injection()
operation that undoes an injection in case we were not able to actually enter
the guest (a condition that could never arise with atomic injection).

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/svm.c              |   12 ++++++++++++
 arch/x86/kvm/vmx.c              |   11 +++++++++++
 arch/x86/kvm/x86.c              |   31 ++++++++++++++++---------------
 4 files changed, 40 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9b30285..8004b34 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -510,6 +510,7 @@ struct kvm_x86_ops {
 	void (*queue_exception)(struct kvm_vcpu *vcpu, unsigned nr,
 				bool has_error_code, u32 error_code,
 				bool reinject);
+	void (*cancel_injection)(struct kvm_vcpu *vcpu);
 	int (*interrupt_allowed)(struct kvm_vcpu *vcpu);
 	int (*nmi_allowed)(struct kvm_vcpu *vcpu);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8884e51..e64a5b1 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3160,6 +3160,17 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 	}
 }
 
+static void svm_cancel_injection(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb_control_area *control = &svm->vmcb->control;
+
+	control->exit_int_info = control->event_inj;
+	control->exit_int_info_err = control->event_inj_err;
+	control->event_inj = 0;
+	svm_complete_interrupts(svm);
+}
+
 #ifdef CONFIG_X86_64
 #define R "r"
 #else
@@ -3523,6 +3534,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.set_irq = svm_set_irq,
 	.set_nmi = svm_inject_nmi,
 	.queue_exception = svm_queue_exception,
+	.cancel_injection = svm_cancel_injection,
 	.interrupt_allowed = svm_interrupt_allowed,
 	.nmi_allowed = svm_nmi_allowed,
 	.get_nmi_mask = svm_get_nmi_mask,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c6d37df..41b1ff6 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3901,6 +3901,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 				  IDT_VECTORING_ERROR_CODE);
 }
 
+static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
+{
+	__vmx_complete_interrupts(to_vmx(vcpu),
+				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
+				  VM_ENTRY_INSTRUCTION_LEN,
+				  VM_ENTRY_EXCEPTION_ERROR_CODE);
+
+	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
+}
+
 /*
  * Failure to inject an interrupt should give us the information
  * in IDT_VECTORING_INFO_FIELD.  However, if the failure occurs
@@ -4354,6 +4364,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.set_irq = vmx_inject_irq,
 	.set_nmi = vmx_inject_nmi,
 	.queue_exception = vmx_queue_exception,
+	.cancel_injection = vmx_cancel_injection,
 	.interrupt_allowed = vmx_interrupt_allowed,
 	.nmi_allowed = vmx_nmi_allowed,
 	.get_nmi_mask = vmx_get_nmi_mask,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 907ea7f..bb447f5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4911,6 +4911,21 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (unlikely(r))
 		goto out;
 
+	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+		inject_pending_event(vcpu);
+
+		/* enable NMI/IRQ window open exits if needed */
+		if (vcpu->arch.nmi_pending)
+			kvm_x86_ops->enable_nmi_window(vcpu);
+		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+			kvm_x86_ops->enable_irq_window(vcpu);
+
+		if (kvm_lapic_enabled(vcpu)) {
+			update_cr8_intercept(vcpu);
+			kvm_lapic_sync_to_vapic(vcpu);
+		}
+	}
+
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
@@ -4929,25 +4944,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		smp_wmb();
 		local_irq_enable();
 		preempt_enable();
+		kvm_x86_ops->cancel_injection(vcpu);
 		r = 1;
 		goto out;
 	}
 
-	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
-		inject_pending_event(vcpu);
-
-		/* enable NMI/IRQ window open exits if needed */
-		if (vcpu->arch.nmi_pending)
-			kvm_x86_ops->enable_nmi_window(vcpu);
-		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
-			kvm_x86_ops->enable_irq_window(vcpu);
-
-		if (kvm_lapic_enabled(vcpu)) {
-			update_cr8_intercept(vcpu);
-			kvm_lapic_sync_to_vapic(vcpu);
-		}
-	}
-
 	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
 
 	kvm_guest_enter();
-- 
1.7.1



* [PATCH v4 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
                   ` (4 preceding siblings ...)
  2010-08-30 11:37 ` [PATCH v4 5/6] KVM: Non-atomic interrupt injection Avi Kivity
@ 2010-08-30 11:37 ` Avi Kivity
  2010-08-30 11:52 ` [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
  2010-09-16 13:35 ` Avi Kivity
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:37 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

No code changes.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/vmx.c |   47 +++++++++++++++++++++++------------------------
 1 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 41b1ff6..e6317cc 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -182,7 +182,6 @@ static int init_rmode(struct kvm *kvm);
 static u64 construct_eptp(unsigned long root_hpa);
 static void kvm_cpu_vmxon(u64 addr);
 static void kvm_cpu_vmxoff(void);
-static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3833,6 +3832,29 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 			ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
 }
 
+/*
+ * Failure to inject an interrupt should give us the information
+ * in IDT_VECTORING_INFO_FIELD.  However, if the failure occurs
+ * when fetching the interrupt redirection bitmap in the real-mode
+ * tss, this doesn't happen.  So we do it ourselves.
+ */
+static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info)
+{
+	vmx->rmode.irq.pending = 0;
+	if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip)
+		return;
+	kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip);
+	if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) {
+		*idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK;
+		*idt_vectoring_info |= INTR_TYPE_EXT_INTR;
+		return;
+	}
+	*idt_vectoring_info =
+		VECTORING_INFO_VALID_MASK
+		| INTR_TYPE_EXT_INTR
+		| vmx->rmode.irq.vector;
+}
+
 static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
 				      u32 idt_vectoring_info,
 				      int instr_len_field,
@@ -3911,29 +3933,6 @@ static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);
 }
 
-/*
- * Failure to inject an interrupt should give us the information
- * in IDT_VECTORING_INFO_FIELD.  However, if the failure occurs
- * when fetching the interrupt redirection bitmap in the real-mode
- * tss, this doesn't happen.  So we do it ourselves.
- */
-static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info)
-{
-	vmx->rmode.irq.pending = 0;
-	if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip)
-		return;
-	kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip);
-	if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) {
-		*idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK;
-		*idt_vectoring_info |= INTR_TYPE_EXT_INTR;
-		return;
-	}
-	*idt_vectoring_info =
-		VECTORING_INFO_VALID_MASK
-		| INTR_TYPE_EXT_INTR
-		| vmx->rmode.irq.vector;
-}
-
 #ifdef CONFIG_X86_64
 #define R "r"
 #define Q "q"
-- 
1.7.1



* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
                   ` (5 preceding siblings ...)
  2010-08-30 11:37 ` [PATCH v4 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration Avi Kivity
@ 2010-08-30 11:52 ` Avi Kivity
  2010-09-16 13:35 ` Avi Kivity
  7 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-08-30 11:52 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

  On 08/30/2010 02:36 PM, Avi Kivity wrote:
> I'll run -no-kvm-irqchip tests shortly.

That needed a KVM_REQ_EVENT in the KVM_INTERRUPT path, but otherwise 
it's running smoothly.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH v4 1/6] KVM: Check for pending events before attempting injection
  2010-08-30 11:36 ` [PATCH v4 1/6] KVM: Check for pending events before attempting injection Avi Kivity
@ 2010-08-30 17:41   ` Marcelo Tosatti
  0 siblings, 0 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2010-08-30 17:41 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

On Mon, Aug 30, 2010 at 02:36:57PM +0300, Avi Kivity wrote:
> Instead of blindly attempting to inject an event before each guest entry,
> check for a possible event first in vcpu->requests.  Sites that can trigger
> event injection are modified to set KVM_REQ_EVENT:
> 
> - interrupt, nmi window opening
> - ppr updates
> - i8259 output changes
> - local apic irr changes
> - rflags updates
> - gif flag set
> - event set on exit
> 
> This improves non-injecting entry performance, and sets the stage for
> non-atomic injection.
> 
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
>  arch/x86/kvm/i8259.c     |    1 +
>  arch/x86/kvm/lapic.c     |   13 +++++++++++--
>  arch/x86/kvm/svm.c       |    8 +++++++-
>  arch/x86/kvm/vmx.c       |    6 ++++++
>  arch/x86/kvm/x86.c       |   35 ++++++++++++++++++++++++++---------
>  include/linux/kvm_host.h |    1 +
>  6 files changed, 52 insertions(+), 12 deletions(-)
> 

> index cc6d6cd..907ea7f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -284,6 +284,8 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
>  	u32 prev_nr;
>  	int class1, class2;
>  
> +	kvm_make_request(KVM_REQ_EVENT, vcpu);
> +
>  	if (!vcpu->arch.exception.pending) {
>  	queue:
>  		vcpu->arch.exception.pending = true;
> @@ -339,6 +341,7 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
>  
>  void kvm_inject_nmi(struct kvm_vcpu *vcpu)
>  {
> +	kvm_make_request(KVM_REQ_EVENT, vcpu);
>  	vcpu->arch.nmi_pending = 1;
>  }
>  EXPORT_SYMBOL_GPL(kvm_inject_nmi);
> @@ -2470,6 +2473,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
>  		vcpu->arch.sipi_vector = events->sipi_vector;
>  
> +	kvm_make_request(KVM_REQ_EVENT, vcpu);
> +
>  	return 0;
>  }
>  
> @@ -4201,6 +4206,7 @@ restart:
>  
>  	toggle_interruptibility(vcpu, vcpu->arch.emulate_ctxt.interruptibility);
>  	kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
> +	kvm_make_request(KVM_REQ_EVENT, vcpu);
>  	memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
>  	kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
>  
> @@ -4927,17 +4933,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  		goto out;
>  	}
>  
> -	inject_pending_event(vcpu);
> +	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
> +		inject_pending_event(vcpu);
>  
> -	/* enable NMI/IRQ window open exits if needed */
> -	if (vcpu->arch.nmi_pending)
> -		kvm_x86_ops->enable_nmi_window(vcpu);
> -	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> -		kvm_x86_ops->enable_irq_window(vcpu);
> +		/* enable NMI/IRQ window open exits if needed */
> +		if (vcpu->arch.nmi_pending)
> +			kvm_x86_ops->enable_nmi_window(vcpu);
> +		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> +			kvm_x86_ops->enable_irq_window(vcpu);

This misses the case where req_int_win == true but no KVM_REQ_EVENT is
pending: the block is skipped and the interrupt window is never enabled
for userspace.

Looks good otherwise.
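The fix being asked for can be sketched as a small standalone model (the
names vcpu_model and opens_irq_window are illustrative, not the kernel's;
the point is that the entry path must also run the injection/window block
when req_int_win is set, even with no KVM_REQ_EVENT pending):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the vcpu_enter_guest() gating discussed above;
 * not kernel code.  kvm_check_request() both tests and clears the bit. */
struct vcpu_model {
	bool req_event;      /* KVM_REQ_EVENT pending */
	bool nmi_pending;
	bool has_interrupt;  /* kvm_cpu_has_interrupt() */
};

/* Returns true when the entry path enables IRQ-window exits. */
static bool opens_irq_window(struct vcpu_model *v, bool req_int_win)
{
	if (v->req_event || req_int_win) {  /* "|| req_int_win" is the fix */
		v->req_event = false;       /* kvm_check_request() clears it */
		/* inject_pending_event() would run here */
		if (!v->nmi_pending && (v->has_interrupt || req_int_win))
			return true;        /* enable_irq_window() */
	}
	return false;
}
```

With only the original kvm_check_request() condition, a userspace
interrupt-window request arriving with no event source set would be
silently dropped.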


* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-08-30 11:36 [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
                   ` (6 preceding siblings ...)
  2010-08-30 11:52 ` [PATCH v4 0/6] Nonatomic interrupt injection Avi Kivity
@ 2010-09-16 13:35 ` Avi Kivity
  2010-09-17 19:12   ` Marcelo Tosatti
  7 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2010-09-16 13:35 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

  On 08/30/2010 02:36 PM, Avi Kivity wrote:
> This patchset changes interrupt injection to be done from normal process
> context instead of interrupts disabled context.  This is useful for real
> mode interrupt injection on Intel without the current hacks (injecting as
> a software interrupt of a vm86 task), reducing latencies, and later, for
> allowing nested virtualization code to use kvm_read_guest()/kvm_write_guest()
> instead of kmap() to access the guest vmcb/vmcs.
>
> Seems to survive a hack that cancels every 16th entry, after injection has
> already taken place.
>
> With the PIC reset fix posted earlier, this passes autotest on both AMD and
> Intel, with in-kernel irqchip.  I'll run -no-kvm-irqchip tests shortly.
>
> Please review carefully, esp. the first patch.  Any missing kvm_make_request()
> there may result in a hung guest.
>

This is now merged, with the change pointed out by Marcelo.  Windows XP 
x64 fails installation without

(vmx.c handle_cr())
          case 8: {
                  u8 cr8_prev = kvm_get_cr8(vcpu);
                  u8 cr8 = kvm_register_read(vcpu, reg);
                  kvm_set_cr8(vcpu, cr8);
                  skip_emulated_instruction(vcpu);
                  if (irqchip_in_kernel(vcpu->kvm))
                      return 1;
-                if (cr8_prev <= cr8)
-                    return 1;
                  vcpu->run->exit_reason = KVM_EXIT_SET_TPR;
                  return 0;
              }

Which doesn't make any sense (anyone?).  The failure is present even 
without the patchset, and is fixed by the same hack, so a regression was 
not introduced.
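A plausible reading of the removed check is that it was an optimization:
raising the TPR can only mask interrupts, never unmask one, so only a
lowering write should need a KVM_EXIT_SET_TPR exit.  That reasoning can
be sketched as follows (the helper name is hypothetical, not kvm code):

```c
#include <assert.h>

/* Hypothetical helper expressing the intent of the removed check: with a
 * userspace irqchip, a cr8 (TPR) write only needs to exit to userspace if
 * it lowered the TPR, since only a lower TPR can make a previously masked
 * interrupt deliverable. */
static int needs_tpr_exit(unsigned int cr8_prev, unsigned int cr8)
{
	return cr8_prev > cr8;  /* TPR lowered: re-evaluate pending irqs */
}
```

The discussion that follows suggests why skipping the exit still breaks:
userspace is left with a stale TPR and may inject an interrupt that the
guest's new TPR blocks.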




* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-09-16 13:35 ` Avi Kivity
@ 2010-09-17 19:12   ` Marcelo Tosatti
  2010-09-19  9:25     ` Avi Kivity
  0 siblings, 1 reply; 14+ messages in thread
From: Marcelo Tosatti @ 2010-09-17 19:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

On Thu, Sep 16, 2010 at 03:35:19PM +0200, Avi Kivity wrote:
>  On 08/30/2010 02:36 PM, Avi Kivity wrote:
> >This patchset changes interrupt injection to be done from normal process
> >context instead of interrupts disabled context.  This is useful for real
> >mode interrupt injection on Intel without the current hacks (injecting as
> >a software interrupt of a vm86 task), reducing latencies, and later, for
> >allowing nested virtualization code to use kvm_read_guest()/kvm_write_guest()
> >instead of kmap() to access the guest vmcb/vmcs.
> >
> >Seems to survive a hack that cancels every 16th entry, after injection has
> >already taken place.
> >
> >With the PIC reset fix posted earlier, this passes autotest on both AMD and
> >Intel, with in-kernel irqchip.  I'll run -no-kvm-irqchip tests shortly.
> >
> >Please review carefully, esp. the first patch.  Any missing kvm_make_request()
> >there may result in a hung guest.
> >
> 
> This is now merged, with the change pointed out by Marcelo.  Windows
> XP x64 fails installation without
> 
> (vmx.c handle_cr())
>          case 8: {
>                  u8 cr8_prev = kvm_get_cr8(vcpu);
>                  u8 cr8 = kvm_register_read(vcpu, reg);
>                  kvm_set_cr8(vcpu, cr8);
>                  skip_emulated_instruction(vcpu);
>                  if (irqchip_in_kernel(vcpu->kvm))
>                      return 1;
> -                if (cr8_prev <= cr8)
> -                    return 1;
>                  vcpu->run->exit_reason = KVM_EXIT_SET_TPR;
>                  return 0;
>              }
> 
> Which doesn't make any sense (anyone?).  The failure is present even
> without the patchset, and is fixed by the same hack, so a regression
> was not introduced.

If userspace does not have an up-to-date TPR value, it can signal an
interrupt that is now blocked? Say:

- cr8 write 0
- cr8 write 5
- no exit to userspace
- userspace signals interrupt with priority 4 because it knows about tpr == 0.
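The scenario can be made concrete with a tiny model, assuming the usual
APIC rule that an interrupt is deliverable only when its priority class
exceeds the TPR (the helper is illustrative, not the kvm implementation):

```c
#include <assert.h>

/* Illustrative APIC delivery rule: an interrupt's priority class must
 * exceed the task priority (TPR/CR8) for it to be deliverable. */
static int deliverable(unsigned int prio, unsigned int tpr)
{
	return prio > tpr;
}
```

Walking through the sequence above: the guest raises cr8 from 0 to 5 with
no exit, so userspace still bases its decision on tpr == 0 and signals a
priority-4 interrupt that the guest's real TPR now blocks.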



* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-09-17 19:12   ` Marcelo Tosatti
@ 2010-09-19  9:25     ` Avi Kivity
  2010-09-19  9:28       ` Avi Kivity
  0 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2010-09-19  9:25 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov

  On 09/17/2010 09:12 PM, Marcelo Tosatti wrote:
> >  This is now merged, with the change pointed out by Marcelo.  Windows
> >  XP x64 fails installation without
> >
> >  (vmx.c handle_cr())
> >           case 8: {
> >                   u8 cr8_prev = kvm_get_cr8(vcpu);
> >                   u8 cr8 = kvm_register_read(vcpu, reg);
> >                   kvm_set_cr8(vcpu, cr8);
> >                   skip_emulated_instruction(vcpu);
> >                   if (irqchip_in_kernel(vcpu->kvm))
> >                       return 1;
> >  -                if (cr8_prev <= cr8)
> >  -                    return 1;
> >                   vcpu->run->exit_reason = KVM_EXIT_SET_TPR;
> >                   return 0;
> >               }
> >
> >  Which doesn't make any sense (anyone?).  The failure is present even
> >  without the patchset, and is fixed by the same hack, so a regression
> >  was not introduced.
>
> If userspace does not have an up-to-date TPR value, it can signal an
> interrupt that is now blocked? Say:
>
> - cr8 write 0
> - cr8 write 5
> - no exit to userspace
> - userspace signals interrupt with priority 4 because it knows about tpr == 0.
>

To signal an interrupt, userspace needs to force an exit.  The exit will 
sync cr8.

However, it may be that the decision to inject the interrupt is taken 
before the exit, so the interrupt is injected even though it shouldn't be.

Let's assume that this is so (I'll check).  Is this a bug in the kernel 
or userspace?

My feeling is that this is a kernel bug, and the optimization should be 
removed.




* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-09-19  9:25     ` Avi Kivity
@ 2010-09-19  9:28       ` Avi Kivity
  2010-09-19 10:09         ` Avi Kivity
  0 siblings, 1 reply; 14+ messages in thread
From: Avi Kivity @ 2010-09-19  9:28 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov

  On 09/19/2010 11:25 AM, Avi Kivity wrote:
>
> Let's assume that this is so (I'll check).

It's trivially so.  If a completion causes an interrupt to be raised, 
the vcpu's apic code is executed in the iothread context.





* Re: [PATCH v4 0/6] Nonatomic interrupt injection
  2010-09-19  9:28       ` Avi Kivity
@ 2010-09-19 10:09         ` Avi Kivity
  0 siblings, 0 replies; 14+ messages in thread
From: Avi Kivity @ 2010-09-19 10:09 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov

  On 09/19/2010 11:28 AM, Avi Kivity wrote:
>  On 09/19/2010 11:25 AM, Avi Kivity wrote:
>>
>> Let's assume that this is so (I'll check).
>
> It's trivially so.  If a completion causes an interrupt to be raised, 
> the vcpu's apic code is executed in the iothread context.
>
>

However, that's a bug even with the optimization removed:

iothread                vcpu
ioapic interrupt
apic: set isr
                         mov $large_value, cr8
on_vcpu(inject)
                         inject interrupt even though cr8 inhibits it

Setting the isr and injecting the interrupt should be an atomic
operation (easily done by running the apic code on the vcpu thread).
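Under a simplified model, the fix amounts to making the injection
decision on the vcpu thread, where the current cr8 is visible at the
moment the isr is set (the names apic_model and set_isr_and_inject are
illustrative, not kvm code):

```c
#include <assert.h>

/* Illustrative model: the ISR update and the cr8 check happen as one
 * step on the vcpu thread, so a cr8 write that precedes injection is
 * always observed. */
struct apic_model {
	unsigned int isr_prio;  /* priority class being set in the ISR */
	unsigned int cr8;       /* task priority register */
};

/* Set the isr and decide injection atomically; returns nonzero only if
 * the interrupt may actually be injected. */
static int set_isr_and_inject(struct apic_model *a, unsigned int prio)
{
	a->isr_prio = prio;     /* apic: set isr */
	return prio > a->cr8;   /* cr8 inhibits equal/lower classes */
}
```

In the racy ordering from the diagram, the iothread would have made the
injection decision before seeing the vcpu's cr8 write; here the decision
cannot be separated from the cr8 it must respect.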



