From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH 20/24] Correct handling of interrupt injection
Date: Mon, 14 Jun 2010 15:29:44 +0300
Message-ID: <4C1620B8.6010104@redhat.com>
References: <1276431753-nyh@il.ibm.com> <201006131232.o5DCWmCr013127@rice.haifa.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org
To: "Nadav Har'El"
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:24602 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752442Ab0FNM3r (ORCPT ); Mon, 14 Jun 2010 08:29:47 -0400
In-Reply-To: <201006131232.o5DCWmCr013127@rice.haifa.ibm.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On 06/13/2010 03:32 PM, Nadav Har'El wrote:
> When KVM wants to inject an interrupt, the guest should think a real interrupt
> has happened. Normally (in the non-nested case) this means checking that the
> guest doesn't block interrupts (and if it does, inject when it doesn't - using
> the "interrupt window" VMX mechanism), and setting up the appropriate VMCS
> fields for the guest to receive the interrupt.
>
> However, when we are running a nested guest (L2) and its hypervisor (L1)
> requested exits on interrupts (as most hypervisors do), the most efficient
> thing to do is to exit L2, telling L1 that the exit was caused by an
> interrupt, the one we were injecting; only when L1 asked not to be notified
> of interrupts should we inject it directly to the running guest L2 (i.e.,
> the normal code path).
>
> However, properly doing what is described above requires invasive changes to
> the flow of the existing code, which we elected not to do in this stage.
> Instead we do something more simplistic and less efficient: we modify
> vmx_interrupt_allowed(), which kvm calls to see if it can inject the interrupt
> now, to exit from L2 to L1 before continuing the normal code.
> The normal kvm code then notices that L1 is blocking interrupts, and sets
> the interrupt window to inject the interrupt later to L1. Shortly after,
> L1 gets the interrupt while it is itself running, not as an exit from L2.
> The cost is an extra L1 exit (the interrupt window).

That's a little sad.

>  	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
>  	cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_INTR_PENDING;
> @@ -3718,6 +3738,13 @@ static int nested_vmx_vmexit(struct kvm_
>  
>  static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
>  {
> +	if (to_vmx(vcpu)->nested.nested_mode && nested_exit_on_intr(vcpu)) {
> +		if (to_vmx(vcpu)->nested.nested_run_pending)
> +			return 0;
> +		nested_vmx_vmexit(vcpu, true);
> +		/* fall through to normal code, but now in L1, not L2 */
> +	}
> +

What exit is reported here?

-- 
error compiling committee.c: too many arguments to function