From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tian, Kevin"
Subject: RE: [PATCH 23/31] nVMX: Correct handling of interrupt injection
Date: Wed, 25 May 2011 16:39:49 +0800
Message-ID: <625BA99ED14B2D499DC4E29D8138F1505C9BFA3A70@shsmsx502.ccr.corp.intel.com>
References: <1305575004-nyh@il.ibm.com> <201105161955.p4GJtgKc001996@rice.haifa.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 8BIT
Cc: "gleb@redhat.com", "avi@redhat.com"
To: Nadav Har'El, "kvm@vger.kernel.org"
Return-path:
Received: from mga09.intel.com ([134.134.136.24]:25958 "EHLO mga09.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932782Ab1EYIkU
	convert rfc822-to-8bit (ORCPT ); Wed, 25 May 2011 04:40:20 -0400
In-Reply-To: <201105161955.p4GJtgKc001996@rice.haifa.ibm.com>
Content-Language: en-US
Sender: kvm-owner@vger.kernel.org
List-ID:

> From: Nadav Har'El
> Sent: Tuesday, May 17, 2011 3:56 AM
>
> The code in this patch correctly emulates external-interrupt injection
> while a nested guest L2 is running.
>
> Because of this code's relative un-obviousness, I include here a
> longer-than-usual justification for what it does - much longer than the
> code itself ;-)
>
> To understand how to correctly emulate interrupt injection while L2 is
> running, let's look first at what we need to emulate: How would things
> look if the extra L0 hypervisor layer is removed, and instead of L0
> injecting an interrupt, we had hardware delivering an interrupt?
>
> Now we have L1 running on bare metal with a guest L2, and the hardware
> generates an interrupt. Assuming that L1 set PIN_BASED_EXT_INTR_MASK to 1,
> and VM_EXIT_ACK_INTR_ON_EXIT to 0 (we'll revisit these assumptions below),
> what happens now is this: The processor exits from L2 to L1, with an
> external-interrupt exit reason but without an interrupt vector. L1 runs,
> with interrupts disabled, and it doesn't yet know what the interrupt was.
> Soon after, it enables interrupts and only at that moment, it gets the
> interrupt from the processor. When L1 is KVM, Linux handles this
> interrupt.
>
> Now we need exactly the same thing to happen when that L1->L2 system
> runs on top of L0, instead of real hardware. This is how we do this:
>
> When L0 wants to inject an interrupt, it needs to exit from L2 to L1,
> with external-interrupt exit reason (with an invalid interrupt vector),
> and run L1. Just like in the bare metal case, it likely can't deliver
> the interrupt to L1 now because L1 is running with interrupts disabled,
> in which case it turns on the interrupt window when running L1 after the
> exit. L1 will soon enable interrupts, and at that point L0 will gain
> control again and inject the interrupt to L1.
>
> Finally, there is an extra complication in the code: when
> nested_run_pending, we cannot return to L1 now, and must launch L2. We
> need to remember the interrupt we wanted to inject (and not clear it
> now), and do it on the next exit.
>
> The above explanation shows that the relative strangeness of the nested
> interrupt injection code in this patch, and the extra interrupt-window
> exit incurred, are in fact necessary for accurate emulation, and are not
> just an unoptimized implementation.
>
> Let's revisit now the two assumptions made above:
>
> If L1 turns off PIN_BASED_EXT_INTR_MASK (no hypervisor that I know
> does, by the way), things are simple: L0 may inject the interrupt
> directly to the L2 guest - using the normal code path that injects to
> any guest. We support this case in the code below.
>
> If L1 turns on VM_EXIT_ACK_INTR_ON_EXIT (again, no hypervisor that I
> know does), things look very different from the description above: L1
> expects

A type-1 (bare-metal) hypervisor such as Xen may enable this bit.
This bit is really aimed at an L2 hypervisor: an L2 hypervisor normally
finds it tricky to touch the generic interrupt logic, so it is better not
to ack the interrupt until interrupts are enabled, at which point the
hardware will vector to the kernel's interrupt handler automatically.

> to see an exit from L2 with the interrupt vector already filled in the
> exit information, and does not expect to be interrupted again with this
> interrupt. The current code does not (yet) support this case, so we do
> not allow the VM_EXIT_ACK_INTR_ON_EXIT exit-control to be turned on by
> L1.

Supporting it would just mean filling the interrupt-vector field of the
exit information with the highest unmasked vector from the pending vIRR.

>
> Signed-off-by: Nadav Har'El
> ---
>  arch/x86/kvm/vmx.c |   36 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
>
> --- .before/arch/x86/kvm/vmx.c	2011-05-16 22:36:49.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2011-05-16 22:36:49.000000000 +0300
> @@ -1788,6 +1788,7 @@ static __init void nested_vmx_setup_ctls
>
>  	/* exit controls */
>  	nested_vmx_exit_ctls_low = 0;
> +	/* Note that guest use of VM_EXIT_ACK_INTR_ON_EXIT is not supported. */
>  #ifdef CONFIG_X86_64
>  	nested_vmx_exit_ctls_high = VM_EXIT_HOST_ADDR_SPACE_SIZE;
>  #else
> @@ -3733,9 +3734,25 @@ out:
>  	return ret;
>  }
>
> +/*
> + * In nested virtualization, check if L1 asked to exit on external interrupts.
> + * For most existing hypervisors, this will always return true.
> + */
> +static bool nested_exit_on_intr(struct kvm_vcpu *vcpu)
> +{
> +	return get_vmcs12(vcpu)->pin_based_vm_exec_control &
> +		PIN_BASED_EXT_INTR_MASK;
> +}
> +

This could use a common wrapper similar to nested_cpu_has...

Thanks,
Kevin
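[Editor's note: a minimal, standalone sketch of the kind of common wrapper suggested above. The struct, the constant value, and the wrapper name below are simplified stand-ins for KVM's real vmcs12 and VMX definitions, not the actual kernel API; in real KVM code the function would take a struct kvm_vcpu * and call get_vmcs12().]

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for KVM's PIN_BASED_EXT_INTR_MASK (bit 0). */
#define PIN_BASED_EXT_INTR_MASK (1u << 0)

/* Simplified stand-in for the vmcs12 structure L1 programs for L2. */
struct vmcs12 {
	uint32_t pin_based_vm_exec_control;
};

/*
 * A nested_cpu_has()-style common wrapper: test one control bit in the
 * pin-based VM-execution controls that L1 set up in vmcs12.
 */
static bool nested_cpu_has_pin_based(const struct vmcs12 *vmcs12,
				     uint32_t bit)
{
	return (vmcs12->pin_based_vm_exec_control & bit) != 0;
}

/* nested_exit_on_intr() then becomes a one-line user of the wrapper. */
static bool nested_exit_on_intr(const struct vmcs12 *vmcs12)
{
	return nested_cpu_has_pin_based(vmcs12, PIN_BASED_EXT_INTR_MASK);
}
```

The point of the refactoring is that every "did L1 ask for this control?" check goes through one helper instead of open-coding the field-and-mask test at each call site.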