From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [PATCH 22/24] Correct handling of idt vectoring info
Date: Thu, 17 Jun 2010 14:58:03 +0300
Message-ID: <20100617115803.GP523@redhat.com>
References: <1276431753-nyh@il.ibm.com>
 <201006131233.o5DCXoel013156@rice.haifa.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: avi@redhat.com, kvm@vger.kernel.org
To: "Nadav Har'El"
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:22917 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752695Ab0FQL6I
 (ORCPT ); Thu, 17 Jun 2010 07:58:08 -0400
Content-Disposition: inline
In-Reply-To: <201006131233.o5DCXoel013156@rice.haifa.ibm.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Sun, Jun 13, 2010 at 03:33:50PM +0300, Nadav Har'El wrote:
> This patch adds correct handling of IDT_VECTORING_INFO_FIELD for the nested
> case.
>
> When a guest exits while handling an interrupt or exception, we get this
> information in IDT_VECTORING_INFO_FIELD in the VMCS. When L2 exits to L1,
> there's nothing we need to do, because L1 will see this field in vmcs12, and
> handle it itself. However, when L2 exits and L0 handles the exit itself and
> plans to return to L2, L0 must inject this event to L2.
>
> In the normal non-nested case, the idt_vectoring_info case is treated after
> the exit. However, in the nested case a decision of whether to return to L2
This is not correct. In the normal non-nested case the idt_vectoring_info is
parsed, after the exit, into a vmx/svm-independent data structure (which is
saved/restored during VM migration). The reinjection happens on the vmentry
path.
> or L1 also happens during the injection phase (see the previous patches), so
> in the nested case we have to treat the idt_vectoring_info right after the
> injection, i.e., in the beginning of vmx_vcpu_run, which is the first time
> we know for sure if we're staying in L2 (i.e., nested_mode is true).
>
> Signed-off-by: Nadav Har'El
> ---
> --- .before/arch/x86/kvm/vmx.c	2010-06-13 15:01:30.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2010-06-13 15:01:30.000000000 +0300
> @@ -320,6 +320,10 @@ struct nested_vmx {
>  	struct vmcs *l1_vmcs;
>  	/* L2 must run next, and mustn't decide to exit to L1. */
>  	bool nested_run_pending;
> +	/* true if last exit was of L2, and had a valid idt_vectoring_info */
> +	bool valid_idt_vectoring_info;
> +	/* These are saved if valid_idt_vectoring_info */
> +	u32 vm_exit_instruction_len, idt_vectoring_error_code;
>  };
>
>  enum vmcs_field_type {
> @@ -5460,6 +5464,22 @@ static void fixup_rmode_irq(struct vcpu_
>  		| vmx->rmode.irq.vector;
>  }
>
> +static void nested_handle_valid_idt_vectoring_info(struct vcpu_vmx *vmx)
> +{
> +	int irq  = vmx->idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
> +	int type = vmx->idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
> +	int errCodeValid = vmx->idt_vectoring_info &
> +		VECTORING_INFO_DELIVER_CODE_MASK;
> +	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
> +		irq | type | INTR_INFO_VALID_MASK | errCodeValid);
> +
> +	vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
> +		vmx->nested.vm_exit_instruction_len);
> +	if (errCodeValid)
> +		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
> +			vmx->nested.idt_vectoring_error_code);
> +}
> +
Why can't you do this using the existing exception/nmi/interrupt queues that
we have? Instead, you effectively disable vmx_complete_interrupts() in patch
18 when in nested mode, and add logically the same code in this patch: i.e.,
after the exit you save the info about the idt event into nested_vmx and
reinject it on vmentry.
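For reference, the existing non-nested path already does roughly this on the
exit side (a simplified sketch in the spirit of vmx_complete_interrupts();
the NMI-unblocking logic is elided and the function name here is made up,
but the kvm_queue_*() helpers, the INTR_TYPE_* values and the
VECTORING_INFO_* masks are the existing ones from arch/x86/kvm):

/*
 * Sketch only: parse idt_vectoring_info into the arch-independent
 * event queues instead of caching raw VMCS fields in nested_vmx.
 */
static void parse_idt_vectoring_info(struct vcpu_vmx *vmx)
{
	u32 info = vmx->idt_vectoring_info;
	u8 vector = info & VECTORING_INFO_VECTOR_MASK;
	u32 type = info & VECTORING_INFO_TYPE_MASK;
	struct kvm_vcpu *vcpu = &vmx->vcpu;

	/* Start from a clean slate, as vmx_complete_interrupts() does. */
	vcpu->arch.nmi_injected = false;
	kvm_clear_exception_queue(vcpu);
	kvm_clear_interrupt_queue(vcpu);

	if (!(info & VECTORING_INFO_VALID_MASK))
		return;

	switch (type) {
	case INTR_TYPE_NMI_INTR:
		vcpu->arch.nmi_injected = true;
		break;
	case INTR_TYPE_SOFT_EXCEPTION:
		vcpu->arch.event_exit_inst_len =
			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
		/* fall through */
	case INTR_TYPE_HARD_EXCEPTION:
		if (info & VECTORING_INFO_DELIVER_CODE_MASK)
			kvm_queue_exception_e(vcpu, vector,
				vmcs_read32(IDT_VECTORING_ERROR_CODE));
		else
			kvm_queue_exception(vcpu, vector);
		break;
	case INTR_TYPE_SOFT_INTR:
		vcpu->arch.event_exit_inst_len =
			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
		/* fall through */
	case INTR_TYPE_EXT_INTR:
		kvm_queue_interrupt(vcpu, vector,
			type == INTR_TYPE_SOFT_INTR);
		break;
	default:
		break;
	}
}

The generic x86 code then reinjects whatever is pending in those queues on
the next entry (inject_pending_event()), and because the queues live in
vmx-independent vcpu state they are saved/restored across migration. The
nested path could reuse that mechanism.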
>  static inline void sync_cached_regs_to_vmcs(struct kvm_vcpu *vcpu)
>  {
>  	if (test_bit(VCPU_REGS_RSP, (unsigned long *)&vcpu->arch.regs_dirty))
> @@ -5481,6 +5501,9 @@ static void vmx_vcpu_run(struct kvm_vcpu
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> +	if (vmx->nested.nested_mode && vmx->nested.valid_idt_vectoring_info)
> +		nested_handle_valid_idt_vectoring_info(vmx);
> +
>  	/* Record the guest's net vcpu time for enforced NMI injections. */
>  	if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked))
>  		vmx->entry_time = ktime_get();
> @@ -5600,6 +5623,16 @@ static void vmx_vcpu_run(struct kvm_vcpu
>  		| (1 << VCPU_EXREG_PDPTR));
>
>  	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
> +
> +	vmx->nested.valid_idt_vectoring_info = vmx->nested.nested_mode &&
> +		(vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK);
> +	if (vmx->nested.valid_idt_vectoring_info) {
> +		vmx->nested.vm_exit_instruction_len =
> +			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
> +		vmx->nested.idt_vectoring_error_code =
> +			vmcs_read32(IDT_VECTORING_ERROR_CODE);
> +	}
> +
>  	if (vmx->rmode.irq.pending)
>  		fixup_rmode_irq(vmx);
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
			Gleb.