From: Wanpeng Li
Date: Fri, 14 Jul 2017 20:22:03 +0800
Subject: Re: [PATCH] KVM: VMX: Fix losing blocking by NMI in the guest interruptibility-state field
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm, Radim Krčmář, Wanpeng Li
References: <1500025145-96878-1-git-send-email-wanpeng.li@hotmail.com>

2017-07-14 19:36 GMT+08:00 Paolo Bonzini:
> On 14/07/2017 11:39, Wanpeng Li wrote:
>> However, commit 0be9c7a89f750 ("KVM: VMX: set "blocked by NMI" flag if EPT
>> violation happens during IRET from NMI") only fixes the case where the fault
>> is an EPT violation.  This patch fixes the same problem for page faults taken
>> through the shadow page tables.
>>
>> Cc: Paolo Bonzini
>> Cc: Radim Krčmář
>> Signed-off-by: Wanpeng Li
>> ---
>>  arch/x86/kvm/vmx.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 84e62ac..32ca063 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -5709,6 +5709,11 @@ static int handle_exception(struct kvm_vcpu *vcpu)
>>  	}
>>
>>  	if (is_page_fault(intr_info)) {
>> +
>> +		if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
>> +		    (intr_info & INTR_INFO_UNBLOCK_NMI))
>> +			vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
>> +
>>  		cr2 = vmcs_readl(EXIT_QUALIFICATION);
>>  		/* EPT won't cause page fault directly */
>>  		WARN_ON_ONCE(!vcpu->arch.apf.host_apf_reason && enable_ept);
>
> vmx_recover_nmi_blocking is supposed to do the same.  EPT and PML-full exits
> need separate code because they store bit 12 in the exit qualification rather
> than in the VM-exit interruption info.  I think the bug is in the handling of
> vmx->nmi_known_unmasked.
>
> The following patch fixes it for me; can you test it too?
>
> Thanks,
>
> Paolo
>
> --------- 8< -------------------
> From: Paolo Bonzini
> Subject: [PATCH] KVM: nVMX: track NMI blocking state separately for each VMCS
>
> vmx_recover_nmi_blocking uses a cached value of the guest
> interruptibility info, which is stored in vmx->nmi_known_unmasked.
> vmx_recover_nmi_blocking is run for both normal and nested guests,
> so the cached value must be per-VMCS.
>
> This fixes eventinj.flat in a nested non-EPT environment.  With EPT it
> works, because the EPT violation handler doesn't have the
> vmx->nmi_known_unmasked optimization (it is unnecessary because, unlike
> vmx_recover_nmi_blocking, it can just look at the exit qualification).
>
> Thanks to Wanpeng Li for debugging the testcase and providing an initial
> patch.
>
> Signed-off-by: Paolo Bonzini

Looks good to me, thanks for the patch.
Regards,
Wanpeng Li

> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 32db3f5dce7f..504df356a10c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -198,7 +198,8 @@ struct loaded_vmcs {
>  	struct vmcs *vmcs;
>  	struct vmcs *shadow_vmcs;
>  	int cpu;
> -	int launched;
> +	bool launched;
> +	bool nmi_known_unmasked;
>  	struct list_head loaded_vmcss_on_cpu_link;
>  };
>
> @@ -5497,10 +5498,8 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> -	if (!is_guest_mode(vcpu)) {
> -		++vcpu->stat.nmi_injections;
> -		vmx->nmi_known_unmasked = false;
> -	}
> +	++vcpu->stat.nmi_injections;
> +	vmx->loaded_vmcs->nmi_known_unmasked = false;
>
>  	if (vmx->rmode.vm86_active) {
>  		if (kvm_inject_realmode_interrupt(vcpu, NMI_VECTOR, 0) != EMULATE_DONE)
> @@ -5514,16 +5513,21 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
>
>  static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
>  {
> -	if (to_vmx(vcpu)->nmi_known_unmasked)
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	bool masked;
> +
> +	if (vmx->loaded_vmcs->nmi_known_unmasked)
>  		return false;
> -	return vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_NMI;
> +	masked = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_NMI;
> +	vmx->loaded_vmcs->nmi_known_unmasked = !masked;
> +	return masked;
>  }
>
>  static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> -	vmx->nmi_known_unmasked = !masked;
> +	vmx->loaded_vmcs->nmi_known_unmasked = !masked;
>  	if (masked)
>  		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
>  			      GUEST_INTR_STATE_NMI);
> @@ -8719,7 +8723,7 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>
>  	idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK;
>
> -	if (vmx->nmi_known_unmasked)
> +	if (vmx->loaded_vmcs->nmi_known_unmasked)
>  		return;
>  	/*
>  	 * Can't use vmx->exit_intr_info since we're not sure what
> @@ -8743,7 +8747,7 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>  			vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
>  				      GUEST_INTR_STATE_NMI);
>  	else
> -		vmx->nmi_known_unmasked =
> +		vmx->loaded_vmcs->nmi_known_unmasked =
>  			!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO)
>  			  & GUEST_INTR_STATE_NMI);
>  }
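
The rule Paolo describes above (EPT-violation and PML-full exits report "NMI
unblocking due to IRET" in bit 12 of the exit qualification, while other exits
report it in bit 12 of the VM-exit interruption info) can be illustrated with a
minimal user-space C sketch.  This is not kernel code: the constant values are
assumed to mirror arch/x86/include/asm/vmx.h, must_restore_nmi_blocking() is a
made-up helper, and the sketch ignores the interruption-info valid bit and the
double-fault quirk that the real vmx_recover_nmi_blocking also handles.

/*
 * Standalone sketch (not kernel code) of where the "NMI unblocking due
 * to IRET" hint lives on different VM exits.  Constant values are
 * assumed to match arch/x86/include/asm/vmx.h.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EXIT_REASON_EXCEPTION_NMI	0
#define EXIT_REASON_EPT_VIOLATION	48
#define EXIT_REASON_PML_FULL		62

#define INTR_INFO_UNBLOCK_NMI		(1u << 12)	/* bit 12: NMI unblocking due to IRET */
#define VECTORING_INFO_VALID_MASK	(1u << 31)	/* IDT-vectoring info field is valid */
#define GUEST_INTR_STATE_NMI		(1u << 3)	/* "blocking by NMI" interruptibility bit */

/*
 * Hypothetical helper: decide whether "blocking by NMI" must be set again
 * in the guest interruptibility-state field because the VM exit interrupted
 * an IRET that was executing while NMIs were blocked.  EPT-violation and
 * PML-full exits carry the hint in bit 12 of the exit qualification; other
 * exits carry it in bit 12 of the VM-exit interruption info.
 */
static bool must_restore_nmi_blocking(uint32_t exit_reason,
				      uint32_t idt_vectoring_info,
				      uint32_t exit_intr_info,
				      uint64_t exit_qualification)
{
	uint32_t unblock;

	/* If the exit occurred while delivering an event, NMI blocking is
	 * reconstructed from the IDT-vectoring info instead. */
	if (idt_vectoring_info & VECTORING_INFO_VALID_MASK)
		return false;

	if (exit_reason == EXIT_REASON_EPT_VIOLATION ||
	    exit_reason == EXIT_REASON_PML_FULL)
		unblock = (uint32_t)exit_qualification & INTR_INFO_UNBLOCK_NMI;
	else
		unblock = exit_intr_info & INTR_INFO_UNBLOCK_NMI;

	return unblock != 0;
}

int main(void)
{
	/* Example: a page fault (exception exit) taken during the IRET that
	 * ends an NMI handler -> blocking by NMI must be restored. */
	if (must_restore_nmi_blocking(EXIT_REASON_EXCEPTION_NMI, 0,
				      INTR_INFO_UNBLOCK_NMI, 0))
		printf("restore blocking: set GUEST_INTR_STATE_NMI (0x%x)\n",
		       GUEST_INTR_STATE_NMI);
	return 0;
}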