From: Liran Alon
Subject: [PATCH v3 03/11] KVM: nVMX: Re-evaluate L1 pending events when running L2 and L1 got posted-interrupt
Date: Sun, 24 Dec 2017 18:12:55 +0200
Message-ID: <1514131983-24305-4-git-send-email-liran.alon@oracle.com>
References: <1514131983-24305-1-git-send-email-liran.alon@oracle.com>
In-Reply-To: <1514131983-24305-1-git-send-email-liran.alon@oracle.com>
To: pbonzini@redhat.com, rkrcmar@redhat.com, kvm@vger.kernel.org
Cc: jmattson@google.com, wanpeng.li@hotmail.com, idan.brown@oracle.com, Liran Alon, Liam Merwick, Konrad Rzeszutek Wilk

If a posted interrupt is delivered to the CPU while it is in host mode (outside the guest), posted-interrupt delivery is done by calling sync_pir_to_irr() at vmentry, after interrupts are disabled. sync_pir_to_irr() checks the ON bit of vmx->pi_desc.control and, if it is set, syncs vmx->pi_desc.pir to the IRR and then updates RVI to ensure that virtual-interrupt-delivery will dispatch the interrupt to the guest.

However, it is possible that L1 receives a posted interrupt while the CPU runs in host mode and is about to enter L2. In this case, the call to sync_pir_to_irr() will indeed update L1's APIC IRR, but vcpu_enter_guest() will then just resume into the L2 guest without re-evaluating whether it should exit from L2 to L1 as a result of this new pending L1 event.

To address this case, if sync_pir_to_irr() finds a new injectable L1 interrupt and the CPU is running L2, we force an exit from GUEST_MODE. This results in another iteration of the vcpu_run() loop, which calls kvm_vcpu_running(), which calls check_nested_events(), which handles the pending L1 event properly.

Signed-off-by: Liran Alon
Reviewed-by: Nikita Leshenko
Reviewed-by: Krish Sadhukhan
Reviewed-by: Liam Merwick
Signed-off-by: Liam Merwick
Signed-off-by: Konrad Rzeszutek Wilk
---
 arch/x86/kvm/vmx.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 325608a1ed65..d307bf26462a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9032,6 +9032,7 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int max_irr;
+	bool max_irr_updated;
 
 	WARN_ON(!vcpu->arch.apicv_active);
 	if (pi_test_on(&vmx->pi_desc)) {
@@ -9041,7 +9042,16 @@ static int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu)
 		 * But on x86 this is just a compiler barrier anyway.
 		 */
 		smp_mb__after_atomic();
-		kvm_apic_update_irr(vcpu, vmx->pi_desc.pir, &max_irr);
+		max_irr_updated =
+			kvm_apic_update_irr(vcpu, vmx->pi_desc.pir, &max_irr);
+
+		/*
+		 * If we are running L2 and L1 has a new pending interrupt
+		 * which can be injected, we should re-evaluate
+		 * what should be done with this new L1 interrupt.
+		 */
+		if (is_guest_mode(vcpu) && max_irr_updated)
+			kvm_vcpu_exiting_guest_mode(vcpu);
 	} else {
 		max_irr = kvm_lapic_find_highest_irr(vcpu);
 	}
-- 
1.9.1
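
For readers unfamiliar with the KVM run loop, below is a minimal, standalone C mock-up of the flow the commit message describes. It is not kernel code: every type and function in it (mock_vcpu, enter_guest(), the demo loop in main(), and so on) is a simplified stand-in invented for illustration, and the real kvm_vcpu, vcpu_run(), vcpu_enter_guest(), kvm_vcpu_running() and check_nested_events() look quite different. It only illustrates the idea that forcing the vCPU out of GUEST_MODE aborts the imminent entry, so the next loop iteration re-evaluates pending L1 events before the guest is resumed.

/*
 * Standalone mock-up, NOT kernel code: all types and helpers here are
 * simplified stand-ins for the KVM internals named in the commit message.
 * It only shows the control flow: a posted interrupt that lands right
 * before entering L2 forces the vCPU out of GUEST_MODE, the entry is
 * aborted, and the next run-loop iteration lets check_nested_events()
 * perform the L2->L1 exit before the guest is resumed.
 */
#include <stdbool.h>
#include <stdio.h>

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

struct mock_vcpu {
	enum vcpu_mode mode;
	bool running_l2;	/* stand-in for is_guest_mode() */
	bool pir_on;		/* posted-interrupt descriptor ON bit */
	bool l1_irr_pending;	/* L1 APIC IRR holds an injectable interrupt */
};

/* Stand-in for kvm_vcpu_exiting_guest_mode(): request that entry be aborted. */
static void exiting_guest_mode(struct mock_vcpu *v)
{
	if (v->mode == IN_GUEST_MODE)
		v->mode = EXITING_GUEST_MODE;
}

/* Stand-in for vmx_sync_pir_to_irr() with this patch applied. */
static void sync_pir_to_irr(struct mock_vcpu *v)
{
	if (v->pir_on) {
		v->pir_on = false;
		v->l1_irr_pending = true;	/* kvm_apic_update_irr() */
		if (v->running_l2)		/* the check this patch adds */
			exiting_guest_mode(v);
	}
}

/* Stand-in for check_nested_events(): emulate the L2->L1 exit. */
static void check_nested_events(struct mock_vcpu *v)
{
	if (v->running_l2 && v->l1_irr_pending) {
		printf("L2->L1 exit to handle the pending L1 interrupt\n");
		v->running_l2 = false;
	}
}

/* Stand-in for vcpu_enter_guest(): abort the entry if an exit was requested. */
static int enter_guest(struct mock_vcpu *v, bool posted_irq_arrives_now)
{
	v->mode = IN_GUEST_MODE;
	if (posted_irq_arrives_now)
		v->pir_on = true;	/* L1's posted interrupt lands here */
	sync_pir_to_irr(v);		/* last sync before resuming the guest */
	if (v->mode == EXITING_GUEST_MODE) {
		v->mode = OUTSIDE_GUEST_MODE;
		return 1;		/* cancel this entry, go around again */
	}
	printf("resumed %s\n", v->running_l2 ? "L2" : "L1");
	v->mode = OUTSIDE_GUEST_MODE;
	return 0;			/* stop the demo loop */
}

int main(void)
{
	struct mock_vcpu v = { .mode = OUTSIDE_GUEST_MODE, .running_l2 = true };
	bool first = true;

	/*
	 * Stand-in for the vcpu_run() loop; kvm_vcpu_running() calls
	 * check_nested_events() before each entry.
	 */
	for (;;) {
		check_nested_events(&v);
		if (!enter_guest(&v, first))
			break;
		first = false;
	}
	return 0;
}

Running the mock-up prints the L2->L1 exit followed by "resumed L1": the new pending L1 interrupt is handled before the guest is resumed. Without the forced exit in sync_pir_to_irr(), the first entry would have resumed L2 directly, which is the bug the patch addresses.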