From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Oliver Upton <oupton@google.com>,
	Jim Mattson <jmattson@google.com>,
	Peter Shier <pshier@google.com>
Subject: [PATCH v2 06/22] KVM: nVMX: Open a window for pending nested VMX preemption timer
Date: Fri, 24 Apr 2020 13:24:00 -0400	[thread overview]
Message-ID: <20200424172416.243870-7-pbonzini@redhat.com> (raw)
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add a kvm_x86_nested_ops hook to detect a pending nested "hypervisor
timer" and use it to effectively open a window for servicing the
expired timer.
Like pending SMIs on VMX, opening a window simply means requesting an
immediate exit.

This fixes a bug where an expired VMX preemption timer (for L2) will be
delayed and/or lost if a pending exception is injected into L2.  The
pending exception is rightly prioritized by vmx_check_nested_events()
and injected into L2, with the preemption timer left pending.  Because
no window is opened, L2 is free to run uninterrupted.

Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-3-sean.j.christopherson@intel.com>
[Check it in kvm_vcpu_has_events too, to ensure that the preemption
 timer is serviced promptly even if the vCPU is halted and L1 is not
 intercepting HLT. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
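A condensed, illustrative sketch of what the change boils down to; the
helper name below is hypothetical (the hunks in x86.c open-code this
condition at both call sites, and on VMX the hook resolves to
nested_vmx_preemption_timer_pending()):

	static bool nested_hv_timer_pending(struct kvm_vcpu *vcpu)
	{
		/* Only relevant while an L2 guest is being run. */
		if (!is_guest_mode(vcpu))
			return false;

		/* The hook is optional, hence the NULL check. */
		if (!kvm_x86_ops.nested_ops->hv_timer_pending)
			return false;

		return kvm_x86_ops.nested_ops->hv_timer_pending(vcpu);
	}

vcpu_enter_guest() uses this condition to set req_immediate_exit in the
event-injection path, so an expired preemption timer is serviced right
after the higher-priority exception is injected; kvm_vcpu_has_events()
uses it so a halted vCPU is woken up even when L1 does not intercept
HLT.
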
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/nested.c       | 10 ++++++++--
 arch/x86/kvm/x86.c              |  9 +++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a239a297be33..372e6ea4af32 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1257,6 +1257,7 @@ struct kvm_x86_ops {
 
 struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
+	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
 			 unsigned user_data_size);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 490dba7d0504..182e5209a1f6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3687,6 +3687,12 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu)
 			    vcpu->arch.exception.payload);
 }
 
+static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
+{
+	return nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
+	       to_vmx(vcpu)->nested.preemption_timer_expired;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3742,8 +3748,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
-	    vmx->nested.preemption_timer_expired) {
+	if (nested_vmx_preemption_timer_pending(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
 		nested_vmx_vmexit(vcpu, EXIT_REASON_PREEMPTION_TIMER, 0, 0);
@@ -6448,6 +6453,7 @@ __init int nested_vmx_hardware_setup(struct kvm_x86_ops *ops,
 
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.check_events = vmx_check_nested_events,
+	.hv_timer_pending = nested_vmx_preemption_timer_pending,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
 	.get_vmcs12_pages = nested_get_vmcs12_pages,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8c0b77ac8dc6..ee934a88a267 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8324,6 +8324,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 				kvm_x86_ops.enable_nmi_window(vcpu);
 			if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
 				kvm_x86_ops.enable_irq_window(vcpu);
+			if (is_guest_mode(vcpu) &&
+			    kvm_x86_ops.nested_ops->hv_timer_pending &&
+			    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+				req_immediate_exit = true;
 			WARN_ON(vcpu->arch.exception.pending);
 		}
 
@@ -10179,6 +10183,11 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	if (kvm_hv_has_stimer_pending(vcpu))
 		return true;
 
+	if (is_guest_mode(vcpu) &&
+	    kvm_x86_ops.nested_ops->hv_timer_pending &&
+	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+		return true;
+
 	return false;
 }
 
-- 
2.18.2


