* [PATCH v2 00/22] KVM: Event fixes and cleanup
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

This is v2 of Sean's patch series, where the generic and VMX parts
are left more or less untouched and SVM gets the same cure.  It also
incorporates Cathy's patch to move nested NMI handling to
svm_check_nested_events, which just works thanks to preliminary changes
that make svm_check_nested_events look more like its VMX counterpart.
In particular, the vmexit is performed immediately instead of being
scheduled via exit_required, so that GIF is cleared and
inject_pending_event automagically requests an interrupt/NMI/SMI
window.  This in turn requires the addition of a nested_run_pending
flag similar to VMX's.
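
For reference, the resulting flow is roughly the following (a heavily
abbreviated sketch of inject_pending_event() plus the window requests
in vcpu_enter_guest(); reinjection, exceptions and error handling are
omitted, so treat it as an illustration rather than the literal x86.c
code):

	if (is_guest_mode(vcpu))
		kvm_x86_ops.nested_ops->check_events(vcpu);

	/* A vmexit synthesized above clears GIF on SVM, so the
	 * *_allowed() checks below fail for the pending event... */
	if (vcpu->arch.smi_pending && kvm_x86_ops.smi_allowed(vcpu)) {
		vcpu->arch.smi_pending = false;
		enter_smm(vcpu);
	} else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
		vcpu->arch.nmi_pending--;
		kvm_x86_ops.set_nmi(vcpu);
	} else if (kvm_cpu_has_injectable_intr(vcpu) &&
		   kvm_x86_ops.interrupt_allowed(vcpu)) {
		kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
		kvm_x86_ops.set_irq(vcpu);
	}

	/* ...and KVM requests a window instead of injecting: */
	if (vcpu->arch.smi_pending)
		kvm_x86_ops.enable_smi_window(vcpu);
	if (vcpu->arch.nmi_pending)
		kvm_x86_ops.enable_nmi_window(vcpu);
	if (kvm_cpu_has_injectable_intr(vcpu))
		kvm_x86_ops.enable_irq_window(vcpu);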

As in the Intel patches, check_nested_events is now used for SMIs as
well, so that only exceptions still use the old mechanism.  Likewise,
exit_required is only used for exceptions (and that should go away
next).  SMIs can cause a vmexit on AMD, unlike on Intel without
dual-monitor treatment, and are blocked by GIF=0, hence the few
SMI-related changes in common code (patch 9).
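
Concretely, the architectural SMI blocking check on SVM ends up in a
small helper; this is the shape of the code added by patch 14 below:

	bool svm_smi_blocked(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		/* Per APM Vol.2 15.22.2 "Response to SMI" */
		if (!gif_set(svm))
			return true;

		return is_smm(vcpu);
	}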

Sean's changes to common code are more or less left untouched, except
for the last patch to replace the late check_nested_events() hack.  Even
though it turned out to be unnecessary for NMIs, I think the new fix
makes more sense if applied generally to all events---even NMIs and SMIs,
despite them never being injected asynchronously.  If people prefer to
have a WARN instead we can do that, too.

Because of this, I added a bool argument to interrupt_allowed, nmi_allowed
and smi_allowed instead of adding a fourth hook.
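
In code, the three hooks would end up with signatures along these lines
(a sketch only, since the later patches containing this change are not
quoted here; the parameter name is mine and purely illustrative):

	/*
	 * Illustrative sketch.  When the flag is true, the caller is
	 * asking whether the event can be injected directly right now,
	 * so the vendor code also refuses an event that should instead
	 * cause a nested vmexit (e.g. one that arrived after
	 * check_nested_events already ran); when false, only the
	 * architectural blocking state is reported, as needed by
	 * kvm_vcpu_has_events() and similar queries.
	 */
	bool (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
	bool (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
	bool (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);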

I have some ideas about how to rework the event injection code in the
way that Sean mentioned in his cover letter.  It's not even that scary
with the right set of testcases, especially when starting from code that
(despite its deficiencies) actually makes some sense rather than being a
pile of hacks; in that respect I am very happy about the general ideas
behind these patches.  Even though some hacks remain, it's a noticeable
improvement, and it's very good that Intel and AMD can be brought more
or less onto the same page with respect to nested guest event injection.

Please review!

Paolo

Cathy Avery (1):
  KVM: SVM: Implement check_nested_events for NMI

Paolo Bonzini (10):
  KVM: SVM: introduce nested_run_pending
  KVM: SVM: leave halted state on vmexit
  KVM: SVM: immediately inject INTR vmexit
  KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed
  KVM: nSVM: Report NMIs as allowed when in L2 and Exit-on-NMI is set
  KVM: nSVM: Move SMI vmexit handling to svm_check_nested_events()
  KVM: SVM: Split out architectural interrupt/NMI/SMI blocking checks
  KVM: nSVM: Report interrupts as allowed when in L2 and
    exit-on-interrupt is set
  KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting
    behavior
  KVM: x86: Replace late check_nested_events() hack with more precise
    fix

Sean Christopherson (11):
  KVM: nVMX: Preserve exception priority irrespective of exiting
    behavior
  KVM: nVMX: Open a window for pending nested VMX preemption timer
  KVM: x86: Set KVM_REQ_EVENT if run is canceled with req_immediate_exit
    set
  KVM: x86: Make return for {interrupt_nmi,smi}_allowed() a bool instead
    of int
  KVM: nVMX: Report NMIs as allowed when in L2 and Exit-on-NMI is set
  KVM: VMX: Split out architectural interrupt/NMI blocking checks
  KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior
  KVM: nVMX: Prioritize SMI over nested IRQ/NMI
  KVM: x86: WARN on injected+pending exception even in nested case
  KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit()
  KVM: VMX: Use vmx_get_rflags() to query RFLAGS in
    vmx_interrupt_blocked()

 arch/x86/include/asm/kvm_host.h |   7 ++-
 arch/x86/kvm/svm/nested.c       |  55 ++++++++++++++---
 arch/x86/kvm/svm/svm.c          | 101 ++++++++++++++++++++++++--------
 arch/x86/kvm/svm/svm.h          |  31 ++++++----
 arch/x86/kvm/vmx/nested.c       |  42 ++++++++-----
 arch/x86/kvm/vmx/nested.h       |   5 ++
 arch/x86/kvm/vmx/vmx.c          |  76 ++++++++++++++++--------
 arch/x86/kvm/vmx/vmx.h          |   2 +
 arch/x86/kvm/x86.c              |  53 +++++++++--------
 9 files changed, 256 insertions(+), 116 deletions(-)

-- 
2.18.2



* [PATCH v2 01/22] KVM: SVM: introduce nested_run_pending
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

We want to inject vmexits immediately from svm_check_nested_events,
so that the interrupt/NMI window requests happen in inject_pending_event
right after it returns.

This, however, has the same issue as in vmx_check_nested_events: a
vmexit must not be synthesized while an emulated vmentry is still in
flight.  Therefore, introduce a nested_run_pending flag with the exact
same purpose as VMX's, namely delaying vmexit injection until after the
vmentry has taken place.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 3 ++-
 arch/x86/kvm/svm/svm.c    | 1 +
 arch/x86/kvm/svm/svm.h    | 4 ++++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a7c3b3030e59..51cfab68428d 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -413,6 +413,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	copy_vmcb_control_area(hsave, vmcb);
 
+	svm->nested.nested_run_pending = 1;
 	enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
 
 	if (!nested_svm_vmrun_msrpm(svm)) {
@@ -792,7 +793,8 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	bool block_nested_events =
-		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required;
+		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
+		svm->nested.nested_run_pending;
 
 	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
 		if (block_nested_events)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c86f7278509b..77440b5953e3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3417,6 +3417,7 @@ static enum exit_fastpath_completion svm_vcpu_run(struct kvm_vcpu *vcpu)
 	sync_cr8_to_lapic(vcpu);
 
 	svm->next_rip = 0;
+	svm->nested.nested_run_pending = 0;
 
 	svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 98c2890d561d..435f3328c99c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -97,6 +97,10 @@ struct nested_state {
 	/* A VMEXIT is required but not yet emulated */
 	bool exit_required;
 
+	/* A VMRUN has started but has not yet been performed, so
+	 * we cannot inject a nested vmexit yet.  */
+	bool nested_run_pending;
+
 	/* cache for intercepts of the guest */
 	u32 intercept_cr;
 	u32 intercept_dr;
-- 
2.18.2




* [PATCH v2 02/22] KVM: SVM: leave halted state on vmexit
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Similar to VMX, we need to leave the halted state when performing a vmexit.
Failure to do so will cause a hang after vmexit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 51cfab68428d..e69e60ac1370 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -472,6 +472,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	leave_guest_mode(&svm->vcpu);
 	svm->nested.vmcb = 0;
 
+	/* in case we halted in L2 */
+	svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
 	/* Give the current vmcb to the guest */
 	disable_gif(svm);
 
-- 
2.18.2




* [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

We can immediately leave SVM guest mode in svm_check_nested_events
now that we have the nested_run_pending mechanism.  This makes
things easier because we can run the rest of inject_pending_event
with GIF=0, and KVM will naturally end up requesting the next
interrupt window.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e69e60ac1370..266fde240493 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -778,13 +778,13 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 
 static void nested_svm_intr(struct vcpu_svm *svm)
 {
+	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
+
 	svm->vmcb->control.exit_code   = SVM_EXIT_INTR;
 	svm->vmcb->control.exit_info_1 = 0;
 	svm->vmcb->control.exit_info_2 = 0;
 
-	/* nested_svm_vmexit this gets called afterwards from handle_exit */
-	svm->nested.exit_required = true;
-	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
+	nested_svm_vmexit(svm);
 }
 
 static bool nested_exit_on_intr(struct vcpu_svm *svm)
-- 
2.18.2




* [PATCH v2 04/22] KVM: SVM: Implement check_nested_events for NMI
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Cathy Avery <cavery@redhat.com>

Migrate nested guest NMI intercept processing to the new
check_nested_events mechanism.

Signed-off-by: Cathy Avery <cavery@redhat.com>
Message-Id: <20200414201107.22952-2-cavery@redhat.com>
[Reorder clauses as NMIs have higher priority than IRQs; inject
 immediate vmexit as is now done for IRQ vmexits. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  6 ++----
 arch/x86/kvm/svm/svm.h    | 15 ---------------
 3 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 266fde240493..c3650efd2e89 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -776,6 +776,20 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	return vmexit;
 }
 
+static bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+	return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
+}
+
+static void nested_svm_nmi(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.exit_code = SVM_EXIT_NMI;
+	svm->vmcb->control.exit_info_1 = 0;
+	svm->vmcb->control.exit_info_2 = 0;
+
+	nested_svm_vmexit(svm);
+}
+
 static void nested_svm_intr(struct vcpu_svm *svm)
 {
 	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
@@ -798,6 +812,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
 		svm->nested.nested_run_pending;
 
+	if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
+		if (block_nested_events)
+			return -EBUSY;
+		nested_svm_nmi(svm);
+		return 0;
+	}
+
 	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
 		if (block_nested_events)
 			return -EBUSY;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 77440b5953e3..8e732eb0b5c9 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3067,9 +3067,10 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 	int ret;
+
 	ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
 	      !(svm->vcpu.arch.hflags & HF_NMI_MASK);
-	ret = ret && gif_set(svm) && nested_svm_nmi(svm);
+	ret = ret && gif_set(svm);
 
 	return ret;
 }
@@ -3147,9 +3148,6 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
 		return; /* STGI will cause a vm exit */
 	}
 
-	if (svm->nested.exit_required)
-		return; /* we're not going to run the guest yet */
-
 	/*
 	 * Something prevents NMI from been injected. Single step over possible
 	 * problem (IRET or exception injection or interrupt shadow)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 435f3328c99c..a2bc33aadb67 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -373,21 +373,6 @@ void disable_nmi_singlestep(struct vcpu_svm *svm);
 #define NESTED_EXIT_DONE	1	/* Exit caused nested vmexit  */
 #define NESTED_EXIT_CONTINUE	2	/* Further checks needed      */
 
-/* This function returns true if it is save to enable the nmi window */
-static inline bool nested_svm_nmi(struct vcpu_svm *svm)
-{
-	if (!is_guest_mode(&svm->vcpu))
-		return true;
-
-	if (!(svm->nested.intercept & (1ULL << INTERCEPT_NMI)))
-		return true;
-
-	svm->vmcb->control.exit_code = SVM_EXIT_NMI;
-	svm->nested.exit_required = true;
-
-	return false;
-}
-
 static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 {
 	return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
-- 
2.18.2




* [PATCH v2 05/22] KVM: nVMX: Preserve exception priority irrespective of exiting behavior
From: Paolo Bonzini @ 2020-04-24 17:23 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson, Peter Shier

From: Sean Christopherson <sean.j.christopherson@intel.com>

Short circuit vmx_check_nested_events() if an exception is pending and
needs to be injected into L2, as priority between coincident events is
not dependent on exiting behavior.  This fixes a bug where a single-step
#DB that is not intercepted by L1 is incorrectly dropped due to servicing
a VMX Preemption Timer VM-Exit.

Injected exceptions also need to be blocked if nested VM-Enter is
pending or an exception was already injected, otherwise injecting the
exception could overwrite an existing event injection from L1.
Technically, this scenario should be impossible, i.e. KVM shouldn't
inject its own exception during nested VM-Enter.  This will be addressed
in a future patch.

Note, event priority between SMI, NMI and INTR is incorrect for L2, e.g.
SMI should take priority over VM-Exit on NMI/INTR, and an NMI that is
injected into L2 should take priority over VM-Exit INTR.  This will also
be addressed in a future patch.

Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b516c24494e3..490dba7d0504 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3716,11 +3716,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	/*
 	 * Process any exceptions that are not debug traps before MTF.
 	 */
-	if (vcpu->arch.exception.pending &&
-	    !vmx_pending_dbg_trap(vcpu) &&
-	    nested_vmx_check_exception(vcpu, &exit_qual)) {
+	if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_vmx_check_exception(vcpu, &exit_qual))
+			goto no_vmexit;
 		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
 		return 0;
 	}
@@ -3733,10 +3733,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.exception.pending &&
-	    nested_vmx_check_exception(vcpu, &exit_qual)) {
+	if (vcpu->arch.exception.pending) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_vmx_check_exception(vcpu, &exit_qual))
+			goto no_vmexit;
 		nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
 		return 0;
 	}
@@ -3771,6 +3772,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
+no_vmexit:
 	vmx_complete_nested_posted_interrupt(vcpu);
 	return 0;
 }
-- 
2.18.2




* [PATCH v2 06/22] KVM: nVMX: Open a window for pending nested VMX preemption timer
From: Paolo Bonzini @ 2020-04-24 17:24 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson, Peter Shier

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add a kvm_x86_nested_ops hook to detect a pending nested "hypervisor
timer" and use it to effectively open a window for servicing the expired
timer.  Like pending SMIs on VMX, opening a window simply means
requesting an immediate exit.

This fixes a bug where an expired VMX preemption timer (for L2) will be
delayed and/or lost if a pending exception is injected into L2.  The
pending exception is rightly prioritized by vmx_check_nested_events()
and injected into L2, with the preemption timer left pending.  Because
no window opened, L2 is free to run uninterrupted.

Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-3-sean.j.christopherson@intel.com>
[Check it in kvm_vcpu_has_events too, to ensure that the preemption
 timer is serviced promptly even if the vCPU is halted and L1 is not
 intercepting HLT. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/nested.c       | 10 ++++++++--
 arch/x86/kvm/x86.c              |  9 +++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a239a297be33..372e6ea4af32 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1257,6 +1257,7 @@ struct kvm_x86_ops {
 
 struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
+	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
 			 unsigned user_data_size);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 490dba7d0504..182e5209a1f6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3687,6 +3687,12 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu)
 			    vcpu->arch.exception.payload);
 }
 
+static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
+{
+	return nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
+	       to_vmx(vcpu)->nested.preemption_timer_expired;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3742,8 +3748,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
-	    vmx->nested.preemption_timer_expired) {
+	if (nested_vmx_preemption_timer_pending(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
 		nested_vmx_vmexit(vcpu, EXIT_REASON_PREEMPTION_TIMER, 0, 0);
@@ -6448,6 +6453,7 @@ __init int nested_vmx_hardware_setup(struct kvm_x86_ops *ops,
 
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.check_events = vmx_check_nested_events,
+	.hv_timer_pending = nested_vmx_preemption_timer_pending,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
 	.get_vmcs12_pages = nested_get_vmcs12_pages,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8c0b77ac8dc6..ee934a88a267 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8324,6 +8324,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 				kvm_x86_ops.enable_nmi_window(vcpu);
 			if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
 				kvm_x86_ops.enable_irq_window(vcpu);
+			if (is_guest_mode(vcpu) &&
+			    kvm_x86_ops.nested_ops->hv_timer_pending &&
+			    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+				req_immediate_exit = true;
 			WARN_ON(vcpu->arch.exception.pending);
 		}
 
@@ -10179,6 +10183,11 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	if (kvm_hv_has_stimer_pending(vcpu))
 		return true;
 
+	if (is_guest_mode(vcpu) &&
+	    kvm_x86_ops.nested_ops->hv_timer_pending &&
+	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+		return true;
+
 	return false;
 }
 
-- 
2.18.2




* [PATCH v2 07/22] KVM: x86: Set KVM_REQ_EVENT if run is canceled with req_immediate_exit set
From: Paolo Bonzini @ 2020-04-24 17:24 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Re-request KVM_REQ_EVENT if vcpu_enter_guest() bails after processing
pending requests and an immediate exit was requested.  This fixes a bug
where a pending event, e.g. VMX preemption timer, is delayed and/or lost
if the exit was deferred due to something other than a higher priority
_injected_ event, e.g. due to a pending nested VM-Enter.  This bug only
affects the !injected case as kvm_x86_ops.cancel_injection() sets
KVM_REQ_EVENT to redo the injection, but that's purely serendipitous
behavior with respect to the deferred event.

Note, emulated preemption timer isn't the only event that can be
affected; it simply happens to be the only event where not re-requesting
KVM_REQ_EVENT is blatantly visible to the guest.

Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-4-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ee934a88a267..8ebfebc807fd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8489,6 +8489,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	return r;
 
 cancel_injection:
+	if (req_immediate_exit)
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
 	kvm_x86_ops.cancel_injection(vcpu);
 	if (unlikely(vcpu->arch.apic_attention))
 		kvm_lapic_sync_from_vapic(vcpu);
-- 
2.18.2




* [PATCH v2 08/22] KVM: x86: Make return for {interrupt_nmi,smi}_allowed() a bool instead of int
From: Paolo Bonzini @ 2020-04-24 17:24 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Return an actual bool for the kvm_x86_ops {interrupt,nmi,smi}_allowed()
hooks to better reflect the return semantics, and to avoid creating an
even bigger mess when the related VMX code is refactored in upcoming
patches.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-5-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  6 +++---
 arch/x86/kvm/svm/svm.c          | 16 ++++++++--------
 arch/x86/kvm/vmx/vmx.c          | 14 +++++++-------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 372e6ea4af32..efaddc68a694 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1139,8 +1139,8 @@ struct kvm_x86_ops {
 	void (*set_nmi)(struct kvm_vcpu *vcpu);
 	void (*queue_exception)(struct kvm_vcpu *vcpu);
 	void (*cancel_injection)(struct kvm_vcpu *vcpu);
-	int (*interrupt_allowed)(struct kvm_vcpu *vcpu);
-	int (*nmi_allowed)(struct kvm_vcpu *vcpu);
+	bool (*interrupt_allowed)(struct kvm_vcpu *vcpu);
+	bool (*nmi_allowed)(struct kvm_vcpu *vcpu);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
 	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
@@ -1238,7 +1238,7 @@ struct kvm_x86_ops {
 
 	void (*setup_mce)(struct kvm_vcpu *vcpu);
 
-	int (*smi_allowed)(struct kvm_vcpu *vcpu);
+	bool (*smi_allowed)(struct kvm_vcpu *vcpu);
 	int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
 	int (*pre_leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
 	int (*enable_smi_window)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8e732eb0b5c9..cdee634e961d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3062,11 +3062,11 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 		set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
-static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
-	int ret;
+	bool ret;
 
 	ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
 	      !(svm->vcpu.arch.hflags & HF_NMI_MASK);
@@ -3095,14 +3095,14 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
-static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 
 	if (!gif_set(svm) ||
 	     (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
-		return 0;
+		return false;
 
 	if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
 		return !!(svm->vcpu.arch.hflags & HF_HIF_MASK);
@@ -3755,23 +3755,23 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 	vcpu->arch.mcg_cap &= 0x1ff;
 }
 
-static int svm_smi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	/* Per APM Vol.2 15.22.2 "Response to SMI" */
 	if (!gif_set(svm))
-		return 0;
+		return false;
 
 	if (is_guest_mode(&svm->vcpu) &&
 	    svm->nested.intercept & (1ULL << INTERCEPT_SMI)) {
 		/* TODO: Might need to set exit_info_1 and exit_info_2 here */
 		svm->vmcb->control.exit_code = SVM_EXIT_SMI;
 		svm->nested.exit_required = true;
-		return 0;
+		return false;
 	}
 
-	return 1;
+	return true;
 }
 
 static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 455cd2c8dbce..c98194f04b04 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4511,21 +4511,21 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
-static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
-		return 0;
+		return false;
 
 	if (!enable_vnmi &&
 	    to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
-		return 0;
+		return false;
 
 	return	!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
 		  (GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI
 		   | GUEST_INTR_STATE_NMI));
 }
 
-static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
@@ -7675,12 +7675,12 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 			~FEAT_CTL_LMCE_ENABLED;
 }
 
-static int vmx_smi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_smi_allowed(struct kvm_vcpu *vcpu)
 {
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
-		return 0;
-	return 1;
+		return false;
+	return true;
 }
 
 static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
-- 
2.18.2




* [PATCH v2 09/22] KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed
From: Paolo Bonzini @ 2020-04-24 17:24 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Do not hardcode is_smm so that all the architectural conditions for
blocking SMIs are listed in a single place.  Well, in two places because
this introduces some code duplication between Intel and AMD.

This ensures that nested SVM obeys GIF in kvm_vcpu_has_events.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 2 +-
 arch/x86/kvm/vmx/vmx.c | 2 +-
 arch/x86/kvm/x86.c     | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cdee634e961d..01ee1c3be25b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3771,7 +3771,7 @@ static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
 		return false;
 	}
 
-	return true;
+	return !is_smm(vcpu);
 }
 
 static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c98194f04b04..c33317bfc1cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7680,7 +7680,7 @@ static bool vmx_smi_allowed(struct kvm_vcpu *vcpu)
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
-	return true;
+	return !is_smm(vcpu);
 }
 
 static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8ebfebc807fd..e8469db6ccae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7744,8 +7744,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
 	if (kvm_event_needs_reinjection(vcpu))
 		return 0;
 
-	if (vcpu->arch.smi_pending && !is_smm(vcpu) &&
-	    kvm_x86_ops.smi_allowed(vcpu)) {
+	if (vcpu->arch.smi_pending && kvm_x86_ops.smi_allowed(vcpu)) {
 		vcpu->arch.smi_pending = false;
 		++vcpu->arch.smi_count;
 		enter_smm(vcpu);
@@ -10174,7 +10173,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
-	    (vcpu->arch.smi_pending && !is_smm(vcpu)))
+	    (vcpu->arch.smi_pending &&
+	     kvm_x86_ops.smi_allowed(vcpu)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
-- 
2.18.2




* Re: [PATCH v2 00/22] KVM: Event fixes and cleanup
From: Sean Christopherson @ 2020-04-24 17:29 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, wei.huang2, cavery, vkuznets, Oliver Upton,
	Jim Mattson

On Fri, Apr 24, 2020 at 01:23:54PM -0400, Paolo Bonzini wrote:
> Because of this, I added a bool argument to interrupt_allowed, nmi_allowed
> and smi_allowed instead of adding a fourth hook.

Ha, I had this as the original implementation for interrupts, and then
switched to a separate hook at the 11th hour to minimize churn.


* Re: [PATCH v2 02/22] KVM: SVM: leave halted state on vmexit
From: Oliver Upton @ 2020-04-24 17:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, wei.huang2, cavery, vkuznets,
	Sean Christopherson, Jim Mattson

On Fri, Apr 24, 2020 at 01:23:56PM -0400, Paolo Bonzini wrote:
> Similar to VMX, we need to leave the halted state when performing a vmexit.
> Failure to do so will cause a hang after vmexit.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Oliver Upton <oupton@google.com>
> ---
>  arch/x86/kvm/svm/nested.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 51cfab68428d..e69e60ac1370 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -472,6 +472,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
>  	leave_guest_mode(&svm->vcpu);
>  	svm->nested.vmcb = 0;
>  
> +	/* in case we halted in L2 */
> +	svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
> +
>  	/* Give the current vmcb to the guest */
>  	disable_gif(svm);
>  
> -- 
> 2.18.2
> 
> 


* Re: [PATCH v2 00/22] KVM: Event fixes and cleanup
From: Oliver Upton @ 2020-04-24 21:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, wei.huang2, cavery, vkuznets,
	Sean Christopherson, Jim Mattson

Paolo,

I've only received patches 1-9 for this series, could you resend? :)

--
Thanks,
Oliver


* Re: [PATCH v2 00/22] KVM: Event fixes and cleanup
From: Sean Christopherson @ 2020-04-24 21:05 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Paolo Bonzini, linux-kernel, kvm, wei.huang2, cavery, vkuznets,
	Jim Mattson

On Fri, Apr 24, 2020 at 09:02:42PM +0000, Oliver Upton wrote:
> Paolo,
> 
> I've only received patches 1-9 for this series, could you resend? :)

Same here, I was hoping they would magically show up.


* [PATCH v2 10/22] KVM: nVMX: Report NMIs as allowed when in L2 and Exit-on-NMI is set
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 5 -----
 arch/x86/kvm/vmx/nested.h | 5 +++++
 arch/x86/kvm/vmx/vmx.c    | 3 +++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 182e5209a1f6..9edd9464fedf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -698,11 +698,6 @@ static bool nested_exit_intr_ack_set(struct kvm_vcpu *vcpu)
 		VM_EXIT_ACK_INTR_ON_EXIT;
 }
 
-static bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
-{
-	return nested_cpu_has_nmi_exiting(get_vmcs12(vcpu));
-}
-
 static int nested_vmx_check_apic_access_controls(struct kvm_vcpu *vcpu,
 					  struct vmcs12 *vmcs12)
 {
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 7ce9572c3d3a..5cc72ae0e277 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -225,6 +225,11 @@ static inline bool nested_cpu_has_save_preemption_timer(struct vmcs12 *vmcs12)
 	    VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
 }
 
+static inline bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
+{
+	return nested_cpu_has_nmi_exiting(get_vmcs12(vcpu));
+}
+
 /*
  * In nested virtualization, check if L1 asked to exit on external interrupts.
  * For most existing hypervisors, this will always return true.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c33317bfc1cf..37b1986a4e8f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4516,6 +4516,9 @@ static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
 
+	if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu))
+		return true;
+
 	if (!enable_vnmi &&
 	    to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
 		return false;
-- 
2.18.2




* [PATCH v2 11/22] KVM: nSVM: Report NMIs as allowed when in L2 and Exit-on-NMI is set
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 5 -----
 arch/x86/kvm/svm/svm.c    | 3 +++
 arch/x86/kvm/svm/svm.h    | 5 +++++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c3650efd2e89..748b01220aac 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -776,11 +776,6 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	return vmexit;
 }
 
-static bool nested_exit_on_nmi(struct vcpu_svm *svm)
-{
-	return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
-}
-
 static void nested_svm_nmi(struct vcpu_svm *svm)
 {
 	svm->vmcb->control.exit_code = SVM_EXIT_NMI;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 01ee1c3be25b..3f1f80737f9e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3068,6 +3068,9 @@ static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
 	struct vmcb *vmcb = svm->vmcb;
 	bool ret;
 
+	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
+		return true;
+
 	ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
 	      !(svm->vcpu.arch.hflags & HF_NMI_MASK);
 	ret = ret && gif_set(svm);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a2bc33aadb67..d8ae654340d4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -378,6 +378,11 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 	return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
 }
 
+static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+	return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
+}
+
 void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 			  struct vmcb *nested_vmcb, struct kvm_host_map *map);
 int nested_svm_vmrun(struct vcpu_svm *svm);
-- 
2.18.2




* [PATCH v2 12/22] KVM: nSVM: Move SMI vmexit handling to svm_check_nested_events()
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Unlike VMX, SVM allows a hypervisor to take an SMI vmexit without having
any special SMM-monitor enablement sequence.  Therefore, it has to be
handled like interrupts and NMIs.  Check for an unblocked SMI in
svm_check_nested_events() so that pending SMIs are correctly prioritized
over IRQs and NMIs when the latter events will trigger VM-Exit.

Note that there is no need to test explicitly for SMI vmexits, because
guests always run outside SMM and therefore can never get an SMI while
they are blocked.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 16 ++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  8 --------
 arch/x86/kvm/svm/svm.h    |  5 +++++
 3 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 748b01220aac..226d5a0d677b 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -776,6 +776,15 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	return vmexit;
 }
 
+static void nested_svm_smi(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.exit_code = SVM_EXIT_SMI;
+	svm->vmcb->control.exit_info_1 = 0;
+	svm->vmcb->control.exit_info_2 = 0;
+
+	nested_svm_vmexit(svm);
+}
+
 static void nested_svm_nmi(struct vcpu_svm *svm)
 {
 	svm->vmcb->control.exit_code = SVM_EXIT_NMI;
@@ -807,6 +816,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
 		svm->nested.nested_run_pending;
 
+	if (vcpu->arch.smi_pending && nested_exit_on_smi(svm)) {
+		if (block_nested_events)
+			return -EBUSY;
+		nested_svm_smi(svm);
+		return 0;
+	}
+
 	if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
 		if (block_nested_events)
 			return -EBUSY;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3f1f80737f9e..1f9577b281e6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3766,14 +3766,6 @@ static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
 	if (!gif_set(svm))
 		return false;
 
-	if (is_guest_mode(&svm->vcpu) &&
-	    svm->nested.intercept & (1ULL << INTERCEPT_SMI)) {
-		/* TODO: Might need to set exit_info_1 and exit_info_2 here */
-		svm->vmcb->control.exit_code = SVM_EXIT_SMI;
-		svm->nested.exit_required = true;
-		return false;
-	}
-
 	return !is_smm(vcpu);
 }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d8ae654340d4..4dc6d2b4b721 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -378,6 +378,11 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 	return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
 }
 
+static inline bool nested_exit_on_smi(struct vcpu_svm *svm)
+{
+	return (svm->nested.intercept & (1ULL << INTERCEPT_SMI));
+}
+
 static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 {
 	return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
-- 
2.18.2




* [PATCH v2 13/22] KVM: VMX: Split out architectural interrupt/NMI blocking checks
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Move the architectural (non-KVM specific) interrupt/NMI blocking checks
to a separate helper so that they can be used in a future patch by
vmx_check_nested_events().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-8-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 35 ++++++++++++++++++++++-------------
 arch/x86/kvm/vmx/vmx.h |  2 ++
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 37b1986a4e8f..7fb7dcb3d94d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4511,21 +4511,35 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
+bool vmx_nmi_blocked(struct kvm_vcpu *vcpu)
+{
+	if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu))
+		return false;
+
+	if (!enable_vnmi && to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
+		return true;
+
+	return (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+		(GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI |
+		 GUEST_INTR_STATE_NMI));
+}
+
 static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
 
-	if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu))
-		return true;
+	return !vmx_nmi_blocked(vcpu);
+}
 
-	if (!enable_vnmi &&
-	    to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
+bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
+{
+	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
 		return false;
 
-	return	!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
-		  (GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI
-		   | GUEST_INTR_STATE_NMI));
+	return !(vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) ||
+	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
 }
 
 static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
@@ -4533,12 +4547,7 @@ static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
 
-	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
-		return true;
-
-	return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
-		!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
-			(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
+	return !vmx_interrupt_blocked(vcpu);
 }
 
 static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index edfb739e5907..b5e773267abe 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -344,6 +344,8 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
 u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
 void update_exception_bitmap(struct kvm_vcpu *vcpu);
 void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
+bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
+bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
 bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-- 
2.18.2




* [PATCH v2 14/22] KVM: SVM: Split out architectural interrupt/NMI/SMI blocking checks
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Move the architectural (non-KVM specific) interrupt/NMI/SMI blocking checks
to a separate helper so that they can be used in a future patch by
svm_check_nested_events().

No functional change intended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 51 +++++++++++++++++++++++++++++++++---------
 arch/x86/kvm/svm/svm.h |  3 +++
 2 files changed, 43 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1f9577b281e6..4e284b62d384 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3062,22 +3062,33 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 		set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
-static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
+bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 	bool ret;
 
-	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
+	if (!gif_set(svm))
 		return true;
 
-	ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
-	      !(svm->vcpu.arch.hflags & HF_NMI_MASK);
-	ret = ret && gif_set(svm);
+	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
+		return false;
+
+	ret = (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
+	      (svm->vcpu.arch.hflags & HF_NMI_MASK);
 
 	return ret;
 }
 
+static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	if (svm->nested.nested_run_pending)
+		return false;
+
+        return !svm_nmi_blocked(vcpu);
+}
+
 static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3098,19 +3109,28 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
 	}
 }
 
-static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 
 	if (!gif_set(svm) ||
 	     (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
-		return false;
+		return true;
 
 	if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
-		return !!(svm->vcpu.arch.hflags & HF_HIF_MASK);
+		return !(svm->vcpu.arch.hflags & HF_HIF_MASK);
 	else
-		return !!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
+		return !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
+}
+
+static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	if (svm->nested.nested_run_pending)
+		return false;
+
+        return !svm_interrupt_blocked(vcpu);
 }
 
 static void enable_irq_window(struct kvm_vcpu *vcpu)
@@ -3758,15 +3778,24 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 	vcpu->arch.mcg_cap &= 0x1ff;
 }
 
-static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
+bool svm_smi_blocked(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	/* Per APM Vol.2 15.22.2 "Response to SMI" */
 	if (!gif_set(svm))
+		return true;
+
+	return is_smm(vcpu);
+}
+
+static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	if (svm->nested.nested_run_pending)
 		return false;
 
-	return !is_smm(vcpu);
+	return !svm_smi_blocked(vcpu);
 }
 
 static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 4dc6d2b4b721..f6d604b72a4c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -366,6 +366,9 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void svm_flush_tlb(struct kvm_vcpu *vcpu);
 void disable_nmi_singlestep(struct vcpu_svm *svm);
+bool svm_smi_blocked(struct kvm_vcpu *vcpu);
+bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
+bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
 
 /* nested.c */
 
-- 
2.18.2




* [PATCH v2 15/22] KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (15 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 14/22] KVM: SVM: Split out architectural interrupt/NMI/SMI " Paolo Bonzini
@ 2020-04-25  7:01 ` Paolo Bonzini
  2020-04-25  7:01 ` [PATCH v2 16/22] KVM: nVMX: Prioritize SMI over nested IRQ/NMI Paolo Bonzini
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Short-circuit vmx_check_nested_events() if an unblocked IRQ/NMI is
pending and needs to be injected into L2; the priority between
coincident events does not depend on exiting behavior.

Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-9-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9edd9464fedf..188ff4cfdbaf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3750,9 +3750,12 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (vcpu->arch.nmi_pending && nested_exit_on_nmi(vcpu)) {
+	if (vcpu->arch.nmi_pending && !vmx_nmi_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_exit_on_nmi(vcpu))
+			goto no_vmexit;
+
 		nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,
 				  NMI_VECTOR | INTR_TYPE_NMI_INTR |
 				  INTR_INFO_VALID_MASK, 0);
@@ -3765,9 +3768,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
-	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) {
+	if (kvm_cpu_has_interrupt(vcpu) && !vmx_interrupt_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_exit_on_intr(vcpu))
+			goto no_vmexit;
 		nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
 		return 0;
 	}
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 16/22] KVM: nVMX: Prioritize SMI over nested IRQ/NMI
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (16 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 15/22] KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior Paolo Bonzini
@ 2020-04-25  7:01 ` Paolo Bonzini
  2020-04-25  7:01 ` [PATCH v2 17/22] KVM: nSVM: Report interrupts as allowed when in L2 and exit-on-interrupt is set Paolo Bonzini
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Check for an unblocked SMI in vmx_check_nested_events() so that pending
SMIs are correctly prioritized over IRQs and NMIs when the latter events
will trigger VM-Exit.  This also fixes an issue where an SMI that was
marked pending while processing a nested VM-Enter wouldn't trigger an
immediate exit, i.e. would be incorrectly delayed until L2 happened to
take a VM-Exit.
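
For illustration, the priority this patch establishes can be modeled
with a small standalone sketch (illustrative only; the real checks live
in vmx_check_nested_events() and use the helpers added earlier in the
series):

	/* Model of nested-event priority after this patch: SMI beats NMI
	 * beats IRQ, regardless of whether L1 intercepts the event. */
	enum pending_event { EV_NONE, EV_IRQ, EV_NMI, EV_SMI };

	static enum pending_event highest_pending(int smi, int nmi, int irq)
	{
		if (smi)
			return EV_SMI;	/* now checked first */
		if (nmi)
			return EV_NMI;
		return irq ? EV_IRQ : EV_NONE;
	}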

Fixes: 64d6067057d96 ("KVM: x86: stubs for SMM support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-10-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 188ff4cfdbaf..2c36f3f53108 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3750,6 +3750,12 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 
+	if (vcpu->arch.smi_pending && !is_smm(vcpu)) {
+		if (block_nested_events)
+			return -EBUSY;
+		goto no_vmexit;
+	}
+
 	if (vcpu->arch.nmi_pending && !vmx_nmi_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 17/22] KVM: nSVM: Report interrupts as allowed when in L2 and exit-on-interrupt is set
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (17 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 16/22] KVM: nVMX: Prioritize SMI over nested IRQ/NMI Paolo Bonzini
@ 2020-04-25  7:01 ` Paolo Bonzini
  2020-04-25  7:01 ` [PATCH v2 18/22] KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting behavior Paolo Bonzini
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Report interrupts as allowed when the vCPU is in L2 and L2 is being run with
exit-on-interrupt enabled and EFLAGS.IF=1 (in either the host or the guest,
depending on VINTR).  Interrupts are always unblocked from L1's perspective
in this case.
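
Concretely, the delivery condition above amounts to the following
sketch (the bit values are assumed to match the arch/x86 definitions;
the function itself is illustrative, not code from this patch):

	#include <stdbool.h>

	#define HF_HIF_MASK	(1UL << 1)	/* assumed, as in kvm_host.h */
	#define HF_VINTR_MASK	(1UL << 2)	/* assumed, as in kvm_host.h */
	#define X86_EFLAGS_IF	(1UL << 9)

	static bool l2_intr_delivered(unsigned long hflags, unsigned long rflags)
	{
		/* With VINTR, L1's HIF gates delivery to L2; without it,
		 * L2's own EFLAGS.IF does. */
		if (hflags & HF_VINTR_MASK)
			return hflags & HF_HIF_MASK;
		return rflags & X86_EFLAGS_IF;
	}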

While moving nested_exit_on_intr to svm.h, use INTERCEPT_INTR properly
instead of assuming it's zero (which it is, of course).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c |  5 -----
 arch/x86/kvm/svm/svm.c    | 23 +++++++++++++++++------
 arch/x86/kvm/svm/svm.h    |  5 +++++
 3 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 226d5a0d677b..a7a3d695405e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -805,11 +805,6 @@ static void nested_svm_intr(struct vcpu_svm *svm)
 	nested_svm_vmexit(svm);
 }
 
-static bool nested_exit_on_intr(struct vcpu_svm *svm)
-{
-	return (svm->nested.intercept & 1ULL);
-}
-
 static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4e284b62d384..0bbb2cd24eb7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3114,14 +3114,25 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 
-	if (!gif_set(svm) ||
-	     (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
+	if (!gif_set(svm))
 		return true;
 
-	if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
-		return !(svm->vcpu.arch.hflags & HF_HIF_MASK);
-	else
-		return !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
+	if (is_guest_mode(vcpu)) {
+		/* As long as interrupts are being delivered...  */
+		if ((svm->vcpu.arch.hflags & HF_VINTR_MASK)
+		    ? !(svm->vcpu.arch.hflags & HF_HIF_MASK)
+		    : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
+			return true;
+
+		/* ... vmexits aren't blocked by the interrupt shadow  */
+		if (nested_exit_on_intr(svm))
+			return false;
+	} else {
+		if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
+			return true;
+	}
+
+	return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK);
 }
 
 static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index f6d604b72a4c..5cc559ab862d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -386,6 +386,11 @@ static inline bool nested_exit_on_smi(struct vcpu_svm *svm)
 	return (svm->nested.intercept & (1ULL << INTERCEPT_SMI));
 }
 
+static inline bool nested_exit_on_intr(struct vcpu_svm *svm)
+{
+	return (svm->nested.intercept & (1ULL << INTERCEPT_INTR));
+}
+
 static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 {
 	return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 18/22] KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting behavior
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (18 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 17/22] KVM: nSVM: Report interrupts as allowed when in L2 and exit-on-interrupt is set Paolo Bonzini
@ 2020-04-25  7:01 ` Paolo Bonzini
  2020-04-25  7:01 ` [PATCH v2 19/22] KVM: x86: WARN on injected+pending exception even in nested case Paolo Bonzini
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Short-circuit svm_check_nested_events() if an unblocked IRQ/NMI/SMI is
pending and needs to be injected into L2; the priority between
coincident events does not depend on exiting behavior.

Fixes: b518ba9fa691 ("KVM: nSVM: implement check_nested_events for interrupts")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a7a3d695405e..7c86ccb0e939 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -811,23 +811,29 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
 		svm->nested.nested_run_pending;
 
-	if (vcpu->arch.smi_pending && nested_exit_on_smi(svm)) {
+	if (vcpu->arch.smi_pending && !svm_smi_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_exit_on_smi(svm))
+			return 0;
 		nested_svm_smi(svm);
 		return 0;
 	}
 
-	if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
+	if (vcpu->arch.nmi_pending && !svm_nmi_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_exit_on_nmi(svm))
+			return 0;
 		nested_svm_nmi(svm);
 		return 0;
 	}
 
-	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
+	if (kvm_cpu_has_interrupt(vcpu) && !svm_interrupt_blocked(vcpu)) {
 		if (block_nested_events)
 			return -EBUSY;
+		if (!nested_exit_on_intr(svm))
+			return 0;
 		nested_svm_intr(svm);
 		return 0;
 	}
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 19/22] KVM: x86: WARN on injected+pending exception even in nested case
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (19 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 18/22] KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting behavior Paolo Bonzini
@ 2020-04-25  7:01 ` Paolo Bonzini
  2020-04-25  7:02 ` [PATCH v2 20/22] KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit() Paolo Bonzini
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

WARN if a pending exception is coincident with an injected exception
before calling check_nested_events() so that the WARN will fire even if
inject_pending_event() bails early because check_nested_events() detects
the conflict.  Bailing early isn't problematic (quite the opposite), but
suppressing the WARN is undesirable as it could mask a bug elsewhere in
KVM.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-11-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e8469db6ccae..44b9d834621c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7693,6 +7693,9 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
 			kvm_x86_ops.set_irq(vcpu);
 	}
 
+	WARN_ON_ONCE(vcpu->arch.exception.injected &&
+		     vcpu->arch.exception.pending);
+
 	/*
 	 * Call check_nested_events() even if we reinjected a previous event
 	 * in order for caller to determine if it should require immediate-exit
@@ -7711,7 +7714,6 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
 					vcpu->arch.exception.has_error_code,
 					vcpu->arch.exception.error_code);
 
-		WARN_ON_ONCE(vcpu->arch.exception.injected);
 		vcpu->arch.exception.pending = false;
 		vcpu->arch.exception.injected = true;
 
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 20/22] KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit()
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (20 preceding siblings ...)
  2020-04-25  7:01 ` [PATCH v2 19/22] KVM: x86: WARN on injected+pending exception even in nested case Paolo Bonzini
@ 2020-04-25  7:02 ` Paolo Bonzini
  2020-04-25  7:02 ` [PATCH v2 21/22] KVM: VMX: Use vmx_get_rflags() to query RFLAGS in vmx_interrupt_blocked() Paolo Bonzini
  2020-04-25  7:02 ` [PATCH v2 22/22] KVM: x86: Replace late check_nested_events() hack with more precise fix Paolo Bonzini
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:02 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Use vmx_interrupt_blocked() instead of bouncing through
vmx_interrupt_allowed() when handling edge cases in vmx_handle_exit().
The nested_run_pending check in vmx_interrupt_allowed() should never
evaluate true in the VM-Exit path.

Hoist the WARN in handle_invalid_guest_state() up to vmx_handle_exit()
to enforce the above assumption for the !enable_vnmi case, and to detect
any other potential bugs with nested VM-Enter.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-12-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7fb7dcb3d94d..0e5eacd6f911 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5268,18 +5268,11 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
 	bool intr_window_requested;
 	unsigned count = 130;
 
-	/*
-	 * We should never reach the point where we are emulating L2
-	 * due to invalid guest state as that means we incorrectly
-	 * allowed a nested VMEntry with an invalid vmcs12.
-	 */
-	WARN_ON_ONCE(vmx->emulation_required && vmx->nested.nested_run_pending);
-
 	intr_window_requested = exec_controls_get(vmx) &
 				CPU_BASED_INTR_WINDOW_EXITING;
 
 	while (vmx->emulation_required && count-- != 0) {
-		if (intr_window_requested && vmx_interrupt_allowed(vcpu))
+		if (intr_window_requested && !vmx_interrupt_blocked(vcpu))
 			return handle_interrupt_window(&vmx->vcpu);
 
 		if (kvm_test_request(KVM_REQ_EVENT, vcpu))
@@ -5896,6 +5889,14 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu,
 	if (enable_pml)
 		vmx_flush_pml_buffer(vcpu);
 
+	/*
+	 * We should never reach this point with a pending nested VM-Enter, and
+	 * more specifically emulation of L2 due to invalid guest state (see
+	 * below) should never happen as that means we incorrectly allowed a
+	 * nested VM-Enter with an invalid vmcs12.
+	 */
+	WARN_ON_ONCE(vmx->nested.nested_run_pending);
+
 	/* If guest state is invalid, start emulating */
 	if (vmx->emulation_required)
 		return handle_invalid_guest_state(vcpu);
@@ -5962,7 +5963,7 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu,
 
 	if (unlikely(!enable_vnmi &&
 		     vmx->loaded_vmcs->soft_vnmi_blocked)) {
-		if (vmx_interrupt_allowed(vcpu)) {
+		if (!vmx_interrupt_blocked(vcpu)) {
 			vmx->loaded_vmcs->soft_vnmi_blocked = 0;
 		} else if (vmx->loaded_vmcs->vnmi_blocked_time > 1000000000LL &&
 			   vcpu->arch.nmi_pending) {
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 21/22] KVM: VMX: Use vmx_get_rflags() to query RFLAGS in vmx_interrupt_blocked()
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (21 preceding siblings ...)
  2020-04-25  7:02 ` [PATCH v2 20/22] KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit() Paolo Bonzini
@ 2020-04-25  7:02 ` Paolo Bonzini
  2020-04-25  7:02 ` [PATCH v2 22/22] KVM: x86: Replace late check_nested_events() hack with more precise fix Paolo Bonzini
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:02 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

From: Sean Christopherson <sean.j.christopherson@intel.com>

Use vmx_get_rflags() instead of manually reading vmcs.GUEST_RFLAGS when
querying RFLAGS.IF so that multiple checks against interrupt blocking in
a single run loop only require a single VMREAD.
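
The underlying pattern is plain read-caching; a minimal sketch, assuming
vmcs_readl(GUEST_RFLAGS) is the expensive operation (the real code
tracks availability per-register rather than with a static flag):

	static unsigned long cached_rflags;
	static int rflags_cached;

	static unsigned long get_rflags_cached(void)
	{
		if (!rflags_cached) {
			cached_rflags = vmcs_readl(GUEST_RFLAGS); /* one VMREAD */
			rflags_cached = 1;
		}
		return cached_rflags;	/* later queries skip the VMREAD */
	}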

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-14-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0e5eacd6f911..9e2ba1f96cd1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4537,7 +4537,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
 	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
 		return false;
 
-	return !(vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) ||
+	return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
 	       (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
 		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
 }
-- 
2.18.2



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 22/22] KVM: x86: Replace late check_nested_events() hack with more precise fix
  2020-04-24 17:23 [PATCH v2 00/22] KVM: Event fixes and cleanup Paolo Bonzini
                   ` (22 preceding siblings ...)
  2020-04-25  7:02 ` [PATCH v2 21/22] KVM: VMX: Use vmx_get_rflags() to query RFLAGS in vmx_interrupt_blocked() Paolo Bonzini
@ 2020-04-25  7:02 ` Paolo Bonzini
  23 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:02 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: wei.huang2, cavery, vkuznets, Sean Christopherson, Oliver Upton,
	Jim Mattson

Add an argument to interrupt_allowed and nmi_allowed, to check whether
interrupt injection is blocked.  Use the hook to handle the case where
an interrupt arrives between check_nested_events() and the injection
logic.  Drop the retry of check_nested_events() that hack-a-fixed the
same condition.

Blocking injection is also a bit of a hack, e.g. KVM should do exiting
and non-exiting interrupt processing in a single pass, but it's a more
precise hack.  The old comment is also misleading: KVM_REQ_EVENT is
purely an optimization, and setting it on every run loop iteration
(which KVM doesn't do) should not affect functionality, only
performance.
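
Put differently, the contract of the new argument is roughly as in this
sketch, where nested_run_pending(), nested_exit_on_event() and
event_blocked() stand in for the per-event, per-vendor helpers:

	static bool event_allowed(struct kvm_vcpu *vcpu, bool for_injection)
	{
		/* Never inject while a nested VM-Enter is in flight. */
		if (nested_run_pending(vcpu))
			return false;

		/* For injection only: an event that is supposed to cause a
		 * nested VM-Exit must not be injected into L2 directly. */
		if (for_injection && is_guest_mode(vcpu) &&
		    nested_exit_on_event(vcpu))
			return false;

		/* Pure architectural blocking check. */
		return !event_blocked(vcpu);
	}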

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-13-sean.j.christopherson@intel.com>
[Extend to SVM, add SMI and NMI.  Even though NMI and SMI cannot come
 asynchronously right now, making the fix generic is easy and removes a
 special case. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  6 +++---
 arch/x86/kvm/svm/svm.c          | 25 ++++++++++++++++++-----
 arch/x86/kvm/vmx/vmx.c          | 17 +++++++++++++---
 arch/x86/kvm/x86.c              | 36 +++++++++++----------------------
 4 files changed, 49 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index efaddc68a694..7cd68d1d0627 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1139,8 +1139,8 @@ struct kvm_x86_ops {
 	void (*set_nmi)(struct kvm_vcpu *vcpu);
 	void (*queue_exception)(struct kvm_vcpu *vcpu);
 	void (*cancel_injection)(struct kvm_vcpu *vcpu);
-	bool (*interrupt_allowed)(struct kvm_vcpu *vcpu);
-	bool (*nmi_allowed)(struct kvm_vcpu *vcpu);
+	bool (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
+	bool (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
 	bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
 	void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
 	void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
@@ -1238,7 +1238,7 @@ struct kvm_x86_ops {
 
 	void (*setup_mce)(struct kvm_vcpu *vcpu);
 
-	bool (*smi_allowed)(struct kvm_vcpu *vcpu);
+	bool (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
 	int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
 	int (*pre_leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
 	int (*enable_smi_window)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0bbb2cd24eb7..8f8fc65bfa3e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3080,13 +3080,17 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	if (svm->nested.nested_run_pending)
 		return false;
 
-        return !svm_nmi_blocked(vcpu);
+	/* An NMI must not be injected into L2 if it's supposed to VM-Exit.  */
+	if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
+		return false;
+
+	return !svm_nmi_blocked(vcpu);
 }
 
 static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
@@ -3135,13 +3139,20 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 	return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK);
 }
 
-static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	if (svm->nested.nested_run_pending)
 		return false;
 
-        return !svm_interrupt_blocked(vcpu);
+	/*
+	 * An IRQ must not be injected into L2 if it's supposed to VM-Exit,
+	 * e.g. if the IRQ arrived asynchronously after checking nested events.
+	 */
+	if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(svm))
+		return false;
+
+	return !svm_interrupt_blocked(vcpu);
 }
 
 static void enable_irq_window(struct kvm_vcpu *vcpu)
@@ -3800,12 +3811,16 @@ bool svm_smi_blocked(struct kvm_vcpu *vcpu)
 	return is_smm(vcpu);
 }
 
-static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	if (svm->nested.nested_run_pending)
 		return false;
 
+	/* An SMI must not be injected into L2 if it's supposed to VM-Exit.  */
+	if (for_injection && is_guest_mode(vcpu) && nested_exit_on_smi(svm))
+		return false;
+
 	return !svm_smi_blocked(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9e2ba1f96cd1..3ab6ca6062ce 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4524,11 +4524,15 @@ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu)
 		 GUEST_INTR_STATE_NMI));
 }
 
-static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
 
+	/* An NMI must not be injected into L2 if it's supposed to VM-Exit.  */
+	if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu))
+		return false;
+
 	return !vmx_nmi_blocked(vcpu);
 }
 
@@ -4542,11 +4546,18 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
 		(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
 }
 
-static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return false;
 
+	/*
+	 * An IRQ must not be injected into L2 if it's supposed to VM-Exit,
+	 * e.g. if the IRQ arrived asynchronously after checking nested events.
+	 */
+	if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
+		return false;
+
 	return !vmx_interrupt_blocked(vcpu);
 }
 
@@ -7688,7 +7699,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 			~FEAT_CTL_LMCE_ENABLED;
 }
 
-static bool vmx_smi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 44b9d834621c..856b6fc2c2ba 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7746,32 +7746,20 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
 	if (kvm_event_needs_reinjection(vcpu))
 		return 0;
 
-	if (vcpu->arch.smi_pending && kvm_x86_ops.smi_allowed(vcpu)) {
+	if (vcpu->arch.smi_pending &&
+	    kvm_x86_ops.smi_allowed(vcpu, true)) {
 		vcpu->arch.smi_pending = false;
 		++vcpu->arch.smi_count;
 		enter_smm(vcpu);
-	} else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
+	} else if (vcpu->arch.nmi_pending &&
+		   kvm_x86_ops.nmi_allowed(vcpu, true)) {
 		--vcpu->arch.nmi_pending;
 		vcpu->arch.nmi_injected = true;
 		kvm_x86_ops.set_nmi(vcpu);
-	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
-		/*
-		 * Because interrupts can be injected asynchronously, we are
-		 * calling check_nested_events again here to avoid a race condition.
-		 * See https://lkml.org/lkml/2014/7/2/60 for discussion about this
-		 * proposal and current concerns.  Perhaps we should be setting
-		 * KVM_REQ_EVENT only on certain events and not unconditionally?
-		 */
-		if (is_guest_mode(vcpu)) {
-			r = kvm_x86_ops.nested_ops->check_events(vcpu);
-			if (r != 0)
-				return r;
-		}
-		if (kvm_x86_ops.interrupt_allowed(vcpu)) {
-			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
-					    false);
-			kvm_x86_ops.set_irq(vcpu);
-		}
+	} else if (kvm_cpu_has_injectable_intr(vcpu) &&
+		   kvm_x86_ops.interrupt_allowed(vcpu, true)) {
+		kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
+		kvm_x86_ops.set_irq(vcpu);
 	}
 
 	return 0;
@@ -10171,12 +10159,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
 	    (vcpu->arch.nmi_pending &&
-	     kvm_x86_ops.nmi_allowed(vcpu)))
+	     kvm_x86_ops.nmi_allowed(vcpu, false)))
 		return true;
 
 	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
 	    (vcpu->arch.smi_pending &&
-	     kvm_x86_ops.smi_allowed(vcpu)))
+	     kvm_x86_ops.smi_allowed(vcpu, false)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
@@ -10228,7 +10216,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
-	return kvm_x86_ops.interrupt_allowed(vcpu);
+	return kvm_x86_ops.interrupt_allowed(vcpu, false);
 }
 
 unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
@@ -10393,7 +10381,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
 	 * If interrupts are off we cannot even use an artificial
 	 * halt state.
 	 */
-	return kvm_x86_ops.interrupt_allowed(vcpu);
+	return kvm_arch_interrupt_allowed(vcpu);
 }
 
 void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
-- 
2.18.2


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/22] KVM: Event fixes and cleanup
  2020-04-24 21:05   ` Sean Christopherson
@ 2020-04-25  7:21     ` Paolo Bonzini
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-04-25  7:21 UTC (permalink / raw)
  To: Sean Christopherson, Oliver Upton
  Cc: linux-kernel, kvm, wei.huang2, cavery, vkuznets, Jim Mattson

On 24/04/20 23:05, Sean Christopherson wrote:
> On Fri, Apr 24, 2020 at 09:02:42PM +0000, Oliver Upton wrote:
>> Paolo,
>>
>> I've only received patches 1-9 for this series, could you resend? :)
> 
> Same here, I was hoping they would magically show up.

An SMTP server, in its infinite wisdom, decided that sending more than
10 emails in a batch is "too much mail".  I sent the remaining 13
patches now (in two batches).  Thanks for warning me!

Paolo


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit
  2020-04-24 17:23 ` [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit Paolo Bonzini
@ 2020-05-21 12:50   ` Vitaly Kuznetsov
  2020-05-21 14:08     ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Vitaly Kuznetsov @ 2020-05-21 12:50 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: wei.huang2, cavery, Sean Christopherson, Oliver Upton,
	Jim Mattson, linux-kernel, kvm

Paolo Bonzini <pbonzini@redhat.com> writes:

> We can immediately leave SVM guest mode in svm_check_nested_events
> now that we have the nested_run_pending mechanism.  This makes
> things easier because we can run the rest of inject_pending_event
> with GIF=0, and KVM will naturally end up requesting the next
> interrupt window.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm/nested.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index e69e60ac1370..266fde240493 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -778,13 +778,13 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
>  
>  static void nested_svm_intr(struct vcpu_svm *svm)
>  {
> +	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
> +
>  	svm->vmcb->control.exit_code   = SVM_EXIT_INTR;
>  	svm->vmcb->control.exit_info_1 = 0;
>  	svm->vmcb->control.exit_info_2 = 0;
>  
> -	/* nested_svm_vmexit this gets called afterwards from handle_exit */
> -	svm->nested.exit_required = true;
> -	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
> +	nested_svm_vmexit(svm);
>  }
>  
>  static bool nested_exit_on_intr(struct vcpu_svm *svm)

Sorry for reporting this late but I just found out that this commit
breaks Hyper-V 2016 on KVM on SVM completely (always hangs on boot). I
haven't investigated it yet (well, this is Windows, you know...) but
what's usually different about Hyper-V is that unlike KVM/Linux it has
handlers for some hardware interrupts in the guest and not in the
hypervisor.

-- 
Vitaly


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit
  2020-05-21 12:50   ` Vitaly Kuznetsov
@ 2020-05-21 14:08     ` Paolo Bonzini
  2020-05-21 21:04       ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2020-05-21 14:08 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: wei.huang2, cavery, Sean Christopherson, Oliver Upton,
	Jim Mattson, linux-kernel, kvm

On 21/05/20 14:50, Vitaly Kuznetsov wrote:
> Sorry for reporting this late but I just found out that this commit
> breaks Hyper-V 2016 on KVM on SVM completely (always hangs on boot). I
> haven't investigated it yet (well, this is Windows, you know...) but
> what's usually different about Hyper-V is that unlike KVM/Linux it has
> handlers for some hardware interrupts in the guest and not in the
> hypervisor.

"Always hangs on boot" is easy. :)  At this point I think it's easiest
to debug it on top of the whole pending SVM patches that remove
exit_required completely (and exit_required is not coming back anyway).

Can you get a trace and send it to me?

Paolo


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit
  2020-05-21 14:08     ` Paolo Bonzini
@ 2020-05-21 21:04       ` Paolo Bonzini
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2020-05-21 21:04 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: wei.huang2, cavery, Sean Christopherson, Oliver Upton,
	Jim Mattson, linux-kernel, kvm

On 21/05/20 16:08, Paolo Bonzini wrote:
> On 21/05/20 14:50, Vitaly Kuznetsov wrote:
>> Sorry for reporting this late but I just found out that this commit
>> breaks Hyper-V 2016 on KVM on SVM completely (always hangs on boot). I
>> haven't investigated it yet (well, this is Windows, you know...) but
>> what's usually different about Hyper-V is that unlike KVM/Linux it has
>> handlers for some hardware interrupts in the guest and not in the
>> hypervisor.
> 
> "Always hangs on boot" is easy. :)  At this point I think it's easiest
> to debug it on top of the whole pending SVM patches that remove
> exit_required completely (and exit_required is not coming back anyway).

Ok so there could be two bugs, as the hang seems to happen much earlier,
later in the series (try "grep int_ctl:.0x.....1.." on the trace).

As one could guess from the grep, one thing that is certainly different
between KVM and Hyper-V is that Hyper-V injects interrupts using
int_ctl (sometimes it also uses eventinj, but presumably it's just
copying it from exitintinfo).

This could cause problems: for example, when L1 wants to inject a
virtual interrupt into L2 that has interrupts disabled or V_TPR >=
V_INTR_PRIO, and KVM also wants to inject an interrupt to L1, then KVM
might end up stomping on Hyper-V's int_ctl.  However I cannot think
off-hand of a scenario where this could happen in this case, because
Hyper-V does set EXIT_INTR and therefore we should never get into
enable_irq_window while L2 is running.  Still, that's one place where
I'd start adding some trace_printk's.
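
For reference, the int_ctl bits involved (values as in
arch/x86/include/asm/svm.h; presumably the grep above keys on the
nibble that contains V_IRQ):

	#define V_TPR_MASK		0x0f	/* virtual TPR */
	#define V_IRQ_SHIFT		8	/* virtual interrupt pending */
	#define V_IRQ_MASK		(1 << V_IRQ_SHIFT)
	#define V_INTR_PRIO_SHIFT	16	/* priority of pending vINTR */
	#define V_INTR_PRIO_MASK	(0x0f << V_INTR_PRIO_SHIFT)
	#define V_INTR_MASKING_SHIFT	24	/* virtualize EFLAGS.IF */
	#define V_INTR_MASKING_MASK	(1 << V_INTR_MASKING_SHIFT)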

Also, if a uniprocessor guest also fails, it might be easier to debug.

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread
