* [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block
@ 2022-08-22 17:06 Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 1/7] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

The following backtrace:

[ 1355.807187]  kvm_vcpu_map+0x159/0x190 [kvm]
[ 1355.807628]  nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
[ 1355.808036]  ? kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.808450]  svm_check_nested_events+0x97/0x390 [kvm_amd]
[ 1355.808920]  kvm_check_nested_events+0x1c/0x40 [kvm] 
[ 1355.809396]  kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
[ 1355.809892]  kvm_vcpu_check_block+0x4f/0x100 [kvm]
[ 1355.811259]  kvm_vcpu_block+0x6b/0xa0 [kvm] 

can occur due to kmap being called in non-sleepable (!TASK_RUNNING) context.
The fix is to extend kvm_x86_ops->nested_ops.hv_timer_pending() to cover
all events not already checked in kvm_arch_vcpu_runnable(), and then
get rid of the annoying (and wrong) call to kvm_check_nested_events()
from kvm_vcpu_check_block().
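
For reference, this is roughly the blocking loop in virt/kvm/kvm_main.c
(abridged; preemption and statistics bookkeeping omitted).  Anything
reached from kvm_vcpu_check_block() runs after the task state has
already been changed, and therefore must not sleep:

    bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
    {
        struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
        bool waited = false;

        prepare_to_rcuwait(wait);
        for (;;) {
            set_current_state(TASK_INTERRUPTIBLE);
            if (kvm_vcpu_check_block(vcpu) < 0)  /* !TASK_RUNNING here */
                break;
            waited = true;
            schedule();
        }
        finish_rcuwait(wait);
        return waited;
    }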

Beware, this is not a complete fix, because kvm_guest_apic_has_interrupt()
might still _read_ memory from non-sleepable context.  The fix here is
probably to make kvm_arch_vcpu_runnable() return -EAGAIN, and in that
case do a round of kvm_vcpu_check_block() polling in sleepable context.
Nevertheless, it is a good start as it pushes the vmexit into vcpu_block().
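
Purely as a sketch of that idea (nothing below exists in this series;
the -EAGAIN convention and the loop structure are hypothetical, and
wakeup races are glossed over):

    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        r = kvm_vcpu_check_block(vcpu);
        if (r == -EAGAIN) {
            /* Cannot decide without sleeping: redo the check in
             * sleepable context before committing to the wait.
             */
            __set_current_state(TASK_RUNNING);
            r = kvm_vcpu_check_block(vcpu);
            set_current_state(TASK_INTERRUPTIBLE);
        }
        if (r < 0)
            break;
        schedule();
    }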

The series also does a small cleanup pass on kvm_vcpu_check_block(),
removing KVM_REQ_UNHALT in favor of simply calling kvm_arch_vcpu_runnable()
again.  Now that kvm_check_nested_events() is not called anymore by
kvm_arch_vcpu_runnable(), it is much easier to see that KVM will never
consume the event that caused kvm_vcpu_has_events() to return true,
and therefore it is safe to evaluate it again.
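
Concretely, after patch 6 the x86 halt path checks the vCPU state
directly instead of consuming a request (abridged from that patch):

    kvm_vcpu_halt(vcpu);
    ...
    /*
     * If the vCPU is not runnable, a signal or another host event
     * of some kind is pending; service it without changing the
     * vCPU's activity state.
     */
    if (!kvm_arch_vcpu_runnable(vcpu))
        return 1;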

The alternative of propagating the return value of
kvm_arch_vcpu_runnable() up to kvm_vcpu_{block,halt}() is inferior
because it does not quite get right the edge cases where the vCPU becomes
runnable right before schedule() or right after kvm_vcpu_check_block().
While these edge cases are unlikely to truly matter in practice, it is
also pointless to get them "wrong".

Paolo

v2->v3: do not propagate the return value of
	kvm_arch_vcpu_runnable() up to kvm_vcpu_{block,halt}()

	move and reformat the comment in vcpu_block()

	move KVM_REQ_UNHALT removal last

Paolo Bonzini (6):
  KVM: x86: check validity of argument to KVM_SET_MP_STATE
  KVM: x86: make vendor code check for all nested events
  KVM: x86: lapic does not have to process INIT if it is blocked
  KVM: x86: never write to memory from kvm_vcpu_check_block
  KVM: mips, x86: do not rely on KVM_REQ_UNHALT
  KVM: remove KVM_REQ_UNHALT

Sean Christopherson (1):
  KVM: nVMX: Make an event request when pending an MTF nested VM-Exit

 Documentation/virt/kvm/vcpu-requests.rst | 28 +------------
 arch/arm64/kvm/arm.c                     |  1 -
 arch/mips/kvm/emulate.c                  |  6 +--
 arch/powerpc/kvm/book3s_pr.c             |  1 -
 arch/powerpc/kvm/book3s_pr_papr.c        |  1 -
 arch/powerpc/kvm/booke.c                 |  1 -
 arch/powerpc/kvm/powerpc.c               |  1 -
 arch/riscv/kvm/vcpu_insn.c               |  1 -
 arch/s390/kvm/kvm-s390.c                 |  2 -
 arch/x86/include/asm/kvm_host.h          |  3 +-
 arch/x86/kvm/i8259.c                     |  4 +-
 arch/x86/kvm/lapic.h                     |  2 +-
 arch/x86/kvm/vmx/nested.c                |  9 +++-
 arch/x86/kvm/vmx/vmx.c                   |  6 ++-
 arch/x86/kvm/x86.c                       | 53 ++++++++++++++++++------
 arch/x86/kvm/x86.h                       |  5 ---
 arch/x86/kvm/xen.c                       |  1 -
 include/linux/kvm_host.h                 |  3 +-
 virt/kvm/kvm_main.c                      |  4 +-
 19 files changed, 63 insertions(+), 69 deletions(-)

-- 
2.31.1



* [PATCH v3 1/7] KVM: x86: check validity of argument to KVM_SET_MP_STATE
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 2/7] KVM: x86: make vendor code check for all nested events Paolo Bonzini
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

An invalid argument to KVM_SET_MP_STATE has no effect other than making the
vCPU fail to run at the next KVM_RUN.  Since it is extremely unlikely that
any userspace is relying on it, fail with -EINVAL just like for other
architectures.
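
For illustration only (vcpu_fd and the bogus value are invented here,
not taken from a real test), userspace now sees the error at the ioctl
itself rather than at the next KVM_RUN:

    struct kvm_mp_state state = { .mp_state = 0x1234 };  /* invalid */

    /* Before this patch, with an in-kernel LAPIC the value was accepted
     * and the vCPU failed to run later; now the ioctl fails immediately.
     */
    ret = ioctl(vcpu_fd, KVM_SET_MP_STATE, &state);
    /* ret == -1, errno == EINVAL */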

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7374d768296..4d719ba21553 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10652,7 +10652,8 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 	case KVM_MP_STATE_INIT_RECEIVED:
 		break;
 	default:
-		return -EINTR;
+		WARN_ON_ONCE(1);
+		break;
 	}
 	return 1;
 }
@@ -11093,9 +11094,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 
 	vcpu_load(vcpu);
 
-	if (!lapic_in_kernel(vcpu) &&
-	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
+	switch (mp_state->mp_state) {
+	case KVM_MP_STATE_UNINITIALIZED:
+	case KVM_MP_STATE_HALTED:
+	case KVM_MP_STATE_AP_RESET_HOLD:
+	case KVM_MP_STATE_INIT_RECEIVED:
+	case KVM_MP_STATE_SIPI_RECEIVED:
+		if (!lapic_in_kernel(vcpu))
+			goto out;
+		break;
+
+	case KVM_MP_STATE_RUNNABLE:
+		break;
+
+	default:
 		goto out;
+	}
 
 	/*
 	 * KVM_MP_STATE_INIT_RECEIVED means the processor is in
-- 
2.31.1




* [PATCH v3 2/7] KVM: x86: make vendor code check for all nested events
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 1/7] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

Interrupts, NMIs etc. sent while in guest mode are already handled
properly by the *_interrupt_allowed callbacks, but there are other
events, specific to guest mode, that can make a vCPU runnable.

In the case of VMX there are two: the preemption timer and the
monitor trap flag (MTF).  The VMX preemption timer is already special
cased via the hv_timer_pending callback, but the purpose of the
callback can easily be extended to MTF, or in fact to any other event
that can occur only in guest mode.

Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
can now return true if an MTF is pending, without relying on
kvm_vcpu_running()'s call to kvm_check_nested_events().  Until that
call is removed, however, the patch introduces no functional change.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
 arch/x86/kvm/x86.c              | 8 ++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..f57ff10c3ae8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
 	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
 					     struct x86_exception *fault);
-	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+	bool (*has_events)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ddd4367d4826..9631cdcdd058 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 	       to_vmx(vcpu)->nested.preemption_timer_expired;
 }
 
+static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6816,7 +6823,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
 	.handle_page_fault_workaround = nested_vmx_handle_page_fault_workaround,
-	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+	.has_events = vmx_has_nested_events,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4d719ba21553..95035c029def 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9788,8 +9788,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	}
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;
 
 	WARN_ON(vcpu->arch.exception.pending);
@@ -12605,8 +12605,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;
 
 	if (kvm_xen_has_pending_events(vcpu))
-- 
2.31.1




* [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 1/7] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 2/7] KVM: x86: make vendor code check for all nested events Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:52   ` Jim Mattson
  2022-08-22 17:06 ` [PATCH v3 4/7] KVM: x86: lapic does not have to process INIT if it is blocked Paolo Bonzini
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc, stable

From: Sean Christopherson <seanjc@google.com>

Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
through inject_pending_event() and thus vmx_check_nested_events() prior
to re-entering the guest.

MTF currently works by virtue of KVM's hack that calls
kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
be removed in the near future.  Until that call is removed, the patch
introduces no functional change.

Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c9b49a09e6b5..d22e67ac1e69 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1659,10 +1659,12 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
 	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	     vcpu->arch.exception.nr == DB_VECTOR)) {
 		vmx->nested.mtf_pending = true;
-	else
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	} else {
 		vmx->nested.mtf_pending = false;
+	}
 }
 
 static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
-- 
2.31.1




* [PATCH v3 4/7] KVM: x86: lapic does not have to process INIT if it is blocked
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (2 preceding siblings ...)
  2022-08-22 17:06 ` [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 5/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

Do not return true from kvm_apic_has_events, and consequently from
kvm_vcpu_has_events, if the vCPU is not going to process an INIT.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/i8259.c            | 4 ++--
 arch/x86/kvm/lapic.h            | 2 +-
 arch/x86/kvm/x86.c              | 5 +++++
 arch/x86/kvm/x86.h              | 5 -----
 5 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f57ff10c3ae8..bf23cd9dea79 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2042,6 +2042,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu);
 
 bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
 			     struct kvm_vcpu **dest_vcpu);
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index e1bb6218bb96..619520322533 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -29,9 +29,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/bitops.h>
-#include "irq.h"
-
 #include <linux/kvm_host.h>
+
+#include "irq.h"
 #include "trace.h"
 
 #define pr_pic_unimpl(fmt, ...)	\
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 117a46df5cc1..12577ddccdfc 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -225,7 +225,7 @@ static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
 {
-	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
+	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events && !kvm_vcpu_latch_init(vcpu);
 }
 
 static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95035c029def..5c5341110876 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12572,6 +12572,11 @@ static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
 }
 
+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
+{
+	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+}
+
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 {
 	if (!list_empty_careful(&vcpu->async_pf.done))
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1926d2cb8e79..c333e7cf933a 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -267,11 +267,6 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }
 
-static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
-{
-	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
-}
-
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 u64 get_kvmclock_ns(struct kvm *kvm);
-- 
2.31.1




* [PATCH v3 5/7] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (3 preceding siblings ...)
  2022-08-22 17:06 ` [PATCH v3 4/7] KVM: x86: lapic does not have to process INIT if it is blocked Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 6/7] KVM: mips, x86: do not rely on KVM_REQ_UNHALT Paolo Bonzini
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
it cannot sleep.  Writing to guest memory is therefore forbidden, but it
can happen on AMD processors if kvm_check_nested_events() causes a vmexit.

Fortunately, all events that are caught by kvm_check_nested_events() are
also recognized by kvm_vcpu_has_events() through vendor callbacks such as
kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
remove the call and postpone the actual processing to vcpu_block().

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5c5341110876..f901cd872b6d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10637,6 +10637,15 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 			return 1;
 	}
 
+	/*
+	 * Evaluate nested events before exiting the halted state.  This allows
+	 * the halt state to be recorded properly in the VMCS12's activity
+	 * state field (AMD does not have a similar field and a VM-Exit always
+	 * causes a spurious wakeup from HLT).
+	 */
+	if (is_guest_mode(vcpu))
+		kvm_check_nested_events(vcpu);
+
 	if (kvm_apic_accept_events(vcpu) < 0)
 		return 0;
 	switch(vcpu->arch.mp_state) {
@@ -10660,9 +10669,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
-		kvm_check_nested_events(vcpu);
-
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted);
 }
-- 
2.31.1




* [PATCH v3 6/7] KVM: mips, x86: do not rely on KVM_REQ_UNHALT
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (4 preceding siblings ...)
  2022-08-22 17:06 ` [PATCH v3 5/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-08-22 17:06 ` [PATCH v3 7/7] KVM: remove KVM_REQ_UNHALT Paolo Bonzini
  2022-09-08 15:36 ` [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Sean Christopherson
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

KVM_REQ_UNHALT is a weird request that simply reports the value of
kvm_arch_vcpu_runnable() on exit from kvm_vcpu_halt().  Only
MIPS and x86 are looking at it, the others just clear it.  Check
the state of the vCPU directly so that the request is handled
as a nop on all architectures.

No functional change intended, except for corner cases where an
event arrives immediately after a signal becomes pending or after
another similar host-side event.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/mips/kvm/emulate.c | 7 +++----
 arch/x86/kvm/x86.c      | 9 ++++++++-
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index b494d8d39290..1d7c56defe93 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -955,13 +955,12 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 		kvm_vcpu_halt(vcpu);
 
 		/*
-		 * We we are runnable, then definitely go off to user space to
+		 * If we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		if (kvm_arch_vcpu_runnable(vcpu))
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
-		}
 	}
 
 	return EMULATE_DONE;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f901cd872b6d..66ae2f2cb618 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10633,7 +10633,14 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);
 
-		if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+
+		/*
+		 * If the vCPU is not runnable, a signal or another host event
+		 * of some kind is pending; service it without changing the
+		 * vCPU's activity state.
+		 */
+		if (!kvm_arch_vcpu_runnable(vcpu))
 			return 1;
 	}
 
-- 
2.31.1




* [PATCH v3 7/7] KVM: remove KVM_REQ_UNHALT
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (5 preceding siblings ...)
  2022-08-22 17:06 ` [PATCH v3 6/7] KVM: mips, x86: do not rely on KVM_REQ_UNHALT Paolo Bonzini
@ 2022-08-22 17:06 ` Paolo Bonzini
  2022-09-08 15:36 ` [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Sean Christopherson
  7 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 17:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

KVM_REQ_UNHALT is now unnecessary: the previous patch switched its
only consumers (MIPS and x86) to checking kvm_arch_vcpu_runnable()
directly, and every other architecture simply cleared it.  Remove it.

No functional change intended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/vcpu-requests.rst | 28 +-----------------------
 arch/arm64/kvm/arm.c                     |  1 -
 arch/mips/kvm/emulate.c                  |  1 -
 arch/powerpc/kvm/book3s_pr.c             |  1 -
 arch/powerpc/kvm/book3s_pr_papr.c        |  1 -
 arch/powerpc/kvm/booke.c                 |  1 -
 arch/powerpc/kvm/powerpc.c               |  1 -
 arch/riscv/kvm/vcpu_insn.c               |  1 -
 arch/s390/kvm/kvm-s390.c                 |  2 --
 arch/x86/kvm/x86.c                       |  3 ---
 arch/x86/kvm/xen.c                       |  1 -
 include/linux/kvm_host.h                 |  3 +--
 virt/kvm/kvm_main.c                      |  4 +---
 13 files changed, 3 insertions(+), 45 deletions(-)

diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/kvm/vcpu-requests.rst
index 31f62b64e07b..87f04c1fa53d 100644
--- a/Documentation/virt/kvm/vcpu-requests.rst
+++ b/Documentation/virt/kvm/vcpu-requests.rst
@@ -97,7 +97,7 @@ VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
 This means general bitops, like those documented in [atomic-ops]_ could
 also be used, e.g. ::
 
-  clear_bit(KVM_REQ_UNHALT & KVM_REQUEST_MASK, &vcpu->requests);
+  clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);
 
 However, VCPU request users should refrain from doing so, as it would
 break the abstraction.  The first 8 bits are reserved for architecture
@@ -126,17 +126,6 @@ KVM_REQ_UNBLOCK
   or in order to update the interrupt routing and ensure that assigned
   devices will wake up the vCPU.
 
-KVM_REQ_UNHALT
-
-  This request may be made from the KVM common function kvm_vcpu_block(),
-  which is used to emulate an instruction that causes a CPU to halt until
-  one of an architectural specific set of events and/or interrupts is
-  received (determined by checking kvm_arch_vcpu_runnable()).  When that
-  event or interrupt arrives kvm_vcpu_block() makes the request.  This is
-  in contrast to when kvm_vcpu_block() returns due to any other reason,
-  such as a pending signal, which does not indicate the VCPU's halt
-  emulation should stop, and therefore does not make the request.
-
 KVM_REQ_OUTSIDE_GUEST_MODE
 
   This "request" ensures the target vCPU has exited guest mode prior to the
@@ -297,21 +286,6 @@ architecture dependent.  kvm_vcpu_block() calls kvm_arch_vcpu_runnable()
 to check if it should awaken.  One reason to do so is to provide
 architectures a function where requests may be checked if necessary.
 
-Clearing Requests
------------------
-
-Generally it only makes sense for the receiving VCPU thread to clear a
-request.  However, in some circumstances, such as when the requesting
-thread and the receiving VCPU thread are executed serially, such as when
-they are the same thread, or when they are using some form of concurrency
-control to temporarily execute synchronously, then it's possible to know
-that the request may be cleared immediately, rather than waiting for the
-receiving VCPU thread to handle the request in VCPU RUN.  The only current
-examples of this are kvm_vcpu_block() calls made by VCPUs to block
-themselves.  A possible side-effect of that call is to make the
-KVM_REQ_UNHALT request, which may then be cleared immediately when the
-VCPU returns from the call.
-
 References
 ==========
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2ff0ef62abad..4f949b64fdc9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -666,7 +666,6 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
 
 	kvm_vcpu_halt(vcpu);
 	vcpu_clear_flag(vcpu, IN_WFIT);
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 
 	preempt_disable();
 	vgic_v4_load(vcpu);
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index 1d7c56defe93..edaec93a1a1f 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -958,7 +958,6 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 		 * If we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		if (kvm_arch_vcpu_runnable(vcpu))
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
 	}
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d6abed6e51e6..9fc4dd8f66eb 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -499,7 +499,6 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 	if (msr & MSR_POW) {
 		if (!vcpu->arch.pending_exceptions) {
 			kvm_vcpu_halt(vcpu);
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 			vcpu->stat.generic.halt_wakeup++;
 
 			/* Unset POW bit after we woke up */
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index a1f2978b2a86..b2c89e850d7a 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -393,7 +393,6 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
 	case H_CEDE:
 		kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE);
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		vcpu->stat.generic.halt_wakeup++;
 		return EMULATE_DONE;
 	case H_LOGICAL_CI_LOAD:
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 06c5830a93f9..7b4920e9fd26 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -719,7 +719,6 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.shared->msr & MSR_WE) {
 		local_irq_enable();
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		hard_irq_disable();
 
 		kvmppc_set_exit_type(vcpu, EMULATED_MTMSRWE_EXITS);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index fb1490761c87..ec9c1e3c2ff4 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -239,7 +239,6 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 	case EV_HCALL_TOKEN(EV_IDLE):
 		r = EV_SUCCESS;
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		break;
 	default:
 		r = EV_UNIMPLEMENTED;
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 7eb90a47b571..0bb52761a3f7 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -191,7 +191,6 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu)
 		kvm_vcpu_srcu_read_unlock(vcpu);
 		kvm_vcpu_halt(vcpu);
 		kvm_vcpu_srcu_read_lock(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 }
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index edfd4bbd0cba..aa39ea4582bd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4343,8 +4343,6 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 		goto retry;
 	}
 
-	/* nothing to do, just clear the request */
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	/* we left the vsie handler, nothing to do, just clear the request */
 	kvm_clear_request(KVM_REQ_VSIE_RESTART, vcpu);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 66ae2f2cb618..762cdd08018a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10633,8 +10633,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);
 
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
-
 		/*
 		 * If the vCPU is not runnable, a signal or another host event
 		 * of some kind is pending; service it without changing the
@@ -10852,7 +10850,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		r = -EAGAIN;
 		if (signal_pending(current)) {
 			r = -EINTR;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..93c628d3e3a9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1065,7 +1065,6 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
 			del_timer(&vcpu->arch.xen.poll_timer);
 
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 
 	vcpu->arch.xen.poll_evtchn = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f4519d3689e1..4d35d2e3e440 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,12 +151,11 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQUEST_NO_ACTION      BIT(10)
 /*
  * Architecture-independent vcpu->requests bit members
- * Bits 4-7 are reserved for more arch-independent bits.
+ * Bits 3-7 are reserved for more arch-independent bits.
  */
 #define KVM_REQ_TLB_FLUSH         (0 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_VM_DEAD           (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UNBLOCK           2
-#define KVM_REQ_UNHALT            3
 #define KVM_REQUEST_ARCH_BASE     8
 
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 584a5bab3af3..dc326e4961d9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3409,10 +3409,8 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 	int ret = -EINTR;
 	int idx = srcu_read_lock(&vcpu->kvm->srcu);
 
-	if (kvm_arch_vcpu_runnable(vcpu)) {
-		kvm_make_request(KVM_REQ_UNHALT, vcpu);
+	if (kvm_arch_vcpu_runnable(vcpu))
 		goto out;
-	}
 	if (kvm_cpu_has_pending_timer(vcpu))
 		goto out;
 	if (signal_pending(current))
-- 
2.31.1



* Re: [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
  2022-08-22 17:06 ` [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
@ 2022-08-22 17:52   ` Jim Mattson
  2022-08-22 19:40     ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Jim Mattson @ 2022-08-22 17:52 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, seanjc, stable

On Mon, Aug 22, 2022 at 10:08 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> From: Sean Christopherson <seanjc@google.com>
>
> Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
> through inject_pending_event() and thus vmx_check_nested_events() prior
> to re-entering the guest.
>
> MTF currently works by virtue of KVM's hack that calls
> kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
> be removed in the near future.  Until that call is removed, the patch
> introduces no functional change.
>
> Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
What happens if live migration occurs before the KVM_REQ_EVENT is processed?


* Re: [PATCH v3 3/7] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
  2022-08-22 17:52   ` Jim Mattson
@ 2022-08-22 19:40     ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-08-22 19:40 UTC (permalink / raw)
  To: Jim Mattson; +Cc: linux-kernel, kvm, mlevitsk, seanjc, stable

On 8/22/22 19:52, Jim Mattson wrote:
> On Mon, Aug 22, 2022 at 10:08 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> From: Sean Christopherson <seanjc@google.com>
>>
>> Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
>> through inject_pending_event() and thus vmx_check_nested_events() prior
>> to re-entering the guest.
>>
>> MTF currently works by virtue of KVM's hack that calls
>> kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
>> be removed in the near future.  Until that call is removed, the patch
>> introduces no functional change.
>>
>> Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Sean Christopherson <seanjc@google.com>
>> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
> What happens if live migration occurs before the KVM_REQ_EVENT is processed?

Good point, this must be squashed in:

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ab135f9ef52f..a703b71d675d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6481,6 +6481,8 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
  	if (ret)
  		goto error_guest_mode;
  
+	if (vmx->nested.mtf_pending)
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
  	return 0;
  
  error_guest_mode:

so that the other side restores the request.

Paolo



* Re: [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-22 17:06 [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (6 preceding siblings ...)
  2022-08-22 17:06 ` [PATCH v3 7/7] KVM: remove KVM_REQ_UNHALT Paolo Bonzini
@ 2022-09-08 15:36 ` Sean Christopherson
  2022-09-17  1:02   ` Sean Christopherson
  7 siblings, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2022-09-08 15:36 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk

On Mon, Aug 22, 2022, Paolo Bonzini wrote:
> The following backtrace:
> Paolo Bonzini (6):
>   KVM: x86: check validity of argument to KVM_SET_MP_STATE

Skipping this one since it's already in 6.0 and AFAICT isn't strictly necessary
for the rest of the series (shouldn't matter anyways?).

>   KVM: x86: make vendor code check for all nested events
>   KVM: x86: lapic does not have to process INIT if it is blocked
>   KVM: x86: never write to memory from kvm_vcpu_check_block
>   KVM: mips, x86: do not rely on KVM_REQ_UNHALT
>   KVM: remove KVM_REQ_UNHALT
> 
> Sean Christopherson (1):
>   KVM: nVMX: Make an event request when pending an MTF nested VM-Exit

Pushed to branch `for_paolo/6.1` at:

    https://github.com/sean-jc/linux.git

with a cosmetic cleanup to kvm_apic_has_events() and the MTF migration fix squashed
in.


* Re: [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-09-08 15:36 ` [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block Sean Christopherson
@ 2022-09-17  1:02   ` Sean Christopherson
  2022-09-20  1:15     ` Sean Christopherson
  0 siblings, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2022-09-17  1:02 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk

On Thu, Sep 08, 2022, Sean Christopherson wrote:
> On Mon, Aug 22, 2022, Paolo Bonzini wrote:
> > The following backtrace:
> > Paolo Bonzini (6):
> >   KVM: x86: check validity of argument to KVM_SET_MP_STATE
> 
> Skipping this one since it's already in 6.0 and AFAICT isn't strictly necessary
> for the rest of the series (shouldn't matter anyways?).
> 
> >   KVM: x86: make vendor code check for all nested events
> >   KVM: x86: lapic does not have to process INIT if it is blocked
> >   KVM: x86: never write to memory from kvm_vcpu_check_block
> >   KVM: mips, x86: do not rely on KVM_REQ_UNHALT
> >   KVM: remove KVM_REQ_UNHALT
> > 
> > Sean Christopherson (1):
> >   KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
> 
> Pushed to branch `for_paolo/6.1` at:
> 
>     https://github.com/sean-jc/linux.git
> 
> with a cosmetic cleanup to kvm_apic_has_events() and the MTF migration fix squashed
> in.

Oh the irony about complaining that people waste maintainers' time by not running
existing tests :-)  I suppose it's not technically ironic since I was the one doing
the actual complaining, but it's still hilarious.

The eponymous patch breaks handling of INITs (and SIPIs) that are "latched"[1]
and later become unblocked, e.g. due to entering VMX non-root mode or because SVM's
GIF is set.  vmx_init_signal_test fails because KVM fails to re-evaluate pending
events after entering guest/non-root.  It passes now because KVM always checks
nested events in the outer run loop.
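
For reference, the check added by "KVM: x86: lapic does not have to
process INIT if it is blocked" (reflowed here; the scenario comments
are added for illustration, not from the patch):

    static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
    {
        return lapic_in_kernel(vcpu) &&
               vcpu->arch.apic->pending_events &&  /* INIT/SIPI latched... */
               !kvm_vcpu_latch_init(vcpu);         /* ...but currently blocked */
    }

Once the INIT becomes unblocked (GIF set, or VMX non-root mode entered),
nothing re-evaluates the still-pending events, so the vCPU can stay
blocked indefinitely.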

I have fixes, I'll (temporarily) drop this from the queue and post a new version of
this series on Monday.  As a reward to myself for bisecting and debugging, I'm going
to tweak "KVM: x86: lapic does not have to process INIT if it is blocked" to incorporate
my suggestions[2] from v2 so that the VMX and SVM code can check only for pending
INIT/SIPI and not include the blocking check to align with related checks that also
trigger KVM_REQ_EVENT (and because the resulting SVM GIF code would be quite fragile
if the blocking were incorporated).

[1] It annoys me to no end that KVM uses different terminology for INIT/SIPI versus
    everything else.
[2] https://lore.kernel.org/all/YvwxJzHC5xYnc7CJ@google.com


* Re: [PATCH v3 0/7] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-09-17  1:02   ` Sean Christopherson
@ 2022-09-20  1:15     ` Sean Christopherson
  0 siblings, 0 replies; 13+ messages in thread
From: Sean Christopherson @ 2022-09-20  1:15 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk

On Sat, Sep 17, 2022, Sean Christopherson wrote:
> The eponymous patch breaks handling of INITs (and SIPIs) that are "latched"[1]
> and later become unblocked, e.g. due to entering VMX non-root mode or because SVM's
> GIF is set.  vmx_init_signal_test fails because KVM fails to re-evaluate pending
> events after entering guest/non-root.  It passes now because KVM always checks
> nested events in the outer run loop.
> 
> I have fixes, I'll (temporarily) drop this from the queue and post a new version of
> this series on Monday.

And by "Monday" I meant "Tuesday", the weird pending_events snapshot thing sent me
down a bit of a rabbit hole.

