* [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block
@ 2022-08-11 21:05 Paolo Bonzini
  2022-08-11 21:05 ` [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
                   ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

It took a few months for me to get back to this, and it is only lightly
tested, but I wanted to send it out before disappearing for a long weekend.
The story here is that the following backtrace:

[ 1355.807187]  kvm_vcpu_map+0x159/0x190 [kvm]
[ 1355.807628]  nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
[ 1355.808036]  ? kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.808450]  svm_check_nested_events+0x97/0x390 [kvm_amd]
[ 1355.808920]  kvm_check_nested_events+0x1c/0x40 [kvm] 
[ 1355.809396]  kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
[ 1355.809892]  kvm_vcpu_check_block+0x4f/0x100 [kvm]
[ 1355.811259]  kvm_vcpu_block+0x6b/0xa0 [kvm] 

can occur due to kmap being called in non-sleepable (!TASK_RUNNING) context.
The fix is to extend kvm_x86_ops.nested_ops->hv_timer_pending() to cover
all events not already checked in kvm_arch_vcpu_runnable(), and then
get rid of the annoying (and wrong) call to kvm_check_nested_events()
from kvm_vcpu_check_block().

Beware, this is not a complete fix, because kvm_guest_apic_has_interrupt()
might still _read_ memory from non-sleepable context.  The fix here is
probably to make kvm_arch_vcpu_runnable() return -EAGAIN, and in that
case do a round of kvm_vcpu_check_block() polling in sleepable context.
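
Roughly, I am thinking of something like this in kvm_vcpu_block()
(just a sketch of the direction, untested, with the -EAGAIN propagation
through kvm_vcpu_check_block() left out):

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		r = kvm_vcpu_check_block(vcpu);
		if (r != 0)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	if (r == -EAGAIN) {
		/*
		 * The check could not be completed without accessing
		 * guest memory; redo it now that we can sleep.
		 */
		r = kvm_vcpu_check_block(vcpu);
	}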

Nevertheless, it is a good start as it pushes the vmexit into vcpu_block().
The series also does a small cleanup pass on kvm_vcpu_{block,halt}(),
removing KVM_REQ_UNHALT in favor of simply using the return value of
those functions (see the sketch below).  This turned out not to be
necessary for the fix, but I kept it because it is cleaner anyway and
it touches adjacent code.
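
A sketch of the new calling convention (illustrative only; patches 3-5
do the real conversions):

	-	kvm_vcpu_halt(vcpu);
	-	if (kvm_check_request(KVM_REQ_UNHALT, vcpu))
	-		/* the vCPU is runnable again */
	+	if (kvm_vcpu_halt(vcpu) > 0)
	+		/* the vCPU is runnable again */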

Paolo


Paolo Bonzini (8):
  KVM: x86: check validity of argument to KVM_SET_MP_STATE
  KVM: x86: remove return value of kvm_vcpu_block
  KVM: x86: make kvm_vcpu_{block,halt} return whether vCPU is runnable
  KVM: mips, x86: do not rely on KVM_REQ_UNHALT
  KVM: remove KVM_REQ_UNHALT
  KVM: x86: make vendor code check for all nested events
  KVM: x86: lapic does not have to process INIT if it is blocked
  KVM: x86: never write to memory from kvm_vcpu_check_block

Sean Christopherson (1):
  KVM: nVMX: Make an event request when pending an MTF nested VM-Exit

 Documentation/virt/kvm/vcpu-requests.rst | 28 +----------
 arch/arm64/kvm/arm.c                     |  1 -
 arch/mips/kvm/emulate.c                  |  7 ++-
 arch/powerpc/kvm/book3s_pr.c             |  1 -
 arch/powerpc/kvm/book3s_pr_papr.c        |  1 -
 arch/powerpc/kvm/booke.c                 |  1 -
 arch/powerpc/kvm/powerpc.c               |  1 -
 arch/riscv/kvm/vcpu_insn.c               |  1 -
 arch/s390/kvm/kvm-s390.c                 |  2 -
 arch/x86/include/asm/kvm_host.h          |  3 +-
 arch/x86/kvm/i8259.c                     |  2 +-
 arch/x86/kvm/lapic.h                     |  2 +-
 arch/x86/kvm/vmx/nested.c                |  9 +++-
 arch/x86/kvm/vmx/vmx.c                   |  6 ++-
 arch/x86/kvm/x86.c                       | 55 ++++++++++++++++------
 arch/x86/kvm/x86.h                       |  5 --
 arch/x86/kvm/xen.c                       |  1 -
 include/linux/kvm_host.h                 |  7 ++-
 virt/kvm/kvm_main.c                      | 59 ++++++++++++------------
 19 files changed, 94 insertions(+), 98 deletions(-)

-- 
2.31.1



* [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-08-11 21:05 ` Paolo Bonzini
  2022-08-15 13:31   ` Maxim Levitsky
  2022-08-11 21:05 ` [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block Paolo Bonzini
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

An invalid argument to KVM_SET_MP_STATE has no effect other than making the
vCPU fail to run at the next KVM_RUN.  Since it is extremely unlikely that
any userspace is relying on it, fail with -EINVAL just like for other
architectures.
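
For example, with an in-kernel LAPIC, something like this (illustrative
snippet, vcpu_fd being a vCPU file descriptor from KVM_CREATE_VCPU)
used to succeed and leave a vCPU that fails to run at the next KVM_RUN:

	struct kvm_mp_state mp_state = { .mp_state = 0x12345 };	/* bogus */

	ret = ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp_state);
	/* before: ret == 0; after this patch: ret == -1, errno == EINVAL */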

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 132d662d9713..c44348bb6ef2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10653,7 +10653,8 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 	case KVM_MP_STATE_INIT_RECEIVED:
 		break;
 	default:
-		return -EINTR;
+		WARN_ON(1);
+		break;
 	}
 	return 1;
 }
@@ -11094,9 +11095,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 
 	vcpu_load(vcpu);
 
-	if (!lapic_in_kernel(vcpu) &&
-	    mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
+	switch (mp_state->mp_state) {
+	case KVM_MP_STATE_UNINITIALIZED:
+	case KVM_MP_STATE_HALTED:
+	case KVM_MP_STATE_AP_RESET_HOLD:
+	case KVM_MP_STATE_INIT_RECEIVED:
+	case KVM_MP_STATE_SIPI_RECEIVED:
+		if (!lapic_in_kernel(vcpu))
+			goto out;
+		break;
+
+	case KVM_MP_STATE_RUNNABLE:
+		break;
+
+	default:
 		goto out;
+	}
 
 	/*
 	 * KVM_MP_STATE_INIT_RECEIVED means the processor is in
-- 
2.31.1




* [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-11 21:05 ` [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
@ 2022-08-11 21:05 ` Paolo Bonzini
  2022-08-16 23:34   ` Sean Christopherson
  2022-08-11 21:05 ` [PATCH v2 3/9] KVM: x86: make kvm_vcpu_{block,halt} return whether vCPU is runnable Paolo Bonzini
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

The return value of kvm_vcpu_block will be repurposed soon to return
the state of KVM_REQ_UNHALT.  In preparation for that, get rid of the
current return value.  It is only used by kvm_vcpu_halt to decide whether
the call resulted in a wait, but the same effect can be obtained with
a single round of polling.

No functional change intended, apart from practically indistinguishable
changes to the polling behavior.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      | 45 +++++++++++++++++-----------------------
 2 files changed, 20 insertions(+), 27 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1c480b1821e1..e7bd48d15db8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1339,7 +1339,7 @@ void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
 void kvm_vcpu_halt(struct kvm_vcpu *vcpu);
-bool kvm_vcpu_block(struct kvm_vcpu *vcpu);
+void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 515dfe9d3bcf..1f049c1d01b4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3429,10 +3429,9 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
  * pending.  This is mostly used when halting a vCPU, but may also be used
  * directly for other vCPU non-runnable states, e.g. x86's Wait-For-SIPI.
  */
-bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
+void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
-	bool waited = false;
 
 	vcpu->stat.generic.blocking = 1;
 
@@ -3447,7 +3446,6 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		if (kvm_vcpu_check_block(vcpu) < 0)
 			break;
 
-		waited = true;
 		schedule();
 	}
 
@@ -3457,8 +3455,6 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	preempt_enable();
 
 	vcpu->stat.generic.blocking = 0;
-
-	return waited;
 }
 
 static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
@@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 {
 	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
 	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
-	ktime_t start, cur, poll_end;
+	ktime_t start, cur, poll_end, stop;
 	bool waited = false;
 	u64 halt_ns;
 
 	start = cur = poll_end = ktime_get();
-	if (do_halt_poll) {
-		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
+	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
 
-		do {
-			/*
-			 * This sets KVM_REQ_UNHALT if an interrupt
-			 * arrives.
-			 */
-			if (kvm_vcpu_check_block(vcpu) < 0)
-				goto out;
-			cpu_relax();
-			poll_end = cur = ktime_get();
-		} while (kvm_vcpu_can_poll(cur, stop));
-	}
+	do {
+		/*
+		 * This sets KVM_REQ_UNHALT if an interrupt
+		 * arrives.
+		 */
+		if (kvm_vcpu_check_block(vcpu) < 0)
+			goto out;
+		cpu_relax();
+		poll_end = cur = ktime_get();
+	} while (kvm_vcpu_can_poll(cur, stop));
 
-	waited = kvm_vcpu_block(vcpu);
+	waited = true;
+	kvm_vcpu_block(vcpu);
 
 	cur = ktime_get();
-	if (waited) {
-		vcpu->stat.generic.halt_wait_ns +=
-			ktime_to_ns(cur) - ktime_to_ns(poll_end);
-		KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist,
-				ktime_to_ns(cur) - ktime_to_ns(poll_end));
-	}
+	vcpu->stat.generic.halt_wait_ns +=
+		ktime_to_ns(cur) - ktime_to_ns(poll_end);
+	KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.generic.halt_wait_hist,
+				  ktime_to_ns(cur) - ktime_to_ns(poll_end));
 out:
 	/* The total time the vCPU was "halted", including polling time. */
 	halt_ns = ktime_to_ns(cur) - ktime_to_ns(start);
-- 
2.31.1




* [PATCH v2 3/9] KVM: x86: make kvm_vcpu_{block,halt} return whether vCPU is runnable
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-11 21:05 ` [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
  2022-08-11 21:05 ` [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block Paolo Bonzini
@ 2022-08-11 21:05 ` Paolo Bonzini
  2022-08-11 21:06 ` [PATCH v2 4/9] KVM: mips, x86: do not rely on KVM_REQ_UNHALT Paolo Bonzini
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

This is currently returned via KVM_REQ_UNHALT, but this is completely
unnecessary since all that the callers do is clear the request; it
is never processed via the usual request loop.  The same condition
can be returned as a positive value from the functions.

No functional change intended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 23 ++++++++++++++++++-----
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e7bd48d15db8..cbd9577e5447 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1338,8 +1338,8 @@ void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
 
-void kvm_vcpu_halt(struct kvm_vcpu *vcpu);
-void kvm_vcpu_block(struct kvm_vcpu *vcpu);
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
+int kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1f049c1d01b4..e827805b7b28 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3402,6 +3402,12 @@ static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
 	trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
 }
 
+/*
+ * Returns zero if the vCPU should remain in a blocked state,
+ * nonzero if it has been woken up, specifically:
+ * - 1 if it is runnable
+ * - -EINTR if it is not runnable (e.g. has a signal or a timer pending)
+ */
 static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 {
 	int ret = -EINTR;
@@ -3409,6 +3415,7 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 
 	if (kvm_arch_vcpu_runnable(vcpu)) {
 		kvm_make_request(KVM_REQ_UNHALT, vcpu);
+		ret = 1;
 		goto out;
 	}
 	if (kvm_cpu_has_pending_timer(vcpu))
@@ -3429,9 +3436,10 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
  * pending.  This is mostly used when halting a vCPU, but may also be used
  * directly for other vCPU non-runnable states, e.g. x86's Wait-For-SIPI.
  */
-void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+int kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
+	int r;
 
 	vcpu->stat.generic.blocking = 1;
 
@@ -3443,7 +3451,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);
 
-		if (kvm_vcpu_check_block(vcpu) < 0)
+		r = kvm_vcpu_check_block(vcpu);
+		if (r != 0)
 			break;
 
 		schedule();
@@ -3455,6 +3464,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	preempt_enable();
 
 	vcpu->stat.generic.blocking = 0;
+	return r;
 }
 
 static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
@@ -3485,12 +3495,13 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
  * expensive block+unblock sequence if a wake event arrives soon after the vCPU
  * is halted.
  */
-void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 {
 	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
 	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
 	ktime_t start, cur, poll_end, stop;
 	bool waited = false;
+	int r;
 	u64 halt_ns;
 
 	start = cur = poll_end = ktime_get();
@@ -3501,14 +3512,15 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 		 * This sets KVM_REQ_UNHALT if an interrupt
 		 * arrives.
 		 */
-		if (kvm_vcpu_check_block(vcpu) < 0)
+		r = kvm_vcpu_check_block(vcpu);
+		if (r != 0)
 			goto out;
 		cpu_relax();
 		poll_end = cur = ktime_get();
 	} while (kvm_vcpu_can_poll(cur, stop));
 
 	waited = true;
-	kvm_vcpu_block(vcpu);
+	r = kvm_vcpu_block(vcpu);
 
 	cur = ktime_get();
 	vcpu->stat.generic.halt_wait_ns +=
@@ -3547,6 +3559,7 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	}
 
 	trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
+	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
 
-- 
2.31.1




* [PATCH v2 4/9] KVM: mips, x86: do not rely on KVM_REQ_UNHALT
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (2 preceding siblings ...)
  2022-08-11 21:05 ` [PATCH v2 3/9] KVM: x86: make kvm_vcpu_{block,halt} return whether vCPU is runnable Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-11 21:06 ` [PATCH v2 5/9] KVM: remove KVM_REQ_UNHALT Paolo Bonzini
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

The state of KVM_REQ_UNHALT is now available as the return value of
kvm_vcpu_halt() or kvm_vcpu_block(), so the request can simply be
cleared, just like all other architectures do.

No functional change intended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/mips/kvm/emulate.c | 8 ++++----
 arch/x86/kvm/x86.c      | 8 +++++---
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index b494d8d39290..77d760d45c48 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -944,6 +944,7 @@ enum hrtimer_restart kvm_mips_count_timeout(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 {
+	int r;
 	kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n", vcpu->arch.pc,
 		  vcpu->arch.pending_exceptions);
 
@@ -952,16 +953,15 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 	if (!vcpu->arch.pending_exceptions) {
 		kvm_vz_lose_htimer(vcpu);
 		vcpu->arch.wait = 1;
-		kvm_vcpu_halt(vcpu);
+		r = kvm_vcpu_halt(vcpu);
 
 		/*
 		 * We we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		if (r > 0)
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
-		}
 	}
 
 	return EMULATE_DONE;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c44348bb6ef2..416df0fc7fda 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10611,6 +10611,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 static inline int vcpu_block(struct kvm_vcpu *vcpu)
 {
 	bool hv_timer;
+	int r;
 
 	if (!kvm_arch_vcpu_runnable(vcpu)) {
 		/*
@@ -10626,15 +10627,16 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 
 		kvm_vcpu_srcu_read_unlock(vcpu);
 		if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
-			kvm_vcpu_halt(vcpu);
+			r = kvm_vcpu_halt(vcpu);
 		else
-			kvm_vcpu_block(vcpu);
+			r = kvm_vcpu_block(vcpu);
 		kvm_vcpu_srcu_read_lock(vcpu);
 
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);
 
-		if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
+		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		if (r <= 0)
 			return 1;
 	}
 
-- 
2.31.1




* [PATCH v2 5/9] KVM: remove KVM_REQ_UNHALT
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (3 preceding siblings ...)
  2022-08-11 21:06 ` [PATCH v2 4/9] KVM: mips, x86: do not rely on KVM_REQ_UNHALT Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-11 21:06 ` [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events Paolo Bonzini
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

KVM_REQ_UNHALT is now unnecessary because it is replaced by the return
value of kvm_vcpu_block()/kvm_vcpu_halt().  Remove it.

No functional change intended.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/vcpu-requests.rst | 28 +-----------------------
 arch/arm64/kvm/arm.c                     |  1 -
 arch/mips/kvm/emulate.c                  |  1 -
 arch/powerpc/kvm/book3s_pr.c             |  1 -
 arch/powerpc/kvm/book3s_pr_papr.c        |  1 -
 arch/powerpc/kvm/booke.c                 |  1 -
 arch/powerpc/kvm/powerpc.c               |  1 -
 arch/riscv/kvm/vcpu_insn.c               |  1 -
 arch/s390/kvm/kvm-s390.c                 |  2 --
 arch/x86/kvm/x86.c                       |  2 --
 arch/x86/kvm/xen.c                       |  1 -
 include/linux/kvm_host.h                 |  3 +--
 virt/kvm/kvm_main.c                      |  5 -----
 13 files changed, 2 insertions(+), 46 deletions(-)

diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/kvm/vcpu-requests.rst
index 31f62b64e07b..87f04c1fa53d 100644
--- a/Documentation/virt/kvm/vcpu-requests.rst
+++ b/Documentation/virt/kvm/vcpu-requests.rst
@@ -97,7 +97,7 @@ VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
 This means general bitops, like those documented in [atomic-ops]_ could
 also be used, e.g. ::
 
-  clear_bit(KVM_REQ_UNHALT & KVM_REQUEST_MASK, &vcpu->requests);
+  clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);
 
 However, VCPU request users should refrain from doing so, as it would
 break the abstraction.  The first 8 bits are reserved for architecture
@@ -126,17 +126,6 @@ KVM_REQ_UNBLOCK
   or in order to update the interrupt routing and ensure that assigned
   devices will wake up the vCPU.
 
-KVM_REQ_UNHALT
-
-  This request may be made from the KVM common function kvm_vcpu_block(),
-  which is used to emulate an instruction that causes a CPU to halt until
-  one of an architectural specific set of events and/or interrupts is
-  received (determined by checking kvm_arch_vcpu_runnable()).  When that
-  event or interrupt arrives kvm_vcpu_block() makes the request.  This is
-  in contrast to when kvm_vcpu_block() returns due to any other reason,
-  such as a pending signal, which does not indicate the VCPU's halt
-  emulation should stop, and therefore does not make the request.
-
 KVM_REQ_OUTSIDE_GUEST_MODE
 
   This "request" ensures the target vCPU has exited guest mode prior to the
@@ -297,21 +286,6 @@ architecture dependent.  kvm_vcpu_block() calls kvm_arch_vcpu_runnable()
 to check if it should awaken.  One reason to do so is to provide
 architectures a function where requests may be checked if necessary.
 
-Clearing Requests
------------------
-
-Generally it only makes sense for the receiving VCPU thread to clear a
-request.  However, in some circumstances, such as when the requesting
-thread and the receiving VCPU thread are executed serially, such as when
-they are the same thread, or when they are using some form of concurrency
-control to temporarily execute synchronously, then it's possible to know
-that the request may be cleared immediately, rather than waiting for the
-receiving VCPU thread to handle the request in VCPU RUN.  The only current
-examples of this are kvm_vcpu_block() calls made by VCPUs to block
-themselves.  A possible side-effect of that call is to make the
-KVM_REQ_UNHALT request, which may then be cleared immediately when the
-VCPU returns from the call.
-
 References
 ==========
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 986cee6fbc7f..0bbd1ce601a5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -666,7 +666,6 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
 
 	kvm_vcpu_halt(vcpu);
 	vcpu_clear_flag(vcpu, IN_WFIT);
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 
 	preempt_disable();
 	vgic_v4_load(vcpu);
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index 77d760d45c48..c11143586765 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -959,7 +959,6 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 		 * We we are runnable, then definitely go off to user space to
 		 * check if any I/O interrupts are pending.
 		 */
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		if (r > 0)
 			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
 	}
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d6abed6e51e6..9fc4dd8f66eb 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -499,7 +499,6 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 	if (msr & MSR_POW) {
 		if (!vcpu->arch.pending_exceptions) {
 			kvm_vcpu_halt(vcpu);
-			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 			vcpu->stat.generic.halt_wakeup++;
 
 			/* Unset POW bit after we woke up */
diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
index a1f2978b2a86..b2c89e850d7a 100644
--- a/arch/powerpc/kvm/book3s_pr_papr.c
+++ b/arch/powerpc/kvm/book3s_pr_papr.c
@@ -393,7 +393,6 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
 	case H_CEDE:
 		kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE);
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		vcpu->stat.generic.halt_wakeup++;
 		return EMULATE_DONE;
 	case H_LOGICAL_CI_LOAD:
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 06c5830a93f9..7b4920e9fd26 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -719,7 +719,6 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.shared->msr & MSR_WE) {
 		local_irq_enable();
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		hard_irq_disable();
 
 		kvmppc_set_exit_type(vcpu, EMULATED_MTMSRWE_EXITS);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 191992fcb2c2..3c384ed1d9fc 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -238,7 +238,6 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 	case EV_HCALL_TOKEN(EV_IDLE):
 		r = EV_SUCCESS;
 		kvm_vcpu_halt(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		break;
 	default:
 		r = EV_UNIMPLEMENTED;
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 7eb90a47b571..0bb52761a3f7 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -191,7 +191,6 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu)
 		kvm_vcpu_srcu_read_unlock(vcpu);
 		kvm_vcpu_halt(vcpu);
 		kvm_vcpu_srcu_read_lock(vcpu);
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 }
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index edfd4bbd0cba..aa39ea4582bd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4343,8 +4343,6 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 		goto retry;
 	}
 
-	/* nothing to do, just clear the request */
-	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	/* we left the vsie handler, nothing to do, just clear the request */
 	kvm_clear_request(KVM_REQ_VSIE_RESTART, vcpu);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 416df0fc7fda..7f084613fac8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10635,7 +10635,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 		if (hv_timer)
 			kvm_lapic_switch_to_hv_timer(vcpu);
 
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		if (r <= 0)
 			return 1;
 	}
@@ -10842,7 +10841,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			r = 0;
 			goto out;
 		}
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 		r = -EAGAIN;
 		if (signal_pending(current)) {
 			r = -EINTR;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..93c628d3e3a9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1065,7 +1065,6 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
 			del_timer(&vcpu->arch.xen.poll_timer);
 
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
-		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
 
 	vcpu->arch.xen.poll_evtchn = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cbd9577e5447..cfe46830783f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -151,12 +151,11 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQUEST_NO_ACTION      BIT(10)
 /*
  * Architecture-independent vcpu->requests bit members
- * Bits 4-7 are reserved for more arch-independent bits.
+ * Bits 3-7 are reserved for more arch-independent bits.
  */
 #define KVM_REQ_TLB_FLUSH         (0 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_VM_DEAD           (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UNBLOCK           2
-#define KVM_REQ_UNHALT            3
 #define KVM_REQUEST_ARCH_BASE     8
 
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e827805b7b28..18292f028536 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3414,7 +3414,6 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 	int idx = srcu_read_lock(&vcpu->kvm->srcu);
 
 	if (kvm_arch_vcpu_runnable(vcpu)) {
-		kvm_make_request(KVM_REQ_UNHALT, vcpu);
 		ret = 1;
 		goto out;
 	}
@@ -3508,10 +3507,6 @@ int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
 
 	do {
-		/*
-		 * This sets KVM_REQ_UNHALT if an interrupt
-		 * arrives.
-		 */
 		r = kvm_vcpu_check_block(vcpu);
 		if (r != 0)
 			goto out;
-- 
2.31.1




* [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (4 preceding siblings ...)
  2022-08-11 21:06 ` [PATCH v2 5/9] KVM: remove KVM_REQ_UNHALT Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-16 23:47   ` Sean Christopherson
  2022-08-17 14:10   ` Maxim Levitsky
  2022-08-11 21:06 ` [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

Interrupts, NMIs etc. sent while in guest mode are already handled
properly by the *_interrupt_allowed callbacks, but other events that
are specific to guest mode can also make a vCPU runnable.

In the case of VMX there are two, the preemption timer and the
monitor trap.  The VMX preemption timer is already special cased via
the hv_timer_pending callback, but the purpose of the callback can be
easily extended to MTF or in fact any other event that can occur only
in guest mode.

Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
now will return true if an MTF is pending, without relying on
kvm_vcpu_running()'s call to kvm_check_nested_events().  Until that call
is removed, however, the patch introduces no functional change.

Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
 arch/x86/kvm/x86.c              | 8 ++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5ffa578cafe1..293ff678fff5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
 	int (*check_events)(struct kvm_vcpu *vcpu);
 	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
 					     struct x86_exception *fault);
-	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+	bool (*has_events)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ddd4367d4826..9631cdcdd058 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 	       to_vmx(vcpu)->nested.preemption_timer_expired;
 }
 
+static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6816,7 +6823,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
 	.handle_page_fault_workaround = nested_vmx_handle_page_fault_workaround,
-	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+	.has_events = vmx_has_nested_events,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7f084613fac8..0f9f24793b8a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9789,8 +9789,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	}
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;
 
 	WARN_ON(vcpu->arch.exception.pending);
@@ -12562,8 +12562,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;
 
 	if (kvm_xen_has_pending_events(vcpu))
-- 
2.31.1




* [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (5 preceding siblings ...)
  2022-08-11 21:06 ` [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-17 14:11   ` Maxim Levitsky
  2022-08-11 21:06 ` [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked Paolo Bonzini
  2022-08-11 21:06 ` [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  8 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets, stable

From: Sean Christopherson <seanjc@google.com>

Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
through inject_pending_event() and thus vmx_check_nested_events() prior
to re-entering the guest.

MTF currently works by virtue of KVM's hack that calls
kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
be removed in the near future.  Until that call is removed, the patch
introduces no functional change.

Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d7f8331d6f7e..940c0c0f8281 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1660,10 +1660,12 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
 	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	     vcpu->arch.exception.nr == DB_VECTOR)) {
 		vmx->nested.mtf_pending = true;
-	else
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	} else {
 		vmx->nested.mtf_pending = false;
+	}
 }
 
 static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)
-- 
2.31.1




* [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (6 preceding siblings ...)
  2022-08-11 21:06 ` [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-17  0:07   ` Sean Christopherson
  2022-08-11 21:06 ` [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  8 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

Do not return true from kvm_apic_has_events, and consequently from
kvm_vcpu_has_events, if the vCPU is not going to process an INIT.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/i8259.c            | 2 +-
 arch/x86/kvm/lapic.h            | 2 +-
 arch/x86/kvm/x86.c              | 5 +++++
 arch/x86/kvm/x86.h              | 5 -----
 5 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 293ff678fff5..1ce4ebc41118 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2042,6 +2042,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu);
 
 bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
 			     struct kvm_vcpu **dest_vcpu);
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index e1bb6218bb96..177555eea54e 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -29,9 +29,9 @@
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/bitops.h>
-#include "irq.h"
+#include <linux/kvm_host.h>
 
-#include <linux/kvm_host.h>
+#include "irq.h"
 #include "trace.h"
 
 #define pr_pic_unimpl(fmt, ...)	\
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 117a46df5cc1..12577ddccdfc 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -225,7 +225,7 @@ static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
 {
-	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
+	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events && !kvm_vcpu_latch_init(vcpu);
 }
 
 static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0f9f24793b8a..5e9358ea112b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12529,6 +12529,11 @@ static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
 }
 
+bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
+{
+	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+}
+
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 {
 	if (!list_empty_careful(&vcpu->async_pf.done))
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1926d2cb8e79..c333e7cf933a 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -267,11 +267,6 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }
 
-static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
-{
-	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
-}
-
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 u64 get_kvmclock_ns(struct kvm *kvm);
-- 
2.31.1




* [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-11 21:05 [PATCH v2 0/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (7 preceding siblings ...)
  2022-08-11 21:06 ` [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked Paolo Bonzini
@ 2022-08-11 21:06 ` Paolo Bonzini
  2022-08-16 23:45   ` Sean Christopherson
                     ` (2 more replies)
  8 siblings, 3 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-11 21:06 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, mlevitsk, vkuznets

kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
it cannot sleep.  Writing to guest memory is therefore forbidden, but it
can happen on AMD processors if kvm_check_nested_events() causes a vmexit.

Fortunately, all events that are caught by kvm_check_nested_events() are
also recognized by kvm_vcpu_has_events() through vendor callbacks such as
kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
remove the call and postpone the actual processing to vcpu_block().

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5e9358ea112b..9226fd536783 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 			return 1;
 	}
 
+	if (is_guest_mode(vcpu)) {
+		/*
+		 * Evaluate nested events before exiting the halted state.
+		 * This allows the halt state to be recorded properly in
+		 * the VMCS12's activity state field (AMD does not have
+		 * a similar field and a vmexit always causes a spurious
+		 * wakeup from HLT).
+		 */
+		kvm_check_nested_events(vcpu);
+	}
+
 	if (kvm_apic_accept_events(vcpu) < 0)
 		return 0;
 	switch(vcpu->arch.mp_state) {
@@ -10662,9 +10673,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
-		kvm_check_nested_events(vcpu);
-
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted);
 }
-- 
2.31.1



* Re: [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE
  2022-08-11 21:05 ` [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE Paolo Bonzini
@ 2022-08-15 13:31   ` Maxim Levitsky
  2022-08-16 22:50     ` Sean Christopherson
  0 siblings, 1 reply; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-15 13:31 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, vkuznets

On Thu, 2022-08-11 at 17:05 -0400, Paolo Bonzini wrote:
> An invalid argument to KVM_SET_MP_STATE has no effect other than making the
> vCPU fail to run at the next KVM_RUN.  Since it is extremely unlikely that
> any userspace is relying on it, fail with -EINVAL just like for other
> architectures.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 132d662d9713..c44348bb6ef2 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10653,7 +10653,8 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>         case KVM_MP_STATE_INIT_RECEIVED:
>                 break;
>         default:
> -               return -EINTR;
> +               WARN_ON(1);

Very small nitpick: Maybe WARN_ON_ONCE after all? 
(but I don't see any way after this patch to have an invalid mp_state)

> +               break;



>         }
>         return 1;
>  }
> @@ -11094,9 +11095,22 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
>  
>         vcpu_load(vcpu);
>  
> -       if (!lapic_in_kernel(vcpu) &&
> -           mp_state->mp_state != KVM_MP_STATE_RUNNABLE)
> +       switch (mp_state->mp_state) {
> +       case KVM_MP_STATE_UNINITIALIZED:
> +       case KVM_MP_STATE_HALTED:
> +       case KVM_MP_STATE_AP_RESET_HOLD:
> +       case KVM_MP_STATE_INIT_RECEIVED:
> +       case KVM_MP_STATE_SIPI_RECEIVED:
> +               if (!lapic_in_kernel(vcpu))
> +                       goto out;
> +               break;
> +
> +       case KVM_MP_STATE_RUNNABLE:
> +               break;
> +
> +       default:
>                 goto out;
> +       }
>  
>         /*
>          * KVM_MP_STATE_INIT_RECEIVED means the processor is in



Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH v2 1/9] KVM: x86: check validity of argument to KVM_SET_MP_STATE
  2022-08-15 13:31   ` Maxim Levitsky
@ 2022-08-16 22:50     ` Sean Christopherson
  0 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-08-16 22:50 UTC (permalink / raw)
  To: Maxim Levitsky; +Cc: Paolo Bonzini, linux-kernel, kvm, vkuznets

On Mon, Aug 15, 2022, Maxim Levitsky wrote:
> On Thu, 2022-08-11 at 17:05 -0400, Paolo Bonzini wrote:
> > An invalid argument to KVM_SET_MP_STATE has no effect other than making the
> > vCPU fail to run at the next KVM_RUN.  Since it is extremely unlikely that
> > any userspace is relying on it, fail with -EINVAL just like for other
> > architectures.
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  arch/x86/kvm/x86.c | 20 +++++++++++++++++---
> >  1 file changed, 17 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 132d662d9713..c44348bb6ef2 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -10653,7 +10653,8 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
> >         case KVM_MP_STATE_INIT_RECEIVED:
> >                 break;
> >         default:
> > -               return -EINTR;
> > +               WARN_ON(1);
> 
> Very small nitpick: Maybe WARN_ON_ONCE after all? 

+1, I don't think warning multiple times would help triage/debug, and this seems
like something that will fire a lot if it fires at all.


* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-11 21:05 ` [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block Paolo Bonzini
@ 2022-08-16 23:34   ` Sean Christopherson
  2022-08-17 14:10     ` Maxim Levitsky
  2022-08-17 15:31     ` Paolo Bonzini
  0 siblings, 2 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-08-16 23:34 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> The return value of kvm_vcpu_block will be repurposed soon to return

kvm_vcpu_block()

> the state of KVM_REQ_UNHALT.  In preparation for that, get rid of the
> current return value.  It is only used by kvm_vcpu_halt to decide whether

kvm_vcpu_halt()

> the call resulted in a wait, but the same effect can be obtained with
> a single round of polling.
> 
> No functional change intended, apart from practically indistinguishable
> changes to the polling behavior.

This "breaks" update_halt_poll_stats().  At the very least, it breaks the comment
that effectively says "waited" is set if and only if schedule() is called.

	/*
	 * Note, halt-polling is considered successful so long as the vCPU was
	 * never actually scheduled out, i.e. even if the wake event arrived
	 * after of the halt-polling loop itself, but before the full wait.
	 */
	if (do_halt_poll)
		update_halt_poll_stats(vcpu, start, poll_end, !waited);

> @@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
>  {
>  	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
>  	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
> -	ktime_t start, cur, poll_end;
> +	ktime_t start, cur, poll_end, stop;
>  	bool waited = false;
>  	u64 halt_ns;
>  
>  	start = cur = poll_end = ktime_get();
> -	if (do_halt_poll) {
> -		ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
> +	stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);

This is inverted, the loop should terminate after the first iteration (stop==start)
if halt-polling is _not_ allowed, i.e. do_halt_poll is false.

Overall, I don't like this patch.  I don't necessarily hate it, but I definitely
don't like it.

Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
just do:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f11b505cbee..ccb9f8bdeb18 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
                if (hv_timer)
                        kvm_lapic_switch_to_hv_timer(vcpu);

-               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
+               if (!kvm_arch_vcpu_runnable(vcpu))
                        return 1;
        }


which IMO is more intuitive and doesn't require reworking halt-polling (again).

I don't see why KVM cares if a vCPU becomes runnable after detecting that something
else caused kvm_vcpu_check_block() to bail.  E.g. a pending signal doesn't invalidate
a pending vCPU event, and either way KVM is bailing from the run loop.


* Re: [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-11 21:06 ` [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-08-16 23:45   ` Sean Christopherson
  2022-08-17 14:11     ` Maxim Levitsky
  2022-09-20  0:32   ` Sean Christopherson
  2022-09-20  0:55   ` Sean Christopherson
  2 siblings, 1 reply; 28+ messages in thread
From: Sean Christopherson @ 2022-08-16 23:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
> it cannot sleep.  Writing to guest memory is therefore forbidden, but it
> can happen on AMD processors if kvm_check_nested_events() causes a vmexit.
> 
> Fortunately, all events that are caught by kvm_check_nested_events() are
> also recognized by kvm_vcpu_has_events() through vendor callbacks such as
> kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
> remove the call and postpone the actual processing to vcpu_block().
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5e9358ea112b..9226fd536783 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>  			return 1;
>  	}
>  
> +	if (is_guest_mode(vcpu)) {
> +		/*
> +		 * Evaluate nested events before exiting the halted state.
> +		 * This allows the halt state to be recorded properly in
> +		 * the VMCS12's activity state field (AMD does not have
> +		 * a similar field and a vmexit always causes a spurious
> +		 * wakeup from HLT).
> +		 */
> +		kvm_check_nested_events(vcpu);

Formatting nit, I'd prefer the block comment go above the if-statement, that way
we avoid debating whether or not the technically-unnecessary braces align with
kernel/KVM style, and it doesn't have to wrap as aggressively.

And s/vmexit/VM-Exit while I'm nitpicking.

	/*
	 * Evaluate nested events before exiting the halted state.  This allows
	 * the halt state to be recorded properly in the VMCS12's activity
	 * state field (AMD does not have a similar field and a VM-Exit always
	 * causes a spurious wakeup from HLT).
	 */
	if (is_guest_mode(vcpu))
		kvm_check_nested_events(vcpu);

Side topic, the AMD behavior is a bug report waiting to happen.  I know of at least
one customer failure that was root caused to a KVM bug where KVM caused a spurious
wakeup.  To be fair, the guest workload was being stupid (execute HLT on a vCPU
and then effectively unmap its code by doing kexec), but it's still an unpleasant gap :-(


* Re: [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events
  2022-08-11 21:06 ` [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events Paolo Bonzini
@ 2022-08-16 23:47   ` Sean Christopherson
  2022-08-17 14:10   ` Maxim Levitsky
  1 sibling, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-08-16 23:47 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> Interrupts, NMIs etc. sent while in guest mode are already handled
> properly by the *_interrupt_allowed callbacks, but other events that
> are specific to guest mode can also make a vCPU runnable.
> 
> In the case of VMX there are two, the preemption timer and the
> monitor trap.  The VMX preemption timer is already special cased via
> the hv_timer_pending callback, but the purpose of the callback can be
> easily extended to MTF or in fact any other event that can occur only
> in guest mode.
> 
> Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
> now will return true if an MTF is pending, without relying on
> kvm_vcpu_running()'s call to kvm_check_nested_events().  Until that call
> is removed, however, the patch introduces no functional change.
> 
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
>  arch/x86/kvm/x86.c              | 8 ++++----
>  3 files changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5ffa578cafe1..293ff678fff5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
>  	int (*check_events)(struct kvm_vcpu *vcpu);
>  	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
>  					     struct x86_exception *fault);
> -	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
> +	bool (*has_events)(struct kvm_vcpu *vcpu);
>  	void (*triple_fault)(struct kvm_vcpu *vcpu);
>  	int (*get_state)(struct kvm_vcpu *vcpu,
>  			 struct kvm_nested_state __user *user_kvm_nested_state,
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index ddd4367d4826..9631cdcdd058 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>  	       to_vmx(vcpu)->nested.preemption_timer_expired;
>  }
>  
> +static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +
> +	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;

How about:

	return nested_vmx_preemption_timer_pending(vcpu) ||
	       to_vmx(vcpu)->nested.mtf_pending;

to use less lines and honor the 80 char soft-limit?


* Re: [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked
  2022-08-11 21:06 ` [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked Paolo Bonzini
@ 2022-08-17  0:07   ` Sean Christopherson
  2022-08-17 14:11     ` Maxim Levitsky
  0 siblings, 1 reply; 28+ messages in thread
From: Sean Christopherson @ 2022-08-17  0:07 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> Do not return true from kvm_apic_has_events, and consequently from
> kvm_vcpu_has_events, if the vCPU is not going to process an INIT.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/i8259.c            | 2 +-
>  arch/x86/kvm/lapic.h            | 2 +-
>  arch/x86/kvm/x86.c              | 5 +++++
>  arch/x86/kvm/x86.h              | 5 -----
>  5 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 293ff678fff5..1ce4ebc41118 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2042,6 +2042,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
>  				     u32 size);
>  bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
>  bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
> +bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu);
>  
>  bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
>  			     struct kvm_vcpu **dest_vcpu);
> diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
> index e1bb6218bb96..177555eea54e 100644
> --- a/arch/x86/kvm/i8259.c
> +++ b/arch/x86/kvm/i8259.c
> @@ -29,9 +29,9 @@
>  #include <linux/mm.h>
>  #include <linux/slab.h>
>  #include <linux/bitops.h>
> -#include "irq.h"
> +#include <linux/kvm_host.h>
>  
> -#include <linux/kvm_host.h>
> +#include "irq.h"
>  #include "trace.h"
>  
>  #define pr_pic_unimpl(fmt, ...)	\
> diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
> index 117a46df5cc1..12577ddccdfc 100644
> --- a/arch/x86/kvm/lapic.h
> +++ b/arch/x86/kvm/lapic.h
> @@ -225,7 +225,7 @@ static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)
>  
>  static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
>  {
> -	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
> +	return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events && !kvm_vcpu_latch_init(vcpu);

Blech, the kvm_apic_has_events() name is awful (as is pending_events), e.g. it
really should be kvm_apic_has_pending_sipi_or_init().

To avoid the odd kvm_vcpu_latch_init() declaration and the long line above, what
if we open code this in kvm_vcpu_has_events() like we do for NMI, SMI, etc...?

And as follow-up, I would love to rename kvm_vcpu_latch_init() => kvm_vcpu_init_blocked(),
kvm_apic_has_events(), and pending_events.

E.g. for this patch just do:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f11b505cbee..559900736a71 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12533,7 +12533,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
        if (!list_empty_careful(&vcpu->async_pf.done))
                return true;

-       if (kvm_apic_has_events(vcpu))
+       /* comment explaining that SIPIs are dropped in this case. */
+       if (kvm_apic_has_events(vcpu) && !kvm_vcpu_latch_init(vcpu))
                return true;

        if (vcpu->arch.pv.pv_unhalted)



* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-16 23:34   ` Sean Christopherson
@ 2022-08-17 14:10     ` Maxim Levitsky
  2022-08-17 15:31     ` Paolo Bonzini
  1 sibling, 0 replies; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-17 14:10 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: linux-kernel, kvm, vkuznets

On Tue, 2022-08-16 at 23:34 +0000, Sean Christopherson wrote:
> On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> > The return value of kvm_vcpu_block will be repurposed soon to return
> 
> kvm_vcpu_block()
> 
> > the state of KVM_REQ_UNHALT.  In preparation for that, get rid of the
> > current return value.  It is only used by kvm_vcpu_halt to decide whether
> 
> kvm_vcpu_halt()
> 
> > the call resulted in a wait, but the same effect can be obtained with
> > a single round of polling.
> > 
> > No functional change intended, apart from practically indistinguishable
> > changes to the polling behavior.
> 
> This "breaks" update_halt_poll_stats().  At the very least, it breaks the comment
> that effectively says "waited" is set if and only if schedule() is called.
> 
>         /*
>          * Note, halt-polling is considered successful so long as the vCPU was
>          * never actually scheduled out, i.e. even if the wake event arrived
>          * after of the halt-polling loop itself, but before the full wait.
>          */
>         if (do_halt_poll)
>                 update_halt_poll_stats(vcpu, start, poll_end, !waited);
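
For reference, a condensed sketch of the wait loop that establishes that
invariant (paraphrased from memory, not the exact upstream code):

	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
	bool waited = false;

	prepare_to_rcuwait(wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);

		if (kvm_vcpu_check_block(vcpu) < 0)
			break;

		/* Set if and only if the vCPU is actually scheduled out. */
		waited = true;
		schedule();
	}
	finish_rcuwait(wait);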
> 
> > @@ -3493,35 +3489,32 @@ void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
> >  {
> >         bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
> >         bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
> > -       ktime_t start, cur, poll_end;
> > +       ktime_t start, cur, poll_end, stop;
> >         bool waited = false;
> >         u64 halt_ns;
> >  
> >         start = cur = poll_end = ktime_get();
> > -       if (do_halt_poll) {
> > -               ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
> > +       stop = do_halt_poll ? start : ktime_add_ns(start, vcpu->halt_poll_ns);
> 
> This is inverted; the loop should terminate after the first iteration (stop == start)
> if halt-polling is _not_ allowed, i.e. if do_halt_poll is false.
> 
> Overall, I don't like this patch.  I don't necessarily hate it, but I definitely
> don't like it.

Roughly the same opinion here.

> 
> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
> just do:
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9f11b505cbee..ccb9f8bdeb18 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>                 if (hv_timer)
>                         kvm_lapic_switch_to_hv_timer(vcpu);
> 
> -               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
> +               if (!kvm_arch_vcpu_runnable(vcpu))


kvm_vcpu_block() returns when kvm_vcpu_check_block() returns a negative number,
which can also happen due to a pending APIC timer or a pending signal; however,
KVM_REQ_UNHALT is set only when kvm_arch_vcpu_runnable() == true, so the above
does make sense.
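
For reference, a condensed sketch of that pre-series logic (paraphrased from
memory, not the exact upstream code):

static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
{
	int ret = -EINTR;
	int idx = srcu_read_lock(&vcpu->kvm->srcu);

	if (kvm_arch_vcpu_runnable(vcpu)) {
		/* The only path that requests KVM_REQ_UNHALT. */
		kvm_make_request(KVM_REQ_UNHALT, vcpu);
		goto out;
	}

	/* A pending timer or signal also ends the wait, without UNHALT. */
	if (kvm_cpu_has_pending_timer(vcpu))
		goto out;
	if (signal_pending(current))
		goto out;

	ret = 0;
out:
	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	return ret;
}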


Best regards,
 Maxim Levitsky


>                         return 1;
>         }
> 
> 
> which IMO is more intuitive and doesn't require reworking halt-polling (again).
> 
> I don't see why KVM cares if a vCPU becomes runnable after detecting that something
> else caused kvm_vcpu_check_block() to bail.  E.g. a pending signal doesn't invalidate
> a pending vCPU event, and either way KVM is bailing from the run loop.
> 




* Re: [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events
  2022-08-11 21:06 ` [PATCH v2 6/9] KVM: x86: make vendor code check for all nested events Paolo Bonzini
  2022-08-16 23:47   ` Sean Christopherson
@ 2022-08-17 14:10   ` Maxim Levitsky
  1 sibling, 0 replies; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-17 14:10 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, vkuznets

On Thu, 2022-08-11 at 17:06 -0400, Paolo Bonzini wrote:
> Interrupts, NMIs etc. sent while in guest mode are already handled
> properly by the *_interrupt_allowed callbacks, but other events can
> cause a vCPU to be runnable that are specific to guest mode.
> 
> In the case of VMX there are two, the preemption timer and the
> monitor trap.  The VMX preemption timer is already special cased via
> the hv_timer_pending callback, but the purpose of the callback can be
> easily extended to MTF or in fact any other event that can occur only
> in guest mode.

I am just curious: can this happen with MTF? I see that 'vmx->nested.mtf_pending'
is only set from 'vmx_update_emulated_instruction', which in turn should only be
called when we emulate an instruction, which implies that the guest is not halted.

> 
> Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
> now will return true if an MTF is pending, without relying on
> kvm_vcpu_running()'s call to kvm_check_nested_events().  Until that call
> is removed, however, the patch introduces no functional change.
> 
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
>  arch/x86/kvm/x86.c              | 8 ++++----
>  3 files changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5ffa578cafe1..293ff678fff5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
>         int (*check_events)(struct kvm_vcpu *vcpu);
>         bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
>                                              struct x86_exception *fault);
> -       bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
> +       bool (*has_events)(struct kvm_vcpu *vcpu);
>         void (*triple_fault)(struct kvm_vcpu *vcpu);
>         int (*get_state)(struct kvm_vcpu *vcpu,
>                          struct kvm_nested_state __user *user_kvm_nested_state,
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index ddd4367d4826..9631cdcdd058 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>                to_vmx(vcpu)->nested.preemption_timer_expired;
>  }
>  
> +static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
> +{
> +       struct vcpu_vmx *vmx = to_vmx(vcpu);
> +
> +       return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
> +}
> +
>  static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
>  {
>         struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -6816,7 +6823,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
>         .leave_nested = vmx_leave_nested,
>         .check_events = vmx_check_nested_events,
>         .handle_page_fault_workaround = nested_vmx_handle_page_fault_workaround,
> -       .hv_timer_pending = nested_vmx_preemption_timer_pending,
> +       .has_events = vmx_has_nested_events,
>         .triple_fault = nested_vmx_triple_fault,
>         .get_state = vmx_get_nested_state,
>         .set_state = vmx_set_nested_state,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 7f084613fac8..0f9f24793b8a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9789,8 +9789,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
>         }
>  
>         if (is_guest_mode(vcpu) &&
> -           kvm_x86_ops.nested_ops->hv_timer_pending &&
> -           kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
> +           kvm_x86_ops.nested_ops->has_events &&
> +           kvm_x86_ops.nested_ops->has_events(vcpu))
>                 *req_immediate_exit = true;
>  
>         WARN_ON(vcpu->arch.exception.pending);
> @@ -12562,8 +12562,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>                 return true;
>  
>         if (is_guest_mode(vcpu) &&
> -           kvm_x86_ops.nested_ops->hv_timer_pending &&
> -           kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
> +           kvm_x86_ops.nested_ops->has_events &&
> +           kvm_x86_ops.nested_ops->has_events(vcpu))
>                 return true;
>  
>         if (kvm_xen_has_pending_events(vcpu))

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky





* Re: [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
  2022-08-11 21:06 ` [PATCH v2 7/9] KVM: nVMX: Make an event request when pending an MTF nested VM-Exit Paolo Bonzini
@ 2022-08-17 14:11   ` Maxim Levitsky
  0 siblings, 0 replies; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-17 14:11 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, vkuznets, stable

On Thu, 2022-08-11 at 17:06 -0400, Paolo Bonzini wrote:
> From: Sean Christopherson <seanjc@google.com>
> 
> Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
> through inject_pending_event() and thus vmx_check_nested_events() prior
> to re-entering the guest.
> 
> MTF currently works by virtue of KVM's hack that calls
> kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
> be removed in the near future.  Until that call is removed, the patch
> introduces no functional change.
> 
> Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index d7f8331d6f7e..940c0c0f8281 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1660,10 +1660,12 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
>          */
>         if (nested_cpu_has_mtf(vmcs12) &&
>             (!vcpu->arch.exception.pending ||
> -            vcpu->arch.exception.nr == DB_VECTOR))
> +            vcpu->arch.exception.nr == DB_VECTOR)) {
>                 vmx->nested.mtf_pending = true;
> -       else
> +               kvm_make_request(KVM_REQ_EVENT, vcpu);
> +       } else {
>                 vmx->nested.mtf_pending = false;
> +       }
>  }
>  
>  static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked
  2022-08-17  0:07   ` Sean Christopherson
@ 2022-08-17 14:11     ` Maxim Levitsky
  2022-08-17 15:33       ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-17 14:11 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: linux-kernel, kvm, vkuznets

On Wed, 2022-08-17 at 00:07 +0000, Sean Christopherson wrote:
> On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> > Do not return true from kvm_apic_has_events, and consequently from
> > kvm_vcpu_has_events, if the vCPU is not going to process an INIT.
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h | 1 +
> >  arch/x86/kvm/i8259.c            | 2 +-
> >  arch/x86/kvm/lapic.h            | 2 +-
> >  arch/x86/kvm/x86.c              | 5 +++++
> >  arch/x86/kvm/x86.h              | 5 -----
> >  5 files changed, 8 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 293ff678fff5..1ce4ebc41118 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2042,6 +2042,7 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
> >                                      u32 size);
> >  bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
> >  bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
> > +bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu);
> >  
> >  bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
> >                              struct kvm_vcpu **dest_vcpu);
> > diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
> > index e1bb6218bb96..177555eea54e 100644
> > --- a/arch/x86/kvm/i8259.c
> > +++ b/arch/x86/kvm/i8259.c
> > @@ -29,9 +29,9 @@
> >  #include <linux/mm.h>
> >  #include <linux/slab.h>
> >  #include <linux/bitops.h>
> > -#include "irq.h"
> > +#include <linux/kvm_host.h>
> >  
> > -#include <linux/kvm_host.h>
> > +#include "irq.h"
> >  #include "trace.h"
> >  
> >  #define pr_pic_unimpl(fmt, ...)        \
> > diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
> > index 117a46df5cc1..12577ddccdfc 100644
> > --- a/arch/x86/kvm/lapic.h
> > +++ b/arch/x86/kvm/lapic.h
> > @@ -225,7 +225,7 @@ static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)
> >  
> >  static inline bool kvm_apic_has_events(struct kvm_vcpu *vcpu)
> >  {
> > -       return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
> > +       return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events && !kvm_vcpu_latch_init(vcpu);
> 
> Blech, the kvm_apic_has_events() name is awful (as is pending_events), e.g. it
> really should be kvm_apic_has_pending_sipi_or_init().

110% agree.
> 
> To avoid the odd kvm_vcpu_latch_init() declaration and the long line above, what
> if we open code this in kvm_vcpu_has_events() like we do for NMI, SMI, etc...?
> 
> And as follow-up, I would love to rename kvm_vcpu_latch_init() => kvm_vcpu_init_blocked(),
> kvm_apic_has_events(), and pending_events.
> 
> E.g. for this patch just do:
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9f11b505cbee..559900736a71 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12533,7 +12533,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>         if (!list_empty_careful(&vcpu->async_pf.done))
>                 return true;
> 
> -       if (kvm_apic_has_events(vcpu))
> +       /* comment explaining that SIPIs are dropped in this case. */
> +       if (kvm_apic_has_events(vcpu) && !kvm_vcpu_latch_init(vcpu))
>                 return true;

I personally don't know whether I prefer this or the original patch.



> 
>         if (vcpu->arch.pv.pv_unhalted)
> 


While reviewing this, I noticed that we have this code:


static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	/*
	 * TODO: Last condition latch INIT signals on vCPU when
	 * vCPU is in guest-mode and vmcb12 defines intercept on INIT.
	 * To properly emulate the INIT intercept,
	 * svm_check_nested_events() should call nested_svm_vmexit()
	 * if an INIT signal is pending.
	 */
	return !gif_set(svm) ||
	       (vmcb_is_intercept(&svm->vmcb->control, INTERCEPT_INIT));
}

Is this workaround still needed? svm_check_nested_events() does check the APIC's
INIT/SIPI status.

Currently '.apic_init_signal_blocked' is called from kvm_vcpu_latch_init(), which
itself is only called after the point where we would have taken a vmexit if INIT
were intercepted by the nested hypervisor.
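
For context, the INIT handling in svm_check_nested_events() that I am referring
to looks roughly like this (condensed from memory, not a verbatim quote):

	struct kvm_lapic *apic = vcpu->arch.apic;

	if (lapic_in_kernel(vcpu) &&
	    test_bit(KVM_APIC_INIT, &apic->pending_events)) {
		if (block_nested_events)
			return -EBUSY;
		if (!nested_exit_on_init(svm))
			return 0;
		/* Emulate the INIT intercept with a synthesized vmexit. */
		nested_svm_simple_vmexit(svm, SVM_EXIT_INIT);
		return 0;
	}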

Best regards,
	Maxim Levitsky



* Re: [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-16 23:45   ` Sean Christopherson
@ 2022-08-17 14:11     ` Maxim Levitsky
  0 siblings, 0 replies; 28+ messages in thread
From: Maxim Levitsky @ 2022-08-17 14:11 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: linux-kernel, kvm, vkuznets

On Tue, 2022-08-16 at 23:45 +0000, Sean Christopherson wrote:
> On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> > kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
> > it cannot sleep.  Writing to guest memory is therefore forbidden, but it
> > can happen on AMD processors if kvm_check_nested_events() causes a vmexit.
> > 
> > Fortunately, all events that are caught by kvm_check_nested_events() are
> > also recognized by kvm_vcpu_has_events() through vendor callbacks such as
> > kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
> > remove the call and postpone the actual processing to vcpu_block().
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  arch/x86/kvm/x86.c | 14 +++++++++++---
> >  1 file changed, 11 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 5e9358ea112b..9226fd536783 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
> >                         return 1;
> >         }
> >  
> > +       if (is_guest_mode(vcpu)) {
> > +               /*
> > +                * Evaluate nested events before exiting the halted state.
> > +                * This allows the halt state to be recorded properly in
> > +                * the VMCS12's activity state field (AMD does not have
> > +                * a similar field and a vmexit always causes a spurious
> > +                * wakeup from HLT).
> > +                */

I assume that the comment refers to the fact that a nested_vmx_vmexit() due to
an event while on the HLT instruction will trigger an update of
'vmcs12->guest_activity_state', so it should be done before we update
'vcpu->arch.mp_state'.


> > +               kvm_check_nested_events(vcpu);
> 
> Formatting nit, I'd prefer the block comment go above the if-statement, that way
> we avoiding debating whether or not the technically-unnecessary braces align with
> kernel/KVM style, and it doesn't have to wrap as aggressively.
> 
> And s/vmexit/VM-Exit while I'm nitpicking.
> 
>         /*
>          * Evaluate nested events before exiting the halted state.  This allows
>          * the halt state to be recorded properly in the VMCS12's activity
>          * state field (AMD does not have a similar field and a VM-Exit always
>          * causes a spurious wakeup from HLT).
>          */
>         if (is_guest_mode(vcpu))
>                 kvm_check_nested_events(vcpu);
> 
> Side topic, the AMD behavior is a bug report waiting to happen.  I know of at least
> one customer failure that was root caused to a KVM bug where KVM caused a spurious
> wakeup.  To be fair, the guest workload was being stupid (execute HLT on vCPU and
> then effectively unmap its code by doing kexec), but it's still an unpleasant gap :-(

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky

> 




* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-16 23:34   ` Sean Christopherson
  2022-08-17 14:10     ` Maxim Levitsky
@ 2022-08-17 15:31     ` Paolo Bonzini
  2022-08-17 16:41       ` Sean Christopherson
  1 sibling, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-17 15:31 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On 8/17/22 01:34, Sean Christopherson wrote:
> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
> just do:
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9f11b505cbee..ccb9f8bdeb18 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>                  if (hv_timer)
>                          kvm_lapic_switch_to_hv_timer(vcpu);
> 
> -               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
> +               if (!kvm_arch_vcpu_runnable(vcpu))
>                          return 1;
>          }
> 
> 
> which IMO is more intuitive and doesn't require reworking halt-polling (again).

This was my first idea indeed.  However I didn't like calling 
kvm_arch_vcpu_runnable() again and "did it schedule()" seemed to be a 
less interesting result from kvm_vcpu_block() (and in fact 
kvm_vcpu_halt() does not bother passing it up the return chain).

Paolo


* Re: [PATCH v2 8/9] KVM: x86: lapic does not have to process INIT if it is blocked
  2022-08-17 14:11     ` Maxim Levitsky
@ 2022-08-17 15:33       ` Paolo Bonzini
  0 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-17 15:33 UTC (permalink / raw)
  To: Maxim Levitsky, Sean Christopherson; +Cc: linux-kernel, kvm, vkuznets

On 8/17/22 16:11, Maxim Levitsky wrote:
> 
> While reviewing this, I noticed that we have this code:
> 
> 
> static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
> {
> 	struct vcpu_svm *svm = to_svm(vcpu);
> 
> 	/*
> 	 * TODO: Last condition latch INIT signals on vCPU when
> 	 * vCPU is in guest-mode and vmcb12 defines intercept on INIT.
> 	 * To properly emulate the INIT intercept,
> 	 * svm_check_nested_events() should call nested_svm_vmexit()
> 	 * if an INIT signal is pending.
> 	 */
> 	return !gif_set(svm) ||
> 	       (vmcb_is_intercept(&svm->vmcb->control, INTERCEPT_INIT));
> }
> 
> Is this workaround still needed? svm_check_nested_events() does check
> the APIC's INIT/SIPI status.
> 
> Currently '.apic_init_signal_blocked' is called from
> kvm_vcpu_latch_init(), which itself is only called after the point
> where we would have taken a vmexit if INIT were intercepted by the
> nested hypervisor.
No, it shouldn't be needed anymore.

Paolo



* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-17 15:31     ` Paolo Bonzini
@ 2022-08-17 16:41       ` Sean Christopherson
  2022-08-17 16:49         ` Paolo Bonzini
  2022-09-20  0:42         ` Sean Christopherson
  0 siblings, 2 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-08-17 16:41 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Wed, Aug 17, 2022, Paolo Bonzini wrote:
> On 8/17/22 01:34, Sean Christopherson wrote:
> > Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
> > just do:
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 9f11b505cbee..ccb9f8bdeb18 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
> >                  if (hv_timer)
> >                          kvm_lapic_switch_to_hv_timer(vcpu);
> > 
> > -               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
> > +               if (!kvm_arch_vcpu_runnable(vcpu))
> >                          return 1;
> >          }
> > 
> > 
> > which IMO is more intuitive and doesn't require reworking halt-polling (again).
> 
> This was my first idea indeed.  However I didn't like calling
> kvm_arch_vcpu_runnable() again and "did it schedule()" seemed to be a less
> interesting result from kvm_vcpu_block() (and in fact kvm_vcpu_halt() does
> not bother passing it up the return chain).

The flip side of calling kvm_arch_vcpu_runnable() again is that KVM will immediately
wake the vCPU if it becomes runnable after kvm_vcpu_check_block().  The edge cases
where the vCPU becomes runnable late are unlikely to truly matter in practice, but
on the other hand manually re-checking kvm_arch_vcpu_runnable() means KVM gets both
cases "right" (waited=true iff vCPU actually waited, vCPU awakened ASAP), whereas
squishing the information into the return of kvm_vcpu_check_block() means KVM gets
both cases "wrong" (waited=true even if schedule() was never called, vCPU left in
a non-running state even though it's runnable).

My only hesitation with calling kvm_arch_vcpu_runnable() again is that it could be
problematic if KVM somehow managed to consume the event that caused kvm_vcpu_has_events()
to return true, but I don't see how that could happen without it being a KVM bug.

Ah, and if we do go with the explicit check, it should come with a comment to call
out that KVM needs to return up the stack, e.g.

		/*
		 * If KVM stopped blocking the vCPU but the vCPU is still not
		 * runnable, then there is a pending host event of some kind,
		 * e.g. a pending signal.  Return up the stack to service the
		 * pending event without changing the vCPU's activity state.
		 */
		if (!kvm_arch_vcpu_runnable(vcpu))
			return 1;

so that we don't end up combining the checks into:

	while (!kvm_arch_vcpu_runnable(vcpu)) {
		/*
		 * Switch to the software timer before halt-polling/blocking as
		 * the guest's timer may be a break event for the vCPU, and the
		 * hypervisor timer runs only when the CPU is in guest mode.
		 * Switch before halt-polling so that KVM recognizes an expired
		 * timer before blocking.
		 */
		hv_timer = kvm_lapic_hv_timer_in_use(vcpu);
		if (hv_timer)
			kvm_lapic_switch_to_sw_timer(vcpu);

		kvm_vcpu_srcu_read_unlock(vcpu);
		if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
			kvm_vcpu_halt(vcpu);
		else
			kvm_vcpu_block(vcpu);
		kvm_vcpu_srcu_read_lock(vcpu);

		if (hv_timer)
			kvm_lapic_switch_to_hv_timer(vcpu);
	}

which looks sane, but would mean KVM never bounces out to handle whatever event
needs handling.

Side topic, usage of kvm_apic_accept_events() appears to be broken (though nothing
can trigger the bug).  If kvm_apic_accept_events() were to return an -errno, then
kvm_arch_vcpu_ioctl_run() would return '0' to userspace without updating
vcpu->run->exit_reason.  I think an easy fix is to drop the return value entirely
and then WARN if kvm_check_nested_events() returns something other than -EBUSY.

	if (is_guest_mode(vcpu)) {
		r = kvm_check_nested_events(vcpu);
		if (r < 0) {
			WARN_ON_ONCE(r != -EBUSY);
			return;
		}




* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-17 16:41       ` Sean Christopherson
@ 2022-08-17 16:49         ` Paolo Bonzini
  2022-09-20  0:42         ` Sean Christopherson
  1 sibling, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-08-17 16:49 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On 8/17/22 18:41, Sean Christopherson wrote:
> On Wed, Aug 17, 2022, Paolo Bonzini wrote:
>> On 8/17/22 01:34, Sean Christopherson wrote:
>>> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
>>> just do:
>>>
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 9f11b505cbee..ccb9f8bdeb18 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>>>                   if (hv_timer)
>>>                           kvm_lapic_switch_to_hv_timer(vcpu);
>>>
>>> -               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
>>> +               if (!kvm_arch_vcpu_runnable(vcpu))
>>>                           return 1;
>>>           }
>>>
>>>
>>> which IMO is more intuitive and doesn't require reworking halt-polling (again).
>>
>> This was my first idea indeed.  However I didn't like calling
>> kvm_arch_vcpu_runnable() again and "did it schedule()" seemed to be a less
>> interesting result from kvm_vcpu_block() (and in fact kvm_vcpu_halt() does
>> not bother passing it up the return chain).
> 
> The flip side of calling kvm_arch_vcpu_runnable() again is that KVM will immediately
> wake the vCPU if it becomes runnable after kvm_vcpu_check_block().  The edge cases
> where the vCPU becomes runnable late are unlikely to truly matter in practice, but
> on the other hand manually re-checking kvm_arch_vcpu_runnable() means KVM gets both
> cases "right" (waited=true iff vCPU actually waited, vCPU awakened ASAP), whereas
> squishing the information into the return of kvm_vcpu_check_block() means KVM gets
> both cases "wrong" (waited=true even if schedule() was never called, vCPU left in
> a non-running state even though it's runnable).
> 
> My only hesitation with calling kvm_arch_vcpu_runnable() again is that it could be
> problematic if KVM somehow managed to consume the event that caused kvm_vcpu_has_events()
> to return true, but I don't see how that could happen without it being a KVM bug.

No, I agree that it cannot happen, and especially so after getting rid 
of the kvm_check_nested_events() call in kvm_arch_vcpu_runnable().

I'll reorder the patches and apply your suggestion.

Paolo



* Re: [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-11 21:06 ` [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-16 23:45   ` Sean Christopherson
@ 2022-09-20  0:32   ` Sean Christopherson
  2022-09-20  0:55   ` Sean Christopherson
  2 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-09-20  0:32 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
> it cannot sleep.  Writing to guest memory is therefore forbidden, but it
> can happen on AMD processors if kvm_check_nested_events() causes a vmexit.
> 
> Fortunately, all events that are caught by kvm_check_nested_events() are
> also recognized by kvm_vcpu_has_events() through vendor callbacks such as
> kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
> remove the call and postpone the actual processing to vcpu_block().
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5e9358ea112b..9226fd536783 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>  			return 1;
>  	}
>  
> +	if (is_guest_mode(vcpu)) {
> +		/*
> +		 * Evaluate nested events before exiting the halted state.
> +		 * This allows the halt state to be recorded properly in
> +		 * the VMCS12's activity state field (AMD does not have
> +		 * a similar field and a vmexit always causes a spurious
> +		 * wakeup from HLT).
> +		 */
> +		kvm_check_nested_events(vcpu);
> +	}
> +
>  	if (kvm_apic_accept_events(vcpu) < 0)
>  		return 0;

Oof, this ends up yielding a really confusing code sequence.  kvm_apic_accept_events()
has its own kvm_check_nested_events(), but has code to snapshot pending INITs/SIPIs
_before_ the call.  Unpacked, KVM ends up with:

	if (is_guest_mode(vcpu))
		kvm_check_nested_events(vcpu);

	/*
	 * Read pending events before calling the check_events
	 * callback.
	 */
	pe = smp_load_acquire(&apic->pending_events);
	if (!pe)
		return 0;

	if (is_guest_mode(vcpu)) {
		r = kvm_check_nested_events(vcpu);
		if (r < 0)
			return r == -EBUSY ? 0 : r;
	}

	if (kvm_vcpu_latch_init(vcpu)) {
		WARN_ON_ONCE(vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED);
		if (test_bit(KVM_APIC_SIPI, &pe))
			clear_bit(KVM_APIC_SIPI, &apic->pending_events);
		return 0;
	}

	if (test_bit(KVM_APIC_INIT, &pe)) {
		clear_bit(KVM_APIC_INIT, &apic->pending_events);
		kvm_vcpu_reset(vcpu, true);
		if (kvm_vcpu_is_bsp(apic->vcpu))
			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
		else
			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
	}
	if (test_bit(KVM_APIC_SIPI, &pe)) {
		clear_bit(KVM_APIC_SIPI, &apic->pending_events);
		if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
			/* evaluate pending_events before reading the vector */
			smp_rmb();
			sipi_vector = apic->sipi_vector;
			static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
		}
	}

which on the surface makes this code look broken, e.g. if kvm_check_nested_events()
_needs_ to be after the pending_events snapshot is taken, why is it safe to add a
kvm_check_nested_events() call immediately before the snapshot?

In reality, it's just a bunch of noise because the pending events snapshot is
completely unnecessary and subtly relies on INIT+SIPI being blocked after VM-Exit
on VMX (and SVM, but it's more important for VMX).

In particular, testing "pe" after VM-Exit is nonsensical.  On VMX, events are consumed
if they trigger VM-Exit, i.e. processing INIT/SIPI is flat out wrong if the INIT/SIPI
was the direct cause of VM-Exit.  On SVM, events are left pending, so any pending
INIT/SIPI will still be there.

The VMX code works because kvm_vcpu_latch_init(), a.k.a. "is INIT blocked", is
always true after VM-Exit since INIT is always blocked in VMX root mode.  Ditto for
the conditional clearing of SIPI; the CPU can't be in wait-for-SIPI immediately
after VM-Exit and so dropping SIPIs ends up being architecturally ok.
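
E.g. a rough sketch of the snapshot-free direction (untested, purely to
illustrate the idea, not the actual patch), with apic = vcpu->arch.apic:

	if (is_guest_mode(vcpu)) {
		r = kvm_check_nested_events(vcpu);
		if (r < 0)
			return r == -EBUSY ? 0 : r;
	}

	/*
	 * INIT is blocked, e.g. in VMX root mode after a VM-Exit; keep INIT
	 * latched but architecturally drop any pending SIPI.
	 */
	if (kvm_vcpu_latch_init(vcpu)) {
		clear_bit(KVM_APIC_SIPI, &apic->pending_events);
		return 0;
	}

	if (test_and_clear_bit(KVM_APIC_INIT, &apic->pending_events)) {
		kvm_vcpu_reset(vcpu, true);
		if (kvm_vcpu_is_bsp(apic->vcpu))
			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
		else
			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
	}

	if (test_and_clear_bit(KVM_APIC_SIPI, &apic->pending_events) &&
	    vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
		/* Evaluate pending_events before reading the vector. */
		smp_rmb();
		static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, apic->sipi_vector);
		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
	}

	return 0;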

I'll add a patch to drop the snapshot code, assuming I'm not missing something even
more subtle...


* Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block
  2022-08-17 16:41       ` Sean Christopherson
  2022-08-17 16:49         ` Paolo Bonzini
@ 2022-09-20  0:42         ` Sean Christopherson
  1 sibling, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-09-20  0:42 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Wed, Aug 17, 2022, Sean Christopherson wrote:
> Side topic, usage of kvm_apic_accept_events() appears to be broken (though nothing
> can trigger the bug).  If kvm_apic_accept_events() were to return an -errno, then
> kvm_arch_vcpu_ioctl_run() would return '0' to userspace without updating
> vcpu->run->exit_reason.  I think an easy fix is to drop the return value entirely
> and then WARN if kvm_check_nested_events() returns something other than -EBUSY.
> 
> 	if (is_guest_mode(vcpu)) {
> 		r = kvm_check_nested_events(vcpu);
> 		if (r < 0) {
> 			WARN_ON_ONCE(r != -EBUSY);
> 			return;
> 		}

For posterity, I was wrong.  Way down the stack, vmx_complete_nested_posted_interrupt()
can return -ENXIO after filling vcpu->run->exit_reason via kvm_handle_memory_failure().
That's the entire reason why negative values from kvm_check_nested_events() and
kvm_apic_accept_events() are morphed to '0', i.e. to "exit to userspace".
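
For reference, the tail end of that path looks roughly like this (condensed,
not a verbatim quote):

int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
			      struct x86_exception *e)
{
	if (r == X86EMUL_PROPAGATE_FAULT) {
		kvm_inject_emulated_page_fault(vcpu, e);
		return 1;
	}

	/*
	 * Fills vcpu->run->exit_reason (KVM_EXIT_INTERNAL_ERROR), which is
	 * why the negative return is then morphed into "exit to userspace".
	 */
	kvm_prepare_emulation_failure_exit(vcpu);

	return 0;
}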



* Re: [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-08-11 21:06 ` [PATCH v2 9/9] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-08-16 23:45   ` Sean Christopherson
  2022-09-20  0:32   ` Sean Christopherson
@ 2022-09-20  0:55   ` Sean Christopherson
  2 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2022-09-20  0:55 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, vkuznets

On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
> it cannot sleep.  Writing to guest memory is therefore forbidden, but it
> can happen on AMD processors if kvm_check_nested_events() causes a vmexit.
> 
> Fortunately, all events that are caught by kvm_check_nested_events() are
> also recognized by kvm_vcpu_has_events() through vendor callbacks such as
> kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
> remove the call and postpone the actual processing to vcpu_block().
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5e9358ea112b..9226fd536783 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10639,6 +10639,17 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>  			return 1;
>  	}
>  
> +	if (is_guest_mode(vcpu)) {
> +		/*
> +		 * Evaluate nested events before exiting the halted state.
> +		 * This allows the halt state to be recorded properly in
> +		 * the VMCS12's activity state field (AMD does not have
> +		 * a similar field and a vmexit always causes a spurious
> +		 * wakeup from HLT).
> +		 */
> +		kvm_check_nested_events(vcpu);

Continuing the conversation with myself, this "needs" to check the return of
kvm_check_nested_events().  In quotes because KVM just signals "internal error"
if vmx_complete_nested_posted_interrupt() fails, i.e. the VM is likely dead anyway,
but again it's odd that the return of kvm_apic_accept_events() is checked but the
direct call to kvm_check_nested_events() is not.

> +	}
> +
>  	if (kvm_apic_accept_events(vcpu) < 0)
>  		return 0;
>  	switch(vcpu->arch.mp_state) {
> @@ -10662,9 +10673,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>  
>  static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
>  {
> -	if (is_guest_mode(vcpu))
> -		kvm_check_nested_events(vcpu);
> -
>  	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
>  		!vcpu->arch.apf.halted);
>  }
> -- 
> 2.31.1
> 

