* [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block
@ 2022-04-27 17:37 Paolo Bonzini
  2022-04-27 17:37 ` [PATCH 1/3] KVM: x86: make vendor code check for all nested events Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-27 17:37 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc

Maxim reported the following backtrace:

[ 1355.807187]  kvm_vcpu_map+0x159/0x190 [kvm]
[ 1355.807628]  nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
[ 1355.808036]  ? kvm_vcpu_block+0x54/0xa0 [kvm]
[ 1355.808450]  svm_check_nested_events+0x97/0x390 [kvm_amd]
[ 1355.808920]  kvm_check_nested_events+0x1c/0x40 [kvm] 
[ 1355.809396]  kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
[ 1355.809892]  kvm_vcpu_check_block+0x4f/0x100 [kvm]
[ 1355.811259]  kvm_vcpu_block+0x6b/0xa0 [kvm] 

due to kmap being called in non-sleepable (!TASK_RUNNING) context.
Fix it by extending kvm_x86_ops.nested_ops->hv_timer_pending and
getting rid of one annoying instance of kvm_check_nested_events.
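
For reference, the problematic pattern, in a heavily simplified sketch of
the blocking path (not the exact code):

	/* kvm_vcpu_block(), roughly: */
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		/*
		 * Everything reached from kvm_vcpu_check_block() runs with
		 * !TASK_RUNNING and must not sleep, but the nested vmexit
		 * in the backtrace above goes through kvm_vcpu_map(), which
		 * can fault in guest memory.
		 */
		if (kvm_vcpu_check_block(vcpu) < 0)
			break;
		schedule();
	}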

Paolo



* [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-27 17:37 [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-04-27 17:37 ` Paolo Bonzini
  2022-04-27 20:40   ` Maxim Levitsky
  2022-04-29 17:03   ` Sean Christopherson
  2022-04-27 17:37 ` [PATCH 2/3] KVM: x86: a vCPU with a pending triple fault is runnable Paolo Bonzini
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-27 17:37 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc, stable

Right now, the VMX preemption timer is special cased via the
hv_timer_pending, but the purpose of the callback can be easily
extended to observing any event that can occur only in non-root
mode.  Interrupts, NMIs etc. are already handled properly by
the *_interrupt_allowed callbacks, so what is missing is only
MTF.  Check it in the newly-renamed callback, so that
kvm_vcpu_running's call to kvm_check_nested_events
becomes redundant.

Cc: stable@vger.kernel.org
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/vmx/nested.c       | 7 ++++++-
 arch/x86/kvm/x86.c              | 8 ++++----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ff36610af6a..e2e4f60159e9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1504,7 +1504,7 @@ struct kvm_x86_ops {
 struct kvm_x86_nested_ops {
 	void (*leave_nested)(struct kvm_vcpu *vcpu);
 	int (*check_events)(struct kvm_vcpu *vcpu);
-	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
+	bool (*has_events)(struct kvm_vcpu *vcpu);
 	void (*triple_fault)(struct kvm_vcpu *vcpu);
 	int (*get_state)(struct kvm_vcpu *vcpu,
 			 struct kvm_nested_state __user *user_kvm_nested_state,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 856c87563883..54672025c3a1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3857,6 +3857,11 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
 	       to_vmx(vcpu)->nested.preemption_timer_expired;
 }
 
+static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
+{
+	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -6809,7 +6814,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
 struct kvm_x86_nested_ops vmx_nested_ops = {
 	.leave_nested = vmx_leave_nested,
 	.check_events = vmx_check_nested_events,
-	.hv_timer_pending = nested_vmx_preemption_timer_pending,
+	.has_events = vmx_has_nested_events,
 	.triple_fault = nested_vmx_triple_fault,
 	.get_state = vmx_get_nested_state,
 	.set_state = vmx_set_nested_state,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6ab19afc638..0e73607b02bd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9471,8 +9471,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	}
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;
 
 	WARN_ON(vcpu->arch.exception.pending);
@@ -12183,8 +12183,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (is_guest_mode(vcpu) &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending &&
-	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+	    kvm_x86_ops.nested_ops->has_events &&
+	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;
 
 	return false;
-- 
2.31.1




* [PATCH 2/3] KVM: x86: a vCPU with a pending triple fault is runnable
  2022-04-27 17:37 [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-04-27 17:37 ` [PATCH 1/3] KVM: x86: make vendor code check for all nested events Paolo Bonzini
@ 2022-04-27 17:37 ` Paolo Bonzini
  2022-04-27 20:41   ` Maxim Levitsky
  2022-04-27 17:37 ` [PATCH 3/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-07-20  9:31 ` [PATCH 0/3] " Maxim Levitsky
  3 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-27 17:37 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc, stable

A pending triple fault (KVM_REQ_TRIPLE_FAULT) has to be processed as soon
as the vCPU runs again, so report it from kvm_vcpu_has_events(); otherwise
a halted vCPU with a pending triple fault would not be considered runnable.

Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0e73607b02bd..d563812ca229 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12187,6 +12187,9 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		return true;
 
+	if (kvm_test_request(KVM_REQ_TRIPLE_FAULT, vcpu))
+		return true;
+
 	return false;
 }
 
-- 
2.31.1




* [PATCH 3/3] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-04-27 17:37 [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
  2022-04-27 17:37 ` [PATCH 1/3] KVM: x86: make vendor code check for all nested events Paolo Bonzini
  2022-04-27 17:37 ` [PATCH 2/3] KVM: x86: a vCPU with a pending triple fault is runnable Paolo Bonzini
@ 2022-04-27 17:37 ` Paolo Bonzini
  2022-04-27 20:42   ` Maxim Levitsky
  2022-07-20  9:31 ` [PATCH 0/3] " Maxim Levitsky
  3 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-27 17:37 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: mlevitsk, seanjc, stable

kvm_vcpu_check_block is called while not in TASK_RUNNING, and therefore
cannot sleep.  Writing to guest memory is therefore forbidden, but it
can happen if kvm_check_nested_events causes a vmexit.

Fortunately, all events that are caught by kvm_check_nested_events are
also handled by kvm_vcpu_has_events through vendor callbacks such as
kvm_x86_interrupt_allowed or kvm_x86_ops.nested_ops->has_events, so
remove the call.
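
Nested events are still processed before actually entering the guest,
in a context where sleeping and writing to guest memory are allowed;
roughly (sketch, not the exact code):

	/* vcpu_enter_guest() -> inject_pending_event(), TASK_RUNNING: */
	if (is_guest_mode(vcpu))
		kvm_check_nested_events(vcpu);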

Cc: stable@vger.kernel.org
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d563812ca229..90b4f50b9a84 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10341,9 +10341,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
 
 static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
-		kvm_check_nested_events(vcpu);
-
 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
 		!vcpu->arch.apf.halted);
 }
-- 
2.31.1



* Re: [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-27 17:37 ` [PATCH 1/3] KVM: x86: make vendor code check for all nested events Paolo Bonzini
@ 2022-04-27 20:40   ` Maxim Levitsky
  2022-04-29 18:40     ` Paolo Bonzini
  2022-04-29 17:03   ` Sean Christopherson
  1 sibling, 1 reply; 12+ messages in thread
From: Maxim Levitsky @ 2022-04-27 20:40 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, stable

On Wed, 2022-04-27 at 13:37 -0400, Paolo Bonzini wrote:
> Right now, the VMX preemption timer is special cased via the
> hv_timer_pending, but the purpose of the callback can be easily
> extended to observing any event that can occur only in non-root
> mode.  Interrupts, NMIs etc. are already handled properly by
> the *_interrupt_allowed callbacks, so what is missing is only
> MTF.  Check it in the newly-renamed callback, so that
> kvm_vcpu_running's call to kvm_check_nested_events
> becomes redundant.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/nested.c       | 7 ++++++-
>  arch/x86/kvm/x86.c              | 8 ++++----
>  3 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4ff36610af6a..e2e4f60159e9 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1504,7 +1504,7 @@ struct kvm_x86_ops {
>  struct kvm_x86_nested_ops {
>  	void (*leave_nested)(struct kvm_vcpu *vcpu);
>  	int (*check_events)(struct kvm_vcpu *vcpu);
> -	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
> +	bool (*has_events)(struct kvm_vcpu *vcpu);
>  	void (*triple_fault)(struct kvm_vcpu *vcpu);
>  	int (*get_state)(struct kvm_vcpu *vcpu,
>  			 struct kvm_nested_state __user *user_kvm_nested_state,
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 856c87563883..54672025c3a1 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3857,6 +3857,11 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>  	       to_vmx(vcpu)->nested.preemption_timer_expired;
>  }
>  
> +static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
> +{
Typo: needs struct vcpu_vmx *vmx = to_vmx(vcpu);
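I.e., something along these lines (untested):

static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	return nested_vmx_preemption_timer_pending(vcpu) ||
	       vmx->nested.mtf_pending;
}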

> +	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
> +}
> +
>  static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -6809,7 +6814,7 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
>  struct kvm_x86_nested_ops vmx_nested_ops = {
>  	.leave_nested = vmx_leave_nested,
>  	.check_events = vmx_check_nested_events,
> -	.hv_timer_pending = nested_vmx_preemption_timer_pending,
> +	.has_events = vmx_has_nested_events,
>  	.triple_fault = nested_vmx_triple_fault,
>  	.get_state = vmx_get_nested_state,
>  	.set_state = vmx_set_nested_state,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a6ab19afc638..0e73607b02bd 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9471,8 +9471,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
>  	}
>  
>  	if (is_guest_mode(vcpu) &&
> -	    kvm_x86_ops.nested_ops->hv_timer_pending &&
> -	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
> +	    kvm_x86_ops.nested_ops->has_events &&
> +	    kvm_x86_ops.nested_ops->has_events(vcpu))
>  		*req_immediate_exit = true;
>  
>  	WARN_ON(vcpu->arch.exception.pending);
> @@ -12183,8 +12183,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>  		return true;
>  
>  	if (is_guest_mode(vcpu) &&
> -	    kvm_x86_ops.nested_ops->hv_timer_pending &&
> -	    kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
> +	    kvm_x86_ops.nested_ops->has_events &&
> +	    kvm_x86_ops.nested_ops->has_events(vcpu))
Nitpick: wouldn't it make sense to use a conditional static call here instead?

>  		return true;
>  
>  	return false;


Besides nitpicks,

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>


Wasn't able to test on my Intel laptop; all of a sudden I am getting this in qemu:

'cpuid_data is full, no space for cpuid(eax:0x8000001d,ecx:0x3e)'

I will investigate tomorrow.

Best regards,
	Maxim Levitsky




* Re: [PATCH 2/3] KVM: x86: a vCPU with a pending triple fault is runnable
  2022-04-27 17:37 ` [PATCH 2/3] KVM: x86: a vCPU with a pending triple fault is runnable Paolo Bonzini
@ 2022-04-27 20:41   ` Maxim Levitsky
  0 siblings, 0 replies; 12+ messages in thread
From: Maxim Levitsky @ 2022-04-27 20:41 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, stable

On Wed, 2022-04-27 at 13:37 -0400, Paolo Bonzini wrote:
> Cc: stable@vger.kernel.org
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 0e73607b02bd..d563812ca229 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12187,6 +12187,9 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
>  	    kvm_x86_ops.nested_ops->has_events(vcpu))
>  		return true;
>  
> +	if (kvm_test_request(KVM_REQ_TRIPLE_FAULT, vcpu))
> +		return true;
> +
>  	return false;
>  }
>  

True.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

Best regards,
	Maxim Levitsky



* Re: [PATCH 3/3] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-04-27 17:37 ` [PATCH 3/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-04-27 20:42   ` Maxim Levitsky
  0 siblings, 0 replies; 12+ messages in thread
From: Maxim Levitsky @ 2022-04-27 20:42 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc, stable

On Wed, 2022-04-27 at 13:37 -0400, Paolo Bonzini wrote:
> kvm_vcpu_check_block is called while not in TASK_RUNNING, and therefore
> cannot sleep.  Writing to guest memory is therefore forbidden, but it
> can happen if kvm_check_nested_events causes a vmexit.
> 
> Fortunately, all events that are caught by kvm_check_nested_events are
> also handled by kvm_vcpu_has_events through vendor callbacks such as
> kvm_x86_interrupt_allowed or kvm_x86_ops.nested_ops->has_events, so
> remove the call.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/x86.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d563812ca229..90b4f50b9a84 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10341,9 +10341,6 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>  
>  static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
>  {
> -	if (is_guest_mode(vcpu))
> -		kvm_check_nested_events(vcpu);
> -
>  	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
>  		!vcpu->arch.apf.halted);
>  }

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>

I tested this on AMD, and it seems to work fine; my nested AVIC test
works as well as it did before.

Note that I forgot to mention that I had to apply most of the patches
manually; they don't apply to kvm/queue.

Best regards,
	Maxim Levitsky



* Re: [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-27 17:37 ` [PATCH 1/3] KVM: x86: make vendor code check for all nested events Paolo Bonzini
  2022-04-27 20:40   ` Maxim Levitsky
@ 2022-04-29 17:03   ` Sean Christopherson
  2022-04-29 17:09     ` Paolo Bonzini
  1 sibling, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2022-04-29 17:03 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, stable

On Wed, Apr 27, 2022, Paolo Bonzini wrote:
> Right now, the VMX preemption timer is special cased via the
> hv_timer_pending, but the purpose of the callback can be easily
> extended to observing any event that can occur only in non-root
> mode.  Interrupts, NMIs etc. are already handled properly by
> the *_interrupt_allowed callbacks, so what is missing is only
> MTF.  Check it in the newly-renamed callback, so that
> kvm_vcpu_running's call to kvm_check_nested_events
> becomes redundant.
> 
> Cc: stable@vger.kernel.org
> Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/nested.c       | 7 ++++++-
>  arch/x86/kvm/x86.c              | 8 ++++----
>  3 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4ff36610af6a..e2e4f60159e9 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1504,7 +1504,7 @@ struct kvm_x86_ops {
>  struct kvm_x86_nested_ops {
>  	void (*leave_nested)(struct kvm_vcpu *vcpu);
>  	int (*check_events)(struct kvm_vcpu *vcpu);
> -	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
> +	bool (*has_events)(struct kvm_vcpu *vcpu);
>  	void (*triple_fault)(struct kvm_vcpu *vcpu);
>  	int (*get_state)(struct kvm_vcpu *vcpu,
>  			 struct kvm_nested_state __user *user_kvm_nested_state,
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 856c87563883..54672025c3a1 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3857,6 +3857,11 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>  	       to_vmx(vcpu)->nested.preemption_timer_expired;
>  }
>  
> +static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
> +{
> +	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;

This doesn't even compile...

arch/x86/kvm/vmx/nested.c: In function ‘vmx_has_nested_events’:
arch/x86/kvm/vmx/nested.c:3862:61: error: ‘vmx’ undeclared (first use in this function)
 3862 |         return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
      |                                                             ^~~
arch/x86/kvm/vmx/nested.c:3862:61: note: each undeclared identifier is reported only once for each function it appears in
  CC [M]  arch/x86/kvm/svm/svm_onhyperv.o
arch/x86/kvm/vmx/nested.c:3863:1: error: control reaches end of non-void function [-Werror=return-type]
 3863 | }
      | ^
cc1: all warnings being treated as errors
  LD [M]  arch/x86/kvm/kvm.o


* Re: [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-29 17:03   ` Sean Christopherson
@ 2022-04-29 17:09     ` Paolo Bonzini
  2022-04-29 17:35       ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-29 17:09 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-kernel, kvm, mlevitsk, stable

On 4/29/22 19:03, Sean Christopherson wrote:
> This doesn't even compile...
> 
> arch/x86/kvm/vmx/nested.c: In function ‘vmx_has_nested_events’:
> arch/x86/kvm/vmx/nested.c:3862:61: error: ‘vmx’ undeclared (first use in this function)
>   3862 |         return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
>        |                                                             ^~~
> arch/x86/kvm/vmx/nested.c:3862:61: note: each undeclared identifier is reported only once for each function it appears in
>    CC [M]  arch/x86/kvm/svm/svm_onhyperv.o
> arch/x86/kvm/vmx/nested.c:3863:1: error: control reaches end of non-void function [-Werror=return-type]
>   3863 | }
>        | ^
> cc1: all warnings being treated as errors
>    LD [M]  arch/x86/kvm/kvm.o

Yeah, it doesn't.  Of course this will need a v2, also because there are 
failures in the vmx tests.

What can I say, testing these patches on AMD hardware wasn't a great idea.

Paolo



* Re: [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-29 17:09     ` Paolo Bonzini
@ 2022-04-29 17:35       ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-04-29 17:35 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, mlevitsk, stable

On Fri, Apr 29, 2022, Paolo Bonzini wrote:
> On 4/29/22 19:03, Sean Christopherson wrote:
> > This doesn't even compile...
> > 
> > arch/x86/kvm/vmx/nested.c: In function ‘vmx_has_nested_events’:
> > arch/x86/kvm/vmx/nested.c:3862:61: error: ‘vmx’ undeclared (first use in this function)
> >   3862 |         return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;
> >        |                                                             ^~~
> > arch/x86/kvm/vmx/nested.c:3862:61: note: each undeclared identifier is reported only once for each function it appears in
> >    CC [M]  arch/x86/kvm/svm/svm_onhyperv.o
> > arch/x86/kvm/vmx/nested.c:3863:1: error: control reaches end of non-void function [-Werror=return-type]
> >   3863 | }
> >        | ^
> > cc1: all warnings being treated as errors
> >    LD [M]  arch/x86/kvm/kvm.o
> 
> Yeah, it doesn't.  Of course this will need a v2, also because there are
> failures in the vmx tests.

Heh, I suspected there would be failures, I was about to type up a response to
patch 3.  MTF is subtly relying on the call from kvm_vcpu_running() to inject
the event.

From: Sean Christopherson <seanjc@google.com>
Date: Fri, 29 Apr 2022 17:30:54 +0000
Subject: [PATCH] KVM: nVMX: Make an event request when pending an MTF nested
 VM-Exit

Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
through inject_pending_event() and thus vmx_check_nested_events() prior
to re-entering the guest.  MTF currently works by virtue of KVM's hack
that calls kvm_check_nested_events() from kvm_vcpu_running(), but that
hack will be removed in the near future.

Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d58b763df855..4c635bc08105 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1577,10 +1577,12 @@ static void vmx_update_emulated_instruction(struct kvm_vcpu *vcpu)
 	 */
 	if (nested_cpu_has_mtf(vmcs12) &&
 	    (!vcpu->arch.exception.pending ||
-	     vcpu->arch.exception.nr == DB_VECTOR))
+	     vcpu->arch.exception.nr == DB_VECTOR)) {
 		vmx->nested.mtf_pending = true;
-	else
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	} else {
 		vmx->nested.mtf_pending = false;
+	}
 }

 static int vmx_skip_emulated_instruction(struct kvm_vcpu *vcpu)

base-commit: 39aa5903e8c407e5128c15aeabb0717b275b007e
--



* Re: [PATCH 1/3] KVM: x86: make vendor code check for all nested events
  2022-04-27 20:40   ` Maxim Levitsky
@ 2022-04-29 18:40     ` Paolo Bonzini
  0 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2022-04-29 18:40 UTC (permalink / raw)
  To: Maxim Levitsky, linux-kernel, kvm; +Cc: seanjc, stable

On 4/27/22 22:40, Maxim Levitsky wrote:
> 
> Wasn't able to test on my Intel laptop; all of a sudden I am getting this in qemu:
> 
> 'cpuid_data is full, no space for cpuid(eax:0x8000001d,ecx:0x3e)'

Sending a patch soon, it's a QEMU bug that we have to work around.

Paolo


* Re: [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block
  2022-04-27 17:37 [PATCH 0/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
                   ` (2 preceding siblings ...)
  2022-04-27 17:37 ` [PATCH 3/3] KVM: x86: never write to memory from kvm_vcpu_check_block Paolo Bonzini
@ 2022-07-20  9:31 ` Maxim Levitsky
  3 siblings, 0 replies; 12+ messages in thread
From: Maxim Levitsky @ 2022-07-20  9:31 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: seanjc

On Wed, 2022-04-27 at 13:37 -0400, Paolo Bonzini wrote:
> Maxim reported the following backtrace:
> 
> [ 1355.807187]  kvm_vcpu_map+0x159/0x190 [kvm]
> [ 1355.807628]  nested_svm_vmexit+0x4c/0x7f0 [kvm_amd]
> [ 1355.808036]  ? kvm_vcpu_block+0x54/0xa0 [kvm]
> [ 1355.808450]  svm_check_nested_events+0x97/0x390 [kvm_amd]
> [ 1355.808920]  kvm_check_nested_events+0x1c/0x40 [kvm] 
> [ 1355.809396]  kvm_arch_vcpu_runnable+0x4e/0x190 [kvm]
> [ 1355.809892]  kvm_vcpu_check_block+0x4f/0x100 [kvm]
> [ 1355.811259]  kvm_vcpu_block+0x6b/0xa0 [kvm] 
> 
> due to kmap being called in non-sleepable (!TASK_RUNNING) context.
> Fix it by extending kvm_x86_ops.nested_ops->hv_timer_pending and
> getting rid of one annoying instance of kvm_check_nested_events.
> 
> Paolo
> 

Any update on this patch series? Pinging so it is not forgotten.

Best regards,
	Maxim Levitsky


