* [PATCH] KVM: x86: Use get_cpl directly in case of vcpu_load to improve accuracy
From: Like Xu @ 2023-11-23  7:58 UTC
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel

From: Like Xu <likexu@tencent.com>

When the vcpu is the one returned by kvm_get_running_vcpu(), use get_cpl
directly to return the exact current state to callers of the vcpu_in_kernel
API.

In scenarios where a VM's payload is profiled via perf-kvm, the value of
vcpu->arch.preempted_in_kernel is not strictly synchronised with the vCPU's
current CPL.

This affects perf/core's ability to make use of the kvm_guest_state() API
to tag guest RIP with PERF_RECORD_MISC_GUEST_{KERNEL|USER} and record it
in the sample. This causes perf/tool to fail to connect the vcpu RIPs to
the guest kernel space symbols when parsing these samples due to incorrect
PERF_RECORD_MISC flags:

   Before (perf-report of a cpu-cycles sample):
      1.23%  :58945   [unknown]         [u] 0xffffffff818012e0

Given the semantics of preempted_in_kernel, it may not be easy (without
extra effort) to keep preempted_in_kernel and the CPL in sync. Therefore,
to make this API more trustworthy, fall back to using get_cpl() directly
when the vcpu is loaded:

   After:
      1.35%  :60703   [kernel.vmlinux]  [g] asm_exc_page_fault

Further performance tuning is clearly possible; correcting the API's
accuracy takes priority as the first step.
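
For reference, the generic consumer of this API is kvm_guest_state(); a
simplified sketch of virt/kvm/kvm_main.c as of the base commit follows,
shown only for context and untouched by this patch:

	static unsigned int kvm_guest_state(void)
	{
		struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
		unsigned int state;

		if (!kvm_arch_pmi_in_guest(vcpu))
			return 0;

		state = PERF_GUEST_ACTIVE;
		/* An inaccurate in-kernel check mis-tags the sample. */
		if (!kvm_arch_vcpu_in_kernel(vcpu))
			state |= PERF_GUEST_USER;

		return state;
	}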

Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/x86.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2c924075f6f1..c454df904a45 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13031,7 +13031,10 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return true;
 
-	return vcpu->arch.preempted_in_kernel;
+	if (vcpu != kvm_get_running_vcpu())
+		return vcpu->arch.preempted_in_kernel;
+
+	return static_call(kvm_x86_get_cpl)(vcpu) == 0;
 }
 
 unsigned long kvm_arch_vcpu_get_ip(struct kvm_vcpu *vcpu)

base-commit: 45b890f7689eb0aba454fc5831d2d79763781677
-- 
2.43.0



* Re: [PATCH] KVM: x86: Use get_cpl directly in case of vcpu_load to improve accuracy
From: Sean Christopherson @ 2023-11-28  1:30 UTC
  To: Like Xu; +Cc: Paolo Bonzini, kvm, linux-kernel

On Thu, Nov 23, 2023, Like Xu wrote:
> Signed-off-by: Like Xu <likexu@tencent.com>
> ---
>  arch/x86/kvm/x86.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2c924075f6f1..c454df904a45 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13031,7 +13031,10 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.guest_state_protected)
>  		return true;
>  
> -	return vcpu->arch.preempted_in_kernel;
> +	if (vcpu != kvm_get_running_vcpu())
> +		return vcpu->arch.preempted_in_kernel;

Eww, KVM really shouldn't be reading vcpu->arch.preempted_in_kernel in a generic
vcpu_in_kernel() API. 

Rather than fudge around that ugliness with a kvm_get_running_vcpu() check, what
if we instead repurpose kvm_arch_dy_has_pending_interrupt(), which is effectively
x86 specific, to deal with not being able to read the current CPL for a vCPU that
is (possibly) not "loaded", which AFAICT is also x86 specific (or rather, Intel/VMX
specific).

And if getting the CPL for a vCPU that may not be loaded is problematic for other
architectures, then I think the correct fix is to move preempted_in_kernel into
common code and check it directly in kvm_vcpu_on_spin().

This is what I'm thinking:

---
 arch/x86/kvm/x86.c       | 22 +++++++++++++++-------
 include/linux/kvm_host.h |  2 +-
 virt/kvm/kvm_main.c      |  7 +++----
 3 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6d0772b47041..5c1a75c0dafe 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13022,13 +13022,21 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 	return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu);
 }
 
-bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
+static bool kvm_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
-	if (kvm_vcpu_apicv_active(vcpu) &&
-	    static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu))
-		return true;
+	return kvm_vcpu_apicv_active(vcpu) &&
+	       static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu);
+}
 
-	return false;
+bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Treat the vCPU as being in-kernel if it has a pending interrupt, as
+	 * the vCPU trying to yield may be spinning on IPI delivery, i.e. the
+	 * target vCPU is in-kernel for the purposes of directed yield.
+	 */
+	return vcpu->arch.preempted_in_kernel ||
+	       kvm_dy_has_pending_interrupt(vcpu);
 }
 
 bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
@@ -13043,7 +13051,7 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
 		 kvm_test_request(KVM_REQ_EVENT, vcpu))
 		return true;
 
-	return kvm_arch_dy_has_pending_interrupt(vcpu);
+	return kvm_dy_has_pending_interrupt(vcpu);
 }
 
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
@@ -13051,7 +13059,7 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return true;
 
-	return vcpu->arch.preempted_in_kernel;
+	return static_call(kvm_x86_get_cpl)(vcpu);
 }
 
 unsigned long kvm_arch_vcpu_get_ip(struct kvm_vcpu *vcpu)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ea1523a7b83a..820c5b64230f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1505,7 +1505,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
 bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu);
-bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
+bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_post_init_vm(struct kvm *kvm);
 void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 int kvm_arch_create_vm_debugfs(struct kvm *kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8758cb799e18..e84be7e2e05e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4049,9 +4049,9 @@ static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
 	return false;
 }
 
-bool __weak kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
+bool __weak kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
 {
-	return false;
+	return kvm_arch_vcpu_in_kernel(vcpu);
 }
 
 void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
@@ -4086,8 +4086,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 			if (kvm_vcpu_is_blocking(vcpu) && !vcpu_dy_runnable(vcpu))
 				continue;
 			if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
-			    !kvm_arch_dy_has_pending_interrupt(vcpu) &&
-			    !kvm_arch_vcpu_in_kernel(vcpu))
+			    kvm_arch_vcpu_preempted_in_kernel(vcpu))
 				continue;
 			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
 				continue;

base-commit: e9e60c82fe391d04db55a91c733df4a017c28b2f
-- 



* Re: [PATCH] KVM: x86: Use get_cpl directly in case of vcpu_load to improve accuracy
From: Like Xu @ 2023-11-29 11:40 UTC
  To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel

Thanks for your comments.

On 28/11/2023 9:30 am, Sean Christopherson wrote:
> On Thu, Nov 23, 2023, Like Xu wrote:
>> Signed-off-by: Like Xu <likexu@tencent.com>
>> ---
>>   arch/x86/kvm/x86.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 2c924075f6f1..c454df904a45 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -13031,7 +13031,10 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>>   	if (vcpu->arch.guest_state_protected)
>>   		return true;
>>   
>> -	return vcpu->arch.preempted_in_kernel;
>> +	if (vcpu != kvm_get_running_vcpu())
>> +		return vcpu->arch.preempted_in_kernel;
> 
> Eww, KVM really shouldn't be reading vcpu->arch.preempted_in_kernel in a generic
> vcpu_in_kernel() API.

It looks weird to me too.

> 
> Rather than fudge around that ugliness with a kvm_get_running_vcpu() check, what
> if we instead repurpose kvm_arch_dy_has_pending_interrupt(), which is effectively
> x86 specific, to deal with not being able to read the current CPL for a vCPU that
> is (possibly) not "loaded", which AFAICT is also x86 specific (or rather, Intel/VMX
> specific).

I'd break it into two parts, the first step applying this simpler, more
straightforward fix (which is backport friendly compared to the diff below),
and the second step applying your insight for more decoupling and cleanup.

You'd prefer one move to fix it, right?

> 
> And if getting the CPL for a vCPU that may not be loaded is problematic for other
> architectures, then I think the correct fix is to move preempted_in_kernel into
> common code and check it directly in kvm_vcpu_on_spin().

Not sure which tests would cover this part of the change.

> 
> This is what I'm thinking:
> 
> ---
>   arch/x86/kvm/x86.c       | 22 +++++++++++++++-------
>   include/linux/kvm_host.h |  2 +-
>   virt/kvm/kvm_main.c      |  7 +++----
>   3 files changed, 19 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6d0772b47041..5c1a75c0dafe 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13022,13 +13022,21 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
>   	return kvm_vcpu_running(vcpu) || kvm_vcpu_has_events(vcpu);
>   }
>   
> -bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> +static bool kvm_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
>   {
> -	if (kvm_vcpu_apicv_active(vcpu) &&
> -	    static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu))
> -		return true;
> +	return kvm_vcpu_apicv_active(vcpu) &&
> +	       static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu);
> +}
>   
> -	return false;
> +bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * Treat the vCPU as being in-kernel if it has a pending interrupt, as
> +	 * the vCPU trying to yield may be spinning on IPI delivery, i.e. the
> +	 * target vCPU is in-kernel for the purposes of directed yield.

How about the case "vcpu->arch.guest_state_protected == true"?

> +	 */
> +	return vcpu->arch.preempted_in_kernel ||
> +	       kvm_dy_has_pending_interrupt(vcpu);
>   }
>   
>   bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> @@ -13043,7 +13051,7 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
>   		 kvm_test_request(KVM_REQ_EVENT, vcpu))
>   		return true;
>   
> -	return kvm_arch_dy_has_pending_interrupt(vcpu);
> +	return kvm_dy_has_pending_interrupt(vcpu);
>   }
>   
>   bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> @@ -13051,7 +13059,7 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
>   	if (vcpu->arch.guest_state_protected)
>   		return true;
>   
> -	return vcpu->arch.preempted_in_kernel;
> +	return static_call(kvm_x86_get_cpl)(vcpu);

We need "return static_call(kvm_x86_get_cpl)(vcpu) == 0;" here.

>   }
>   
>   unsigned long kvm_arch_vcpu_get_ip(struct kvm_vcpu *vcpu)
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ea1523a7b83a..820c5b64230f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1505,7 +1505,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>   bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
>   int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
>   bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu);
> -bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
> +bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu);
>   int kvm_arch_post_init_vm(struct kvm *kvm);
>   void kvm_arch_pre_destroy_vm(struct kvm *kvm);
>   int kvm_arch_create_vm_debugfs(struct kvm *kvm);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8758cb799e18..e84be7e2e05e 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4049,9 +4049,9 @@ static bool vcpu_dy_runnable(struct kvm_vcpu *vcpu)
>   	return false;
>   }
>   
> -bool __weak kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> +bool __weak kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
>   {
> -	return false;
> +	return kvm_arch_vcpu_in_kernel(vcpu);
>   }
>   
>   void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
> @@ -4086,8 +4086,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
>   			if (kvm_vcpu_is_blocking(vcpu) && !vcpu_dy_runnable(vcpu))
>   				continue;
>   			if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
> -			    !kvm_arch_dy_has_pending_interrupt(vcpu) &&
> -			    !kvm_arch_vcpu_in_kernel(vcpu))
> +			    kvm_arch_vcpu_preempted_in_kernel(vcpu))

Use !kvm_arch_vcpu_preempted_in_kernel(vcpu)?

>   				continue;
>   			if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
>   				continue;
> 
> base-commit: e9e60c82fe391d04db55a91c733df4a017c28b2f


* Re: [PATCH] KVM: x86: Use get_cpl directly in case of vcpu_load to improve accuracy
From: Sean Christopherson @ 2023-11-29 18:25 UTC
  To: Like Xu; +Cc: Paolo Bonzini, kvm, linux-kernel

On Wed, Nov 29, 2023, Like Xu wrote:
> > Rather than fudge around that ugliness with a kvm_get_running_vcpu() check, what
> > if we instead repurpose kvm_arch_dy_has_pending_interrupt(), which is effectively
> > x86 specific, to deal with not being able to read the current CPL for a vCPU that
> > is (possibly) not "loaded", which AFAICT is also x86 specific (or rather, Intel/VMX
> > specific).
> 
> I'd break it into two parts, the first step applying this simpler, more
> straightforward fix (which is backport friendly compared to the diff below),
> and the second step applying your insight for more decoupling and cleanup.
> 
> You'd prefer one move to fix it, right?

Yeah, I'll apply your patch first, though if you don't object I'd like to reword
the shortlog+changelog to make it explicitly clear that this is a correctness fix,
that the preemption case really needs to have a separate API, and that checking
for vcpu->preempted isn't safe.

I've applied it to kvm-x86/fixes with the below changelog, holler if you want to
change anything.

[1/1] KVM: x86: Get CPL directly when checking if loaded vCPU is in kernel mode
      https://github.com/kvm-x86/linux/commit/8eedf4177184

    KVM: x86: Get CPL directly when checking if loaded vCPU is in kernel mode
    
    When querying whether or not a vCPU "is" running in kernel mode, directly
    get the CPL if the vCPU is the currently loaded vCPU.  In scenarios where
    a guest is profiled via perf-kvm, querying vcpu->arch.preempted_in_kernel
    from kvm_guest_state() is wrong if the vCPU is actively running, i.e. hasn't
    been preempted and so preempted_in_kernel is stale.
    
    This affects perf/core's ability to accurately tag guest RIP with
    PERF_RECORD_MISC_GUEST_{KERNEL|USER} and record it in the sample.  This
    causes perf/tool to fail to connect the vCPU RIPs to the guest kernel
    space symbols when parsing these samples due to incorrect PERF_RECORD_MISC
    flags:
    
       Before (perf-report of a cpu-cycles sample):
          1.23%  :58945   [unknown]         [u] 0xffffffff818012e0
    
       After:
          1.35%  :60703   [kernel.vmlinux]  [g] asm_exc_page_fault
    
    Note, checking preempted_in_kernel in kvm_arch_vcpu_in_kernel() is awful
    as nothing in the API's suggests that it's safe to use if and only if the
    vCPU was preempted.  That can be cleaned up in the future, for now just
    fix the glaring correctness bug.
    
    Note #2, checking vcpu->preempted is NOT safe, as getting the CPL on VMX
    requires VMREAD, i.e. is correct if and only if the vCPU is loaded.  If
    the target vCPU *was* preempted, then it can be scheduled back in after
    the check on vcpu->preempted in kvm_vcpu_on_spin(), i.e. KVM could end up
    trying to do VMREAD on a VMCS that isn't loaded on the current pCPU.
    
    Signed-off-by: Like Xu <likexu@tencent.com>
    Fixes: e1bfc24577cc ("KVM: Move x86's perf guest info callbacks to generic KVM")
    Link: https://lore.kernel.org/r/20231123075818.12521-1-likexu@tencent.com
    [sean: massage changelog, add Fixes]
    Signed-off-by: Sean Christopherson <seanjc@google.com>
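
For the record, reading the CPL on VMX boils down to a VMREAD of the guest's
SS access rights; roughly (a simplified sketch of vmx_get_cpl() in
arch/x86/kvm/vmx/vmx.c, eliding upstream's segment-register caching):

	int vmx_get_cpl(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* CPL is architecturally 0 in virtual-8086 mode. */
		if (unlikely(vmx->rmode.vm86_active))
			return 0;

		/*
		 * VMREAD is valid only against the VMCS that is loaded on
		 * the current pCPU, hence the vCPU must be loaded to read
		 * the DPL of SS (i.e. the CPL).
		 */
		return VMX_AR_DPL(vmx_read_guest_seg_ar(vmx, VCPU_SREG_SS));
	}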

> > And if getting the CPL for a vCPU that may not be loaded is problematic for other
> > architectures, then I think the correct fix is to move preempted_in_kernel into
> > common code and check it directly in kvm_vcpu_on_spin().
> 
> Not sure which tests would cover this part of the change.

It'd likely require a human to look at results, i.e. as you did.

> > +bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> > +{
> > +	/*
> > +	 * Treat the vCPU as being in-kernel if it has a pending interrupt, as
> > +	 * the vCPU trying to yield may be spinning on IPI delivery, i.e. the
> > +	 * the target vCPU is in-kernel for the purposes of directed yield.
> 
> How about the case "vcpu->arch.guest_state_protected == true"?

Ah, right, the existing code considers such vCPUs to always be in-kernel for
preemption checks.
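
I.e. the repurposed helper would need to preserve that behavior; a rough,
untested sketch on top of the diff above (not necessarily the final code):

	bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
	{
		/* Protected-state vCPUs are assumed to be in-kernel here. */
		if (vcpu->arch.guest_state_protected)
			return true;

		/*
		 * Treat the vCPU as in-kernel if it has a pending interrupt,
		 * as it may be the target of an IPI that the yielding vCPU
		 * is spinning on.
		 */
		return vcpu->arch.preempted_in_kernel ||
		       kvm_dy_has_pending_interrupt(vcpu);
	}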

> > +	return vcpu->arch.preempted_in_kernel ||
> > +	       kvm_dy_has_pending_interrupt(vcpu);
> >   }
> >   bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> > @@ -13043,7 +13051,7 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> >   		 kvm_test_request(KVM_REQ_EVENT, vcpu))
> >   		return true;
> > -	return kvm_arch_dy_has_pending_interrupt(vcpu);
> > +	return kvm_dy_has_pending_interrupt(vcpu);
> >   }
> >   bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> > @@ -13051,7 +13059,7 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> >   	if (vcpu->arch.guest_state_protected)
> >   		return true;
> > -	return vcpu->arch.preempted_in_kernel;
> > +	return static_call(kvm_x86_get_cpl)(vcpu);
> 
> We need "return static_call(kvm_x86_get_cpl)(vcpu) == 0;" here.

Doh, I had fixed this locally but forgot to refresh the copy+paste with the updated
diff.

> > -bool __weak kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> > +bool __weak kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> >   {
> > -	return false;
> > +	return kvm_arch_vcpu_in_kernel(vcpu);
> >   }
> >   void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
> > @@ -4086,8 +4086,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
> >   			if (kvm_vcpu_is_blocking(vcpu) && !vcpu_dy_runnable(vcpu))
> >   				continue;
> >   			if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
> > -			    !kvm_arch_dy_has_pending_interrupt(vcpu) &&
> > -			    !kvm_arch_vcpu_in_kernel(vcpu))
> > +			    kvm_arch_vcpu_preempted_in_kernel(vcpu))
> 
> Use !kvm_arch_vcpu_preempted_in_kernel(vcpu)?

Double doh.  Yeah, this is inverted.

