* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
@ 2016-12-02 15:24 Mika Penttilä
  2016-12-02 17:02 ` Roman Kagan
  0 siblings, 1 reply; 7+ messages in thread
From: Mika Penttilä @ 2016-12-02 15:24 UTC (permalink / raw)
  To: rkagan, kvm

> > > > > --- a/arch/x86/kvm/x86.c
> > > > > +++ b/arch/x86/kvm/x86.c
> > > > > @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> > > > > bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> > > > > {
> > > > > 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> > > > > -		return true;
> > > > > +		return false;
> > > > > 
> > > > > 
> > > > Why do you make this change?
> > 
> > > Because the code does the opposite of what it's meant to do.
> > 
> > It could have a better name but returning "true" is right. See below.
> > 
> > > > I think kvm_arch_async_page_present() is not
> > > > ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);
> > > I wonder how you came to such a conclusion?  I certainly see them called
> > > on my test machine (you need to have the guest memory swapped out for
> > > that, that can be forced e.g. using a memory cgroup).
> > if !KVM_ASYNC_PF_ENABLED, then kvm_check_async_pf_completion(), its only call site, never calls it.

> How's that?  I don't see any check for it in
> kvm_check_async_pf_completion().  Moreover, that's exactly where it does
> that check.

void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
{
        struct kvm_async_pf *work;

        while (!list_empty_careful(&vcpu->async_pf.done) &&
              kvm_arch_can_inject_async_page_present(vcpu)) {
                spin_lock(&vcpu->async_pf.lock);
  

and you made kvm_arch_can_inject_async_page_present(vcpu) return false if !KVM_ASYNC_PF_ENABLED (i.e. when async pf is not enabled), so that loop never runs for such a guest.
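
For reference, here is a rough sketch of the rest of that loop (reconstructed from memory of the virt/kvm/async_pf.c code under discussion, so not verbatim); this is the work that would then never run:

                work = list_first_entry(&vcpu->async_pf.done, typeof(*work),
                                        link);
                list_del(&work->link);
                spin_unlock(&vcpu->async_pf.lock);

                kvm_arch_async_page_ready(vcpu, work);
                /* this is what ends up calling kvm_del_async_pf_gfn() */
                kvm_arch_async_page_present(vcpu, work);

                list_del(&work->queue);
                vcpu->async_pf.queued--;
                kmem_cache_free(async_pf_cache, work);
        }
}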


> > Maybe you had KVM_ASYNC_PF_ENABLED?

> Of course I did.  Not sure I get what you mean...

I mean that if guest support for async pf is not enabled, you may have broken that case.
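
For context, KVM_ASYNC_PF_ENABLED is a bit in the value the guest writes to
MSR_KVM_ASYNC_PF_EN, and KVM stashes that raw value in vcpu->arch.apf.msr_val
(a guest that never writes the MSR keeps msr_val == 0).  A rough sketch of
that path in arch/x86/kvm/x86.c, abridged from memory and not verbatim:

static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
{
	/* abridged sketch, not verbatim; reserved-bit checks elided */
	vcpu->arch.apf.msr_val = data;

	if (!(data & KVM_ASYNC_PF_ENABLED)) {
		/* guest wrote a value with the enable bit clear */
		kvm_clear_async_pf_completion_queue(vcpu);
		kvm_async_pf_hash_reset(vcpu);
		return 0;
	}
	...
	return 0;
}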

--Mika
 




* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
  2016-12-02 15:24 [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR Mika Penttilä
@ 2016-12-02 17:02 ` Roman Kagan
  0 siblings, 0 replies; 7+ messages in thread
From: Roman Kagan @ 2016-12-02 17:02 UTC (permalink / raw)
  To: Mika Penttilä; +Cc: kvm

On Fri, Dec 02, 2016 at 05:24:55PM +0200, Mika Penttilä wrote:
> > > > > > --- a/arch/x86/kvm/x86.c
> > > > > > +++ b/arch/x86/kvm/x86.c
> > > > > > @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> > > > > > bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> > > > > > {
> > > > > > 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> > > > > > -		return true;
> > > > > > +		return false;
> > > > > > 
> > > > > > 
> > > > > Why do you make this change?
> > > 
> > > > Because the code does the opposite of what it's meant to do.
> > > 
> > > It could have a better name but returning "true" is right. See below.
> > > 
> > > > > I think kvm_arch_async_page_present() is not
> > > > > ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);
> > > > I wonder how you came to such a conclusion?  I certainly see them called
> > > > on my test machine (you need to have the guest memory swapped out for
> > > > that, that can be forced e.g. using a memory cgroup).
> > > if !KVM_ASYNC_PF_ENABLED, then kvm_check_async_pf_completion(), its only call site, never calls it.
> 
> > How's that?  I don't see any check for it in
> > kvm_check_async_pf_completion().  Moreover, that's exactly where it does
> > that check.
> 
> void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
> {
>         struct kvm_async_pf *work;
> 
>         while (!list_empty_careful(&vcpu->async_pf.done) &&
>               kvm_arch_can_inject_async_page_present(vcpu)) {
>                 spin_lock(&vcpu->async_pf.lock);
>   
> 
> and you made kvm_arch_can_inject_async_page_present(vcpu) return false if !KVM_ASYNC_PF_ENABLED (i.e. when async pf is not enabled), so that loop never runs for such a guest.
> 
> 
> > > Maybe you had KVM_ASYNC_PF_ENABLED?
> 
> > Of course I did.  Not sure I get what you mean...
> 
> I mean that if guest support for async pf is not enabled, you may have broken that case.

Ah, I finally see the light, thank you.

Apparently, when async_pf is disabled by the guest, all accumulated async
page faults should still be drained, but without injecting #PF into the
guest.  The latter is taken care of by another check of the MSR value in
kvm_arch_async_page_present().
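
Roughly, that check looks like this (an abridged sketch from memory of
arch/x86/kvm/x86.c, not verbatim):

void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
				 struct kvm_async_pf *work)
{
	struct x86_exception fault;

	/* abridged sketch, not verbatim */
	...
	kvm_del_async_pf_gfn(vcpu, work->arch.gfn);	/* skipped for wakeup-all events */

	if ((vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED) &&
	    !apf_put_user(vcpu, KVM_PV_REASON_PAGE_READY)) {
		/* only a guest that enabled async_pf gets the "page ready" #PF */
		fault.vector = PF_VECTOR;
		fault.error_code_valid = true;
		fault.error_code = 0;
		fault.address = work->arch.token;
		kvm_inject_page_fault(vcpu, &fault);
	}
	...
}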

My patch is indeed wrong; I was misled by the "self-explanatory"
function name, which turned out to be bogus.

I'll cook up another patch renaming the function and putting a comment
there, too.

Thanks,
Roman.


* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
  2016-12-02 14:14 Mika Penttilä
@ 2016-12-02 14:31 ` Roman Kagan
  0 siblings, 0 replies; 7+ messages in thread
From: Roman Kagan @ 2016-12-02 14:31 UTC (permalink / raw)
  To: Mika Penttilä; +Cc: kvm

On Fri, Dec 02, 2016 at 04:14:48PM +0200, Mika Penttilä wrote:
> > On Fri, Dec 02, 2016 at 12:45:28PM +0200, Mika Penttilä wrote:
> > > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > > index bf11fe4..14a46e9 100644
> > > > --- a/arch/x86/kvm/x86.c
> > > > +++ b/arch/x86/kvm/x86.c
> > > > @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> > > > bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> > > > {
> > > > 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> > > > -		return true;
> > > > +		return false;
> > > > 
> > > > 
> > > Why do you make this change?
> 
> > Because the code does the opposite of what it's meant to do.
> 
> It could have a better name but returning "true" is right. See below.
> 
> > > I think kvm_arch_async_page_present() is not
> > > ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);
> 
> > I wonder how you came to such a conclusion?  I certainly see them called
> > on my test machine (you need to have the guest memory swapped out for
> > that, that can be forced e.g. using a memory cgroup).
> 
> if !KVM_ASYNC_PF_ENABLED, then kvm_check_async_pf_completion(), its only call site, never calls it.

How's that?  I don't see any check for it in
kvm_check_async_pf_completion().  Moreover, that's exactly where it does
that check.

> Maybe you had KVM_ASYNC_PF_ENABLED?

Of course I did.  Not sure I get what you mean...

Roman.


* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
@ 2016-12-02 14:14 Mika Penttilä
  2016-12-02 14:31 ` Roman Kagan
  0 siblings, 1 reply; 7+ messages in thread
From: Mika Penttilä @ 2016-12-02 14:14 UTC (permalink / raw)
  To: rkagan, kvm

> On Fri, Dec 02, 2016 at 12:45:28PM +0200, Mika Penttilä wrote:
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index bf11fe4..14a46e9 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> > > bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> > > {
> > > 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> > > -		return true;
> > > +		return false;
> > > 
> > > 
> > Why do you make this change?

> Because the code does the opposite of what it's meant to do.

It could have a better name but returning "true" is right. See below.

> > I think kvm_arch_async_page_present() is not
> > ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);

> I wonder how you came to such a conclusion?  I certainly see them called
> on my test machine (you need to have the guest memory swapped out for
> that, that can be forced e.g. using a memory cgroup).

if !KVM_ASYNC_PF_ENABLED, then kvm_check_async_pf_completion(), its only call site, never calls it.

Maybe you had KVM_ASYNC_PF_ENABLED?

> Roman.

--Mika



* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
  2016-12-02 10:45 ` Mika Penttilä
@ 2016-12-02 12:05   ` Roman Kagan
  0 siblings, 0 replies; 7+ messages in thread
From: Roman Kagan @ 2016-12-02 12:05 UTC (permalink / raw)
  To: Mika Penttilä; +Cc: kvm

On Fri, Dec 02, 2016 at 12:45:28PM +0200, Mika Penttilä wrote:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index bf11fe4..14a46e9 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> > bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> > {
> > 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> > -		return true;
> > +		return false;
> > 
> > 
> Why do you make this change?

Because the code does the opposite of what it's meant to do.

> I think kvm_arch_async_page_present() is not
> ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);

I wonder how you came to such a conclusion?  I certainly see them called
on my test machine (you need to have the guest memory swapped out for
that, that can be forced e.g. using a memory cgroup).

Roman.


* Re: [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
       [not found] <a237e9b7-8155-5006-81b1-ac77a5efd13d@nextfour.com>
@ 2016-12-02 10:45 ` Mika Penttilä
  2016-12-02 12:05   ` Roman Kagan
  0 siblings, 1 reply; 7+ messages in thread
From: Mika Penttilä @ 2016-12-02 10:45 UTC (permalink / raw)
  To: rkagan, kvm


> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index bf11fe4..14a46e9 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
> {
> 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
> -		return true;
> +		return false;
> 
> 
Why do you make this change? I think kvm_arch_async_page_present() is not
ever called now, and neither is kvm_del_async_pf_gfn(vcpu, work->arch.gfn);
 
Even with !KVM_ASYNC_PF_ENABLED, the page fault is still handled asynchronously on the workqueue.
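
For reference, a rough sketch of that path (virt/kvm/async_pf.c, heavily
abridged from memory, not verbatim): the workqueue handler faults the page in
and then queues the completed item on vcpu->async_pf.done, which
kvm_check_async_pf_completion() is then supposed to drain:

static void async_pf_execute(struct work_struct *work)
{
	struct kvm_async_pf *apf =
		container_of(work, struct kvm_async_pf, work);
	struct kvm_vcpu *vcpu = apf->vcpu;

	/* abridged sketch, not verbatim */
	...	/* fault the page in on the guest's behalf (get_user_pages) */

	spin_lock(&vcpu->async_pf.lock);
	list_add_tail(&apf->link, &vcpu->async_pf.done);
	spin_unlock(&vcpu->async_pf.lock);

	...	/* wake up the vcpu; the queued item is later picked up by
		 * kvm_check_async_pf_completion() */
}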

> 
> 
> 	else
> 		return !kvm_event_needs_reinjection(vcpu) &&
> 			kvm_x86_ops->interrupt_allowed(vcpu);
> 
--Mika
 



* [PATCH 1/5] kvm/x86: fix inversed check for async_pf MSR
  2016-12-02  8:47 [PATCH 0/5] kvm: avoid delaying async_pf ready delivery Roman Kagan
@ 2016-12-02  8:47 ` Roman Kagan
  0 siblings, 0 replies; 7+ messages in thread
From: Roman Kagan @ 2016-12-02  8:47 UTC (permalink / raw)
  To: Radim Krčmář, Paolo Bonzini, kvm; +Cc: Denis Lunev, Roman Kagan

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bf11fe4..14a46e9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8402,7 +8402,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
 {
 	if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
-		return true;
+		return false;
 	else
 		return !kvm_event_needs_reinjection(vcpu) &&
 			kvm_x86_ops->interrupt_allowed(vcpu);
-- 
2.9.3


