From: Wanpeng Li <kernellwp@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	kvm <kvm@vger.kernel.org>, "Radim Krčmář" <rkrcmar@redhat.com>,
	"Wanpeng Li" <wanpeng.li@hotmail.com>
Subject: Re: [PATCH v4 3/4] KVM: async_pf: Force a nested vmexit if the injected #PF is async_pf
Date: Wed, 28 Jun 2017 06:33:56 +0800	[thread overview]
Message-ID: <CANRm+CzJHoLwgsECeTbXrEVEjU4xzh0DTqiV3yVoLY2twHq5DQ@mail.gmail.com> (raw)
In-Reply-To: <61bbcc19-a818-6934-75e9-8aed28523aa0@redhat.com>

2017-06-27 21:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
>
>
> On 22/06/2017 04:06, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpeng.li@hotmail.com>
>>
>> Add an async_page_fault field to vcpu->arch.exception to identify an async
>> page fault, and construct the expected vm-exit information fields. Force
>> a nested VM exit from nested_vmx_check_exception() if the injected #PF
>> is an async page fault. Extend the userspace interfaces KVM_GET_VCPU_EVENTS
>> and KVM_SET_VCPU_EVENTS for live migration.
>
> I am not sure what would happen if a new kernel (that can produce
> async_page_fault=1) runs on top of an old userspace (that can consume it).
>
> I think it would be safer to make the new field "nested_apf", and only
> set it if in guest_mode, like
>
>         vcpu->arch.exception.nested_apf =
>                 is_guest_mode(vcpu) && fault->async_page_fault;
>         if (vcpu->arch.exception.nested_apf)
>                 vcpu->arch.apf.nested_apf_token = fault->address;
>         else
>                 vcpu->arch.cr2 = fault->address;

I have already added the same logic to kvm_inject_page_fault() in patch
3/4. In addition, we are guaranteed to be in guest mode when we set
svm->vmcb->control.xxxx in nested_svm_check_exception(), so how about
just doing the same as we do in nested_vmx_check_exception()?

+ if (svm->vcpu.arch.exception.async_page_fault)
+     svm->vmcb->control.exit_info_2 = svm->vcpu.arch.apf.nested_apf_token;
+ else
+     svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
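
For context, with that hunk applied the SVM side would look roughly like
this; the surrounding lines are a sketch based on the existing
nested_svm_check_exception() in svm.c, not the exact patch:

    /*
     * Rough sketch: nested_svm_check_exception() with the hunk above
     * applied. The code around the exit_info_2 assignment approximates
     * the current svm.c and may differ from the actual series.
     */
    static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
                                          bool has_error_code, u32 error_code)
    {
            int vmexit;

            /* Only a vmexit to L1 makes sense while running L2. */
            if (!is_guest_mode(&svm->vcpu))
                    return 0;

            svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
            svm->vmcb->control.exit_code_hi = 0;
            svm->vmcb->control.exit_info_1 = error_code;

            /*
             * is_guest_mode() was checked above, so async_page_fault here
             * always refers to a nested async #PF and the token is valid.
             */
            if (svm->vcpu.arch.exception.async_page_fault)
                    svm->vmcb->control.exit_info_2 =
                            svm->vcpu.arch.apf.nested_apf_token;
            else
                    svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;

            vmexit = nested_svm_intercept(svm);
            if (vmexit == NESTED_EXIT_DONE)
                    svm->nested.exit_required = true;

            return vmexit;
    }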

Regards,
Wanpeng Li


Thread overview: 8+ messages
2017-06-22  2:06 [PATCH v4 0/4] KVM: async_pf: Fix async_pf exception injection Wanpeng Li
2017-06-22  2:06 ` [PATCH v4 1/4] KVM: x86: Simple kvm_x86_ops->queue_exception parameter Wanpeng Li
2017-06-22  2:06 ` [PATCH v4 2/4] KVM: async_pf: Add L1 guest async_pf #PF vmexit handler Wanpeng Li
2017-06-22  2:06 ` [PATCH v4 3/4] KVM: async_pf: Force a nested vmexit if the injected #PF is async_pf Wanpeng Li
2017-06-27 13:40   ` Paolo Bonzini
2017-06-27 22:33     ` Wanpeng Li [this message]
2017-06-28 11:40       ` Paolo Bonzini
2017-06-22  2:06 ` [PATCH v4 4/4] KVM: async_pf: Let host know whether the guest support delivery async_pf as #PF vmexit Wanpeng Li
