From: Liran Alon <liran.alon@oracle.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Joao Martins <joao.m.martins@oracle.com>,
ehabkost@redhat.com, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] [PATCH 1/4] target/i386: kvm: Init nested-state for VMX when vCPU expose VMX
Date: Thu, 11 Jul 2019 17:36:17 +0300 [thread overview]
Message-ID: <901DE868-40A4-4668-8E10-D14B1E97BAE0@oracle.com> (raw)
In-Reply-To: <805d7eb5-e171-60bb-94c2-574180f5c44c@redhat.com>
> On 11 Jul 2019, at 16:45, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 05/07/19 23:06, Liran Alon wrote:
>> -    if (IS_INTEL_CPU(env)) {
>> +    if (cpu_has_vmx(env)) {
>>          struct kvm_vmx_nested_state_hdr *vmx_hdr =
>>              &env->nested_state->hdr.vmx;
>>
>
> I am not sure this is enough, because kvm_get_nested_state and kvm_put_nested_state would run anyway later. If we want to cull them completely for a non-VMX virtual machine, I'd do something like this:
>
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index 5035092..73ab102 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -1748,14 +1748,13 @@ int kvm_arch_init_vcpu(CPUState *cs)
>      max_nested_state_len = kvm_max_nested_state_length();
>      if (max_nested_state_len > 0) {
>          assert(max_nested_state_len >= offsetof(struct kvm_nested_state, data));
> -        env->nested_state = g_malloc0(max_nested_state_len);
>
> -        env->nested_state->size = max_nested_state_len;
> -
> -        if (IS_INTEL_CPU(env)) {
> +        if (cpu_has_vmx(env)) {
> +            env->nested_state = g_malloc0(max_nested_state_len);
> +            env->nested_state->size = max_nested_state_len;
> +
>              struct kvm_vmx_nested_state_hdr *vmx_hdr =
>                  &env->nested_state->hdr.vmx;
>
>              env->nested_state->format = KVM_STATE_NESTED_FORMAT_VMX;
>              vmx_hdr->vmxon_pa = -1ull;
>              vmx_hdr->vmcs12_pa = -1ull;
> @@ -3682,7 +3681,7 @@ static int kvm_put_nested_state(X86CPU *cpu)
>      CPUX86State *env = &cpu->env;
>      int max_nested_state_len = kvm_max_nested_state_length();
>
> -    if (max_nested_state_len <= 0) {
> +    if (!env->nested_state) {
>          return 0;
>      }
>
> @@ -3696,7 +3695,7 @@ static int kvm_get_nested_state(X86CPU *cpu)
>      int max_nested_state_len = kvm_max_nested_state_length();
>      int ret;
>
> -    if (max_nested_state_len <= 0) {
> +    if (!env->nested_state) {
>          return 0;
>      }
>
>
> What do you think? (As a side effect, this completely disables
> KVM_GET/SET_NESTED_STATE on SVM, which I think is safer since it
> will have to save at least the NPT root and the paging mode. So we
> could remove vmstate_svm_nested_state as well).
>
> Paolo
I like your suggestion better than my commit. It is indeed more elegant and correct. :)
The code change above looks good to me, since nested_state_needed() will return false anyway when env->nested_state is NULL.
Will you submit a new patch, or should I?
-Liran
Thread overview: 24+ messages
2019-07-05 21:06 [PATCH 0/4] target/i386: kvm: Various nested-state fixes Liran Alon
2019-07-05 21:06 ` [PATCH 1/4] target/i386: kvm: Init nested-state for VMX when vCPU expose VMX Liran Alon
2019-07-09 17:21   ` Maran Wilson
2019-07-11 13:45   ` Paolo Bonzini
2019-07-11 14:36     ` Liran Alon [this message]
2019-07-11 17:27       ` Paolo Bonzini
2019-07-05 21:06 ` [PATCH 2/4] target/i386: kvm: Init nested-state for vCPU exposed with SVM Liran Alon
2019-07-09 17:21   ` Maran Wilson
2019-07-11 13:35   ` Paolo Bonzini
2019-07-05 21:06 ` [PATCH 3/4] target/i386: kvm: Save nested-state only in case vCPU have set VMXON region Liran Alon
2019-07-09 17:21   ` Maran Wilson
2019-07-05 21:06 ` [PATCH 4/4] target/i386: kvm: Demand nested migration kernel capabilities only when vCPU may have enabled VMX Liran Alon
2019-07-09 17:21   ` Maran Wilson
2019-07-15  9:20   ` Liran Alon
2019-07-15  9:25     ` Paolo Bonzini