* [PATCH 0/6] KVM: nSVM: few random fixes
@ 2021-01-06 10:49 Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 1/6] KVM: SVM: create svm_process_injected_event Maxim Levitsky
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:49 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

This is a series of fixes to nested SVM that finally makes my KVM-on-KVM
stress test pass, and fixes various other issues/regressions.

Patches 1-2 fix disappearing interrupts in L2 on migration, which usually
makes L2 hang.
The same issue happens on VMX; patches for it are work in progress and will be
sent in a separate series.
Paolo helped me find the root cause of this issue.

Note that this fix likely breaks a nested guest that uses software interrupt
injection (SVM_EXITINTINFO_TYPE_SOFT), because KVM currently ignores these
on SVM.

Patch 3 fixes a recent regression in the code that delays the nested MSR
bitmap processing to the next VM entry; it started to crash L1 after my
on-demand nested state allocation patches.

The problem was that the code assumed we would still be in nested guest mode on
the next VM entry after setting the nested state, but a pending event can cause
a nested VM exit prior to that.
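
Roughly the ordering that breaks (a simplified sketch of the assumed call flow,
not the literal code paths):

	svm_set_nested_state(vcpu, ...);    /* enters guest mode and requests
					       KVM_REQ_GET_NESTED_STATE_PAGES */

	/* a pending event causes a nested VM exit before the next VM entry */
	nested_svm_vmexit(svm);             /* we are back in L1 mode */

	/* on the next VM entry the stale request still fires */
	svm_get_nested_state_pages(vcpu);   /* switches to the nested MSR
					       bitmap although we run L1 */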

Patches 4-6 are a few things I found while reviewing the nested migration code.
I don't have a reproducer for them.

Best regards,
	Maxim Levitsky

Maxim Levitsky (6):
  KVM: SVM: create svm_process_injected_event
  KVM: nSVM: fix for disappearing L1->L2 event injection on L1 migration
  KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES on nested vmexit
  KVM: nSVM: correctly restore nested_run_pending on migration
  KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE
  KVM: nSVM: mark vmcb as dirty when forcingly leaving the guest mode

 arch/x86/kvm/svm/nested.c | 21 ++++++++++++--
 arch/x86/kvm/svm/svm.c    | 58 ++++++++++++++++++++++-----------------
 arch/x86/kvm/svm/svm.h    |  4 +++
 3 files changed, 55 insertions(+), 28 deletions(-)

-- 
2.26.2




* [PATCH 1/6] KVM: SVM: create svm_process_injected_event
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
@ 2021-01-06 10:49 ` Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 2/6] KVM: nSVM: fix for disappearing L1->L2 event injection on L1 migration Maxim Levitsky
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:49 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

Refactor the logic that parses an injected event into a separate function.

This will be used in the next patch to deal with the events that L1 wants to
inject into L2 in a way that survives migration.
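
After this patch the helper has two intended call sites (the second one is
added by the next patch); roughly:

	/* completion of events that KVM itself injected (this patch): */
	svm_process_injected_event(svm, svm->vmcb->control.exit_int_info,
				   svm->vmcb->control.exit_int_info_err);

	/* re-queueing of an L1 -> L2 injection (next patch): */
	svm_process_injected_event(svm, svm->nested.ctl.event_inj,
				   svm->nested.ctl.event_inj_err);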

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 58 ++++++++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.h |  4 +++
 2 files changed, 37 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 941e5251e13fe..01f1655d9e6f7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3584,38 +3584,21 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
 	svm->vmcb->control.int_ctl |= cr8 & V_TPR_MASK;
 }
 
-static void svm_complete_interrupts(struct vcpu_svm *svm)
+void svm_process_injected_event(struct vcpu_svm *svm,
+				u32 event,
+				u32 event_err_code)
 {
-	u8 vector;
-	int type;
-	u32 exitintinfo = svm->vmcb->control.exit_int_info;
-	unsigned int3_injected = svm->int3_injected;
+	u8 vector = event & SVM_EXITINTINFO_VEC_MASK;
+	int type = event & SVM_EXITINTINFO_TYPE_MASK;
 
+	unsigned int int3_injected = svm->int3_injected;
 	svm->int3_injected = 0;
 
-	/*
-	 * If we've made progress since setting HF_IRET_MASK, we've
-	 * executed an IRET and can allow NMI injection.
-	 */
-	if ((svm->vcpu.arch.hflags & HF_IRET_MASK) &&
-	    (sev_es_guest(svm->vcpu.kvm) ||
-	     kvm_rip_read(&svm->vcpu) != svm->nmi_iret_rip)) {
-		svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK);
-		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
-	}
-
-	svm->vcpu.arch.nmi_injected = false;
-	kvm_clear_exception_queue(&svm->vcpu);
-	kvm_clear_interrupt_queue(&svm->vcpu);
-
-	if (!(exitintinfo & SVM_EXITINTINFO_VALID))
+	if (!(event & SVM_EXITINTINFO_VALID))
 		return;
 
 	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
-	vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
-	type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
-
 	switch (type) {
 	case SVM_EXITINTINFO_TYPE_NMI:
 		svm->vcpu.arch.nmi_injected = true;
@@ -3640,7 +3623,7 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 					      int3_injected);
 			break;
 		}
-		if (exitintinfo & SVM_EXITINTINFO_VALID_ERR) {
+		if (event & SVM_EXITINTINFO_VALID_ERR) {
 			u32 err = svm->vmcb->control.exit_int_info_err;
 			kvm_requeue_exception_e(&svm->vcpu, vector, err);
 
@@ -3653,6 +3636,31 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 	default:
 		break;
 	}
+
+}
+
+static void svm_complete_interrupts(struct vcpu_svm *svm)
+{
+
+	/*
+	 * If we've made progress since setting HF_IRET_MASK, we've
+	 * executed an IRET and can allow NMI injection.
+	 */
+	if ((svm->vcpu.arch.hflags & HF_IRET_MASK) &&
+	    (sev_es_guest(svm->vcpu.kvm) ||
+	     kvm_rip_read(&svm->vcpu) != svm->nmi_iret_rip)) {
+		svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK);
+		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+	}
+
+	svm->vcpu.arch.nmi_injected = false;
+	kvm_clear_exception_queue(&svm->vcpu);
+	kvm_clear_interrupt_queue(&svm->vcpu);
+
+	svm_process_injected_event(svm,
+				   svm->vmcb->control.exit_int_info,
+				   svm->vmcb->control.exit_int_info_err);
+
 }
 
 static void svm_cancel_injection(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5431e6335e2e8..b5587650181f2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -420,6 +420,10 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void svm_flush_tlb(struct kvm_vcpu *vcpu);
+
+void svm_process_injected_event(struct vcpu_svm *svm, u32 event,
+				u32 event_err_code);
+
 void disable_nmi_singlestep(struct vcpu_svm *svm);
 bool svm_smi_blocked(struct kvm_vcpu *vcpu);
 bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
-- 
2.26.2



* [PATCH 2/6] KVM: nSVM: fix for disappearing L1->L2 event injection on L1 migration
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 1/6] KVM: SVM: create svm_process_injected_event Maxim Levitsky
@ 2021-01-06 10:49 ` Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 3/6] KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES on nested vmexit Maxim Levitsky
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:49 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

If migration happens while an L2 entry with an injected event is pending,
we weren't including the event in the migration state, so it would be
lost, leading to an L2 hang.

Fix this by queueing the injected event in a similar manner to how we queue
interrupted injections.

This can be reproduced by running an I/O intensive task in L2
and repeatedly migrating L1.
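
A minimal sketch of what the queueing amounts to; the INTR case is assumed to
mirror the pre-existing svm_complete_interrupts() code, and error codes plus
the INT3 special case are omitted:

	switch (event & SVM_EXITINTINFO_TYPE_MASK) {
	case SVM_EXITINTINFO_TYPE_NMI:
		svm->vcpu.arch.nmi_injected = true;
		break;
	case SVM_EXITINTINFO_TYPE_EXEPT:
		kvm_requeue_exception(&svm->vcpu,
				      event & SVM_EXITINTINFO_VEC_MASK);
		break;
	case SVM_EXITINTINFO_TYPE_INTR:
		/* assumed from the old svm_complete_interrupts() code */
		kvm_queue_interrupt(&svm->vcpu,
				    event & SVM_EXITINTINFO_VEC_MASK, false);
		break;
	}
	/*
	 * The event now sits in the generic vcpu event queues, is carried
	 * across migration by KVM_GET/SET_VCPU_EVENTS, and is written back
	 * into event_inj by the regular injection path on the next VM entry.
	 */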

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b0b667456b2e7..18b71e73a9935 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -416,8 +416,11 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
 	svm->vmcb->control.virt_ext            = svm->nested.ctl.virt_ext;
 	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
 	svm->vmcb->control.int_state           = svm->nested.ctl.int_state;
-	svm->vmcb->control.event_inj           = svm->nested.ctl.event_inj;
-	svm->vmcb->control.event_inj_err       = svm->nested.ctl.event_inj_err;
+
+	svm_process_injected_event(svm, svm->nested.ctl.event_inj,
+				   svm->nested.ctl.event_inj_err);
+
+	WARN_ON_ONCE(svm->vmcb->control.event_inj);
 
 	svm->vmcb->control.pause_filter_count  = svm->nested.ctl.pause_filter_count;
 	svm->vmcb->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh;
-- 
2.26.2



* [PATCH 3/6] KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES on nested vmexit
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 1/6] KVM: SVM: create svm_process_injected_event Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 2/6] KVM: nSVM: fix for disappearing L1->L2 event injection on L1 migration Maxim Levitsky
@ 2021-01-06 10:49 ` Maxim Levitsky
  2021-01-06 10:49 ` [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration Maxim Levitsky
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:49 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky, stable

It is possible to exit the nested guest mode that was entered by
svm_set_nested_state, prior to the first VM entry into it (e.g. due to a
pending event), if the nested run was not pending during the migration.

In this case we must not switch to the nested MSR permission bitmap.
Also add a warning to catch similar cases in the future.

CC: stable@vger.kernel.org
Fixes: a7d5c7ce41ac1 ("KVM: nSVM: delay MSR permission processing to first nested VM run")

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 18b71e73a9935..6208d3a5a3fdb 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -199,6 +199,10 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
 static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (WARN_ON_ONCE(!is_guest_mode(&svm->vcpu)))
+		return false;
+
 	if (!nested_svm_vmrun_msrpm(svm)) {
 		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		vcpu->run->internal.suberror =
@@ -598,6 +602,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	svm->nested.vmcb12_gpa = 0;
 	WARN_ON_ONCE(svm->nested.nested_run_pending);
 
+	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu);
+
 	/* in case we halted in L2 */
 	svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
 
-- 
2.26.2



* [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
                   ` (2 preceding siblings ...)
  2021-01-06 10:49 ` [PATCH 3/6] KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES on nested vmexit Maxim Levitsky
@ 2021-01-06 10:49 ` Maxim Levitsky
  2021-01-06 17:27   ` Sean Christopherson
  2021-01-06 10:50 ` [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE Maxim Levitsky
  2021-01-06 10:50 ` [PATCH 6/6] KVM: nSVM: mark vmcb as dirty when forcingly leaving the guest mode Maxim Levitsky
  5 siblings, 1 reply; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:49 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

The code to store it on the migration exists, but no code was restoring it.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 6208d3a5a3fdb..c1a3d0e996add 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1203,6 +1203,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	 * in the registers, the save area of the nested state instead
 	 * contains saved L1 state.
 	 */
+
+	if (kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING)
+		svm->nested.nested_run_pending = true;
+
 	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
 	hsave->save = *save;
 
-- 
2.26.2



* [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
                   ` (3 preceding siblings ...)
  2021-01-06 10:49 ` [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration Maxim Levitsky
@ 2021-01-06 10:50 ` Maxim Levitsky
  2021-01-06 17:39   ` Sean Christopherson
  2021-01-06 10:50 ` [PATCH 6/6] KVM: nSVM: mark vmcb as dirty when forcingly leaving the guest mode Maxim Levitsky
  5 siblings, 1 reply; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:50 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

This should prevent bad things from happening if the user calls the
KVM_SET_NESTED_STATE twice.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c1a3d0e996add..3aa18016832d0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1154,8 +1154,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	if (is_smm(vcpu) && (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
 		return -EINVAL;
 
+	svm_leave_nested(svm);
+
 	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
-		svm_leave_nested(svm);
 		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
 		return 0;
 	}
-- 
2.26.2



* [PATCH 6/6] KVM: nSVM: mark vmcb as dirty when forcingly leaving the guest mode
  2021-01-06 10:49 [PATCH 0/6] KVM: nSVM: few random fixes Maxim Levitsky
                   ` (4 preceding siblings ...)
  2021-01-06 10:50 ` [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE Maxim Levitsky
@ 2021-01-06 10:50 ` Maxim Levitsky
  5 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 10:50 UTC (permalink / raw)
  To: kvm
  Cc: Ingo Molnar, Thomas Gleixner, Paolo Bonzini, Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov,
	Maxim Levitsky

We overwrite most of the vmcb fields while doing so, so we must
mark the vmcb as dirty.
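
For reference, a rough sketch of what vmcb_mark_all_dirty() amounts to (the
clean bits tell the CPU which cached vmcb state is still valid):

	static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
	{
		/* clear all clean bits: reload everything on the next VMRUN */
		vmcb->control.clean = 0;
	}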

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3aa18016832d0..de3dbb5407206 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -763,6 +763,7 @@ void svm_leave_nested(struct vcpu_svm *svm)
 		leave_guest_mode(&svm->vcpu);
 		copy_vmcb_control_area(&vmcb->control, &hsave->control);
 		nested_svm_uninit_mmu_context(&svm->vcpu);
+		vmcb_mark_all_dirty(svm->vmcb);
 	}
 
 	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu);
-- 
2.26.2



* Re: [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration
  2021-01-06 10:49 ` [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration Maxim Levitsky
@ 2021-01-06 17:27   ` Sean Christopherson
  2021-01-06 23:54     ` Maxim Levitsky
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2021-01-06 17:27 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Ingo Molnar, Thomas Gleixner, Paolo Bonzini,
	Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov

On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> The code to store it on the migration exists, but no code was restoring it.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/nested.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 6208d3a5a3fdb..c1a3d0e996add 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1203,6 +1203,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  	 * in the registers, the save area of the nested state instead
>  	 * contains saved L1 state.
>  	 */
> +
> +	if (kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING)
> +		svm->nested.nested_run_pending = true;

This should be:

	svm->nested.nested_run_pending =
		!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);

> +
>  	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
>  	hsave->save = *save;
>  
> -- 
> 2.26.2
> 


* Re: [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE
  2021-01-06 10:50 ` [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE Maxim Levitsky
@ 2021-01-06 17:39   ` Sean Christopherson
  2021-01-06 23:55     ` Maxim Levitsky
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2021-01-06 17:39 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Ingo Molnar, Thomas Gleixner, Paolo Bonzini,
	Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov

On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> This should prevent bad things from happening if the user calls the
> KVM_SET_NESTED_STATE twice.

This doesn't exactly inspire confidence, nor does it provide much help to
readers that don't already know why KVM should "leave nested" before processing
the rest of kvm_state.

> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/nested.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index c1a3d0e996add..3aa18016832d0 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1154,8 +1154,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  	if (is_smm(vcpu) && (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
>  		return -EINVAL;
>  
> +	svm_leave_nested(svm);

nVMX sets a really bad example in that it does vmx_leave_nested(), and many
other things, long before it has vetted the incoming state.  That's not the end
of the world as the caller is likely going to exit if this ioctl() fails, but it
would be nice to avoid such behavior with nSVM, especially since it appears to
be trivially easy to do svm_leave_nested() iff the ioctl() will succeed.

> +
>  	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
> -		svm_leave_nested(svm);
>  		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
>  		return 0;
>  	}
> -- 
> 2.26.2
> 


* Re: [PATCH 4/6] KVM: nSVM: correctly restore nested_run_pending on migration
  2021-01-06 17:27   ` Sean Christopherson
@ 2021-01-06 23:54     ` Maxim Levitsky
  0 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 23:54 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Ingo Molnar, Thomas Gleixner, Paolo Bonzini,
	Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov

On Wed, 2021-01-06 at 09:27 -0800, Sean Christopherson wrote:
> On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> > The code to store it on the migration exists, but no code was restoring it.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/nested.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index 6208d3a5a3fdb..c1a3d0e996add 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -1203,6 +1203,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
> >  	 * in the registers, the save area of the nested state instead
> >  	 * contains saved L1 state.
> >  	 */
> > +
> > +	if (kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING)
> > +		svm->nested.nested_run_pending = true;
> 
> This should be:

Yes, if someone already set the nested state before, but
I also sent a patch that forces a nested mode exit in this
case.

Still, I 100% agree that this would be better.
Thanks for the review,

Best regards,
	Maxim Levitsky

> 
> 	svm->nested.nested_run_pending =
> 		!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);
> 
> > +
> >  	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
> >  	hsave->save = *save;
> >  
> > -- 
> > 2.26.2
> > 




* Re: [PATCH 5/6] KVM: nSVM: always leave the nested state first on KVM_SET_NESTED_STATE
  2021-01-06 17:39   ` Sean Christopherson
@ 2021-01-06 23:55     ` Maxim Levitsky
  0 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-01-06 23:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Ingo Molnar, Thomas Gleixner, Paolo Bonzini,
	Sean Christopherson,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Jim Mattson, Joerg Roedel,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, H. Peter Anvin, Vitaly Kuznetsov, Borislav Petkov

On Wed, 2021-01-06 at 09:39 -0800, Sean Christopherson wrote:
> On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> > This should prevent bad things from happening if the user calls the
> > KVM_SET_NESTED_STATE twice.
> 
> This doesn't exactly inspire confidence, nor does it provide much help to
> readers that don't already know why KVM should "leave nested" before processing
> the rest of kvm_state.
> 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/nested.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index c1a3d0e996add..3aa18016832d0 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -1154,8 +1154,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
> >  	if (is_smm(vcpu) && (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
> >  		return -EINVAL;
> >  
> > +	svm_leave_nested(svm);
> 
> nVMX sets a really bad example in that it does vmx_leave_nested(), and many
> other things, long before it has vetted the incoming state.  That's not the end
> of the world as the caller is likely going to exit if this ioctl() fails, but it
> would be nice to avoid such behavior with nSVM, especially since it appears to
> be trivially easy to do svm_leave_nested() iff the ioctl() will succeed.

I agree with you. So if I understand correctly, I should move the unconditional
svm_leave_nested(svm) call to after all the checks are done?
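
(i.e. roughly the following; just an untested sketch:)

	if (is_smm(vcpu) && (kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
		return -EINVAL;

	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
		svm_leave_nested(svm);
		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
		return 0;
	}

	/* ... validate the rest of the incoming state ... */

	svm_leave_nested(svm);	/* nothing can fail past this point */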

Best regards,
	Maxim Levitsky

> 
> > +
> >  	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE)) {
> > -		svm_leave_nested(svm);
> >  		svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
> >  		return 0;
> >  	}
> > -- 
> > 2.26.2
> > 



