* [PATCH v3 0/7] KVM: few more SMM fixes
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

These are a few SMM fixes I was working on last week.

* Patches 1 and 2 fix a minor issue that remained after
  commit 37be407b2ce8 ("KVM: nSVM: Fix L1 state corruption upon return from SMM")

  Returns to guest mode from SMM now work thanks to the state restored from
  the HSAVE area, but guest entry still sees incorrect HSAVE state.

  This breaks, for example, return from SMM when the guest is 32 bit: the
  PDPTRs are loaded through MMU state that is incorrect because it was set
  up from the stale L1 HSAVE state (a toy illustration of the PDPTR fetch
  follows this list).

  V3: updated with review feedback from Sean.

* Patch 3 fixes a theoretical issue that I introduced with my SREGS2 patchset,
  which Sean Christopherson pointed out.

  The issue is that the KVM_REQ_GET_NESTED_STATE_PAGES request is not only
  used to complete the load of nested state, but is also used to complete
  the exit from SMM to guest mode, and my compatibility hack around
  pdptrs_from_userspace assumed that it was not.

  V3: I moved the reset of pdptrs_from_userspace to common x86 code.

* Patch 4 makes the SVM SMM exit a bit more similar to how VMX does it,
  by also raising KVM_REQ_GET_NESTED_STATE_PAGES requests.

  I do have doubts about why we need to do this on VMX, though. The initial
  justification comes from

  7f7f1ba33cf2 ("KVM: x86: do not load vmcs12 pages while still in SMM")

  With all the MMU changes since then, I am not sure we can still hit a case
  where the MMU is not up to date when we enter the nested guest from SMM.
  On SVM it seems to work without this anyway.

* Patch 5 fixes a guest emulation failure when unrestricted_guest=0 and we
  reach handle_exception_nmi_irqoff.
  That function reads stale values from the current VMCS and fails because it
  does not account for the fact that we are emulating invalid guest state,
  and thus no VM exit actually happened.

* Patch 6 fixes a corner case where return from SMM slightly corrupts the
  L2 segment register state when unrestricted_guest=0, due to the real-mode
  segment caching register logic, even though that state is later restored
  correctly from SMRAM.
  Fix this by not failing nested_vmx_enter_non_root_mode and delaying the
  failure to the next nested VM entry.

* Patch 7 fixes another corner case where emulation_required was not updated
  correctly on nested VM exit when restoring the L1 segment registers.
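
As a toy illustration for patches 1,2 (self-contained C, not KVM code; the
function name is mine): in PAE mode the four PDPTEs are fetched from guest
memory at an address derived from CR3, so fetching them through MMU state
built from stale L1 HSAVE data reads the wrong memory.

#include <stdint.h>
#include <stdio.h>

/* PAE: CR3 bits 31:5 point at a 32-byte-aligned table of four 8-byte PDPTEs. */
static uint64_t pae_pdpte_gpa(uint32_t cr3, int i)
{
	return (uint64_t)(cr3 & ~0x1Fu) + 8u * (uint64_t)i;
}

int main(void)
{
	uint32_t cr3 = 0x12345000;	/* example guest CR3 */

	for (int i = 0; i < 4; i++)
		printf("PDPTE%d @ GPA %#llx\n", i,
		       (unsigned long long)pae_pdpte_gpa(cr3, i));
	return 0;
}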

I am still tracking 2 SMM issues:

1. When a Hyper-V guest runs nested and uses SMM-enabled OVMF, it crashes and
   reboots during the boot process.

2. Nested migration on VMX is still broken when L1 floods itself with SMIs.

Best regards,
	Maxim Levitsky

Maxim Levitsky (7):
  KVM: x86: nSVM: refactor svm_leave_smm and smm_enter_smm
  KVM: x86: nSVM: restore the L1 host state prior to resuming nested
    guest on SMM exit
  KVM: x86: reset pdptrs_from_userspace when exiting smm
  KVM: x86: SVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM
    mode
  KVM: x86: VMX: synthesize invalid VM exit when emulating invalid guest
    state
  KVM: x86: nVMX: don't fail nested VM entry on invalid guest state if
    !from_vmentry
  KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit

 arch/x86/kvm/svm/nested.c |   9 ++-
 arch/x86/kvm/svm/svm.c    | 131 ++++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.h    |   3 +-
 arch/x86/kvm/vmx/nested.c |   9 ++-
 arch/x86/kvm/vmx/vmx.c    |  28 ++++++--
 arch/x86/kvm/vmx/vmx.h    |   1 +
 arch/x86/kvm/x86.c        |   7 ++
 7 files changed, 113 insertions(+), 75 deletions(-)

-- 
2.26.3




* [PATCH v3 1/7] KVM: x86: nSVM: refactor svm_leave_smm and smm_enter_smm
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

Use return statements instead of nested ifs, and fix the error
path to free all the maps that were allocated.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 129 +++++++++++++++++++++--------------------
 1 file changed, 65 insertions(+), 64 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1a70e11f0487..a75dafbfa4a6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4284,43 +4284,44 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	struct kvm_host_map map_save;
 	int ret;
 
-	if (is_guest_mode(vcpu)) {
-		/* FED8h - SVM Guest */
-		put_smstate(u64, smstate, 0x7ed8, 1);
-		/* FEE0h - SVM Guest VMCB Physical Address */
-		put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+	if (!is_guest_mode(vcpu))
+		return 0;
 
-		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
-		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-		svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+	/* FED8h - SVM Guest */
+	put_smstate(u64, smstate, 0x7ed8, 1);
+	/* FEE0h - SVM Guest VMCB Physical Address */
+	put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
 
-		ret = nested_svm_vmexit(svm);
-		if (ret)
-			return ret;
+	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
 
-		/*
-		 * KVM uses VMCB01 to store L1 host state while L2 runs but
-		 * VMCB01 is going to be used during SMM and thus the state will
-		 * be lost. Temporary save non-VMLOAD/VMSAVE state to the host save
-		 * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
-		 * format of the area is identical to guest save area offsetted
-		 * by 0x400 (matches the offset of 'struct vmcb_save_area'
-		 * within 'struct vmcb'). Note: HSAVE area may also be used by
-		 * L1 hypervisor to save additional host context (e.g. KVM does
-		 * that, see svm_prepare_guest_switch()) which must be
-		 * preserved.
-		 */
-		if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
-				 &map_save) == -EINVAL)
-			return 1;
+	ret = nested_svm_vmexit(svm);
+	if (ret)
+		return ret;
+
+	/*
+	 * KVM uses VMCB01 to store L1 host state while L2 runs but
+	 * VMCB01 is going to be used during SMM and thus the state will
+	 * be lost. Temporary save non-VMLOAD/VMSAVE state to the host save
+	 * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
+	 * format of the area is identical to guest save area offsetted
+	 * by 0x400 (matches the offset of 'struct vmcb_save_area'
+	 * within 'struct vmcb'). Note: HSAVE area may also be used by
+	 * L1 hypervisor to save additional host context (e.g. KVM does
+	 * that, see svm_prepare_guest_switch()) which must be
+	 * preserved.
+	 */
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
+			 &map_save) == -EINVAL)
+		return 1;
 
-		BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
+	BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
 
-		svm_copy_vmrun_state(map_save.hva + 0x400,
-				     &svm->vmcb01.ptr->save);
+	svm_copy_vmrun_state(map_save.hva + 0x400,
+			     &svm->vmcb01.ptr->save);
 
-		kvm_vcpu_unmap(vcpu, &map_save, true);
-	}
+	kvm_vcpu_unmap(vcpu, &map_save, true);
 	return 0;
 }
 
@@ -4328,50 +4329,50 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map, map_save;
-	int ret = 0;
-
-	if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
-		u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
-		u64 guest = GET_SMSTATE(u64, smstate, 0x7ed8);
-		u64 vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
-		struct vmcb *vmcb12;
+	u64 saved_efer, vmcb12_gpa;
+	struct vmcb *vmcb12;
+	int ret;
 
-		if (guest) {
-			if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
-				return 1;
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
+		return 0;
 
-			if (!(saved_efer & EFER_SVME))
-				return 1;
+	/* Non-zero if SMI arrived while vCPU was in guest mode. */
+	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
+		return 0;
 
-			if (kvm_vcpu_map(vcpu,
-					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
-				return 1;
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+		return 1;
 
-			if (svm_allocate_nested(svm))
-				return 1;
+	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
+	if (!(saved_efer & EFER_SVME))
+		return 1;
 
-			vmcb12 = map.hva;
+	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
+		return 1;
 
-			nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	ret = 1;
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
+		goto unmap_map;
 
-			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
-			kvm_vcpu_unmap(vcpu, &map, true);
+	if (svm_allocate_nested(svm))
+		goto unmap_save;
 
-			/*
-			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
-			 * used during SMM (see svm_enter_smm())
-			 */
-			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
-					 &map_save) == -EINVAL)
-				return 1;
+	vmcb12 = map.hva;
+	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
 
-			svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
-					     map_save.hva + 0x400);
+	/*
+	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
+	 * used during SMM (see svm_enter_smm())
+	 */
 
-			kvm_vcpu_unmap(vcpu, &map_save, true);
-		}
-	}
+	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, map_save.hva + 0x400);
 
+unmap_save:
+	kvm_vcpu_unmap(vcpu, &map_save, true);
+unmap_map:
+	kvm_vcpu_unmap(vcpu, &map, true);
 	return ret;
 }
 
-- 
2.26.3
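
The gating above, condensed into a self-contained toy (plain C, not kernel
code; the function names are mine, while the SMRAM offsets are the ones the
patch uses: 0x7ed0 saved EFER, 0x7ed8 "SVM guest" flag, 0x7ee0 VMCB12 GPA):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define EFER_SVME (1ULL << 12)

static uint64_t get_smstate_u64(const char *smstate, unsigned int off)
{
	uint64_t v;

	memcpy(&v, smstate + off, sizeof(v));
	return v;
}

/* 0 = stay in host mode, 1 = error, 2 = re-enter the nested guest. */
static int leave_smm_gating(const char *smstate, int has_lm, int has_svm)
{
	if (!has_lm)
		return 0;
	/* Non-zero if the SMI arrived while the vCPU was in guest mode. */
	if (!get_smstate_u64(smstate, 0x7ed8))
		return 0;
	if (!has_svm)
		return 1;
	if (!(get_smstate_u64(smstate, 0x7ed0) & EFER_SVME))
		return 1;
	return 2;
}

int main(void)
{
	char smstate[0x8000] = { 0 };
	uint64_t guest = 1, efer = EFER_SVME;

	memcpy(smstate + 0x7ed8, &guest, sizeof(guest));
	memcpy(smstate + 0x7ed0, &efer, sizeof(efer));
	printf("decision: %d\n", leave_smm_gating(smstate, 1, 1));
	return 0;
}

Each early return here is one of the conditions the refactor pulls out of
the nested ifs.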



* [PATCH v3 2/7] KVM: x86: nSVM: restore the L1 host state prior to resuming nested guest on SMM exit
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

Otherwise the guest entry code might see incorrect L1 state (e.g. paging state).

Fixes: 37be407b2ce8 ("KVM: nSVM: Fix L1 state corruption upon return from SMM")

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a75dafbfa4a6..e3b092a3a2e1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4358,10 +4358,6 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	if (svm_allocate_nested(svm))
 		goto unmap_save;
 
-	vmcb12 = map.hva;
-	nested_load_control_from_vmcb12(svm, &vmcb12->control);
-	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
-
 	/*
 	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
 	 * used during SMM (see svm_enter_smm())
@@ -4369,6 +4365,14 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 
 	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, map_save.hva + 0x400);
 
+	/*
+	 * Enter the nested guest now
+	 */
+
+	vmcb12 = map.hva;
+	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
+
 unmap_save:
 	kvm_vcpu_unmap(vcpu, &map_save, true);
 unmap_map:
-- 
2.26.3



* [PATCH v3 3/7] KVM: x86: reset pdptrs_from_userspace when exiting smm
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

When exiting SMM, the PDPTRs are loaded again from guest memory.

This fixes a theoretical bug: exit from SMM can trigger entry to the
nested guest, which reuses some of the migration code, and that code uses
this flag as a workaround for legacy userspace.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/x86.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86539c1686fa..3a61f19455cb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7654,6 +7654,13 @@ static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
 
 		/* Process a latched INIT or SMI, if any.  */
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+		/*
+		 * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
+		 * on SMM exit we still need to reload them from
+		 * guest memory
+		 */
+		vcpu->arch.pdptrs_from_userspace = false;
 	}
 
 	kvm_mmu_reset_context(vcpu);
-- 
2.26.3
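
In toy form (self-contained C, not KVM code; names are mine), the flag's
semantics and the fix: while pdptrs_from_userspace is set, PDPTRs provided
via KVM_SET_SREGS2 are kept instead of being re-read from guest memory, so
any path that must re-read them, such as SMM exit, has to clear the flag
first:

#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
	bool pdptrs_from_userspace;
	int loads_from_guest_memory;
};

static void load_pdptrs(struct toy_vcpu *v)
{
	if (v->pdptrs_from_userspace)
		return;				/* keep userspace-provided PDPTRs */
	v->loads_from_guest_memory++;		/* re-read from guest memory */
}

static void leave_smm(struct toy_vcpu *v)
{
	v->pdptrs_from_userspace = false;	/* the reset added by this patch */
	load_pdptrs(v);
}

int main(void)
{
	struct toy_vcpu v = { .pdptrs_from_userspace = true };

	leave_smm(&v);
	printf("loads from guest memory: %d\n", v.loads_from_guest_memory);
	return 0;
}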



* [PATCH v3 4/7] KVM: x86: SVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

Currently KVM_REQ_GET_NESTED_STATE_PAGES on SVM only reloads the PDPTRs
and the MSR bitmap.  The former is not really needed for SMM, as the SMM
exit code reloads the PDPTRs again from SMRAM's CR3, and the latter
happens to work since the MSR bitmap isn't modified while in SMM.

Still, it is better to be consistent with VMX.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 9 ++++++---
 arch/x86/kvm/svm/svm.c    | 2 +-
 arch/x86/kvm/svm/svm.h    | 3 ++-
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5e13357da21e..f8345701ce0e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -572,7 +572,7 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
 }
 
 int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
-			 struct vmcb *vmcb12)
+			 struct vmcb *vmcb12, bool from_vmrun)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int ret;
@@ -602,13 +602,16 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	nested_vmcb02_prepare_save(svm, vmcb12);
 
 	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
-				  nested_npt_enabled(svm), true);
+				  nested_npt_enabled(svm), from_vmrun);
 	if (ret)
 		return ret;
 
 	if (!npt_enabled)
 		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
 
+	if (!from_vmrun)
+		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+
 	svm_set_gif(svm, true);
 
 	return 0;
@@ -674,7 +677,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 
 	svm->nested.nested_run_pending = 1;
 
-	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
+	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;
 
 	if (nested_svm_vmrun_msrpm(svm))
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e3b092a3a2e1..5a8f70ab13c9 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4371,7 +4371,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 
 	vmcb12 = map.hva;
 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
-	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
+	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
 
 unmap_save:
 	kvm_vcpu_unmap(vcpu, &map_save, true);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 524d943f3efc..128a54b1fbf1 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -459,7 +459,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
 }
 
-int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
+			 u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
 void svm_leave_nested(struct vcpu_svm *svm);
 void svm_free_nested(struct vcpu_svm *svm);
 int svm_allocate_nested(struct vcpu_svm *svm);
-- 
2.26.3
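
The deferred-request pattern being adopted here, as a self-contained toy
(plain C, not KVM's implementation; names are mine): rather than touching
guest memory at RSM time, set a request bit and service it before the next
guest entry:

#include <stdint.h>
#include <stdio.h>

enum { REQ_GET_NESTED_STATE_PAGES = 1u << 0 };

struct toy_vcpu { uint32_t requests; };

static void make_request(struct toy_vcpu *v, uint32_t req)
{
	v->requests |= req;
}

static int check_and_clear_request(struct toy_vcpu *v, uint32_t req)
{
	int was_set = !!(v->requests & req);

	v->requests &= ~req;
	return was_set;
}

int main(void)
{
	struct toy_vcpu v = { 0 };

	make_request(&v, REQ_GET_NESTED_STATE_PAGES);	/* at RSM time */
	/* ...later, before the next guest entry: */
	if (check_and_clear_request(&v, REQ_GET_NESTED_STATE_PAGES))
		printf("reload MSR bitmap / nested-state pages here\n");
	return 0;
}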



* [PATCH v3 5/7] KVM: x86: VMX: synthesize invalid VM exit when emulating invalid guest state
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

While emulating invalid guest state, no actual VM entry happened, so the
VM exit information in the current VMCS is stale.  To avoid consuming it,
synthesize an "invalid guest state" VM exit.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fada1055f325..57609dda1bbc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6618,10 +6618,21 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		     vmx->loaded_vmcs->soft_vnmi_blocked))
 		vmx->loaded_vmcs->entry_time = ktime_get();
 
-	/* Don't enter VMX if guest state is invalid, let the exit handler
-	   start emulation until we arrive back to a valid state */
-	if (vmx->emulation_required)
+	/*
+	 * Don't enter VMX if guest state is invalid, let the exit handler
+	 * start emulation until we arrive back to a valid state.  Synthesize a
+	 * consistency check VM-Exit due to invalid guest state and bail.
+	 */
+	if (unlikely(vmx->emulation_required)) {
+		vmx->fail = 0;
+		vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
+		vmx->exit_reason.failed_vmentry = 1;
+		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1);
+		vmx->exit_qualification = ENTRY_FAIL_DEFAULT;
+		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_2);
+		vmx->exit_intr_info = 0;
 		return EXIT_FASTPATH_NONE;
+	}
 
 	trace_kvm_entry(vcpu);
 
-- 
2.26.3
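
The exit_reason encoding relied on above, as a self-contained toy (the
union name and the reserved field are mine, and bitfield layout is
compiler-dependent; architecturally, the basic exit reason sits in bits
15:0 and the VM-entry-failure flag is bit 31, with reason 33 = VM-entry
failure due to invalid guest state):

#include <stdint.h>
#include <stdio.h>

#define EXIT_REASON_INVALID_STATE 33

union toy_exit_reason {
	struct {
		uint32_t basic          : 16;	/* bits 15:0 */
		uint32_t reserved       : 15;	/* simplified: real layout has more flags */
		uint32_t failed_vmentry : 1;	/* bit 31 */
	};
	uint32_t full;
};

int main(void)
{
	union toy_exit_reason er = { .full = 0 };

	er.basic = EXIT_REASON_INVALID_STATE;
	er.failed_vmentry = 1;
	printf("exit_reason.full = %#010x\n", er.full);	/* 0x80000021 */
	return 0;
}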



* [PATCH v3 6/7] KVM: x86: nVMX: don't fail nested VM entry on invalid guest state if !from_vmentry
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

It is possible that when non-root mode is entered via a special entry path
(!from_vmentry), that is from SMM or from loading the nested state, the L2
state is invalid with respect to the checks required when unrestricted
guest mode is disabled, but later becomes valid (for example when RSM
emulation restores the segment registers from SMRAM).

Thus, delay the check to the actual VM entry, where we will check this and
fail.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 7 ++++++-
 arch/x86/kvm/vmx/vmx.c    | 5 ++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index bc6327950657..1a05ae83dae5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2546,8 +2546,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	 * Guest state is invalid and unrestricted guest is disabled,
 	 * which means L1 attempted VMEntry to L2 with invalid state.
 	 * Fail the VMEntry.
+	 *
+	 * However when force loading the guest state (SMM exit or
+	 * loading nested state after migration, it is possible to
+	 * have invalid guest state now, which will be later fixed by
+	 * restoring L2 register state
 	 */
-	if (CC(!vmx_guest_state_valid(vcpu))) {
+	if (CC(from_vmentry && !vmx_guest_state_valid(vcpu))) {
 		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
 	}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 57609dda1bbc..c3b3c406f4f1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6624,7 +6624,10 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	 * consistency check VM-Exit due to invalid guest state and bail.
 	 */
 	if (unlikely(vmx->emulation_required)) {
-		vmx->fail = 0;
+
+		/* We don't emulate invalid state of a nested guest */
+		vmx->fail = is_guest_mode(vcpu);
+
 		vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
 		vmx->exit_reason.failed_vmentry = 1;
 		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1);
-- 
2.26.3



* [PATCH v3 7/7] KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit
From: Maxim Levitsky @ 2021-09-13 14:09 UTC
  To: kvm
  Cc: Paolo Bonzini, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Sean Christopherson,
	Borislav Petkov, Vitaly Kuznetsov, Maxim Levitsky, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

If L1 had invalid state on VM entry (which can happen on SMM transitions
when we enter from real mode straight into the nested guest), then after
we load 'host' state from VMCS12 the state has to become valid again.

But since we load the segment registers with __vmx_set_segment, we weren't
always updating emulation_required.

Update emulation_required explicitly at the end of load_vmcs12_host_state.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 2 ++
 arch/x86/kvm/vmx/vmx.c    | 8 ++++----
 arch/x86/kvm/vmx/vmx.h    | 1 +
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1a05ae83dae5..f915e1ac589c 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4319,6 +4319,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
 				vmcs12->vm_exit_msr_load_count))
 		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
+
+	to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
 }
 
 static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c3b3c406f4f1..7f0a768050db 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1323,7 +1323,7 @@ static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
 	vmx_prepare_switch_to_host(to_vmx(vcpu));
 }
 
-static bool emulation_required(struct kvm_vcpu *vcpu)
+bool vmx_emulation_required(struct kvm_vcpu *vcpu)
 {
 	return emulate_invalid_guest_state && !vmx_guest_state_valid(vcpu);
 }
@@ -1367,7 +1367,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	vmcs_writel(GUEST_RFLAGS, rflags);
 
 	if ((old_rflags ^ vmx->rflags) & X86_EFLAGS_VM)
-		vmx->emulation_required = emulation_required(vcpu);
+		vmx->emulation_required = vmx_emulation_required(vcpu);
 }
 
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
@@ -3077,7 +3077,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	}
 
 	/* depends on vcpu->arch.cr0 to be set to a new value */
-	vmx->emulation_required = emulation_required(vcpu);
+	vmx->emulation_required = vmx_emulation_required(vcpu);
 }
 
 static int vmx_get_max_tdp_level(void)
@@ -3330,7 +3330,7 @@ static void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int
 {
 	__vmx_set_segment(vcpu, var, seg);
 
-	to_vmx(vcpu)->emulation_required = emulation_required(vcpu);
+	to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
 }
 
 static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 4858c5fd95f2..3a587c51a8d1 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -359,6 +359,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
 			unsigned long fs_base, unsigned long gs_base);
 int vmx_get_cpl(struct kvm_vcpu *vcpu);
+bool vmx_emulation_required(struct kvm_vcpu *vcpu);
 unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
 void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);
-- 
2.26.3



* Re: [PATCH v3 0/7] KVM: few more SMM fixes
From: Paolo Bonzini @ 2021-09-22 14:35 UTC
  To: Maxim Levitsky, kvm
  Cc: linux-kernel, Jim Mattson, Wanpeng Li, Thomas Gleixner,
	H. Peter Anvin, Sean Christopherson, Borislav Petkov,
	Vitaly Kuznetsov, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

On 13/09/21 16:09, Maxim Levitsky wrote:
> These are a few SMM fixes I was working on last week.
> [...]

Queued, thanks.  However, I'm keeping patch 1 for 5.16 only.

Paolo



* Re: [PATCH v3 0/7] KVM: few more SMM fixes
From: Sean Christopherson @ 2021-09-22 14:46 UTC
  To: Paolo Bonzini
  Cc: Maxim Levitsky, kvm, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Borislav Petkov,
	Vitaly Kuznetsov, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

On Wed, Sep 22, 2021, Paolo Bonzini wrote:
> On 13/09/21 16:09, Maxim Levitsky wrote:
> >    KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit

...
 
> Queued, thanks.  However, I'm keeping patch 1 for 5.16 only.

I'm pretty sure the above patch is wrong, emulation_required can simply be
cleared on emulated VM-Exit.


* Re: [PATCH v3 0/7] KVM: few more SMM fixes
From: Paolo Bonzini @ 2021-09-22 15:45 UTC
  To: Sean Christopherson
  Cc: Maxim Levitsky, kvm, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Borislav Petkov,
	Vitaly Kuznetsov, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

On 22/09/21 16:46, Sean Christopherson wrote:
> On Wed, Sep 22, 2021, Paolo Bonzini wrote:
>> On 13/09/21 16:09, Maxim Levitsky wrote:
>>>     KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit
> 
> ...
>   
>> Queued, thanks.  However, I'm keeping patch 1 for 5.16 only.
> 
> I'm pretty sure the above patch is wrong, emulation_required can simply be
> cleared on emulated VM-Exit.

Are you sure?  I think you can at least set the host segment fields to a 
data segment that requires emulation.  For example the DPL of the host 
DS is hardcoded to zero, but the RPL comes from the selector field and 
the DS selector is not validated.  Therefore a subsequent vmentry could 
fail the access rights tests of 26.3.1.2 Checks on Guest Segment Registers:

DS, ES, FS, GS. The DPL cannot be less than the RPL in the selector 
field if (1) the “unrestricted guest” VM-execution control is 0; (2) the 
register is usable; and (3) the Type in the access-rights field is in 
the range 0 – 11 (data segment or non-conforming code segment).
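
In toy form (self-contained C, not KVM code; struct and function names are
mine), the quoted access-rights test:

#include <stdint.h>
#include <stdio.h>

struct toy_seg {
	uint16_t selector;
	uint8_t dpl;
	uint8_t type;	/* access-rights Type field */
	int usable;
};

static int data_seg_check_fails(const struct toy_seg *s, int unrestricted_guest)
{
	uint8_t rpl = s->selector & 3;

	if (unrestricted_guest || !s->usable || s->type > 11)
		return 0;
	return s->dpl < rpl;
}

int main(void)
{
	/*
	 * A data segment with DPL 0 but selector RPL 3 fails the check;
	 * whether such a selector can ever be loaded via the VMCS host-state
	 * area is exactly the point debated below.
	 */
	struct toy_seg ds = { .selector = 0x2b, .dpl = 0, .type = 3, .usable = 1 };

	printf("check fails: %d\n", data_seg_check_fails(&ds, 0));
	return 0;
}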

Paolo



* Re: [PATCH v3 0/7] KVM: few more SMM fixes
From: Sean Christopherson @ 2021-09-22 15:52 UTC
  To: Paolo Bonzini
  Cc: Maxim Levitsky, kvm, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Borislav Petkov,
	Vitaly Kuznetsov, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

On Wed, Sep 22, 2021, Paolo Bonzini wrote:
> On 22/09/21 16:46, Sean Christopherson wrote:
> > On Wed, Sep 22, 2021, Paolo Bonzini wrote:
> > > On 13/09/21 16:09, Maxim Levitsky wrote:
> > > >     KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit
> > 
> > ...
> > > Queued, thanks.  However, I'm keeping patch 1 for 5.16 only.
> > 
> > I'm pretty sure the above patch is wrong, emulation_required can simply be
> > cleared on emulated VM-Exit.
> 
> Are you sure?

Pretty sure, but not 100% sure :-)

> I think you can at least set the host segment fields to a data segment that
> requires emulation.  For example the DPL of the host DS is hardcoded to zero,
> but the RPL comes from the selector field and the DS selector is not
> validated.

HOST_DS_SEL is validated:

  In the selector field for each of CS, SS, DS, ES, FS, GS and TR, the RPL
  (bits 1:0) and the TI flag (bit 2) must be 0.

> Therefore a subsequent vmentry could fail the access rights tests of 26.3.1.2
> Checks on Guest Segment Registers:

Yes, but this path is loading host state on VM-Exit.

> DS, ES, FS, GS. The DPL cannot be less than the RPL in the selector field if
> (1) the “unrestricted guest” VM-execution control is 0; (2) the register is
> usable; and (3) the Type in the access-rights field is in the range 0 – 11
> (data segment or non-conforming code segment).
> 
> Paolo
> 


* Re: [PATCH v3 0/7] KVM: few more SMM fixes
From: Paolo Bonzini @ 2021-09-22 18:17 UTC
  To: Sean Christopherson
  Cc: Maxim Levitsky, kvm, linux-kernel, Jim Mattson, Wanpeng Li,
	Thomas Gleixner, H. Peter Anvin, Borislav Petkov,
	Vitaly Kuznetsov, Ingo Molnar,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Joerg Roedel

On 22/09/21 17:52, Sean Christopherson wrote:
> On Wed, Sep 22, 2021, Paolo Bonzini wrote:
>> On 22/09/21 16:46, Sean Christopherson wrote:
>>> On Wed, Sep 22, 2021, Paolo Bonzini wrote:
>>>> On 13/09/21 16:09, Maxim Levitsky wrote:
>>>>>      KVM: x86: nVMX: re-evaluate emulation_required on nested VM exit
>>>
>>> ...
>>>> Queued, thanks.  However, I'm keeping patch 1 for 5.16 only.
>>>
>>> I'm pretty sure the above patch is wrong, emulation_required can simply be
>>> cleared on emulated VM-Exit.
>>
>> Are you sure?
> 
> Pretty sure, but not 100% sure :-)
> 
>> I think you can at least set the host segment fields to a data segment that
>> requires emulation.  For example the DPL of the host DS is hardcoded to zero,
>> but the RPL comes from the selector field and the DS selector is not
>> validated.
> 
> HOST_DS_SEL is validated:
> 
>    In the selector field for each of CS, SS, DS, ES, FS, GS and TR, the RPL
>    (bits 1:0) and the TI flag (bit 2) must be 0.

Ah, I think that's a bug in the manual.  In "27.5.2 Loading Host Segment 
and Descriptor-Table Registers" the reference to 26.3.1.2 should be 
26.2.3 ("Checks on Host Segment and Descriptor-Table Registers").  That 
one does cover all segment registers.  Hmm, who do we ask now about 
fixing Intel manuals?

So yeah, a WARN_ON_ONCE might be in order.  But I don't feel super safe 
making it false when it is possible to make KVM do something that is at 
least sensible.

Paolo

>> Therefore a subsequent vmentry could fail the access rights tests of 26.3.1.2
>> Checks on Guest Segment Registers:
> 
> Yes, but this path is loading host state on VM-Exit.
> 
>> DS, ES, FS, GS. The DPL cannot be less than the RPL in the selector field if
>> (1) the “unrestricted guest” VM-execution control is 0; (2) the register is
>> usable; and (3) the Type in the access-rights field is in the range 0 – 11
>> (data segment or non-conforming code segment).
>>
>> Paolo
>>
> 


