linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] KVM: few more SMM fixes
@ 2021-08-23 11:46 Maxim Levitsky
  2021-08-23 11:46 ` [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit Maxim Levitsky
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-08-23 11:46 UTC (permalink / raw)
  To: kvm
  Cc: Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, Sean Christopherson,
	H. Peter Anvin, Maxim Levitsky, Ingo Molnar, Vitaly Kuznetsov

These are a few SMM fixes I was working on last week.

* The first patch fixes a minor issue that remained after
  commit 37be407b2ce8 ("KVM: nSVM: Fix L1 state corruption upon return from SMM")

  While returns to guest mode from SMM now work, thanks to the state that is
  eventually restored from the HSAVE area, the nested guest entry itself still
  runs before that restore and therefore sees incorrect L1 state.

  This for example breaks return from SMM when the guest is 32 bit: the PDPTRs
  are loaded using an MMU state that is incorrect because it was set up from
  the not-yet-restored L1 HSAVE state.

* The 2nd patch fixes a theoretical issue, pointed out by Sean Christopherson,
  that I introduced with my SREGS2 patchset.

  The issue is that the KVM_REQ_GET_NESTED_STATE_PAGES request is not only
  used to complete the load of nested state, but also to complete the exit
  from SMM to guest mode, while my compatibility hack around
  'pdptrs_from_userspace' assumed that the former was its only use.

  While it would be safe to just reset 'pdptrs_from_userspace' on each VM
  entry, I don't want to slow down the common code path for this very rare
  hack. Instead I explicitly zero this variable when an SMM exit to guest
  mode is done, because in this case the PDPTRs always need to be reloaded
  from memory.

  Note that this is a theoretical issue only: after the 'vendor' return from
  SMM code (aka .leave_smm) is done, even when it returned to guest mode and
  loaded some of the L2 CPU state, we still load all of the L2 CPU state
  captured in SMRAM, which includes CR3, at which point the guest PDPTRs are
  re-loaded anyway.

  Also note that across SMI entries CR3 seems not to be updated, and Intel's
  SDM notes that its saved value in SMRAM isn't writable, thus it is possible
  that if the SMM handler didn't change CR3, the PDPTRs would not be touched.

  I guess that means that an SMI handler can in theory preserve the PDPTRs by
  never touching CR3, but since we recently removed the code that skipped the
  PDPTR update when CR3 didn't change, I guess that won't work.

  Anyway, I don't think any OS bothers to keep the PDPTRs out of sync with
  whatever page CR3 points at, so I didn't bother to test what real hardware
  does in this case (see the PDPTR reload sketch after this list).

* The 3rd patch makes SVM SMM exit a bit more similar to how VMX does it,
  by also raising the KVM_REQ_GET_NESTED_STATE_PAGES request.

  I do have doubts about why we need to do this on VMX though. The initial
  justification for this comes from

  7f7f1ba33cf2 ("KVM: x86: do not load vmcs12 pages while still in SMM")

  With all the MMU changes, I am not sure that we can still have a case of a
  not-up-to-date MMU when we enter the nested guest from SMM.
  On SVM it does seem to work anyway without this.
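
  As an aside, here is a simplified, self-contained sketch of the PAE PDPTR
  reload semantics discussed above (my own illustration with made-up names,
  not actual KVM or hardware code): a CR3 write reads the four PDPTEs from
  the table that CR3 points at and caches them, so later guest memory changes
  do not affect the cached copies.

	#include <stdint.h>
	#include <string.h>

	struct pae_state {
		uint64_t cr3;
		uint64_t pdptrs[4];	/* cached by the CPU, not re-read */
	};

	/* 'guest_ram' stands in for guest physical memory. */
	static void write_cr3(struct pae_state *s, uint64_t new_cr3,
			      const uint8_t *guest_ram)
	{
		s->cr3 = new_cr3;

		/* Bits 31:5 of CR3 locate the 32-byte-aligned PDPT. */
		memcpy(s->pdptrs, guest_ram + (new_cr3 & ~0x1fULL),
		       sizeof(s->pdptrs));
	}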

I am still tracking another SMM issue, which I debugged a bit today but
still have no lead on what is going on:

When a Hyper-V guest is running nested and uses SMM-enabled OVMF, it crashes
and reboots during the boot process.

Best regards,
	Maxim Levitsky

Maxim Levitsky (3):
  KVM: nSVM: restore the L1 host state prior to resuming a nested guest
    on SMM exit
  KVM: x86: force PDPTRs reload on SMM exit
  KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode

 arch/x86/kvm/svm/nested.c |  9 ++++++---
 arch/x86/kvm/svm/svm.c    | 27 ++++++++++++++++++---------
 arch/x86/kvm/svm/svm.h    |  3 ++-
 arch/x86/kvm/vmx/vmx.c    |  7 +++++++
 4 files changed, 33 insertions(+), 13 deletions(-)

-- 
2.26.3




* [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit
  2021-08-23 11:46 [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
@ 2021-08-23 11:46 ` Maxim Levitsky
  2021-09-08 22:04   ` Sean Christopherson
  2021-08-23 11:46 ` [PATCH v2 2/3] KVM: x86: force PDPTRs reload " Maxim Levitsky
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Maxim Levitsky @ 2021-08-23 11:46 UTC (permalink / raw)
  To: kvm
  Cc: Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, Sean Christopherson,
	H. Peter Anvin, Maxim Levitsky, Ingo Molnar, Vitaly Kuznetsov

If the guest is entered prior to restoring the host save area,
the guest entry code might see incorrect L1 state (e.g. paging state).

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1a70e11f0487..ea7a4dacd42f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4347,27 +4347,30 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
 				return 1;
 
-			if (svm_allocate_nested(svm))
+			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
+					 &map_save) == -EINVAL)
 				return 1;
 
-			vmcb12 = map.hva;
-
-			nested_load_control_from_vmcb12(svm, &vmcb12->control);
-
-			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
-			kvm_vcpu_unmap(vcpu, &map, true);
+			if (svm_allocate_nested(svm))
+				return 1;
 
 			/*
 			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
 			 * used during SMM (see svm_enter_smm())
 			 */
-			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
-					 &map_save) == -EINVAL)
-				return 1;
 
 			svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
 					     map_save.hva + 0x400);
 
+			/*
+			 * Restore L2 state
+			 */
+
+			vmcb12 = map.hva;
+			nested_load_control_from_vmcb12(svm, &vmcb12->control);
+			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
+
+			kvm_vcpu_unmap(vcpu, &map, true);
 			kvm_vcpu_unmap(vcpu, &map_save, true);
 		}
 	}
-- 
2.26.3



* [PATCH v2 2/3] KVM: x86: force PDPTRs reload on SMM exit
  2021-08-23 11:46 [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
  2021-08-23 11:46 ` [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit Maxim Levitsky
@ 2021-08-23 11:46 ` Maxim Levitsky
  2021-09-09  0:55   ` Sean Christopherson
  2021-08-23 11:46 ` [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode Maxim Levitsky
  2021-08-23 13:01 ` [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
  3 siblings, 1 reply; 11+ messages in thread
From: Maxim Levitsky @ 2021-08-23 11:46 UTC (permalink / raw)
  To: kvm
  Cc: Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, Sean Christopherson,
	H. Peter Anvin, Maxim Levitsky, Ingo Molnar, Vitaly Kuznetsov

KVM_REQ_GET_NESTED_STATE_PAGES is also used with VM entries that happen
on exit from SMM mode, and in this case the PDPTRs must always be reloaded.

Thanks to Sean Christopherson for pointing this out.

Fixes: 0f85722341b0 ("KVM: nVMX: delay loading of PDPTRs to KVM_REQ_GET_NESTED_STATE_PAGES")

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fada1055f325..4194fbf5e5d6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7504,6 +7504,13 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	}
 
 	if (vmx->nested.smm.guest_mode) {
+
+		/* Exit from the SMM to the non root mode also uses
+		 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
+		 * but in this case the pdptrs must be always reloaded
+		 */
+		vcpu->arch.pdptrs_from_userspace = false;
+
 		ret = nested_vmx_enter_non_root_mode(vcpu, false);
 		if (ret)
 			return ret;
-- 
2.26.3



* [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode
  2021-08-23 11:46 [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
  2021-08-23 11:46 ` [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit Maxim Levitsky
  2021-08-23 11:46 ` [PATCH v2 2/3] KVM: x86: force PDPTRs reload " Maxim Levitsky
@ 2021-08-23 11:46 ` Maxim Levitsky
  2021-09-09  0:59   ` Sean Christopherson
  2021-08-23 13:01 ` [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
  3 siblings, 1 reply; 11+ messages in thread
From: Maxim Levitsky @ 2021-08-23 11:46 UTC (permalink / raw)
  To: kvm
  Cc: Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, Sean Christopherson,
	H. Peter Anvin, Maxim Levitsky, Ingo Molnar, Vitaly Kuznetsov

This allows nested SVM code to be more similar to nested VMX code.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 9 ++++++---
 arch/x86/kvm/svm/svm.c    | 8 +++++++-
 arch/x86/kvm/svm/svm.h    | 3 ++-
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5e13357da21e..678fd21f6077 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -572,7 +572,7 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
 }
 
 int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
-			 struct vmcb *vmcb12)
+			 struct vmcb *vmcb12, bool from_entry)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int ret;
@@ -602,13 +602,16 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	nested_vmcb02_prepare_save(svm, vmcb12);
 
 	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
-				  nested_npt_enabled(svm), true);
+				  nested_npt_enabled(svm), from_entry);
 	if (ret)
 		return ret;
 
 	if (!npt_enabled)
 		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
 
+	if (!from_entry)
+		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+
 	svm_set_gif(svm, true);
 
 	return 0;
@@ -674,7 +677,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 
 	svm->nested.nested_run_pending = 1;
 
-	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
+	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;
 
 	if (nested_svm_vmrun_msrpm(svm))
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ea7a4dacd42f..76ee15af8c48 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4354,6 +4354,12 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 			if (svm_allocate_nested(svm))
 				return 1;
 
+			/* Exit from the SMM to the non root mode also uses
+			 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
+			 * but in this case the pdptrs must be always reloaded
+			 */
+			vcpu->arch.pdptrs_from_userspace = false;
+
 			/*
 			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
 			 * used during SMM (see svm_enter_smm())
@@ -4368,7 +4374,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 
 			vmcb12 = map.hva;
 			nested_load_control_from_vmcb12(svm, &vmcb12->control);
-			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
+			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
 
 			kvm_vcpu_unmap(vcpu, &map, true);
 			kvm_vcpu_unmap(vcpu, &map_save, true);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 524d943f3efc..51ffa46ab257 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -459,7 +459,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
 }
 
-int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
+		u64 vmcb_gpa, struct vmcb *vmcb12, bool from_entry);
 void svm_leave_nested(struct vcpu_svm *svm);
 void svm_free_nested(struct vcpu_svm *svm);
 int svm_allocate_nested(struct vcpu_svm *svm);
-- 
2.26.3



* Re: [PATCH v2 0/3] KVM: few more SMM fixes
  2021-08-23 11:46 [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
                   ` (2 preceding siblings ...)
  2021-08-23 11:46 ` [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode Maxim Levitsky
@ 2021-08-23 13:01 ` Maxim Levitsky
  3 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-08-23 13:01 UTC (permalink / raw)
  To: kvm
  Cc: Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, Sean Christopherson,
	H. Peter Anvin, Ingo Molnar, Vitaly Kuznetsov

On Mon, 2021-08-23 at 14:46 +0300, Maxim Levitsky wrote:
> These are a few SMM fixes I was working on last week.
> 
> * The first patch fixes a minor issue that remained after
>   commit 37be407b2ce8 ("KVM: nSVM: Fix L1 state corruption upon return from SMM")
> 
>   While returns to guest mode from SMM now work, thanks to the state that is
>   eventually restored from the HSAVE area, the nested guest entry itself still
>   runs before that restore and therefore sees incorrect L1 state.
> 
>   This for example breaks return from SMM when the guest is 32 bit: the PDPTRs
>   are loaded using an MMU state that is incorrect because it was set up from
>   the not-yet-restored L1 HSAVE state.
> 
> * The 2nd patch fixes a theoretical issue, pointed out by Sean Christopherson,
>   that I introduced with my SREGS2 patchset.
> 
>   The issue is that the KVM_REQ_GET_NESTED_STATE_PAGES request is not only
>   used to complete the load of nested state, but also to complete the exit
>   from SMM to guest mode, while my compatibility hack around
>   'pdptrs_from_userspace' assumed that the former was its only use.
> 
>   While it would be safe to just reset 'pdptrs_from_userspace' on each VM
>   entry, I don't want to slow down the common code path for this very rare
>   hack. Instead I explicitly zero this variable when an SMM exit to guest
>   mode is done, because in this case the PDPTRs always need to be reloaded
>   from memory.
> 
>   Note that this is a theoretical issue only: after the 'vendor' return from
>   SMM code (aka .leave_smm) is done, even when it returned to guest mode and
>   loaded some of the L2 CPU state, we still load all of the L2 CPU state
>   captured in SMRAM, which includes CR3, at which point the guest PDPTRs are
>   re-loaded anyway.
> 
>   Also note that across SMI entries CR3 seems not to be updated, and Intel's
>   SDM notes that its saved value in SMRAM isn't writable, thus it is possible
>   that if the SMM handler didn't change CR3, the PDPTRs would not be touched.
> 
>   I guess that means that an SMI handler can in theory preserve the PDPTRs by
>   never touching CR3, but since we recently removed the code that skipped the
>   PDPTR update when CR3 didn't change, I guess that won't work.
> 
>   Anyway, I don't think any OS bothers to keep the PDPTRs out of sync with
>   whatever page CR3 points at, so I didn't bother to test what real hardware
>   does in this case (see the PDPTR reload sketch after this list).
> 
> * The 3rd patch makes SVM SMM exit a bit more similar to how VMX does it,
>   by also raising the KVM_REQ_GET_NESTED_STATE_PAGES request.
> 
>   I do have doubts about why we need to do this on VMX though. The initial
>   justification for this comes from
> 
>   7f7f1ba33cf2 ("KVM: x86: do not load vmcs12 pages while still in SMM")
> 
>   With all the MMU changes, I am not sure that we can still have a case of a
>   not-up-to-date MMU when we enter the nested guest from SMM.
>   On SVM it does seem to work anyway without this.
> 
> I am still tracking another SMM issue, which I debugged a bit today but
> still have no lead on what is going on:
> 
> When a Hyper-V guest is running nested and uses SMM-enabled OVMF, it crashes
> and reboots during the boot process.
> 
> Best regards,
> 	Maxim Levitsky
> 
> Maxim Levitsky (3):
>   KVM: nSVM: restore the L1 host state prior to resuming a nested guest
>     on SMM exit
>   KVM: x86: force PDPTRs reload on SMM exit
>   KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode
> 
>  arch/x86/kvm/svm/nested.c |  9 ++++++---
>  arch/x86/kvm/svm/svm.c    | 27 ++++++++++++++++++---------
>  arch/x86/kvm/svm/svm.h    |  3 ++-
>  arch/x86/kvm/vmx/vmx.c    |  7 +++++++
>  4 files changed, 33 insertions(+), 13 deletions(-)
> 
> -- 
> 2.26.3
> 
> 

This is not really a v2 - a mistake on my part in git-publish.

Best regards,
	Maxim Levitsky



* Re: [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit
  2021-08-23 11:46 ` [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit Maxim Levitsky
@ 2021-09-08 22:04   ` Sean Christopherson
  2021-09-12 10:33     ` Maxim Levitsky
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2021-09-08 22:04 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> If the guest is entered prior to restoring the host save area,
> the guest entry code might see incorrect L1 state (e.g. paging state).
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1a70e11f0487..ea7a4dacd42f 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4347,27 +4347,30 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
>  				return 1;
>  
> -			if (svm_allocate_nested(svm))
> +			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
> +					 &map_save) == -EINVAL)
>  				return 1;

Returning here will neglect to unmap "map".

>  
> -			vmcb12 = map.hva;
> -
> -			nested_load_control_from_vmcb12(svm, &vmcb12->control);
> -
> -			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> -			kvm_vcpu_unmap(vcpu, &map, true);
> +			if (svm_allocate_nested(svm))
> +				return 1;

Ditto here for both "map" and "map_save", though it looks like there's a
pre-existing bug if svm_allocate_nested() fails.  If you add a prep cleanup patch
to remove the statement nesting (between the bug fix and this patch), it will make
handling this a lot easier, e.g.

static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
{
	struct vcpu_svm *svm = to_svm(vcpu);
	struct kvm_host_map map, map_save;
	u64 saved_efer, vmcb12_gpa;
	struct vmcb *vmcb12;
	int ret;

	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
		return 0;

	/* Non-zero if SMI arrived while vCPU was in guest mode. */
	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
		return 0;

	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
		return 1;

	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
	if (!(saved_efer & EFER_SVME))
		return 1;

	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
		return 1;

	ret = 1;
	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
		goto unmap_map;

	if (svm_allocate_nested(svm))
		goto unmap_save;

	/*
	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
	 * used during SMM (see svm_enter_smm())
	 */

	svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
				map_save.hva + 0x400);

	/*
	 * Restore L2 state
	 */

	vmcb12 = map.hva;
	nested_load_control_from_vmcb12(svm, &vmcb12->control);
	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);

unmap_save:
	kvm_vcpu_unmap(vcpu, &map_save, true);
unmap_map:
	kvm_vcpu_unmap(vcpu, &map, true);
	return ret;
}


* Re: [PATCH v2 2/3] KVM: x86: force PDPTRs reload on SMM exit
  2021-08-23 11:46 ` [PATCH v2 2/3] KVM: x86: force PDPTRs reload " Maxim Levitsky
@ 2021-09-09  0:55   ` Sean Christopherson
  2021-09-12 10:33     ` Maxim Levitsky
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2021-09-09  0:55 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> KVM_REQ_GET_NESTED_STATE_PAGES is also used with VM entries that happen
> on exit from SMM mode, and in this case the PDPTRs must always be reloaded.
> 
> Thanks to Sean Christopherson for pointing this out.
> 
> Fixes: 0f85722341b0 ("KVM: nVMX: delay loading of PDPTRs to KVM_REQ_GET_NESTED_STATE_PAGES")
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index fada1055f325..4194fbf5e5d6 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7504,6 +7504,13 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  	}
>  
>  	if (vmx->nested.smm.guest_mode) {
> +
> +		/* Exit from the SMM to the non root mode also uses

Just "Exit from SMM to non-root mode", i.e. drop the "the".

Multi-line comments should look like:

		/*
		 * Exit from SMM ...

though oddly checkpatch doesn't complain about that.

That said, for the comment, it'd be more helpful to explain why the PDPTRs should
not come from userspace.  Something like:

		/*
		 * Always reload the guest's version of the PDPTRs when exiting
		 * from SMM to non-root.  If KVM_SET_SREGS2 stuffs PDPTRs while
		 * SMM is active, that state is specifically for SMM context.
		 * On RSM, all guest state is pulled from its architectural
		 * location, whatever that may be.
		 */

Though typing that makes me wonder if this is fixing the wrong thing.  It seems
like pdptrs_from_userspace shouldn't be set when SMM is active, though I suppose
there's a potential ordering issue between KVM_SET_SREGS2 and KVM_SET_VCPU_EVENTS.
Bummer.

> +		 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
> +		 * but in this case the pdptrs must be always reloaded
> +		 */
> +		vcpu->arch.pdptrs_from_userspace = false;
> +
>  		ret = nested_vmx_enter_non_root_mode(vcpu, false);
>  		if (ret)
>  			return ret;
> -- 
> 2.26.3
> 


* Re: [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode
  2021-08-23 11:46 ` [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode Maxim Levitsky
@ 2021-09-09  0:59   ` Sean Christopherson
  2021-09-12 10:35     ` Maxim Levitsky
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2021-09-09  0:59 UTC (permalink / raw)
  To: Maxim Levitsky
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> This allows nested SVM code to be more similar to nested VMX code.
> 
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/svm/nested.c | 9 ++++++---
>  arch/x86/kvm/svm/svm.c    | 8 +++++++-
>  arch/x86/kvm/svm/svm.h    | 3 ++-
>  3 files changed, 15 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 5e13357da21e..678fd21f6077 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -572,7 +572,7 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
>  }
>  
>  int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
> -			 struct vmcb *vmcb12)
> +			 struct vmcb *vmcb12, bool from_entry)

from_vmrun would be a better name.  VMX uses the slightly abstract from_vmentry
because of the VMLAUNCH vs. VMRESUME silliness.  If we want to explicitly follow
VMX then from_vmentry would be more appropriate, but I don't see any reason not
to be more precise.

>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  	int ret;
> @@ -602,13 +602,16 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
>  	nested_vmcb02_prepare_save(svm, vmcb12);
>  
>  	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
> -				  nested_npt_enabled(svm), true);
> +				  nested_npt_enabled(svm), from_entry);
>  	if (ret)
>  		return ret;
>  
>  	if (!npt_enabled)
>  		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
>  
> +	if (!from_entry)
> +		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
> +
>  	svm_set_gif(svm, true);
>  
>  	return 0;
> @@ -674,7 +677,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
>  
>  	svm->nested.nested_run_pending = 1;
>  
> -	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
> +	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
>  		goto out_exit_err;
>  
>  	if (nested_svm_vmrun_msrpm(svm))
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index ea7a4dacd42f..76ee15af8c48 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4354,6 +4354,12 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  			if (svm_allocate_nested(svm))
>  				return 1;
>  
> +			/* Exit from the SMM to the non root mode also uses
> +			 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
> +			 * but in this case the pdptrs must be always reloaded
> +			 */
> +			vcpu->arch.pdptrs_from_userspace = false;

Hmm, I think this belongs in the previous patch.  And I would probably go so far
as to say it belongs in emulator_leave_smm(), i.e. pdptrs_from_userspace should
be cleared on RSM regardless of what mode is being resumed.

> +
>  			/*
>  			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
>  			 * used during SMM (see svm_enter_smm())
> @@ -4368,7 +4374,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  
>  			vmcb12 = map.hva;
>  			nested_load_control_from_vmcb12(svm, &vmcb12->control);
> -			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> +			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
>  
>  			kvm_vcpu_unmap(vcpu, &map, true);
>  			kvm_vcpu_unmap(vcpu, &map_save, true);
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 524d943f3efc..51ffa46ab257 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -459,7 +459,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
>  	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
>  }
>  
> -int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
> +int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
> +		u64 vmcb_gpa, struct vmcb *vmcb12, bool from_entry);

Alignment is funky, it can/should match the definition, e.g.

int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
			 struct vmcb *vmcb12, bool from_entry);

>  void svm_leave_nested(struct vcpu_svm *svm);
>  void svm_free_nested(struct vcpu_svm *svm);
>  int svm_allocate_nested(struct vcpu_svm *svm);
> -- 
> 2.26.3
> 


* Re: [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit
  2021-09-08 22:04   ` Sean Christopherson
@ 2021-09-12 10:33     ` Maxim Levitsky
  0 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-09-12 10:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Wed, 2021-09-08 at 22:04 +0000, Sean Christopherson wrote:
> On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> > If the guest is entered prior to restoring the host save area,
> > the guest entry code might see incorrect L1 state (e.g. paging state).
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/svm.c | 23 +++++++++++++----------
> >  1 file changed, 13 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 1a70e11f0487..ea7a4dacd42f 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -4347,27 +4347,30 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >  					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
> >  				return 1;
> >  
> > -			if (svm_allocate_nested(svm))
> > +			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
> > +					 &map_save) == -EINVAL)
> >  				return 1;
> 
> Returning here will neglect to unmap "map".
> 
> >  
> > -			vmcb12 = map.hva;
> > -
> > -			nested_load_control_from_vmcb12(svm, &vmcb12->control);
> > -
> > -			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> > -			kvm_vcpu_unmap(vcpu, &map, true);
> > +			if (svm_allocate_nested(svm))
> > +				return 1;
> 
> Ditto here for both "map" and "map_save", though it looks like there's a
> pre-existing bug if svm_allocate_nested() fails.  If you add a prep cleanup patch
> to remove the statement nesting (between the bug fix and this patch), it will make
> handling this a lot easier, e.g.
> 
> static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> {
> 	struct vcpu_svm *svm = to_svm(vcpu);
> 	struct kvm_host_map map, map_save;
> 	u64 saved_efer, vmcb12_gpa;
> 	struct vmcb *vmcb12;
> 	int ret;
> 
> 	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
> 		return 0;
> 
> 	/* Non-zero if SMI arrived while vCPU was in guest mode. */
> 	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
> 		return 0;
> 
> 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
> 		return 1;
> 
> 	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
> 	if (!(saved_efer & EFER_SVME))
> 		return 1;
> 
> 	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
> 	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
> 		return 1;
> 
> 	ret = 1;
> 	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
> 		goto unmap_map;
> 
> 	if (svm_allocate_nested(svm))
> 		goto unmap_save;
> 
> 	/*
> 	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
> 	 * used during SMM (see svm_enter_smm())
> 	 */
> 
> 	svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
> 				map_save.hva + 0x400);
> 
> 	/*
> 	 * Restore L2 state
> 	 */
> 
> 	vmcb12 = map.hva;
> 	nested_load_control_from_vmcb12(svm, &vmcb12->control);
> 	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> 
> unmap_save:
> 	kvm_vcpu_unmap(vcpu, &map_save, true);
> unmap_map:
> 	kvm_vcpu_unmap(vcpu, &map, true);
> 	return ret;
> }
> 
Will do. I haven't given the error path enough attention.

Thanks!
Best regards,
	Maxim Levitsky



* Re: [PATCH v2 2/3] KVM: x86: force PDPTRs reload on SMM exit
  2021-09-09  0:55   ` Sean Christopherson
@ 2021-09-12 10:33     ` Maxim Levitsky
  0 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-09-12 10:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Thu, 2021-09-09 at 00:55 +0000, Sean Christopherson wrote:
> On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> > KVM_REQ_GET_NESTED_STATE_PAGES is also used with VM entries that happen
> > on exit from SMM mode, and in this case the PDPTRs must always be reloaded.
> > 
> > Thanks to Sean Christopherson for pointing this out.
> > 
> > Fixes: 0f85722341b0 ("KVM: nVMX: delay loading of PDPTRs to KVM_REQ_GET_NESTED_STATE_PAGES")
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index fada1055f325..4194fbf5e5d6 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7504,6 +7504,13 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >  	}
> >  
> >  	if (vmx->nested.smm.guest_mode) {
> > +
> > +		/* Exit from the SMM to the non root mode also uses
> 
> Just "Exit from SMM to non-root mode", i.e. drop the "the".
> 
> Multi-line comments should look like:
> 
> 		/*
> 		 * Exit from SMM ...
> 
> though oddly checkpatch doesn't complain about that.
> 
> That said, for the comment, it'd be more helpful to explain why the PDPTRs should
> not come from userspace.  Something like:
> 
> 		/*
> 		 * Always reload the guest's version of the PDPTRs when exiting
> 		 * from SMM to non-root.  If KVM_SET_SREGS2 stuffs PDPTRs while
> 		 * SMM is active, that state is specifically for SMM context.
> 		 * On RSM, all guest state is pulled from its architectural
> 		 * location, whatever that may be.
> 		 */
> 
> Though typing that makes me wonder if this is fixing the wrong thing.  It seems
> like pdptrs_from_userspace shouldn't be set when SMM is active, though I suppose
> there's a potential ordering issue between KVM_SET_SREGS2 and KVM_SET_VCPU_EVENTS.
> Bummer.
> 
> > +		 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
> > +		 * but in this case the pdptrs must be always reloaded
> > +		 */
> > +		vcpu->arch.pdptrs_from_userspace = false;
> > +
> >  		ret = nested_vmx_enter_non_root_mode(vcpu, false);
> >  		if (ret)
> >  			return ret;
> > -- 
> > 2.26.3
> > 

I went with your suggestion on patch 3, and dropped this patch.

Thanks!
Best regards,
	Maxim Levitsky



* Re: [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode
  2021-09-09  0:59   ` Sean Christopherson
@ 2021-09-12 10:35     ` Maxim Levitsky
  0 siblings, 0 replies; 11+ messages in thread
From: Maxim Levitsky @ 2021-09-12 10:35 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Joerg Roedel, Thomas Gleixner, Jim Mattson,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	Wanpeng Li, Paolo Bonzini, Borislav Petkov, H. Peter Anvin,
	Ingo Molnar, Vitaly Kuznetsov

On Thu, 2021-09-09 at 00:59 +0000, Sean Christopherson wrote:
> On Mon, Aug 23, 2021, Maxim Levitsky wrote:
> > This allows nested SVM code to be more similar to nested VMX code.
> > 
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/svm/nested.c | 9 ++++++---
> >  arch/x86/kvm/svm/svm.c    | 8 +++++++-
> >  arch/x86/kvm/svm/svm.h    | 3 ++-
> >  3 files changed, 15 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index 5e13357da21e..678fd21f6077 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -572,7 +572,7 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
> >  }
> >  
> >  int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
> > -			 struct vmcb *vmcb12)
> > +			 struct vmcb *vmcb12, bool from_entry)
> 
> from_vmrun would be a better name.  VMX uses the slightly abstract from_vmentry
> because of the VMLAUNCH vs. VMRESUME silliness.  If we want to explicitly follow
> VMX then from_vmentry would be more appropriate, but I don't see any reason not
> to be more precise.
OK.

> 
> >  {
> >  	struct vcpu_svm *svm = to_svm(vcpu);
> >  	int ret;
> > @@ -602,13 +602,16 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
> >  	nested_vmcb02_prepare_save(svm, vmcb12);
> >  
> >  	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
> > -				  nested_npt_enabled(svm), true);
> > +				  nested_npt_enabled(svm), from_entry);
> >  	if (ret)
> >  		return ret;
> >  
> >  	if (!npt_enabled)
> >  		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
> >  
> > +	if (!from_entry)
> > +		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
> > +
> >  	svm_set_gif(svm, true);
> >  
> >  	return 0;
> > @@ -674,7 +677,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> >  
> >  	svm->nested.nested_run_pending = 1;
> >  
> > -	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
> > +	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
> >  		goto out_exit_err;
> >  
> >  	if (nested_svm_vmrun_msrpm(svm))
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index ea7a4dacd42f..76ee15af8c48 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -4354,6 +4354,12 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >  			if (svm_allocate_nested(svm))
> >  				return 1;
> >  
> > +			/* Exit from the SMM to the non root mode also uses
> > +			 * the KVM_REQ_GET_NESTED_STATE_PAGES request,
> > +			 * but in this case the pdptrs must be always reloaded
> > +			 */
> > +			vcpu->arch.pdptrs_from_userspace = false;
> 
> Hmm, I think this belongs in the previous patch.  And I would probably go so far
> as to say it belongs in emulator_leave_smm(), i.e. pdptrs_from_userspace should
> be cleared on RSM regardless of what mode is being resumed.

I actually don't think that this belongs in the previous patch, since this
issue didn't exist on SVM, which didn't use KVM_REQ_GET_NESTED_STATE_PAGES.

However I do agree with you that it makes sense to move this hack to the common x86 code.
I have put it in kvm_smm_changed(), and will soon send a new version.
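
A minimal sketch of what I mean (hypothetical placement; the exact
surrounding code may differ in the new version):

	static void kvm_smm_changed(struct kvm_vcpu *vcpu)
	{
		/*
		 * RSM reloads all guest state from its architectural
		 * location, so PDPTRs that userspace stuffed via
		 * KVM_SET_SREGS2 must not survive an exit from SMM.
		 */
		vcpu->arch.pdptrs_from_userspace = false;

		/* ... the existing kvm_smm_changed() body follows ... */
	}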

> 
> > +
> >  			/*
> >  			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
> >  			 * used during SMM (see svm_enter_smm())
> > @@ -4368,7 +4374,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >  
> >  			vmcb12 = map.hva;
> >  			nested_load_control_from_vmcb12(svm, &vmcb12->control);
> > -			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
> > +			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
> >  
> >  			kvm_vcpu_unmap(vcpu, &map, true);
> >  			kvm_vcpu_unmap(vcpu, &map_save, true);
> > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > index 524d943f3efc..51ffa46ab257 100644
> > --- a/arch/x86/kvm/svm/svm.h
> > +++ b/arch/x86/kvm/svm/svm.h
> > @@ -459,7 +459,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
> >  	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
> >  }
> >  
> > -int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
> > +int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
> > +		u64 vmcb_gpa, struct vmcb *vmcb12, bool from_entry);
> 
> Alignment is funky, it can/should match the definition, e.g.
Oops, I forgot to check the prototype - these things you write once and then
forget about, as long as everything compiles :-)

> 
> int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
> 			 struct vmcb *vmcb12, bool from_entry);
> 
> >  void svm_leave_nested(struct vcpu_svm *svm);
> >  void svm_free_nested(struct vcpu_svm *svm);
> >  int svm_allocate_nested(struct vcpu_svm *svm);
> > -- 
> > 2.26.3
> > 

Thanks for the review!

Best regards,
	Maxim Levitsky



end of thread

Thread overview: 11+ messages
2021-08-23 11:46 [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
2021-08-23 11:46 ` [PATCH v2 1/3] KVM: nSVM: restore the L1 host state prior to resuming a nested guest on SMM exit Maxim Levitsky
2021-09-08 22:04   ` Sean Christopherson
2021-09-12 10:33     ` Maxim Levitsky
2021-08-23 11:46 ` [PATCH v2 2/3] KVM: x86: force PDPTRs reload " Maxim Levitsky
2021-09-09  0:55   ` Sean Christopherson
2021-09-12 10:33     ` Maxim Levitsky
2021-08-23 11:46 ` [PATCH v2 3/3] KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode Maxim Levitsky
2021-09-09  0:59   ` Sean Christopherson
2021-09-12 10:35     ` Maxim Levitsky
2021-08-23 13:01 ` [PATCH v2 0/3] KVM: few more SMM fixes Maxim Levitsky
