From: Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v4 4/6] KVM: nVMX: treat CR4.VMXE as reserved in SMM
Date: Tue, 10 Oct 2017 16:31:08 +0200
Message-ID: <7d8ebefd-27e3-a80d-7c05-624f7814f9e5@redhat.com>
References: <20171010121717.17792-1-lprosek@redhat.com>
 <20171010121717.17792-5-lprosek@redhat.com>
In-Reply-To: <20171010121717.17792-5-lprosek@redhat.com>
To: Ladi Prosek <lprosek@redhat.com>, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US

On 10/10/2017 14:17, Ladi Prosek wrote:
> Intel SDM 34.14.3 Protection of CR4.VMXE in SMM:
> 
> "Under the default treatment, CR4.VMXE is treated as a reserved bit while a
> logical processor is in SMM. Any attempt by software running in SMM to set
> this bit causes a general-protection exception."
> 
> em_rsm() may set CR4.VMXE as part of its loading of saved SMM state, so
> the SMM flag is temporarily cleared in the affected function.
> 
> Signed-off-by: Ladi Prosek <lprosek@redhat.com>
> ---
>  arch/x86/kvm/emulate.c | 32 ++++++++++++++++++++++++--------
>  arch/x86/kvm/vmx.c     |  4 ++++
>  2 files changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 1e6a8a824b8b..2d0b9dcfb812 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2406,7 +2406,14 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
>  static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
>  				    u64 cr0, u64 cr4)
>  {
> -	int bad;
> +	int bad, ret = X86EMUL_CONTINUE;
> +
> +	/*
> +	 * Temporarily clear the SMM flag so we don't trip over the
> +	 * no-CR4.VMXE-in-SMM check.
> +	 */
> +	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
> +			      ~X86EMUL_SMM_MASK);

Maybe don't set VMXE here and set it in post_leave_smm? (Rough sketch at
the end of this mail.)

I can just leave out this patch and let you redo this as a follow up.

Paolo

>  	/*
>  	 * First enable PAE, long mode needs it before CR0.PG = 1 is set.
> @@ -2414,20 +2421,29 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
>  	 * if EFER.LMA=0, so set it separately.
>  	 */
>  	bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
> -	if (bad)
> -		return X86EMUL_UNHANDLEABLE;
> +	if (bad) {
> +		ret = X86EMUL_UNHANDLEABLE;
> +		goto out;
> +	}
>  
>  	bad = ctxt->ops->set_cr(ctxt, 0, cr0);
> -	if (bad)
> -		return X86EMUL_UNHANDLEABLE;
> +	if (bad) {
> +		ret = X86EMUL_UNHANDLEABLE;
> +		goto out;
> +	}
>  
>  	if (cr4 & X86_CR4_PCIDE) {
>  		bad = ctxt->ops->set_cr(ctxt, 4, cr4);
> -		if (bad)
> -			return X86EMUL_UNHANDLEABLE;
> +		if (bad) {
> +			ret = X86EMUL_UNHANDLEABLE;
> +			goto out;
> +		}
>  	}
> +out:
> +	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) |
> +			      X86EMUL_SMM_MASK);
>  
> -	return X86EMUL_CONTINUE;
> +	return ret;
>  }
>  
>  static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt, u64 smbase)
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index ace5ca6bc41a..f255038d5a91 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4364,6 +4364,10 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  		 */
>  		if (!nested_vmx_allowed(vcpu))
>  			return 1;
> +
> +		/* cr4.VMXE is a reserved bit in SMM */
> +		if (is_smm(vcpu))
> +			return 1;
>  	}
>  
>  	if (to_vmx(vcpu)->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
> 
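To make the post_leave_smm idea concrete, a rough and untested sketch.
Nothing below exists yet: post_leave_smm would be a new kvm_x86_ops
callback alongside the SMM entry/exit hooks this series adds, and the
nested.smm_vmxe flag is made up for the example (the SMM entry path
would have to stash the bit while masking VMXE out of the saved CR4
image).

/*
 * Hypothetical vendor callback, run by em_rsm() after HF_SMM_MASK has
 * been cleared.  At that point CR4.VMXE is no longer reserved, so it
 * can be restored through the normal kvm_set_cr4() path and emulate.c
 * never has to fiddle with the SMM flag.
 */
static int vmx_post_leave_smm(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	if (vmx->nested.smm_vmxe) {
		vmx->nested.smm_vmxe = false;
		if (kvm_set_cr4(vcpu, kvm_read_cr4(vcpu) | X86_CR4_VMXE))
			return 1;
	}
	return 0;
}

That would keep the VMXE-in-SMM special case entirely inside vmx.c.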