From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751993AbbEFKjw (ORCPT );
	Wed, 6 May 2015 06:39:52 -0400
Received: from mx1.redhat.com ([209.132.183.28]:58860 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750735AbbEFKju (ORCPT );
	Wed, 6 May 2015 06:39:50 -0400
Message-ID: <5549EF71.4080207@redhat.com>
Date: Wed, 06 May 2015 12:39:45 +0200
From: Paolo Bonzini
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0
MIME-Version: 1.0
To: Bandan Das
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, rkrcmar@redhat.com,
	guangrong.xiao@linux.intel.com, Yang Zhang,
	wanpeng.li@linux.intel.com
Subject: Re: [PATCH 09/13] KVM: x86: save/load state on SMM switch
References: <1430393772-27208-1-git-send-email-pbonzini@redhat.com>
	<1430393772-27208-10-git-send-email-pbonzini@redhat.com>
In-Reply-To: 
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 05/05/2015 22:44, Bandan Das wrote:
> Paolo Bonzini writes:
>
>> +static inline bool guest_cpuid_has_longmode(struct kvm_vcpu *vcpu)
>> +{
>> +	struct kvm_cpuid_entry2 *best;
>> +
>> +	best = kvm_find_cpuid_entry(vcpu, 0x80000001, 0);
>> +	return best && (best->edx & bit(X86_FEATURE_LM));
>> +}
>> +
>
> We could combine all guest_cpuid_has* functions into a single
> function guest_cpuid_has_feature(vcpu, feature) and avoid code
> duplication.  Not relevant to this change - just a note (to self).
>
>>  static inline bool guest_cpuid_has_osvw(struct kvm_vcpu *vcpu)
>>  {
>>  	struct kvm_cpuid_entry2 *best;
>> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
>> index f6b641207416..a49606871277 100644
>> --- a/arch/x86/kvm/emulate.c
>> +++ b/arch/x86/kvm/emulate.c
>> @@ -2262,12 +2262,253 @@ static int em_lseg(struct x86_emulate_ctxt *ctxt)
>>  	return rc;
>>  }
>>
>> +static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
>> +{
>> +	u32 eax, ebx, ecx, edx;
>> +
>> +	eax = 0x80000001;
>> +	ecx = 0;
>> +	ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx);
>> +	return edx & bit(X86_FEATURE_LM);
>> +}
>> +
>> +#define get_smstate(type, smbase, offset)				\
>> +	({								\
>> +	 type __val;							\
>> +	 int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val,	\
>> +				     sizeof(__val), NULL);		\
>> +	 if (r != X86EMUL_CONTINUE)					\
>> +		 return X86EMUL_UNHANDLEABLE;				\
>> +	 __val;								\
>> +	})
>> +
>> +static void rsm_set_desc_flags(struct desc_struct *desc, u16 flags)
>> +{
>> +	desc->g    = (flags >> 15) & 1;
>> +	desc->d    = (flags >> 14) & 1;
>> +	desc->l    = (flags >> 13) & 1;
>> +	desc->avl  = (flags >> 12) & 1;
>> +	desc->p    = (flags >> 7) & 1;
>> +	desc->dpl  = (flags >> 5) & 3;
>> +	desc->s    = (flags >> 4) & 1;
>> +	desc->type = flags & 15;
>> +}
>> +
>> +static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
>> +{
>> +	struct desc_struct desc;
>> +	int offset;
>> +	u16 selector;
>> +
>> +	selector = get_smstate(u32, smbase, 0x7fa8 + n * 4);
>
> Probably a good idea to use #defines for all the offsets here
> and elsewhere.

Uff, those would be a lot of offsets.  It's just as easy to get the
#defines right as it is to get them right in get_smstate...  >80
character lines would also mess everything up, I think.

> This return is redundant since rsm_load_seg_32 always returns
> success.  Same for rsm_load_seg_64().

Note that the get_smstate macro can do an early return.
>>  static int em_rsm(struct x86_emulate_ctxt *ctxt)
>>  {
>> +	unsigned long cr0, cr4, efer;
>> +	u64 smbase;
>> +	int ret;
>> +
>> +	printk("rsm\n");
>>  	if ((ctxt->emul_flags & X86EMUL_SMM_MASK) == 0)
>>  		return emulate_ud(ctxt);
>>
>> -	return X86EMUL_UNHANDLEABLE;
>> +	/*
>> +	 * Get back to real mode, to prepare a safe state in which
>> +	 * to load CR0/CR3/CR4/EFER.  Also this will ensure that
>> +	 * addresses passed to read_std/write_std are not virtual.
>> +	 */
>
> I am trying to understand this.  Aren't we already in real mode
> here, since we are in smm, or am I missing something?

SMM starts in big real mode.  The SMI handler can go to protected mode
and even enable paging; as long as it's not 64-bit, it can execute an
RSM.

I should check and disable CR4.PCIDE before CR0.PG.

>> +	cr0 = ctxt->ops->get_cr(ctxt, 0);
>> +	if (cr0 & X86_CR0_PE)
>> +		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
>> +	cr4 = ctxt->ops->get_cr(ctxt, 4);
>> +	if (cr0 & X86_CR4_PAE)

Oops, cr4 here.

>> +	/* revision id */
>> +	put_smstate(u32, buf, 0x7efc, 0x00020064);
>
> Is the revision id (and 0x00020000 for process_smi*_32()) from the
> spec?  I can't seem to find them.

This revision id is in the AMD spec.  You can see that SeaBIOS checks
for it, and the 32-bit one as well (which I cannot find anywhere).
SeaBIOS should also accept 0x30000 and 0x30064.  Sending a patch.

Paolo

> Bandan
> ...
>