From mboxrd@z Thu Jan  1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To: <1502544906-1108-2-git-send-email-yu.c.zhang@linux.intel.com>
References: <1502544906-1108-1-git-send-email-yu.c.zhang@linux.intel.com> <1502544906-1108-2-git-send-email-yu.c.zhang@linux.intel.com>
From: Jim Mattson
Date: Mon, 14 Aug 2017 09:13:11 -0700
Message-ID:
Subject: Re: [PATCH v1 1/4] KVM: MMU: check guest CR3 reserved bits based on its physical address width.
To: Yu Zhang
Cc: kvm list, LKML, Paolo Bonzini, Radim Krčmář, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", xiaoguangrong@tencent.com, Joerg Roedel
Content-Type: text/plain; charset="UTF-8"
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Aug 12, 2017 at 6:35 AM, Yu Zhang wrote:
> Currently, KVM uses CR3_L_MODE_RESERVED_BITS to check the
> reserved bits in CR3. Yet the length of reserved bits in
> guest CR3 should be based on the physical address width
> exposed to the VM. This patch changes CR3 check logic to
> calculate the reserved bits at runtime.
>
> Signed-off-by: Yu Zhang
> ---
>  arch/x86/include/asm/kvm_host.h |  1 -
>  arch/x86/kvm/emulate.c          | 12 ++++++++++--
>  arch/x86/kvm/x86.c              |  8 ++++----
>  3 files changed, 14 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9e4862e..018300e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -79,7 +79,6 @@
>  	| X86_CR0_ET | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM \
>  	| X86_CR0_NW | X86_CR0_CD | X86_CR0_PG))
>
> -#define CR3_L_MODE_RESERVED_BITS 0xFFFFFF0000000000ULL
>  #define CR3_PCID_INVD		 BIT_64(63)
>  #define CR4_RESERVED_BITS \
>  	(~(unsigned long)(X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD | X86_CR4_DE\
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index fb00559..a98b88a 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -4097,8 +4097,16 @@ static int check_cr_write(struct x86_emulate_ctxt *ctxt)
>  		u64 rsvd = 0;
>
>  		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
> -		if (efer & EFER_LMA)
> -			rsvd = CR3_L_MODE_RESERVED_BITS & ~CR3_PCID_INVD;
> +		if (efer & EFER_LMA) {
> +			u64 maxphyaddr;
> +			u32 eax = 0x80000008;
> +
> +			ctxt->ops->get_cpuid(ctxt, &eax, NULL, NULL, NULL);
> +			maxphyaddr = eax * 0xff;

What if leaf 0x80000008 is not defined?
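For reference, CPUID leaf 0x80000008 reports the physical address width in EAX bits 7:0, so the `eax * 0xff` above also looks like a typo for `eax & 0xff`. A minimal sketch of the presumably intended extraction, with a hypothetical fallback to the architectural 36-bit minimum when the leaf is absent (the helper name and the explicit max-leaf parameter are illustrative, not part of the patch):

```c
#include <stdint.h>

/* Hypothetical helper (not in the patch): extract MAXPHYADDR from CPUID
 * leaf 0x80000008.  EAX[7:0] of that leaf is the physical address width.
 * If the leaf is not implemented (max extended leaf below 0x80000008),
 * fall back to 36 bits. */
static uint64_t maxphyaddr_from_cpuid(uint32_t max_ext_leaf,
				      uint32_t leaf_80000008_eax)
{
	if (max_ext_leaf < 0x80000008u)
		return 36;			/* leaf absent: assume 36 bits */
	return leaf_80000008_eax & 0xff;	/* EAX[7:0], not "* 0xff" */
}
```

With EAX = 0x00003028 from the leaf, this yields 0x28 (a 40-bit physical address width); without the leaf, it yields 36.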
> +
> +			rsvd = (~((1UL << maxphyaddr) - 1)) &
> +				~CR3_PCID_INVD;
> +		}
>
>  		if (new_val & rsvd)
>  			return emulate_gp(ctxt, 0);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e40a779..d9100c4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -813,10 +813,10 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  		return 0;
>  	}
>
> -	if (is_long_mode(vcpu)) {
> -		if (cr3 & CR3_L_MODE_RESERVED_BITS)
> -			return 1;
> -	} else if (is_pae(vcpu) && is_paging(vcpu) &&
> +	if (is_long_mode(vcpu) &&
> +	    (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63)))
> +		return 1;
> +	else if (is_pae(vcpu) && is_paging(vcpu) &&
>  		   !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
>  		return 1;
>
> --
> 2.5.0
>
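The rsvd_bits() helper used in the x86.c hunk builds a mask with bits s..e (inclusive) set; note that with a 40-bit MAXPHYADDR it reproduces exactly the removed CR3_L_MODE_RESERVED_BITS constant, which hard-coded a 40-bit physical address width. A self-contained sketch of that helper:

```c
#include <stdint.h>

/* Sketch of KVM's rsvd_bits() helper as used in the x86.c hunk above:
 * returns a 64-bit mask with bits s..e (inclusive) set.  For a guest
 * with MAXPHYADDR = m, the CR3 reserved bits are m..63. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}
```

rsvd_bits(40, 63) evaluates to 0xFFFFFF0000000000ULL, i.e. the old fixed constant, while rsvd_bits(36, 63) widens the reserved region for a 36-bit guest.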