From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1757574AbdKOAme (ORCPT );
        Tue, 14 Nov 2017 19:42:34 -0500
Received: from mail-ot0-f195.google.com ([74.125.82.195]:43041 "EHLO
        mail-ot0-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1756826AbdKOAmD (ORCPT );
        Tue, 14 Nov 2017 19:42:03 -0500
X-Google-Smtp-Source: AGs4zMYepshUE8pW7c7sTipMXOUwROtwmvMNliEDR8lGiqQ0eLvNXoqGKBFHf3ABL/axzkRmIQ61yZTiUiGJBGjjNFo=
MIME-Version: 1.0
In-Reply-To: <1510584031-36240-5-git-send-email-pbonzini@redhat.com>
References: <1510584031-36240-1-git-send-email-pbonzini@redhat.com>
        <1510584031-36240-5-git-send-email-pbonzini@redhat.com>
From: Wanpeng Li
Date: Wed, 15 Nov 2017 08:42:02 +0800
Message-ID:
Subject: Re: [PATCH 4/5] KVM: x86: add support for emulating UMIP
To: Paolo Bonzini
Cc: "linux-kernel@vger.kernel.org" , kvm , Radim Krcmar
Content-Type: text/plain; charset="UTF-8"
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

2017-11-13 22:40 GMT+08:00 Paolo Bonzini:
> The User-Mode Instruction Prevention feature present in recent Intel
> processors prevents a group of instructions (sgdt, sidt, sldt, smsw, and
> str) from being executed with CPL > 0; if one of them is executed anyway,
> a general protection fault is issued.
>
> UMIP instructions in general are also able to trigger vmexits, so we can
> actually emulate UMIP on older processors. This commit sets up the
> infrastructure so that kvm-intel.ko and kvm-amd.ko can set the UMIP
> feature bit for CPUID even if the feature is not actually available
> in hardware.
>
> Signed-off-by: Paolo Bonzini

Reviewed-by: Wanpeng Li

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/cpuid.c            | 2 ++
>  arch/x86/kvm/svm.c              | 6 ++++++
>  arch/x86/kvm/vmx.c              | 6 ++++++
>  4 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1b005ccf4d0b..f0a4f107a97f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1004,6 +1004,7 @@ struct kvm_x86_ops {
>          void (*handle_external_intr)(struct kvm_vcpu *vcpu);
>          bool (*mpx_supported)(void);
>          bool (*xsaves_supported)(void);
> +        bool (*umip_emulated)(void);
>
>          int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 77fb8732b47b..2b3b06458f6f 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -327,6 +327,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>          unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
>          unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
>          unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
> +        unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>
>          /* cpuid 1.edx */
>          const u32 kvm_cpuid_1_edx_x86_features =
> @@ -473,6 +474,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>                          entry->ebx |= F(TSC_ADJUST);
>                          entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
>                          cpuid_mask(&entry->ecx, CPUID_7_ECX);
> +                        entry->ecx |= f_umip;
>                          /* PKU is not yet implemented for shadow paging. */
>                          if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
>                                  entry->ecx &= ~F(PKU);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0e68f0b3cbf7..be7fc7e5ee7e 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5174,6 +5174,11 @@ static bool svm_xsaves_supported(void)
>          return false;
>  }
>
> +static bool svm_umip_emulated(void)
> +{
> +        return false;
> +}
> +
>  static bool svm_has_wbinvd_exit(void)
>  {
>          return true;
> @@ -5485,6 +5490,7 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
>          .invpcid_supported = svm_invpcid_supported,
>          .mpx_supported = svm_mpx_supported,
>          .xsaves_supported = svm_xsaves_supported,
> +        .umip_emulated = svm_umip_emulated,
>
>          .set_supported_cpuid = svm_set_supported_cpuid,
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 8917e100ddeb..6c474c94e154 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -9095,6 +9095,11 @@ static bool vmx_xsaves_supported(void)
>                  SECONDARY_EXEC_XSAVES;
>  }
>
> +static bool vmx_umip_emulated(void)
> +{
> +        return false;
> +}
> +
>  static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>  {
>          u32 exit_intr_info;
> @@ -12038,6 +12043,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
>          .handle_external_intr = vmx_handle_external_intr,
>          .mpx_supported = vmx_mpx_supported,
>          .xsaves_supported = vmx_xsaves_supported,
> +        .umip_emulated = vmx_umip_emulated,
>
>          .check_nested_events = vmx_check_nested_events,
>
> --
> 1.8.3.1
>
>
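
For reference, the feature flag that f_umip feeds into the guest's CPUID is
CPUID.(EAX=7,ECX=0):ECX[bit 2]. Below is a minimal user-space sketch of how a
guest could probe whether that (possibly emulated) bit is advertised. It is
illustrative only, not part of the patch under review, and assumes a GCC or
Clang toolchain whose <cpuid.h> provides __get_cpuid_count.

/*
 * Illustrative sketch only (not from the patch): check whether
 * CPUID.(EAX=7,ECX=0):ECX[bit 2] (UMIP) is advertised to this environment.
 * Assumes GCC/Clang <cpuid.h>.
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* Leaf 7, sub-leaf 0: structured extended feature flags. */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 7 not supported");
                return 1;
        }

        printf("UMIP %s\n", (ecx & (1u << 2)) ? "advertised" : "not advertised");
        return 0;
}

With this patch alone, both svm_umip_emulated() and vmx_umip_emulated() still
return false, so the bit stays clear; a later patch in the series is expected
to flip the VMX side to true, at which point the check above can succeed in a
guest even when the host CPU lacks UMIP.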