Subject: Re: [PATCH 4/5] x86/svm: Direct access to MSR_IA32_SPEC_CTRL
From: Paolo Bonzini
To: Ashok Raj, linux-kernel@vger.kernel.org, Thomas Gleixner, Tim Chen,
	Andy Lutomirski, Linus Torvalds, Greg KH
Cc: Dave Hansen, Andrea Arcangeli, Andi Kleen, Arjan Van De Ven,
	David Woodhouse, Peter Zijlstra, Dan Williams, Jun Nakajima,
	Asit Mallick
Date: Fri, 12 Jan 2018 13:38:50 +0100
In-Reply-To: <1515720739-43819-5-git-send-email-ashok.raj@intel.com>
References: <1515720739-43819-1-git-send-email-ashok.raj@intel.com>
	<1515720739-43819-5-git-send-email-ashok.raj@intel.com>

On 12/01/2018 02:32, Ashok Raj wrote:
> From: Paolo Bonzini
> 
> Direct access to MSR_IA32_SPEC_CTRL is important for performance.
> Allow load/store of MSR_IA32_SPEC_CTRL, restore guest IBRS on VM entry
> and restore host values on VM exit.
> 
> TBD: need to check whether the MSR can be passed through even if the
> feature is not enumerated by the CPU.
> 
> [Ashok: Modified to reuse V3 spec-ctrl patches from Tim]
> 
> Signed-off-by: Paolo Bonzini
> Signed-off-by: Ashok Raj
> ---
>  arch/x86/kvm/svm.c | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)

If you want to do this, please include the vmx.c part as well...
Paolo

> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0e68f0b..7c14471a 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -183,6 +183,8 @@ struct vcpu_svm {
>  		u64 gs_base;
>  	} host;
>  
> +	u64 spec_ctrl;
> +
>  	u32 *msrpm;
>  
>  	ulong nmi_iret_rip;
> @@ -248,6 +250,7 @@ static const struct svm_direct_access_msrs {
>  	{ .index = MSR_CSTAR,			.always = true },
>  	{ .index = MSR_SYSCALL_MASK,		.always = true },
>  #endif
> +	{ .index = MSR_IA32_SPEC_CTRL,		.always = true },
>  	{ .index = MSR_IA32_LASTBRANCHFROMIP,	.always = false },
>  	{ .index = MSR_IA32_LASTBRANCHTOIP,	.always = false },
>  	{ .index = MSR_IA32_LASTINTFROMIP,	.always = false },
> @@ -917,6 +920,9 @@ static void svm_vcpu_init_msrpm(u32 *msrpm)
>  
>  		set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
>  	}
> +
> +	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL))
> +		set_msr_interception(msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
>  }
>  
>  static void add_msr_offset(u32 offset)
> @@ -3576,6 +3582,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_VM_CR:
>  		msr_info->data = svm->nested.vm_cr_msr;
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		msr_info->data = svm->spec_ctrl;
> +		break;
>  	case MSR_IA32_UCODE_REV:
>  		msr_info->data = 0x01000065;
>  		break;
> @@ -3724,6 +3733,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_VM_IGNNE:
>  		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		svm->spec_ctrl = data;
> +		break;
>  	case MSR_IA32_APICBASE:
>  		if (kvm_vcpu_apicv_active(vcpu))
>  			avic_update_vapic_bar(to_svm(vcpu), data);
> @@ -4871,6 +4883,19 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
>  	svm_complete_interrupts(svm);
>  }
>  
> +
> +/*
> + * Save guest value of spec_ctrl and also restore host value
> + */
> +static void save_guest_spec_ctrl(struct vcpu_svm *svm)
> +{
> +	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> +		svm->spec_ctrl = spec_ctrl_get();
> +		spec_ctrl_restriction_on();
> +	} else
> +		rmb();
> +}
> +
>  static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -4910,6 +4935,14 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	clgi();
>  
> +	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> +		/*
> +		 * FIXME: lockdep_assert_irqs_disabled();
> +		 */
> +		WARN_ON_ONCE(!irqs_disabled());
> +		spec_ctrl_set(svm->spec_ctrl);
> +	}
> +
>  	local_irq_enable();
>  
>  	asm volatile (
> @@ -4985,6 +5018,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  #endif
>  		);
>  
> +	save_guest_spec_ctrl(svm);
> +
>  #ifdef CONFIG_X86_64
>  	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
>  #else
> 
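
[Editor's note: a rough sketch of what the vmx.c counterpart asked for above
could look like, mirroring the svm.c hunks. This is illustrative only, not
part of the posted series: it assumes the spec_ctrl_get()/spec_ctrl_set()/
spec_ctrl_restriction_on() helpers from Tim's spec-ctrl patches and a
hypothetical spec_ctrl field added to struct vcpu_vmx; the MSR-bitmap
passthrough and the get/set_msr plumbing in vmx.c would need equivalent
treatment as well.]

/*
 * Hypothetical vmx.c helpers, mirroring save_guest_spec_ctrl() and the
 * pre-entry spec_ctrl_set() call in the svm.c hunk above.
 */
static void vmx_save_guest_spec_ctrl(struct vcpu_vmx *vmx)
{
	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
		/* Save the guest's SPEC_CTRL, then restore the host value. */
		vmx->spec_ctrl = spec_ctrl_get();
		spec_ctrl_restriction_on();
	} else
		rmb();
}

static void vmx_restore_guest_spec_ctrl(struct vcpu_vmx *vmx)
{
	if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
		/* Must run with interrupts disabled, right before VM entry. */
		WARN_ON_ONCE(!irqs_disabled());
		spec_ctrl_set(vmx->spec_ctrl);
	}
}

vmx_restore_guest_spec_ctrl() would be called from vmx_vcpu_run() just before
the VM-entry asm block and vmx_save_guest_spec_ctrl() right after it, matching
the placement shown in the svm.c diff.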