From: Yang Weijiang
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
	sean.j.christopherson@intel.com, jmattson@google.com
Cc: yu.c.zhang@linux.intel.com, Yang Weijiang
Subject: [RESEND v13 06/11] KVM: x86: Load guest fpu state when access MSRs managed by XSAVES
Date: Thu, 16 Jul 2020 11:16:22 +0800
Message-Id: <20200716031627.11492-7-weijiang.yang@intel.com>
In-Reply-To: <20200716031627.11492-1-weijiang.yang@intel.com>
References: <20200716031627.11492-1-weijiang.yang@intel.com>

From: Sean Christopherson

A handful of CET MSRs are not context switched through "traditional"
methods, e.g. VMCS or manual switching, but rather are passed through
to the guest and are saved and restored by XSAVES/XRSTORS, i.e. in the
guest's FPU state.

Load the guest's FPU state if userspace is accessing MSRs whose values
are managed by XSAVES so that the MSR helper, e.g. vmx_{get,set}_msr(),
can simply do {RD,WR}MSR to access the guest's value.

Note that guest_cpuid_has() is not queried as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.
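[Editor's illustration, not part of this patch: a minimal sketch of what such an
MSR helper might look like once the guest FPU state has been loaded. The helper
name is hypothetical; the real vmx_{get,set}_msr() changes come later in this
series.]

static int get_xsaves_managed_msr(struct kvm_vcpu *vcpu,
				  struct msr_data *msr_info)
{
	/*
	 * Assumes kvm_load_guest_fpu() has already run, i.e. XRSTORS has
	 * placed the guest's CET values into the hardware MSRs, so a
	 * plain RDMSR returns the guest's value.
	 */
	switch (msr_info->index) {
	case MSR_IA32_U_CET:
	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
		rdmsrl(msr_info->index, msr_info->data);
		return 0;
	default:
		return 1;
	}
}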
Signed-off-by: Sean Christopherson
Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/x86.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8aed32ff9c0c..c437ddc22ad6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -107,6 +107,8 @@ static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
 static int sync_regs(struct kvm_vcpu *vcpu);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
 
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_x86_ops);
@@ -3356,6 +3358,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+static bool is_xsaves_msr(u32 index)
+{
+	return index == MSR_IA32_U_CET ||
+	       (index >= MSR_IA32_PL0_SSP && index <= MSR_IA32_PL3_SSP);
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -3366,11 +3374,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
-- 
2.17.2
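
[Editor's note: for context, a hedged sketch of how host userspace can reach this
path. A KVM_GET_MSRS ioctl on the vCPU fd funnels into __msr_io(), which with this
patch wraps XSAVES-managed MSR accesses in kvm_load_guest_fpu()/kvm_put_guest_fpu().
The MSR index value below follows the Intel SDM and the helper is purely
illustrative, not taken from this series.]

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef MSR_IA32_PL3_SSP
#define MSR_IA32_PL3_SSP 0x6a7	/* per Intel SDM; assumed, not defined by this patch */
#endif

/*
 * Read the guest's PL3 shadow-stack pointer via KVM_GET_MSRS.  This is
 * legal even before KVM_SET_CPUID2, which is why __msr_io() keys off
 * supported_xss rather than guest_cpuid_has().
 */
static uint64_t get_guest_pl3_ssp(int vcpu_fd)
{
	struct kvm_msrs *msrs;
	uint64_t data;

	msrs = calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	if (!msrs)
		exit(1);
	msrs->nmsrs = 1;
	msrs->entries[0].index = MSR_IA32_PL3_SSP;

	/* KVM_GET_MSRS returns the number of MSRs successfully read. */
	if (ioctl(vcpu_fd, KVM_GET_MSRS, msrs) != 1) {
		perror("KVM_GET_MSRS");
		exit(1);
	}
	data = msrs->entries[0].data;
	free(msrs);
	return data;
}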