From: Yang Weijiang <weijiang.yang@intel.com>
To: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com,
	like.xu.linux@gmail.com, vkuznets@redhat.com,
	wei.w.wang@intel.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Sean Christopherson <sean.j.christopherson@intel.com>,
	Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v9 03/17] KVM: x86: Load guest fpu state when accessing MSRs managed by XSAVES
Date: Tue, 15 Feb 2022 16:25:30 -0500
Message-ID: <20220215212544.51666-4-weijiang.yang@intel.com>
In-Reply-To: <20220215212544.51666-1-weijiang.yang@intel.com>

From: Sean Christopherson <sean.j.christopherson@intel.com>

If new feature MSRs are supported in XSS and passed through to the guest,
they are saved and restored by XSAVES/XRSTORS, i.e. as part of the guest's
FPU state.

Load the guest's FPU state if userspace is accessing MSRs whose values are
managed by XSAVES so that the MSR helper, e.g. kvm_{get,set}_xsave_msr(),
can simply do {RD,WR}MSR to access the guest's value.

Because __msr_io() is also used for the KVM_GET_MSRS device ioctl(),
explicitly check that @vcpu is non-null before attempting to load guest
state.  The XSS-supporting MSRs cannot be retrieved via the device ioctl()
without loading guest FPU state (which doesn't exist).
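
For reference (not part of this patch): the no-vcpu path is the system-scope
KVM_GET_MSRS ioctl on /dev/kvm, which reports MSR-based feature values and
reaches __msr_io() with a NULL vcpu.  A minimal userspace sketch, assuming the
usual KVM uapi headers and ignoring error handling:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static void read_feature_msr(void)
  {
          /* System-scope KVM_GET_MSRS: no vcpu (and no guest FPU) exists. */
          struct {
                  struct kvm_msrs hdr;
                  struct kvm_msr_entry entries[1];
          } m = {};
          int kvm_fd = open("/dev/kvm", O_RDWR);

          m.hdr.nmsrs = 1;
          m.entries[0].index = 0x10a;     /* MSR_IA32_ARCH_CAPABILITIES */

          ioctl(kvm_fd, KVM_GET_MSRS, &m);
  }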

MSRs that are switched through XSAVES are especially annoying due to the
possibility of the kernel's FPU being used in IRQ context.  Disable IRQs
and ensure the guest's FPU state is loaded when accessing such MSRs.

Note that guest_cpuid_has() is not queried as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.
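
For illustration (same headers as the snippet above; vcpu_fd and a populated
struct kvm_cpuid2 are assumed to exist already), a VMM restoring a vCPU might
legitimately do:

  /* Restore MSR state first, configure CPUID afterwards. */
  struct {
          struct kvm_msrs hdr;
          struct kvm_msr_entry entries[1];
  } m = {};

  m.hdr.nmsrs = 1;
  m.entries[0].index = 0xda0;              /* MSR_IA32_XSS */
  m.entries[0].data = 0;

  ioctl(vcpu_fd, KVM_SET_MSRS, &m);        /* MSRs before ...    */
  ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid2);  /* ... KVM_SET_CPUID2 */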

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/x86.c              | 44 ++++++++++++++++++++++++++++++++-
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9d59dfc27e5a..0c3a6feb41eb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1731,6 +1731,9 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu);
 int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
 int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);
 
+void kvm_get_xsave_msr(struct msr_data *msr_info);
+void kvm_set_xsave_msr(struct msr_data *msr_info);
+
 unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu);
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8397e5bc4ed5..8891329d594c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -125,6 +125,9 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu);
 static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
 
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
+
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_x86_ops);
 
@@ -4072,6 +4075,36 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+void __maybe_unused kvm_get_xsave_msr(struct msr_data *msr_info)
+{
+	local_irq_disable();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+	rdmsrl(msr_info->index, msr_info->data);
+	local_irq_enable();
+}
+EXPORT_SYMBOL_GPL(kvm_get_xsave_msr);
+
+void __maybe_unused kvm_set_xsave_msr(struct msr_data *msr_info)
+{
+	local_irq_disable();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+	wrmsrl(msr_info->index, msr_info->data);
+	local_irq_enable();
+}
+EXPORT_SYMBOL_GPL(kvm_set_xsave_msr);
+
+/*
+ * If new features pass through XSS-managed MSRs to the guest, separate
+ * checks must be added here so that the feature-dependent guest MSRs are
+ * loaded before they are accessed.
+ */
+static bool is_xsaves_msr(u32 index)
+{
+	return false;
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -4082,11 +4115,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
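
As a sketch of where a later change is expected to hook in (illustrative only,
not added by this patch; MSR_ARCH_LBR_CTL and MSR_ARCH_LBR_DEPTH come from
msr-index.h and are merely examples of XSS-managed MSRs):

  static bool is_xsaves_msr(u32 index)
  {
          /* A feature-enabling patch would list its XSS-managed MSRs here. */
          return index == MSR_ARCH_LBR_CTL ||
                 index == MSR_ARCH_LBR_DEPTH;
  }

The corresponding vendor ->get_msr()/->set_msr() handlers would then call
kvm_get_xsave_msr()/kvm_set_xsave_msr() for those indices, relying on
__msr_io() having already loaded the guest FPU state.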
-- 
2.27.0


