From: Sean Christopherson <seanjc@google.com>
To: Chao Gao <chao.gao@intel.com>
Cc: Yang Weijiang <weijiang.yang@intel.com>,
	pbonzini@redhat.com, peterz@infradead.org, john.allen@amd.com,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	rick.p.edgecombe@intel.com, binbin.wu@linux.intel.com
Subject: Re: [PATCH v5 12/19] KVM:x86: Save and reload SSP to/from SMRAM
Date: Fri, 4 Aug 2023 08:25:26 -0700
Message-ID: <ZM0YZgFsYWuBFOze@google.com>
In-Reply-To: <ZMyueOBXMwPkVk6J@chao-email>

On Fri, Aug 04, 2023, Chao Gao wrote:
> On Thu, Aug 03, 2023 at 12:27:25AM -0400, Yang Weijiang wrote:
> >Save CET SSP to SMRAM on SMI and reload it on RSM. KVM emulates
> >architectural behavior when the guest enters/leaves SMM mode, i.e.,
> >it saves registers to SMRAM on SMM entry and reloads them on SMM
> >exit. Per the SDM, SSP is defined as one of the fields in SMRAM for
> >64-bit mode, so handle the state accordingly.
> >
> >Check is_smm() to determine whether kvm_cet_is_msr_accessible() is
> >called in SMM mode so that kvm_{set,get}_msr() works in SMM mode.
> >
> >Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> >---
> > arch/x86/kvm/smm.c | 11 +++++++++++
> > arch/x86/kvm/smm.h |  2 +-
> > arch/x86/kvm/x86.c | 11 ++++++++++-
> > 3 files changed, 22 insertions(+), 2 deletions(-)
> >
> >diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> >index b42111a24cc2..e0b62d211306 100644
> >--- a/arch/x86/kvm/smm.c
> >+++ b/arch/x86/kvm/smm.c
> >@@ -309,6 +309,12 @@ void enter_smm(struct kvm_vcpu *vcpu)
> > 
> > 	kvm_smm_changed(vcpu, true);
> > 
> >+#ifdef CONFIG_X86_64
> >+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
> >+	    kvm_get_msr(vcpu, MSR_KVM_GUEST_SSP, &smram.smram64.ssp))
> >+		goto error;
> >+#endif
> 
> SSP save/load should go to enter_smm_save_state_64() and rsm_load_state_64(),
> where other fields of SMRAM are handled.

+1.  The right way to get/set MSRs like this is to use __kvm_get_msr() and pass
%true for @host_initiated.  Though I would add a prep patch to provide wrappers
for __kvm_get_msr() and __kvm_set_msr().  Naming will be hard, but I think we
can use kvm_msr_{read,write}() to go along with the KVM-initiated register
accessors/mutators, e.g. kvm_register_read(), kvm_pdptr_write(), etc.

Then you don't need to wait until after kvm_smm_changed(), and
kvm_cet_is_msr_accessible() doesn't need the confusing (and broken) SMM waiver,
which, as Chao points out below, would allow the guest to access the synthetic MSR.

Delta patch at the bottom (would need to be split up, rebased, etc.).

> > 	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smram)))
> > 		goto error;
> > 
> >@@ -586,6 +592,11 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
> > 	if ((vcpu->arch.hflags & HF_SMM_INSIDE_NMI_MASK) == 0)
> > 		static_call(kvm_x86_set_nmi_mask)(vcpu, false);
> > 
> >+#ifdef CONFIG_X86_64
> >+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
> >+	    kvm_set_msr(vcpu, MSR_KVM_GUEST_SSP, smram.smram64.ssp))
> >+		return X86EMUL_UNHANDLEABLE;
> >+#endif
> > 	kvm_smm_changed(vcpu, false);
> > 
> > 	/*
> >diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
> >index a1cf2ac5bd78..1e2a3e18207f 100644
> >--- a/arch/x86/kvm/smm.h
> >+++ b/arch/x86/kvm/smm.h
> >@@ -116,8 +116,8 @@ struct kvm_smram_state_64 {
> > 	u32 smbase;
> > 	u32 reserved4[5];
> > 
> >-	/* ssp and svm_* fields below are not implemented by KVM */
> > 	u64 ssp;
> >+	/* svm_* fields below are not implemented by KVM */
> > 	u64 svm_guest_pat;
> > 	u64 svm_host_efer;
> > 	u64 svm_host_cr4;
> >diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >index 98f3ff6078e6..56aa5a3d3913 100644
> >--- a/arch/x86/kvm/x86.c
> >+++ b/arch/x86/kvm/x86.c
> >@@ -3644,8 +3644,17 @@ static bool kvm_cet_is_msr_accessible(struct kvm_vcpu *vcpu,
> > 		if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> > 			return false;
> > 
> >-		if (msr->index == MSR_KVM_GUEST_SSP)
> >+		/*
> >+		 * This MSR is synthesized mainly for userspace access during
> >+		 * Live Migration, it also can be accessed in SMM mode by VMM.
> >+		 * Guest is not allowed to access this MSR.
> >+		 */
> >+		if (msr->index == MSR_KVM_GUEST_SSP) {
> >+			if (IS_ENABLED(CONFIG_X86_64) && is_smm(vcpu))
> >+				return true;
> 
> On second thoughts, this is incorrect. We don't want guest in SMM
> mode to read/write SSP via the synthesized MSR. Right?

It's not a guest read though, KVM is doing the read while emulating SMI/RSM.

> You can
> 1. move set/get guest SSP into two helper functions, e.g., kvm_set/get_ssp()
> 2. call kvm_set/get_ssp() for host-initiated MSR accesses and SMM transitions.

We could, but that would largely defeat the purpose of kvm_x86_ops.{g,s}et_msr(),
i.e. we already have hooks to get at MSR values that are buried in the VMCS/VMCB,
the interface is just a bit kludgy.
 
> 3. refuse guest accesses to the synthesized MSR.

---
 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/cpuid.c            |  2 +-
 arch/x86/kvm/smm.c              | 10 ++++------
 arch/x86/kvm/x86.c              | 17 +++++++++++++----
 4 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f883696723f4..fe8484bc8082 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1939,7 +1939,13 @@ void kvm_prepare_emulation_failure_exit(struct kvm_vcpu *vcpu);
 
 void kvm_enable_efer_bits(u64);
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
-int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
+
+/*
+ * kvm_msr_{read,write}() are KVM-internal helpers, i.e. for when KVM needs to
+ * get/set an MSR value when emulating CPU behavior.
+ */
+int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
+int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
 int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 1a601be7b4fa..b595645b2af7 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -1515,7 +1515,7 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
 		*edx = entry->edx;
 		if (function == 7 && index == 0) {
 			u64 data;
-		        if (!__kvm_get_msr(vcpu, MSR_IA32_TSX_CTRL, &data, true) &&
+		        if (!kvm_msr_read(vcpu, MSR_IA32_TSX_CTRL, &data) &&
 			    (data & TSX_CTRL_CPUID_CLEAR))
 				*ebx &= ~(F(RTM) | F(HLE));
 		} else if (function == 0x80000007) {
diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index e0b62d211306..8db12831877e 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -275,6 +275,10 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
 	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
 
 	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+
+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
+		KVM_BUG_ON(kvm_msr_read(vcpu, MSR_KVM_GUEST_SSP, &smram->ssp),
+			   vcpu->kvm);
 }
 #endif
 
@@ -309,12 +313,6 @@ void enter_smm(struct kvm_vcpu *vcpu)
 
 	kvm_smm_changed(vcpu, true);
 
-#ifdef CONFIG_X86_64
-	if (guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
-	    kvm_get_msr(vcpu, MSR_KVM_GUEST_SSP, &smram.smram64.ssp))
-		goto error;
-#endif
-
 	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smram)))
 		goto error;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e200a5d00e9..872767b7bf51 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1924,8 +1924,8 @@ static int kvm_set_msr_ignored_check(struct kvm_vcpu *vcpu,
  * Returns 0 on success, non-0 otherwise.
  * Assumes vcpu_load() was already called.
  */
-int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
-		  bool host_initiated)
+static int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
+			 bool host_initiated)
 {
 	struct msr_data msr;
 	int ret;
@@ -1951,6 +1951,16 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 	return ret;
 }
 
+int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data)
+{
+	return __kvm_set_msr(vcpu, index, data, true);
+}
+
+int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data)
+{
+	return __kvm_get_msr(vcpu, index, data, true);
+}
+
 static int kvm_get_msr_ignored_check(struct kvm_vcpu *vcpu,
 				     u32 index, u64 *data, bool host_initiated)
 {
@@ -4433,8 +4443,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		if (msr == MSR_IA32_PL0_SSP || msr == MSR_IA32_PL1_SSP ||
 		    msr == MSR_IA32_PL2_SSP) {
-			msr_info->data =
-				vcpu->arch.cet_s_ssp[msr - MSR_IA32_PL0_SSP];
+			msr_info->data = vcpu->arch.cet_s_ssp[msr - MSR_IA32_PL0_SSP];
 		} else if (msr == MSR_IA32_U_CET || msr == MSR_IA32_PL3_SSP) {
 			kvm_get_xsave_msr(msr_info);
 		}

base-commit: 82e95ab0094bf1b823a6f9c9a07238852b375a22
-- 

