linux-kernel.vger.kernel.org archive mirror
* [PATCH v9 0/7] Introduce support for guest CET feature
@ 2019-12-27  2:11 Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration Yang Weijiang
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

Control-flow Enforcement Technology (CET) provides protection against
Return/Jump-Oriented Programming (ROP/JOP) attacks. It includes two
sub-features: Shadow Stack (SHSTK) and Indirect Branch Tracking (IBT).
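
For orientation, guest-side enablement (once KVM exposes the feature)
boils down to setting CR4.CET and programming MSR_IA32_U_CET. Below is
a minimal guest-kernel sketch based on the SDM bit layout; the
CET_SHSTK_EN/CET_ENDBR_EN names are illustrative assumptions, not
definitions from this series:

/*
 * Hedged sketch of guest-side user CET enabling; the MSR bit names
 * are assumptions based on the SDM, not taken from this series.
 */
#define CET_SHSTK_EN	(1ULL << 0)	/* shadow stack enable */
#define CET_ENDBR_EN	(1ULL << 2)	/* indirect branch tracking */

static void guest_enable_user_cet(void)
{
	cr4_set_bits(X86_CR4_CET);	/* #GP unless the VMM allows CR4.CET */
	wrmsrl(MSR_IA32_U_CET, CET_SHSTK_EN | CET_ENDBR_EN);
}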

KVM changes are required to support the guest CET feature.
This patch series implements CET-related CPUID/XSAVES enumeration, MSR
handling, and VM-entry/VM-exit configuration, so that the guest kernel
can set up its CET runtime infrastructure on top of them. Some CET MSRs
and related feature flags used here reference the definitions in the
kernel CET patch set.

The CET kernel patches are here:
https://lkml.org/lkml/2019/8/13/1110
https://lkml.org/lkml/2019/8/13/1109
Note: CET is not supported for the nested case yet; since CR4.CET
is fixed to 0 in the FIXED1 MSR, a nested VM cannot actually enable CET.

v8 -> v9:
- Refactored msr-check functions per Sean's feedback.
- Fixed a few issues per Sean's suggestion.
- Rebased patch to kernel-v5.4.
- Moved CET CPUID feature bits and CR4.CET to last patch.

v7 -> v8:
- Addressed Jim's and Sean's feedback on: 1) CPUID(0xD,i) enumeration;
  2) sanity checks when configuring guest CET; 3) function improvements.
- Added more sanity-check functions.
- Set default host CET state for VM-exit so that the guest's CET state
  won't leak into the host on VM-exit.
- Added CR0.WP vs. CR4.CET mutual constraints.

v6 -> v7:
- Rebased patch to kernel v5.3.
- Per Sean's suggestion, changed the CPUID(0xd, n) enumeration code to
  align with the existing one; the fix is kept as an independent patch
  since the XSS MSR is widely used on x86 platforms.
- Checked more host and guest status before configuring guest CET,
  per Sean's feedback.
- Added error checks before the guest accesses CET MSRs, per Sean's
  feedback.
- Other minor fixes suggested by Sean.

v5 -> v6:
- Rebase patch to kernel v5.2.
- Move CPUID(0xD, n>=1) helper to a separate patch.
- Merge the xsave size fix into another patch.
- Other minor fixes per community feedback.

v4 -> v5:
- Rebase patch to kernel v5.1.
- Wrap CPUID(0xD, n>=1) code to a helper function.
- Pass through MSR_IA32_PL1_SSP and MSR_IA32_PL2_SSP to Guest.
- Add Co-developed-by tags in the patch descriptions.
- Refine patch description.

v3 -> v4:
- Add Sean's patch for loading Guest fpu state before access XSAVES
  managed CET MSRs.
- Fold CET bits setting into the CPUID configuration patch.
- Add VMX interface to query Host XSS.
- Check Host and Guest XSS support bits before setting Guest XSS.
- Make Guest SHSTK and IBT feature enabling independent.
- Do not report CET support to Guest when the Host CET feature is disabled.

v2 -> v3:
- Modified patches to make Guest CET independent of Host enabling.
- Added patch 8 to add user-space access to Guest CET MSRs.
- Modified code comments and patch description to reflect changes.

v1 -> v2:
- Re-ordered the patch sequence and combined one patch into another.
- Added more description for CET related VMCS fields.
- Added Host CET capability check while enabling Guest CET loading bit.
- Added Host CET capability check while reporting Guest CPUID(EAX=7, ECX=0).
- Modified code reporting Guest CPUID(EAX=0xD, ECX>=1) to make it clearer.
- Added Host and Guest XSS mask check while setting bits for Guest XSS.


PATCH 1    : Fix CPUID(0xD, n) enumeration to support XSS MSR.
PATCH 2    : Define CET VMCS fields and bits.
PATCH 3    : Pass through CET MSRs to guest.
PATCH 4    : Load guest/host CET states on vmentry/vmexit.
PATCH 5    : Enable update to IA32_XSS for guest.
PATCH 6    : Load Guest FPU states for XSAVES managed MSRs.
PATCH 7    : Add user-space CET MSR access interface.

Sean Christopherson (1):
  KVM: X86: Load guest fpu state when accessing MSRs managed by XSAVES

Yang Weijiang (6):
  KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration
  KVM: VMX: Define CET VMCS fields and #CP flag
  KVM: VMX: Pass through CET related MSRs
  KVM: VMX: Load CET states on vmentry/vmexit
  KVM: X86: Enable CET bits update in IA32_XSS
  KVM: X86: Add user-space access interface for CET MSRs

 arch/x86/include/asm/kvm_host.h |   6 +-
 arch/x86/include/asm/vmx.h      |   8 +
 arch/x86/include/uapi/asm/kvm.h |   1 +
 arch/x86/kvm/cpuid.c            | 122 ++++++++++-----
 arch/x86/kvm/cpuid.h            |   2 +
 arch/x86/kvm/svm.c              |   7 +
 arch/x86/kvm/vmx/capabilities.h |  10 ++
 arch/x86/kvm/vmx/vmx.c          | 256 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              |  36 ++++-
 arch/x86/kvm/x86.h              |  10 +-
 10 files changed, 412 insertions(+), 46 deletions(-)

-- 
2.17.2



* [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2020-03-05 14:51   ` Paolo Bonzini
  2019-12-27  2:11 ` [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag Yang Weijiang
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

The control bits in the IA32_XSS MSR are being used for new features,
but the current CPUID(0xd,i) enumeration code doesn't support them, so
fix the existing code first.

The supervisor states in IA32_XSS haven't been used in public
KVM code yet, so set KVM_SUPPORTED_XSS to 0 for now. Anyone developing
an IA32_XSS related feature may expand the macro to add the CPUID
support; otherwise, CPUID(0xd,i>1) always reports 0 for the sub-leaf
to the guest.

Extracted the old code into a new filter function and kept the same
flavor as the others.

This patch passed selftests on a few Intel platforms.
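
For reference, the sub-leaf semantics the new filter relies on are:
sub-leaf 0 describes the user states (XCR0), sub-leaf 1 describes
XSAVES/XSAVEC and the combined XCR0 | IA32_XSS save area size, and for
sub-leaves >= 2, ECX bit 0 indicates whether the state component is
supervisor state (IA32_XSS) or user state (XCR0). A minimal helper
sketch mirroring the check used in the patch:

/* Sketch: ECX bit 0 of CPUID(0xD, i>=2) marks IA32_XSS-managed state. */
static bool xd_subleaf_is_supervisor(const struct kvm_cpuid_entry2 *e)
{
	return e->ecx & 0x1;
}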

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |   2 +
 arch/x86/kvm/cpuid.c            | 105 ++++++++++++++++++++++----------
 arch/x86/kvm/svm.c              |   7 +++
 arch/x86/kvm/vmx/vmx.c          |   8 +++
 arch/x86/kvm/x86.h              |   7 +++
 5 files changed, 97 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4fc61483919a..4e59f17ded50 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1216,6 +1216,8 @@ struct kvm_x86_ops {
 
 	bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
 	int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
+
+	u64 (*supported_xss)(void);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index f68c0c753c38..546cfe123ba7 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -62,6 +62,17 @@ u64 kvm_supported_xcr0(void)
 	return xcr0;
 }
 
+u64 kvm_supported_xss(void)
+{
+	u64 kvm_xss = KVM_SUPPORTED_XSS & kvm_x86_ops->supported_xss();
+
+	if (!kvm_x86_ops->xsaves_supported())
+		return 0;
+
+	return kvm_xss;
+}
+EXPORT_SYMBOL_GPL(kvm_supported_xss);
+
 #define F(x) bit(X86_FEATURE_##x)
 
 int kvm_update_cpuid(struct kvm_vcpu *vcpu)
@@ -426,6 +437,58 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 	}
 }
 
+static inline bool do_cpuid_0xd_mask(struct kvm_cpuid_entry2 *entry, int index)
+{
+	unsigned int f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
+	/* cpuid 0xD.1.eax */
+	const u32 kvm_cpuid_D_1_eax_x86_features =
+		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
+	u64 u_supported = kvm_supported_xcr0();
+	u64 s_supported = kvm_supported_xss();
+	u64 supported;
+
+	switch (index) {
+	case 0:
+		entry->eax &= u_supported;
+		entry->ebx = xstate_required_size(u_supported, false);
+		entry->ecx = entry->ebx;
+		entry->edx &= u_supported >> 32;
+
+		if (!u_supported) {
+			entry->eax = 0;
+			entry->ebx = 0;
+			entry->ecx = 0;
+			entry->edx = 0;
+			return false;
+		}
+		break;
+	case 1:
+		supported = u_supported | s_supported;
+		entry->eax &= kvm_cpuid_D_1_eax_x86_features;
+		cpuid_mask(&entry->eax, CPUID_D_1_EAX);
+		entry->ebx = 0;
+		entry->edx &= s_supported >> 32;
+		entry->ecx &= s_supported;
+		if (entry->eax & (F(XSAVES) | F(XSAVEC)))
+			entry->ebx = xstate_required_size(supported, true);
+
+		break;
+	default:
+		supported = (entry->ecx & 0x1) ? s_supported : u_supported;
+		if (!(supported & (BIT_ULL(index)))) {
+			entry->eax = 0;
+			entry->ebx = 0;
+			entry->ecx = 0;
+			entry->edx = 0;
+			return false;
+		}
+		if (entry->ecx & 0x1)
+			entry->ebx = 0;
+		break;
+	}
+	return true;
+}
+
 static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 				  int *nent, int maxnent)
 {
@@ -440,7 +503,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 	unsigned f_lm = 0;
 #endif
 	unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
-	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
 	unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
 
 	/* cpuid 1.edx */
@@ -495,10 +557,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
 		F(PMM) | F(PMM_EN);
 
-	/* cpuid 0xD.1.eax */
-	const u32 kvm_cpuid_D_1_eax_x86_features =
-		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
-
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
 
@@ -639,38 +697,21 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 		break;
 	}
 	case 0xd: {
-		int idx, i;
-		u64 supported = kvm_supported_xcr0();
+		int i, idx;
 
-		entry->eax &= supported;
-		entry->ebx = xstate_required_size(supported, false);
-		entry->ecx = entry->ebx;
-		entry->edx &= supported >> 32;
-		if (!supported)
+		if (!do_cpuid_0xd_mask(&entry[0], 0))
 			break;
-
-		for (idx = 1, i = 1; idx < 64; ++idx) {
-			u64 mask = ((u64)1 << idx);
+		for (i = 1, idx = 1; idx < 64; ++idx) {
 			if (*nent >= maxnent)
 				goto out;
-
 			do_host_cpuid(&entry[i], function, idx);
-			if (idx == 1) {
-				entry[i].eax &= kvm_cpuid_D_1_eax_x86_features;
-				cpuid_mask(&entry[i].eax, CPUID_D_1_EAX);
-				entry[i].ebx = 0;
-				if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
-					entry[i].ebx =
-						xstate_required_size(supported,
-								     true);
-			} else {
-				if (entry[i].eax == 0 || !(supported & mask))
-					continue;
-				if (WARN_ON_ONCE(entry[i].ecx & 1))
-					continue;
-			}
-			entry[i].ecx = 0;
-			entry[i].edx = 0;
+			if (entry[i].eax == 0 && entry[i].ebx == 0 &&
+			    entry[i].ecx == 0 && entry[i].edx == 0)
+				continue;
+
+			if (!do_cpuid_0xd_mask(&entry[i], idx))
+				continue;
+
 			++*nent;
 			++i;
 		}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c5673bda4b66..0b596efddf31 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -7178,6 +7178,11 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 		   (svm->vmcb->control.intercept & (1ULL << INTERCEPT_INIT));
 }
 
+static u64 svm_supported_xss(void)
+{
+	return 0;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -7316,6 +7321,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
 
 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+
+	.supported_xss = svm_supported_xss,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7cd6c5e67337..477173e4a85d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1743,6 +1743,11 @@ static inline bool vmx_feature_control_msr_valid(struct kvm_vcpu *vcpu,
 	return !(val & ~valid_bits);
 }
 
+static inline u64 vmx_supported_xss(void)
+{
+	return host_xss;
+}
+
 static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	switch (msr->index) {
@@ -7898,7 +7903,10 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.nested_enable_evmcs = NULL,
 	.nested_get_evmcs_version = NULL,
 	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
+
 	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
+
+	.supported_xss = vmx_supported_xss,
 };
 
 static void vmx_cleanup_l1d_flush(void)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..c29783afebed 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -293,6 +293,13 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
 				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU)
+
+/*
+ * In future, applicable XSS state bits can be added here
+ * to make them available to KVM and guest.
+ */
+#define KVM_SUPPORTED_XSS	0
+
 extern u64 host_xcr0;
 
 extern u64 kvm_supported_xcr0(void);
-- 
2.17.2



* [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2020-03-03 21:42   ` Sean Christopherson
  2019-12-27  2:11 ` [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs Yang Weijiang
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

CET (Control-flow Enforcement Technology) is an upcoming Intel(R)
processor feature that blocks Return/Jump-Oriented Programming (ROP/JOP)
attacks. It provides the following capabilities to defend
against ROP/JOP style control-flow subversion attacks:

Shadow Stack (SHSTK):
  A second stack for the program, used exclusively for
  control transfer operations.

Indirect Branch Tracking (IBT):
  Code branching protection to defend against jump/call oriented
  programming.

Several new CET MSRs are defined in the kernel to support CET:
  MSR_IA32_{U,S}_CET: Controls the CET settings for user
                      mode and supervisor mode respectively.

  MSR_IA32_PL{0,1,2,3}_SSP: Stores shadow stack pointers for
                            CPL 0,1,2,3 respectively.

  MSR_IA32_INT_SSP_TAB: Stores the base address of the shadow stack
                        pointer table.

Two XSAVES state bits are introduced for CET:
  IA32_XSS:[bit 11]: For saving/restoring user mode CET states.
  IA32_XSS:[bit 12]: For saving/restoring supervisor mode CET states.

Six VMCS fields are introduced for CET:
  {HOST,GUEST}_S_CET: Stores CET settings for supervisor mode.
  {HOST,GUEST}_SSP: Stores shadow stack pointer for supervisor mode.
  {HOST,GUEST}_INTR_SSP_TABLE: Stores base address of shadow stack pointer
                               table.

If VM_EXIT_LOAD_HOST_CET_STATE = 1, the host's CET MSRs are restored
from the VMCS fields below at VM-Exit:
  HOST_S_CET
  HOST_SSP
  HOST_INTR_SSP_TABLE

If VM_ENTRY_LOAD_GUEST_CET_STATE = 1, the guest's CET MSRs are loaded
from the VMCS fields below at VM-Entry (a conceptual sketch of both
controls follows this list):
  GUEST_S_CET
  GUEST_SSP
  GUEST_INTR_SSP_TABLE
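
Conceptually, what the CPU does with these controls can be sketched as
below (per the SDM; wrssp() is a hypothetical helper standing in for the
CPU loading its shadow-stack-pointer register, not a real kernel API):

/* Sketch of the architectural behavior selected by the new controls. */
static void cpu_load_cet_state_on_vmentry(void)
{
	if (vmcs_read32(VM_ENTRY_CONTROLS) & VM_ENTRY_LOAD_GUEST_CET_STATE) {
		wrmsrl(MSR_IA32_S_CET, vmcs_readl(GUEST_S_CET));
		wrssp(vmcs_readl(GUEST_SSP));
		wrmsrl(MSR_IA32_INT_SSP_TAB, vmcs_readl(GUEST_INTR_SSP_TABLE));
	}
}

static void cpu_load_cet_state_on_vmexit(void)
{
	if (vmcs_read32(VM_EXIT_CONTROLS) & VM_EXIT_LOAD_HOST_CET_STATE) {
		wrmsrl(MSR_IA32_S_CET, vmcs_readl(HOST_S_CET));
		wrssp(vmcs_readl(HOST_SSP));
		wrmsrl(MSR_IA32_INT_SSP_TAB, vmcs_readl(HOST_INTR_SSP_TABLE));
	}
}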

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/vmx.h      | 8 ++++++++
 arch/x86/include/uapi/asm/kvm.h | 1 +
 arch/x86/kvm/x86.c              | 1 +
 arch/x86/kvm/x86.h              | 5 +++--
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 1835767aa335..5b641c30e1b8 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -91,6 +91,7 @@
 #define VM_EXIT_CLEAR_BNDCFGS                   0x00800000
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
+#define VM_EXIT_LOAD_HOST_CET_STATE             0x10000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -104,6 +105,7 @@
 #define VM_ENTRY_LOAD_BNDCFGS                   0x00010000
 #define VM_ENTRY_PT_CONCEAL_PIP			0x00020000
 #define VM_ENTRY_LOAD_IA32_RTIT_CTL		0x00040000
+#define VM_ENTRY_LOAD_GUEST_CET_STATE           0x00100000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
@@ -323,6 +325,9 @@ enum vmcs_field {
 	GUEST_PENDING_DBG_EXCEPTIONS    = 0x00006822,
 	GUEST_SYSENTER_ESP              = 0x00006824,
 	GUEST_SYSENTER_EIP              = 0x00006826,
+	GUEST_S_CET                     = 0x00006828,
+	GUEST_SSP                       = 0x0000682a,
+	GUEST_INTR_SSP_TABLE            = 0x0000682c,
 	HOST_CR0                        = 0x00006c00,
 	HOST_CR3                        = 0x00006c02,
 	HOST_CR4                        = 0x00006c04,
@@ -335,6 +340,9 @@ enum vmcs_field {
 	HOST_IA32_SYSENTER_EIP          = 0x00006c12,
 	HOST_RSP                        = 0x00006c14,
 	HOST_RIP                        = 0x00006c16,
+	HOST_S_CET                      = 0x00006c18,
+	HOST_SSP                        = 0x00006c1a,
+	HOST_INTR_SSP_TABLE             = 0x00006c1c
 };
 
 /*
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 503d3f42da16..e68d6b448730 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -31,6 +31,7 @@
 #define MC_VECTOR 18
 #define XM_VECTOR 19
 #define VE_VECTOR 20
+#define CP_VECTOR 21
 
 /* Select x86 specific features in <linux/kvm.h> */
 #define __KVM_HAVE_PIT
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5d530521f11d..a9b1140d0508 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -379,6 +379,7 @@ static int exception_class(int vector)
 	case NP_VECTOR:
 	case SS_VECTOR:
 	case GP_VECTOR:
+	case CP_VECTOR:
 		return EXCPT_CONTRIBUTORY;
 	default:
 		break;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index c29783afebed..402dea669619 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -114,7 +114,7 @@ static inline bool x86_exception_has_error_code(unsigned int vector)
 {
 	static u32 exception_has_error_code = BIT(DF_VECTOR) | BIT(TS_VECTOR) |
 			BIT(NP_VECTOR) | BIT(SS_VECTOR) | BIT(GP_VECTOR) |
-			BIT(PF_VECTOR) | BIT(AC_VECTOR);
+			BIT(PF_VECTOR) | BIT(AC_VECTOR) | BIT(CP_VECTOR);
 
 	return (1U << vector) & exception_has_error_code;
 }
@@ -298,7 +298,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
  * In future, applicable XSS state bits can be added here
  * to make them available to KVM and guest.
  */
-#define KVM_SUPPORTED_XSS	0
+#define KVM_SUPPORTED_XSS	(XFEATURE_MASK_CET_USER \
+				| XFEATURE_MASK_CET_KERNEL)
 
 extern u64 host_xcr0;
 
-- 
2.17.2



* [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2020-03-03 21:51   ` Sean Christopherson
  2019-12-27  2:11 ` [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit Yang Weijiang
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

CET MSRs are passed through to the Guest directly to enhance performance.
CET runtime control settings are stored in MSR_IA32_{U,S}_CET,
Shadow Stack Pointers (SSP) are stored in MSR_IA32_PL{0,1,2,3}_SSP, and
the SSP table base address is stored in MSR_IA32_INT_SSP_TAB;
these MSRs are defined in the kernel and re-used here.

MSR_IA32_U_CET and MSR_IA32_PL3_SSP are used for user-mode protection.
Their contents can differ from process to process, so the kernel
needs to save/restore them during context switches; it therefore makes
sense to pass them through so that the guest kernel can
use XSAVES/XRSTORS to operate on them efficiently. The other MSRs are
used for non-user-mode protection. See the CET spec for detailed info.

The difference between the CET VMCS state fields and the xsave components
is that the former are used for CET state storage across VM-Entry/VM-Exit,
whereas the latter are used for state retention across Guest task/process
switches.
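
As background, pass-through works by clearing an MSR's bits in the VMCS
MSR bitmap; a clear bit means the guest's RDMSR/WRMSR of that MSR does
not cause a VM-exit. A simplified sketch for low-range MSRs (the CET
MSRs all fall in 0x0..0x1fff; KVM's real helper also handles the
0xc0000000 range):

/* Sketch of the MSR-bitmap rule behind vmx_disable_intercept_for_msr(). */
static void passthrough_low_msr(unsigned long *msr_bitmap, u32 msr)
{
	if (msr <= 0x1fff) {
		__clear_bit(msr, msr_bitmap + 0x000 / sizeof(long)); /* reads  */
		__clear_bit(msr, msr_bitmap + 0x800 / sizeof(long)); /* writes */
	}
}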

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/cpuid.h   |  2 ++
 arch/x86/kvm/vmx/vmx.c | 48 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index d78a61408243..1d77b880084d 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -27,6 +27,8 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
 
 int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
 
+u64 kvm_supported_xss(void);
+
 static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.maxphyaddr;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 477173e4a85d..61fc846c7ef3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7091,6 +7091,52 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
+static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+
+	/*
+	 * U_CET is required for USER CET, per the CET spec.; meanwhile U_CET and
+	 * PL3_SSP are a bundle for USER CET xsaves.
+	 */
+	if ((kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
+	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW);
+	} else {
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW, true);
+	}
+	/*
+	 * S_CET is required for KERNEL CET, meanwhile PL0_SSP ... PL2_SSP are a bundle
+	 * for CET KERNEL xsaves.
+	 */
+	if ((kvm_supported_xss() & XFEATURE_MASK_CET_KERNEL) &&
+	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW);
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW);
+
+		/* SSP_TAB only available for KERNEL SHSTK.*/
+		if (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
+			vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
+						      MSR_TYPE_RW);
+		else
+			vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
+						  MSR_TYPE_RW, true);
+	} else {
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW, true);
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, true);
+	}
+}
+
 static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7115,6 +7161,8 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
 			guest_cpuid_has(vcpu, X86_FEATURE_INTEL_PT))
 		update_intel_pt_cfg(vcpu);
+
+	vmx_update_intercept_for_cet_msr(vcpu);
 }
 
 static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
-- 
2.17.2



* [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
                   ` (2 preceding siblings ...)
  2019-12-27  2:11 ` [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2020-03-03 22:06   ` Sean Christopherson
  2019-12-27  2:11 ` [PATCH v9 5/7] KVM: X86: Enable CET bits update in IA32_XSS Yang Weijiang
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

"Load {guest,host} CET state" bit controls whether guest/host
CET states will be loaded at VM entry/exit. Before doing that,
KVM needs to check if CET can be enabled both on host and guest.

Note:
1)The processor does not allow CR4.CET to be set if CR0.WP = 0,
  similarly, it does not allow CR0.WP to be cleared while CR4.CET = 1.
  In either case, KVM would inject #GP to guest.

2)SHSTK and IBT features share one control MSR:
  MSR_IA32_{U,S}_CET, which means it's difficult to hide
  one feature from another in the case of SHSTK != IBT,
  after discussed in community, it's agreed to allow Guest
  control two features independently as it won't introduce
  security hole.
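
A condensed sketch of the invariant from note 1), which the changes to
kvm_set_cr0() and vmx_set_cr4() below both enforce:

/* Note 1) boils down to: CR4.CET = 1 requires CR0.WP = 1. */
static bool cet_cr_state_valid(unsigned long cr0, unsigned long cr4)
{
	return !(cr4 & X86_CR4_CET) || (cr0 & X86_CR0_WP);
}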

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/vmx/capabilities.h | 10 ++++++
 arch/x86/kvm/vmx/vmx.c          | 56 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/x86.c              |  3 ++
 3 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 7aa69716d516..4a67d35a42a2 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -106,6 +106,16 @@ static inline bool vmx_mpx_supported(void)
 		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
 }
 
+static inline bool cpu_has_load_guest_cet_states_ctrl(void)
+{
+	return ((vmcs_config.vmentry_ctrl) & VM_ENTRY_LOAD_GUEST_CET_STATE);
+}
+
+static inline bool cpu_has_load_host_cet_states_ctrl(void)
+{
+	return ((vmcs_config.vmexit_ctrl) & VM_EXIT_LOAD_HOST_CET_STATE);
+}
+
 static inline bool cpu_has_vmx_tpr_shadow(void)
 {
 	return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 61fc846c7ef3..95063cc7da89 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -44,6 +44,7 @@
 #include <asm/spec-ctrl.h>
 #include <asm/virtext.h>
 #include <asm/vmx.h>
+#include <asm/cet.h>
 
 #include "capabilities.h"
 #include "cpuid.h"
@@ -2445,7 +2446,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_EXIT_LOAD_IA32_EFER |
 	      VM_EXIT_CLEAR_BNDCFGS |
 	      VM_EXIT_PT_CONCEAL_PIP |
-	      VM_EXIT_CLEAR_IA32_RTIT_CTL;
+	      VM_EXIT_CLEAR_IA32_RTIT_CTL |
+	      VM_EXIT_LOAD_HOST_CET_STATE;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
 				&_vmexit_control) < 0)
 		return -EIO;
@@ -2469,7 +2471,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_ENTRY_LOAD_IA32_EFER |
 	      VM_ENTRY_LOAD_BNDCFGS |
 	      VM_ENTRY_PT_CONCEAL_PIP |
-	      VM_ENTRY_LOAD_IA32_RTIT_CTL;
+	      VM_ENTRY_LOAD_IA32_RTIT_CTL |
+	      VM_ENTRY_LOAD_GUEST_CET_STATE;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
 				&_vmentry_control) < 0)
 		return -EIO;
@@ -3027,6 +3030,25 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 	vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
+bool is_cet_bit_allowed(struct kvm_vcpu *vcpu)
+{
+	u64 kvm_xss = kvm_supported_xss();
+	unsigned long cr0;
+	bool cet_allowed;
+
+	cr0 = kvm_read_cr0(vcpu);
+
+	/* Right now, only user-mode CET is supported.*/
+	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
+		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));
+
+	if ((cr0 & X86_CR0_WP) && cet_allowed)
+		return true;
+
+	return false;
+}
+
 int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3067,6 +3089,9 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 			return 1;
 	}
 
+	if ((cr4 & X86_CR4_CET) && !is_cet_bit_allowed(vcpu))
+		return 1;
+
 	if (vmx->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
 		return 1;
 
@@ -3930,6 +3955,12 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 
 	if (cpu_has_load_ia32_efer())
 		vmcs_write64(HOST_IA32_EFER, host_efer);
+
+	if (cpu_has_load_host_cet_states_ctrl()) {
+		vmcs_writel(HOST_S_CET, 0);
+		vmcs_writel(HOST_INTR_SSP_TABLE, 0);
+		vmcs_writel(HOST_SSP, 0);
+	}
 }
 
 void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
@@ -6499,7 +6530,9 @@ bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	u64 kvm_xss = kvm_supported_xss();
 	unsigned long cr3, cr4;
+	bool cet_allowed;
 
 	/* Record the guest's net vcpu time for enforced NMI injections. */
 	if (unlikely(!enable_vnmi &&
@@ -6530,6 +6563,25 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		vmx->loaded_vmcs->host_state.cr3 = cr3;
 	}
 
+	/* Right now, only user-mode CET is supported.*/
+	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
+		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));
+
+	if (cpu_has_load_guest_cet_states_ctrl() && cet_allowed)
+		vmcs_set_bits(VM_ENTRY_CONTROLS,
+			      VM_ENTRY_LOAD_GUEST_CET_STATE);
+	else
+		vmcs_clear_bits(VM_ENTRY_CONTROLS,
+				VM_ENTRY_LOAD_GUEST_CET_STATE);
+
+	if (cpu_has_load_host_cet_states_ctrl() && cet_allowed)
+		vmcs_set_bits(VM_EXIT_CONTROLS,
+			      VM_EXIT_LOAD_HOST_CET_STATE);
+	else
+		vmcs_clear_bits(VM_EXIT_CONTROLS,
+				VM_EXIT_LOAD_HOST_CET_STATE);
+
 	cr4 = cr4_read_shadow();
 	if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
 		vmcs_writel(HOST_CR4, cr4);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a9b1140d0508..b27d97eaec24 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -788,6 +788,9 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
 		return 1;
 
+	if (!(cr0 & X86_CR0_WP) && kvm_read_cr4_bits(vcpu, X86_CR4_CET))
+		return 1;
+
 	kvm_x86_ops->set_cr0(vcpu, cr0);
 
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
-- 
2.17.2



* [PATCH v9 5/7] KVM: X86: Enable CET bits update in IA32_XSS
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
                   ` (3 preceding siblings ...)
  2019-12-27  2:11 ` [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 6/7] KVM: X86: Load guest fpu state when accessing MSRs managed by XSAVES Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs Yang Weijiang
  6 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

Previously, KVM did not support setting any bits in XSS,
so it was hardcoded to check for and inject a #GP if the Guest
attempted to write a non-zero value to XSS. Now it supports
setting the CET-related bits.
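
From the guest's side, the new semantics can be sketched like this (an
illustrative fragment assuming the kernel's wrmsrl_safe() helper;
XFEATURE_MASK_CET_USER is IA32_XSS bit 11 per the kernel CET patch set):

int err;

/* Accepted iff CPUID(0xD,1).ECX/EDX enumerate the CET bits. */
err = wrmsrl_safe(MSR_IA32_XSS, XFEATURE_MASK_CET_USER);
/* A bit outside guest_supported_xss still takes a #GP. */
err = wrmsrl_safe(MSR_IA32_XSS, 1ULL << 63);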

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/cpuid.c            | 12 ++++++++++--
 arch/x86/kvm/vmx/vmx.c          |  6 +-----
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4e59f17ded50..64bf379381e4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -620,6 +620,7 @@ struct kvm_vcpu_arch {
 
 	u64 xcr0;
 	u64 guest_supported_xcr0;
+	u64 guest_supported_xss;
 	u32 guest_xstate_size;
 
 	struct kvm_pio_request pio;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 546cfe123ba7..126a31b99823 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -125,8 +125,16 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 	}
 
 	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
-	if (best && (best->eax & (F(XSAVES) | F(XSAVEC))))
-		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+	if (best && (best->eax & (F(XSAVES) | F(XSAVEC)))) {
+		u64 xstate = vcpu->arch.xcr0 | vcpu->arch.ia32_xss;
+
+		best->ebx = xstate_required_size(xstate, true);
+		vcpu->arch.guest_supported_xss =
+			(best->ecx | ((u64)best->edx << 32)) &
+			kvm_supported_xss();
+	} else {
+		vcpu->arch.guest_supported_xss = 0;
+	}
 
 	/*
 	 * The existing code assumes virtual address is 48-bit or 57-bit in the
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 95063cc7da89..0a75b65d03f0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2081,11 +2081,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
 			return 1;
-		/*
-		 * The only supported bit as of Skylake is bit 8, but
-		 * it is not supported on KVM.
-		 */
-		if (data != 0)
+		if (data & ~vcpu->arch.guest_supported_xss)
 			return 1;
 		vcpu->arch.ia32_xss = data;
 		if (vcpu->arch.ia32_xss != host_xss)
-- 
2.17.2



* [PATCH v9 6/7] KVM: X86: Load guest fpu state when accessing MSRs managed by XSAVES
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
                   ` (4 preceding siblings ...)
  2019-12-27  2:11 ` [PATCH v9 5/7] KVM: X86: Enable CET bits update in IA32_XSS Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2019-12-27  2:11 ` [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs Yang Weijiang
  6 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

From: Sean Christopherson <sean.j.christopherson@intel.com>

A handful of CET MSRs are not context switched through "traditional"
methods, e.g. VMCS or manual switching, but rather are passed through
to the guest and are saved and restored by XSAVES/XRSTORS, i.e. the
guest's FPU state.

Load the guest's FPU state if userspace is accessing MSRs whose values
are managed by XSAVES so that the MSR helper, e.g. vmx_{get,set}_msr(),
can simply do {RD,WR}MSR to access the guest's value.

Note that guest_cpuid_has() is not queried as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.
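
For illustration, the userspace access that needs the guest FPU loaded
looks roughly like the sketch below (standard KVM_SET_MSRS usage against
an already-open vcpu fd; error handling omitted, and MSR_IA32_PL3_SSP
comes from the kernel CET patch set):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Userspace sketch: writing an XSAVES-managed CET MSR via KVM_SET_MSRS. */
struct {
	struct kvm_msrs hdr;
	struct kvm_msr_entry entry;
} msrs = {
	.hdr.nmsrs   = 1,
	.entry.index = MSR_IA32_PL3_SSP, /* XSAVES-managed => FPU is loaded */
	.entry.data  = 0,
};

ioctl(vcpu_fd, KVM_SET_MSRS, &msrs);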

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/x86.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b27d97eaec24..6dbe77365b22 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -104,6 +104,8 @@ static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
 static int sync_regs(struct kvm_vcpu *vcpu);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
 
 struct kvm_x86_ops *kvm_x86_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_x86_ops);
@@ -3129,6 +3131,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+static bool is_xsaves_msr(u32 index)
+{
+	return index == MSR_IA32_U_CET ||
+	       (index >= MSR_IA32_PL0_SSP && index <= MSR_IA32_PL3_SSP);
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -3139,11 +3147,22 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
+	const u64 cet_bits = XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL;
+	bool cet_xss = kvm_supported_xss() & cet_bits;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && cet_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
-- 
2.17.2



* [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs
  2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
                   ` (5 preceding siblings ...)
  2019-12-27  2:11 ` [PATCH v9 6/7] KVM: X86: Load guest fpu state when accessing MSRs managed by XSAVES Yang Weijiang
@ 2019-12-27  2:11 ` Yang Weijiang
  2020-03-03 22:28   ` Sean Christopherson
  6 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2019-12-27  2:11 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, jmattson, sean.j.christopherson
  Cc: yu.c.zhang, Yang Weijiang

Guest CET state is stored in two different places. State managed
with XSAVES/XRSTORS, saved/restored as in the previous patch, can be
read/written directly from/to the MSRs. State stored in VMCS fields
is accessed via vmcs_read/vmcs_write.
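
The resulting storage split can be summarized with a small predicate
(a sketch only, mirroring the vmx_{get,set}_msr() handlers below):

/*
 * Sketch: S_CET and the SSP table base live in VMCS fields
 * (GUEST_S_CET/GUEST_INTR_SSP_TABLE); U_CET and PL0..PL3_SSP live in
 * the guest's XSAVES area and are accessed with rdmsrl()/wrmsrl()
 * while the guest FPU state is loaded.
 */
static bool cet_msr_is_vmcs_resident(u32 index)
{
	return index == MSR_IA32_S_CET || index == MSR_IA32_INT_SSP_TAB;
}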

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |   3 +-
 arch/x86/kvm/cpuid.c            |   5 +-
 arch/x86/kvm/vmx/vmx.c          | 138 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c              |  11 +++
 4 files changed, 154 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 64bf379381e4..34140462084f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -90,7 +90,8 @@
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
+			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
+			  | X86_CR4_CET))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 126a31b99823..4414bd110f3c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -385,13 +385,14 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
-		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
+		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | F(SHSTK) |
+		0 /*WAITPKG*/;
 
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
 		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
 		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
-		F(MD_CLEAR);
+		F(MD_CLEAR) | F(IBT);
 
 	/* cpuid 7.1.eax */
 	const u32 kvm_cpuid_7_1_eax_x86_features =
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0a75b65d03f0..52ac67604026 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1763,6 +1763,96 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 	return 0;
 }
 
+#define CET_MSR_RSVD_BITS_1    0x3
+#define CET_MSR_RSVD_BITS_2   (0xF << 6)
+
+static bool cet_ssp_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	u64 data = msr->data;
+	u32 high_word = data >> 32;
+
+	if (is_64_bit_mode(vcpu)) {
+		if (data & CET_MSR_RSVD_BITS_1)
+			return false;
+	} else if (high_word) {
+		return false;
+	}
+
+	return true;
+}
+
+static bool cet_ctl_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	u64 data = msr->data;
+	u32 high_word = data >> 32;
+
+	if (data & CET_MSR_RSVD_BITS_2)
+		return false;
+
+	if (!is_64_bit_mode(vcpu) && high_word)
+		return false;
+
+	return true;
+}
+
+static bool cet_ssp_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	u64 kvm_xss;
+	u32 index = msr->index;
+
+	if (is_guest_mode(vcpu))
+		return false;
+
+	if (!boot_cpu_has(X86_FEATURE_SHSTK))
+		return false;
+
+	if (!msr->host_initiated &&
+	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
+		return false;
+
+	if (index == MSR_IA32_INT_SSP_TAB)
+		return true;
+
+	kvm_xss = kvm_supported_xss();
+
+	if (index == MSR_IA32_PL3_SSP) {
+		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
+			return false;
+	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
+		return false;
+	}
+
+	return true;
+}
+
+static bool cet_ctl_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
+{
+	u64 kvm_xss;
+	u32 index = msr->index;
+
+	if (is_guest_mode(vcpu))
+		return false;
+
+	kvm_xss = kvm_supported_xss();
+
+	if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
+	    !boot_cpu_has(X86_FEATURE_IBT))
+		return false;
+
+	if (!msr->host_initiated &&
+	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) &&
+	    !guest_cpuid_has(vcpu, X86_FEATURE_IBT))
+		return false;
+
+	if (index == MSR_IA32_U_CET) {
+		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
+			return false;
+	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
+		return false;
+	}
+
+	return true;
+}
 /*
  * Reads an msr value (of 'msr_index') into 'pdata'.
  * Returns 0 on success, non-0 otherwise.
@@ -1886,6 +1976,26 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		else
 			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
 		break;
+	case MSR_IA32_S_CET:
+		if (!cet_ctl_access_allowed(vcpu, msr_info))
+			return 1;
+		msr_info->data = vmcs_readl(GUEST_S_CET);
+		break;
+	case MSR_IA32_INT_SSP_TAB:
+		if (!cet_ssp_access_allowed(vcpu, msr_info))
+			return 1;
+		msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
+		break;
+	case MSR_IA32_U_CET:
+		if (!cet_ctl_access_allowed(vcpu, msr_info))
+			return 1;
+		rdmsrl(MSR_IA32_U_CET, msr_info->data);
+		break;
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!cet_ssp_access_allowed(vcpu, msr_info))
+			return 1;
+		rdmsrl(msr_info->index, msr_info->data);
+		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
@@ -2147,6 +2257,34 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		else
 			vmx->pt_desc.guest.addr_a[index / 2] = data;
 		break;
+	case MSR_IA32_S_CET:
+		if (!cet_ctl_access_allowed(vcpu, msr_info))
+			return 1;
+		if (!cet_ctl_write_allowed(vcpu, msr_info))
+			return 1;
+		vmcs_writel(GUEST_S_CET, data);
+		break;
+	case MSR_IA32_INT_SSP_TAB:
+		if (!cet_ctl_access_allowed(vcpu, msr_info))
+			return 1;
+		if (!is_64_bit_mode(vcpu))
+			return 1;
+		vmcs_writel(GUEST_INTR_SSP_TABLE, data);
+		break;
+	case MSR_IA32_U_CET:
+		if (!cet_ctl_access_allowed(vcpu, msr_info))
+			return 1;
+		if (!cet_ctl_write_allowed(vcpu, msr_info))
+			return 1;
+		wrmsrl(MSR_IA32_U_CET, data);
+		break;
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		if (!cet_ssp_access_allowed(vcpu, msr_info))
+			return 1;
+		if (!cet_ssp_write_allowed(vcpu, msr_info))
+			return 1;
+		wrmsrl(msr_info->index, data);
+		break;
 	case MSR_TSC_AUX:
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6dbe77365b22..7de6faa6aa51 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1186,6 +1186,10 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+
+	MSR_IA32_XSS, MSR_IA32_U_CET, MSR_IA32_S_CET,
+	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
+	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
 };
 
 static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
@@ -1468,6 +1472,13 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 		 * invokes 64-bit SYSENTER.
 		 */
 		data = get_canonical(data, vcpu_virt_addr_bits(vcpu));
+		break;
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+	case MSR_IA32_U_CET:
+	case MSR_IA32_S_CET:
+	case MSR_IA32_INT_SSP_TAB:
+		if (is_noncanonical_address(data, vcpu))
+			return 1;
 	}
 
 	msr.data = data;
-- 
2.17.2



* Re: [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag
  2019-12-27  2:11 ` [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag Yang Weijiang
@ 2020-03-03 21:42   ` Sean Christopherson
  2020-03-04  8:44     ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2020-03-03 21:42 UTC (permalink / raw)
  To: Yang Weijiang; +Cc: kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Fri, Dec 27, 2019 at 10:11:28AM +0800, Yang Weijiang wrote:
> @@ -298,7 +298,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
>   * In future, applicable XSS state bits can be added here
>   * to make them available to KVM and guest.
>   */
> -#define KVM_SUPPORTED_XSS	0
> +#define KVM_SUPPORTED_XSS	(XFEATURE_MASK_CET_USER \
> +				| XFEATURE_MASK_CET_KERNEL)

My preference would be to put the operator on the previous line, though I
realize this diverges from other KVM behavior.  I find it much easier to
read with the names aligned.

#define KVM_SUPPORTED_XSS	(XFEATURE_MASK_CET_USER | \
				 XFEATURE_MASK_CET_KERNEL)
>  
>  extern u64 host_xcr0;
>  
> -- 
> 2.17.2
> 


* Re: [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs
  2019-12-27  2:11 ` [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs Yang Weijiang
@ 2020-03-03 21:51   ` Sean Christopherson
  2020-03-04  8:46     ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2020-03-03 21:51 UTC (permalink / raw)
  To: Yang Weijiang; +Cc: kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Fri, Dec 27, 2019 at 10:11:29AM +0800, Yang Weijiang wrote:
> CET MSRs are passed through to the Guest directly to enhance performance.
> CET runtime control settings are stored in MSR_IA32_{U,S}_CET,
> Shadow Stack Pointers (SSP) are stored in MSR_IA32_PL{0,1,2,3}_SSP, and
> the SSP table base address is stored in MSR_IA32_INT_SSP_TAB;
> these MSRs are defined in the kernel and re-used here.
> 
> MSR_IA32_U_CET and MSR_IA32_PL3_SSP are used for user-mode protection.
> Their contents can differ from process to process, so the kernel
> needs to save/restore them during context switches; it therefore makes
> sense to pass them through so that the guest kernel can
> use XSAVES/XRSTORS to operate on them efficiently. The other MSRs are
> used for non-user-mode protection. See the CET spec for detailed info.
> 
> The difference between the CET VMCS state fields and the xsave components
> is that the former are used for CET state storage across VM-Entry/VM-Exit,
> whereas the latter are used for state retention across Guest task/process
> switches.
> 
> Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/cpuid.h   |  2 ++
>  arch/x86/kvm/vmx/vmx.c | 48 ++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 50 insertions(+)
> 
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index d78a61408243..1d77b880084d 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -27,6 +27,8 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
>  
>  int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
>  
> +u64 kvm_supported_xss(void);
> +
>  static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.maxphyaddr;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 477173e4a85d..61fc846c7ef3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7091,6 +7091,52 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
>  		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
>  }
>  
> +static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
> +
> +	/*
> +	 * U_CET is required for USER CET, per the CET spec.; meanwhile U_CET and
> +	 * PL3_SSP are a bundle for USER CET xsaves.
> +	 */
> +	if ((kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
> +	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> +	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW);
> +	} else {
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW, true);
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW, true);
> +	}

I prefer the style of pt_update_intercept_for_msr(), e.g.

	flag = (kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
		(guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
		 guest_cpuid_has(vcpu, X86_FEATURE_IBT));

	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW, flag);
	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW, flag);


	flag = (kvm_supported_xss() & XFEATURE_MASK_CET_KERNEL) &&
		(guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
		 guest_cpuid_has(vcpu, X86_FEATURE_IBT));

	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW, flag);
	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW, flag);
	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW, flag);
	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW, flag);

	/* SSP_TAB only available for KERNEL SHSTK.*/
	flag &= guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, flag);


> +	/*
> +	 * S_CET is required for KERNEL CET, meanwhile PL0_SSP ... PL2_SSP are a bundle
> +	 * for CET KERNEL xsaves.
> +	 */
> +	if ((kvm_supported_xss() & XFEATURE_MASK_CET_KERNEL) &&
> +	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> +	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW);
> +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW);
> +
> +		/* SSP_TAB only available for KERNEL SHSTK.*/
> +		if (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
> +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
> +						      MSR_TYPE_RW);
> +		else
> +			vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
> +						  MSR_TYPE_RW, true);
> +	} else {
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW, true);
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW, true);
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW, true);
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW, true);
> +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, true);
> +	}
> +}
> +
>  static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -7115,6 +7161,8 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
>  			guest_cpuid_has(vcpu, X86_FEATURE_INTEL_PT))
>  		update_intel_pt_cfg(vcpu);
> +
> +	vmx_update_intercept_for_cet_msr(vcpu);
>  }
>  
>  static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
> -- 
> 2.17.2
> 


* Re: [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit
  2019-12-27  2:11 ` [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit Yang Weijiang
@ 2020-03-03 22:06   ` Sean Christopherson
  2020-03-04  8:55     ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2020-03-03 22:06 UTC (permalink / raw)
  To: Yang Weijiang; +Cc: kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Fri, Dec 27, 2019 at 10:11:30AM +0800, Yang Weijiang wrote:
> "Load {guest,host} CET state" bit controls whether guest/host
> CET states will be loaded at VM entry/exit. Before doing that,
> KVM needs to check if CET can be enabled both on host and guest.
> 
> Note:
> 1)The processor does not allow CR4.CET to be set if CR0.WP = 0,
>   similarly, it does not allow CR0.WP to be cleared while CR4.CET = 1.
>   In either case, KVM would inject #GP to guest.
> 
> 2)SHSTK and IBT features share one control MSR:
>   MSR_IA32_{U,S}_CET, which means it's difficult to hide
>   one feature from another in the case of SHSTK != IBT,
>   after discussed in community, it's agreed to allow Guest
>   control two features independently as it won't introduce
>   security hole.
> 
> Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/kvm/vmx/capabilities.h | 10 ++++++
>  arch/x86/kvm/vmx/vmx.c          | 56 +++++++++++++++++++++++++++++++--
>  arch/x86/kvm/x86.c              |  3 ++
>  3 files changed, 67 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index 7aa69716d516..4a67d35a42a2 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -106,6 +106,16 @@ static inline bool vmx_mpx_supported(void)
>  		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
>  }
>  
> +static inline bool cpu_has_load_guest_cet_states_ctrl(void)
> +{
> +	return ((vmcs_config.vmentry_ctrl) & VM_ENTRY_LOAD_GUEST_CET_STATE);
> +}
> +
> +static inline bool cpu_has_load_host_cet_states_ctrl(void)
> +{
> +	return ((vmcs_config.vmexit_ctrl) & VM_EXIT_LOAD_HOST_CET_STATE);

No need for parantheses around vmcs_config.vmexit_ctrl.

> +}
> +
>  static inline bool cpu_has_vmx_tpr_shadow(void)
>  {
>  	return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 61fc846c7ef3..95063cc7da89 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -44,6 +44,7 @@
>  #include <asm/spec-ctrl.h>
>  #include <asm/virtext.h>
>  #include <asm/vmx.h>
> +#include <asm/cet.h>
>  
>  #include "capabilities.h"
>  #include "cpuid.h"
> @@ -2445,7 +2446,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
>  	      VM_EXIT_LOAD_IA32_EFER |
>  	      VM_EXIT_CLEAR_BNDCFGS |
>  	      VM_EXIT_PT_CONCEAL_PIP |
> -	      VM_EXIT_CLEAR_IA32_RTIT_CTL;
> +	      VM_EXIT_CLEAR_IA32_RTIT_CTL |
> +	      VM_EXIT_LOAD_HOST_CET_STATE;
>  	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
>  				&_vmexit_control) < 0)
>  		return -EIO;
> @@ -2469,7 +2471,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
>  	      VM_ENTRY_LOAD_IA32_EFER |
>  	      VM_ENTRY_LOAD_BNDCFGS |
>  	      VM_ENTRY_PT_CONCEAL_PIP |
> -	      VM_ENTRY_LOAD_IA32_RTIT_CTL;
> +	      VM_ENTRY_LOAD_IA32_RTIT_CTL |
> +	      VM_ENTRY_LOAD_GUEST_CET_STATE;
>  	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
>  				&_vmentry_control) < 0)
>  		return -EIO;
> @@ -3027,6 +3030,25 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  	vmcs_writel(GUEST_CR3, guest_cr3);
>  }
>  
> +bool is_cet_bit_allowed(struct kvm_vcpu *vcpu)

This should be static.  I'd also include cr4 in the name, e.g.

  static bool is_cr4_set_allowed(...)


> +{
> +	u64 kvm_xss = kvm_supported_xss();
> +	unsigned long cr0;
> +	bool cet_allowed;
> +
> +	cr0 = kvm_read_cr0(vcpu);
> +
> +	/* Right now, only user-mode CET is supported.*/
> +	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
> +		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> +		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));

Probably makes sense to add an "is_cet_supported()" helper.  That'd reduce
the amount of copy+paste and would probably add clarity to most flows.
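
E.g., a minimal untested sketch, just wrapping the open-coded check:

	static bool is_cet_supported(struct kvm_vcpu *vcpu)
	{
		return (kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
		       (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
			guest_cpuid_has(vcpu, X86_FEATURE_IBT));
	}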

> +
> +	if ((cr0 & X86_CR0_WP) && cet_allowed)
> +		return true;
> +
> +	return false;

	return (cr0 & X86_CR0_WP) && cet_allowed;

Even better, especially if you add is_cet_supported(), to avoid VMREAD of
CR0 when CET isn't supported.

	return is_cet_supported() && (kvm_read_cr0(vcpu) & X86_CR0_WP);

At that point, you can probably even forgo the helper, e.g.

	if ((cr4 & X86_CR4_CET) &&
	    (!is_cet_supported() || !(kvm_read_cr0(vcpu) & X86_CR0_WP)))
		return 1;

> +}
> +
>  int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -3067,6 +3089,9 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  			return 1;
>  	}
>  
> +	if ((cr4 & X86_CR4_CET) && !is_cet_bit_allowed(vcpu))
> +		return 1;
> +
>  	if (vmx->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
>  		return 1;
>  
> @@ -3930,6 +3955,12 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
>  
>  	if (cpu_has_load_ia32_efer())
>  		vmcs_write64(HOST_IA32_EFER, host_efer);
> +
> +	if (cpu_has_load_host_cet_states_ctrl()) {
> +		vmcs_writel(HOST_S_CET, 0);
> +		vmcs_writel(HOST_INTR_SSP_TABLE, 0);
> +		vmcs_writel(HOST_SSP, 0);
> +	}
>  }
>  
>  void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
> @@ -6499,7 +6530,9 @@ bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
>  static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	u64 kvm_xss = kvm_supported_xss();
>  	unsigned long cr3, cr4;
> +	bool cet_allowed;
>  
>  	/* Record the guest's net vcpu time for enforced NMI injections. */
>  	if (unlikely(!enable_vnmi &&
> @@ -6530,6 +6563,25 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  		vmx->loaded_vmcs->host_state.cr3 = cr3;
>  	}
>  
> +	/* Right now, only user-mode CET is supported.*/
> +	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
> +		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> +		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));
> +
> +	if (cpu_has_load_guest_cet_states_ctrl() && cet_allowed)
> +		vmcs_set_bits(VM_ENTRY_CONTROLS,
> +			      VM_ENTRY_LOAD_GUEST_CET_STATE);
> +	else
> +		vmcs_clear_bits(VM_ENTRY_CONTROLS,
> +				VM_ENTRY_LOAD_GUEST_CET_STATE);
> +
> +	if (cpu_has_load_host_cet_states_ctrl() && cet_allowed)
> +		vmcs_set_bits(VM_EXIT_CONTROLS,
> +			      VM_EXIT_LOAD_HOST_CET_STATE);
> +	else
> +		vmcs_clear_bits(VM_EXIT_CONTROLS,
> +				VM_EXIT_LOAD_HOST_CET_STATE);

Why are you clearing VMCS bits in vmx_vcpu_run()?  Unless I'm missing
something, these can go in vmx_cpuid_update().
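
E.g., an untested sketch of doing it in vmx_cpuid_update(), reusing the
is_cet_supported() helper suggested above:

	if (cpu_has_load_guest_cet_states_ctrl() && is_cet_supported(vcpu))
		vmcs_set_bits(VM_ENTRY_CONTROLS, VM_ENTRY_LOAD_GUEST_CET_STATE);
	else
		vmcs_clear_bits(VM_ENTRY_CONTROLS, VM_ENTRY_LOAD_GUEST_CET_STATE);

	if (cpu_has_load_host_cet_states_ctrl() && is_cet_supported(vcpu))
		vmcs_set_bits(VM_EXIT_CONTROLS, VM_EXIT_LOAD_HOST_CET_STATE);
	else
		vmcs_clear_bits(VM_EXIT_CONTROLS, VM_EXIT_LOAD_HOST_CET_STATE);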

> +
>  	cr4 = cr4_read_shadow();
>  	if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
>  		vmcs_writel(HOST_CR4, cr4);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a9b1140d0508..b27d97eaec24 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -788,6 +788,9 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
>  		return 1;
>  
> +	if (!(cr0 & X86_CR0_WP) && kvm_read_cr4_bits(vcpu, X86_CR4_CET))
> +		return 1;
> +
>  	kvm_x86_ops->set_cr0(vcpu, cr0);
>  
>  	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
> -- 
> 2.17.2
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs
  2019-12-27  2:11 ` [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs Yang Weijiang
@ 2020-03-03 22:28   ` Sean Christopherson
  2020-03-04 15:18     ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2020-03-03 22:28 UTC (permalink / raw)
  To: Yang Weijiang; +Cc: kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

Subject should be something like "Enable CET virtualization", or maybe
move CPUID changes to a separate final patch?

On Fri, Dec 27, 2019 at 10:11:33AM +0800, Yang Weijiang wrote:
> Guest CET states are stored in two different places: states managed
> with XSAVES/XRSTORS, as saved/restored in the previous patch, can
> be read/written directly from/to the MSRs, while states stored in
> VMCS fields are accessed via vmcs_read/vmcs_write.
> 
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   3 +-
>  arch/x86/kvm/cpuid.c            |   5 +-
>  arch/x86/kvm/vmx/vmx.c          | 138 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/x86.c              |  11 +++
>  4 files changed, 154 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 64bf379381e4..34140462084f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -90,7 +90,8 @@
>  			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>  			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>  			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> -			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
> +			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
> +			  | X86_CR4_CET))
>  
>  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>  
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 126a31b99823..4414bd110f3c 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -385,13 +385,14 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
>  		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
>  		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
>  		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> -		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
> +		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | F(SHSTK) |
> +		0 /*WAITPKG*/;
>  
>  	/* cpuid 7.0.edx*/
>  	const u32 kvm_cpuid_7_0_edx_x86_features =
>  		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
>  		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> -		F(MD_CLEAR);
> +		F(MD_CLEAR) | F(IBT);
>  
>  	/* cpuid 7.1.eax */
>  	const u32 kvm_cpuid_7_1_eax_x86_features =
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 0a75b65d03f0..52ac67604026 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1763,6 +1763,96 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
>  	return 0;
>  }
>  
> +#define CET_MSR_RSVD_BITS_1    0x3
> +#define CET_MSR_RSVD_BITS_2   (0xF << 6)

Would it make sense to use GENMASK?
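
E.g., the equivalent masks (same bit ranges as above):

	#define CET_MSR_RSVD_BITS_1	GENMASK(1, 0)
	#define CET_MSR_RSVD_BITS_2	GENMASK(9, 6)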

> +static bool cet_ssp_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> +{
> +	u64 data = msr->data;
> +	u32 high_word = data >> 32;
> +
> +	if (is_64_bit_mode(vcpu)) {
> +		if (data & CET_MSR_RSVD_BITS_1)

This looks odd.  I assume it should look more like cet_ctl_write_allowed()?
E.g.

	if (data & CET_MSR_RSVD_BITS_1)
		return false;

	if (!is_64_bit_mode(vcpu) && high_word)
		return false;

> +			return false;
> +	} else if (high_word) {
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static bool cet_ctl_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> +{
> +	u64 data = msr->data;
> +	u32 high_word = data >> 32;
> +
> +	if (data & CET_MSR_RSVD_BITS_2)
> +		return false;
> +
> +	if (!is_64_bit_mode(vcpu) && high_word)
> +		return false;
> +
> +	return true;
> +}
> +
> +static bool cet_ssp_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> +{
> +	u64 kvm_xss;
> +	u32 index = msr->index;
> +
> +	if (is_guest_mode(vcpu))

Hmm, this seems wrong, e.g. shouldn't WRMSR be allowed if L1 passes the MSR
to L2, which is the only way to reach this, if I'm not mistaken.

> +		return false;
> +
> +	if (!boot_cpu_has(X86_FEATURE_SHSTK))
> +		return false;
> +
> +	if (!msr->host_initiated &&
> +	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
> +		return false;
> +
> +	if (index == MSR_IA32_INT_SSP_TAB)
> +		return true;
> +
> +	kvm_xss = kvm_supported_xss();
> +
> +	if (index == MSR_IA32_PL3_SSP) {
> +		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
> +			return false;
> +	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static bool cet_ctl_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> +{
> +	u64 kvm_xss;
> +	u32 index = msr->index;
> +
> +	if (is_guest_mode(vcpu))
> +		return false;
> +
> +	kvm_xss = kvm_supported_xss();
> +
> +	if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
> +	    !boot_cpu_has(X86_FEATURE_IBT))
> +		return false;
> +
> +	if (!msr->host_initiated &&
> +	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) &&
> +	    !guest_cpuid_has(vcpu, X86_FEATURE_IBT))
> +		return false;
> +
> +	if (index == MSR_IA32_U_CET) {
> +		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
> +			return false;
> +	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
> +		return false;
> +	}
> +
> +	return true;
> +}
>  /*
>   * Reads an msr value (of 'msr_index') into 'pdata'.
>   * Returns 0 on success, non-0 otherwise.
> @@ -1886,6 +1976,26 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		else
>  			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
>  		break;
> +	case MSR_IA32_S_CET:
> +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> +			return 1;
> +		msr_info->data = vmcs_readl(GUEST_S_CET);
> +		break;
> +	case MSR_IA32_INT_SSP_TAB:
> +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> +			return 1;
> +		msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
> +		break;
> +	case MSR_IA32_U_CET:
> +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> +			return 1;
> +		rdmsrl(MSR_IA32_U_CET, msr_info->data);
> +		break;
> +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> +			return 1;
> +		rdmsrl(msr_info->index, msr_info->data);

Ugh, thought of another problem.  If a SoftIRQ runs after an IRQ it can
load the kernel FPU state.  So for all the XSAVES MSRs we'll need a helper
similar to vmx_write_guest_kernel_gs_base(), except XSAVES has to be even
more restrictive and disable IRQs entirely.  E.g.

static void vmx_get_xsave_msr(struct msr_data *msr_info)
{
	local_irq_disable();
	if (test_thread_flag(TIF_NEED_FPU_LOAD))
		switch_fpu_return();
	rdmsrl(msr_info->index, msr_info->data);
	local_irq_enable();
}
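
And presumably a mirrored helper for the write side (the
vmx_set_xsave_msr name is just illustrative, also untested):

	static void vmx_set_xsave_msr(struct msr_data *msr_info)
	{
		local_irq_disable();
		if (test_thread_flag(TIF_NEED_FPU_LOAD))
			switch_fpu_return();
		wrmsrl(msr_info->index, msr_info->data);
		local_irq_enable();
	}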

> +		break;
>  	case MSR_TSC_AUX:
>  		if (!msr_info->host_initiated &&
>  		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> @@ -2147,6 +2257,34 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		else
>  			vmx->pt_desc.guest.addr_a[index / 2] = data;
>  		break;
> +	case MSR_IA32_S_CET:
> +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> +			return 1;
> +		if (!cet_ctl_write_allowed(vcpu, msr_info))
> +			return 1;
> +		vmcs_writel(GUEST_S_CET, data);
> +		break;
> +	case MSR_IA32_INT_SSP_TAB:
> +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> +			return 1;
> +		if (!is_64_bit_mode(vcpu))
> +			return 1;
> +		vmcs_writel(GUEST_INTR_SSP_TABLE, data);
> +		break;
> +	case MSR_IA32_U_CET:
> +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> +			return 1;
> +		if (!cet_ctl_write_allowed(vcpu, msr_info))
> +			return 1;
> +		wrmsrl(MSR_IA32_U_CET, data);
> +		break;
> +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> +			return 1;
> +		if (!cet_ssp_write_allowed(vcpu, msr_info))
> +			return 1;
> +		wrmsrl(msr_info->index, data);
> +		break;
>  	case MSR_TSC_AUX:
>  		if (!msr_info->host_initiated &&
>  		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6dbe77365b22..7de6faa6aa51 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1186,6 +1186,10 @@ static const u32 msrs_to_save_all[] = {
>  	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
>  	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
>  	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
> +
> +	MSR_IA32_XSS, MSR_IA32_U_CET, MSR_IA32_S_CET,
> +	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
> +	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
>  };
>  
>  static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
> @@ -1468,6 +1472,13 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
>  		 * invokes 64-bit SYSENTER.
>  		 */
>  		data = get_canonical(data, vcpu_virt_addr_bits(vcpu));
> +		break;
> +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> +	case MSR_IA32_U_CET:
> +	case MSR_IA32_S_CET:
> +	case MSR_IA32_INT_SSP_TAB:
> +		if (is_noncanonical_address(data, vcpu))
> +			return 1;
>  	}
>  
>  	msr.data = data;
> -- 
> 2.17.2
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag
  2020-03-03 21:42   ` Sean Christopherson
@ 2020-03-04  8:44     ` Yang Weijiang
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2020-03-04  8:44 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yang Weijiang, kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Tue, Mar 03, 2020 at 01:42:54PM -0800, Sean Christopherson wrote:
> On Fri, Dec 27, 2019 at 10:11:28AM +0800, Yang Weijiang wrote:
> > @@ -298,7 +298,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
> >   * In future, applicable XSS state bits can be added here
> >   * to make them available to KVM and guest.
> >   */
> > -#define KVM_SUPPORTED_XSS	0
> > +#define KVM_SUPPORTED_XSS	(XFEATURE_MASK_CET_USER \
> > +				| XFEATURE_MASK_CET_KERNEL)
> 
> My preference would be to put the operator on the previous line, though I
> realize this diverges from other KVM behavior.  I find it much easier to
read with the names aligned.
> 
> #define KVM_SUPPORTED_XSS	(XFEATURE_MASK_CET_USER | \
> 				 XFEATURE_MASK_CET_KERNEL)

Yep, I also feel it's preferable now :-), thanks!
> >  
> >  extern u64 host_xcr0;
> >  
> > -- 
> > 2.17.2
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs
  2020-03-03 21:51   ` Sean Christopherson
@ 2020-03-04  8:46     ` Yang Weijiang
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2020-03-04  8:46 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yang Weijiang, kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Tue, Mar 03, 2020 at 01:51:53PM -0800, Sean Christopherson wrote:
> On Fri, Dec 27, 2019 at 10:11:29AM +0800, Yang Weijiang wrote:
> > CET MSRs are passed through to the guest directly to enhance
> > performance. CET runtime control settings are stored in
> > MSR_IA32_{U,S}_CET, Shadow Stack Pointers (SSP) are stored in
> > MSR_IA32_PL{0,1,2,3}_SSP, and the SSP table base address is
> > stored in MSR_IA32_INT_SSP_TAB; these MSRs are defined in the
> > kernel and re-used here.
> > 
> > MSR_IA32_U_CET and MSR_IA32_PL3_SSP are used for user-mode
> > protection. Their contents can differ from process to process,
> > so the kernel needs to save/restore them during context switch;
> > it makes sense to pass them through so that the guest kernel can
> > use xsaves/xrstors to operate on them efficiently. The other MSRs
> > are used for non-user-mode protection. See the CET spec for
> > detailed info.
> > 
> > The difference between the CET VMCS state fields and the xsave
> > components is that the former are used for CET state storage
> > during VM entry/exit, whereas the latter are used for state
> > retention across guest task/process switches.
> > 
> > Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > ---
> >  arch/x86/kvm/cpuid.h   |  2 ++
> >  arch/x86/kvm/vmx/vmx.c | 48 ++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 50 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > index d78a61408243..1d77b880084d 100644
> > --- a/arch/x86/kvm/cpuid.h
> > +++ b/arch/x86/kvm/cpuid.h
> > @@ -27,6 +27,8 @@ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
> >  
> >  int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
> >  
> > +u64 kvm_supported_xss(void);
> > +
> >  static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
> >  {
> >  	return vcpu->arch.maxphyaddr;
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 477173e4a85d..61fc846c7ef3 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7091,6 +7091,52 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
> >  		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
> >  }
> >  
> > +static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
> > +
> > +	/*
> > +	 * U_CET is required for USER CET, per the CET spec; meanwhile U_CET and
> > +	 * PL3_SSP are a bundle for USER CET xsaves.
> > +	 */
> > +	if ((kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
> > +	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> > +	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW);
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW);
> > +	} else {
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW, true);
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW, true);
> > +	}
> 
> I prefer the style of pt_update_intercept_for_msr(), e.g.
> 
> 	flag = (kvm_supported_xss() & XFEATURE_MASK_CET_USER) &&
> 		(guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> 		 guest_cpuid_has(vcpu, X86_FEATURE_IBT));
> 
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_U_CET, MSR_TYPE_RW, flag);
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL3_SSP, MSR_TYPE_RW, flag);
> 
> 
> 	flag = (kvm_supported_xss() & XFEATURE_MASK_CET_KERNEL) &&
> 		(guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> 		 guest_cpuid_has(vcpu, X86_FEATURE_IBT));
> 
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW, flag);
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW, flag);
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW, flag);
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW, flag);
> 
> 	/* SSP_TAB only available for KERNEL SHSTK.*/
> 	flag &= guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
> 	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, flag);
> 
>
Sure, will change that, thank you!

> > +	/*
> > +	 * S_CET is required for KERNEL CET, meanwhile PL0_SSP ... PL2_SSP are a bundle
> > +	 * for CET KERNEL xsaves.
> > +	 */
> > +	if ((kvm_supported_xss() & XFEATURE_MASK_CET_KERNEL) &&
> > +	    (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> > +	    guest_cpuid_has(vcpu, X86_FEATURE_IBT))) {
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW);
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW);
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW);
> > +		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW);
> > +
> > +		/* SSP_TAB only available for KERNEL SHSTK.*/
> > +		if (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
> > +			vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
> > +						      MSR_TYPE_RW);
> > +		else
> > +			vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB,
> > +						  MSR_TYPE_RW, true);
> > +	} else {
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_S_CET, MSR_TYPE_RW, true);
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL0_SSP, MSR_TYPE_RW, true);
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL1_SSP, MSR_TYPE_RW, true);
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_PL2_SSP, MSR_TYPE_RW, true);
> > +		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, true);
> > +	}
> > +}
> > +
> >  static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > @@ -7115,6 +7161,8 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> >  	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> >  			guest_cpuid_has(vcpu, X86_FEATURE_INTEL_PT))
> >  		update_intel_pt_cfg(vcpu);
> > +
> > +	vmx_update_intercept_for_cet_msr(vcpu);
> >  }
> >  
> >  static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
> > -- 
> > 2.17.2
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit
  2020-03-03 22:06   ` Sean Christopherson
@ 2020-03-04  8:55     ` Yang Weijiang
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2020-03-04  8:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yang Weijiang, kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Tue, Mar 03, 2020 at 02:06:18PM -0800, Sean Christopherson wrote:
> On Fri, Dec 27, 2019 at 10:11:30AM +0800, Yang Weijiang wrote:
> > "Load {guest,host} CET state" bit controls whether guest/host
> > CET states will be loaded at VM entry/exit. Before doing that,
> > KVM needs to check if CET can be enabled both on host and guest.
> > 
> > Note:
> > 1)The processor does not allow CR4.CET to be set if CR0.WP = 0,
> >   similarly, it does not allow CR0.WP to be cleared while CR4.CET = 1.
> >   In either case, KVM would inject #GP to guest.
> > 
> > 2)SHSTK and IBT features share one control MSR:
> >   MSR_IA32_{U,S}_CET, which means it's difficult to hide
> >   one feature from another in the case of SHSTK != IBT,
> >   after discussed in community, it's agreed to allow Guest
> >   control two features independently as it won't introduce
> >   security hole.
> > 
> > Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > ---
> >  arch/x86/kvm/vmx/capabilities.h | 10 ++++++
> >  arch/x86/kvm/vmx/vmx.c          | 56 +++++++++++++++++++++++++++++++--
> >  arch/x86/kvm/x86.c              |  3 ++
> >  3 files changed, 67 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> > index 7aa69716d516..4a67d35a42a2 100644
> > --- a/arch/x86/kvm/vmx/capabilities.h
> > +++ b/arch/x86/kvm/vmx/capabilities.h
> > @@ -106,6 +106,16 @@ static inline bool vmx_mpx_supported(void)
> >  		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
> >  }
> >  
> > +static inline bool cpu_has_load_guest_cet_states_ctrl(void)
> > +{
> > +	return ((vmcs_config.vmentry_ctrl) & VM_ENTRY_LOAD_GUEST_CET_STATE);
> > +}
> > +
> > +static inline bool cpu_has_load_host_cet_states_ctrl(void)
> > +{
> > +	return ((vmcs_config.vmexit_ctrl) & VM_EXIT_LOAD_HOST_CET_STATE);
> 
> No need for parentheses around vmcs_config.vmexit_ctrl.
> 
> > +}
> > +
> >  static inline bool cpu_has_vmx_tpr_shadow(void)
> >  {
> >  	return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW;
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 61fc846c7ef3..95063cc7da89 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -44,6 +44,7 @@
> >  #include <asm/spec-ctrl.h>
> >  #include <asm/virtext.h>
> >  #include <asm/vmx.h>
> > +#include <asm/cet.h>
> >  
> >  #include "capabilities.h"
> >  #include "cpuid.h"
> > @@ -2445,7 +2446,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
> >  	      VM_EXIT_LOAD_IA32_EFER |
> >  	      VM_EXIT_CLEAR_BNDCFGS |
> >  	      VM_EXIT_PT_CONCEAL_PIP |
> > -	      VM_EXIT_CLEAR_IA32_RTIT_CTL;
> > +	      VM_EXIT_CLEAR_IA32_RTIT_CTL |
> > +	      VM_EXIT_LOAD_HOST_CET_STATE;
> >  	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
> >  				&_vmexit_control) < 0)
> >  		return -EIO;
> > @@ -2469,7 +2471,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
> >  	      VM_ENTRY_LOAD_IA32_EFER |
> >  	      VM_ENTRY_LOAD_BNDCFGS |
> >  	      VM_ENTRY_PT_CONCEAL_PIP |
> > -	      VM_ENTRY_LOAD_IA32_RTIT_CTL;
> > +	      VM_ENTRY_LOAD_IA32_RTIT_CTL |
> > +	      VM_ENTRY_LOAD_GUEST_CET_STATE;
> >  	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
> >  				&_vmentry_control) < 0)
> >  		return -EIO;
> > @@ -3027,6 +3030,25 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> >  	vmcs_writel(GUEST_CR3, guest_cr3);
> >  }
> >  
> > +bool is_cet_bit_allowed(struct kvm_vcpu *vcpu)
> 
> This should be static.  I'd also include cr4 in the name, e.g.
> 
>   static bool is_cr4_set_allowed(...)
> 
>
Got it, thank you!
> > +{
> > +	u64 kvm_xss = kvm_supported_xss();
> > +	unsigned long cr0;
> > +	bool cet_allowed;
> > +
> > +	cr0 = kvm_read_cr0(vcpu);
> > +
> > +	/* Right now, only user-mode CET is supported.*/
> > +	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
> > +		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> > +		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));
> 
> Probably makes sense to add an "is_cet_supported()" helper.  That'd reduce
> the amount of copy+paste and would probably add clarity to most flows.

Good suggestion, wrapping the code in a helper looks better :-)
> 
> > +
> > +	if ((cr0 & X86_CR0_WP) && cet_allowed)
> > +		return true;
> > +
> > +	return false;
> 
> 	return (cr0 & X86_CR0_WP) && cet_allowed;
> 
> Even better, especially if you add is_cet_supported(), to avoid VMREAD of
> CR0 when CET isn't supported.
> 
> 	return is_cet_supported() && (kvm_read_cr0(vcpu) & X86_CR0_WP);
> 
> At that point, you can probably even forgo the helper, e.g.
> 
> 	if ((cr4 & X86_CR4_CET) &&
> 	    (!is_cet_supported() || !(kvm_read_cr0(vcpu) & X86_CR0_WP)))
> 		return 1;
> 
OK, will tweak the code, thank you!
> > +}
> > +
> >  int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > @@ -3067,6 +3089,9 @@ int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
> >  			return 1;
> >  	}
> >  
> > +	if ((cr4 & X86_CR4_CET) && !is_cet_bit_allowed(vcpu))
> > +		return 1;
> > +
> >  	if (vmx->nested.vmxon && !nested_cr4_valid(vcpu, cr4))
> >  		return 1;
> >  
> > @@ -3930,6 +3955,12 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
> >  
> >  	if (cpu_has_load_ia32_efer())
> >  		vmcs_write64(HOST_IA32_EFER, host_efer);
> > +
> > +	if (cpu_has_load_host_cet_states_ctrl()) {
> > +		vmcs_writel(HOST_S_CET, 0);
> > +		vmcs_writel(HOST_INTR_SSP_TABLE, 0);
> > +		vmcs_writel(HOST_SSP, 0);
> > +	}
> >  }
> >  
> >  void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
> > @@ -6499,7 +6530,9 @@ bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
> >  static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +	u64 kvm_xss = kvm_supported_xss();
> >  	unsigned long cr3, cr4;
> > +	bool cet_allowed;
> >  
> >  	/* Record the guest's net vcpu time for enforced NMI injections. */
> >  	if (unlikely(!enable_vnmi &&
> > @@ -6530,6 +6563,25 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >  		vmx->loaded_vmcs->host_state.cr3 = cr3;
> >  	}
> >  
> > +	/* Right now, only user-mode CET is supported.*/
> > +	cet_allowed = (kvm_xss & XFEATURE_MASK_CET_USER) &&
> > +		      (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
> > +		      guest_cpuid_has(vcpu, X86_FEATURE_IBT));
> > +
> > +	if (cpu_has_load_guest_cet_states_ctrl() && cet_allowed)
> > +		vmcs_set_bits(VM_ENTRY_CONTROLS,
> > +			      VM_ENTRY_LOAD_GUEST_CET_STATE);
> > +	else
> > +		vmcs_clear_bits(VM_ENTRY_CONTROLS,
> > +				VM_ENTRY_LOAD_GUEST_CET_STATE);
> > +
> > +	if (cpu_has_load_host_cet_states_ctrl() && cet_allowed)
> > +		vmcs_set_bits(VM_EXIT_CONTROLS,
> > +			      VM_EXIT_LOAD_HOST_CET_STATE);
> > +	else
> > +		vmcs_clear_bits(VM_EXIT_CONTROLS,
> > +				VM_EXIT_LOAD_HOST_CET_STATE);
> 
> Why are you clearing VMCS bits in vmx_vcpu_run()?  Unless I'm missing
> something, these can go in vmx_cpuid_update().
Yes, I need to find a good place to do the work, thank you for pointing
it out.
> 
> > +
> >  	cr4 = cr4_read_shadow();
> >  	if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
> >  		vmcs_writel(HOST_CR4, cr4);
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index a9b1140d0508..b27d97eaec24 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -788,6 +788,9 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >  	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
> >  		return 1;
> >  
> > +	if (!(cr0 & X86_CR0_WP) && kvm_read_cr4_bits(vcpu, X86_CR4_CET))
> > +		return 1;
> > +
> >  	kvm_x86_ops->set_cr0(vcpu, cr0);
> >  
> >  	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
> > -- 
> > 2.17.2
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs
  2020-03-03 22:28   ` Sean Christopherson
@ 2020-03-04 15:18     ` Yang Weijiang
  2020-03-04 15:45       ` Sean Christopherson
  0 siblings, 1 reply; 20+ messages in thread
From: Yang Weijiang @ 2020-03-04 15:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yang Weijiang, kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Tue, Mar 03, 2020 at 02:28:27PM -0800, Sean Christopherson wrote:
> Subject should be something like "Enable CET virtualization", or maybe
> move CPUID changes to a separate final patch?
>
OK, let me put the CPUID/CR4.CET changes into a separate patch.

> On Fri, Dec 27, 2019 at 10:11:33AM +0800, Yang Weijiang wrote:
> > Guest CET states are stored in two different places: states managed
> > with XSAVES/XRSTORS, as saved/restored in the previous patch, can
> > be read/written directly from/to the MSRs, while states stored in
> > VMCS fields are accessed via vmcs_read/vmcs_write.
> > 
> > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |   3 +-
> >  arch/x86/kvm/cpuid.c            |   5 +-
> >  arch/x86/kvm/vmx/vmx.c          | 138 ++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/x86.c              |  11 +++
> >  4 files changed, 154 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 64bf379381e4..34140462084f 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -90,7 +90,8 @@
> >  			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
> >  			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
> >  			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> > -			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
> > +			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
> > +			  | X86_CR4_CET))
> >  
> >  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
> >  
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index 126a31b99823..4414bd110f3c 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -385,13 +385,14 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
> >  		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
> >  		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
> >  		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
> > -		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/;
> > +		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | F(SHSTK) |
> > +		0 /*WAITPKG*/;
> >  
> >  	/* cpuid 7.0.edx*/
> >  	const u32 kvm_cpuid_7_0_edx_x86_features =
> >  		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
> >  		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
> > -		F(MD_CLEAR);
> > +		F(MD_CLEAR) | F(IBT);
> >  
> >  	/* cpuid 7.1.eax */
> >  	const u32 kvm_cpuid_7_1_eax_x86_features =
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 0a75b65d03f0..52ac67604026 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -1763,6 +1763,96 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
> >  	return 0;
> >  }
> >  
> > +#define CET_MSR_RSVD_BITS_1    0x3
> > +#define CET_MSR_RSVD_BITS_2   (0xF << 6)
> 
> Would it make sense to use GENMASK?
>
Yes, will change it, thank you.

> > +static bool cet_ssp_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > +{
> > +	u64 data = msr->data;
> > +	u32 high_word = data >> 32;
> > +
> > +	if (is_64_bit_mode(vcpu)) {
> > +		if (data & CET_MSR_RSVD_BITS_1)
> 
> This looks odd.  I assume it should look more like cet_ctl_write_allowed()?
> E.g.
> 
> 	if (data & CET_MSR_RSVD_BITS_1)
> 		return false;
> 
> 	if (!is_64_bit_mode(vcpu) && high_word)
> 		return false;
> 
Correct, this looks much better.
> > +			return false;
> > +	} else if (high_word) {
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static bool cet_ctl_write_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > +{
> > +	u64 data = msr->data;
> > +	u32 high_word = data >> 32;
> > +
> > +	if (data & CET_MSR_RSVD_BITS_2)
> > +		return false;
> > +
> > +	if (!is_64_bit_mode(vcpu) && high_word)
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> > +static bool cet_ssp_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > +{
> > +	u64 kvm_xss;
> > +	u32 index = msr->index;
> > +
> > +	if (is_guest_mode(vcpu))
> 
> Hmm, this seems wrong, e.g. shouldn't WRMSR be allowed if L1 passes the MSR
> to L2, which is the only way to reach this, if I'm not mistaken.
> 
Actually I tried to hide the CET feature from the L2 guest, but the
code for this part was broken; I'm still working on it...

> > +		return false;
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_SHSTK))
> > +		return false;
> > +
> > +	if (!msr->host_initiated &&
> > +	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK))
> > +		return false;
> > +
> > +	if (index == MSR_IA32_INT_SSP_TAB)
> > +		return true;
> > +
> > +	kvm_xss = kvm_supported_xss();
> > +
> > +	if (index == MSR_IA32_PL3_SSP) {
> > +		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
> > +			return false;
> > +	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static bool cet_ctl_access_allowed(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > +{
> > +	u64 kvm_xss;
> > +	u32 index = msr->index;
> > +
> > +	if (is_guest_mode(vcpu))
> > +		return false;
> > +
> > +	kvm_xss = kvm_supported_xss();
> > +
> > +	if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
> > +	    !boot_cpu_has(X86_FEATURE_IBT))
> > +		return false;
> > +
> > +	if (!msr->host_initiated &&
> > +	    !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) &&
> > +	    !guest_cpuid_has(vcpu, X86_FEATURE_IBT))
> > +		return false;
> > +
> > +	if (index == MSR_IA32_U_CET) {
> > +		if (!(kvm_xss & XFEATURE_MASK_CET_USER))
> > +			return false;
> > +	} else if (!(kvm_xss & XFEATURE_MASK_CET_KERNEL)) {
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> >  /*
> >   * Reads an msr value (of 'msr_index') into 'pdata'.
> >   * Returns 0 on success, non-0 otherwise.
> > @@ -1886,6 +1976,26 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		else
> >  			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
> >  		break;
> > +	case MSR_IA32_S_CET:
> > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		msr_info->data = vmcs_readl(GUEST_S_CET);
> > +		break;
> > +	case MSR_IA32_INT_SSP_TAB:
> > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
> > +		break;
> > +	case MSR_IA32_U_CET:
> > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		rdmsrl(MSR_IA32_U_CET, msr_info->data);
> > +		break;
> > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		rdmsrl(msr_info->index, msr_info->data);
> 
> Ugh, thought of another problem.  If a SoftIRQ runs after an IRQ it can
> load the kernel FPU state.  So for all the XSAVES MSRs we'll need a helper
> similar to vmx_write_guest_kernel_gs_base(), except XSAVES has to be even
> more restrictive and disable IRQs entirely.  E.g.
> 
> static void vmx_get_xsave_msr(struct msr_data *msr_info)
> {
> 	local_irq_disable();
> 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
> 		switch_fpu_return();
> 	rdmsrl(msr_info->index, msr_info->data);
> 	local_irq_enable();
In this case, would a SoftIRQ destroy the vcpu->arch.guest.fpu state which
had been restored into the XSAVES MSRs we were accessing? So should we
restore guest.fpu again? In the previous patch, we restored guest.fpu
before accessing the XSAVES MSRs.

> }
> 
> > +		break;
> >  	case MSR_TSC_AUX:
> >  		if (!msr_info->host_initiated &&
> >  		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> > @@ -2147,6 +2257,34 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		else
> >  			vmx->pt_desc.guest.addr_a[index / 2] = data;
> >  		break;
> > +	case MSR_IA32_S_CET:
> > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		if (!cet_ctl_write_allowed(vcpu, msr_info))
> > +			return 1;
> > +		vmcs_writel(GUEST_S_CET, data);
> > +		break;
> > +	case MSR_IA32_INT_SSP_TAB:
> > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		if (!is_64_bit_mode(vcpu))
> > +			return 1;
> > +		vmcs_writel(GUEST_INTR_SSP_TABLE, data);
> > +		break;
> > +	case MSR_IA32_U_CET:
> > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		if (!cet_ctl_write_allowed(vcpu, msr_info))
> > +			return 1;
> > +		wrmsrl(MSR_IA32_U_CET, data);
> > +		break;
> > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > +			return 1;
> > +		if (!cet_ssp_write_allowed(vcpu, msr_info))
> > +			return 1;
> > +		wrmsrl(msr_info->index, data);
> > +		break;
> >  	case MSR_TSC_AUX:
> >  		if (!msr_info->host_initiated &&
> >  		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 6dbe77365b22..7de6faa6aa51 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -1186,6 +1186,10 @@ static const u32 msrs_to_save_all[] = {
> >  	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
> >  	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
> >  	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
> > +
> > +	MSR_IA32_XSS, MSR_IA32_U_CET, MSR_IA32_S_CET,
> > +	MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
> > +	MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
> >  };
> >  
> >  static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
> > @@ -1468,6 +1472,13 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
> >  		 * invokes 64-bit SYSENTER.
> >  		 */
> >  		data = get_canonical(data, vcpu_virt_addr_bits(vcpu));
> > +		break;
> > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > +	case MSR_IA32_U_CET:
> > +	case MSR_IA32_S_CET:
> > +	case MSR_IA32_INT_SSP_TAB:
> > +		if (is_noncanonical_address(data, vcpu))
> > +			return 1;
> >  	}
> >  
> >  	msr.data = data;
> > -- 
> > 2.17.2
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs
  2020-03-04 15:18     ` Yang Weijiang
@ 2020-03-04 15:45       ` Sean Christopherson
  2020-03-05 12:31         ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2020-03-04 15:45 UTC (permalink / raw)
  To: Yang Weijiang; +Cc: kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Wed, Mar 04, 2020 at 11:18:15PM +0800, Yang Weijiang wrote:
> On Tue, Mar 03, 2020 at 02:28:27PM -0800, Sean Christopherson wrote:
> > > @@ -1886,6 +1976,26 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > >  		else
> > >  			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
> > >  		break;
> > > +	case MSR_IA32_S_CET:
> > > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > > +			return 1;
> > > +		msr_info->data = vmcs_readl(GUEST_S_CET);
> > > +		break;
> > > +	case MSR_IA32_INT_SSP_TAB:
> > > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > > +			return 1;
> > > +		msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
> > > +		break;
> > > +	case MSR_IA32_U_CET:
> > > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > > +			return 1;
> > > +		rdmsrl(MSR_IA32_U_CET, msr_info->data);
> > > +		break;
> > > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > > +			return 1;
> > > +		rdmsrl(msr_info->index, msr_info->data);
> > 
> > Ugh, thought of another problem.  If a SoftIRQ runs after an IRQ it can
> > load the kernel FPU state.  So for all the XSAVES MSRs we'll need a helper
> > similar to vmx_write_guest_kernel_gs_base(), except XSAVES has to be even
> > more restrictive and disable IRQs entirely.  E.g.
> > 
> > static void vmx_get_xsave_msr(struct msr_data *msr_info)
> > {
> > 	local_irq_disable();
> > 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
> > 		switch_fpu_return();
> > 	rdmsrl(msr_info->index, msr_info->data);
> > 	local_irq_enable();
> In this case, would a SoftIRQ destroy the vcpu->arch.guest.fpu state which
> had been restored into the XSAVES MSRs we were accessing?

Doing kernel_fpu_begin() from a softirq would swap guest.fpu out of the
CPU's registers.  It sets TIF_NEED_FPU_LOAD to mark the task as needing to
reload its FPU state prior to returning to userspace.  So it doesn't
destroy it per se.  The result is that KVM would read/write the CET MSRs
after they're loaded from the kernel's FPU state instead of reading the
MSRs loaded from the guest's FPU state.

> So should we
> restore guest.fpu again? In the previous patch, we restored guest.fpu
> before accessing the XSAVES MSRs.

There are three different FPU states:

  - kernel
  - userspace
  - guest

RDMSR/WRMSR for CET MSRs need to run while the guest.fpu state is loaded
into the CPU registers[1].  At the beginning of the syscall from userspace,
i.e. the vCPU ioctl(), the task's FPU state[2] holds userspace FPU state.
Patch 6/7 swaps out the userspace state and loads the guest state.

But, if a softirq runs between kvm_load_guest_fpu() and now, and executes
kernel_fpu_begin(), it will swap the guest state (out of CPU registers)
and load the kernel state (into CPU registers).  The actual RDMSR/WRMSR
needs to ensure the guest state is still loaded by checking and handling
TIF_NEED_FPU_LOAD.

[1] An alternative to doing switch_fpu_return() on TIF_NEED_FPU_LOAD would
    be to calculate the offset into the xsave and read/write directly
    to/from memory.  But IMO that's unnecessary complexity as the guest's
    fpu state still needs to be reloaded before re-entering the guest, e.g.
    if vmx_{g,s}et_msr() is invoked on {RD,WR}MSR intercept, while loading
    or saving MSR state from userspace isn't a hot path.

[2] I worded this to say "task's FPU state" because it's also possible the
    CPU registers hold kernel state at the beginning of the vCPU ioctl(),
    e.g. because of softirq.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs
  2020-03-04 15:45       ` Sean Christopherson
@ 2020-03-05 12:31         ` Yang Weijiang
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2020-03-05 12:31 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yang Weijiang, kvm, linux-kernel, pbonzini, jmattson, yu.c.zhang

On Wed, Mar 04, 2020 at 07:45:38AM -0800, Sean Christopherson wrote:
> On Wed, Mar 04, 2020 at 11:18:15PM +0800, Yang Weijiang wrote:
> > On Tue, Mar 03, 2020 at 02:28:27PM -0800, Sean Christopherson wrote:
> > > > @@ -1886,6 +1976,26 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > > >  		else
> > > >  			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
> > > >  		break;
> > > > +	case MSR_IA32_S_CET:
> > > > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > > > +			return 1;
> > > > +		msr_info->data = vmcs_readl(GUEST_S_CET);
> > > > +		break;
> > > > +	case MSR_IA32_INT_SSP_TAB:
> > > > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > > > +			return 1;
> > > > +		msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
> > > > +		break;
> > > > +	case MSR_IA32_U_CET:
> > > > +		if (!cet_ctl_access_allowed(vcpu, msr_info))
> > > > +			return 1;
> > > > +		rdmsrl(MSR_IA32_U_CET, msr_info->data);
> > > > +		break;
> > > > +	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> > > > +		if (!cet_ssp_access_allowed(vcpu, msr_info))
> > > > +			return 1;
> > > > +		rdmsrl(msr_info->index, msr_info->data);
> > > 
> > > Ugh, thought of another problem.  If a SoftIRQ runs after an IRQ it can
> > > load the kernel FPU state.  So for all the XSAVES MSRs we'll need a helper
> > > similar to vmx_write_guest_kernel_gs_base(), except XSAVES has to be even
> > > more restrictive and disable IRQs entirely.  E.g.
> > > 
> > > static void vmx_get_xsave_msr(struct msr_data *msr_info)
> > > {
> > > 	local_irq_disable();
> > > 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
> > > 		switch_fpu_return();
> > > 	rdmsrl(msr_info->index, msr_info->data);
> > > 	local_irq_enable();
> > In this case, would a SoftIRQ destroy the vcpu->arch.guest.fpu state which
> > had been restored into the XSAVES MSRs we were accessing?
> 
> Doing kernel_fpu_begin() from a softirq would swap guest.fpu out of the
> CPU's registers.  It sets TIF_NEED_FPU_LOAD to mark the task as needing to
> reload its FPU state prior to returning to userspace.  So it doesn't
> destroy it per se.  The result is that KVM would read/write the CET MSRs
> after they're loaded from the kernel's FPU state instead of reading the
> MSRs loaded from the guest's FPU state.
>
OK, will wrap the access code with a helper, thank you!
> > So should we
> > restore guest.fpu again? In the previous patch, we restored guest.fpu
> > before accessing the XSAVES MSRs.
> 
> There are three different FPU states:
> 
>   - kernel
>   - userspace
>   - guest
> 
> RDMSR/WRMSR for CET MSRs need to run while the guest.fpu state is loaded
> into the CPU registers[1].  At the beginning of the syscall from userspace,
> i.e. the vCPU ioctl(), the task's FPU state[2] holds userspace FPU state.
> Patch 6/7 swaps out the userspace state and loads the guest state.
> 
> But, if a softirq runs between kvm_load_guest_fpu() and now, and executes
> kernel_fpu_begin(), it will swap the guest state (out of CPU registers)
> and load the kernel state (into CPU registers).  The actual RDMSR/WRMSR
> needs to ensure the guest state is still loaded by checking and handling
> TIF_NEED_FPU_LOAD.
> 
> [1] An alternative to doing switch_fpu_return() on TIF_NEED_FPU_LOAD would
>     be to calculate the offset into the xsave and read/write directly
>     to/from memory.  But IMO that's unnecessary complexity as the guest's
>     fpu state still needs to be reloaded before re-entering the guest, e.g.
>     if vmx_{g,s}et_msr() is invoked on {RD,WR}MSR intercept, while loading
>     or saving MSR state from userspace isn't a hot path.
> 
> [2] I worded this to say "task's FPU state" because it's also possible the
>     CPU registers hold kernel state at the beginning of the vCPU ioctl(),
>     e.g. because of softirq.

It's clear to me, thanks for the explanation.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration
  2019-12-27  2:11 ` [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration Yang Weijiang
@ 2020-03-05 14:51   ` Paolo Bonzini
  2020-03-06  0:38     ` Yang Weijiang
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2020-03-05 14:51 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, jmattson, sean.j.christopherson
  Cc: yu.c.zhang

On 27/12/19 03:11, Yang Weijiang wrote:
> +	u64 (*supported_xss)(void);

I don't think the new callback is needed.  Anyway I'm rewriting this
patch on top of the new CPUID feature and will post it shortly.

Paolo


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration
  2020-03-05 14:51   ` Paolo Bonzini
@ 2020-03-06  0:38     ` Yang Weijiang
  0 siblings, 0 replies; 20+ messages in thread
From: Yang Weijiang @ 2020-03-06  0:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, jmattson,
	sean.j.christopherson, yu.c.zhang

On Thu, Mar 05, 2020 at 03:51:17PM +0100, Paolo Bonzini wrote:
> On 27/12/19 03:11, Yang Weijiang wrote:
> > +	u64 (*supported_xss)(void);
> 
> I don't think the new callback is needed.  Anyway I'm rewriting this
> patch on top of the new CPUID feature and will post it shortly.
> 
> Paolo
Yes, it's not necessary. I've removed this in the internal
version; a global variable like the one in Sean's patch can serve
the purpose.
You may go ahead with the new patch, thanks for the review!


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-03-06  0:34 UTC | newest]

Thread overview: 20+ messages
2019-12-27  2:11 [PATCH v9 0/7] Introduce support for guest CET feature Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 1/7] KVM: CPUID: Fix IA32_XSS support in CPUID(0xd,i) enumeration Yang Weijiang
2020-03-05 14:51   ` Paolo Bonzini
2020-03-06  0:38     ` Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 2/7] KVM: VMX: Define CET VMCS fields and #CP flag Yang Weijiang
2020-03-03 21:42   ` Sean Christopherson
2020-03-04  8:44     ` Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 3/7] KVM: VMX: Pass through CET related MSRs Yang Weijiang
2020-03-03 21:51   ` Sean Christopherson
2020-03-04  8:46     ` Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 4/7] KVM: VMX: Load CET states on vmentry/vmexit Yang Weijiang
2020-03-03 22:06   ` Sean Christopherson
2020-03-04  8:55     ` Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 5/7] KVM: X86: Enable CET bits update in IA32_XSS Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 6/7] KVM: X86: Load guest fpu state when accessing MSRs managed by XSAVES Yang Weijiang
2019-12-27  2:11 ` [PATCH v9 7/7] KVM: X86: Add user-space access interface for CET MSRs Yang Weijiang
2020-03-03 22:28   ` Sean Christopherson
2020-03-04 15:18     ` Yang Weijiang
2020-03-04 15:45       ` Sean Christopherson
2020-03-05 12:31         ` Yang Weijiang
