kvm.vger.kernel.org archive mirror
* [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel
@ 2019-10-11 19:40 Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 1/5] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE Aaron Lewis
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Unify AMD's and Intel's approach for supporting XSAVES.  To do this,
change Intel's strategy from using the MSR-load areas to writing the
guest/host values to IA32_XSS on VM-entry/VM-exit.  Switching to this
strategy allows for a common implementation on both AMD and Intel.
Additionally, define svm_xsaves_supported() based on AMD's feedback, and
add IA32_XSS to the emulated_msrs list in preparation for the day when
the guest IA32_XSS may contain a non-zero value.

This series lays the groundwork for the guest IA32_XSS to hold a
non-zero value in the future, which may happen sooner rather than later
with support for the guest CET feature being added.

v1 -> v2:
 - Added the flag xsaves_enabled to kvm_vcpu_arch to track when XSAVES
   is enabled in the guest, whether or not XSAVES is enumerated in the
   guest CPUID.
 - Removed the code that sets the X86_FEATURE_XSAVES bit in the guest
   CPUID, which was added in the patch "Enumerate XSAVES in guest CPUID
   when it is available to the guest".  As a result, that patch is no
   longer needed.
 - Added a comment to kvm_set_msr_common describing how to save/restore
   the PT MSRs without using XSAVES/XRSTORS.
 - Added more comments to the "Add support for XSAVES on AMD" patch.
 - Replaced vcpu_set_msr_expect_result() with _vcpu_set_msr() in the
   test library.

Aaron Lewis (5):
  KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE
  KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  kvm: svm: Add support for XSAVES on AMD
  kvm: x86: Add IA32_XSS to the emulated_msrs list
  kvm: tests: Add test to verify MSR_IA32_XSS

 arch/x86/include/asm/kvm_host.h               |  1 +
 arch/x86/kvm/svm.c                            | 19 +++--
 arch/x86/kvm/vmx/vmx.c                        | 46 ++++--------
 arch/x86/kvm/x86.c                            | 46 ++++++++++--
 arch/x86/kvm/x86.h                            |  4 +-
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 72 ++++++++++++++++---
 .../selftests/kvm/x86_64/xss_msr_test.c       | 70 ++++++++++++++++++
 10 files changed, 210 insertions(+), 57 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c

-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/5] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE
  2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
@ 2019-10-11 19:40 ` Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS Aaron Lewis
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Volume 4 of the SDM says that IA32_XSS is supported
"If(CPUID.(0DH, 1):EAX.[3] = 1)", so only the X86_FEATURE_XSAVES check
is necessary.
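
For reference, that CPUID bit can be probed directly; a minimal
userspace sketch (illustrative only, not part of the patch):

#include <cpuid.h>
#include <stdbool.h>

/* Read CPUID.(EAX=0DH, ECX=1):EAX[3], the bit behind X86_FEATURE_XSAVES. */
static bool cpu_has_xsaves(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x0d, 1, &eax, &ebx, &ecx, &edx))
		return false;
	return eax & (1u << 3);
}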

Fixes: 4d763b168e9c5 ("KVM: VMX: check CPUID before allowing read/write of IA32_XSS")
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..409e9a7323f1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1823,8 +1823,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_XSS:
 		if (!vmx_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
 			return 1;
 		msr_info->data = vcpu->arch.ia32_xss;
 		break;
@@ -2066,8 +2065,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_XSS:
 		if (!vmx_xsaves_supported() ||
 		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
 			return 1;
 		/*
 		 * The only supported bit as of Skylake is bit 8, but
-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 1/5] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE Aaron Lewis
@ 2019-10-11 19:40 ` Aaron Lewis
  2019-10-12  0:18   ` Sean Christopherson
  2019-10-11 19:40 ` [PATCH v2 3/5] kvm: svm: Add support for XSAVES on AMD Aaron Lewis
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Set IA32_XSS for the guest and host during VM Enter and VM Exit
transitions rather than by using the MSR-load areas.

By moving away from using the MSR-load area we can have a unified
solution for both AMD and Intel.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm.c              |  7 +++++--
 arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
 arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
 arch/x86/kvm/x86.h              |  4 ++--
 5 files changed, 37 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 50eb430b0ad8..634c2598e389 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
 	u64 smbase;
 	u64 smi_count;
 	bool tpr_access_reporting;
+	bool xsaves_enabled;
 	u64 ia32_xss;
 	u64 microcode_version;
 	u64 arch_capabilities;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f8ecb6df5106..da69e95beb4d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
 
 	clgi();
-	kvm_load_guest_xcr0(vcpu);
+	kvm_load_guest_xsave_controls(vcpu);
 
 	if (lapic_in_kernel(vcpu) &&
 		vcpu->arch.apic->lapic_timer.timer_advance_ns)
@@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_controls(vcpu);
 	stgi();
 
 	/* Any pending NMI will happen here */
@@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+				    boot_cpu_has(X86_FEATURE_XSAVES);
+
 	/* Update nrips enabled cache */
 	svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 409e9a7323f1..ce3020914c69 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
 static bool __read_mostly nested = 1;
 module_param(nested, bool, S_IRUGO);
 
-static u64 __read_mostly host_xss;
-
 bool __read_mostly enable_pml = 1;
 module_param_named(pml, enable_pml, bool, S_IRUGO);
 
@@ -2074,11 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data != 0)
 			return 1;
 		vcpu->arch.ia32_xss = data;
-		if (vcpu->arch.ia32_xss != host_xss)
-			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
-				vcpu->arch.ia32_xss, host_xss, false);
-		else
-			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
 		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
@@ -4038,6 +4031,8 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 			guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 			guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
 
+		vcpu->arch.xsaves_enabled = xsaves_enabled;
+
 		if (!xsaves_enabled)
 			exec_control &= ~SECONDARY_EXEC_XSAVES;
 
@@ -6540,7 +6535,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		vmx_set_interrupt_shadow(vcpu, 0);
 
-	kvm_load_guest_xcr0(vcpu);
+	kvm_load_guest_xsave_controls(vcpu);
 
 	if (static_cpu_has(X86_FEATURE_PKU) &&
 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
@@ -6647,7 +6642,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 			__write_pkru(vmx->host_pkru);
 	}
 
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_controls(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
 	vmx->idt_vectoring_info = 0;
@@ -7091,6 +7086,12 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	/*
+	 * This will be set back to true in vmx_compute_secondary_exec_control()
+	 * if it should be true, so setting it to false here is okay.
+	 */
+	vcpu->arch.xsaves_enabled = false;
+
 	if (cpu_has_secondary_exec_ctrls()) {
 		vmx_compute_secondary_exec_control(vmx);
 		vmcs_set_secondary_exec_control(vmx);
@@ -7599,9 +7600,6 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
-
 	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
 	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
 		enable_vpid = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 661e2bf38526..a61570d7034b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -176,6 +176,8 @@ struct kvm_shared_msrs {
 static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
 static struct kvm_shared_msrs __percpu *shared_msrs;
 
+static u64 __read_mostly host_xss;
+
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "pf_fixed", VCPU_STAT(pf_fixed) },
 	{ "pf_guest", VCPU_STAT(pf_guest) },
@@ -812,27 +814,37 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 }
 EXPORT_SYMBOL_GPL(kvm_lmsw);
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu)
 {
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
 			!vcpu->guest_xcr0_loaded) {
 		/* kvm_set_xcr() also depends on this */
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
+
+		if (vcpu->arch.xsaves_enabled &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+
 		vcpu->guest_xcr0_loaded = 1;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_controls);
 
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->guest_xcr0_loaded) {
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+
+		if (vcpu->arch.xsaves_enabled &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, host_xss);
+
 		vcpu->guest_xcr0_loaded = 0;
 	}
 }
-EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_host_xsave_controls);
 
 static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
@@ -9293,6 +9305,9 @@ int kvm_arch_hardware_setup(void)
 		kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_XSAVES))
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	kvm_init_msr_list();
 	return 0;
 }
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..0d04e865665b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -366,7 +366,7 @@ static inline bool kvm_pat_valid(u64 data)
 	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
 }
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
+void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu);
+void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu);
 
 #endif
-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 3/5] kvm: svm: Add support for XSAVES on AMD
  2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 1/5] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS Aaron Lewis
@ 2019-10-11 19:40 ` Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 4/5] kvm: x86: Add IA32_XSS to the emulated_msrs list Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 5/5] kvm: tests: Add test to verify MSR_IA32_XSS Aaron Lewis
  4 siblings, 0 replies; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Hoist support for RDMSR/WRMSR of IA32_XSS from vmx into common code so
that it can be used for AMD as well.

AMD has no equivalent of Intel's "Enable XSAVES/XRSTORS" VM-execution
control. Instead, XSAVES is always available to the guest when supported
on the host.

Unfortunately, right now, KVM only allows the guest IA32_XSS to be zero,
so the guest's usage of XSAVES will be exactly the same as XSAVEC.
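
To illustrate why (hypothetical wrapper, not code from this patch): both
instructions save the compacted image for RFBM = (XCR0 | IA32_XSS) &
EDX:EAX, so a zero IA32_XSS contributes nothing.  XSAVES is CPL-0 only,
so this sketch is kernel-context pseudocode:

#include <stdint.h>

static inline void xsaves(void *area, uint64_t mask)
{
	uint32_t lo = mask, hi = mask >> 32;

	/* With IA32_XSS == 0, this writes the same compacted image that
	 * XSAVEC would produce for the same mask. */
	asm volatile("xsaves %0"
		     : "+m" (*(uint8_t *)area)
		     : "a" (lo), "d" (hi)
		     : "memory");
}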

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/svm.c     |  2 +-
 arch/x86/kvm/vmx/vmx.c | 20 --------------------
 arch/x86/kvm/x86.c     | 22 ++++++++++++++++++++++
 3 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index da69e95beb4d..1953898e37ce 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5965,7 +5965,7 @@ static bool svm_mpx_supported(void)
 
 static bool svm_xsaves_supported(void)
 {
-	return false;
+	return boot_cpu_has(X86_FEATURE_XSAVES);
 }
 
 static bool svm_umip_emulated(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ce3020914c69..18bea844fffc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1818,13 +1818,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
-	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
-			return 1;
-		msr_info->data = vcpu->arch.ia32_xss;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
@@ -2060,19 +2053,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
-	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
-			return 1;
-		/*
-		 * The only supported bit as of Skylake is bit 8, but
-		 * it is not supported on KVM.
-		 */
-		if (data != 0)
-			return 1;
-		vcpu->arch.ia32_xss = data;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
 			vmx_rtit_ctl_check(vcpu, data) ||
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a61570d7034b..2104e21855fc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2700,6 +2700,21 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr_info);
 		break;
+	case MSR_IA32_XSS:
+		if (!kvm_x86_ops->xsaves_supported() ||
+		    (!msr_info->host_initiated &&
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
+			return 1;
+		/*
+		 * We do support PT if kvm_x86_ops->pt_supported(), but we do
+		 * not support IA32_XSS[bit 8]. Guests will have to use
+		 * RDMSR/WRMSR rather than XSAVES/XRSTORS to save/restore PT
+		 * MSRs.
+		 */
+		if (data != 0)
+			return 1;
+		vcpu->arch.ia32_xss = data;
+		break;
 	case MSR_SMI_COUNT:
 		if (!msr_info->host_initiated)
 			return 1;
@@ -3030,6 +3045,13 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
 		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
 				   msr_info->host_initiated);
+	case MSR_IA32_XSS:
+		if (!kvm_x86_ops->xsaves_supported() ||
+		    (!msr_info->host_initiated &&
+		     !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES)))
+			return 1;
+		msr_info->data = vcpu->arch.ia32_xss;
+		break;
 	case MSR_K7_CLK_CTL:
 		/*
 		 * Provide expected ramp-up count for K7. All other
-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 4/5] kvm: x86: Add IA32_XSS to the emulated_msrs list
  2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
                   ` (2 preceding siblings ...)
  2019-10-11 19:40 ` [PATCH v2 3/5] kvm: svm: Add support for XSAVES on AMD Aaron Lewis
@ 2019-10-11 19:40 ` Aaron Lewis
  2019-10-11 19:40 ` [PATCH v2 5/5] kvm: tests: Add test to verify MSR_IA32_XSS Aaron Lewis
  4 siblings, 0 replies; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Add IA32_XSS to the list of emulated MSRs if it is supported in the
guest.  At the moment, the guest IA32_XSS must be zero, but this change
prepares for expanded support in the future.
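
Userspace sees the new entry via KVM_GET_MSR_INDEX_LIST.  A minimal
raw-ioctl sketch of the probe, with error handling elided (the selftest
in patch 5 does the same through library helpers):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 1 if @msr is reported by KVM_GET_MSR_INDEX_LIST. */
static int kvm_reports_msr(unsigned int msr)
{
	struct kvm_msr_list probe = { .nmsrs = 0 }, *list;
	int kvm_fd = open("/dev/kvm", O_RDONLY);
	int i, found = 0;

	/* The first call fails with E2BIG and fills in the required size. */
	ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &probe);
	list = malloc(sizeof(*list) + probe.nmsrs * sizeof(list->indices[0]));
	list->nmsrs = probe.nmsrs;
	ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);
	for (i = 0; i < list->nmsrs; i++)
		found |= (list->indices[i] == msr);
	free(list);
	close(kvm_fd);
	return found;
}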

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/svm.c     | 12 +++++++-----
 arch/x86/kvm/vmx/vmx.c |  2 ++
 arch/x86/kvm/x86.c     |  1 +
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1953898e37ce..e23a5013c812 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -498,6 +498,11 @@ static inline bool avic_vcpu_is_running(struct kvm_vcpu *vcpu)
 	return (READ_ONCE(*entry) & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
 }
 
+static bool svm_xsaves_supported(void)
+{
+	return boot_cpu_has(X86_FEATURE_XSAVES);
+}
+
 static void recalc_intercepts(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *c, *h;
@@ -5871,6 +5876,8 @@ static bool svm_has_emulated_msr(int index)
 	case MSR_IA32_MCG_EXT_CTL:
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		return false;
+	case MSR_IA32_XSS:
+		return svm_xsaves_supported();
 	default:
 		break;
 	}
@@ -5963,11 +5970,6 @@ static bool svm_mpx_supported(void)
 	return false;
 }
 
-static bool svm_xsaves_supported(void)
-{
-	return boot_cpu_has(X86_FEATURE_XSAVES);
-}
-
 static bool svm_umip_emulated(void)
 {
 	return false;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 18bea844fffc..4ba62ebd8703 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6272,6 +6272,8 @@ static bool vmx_has_emulated_msr(int index)
 	case MSR_AMD64_VIRT_SPEC_CTRL:
 		/* This is AMD only.  */
 		return false;
+	case MSR_IA32_XSS:
+		return vmx_xsaves_supported();
 	default:
 		return true;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2104e21855fc..e3b7dbb8be8f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1227,6 +1227,7 @@ static u32 emulated_msrs[] = {
 	MSR_MISC_FEATURES_ENABLES,
 	MSR_AMD64_VIRT_SPEC_CTRL,
 	MSR_IA32_POWER_CTL,
+	MSR_IA32_XSS,
 
 	/*
 	 * The following list leaves out MSRs whose values are determined
-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 5/5] kvm: tests: Add test to verify MSR_IA32_XSS
  2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
                   ` (3 preceding siblings ...)
  2019-10-11 19:40 ` [PATCH v2 4/5] kvm: x86: Add IA32_XSS to the emulated_msrs list Aaron Lewis
@ 2019-10-11 19:40 ` Aaron Lewis
  4 siblings, 0 replies; 11+ messages in thread
From: Aaron Lewis @ 2019-10-11 19:40 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Verify that getting and setting MSR_IA32_XSS returns the expected
results.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 72 ++++++++++++++++---
 .../selftests/kvm/x86_64/xss_msr_test.c       | 70 ++++++++++++++++++
 5 files changed, 141 insertions(+), 10 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index b35da375530a..6e9ec34f8124 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -11,6 +11,7 @@
 /x86_64/vmx_close_while_nested_test
 /x86_64/vmx_set_nested_state_test
 /x86_64/vmx_tsc_adjust_test
+/x86_64/xss_msr_test
 /clear_dirty_log_test
 /dirty_log_test
 /kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5ec868fa1e5..3138a916574a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -25,6 +25,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
+TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index ff234018219c..635ee6c33ad2 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -308,6 +308,8 @@ struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_x86_state *state);
 
+struct kvm_msr_list *kvm_get_msr_index_list(void);
+
 struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
 void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_cpuid2 *cpuid);
@@ -322,10 +324,13 @@ kvm_get_supported_cpuid_entry(uint32_t function)
 }
 
 uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index);
+int _vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+		  uint64_t msr_value);
 void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	  	  uint64_t msr_value);
 
-uint32_t kvm_get_cpuid_max(void);
+uint32_t kvm_get_cpuid_max_basic(void);
+uint32_t kvm_get_cpuid_max_extended(void);
 void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 6698cb741e10..683d3bdb8f6a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -869,7 +869,7 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
 	return buffer.entry.data;
 }
 
-/* VCPU Set MSR
+/* _VCPU Set MSR
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -879,12 +879,12 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
  *
  * Output Args: None
  *
- * Return: On success, nothing. On failure a TEST_ASSERT is produced.
+ * Return: The result of KVM_SET_MSRS.
  *
- * Set value of MSR for VCPU.
+ * Sets the value of an MSR for the given VCPU.
  */
-void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
-	uint64_t msr_value)
+int _vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+		  uint64_t msr_value)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	struct {
@@ -899,6 +899,29 @@ void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	buffer.entry.index = msr_index;
 	buffer.entry.data = msr_value;
 	r = ioctl(vcpu->fd, KVM_SET_MSRS, &buffer.header);
+	return r;
+}
+
+/* VCPU Set MSR
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpuid - VCPU ID
+ *   msr_index - Index of MSR
+ *   msr_value - New value of MSR
+ *
+ * Output Args: None
+ *
+ * Return: On success, nothing. On failure a TEST_ASSERT is produced.
+ *
+ * Set value of MSR for VCPU.
+ */
+void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+	uint64_t msr_value)
+{
+	int r;
+
+	r = _vcpu_set_msr(vm, vcpuid, msr_index, msr_value);
 	TEST_ASSERT(r == 1, "KVM_SET_MSRS IOCTL failed,\n"
 		"  rc: %i errno: %i", r, errno);
 }
@@ -1000,19 +1023,45 @@ struct kvm_x86_state {
 	struct kvm_msrs msrs;
 };
 
-static int kvm_get_num_msrs(struct kvm_vm *vm)
+static int kvm_get_num_msrs_fd(int kvm_fd)
 {
 	struct kvm_msr_list nmsrs;
 	int r;
 
 	nmsrs.nmsrs = 0;
-	r = ioctl(vm->kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
 	TEST_ASSERT(r == -1 && errno == E2BIG, "Unexpected result from KVM_GET_MSR_INDEX_LIST probe, r: %i",
 		r);
 
 	return nmsrs.nmsrs;
 }
 
+static int kvm_get_num_msrs(struct kvm_vm *vm)
+{
+	return kvm_get_num_msrs_fd(vm->kvm_fd);
+}
+
+struct kvm_msr_list *kvm_get_msr_index_list(void)
+{
+	struct kvm_msr_list *list;
+	int nmsrs, r, kvm_fd;
+
+	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
+	if (kvm_fd < 0)
+		exit(KSFT_SKIP);
+
+	nmsrs = kvm_get_num_msrs_fd(kvm_fd);
+	list = malloc(sizeof(*list) + nmsrs * sizeof(list->indices[0]));
+	list->nmsrs = nmsrs;
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);
+	close(kvm_fd);
+
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_GET_MSR_INDEX_LIST, r: %i",
+		r);
+
+	return list;
+}
+
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
@@ -1158,7 +1207,12 @@ bool is_intel_cpu(void)
 	return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
 
-uint32_t kvm_get_cpuid_max(void)
+uint32_t kvm_get_cpuid_max_basic(void)
+{
+	return kvm_get_supported_cpuid_entry(0)->eax;
+}
+
+uint32_t kvm_get_cpuid_max_extended(void)
 {
 	return kvm_get_supported_cpuid_entry(0x80000000)->eax;
 }
@@ -1169,7 +1223,7 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	bool pae;
 
 	/* SDM 4.1.4 */
-	if (kvm_get_cpuid_max() < 0x80000008) {
+	if (kvm_get_cpuid_max_extended() < 0x80000008) {
 		pae = kvm_get_supported_cpuid_entry(1)->edx & (1 << 6);
 		*pa_bits = pae ? 36 : 32;
 		*va_bits = 32;
diff --git a/tools/testing/selftests/kvm/x86_64/xss_msr_test.c b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
new file mode 100644
index 000000000000..17db23336673
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019, Google LLC.
+ *
+ * Tests for the IA32_XSS MSR.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "vmx.h"
+
+#define VCPU_ID	      1
+
+#define X86_FEATURE_XSAVES	(1<<3)
+
+int main(int argc, char *argv[])
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_msr_list *list;
+	bool found_xss = false;
+	struct kvm_vm *vm;
+	uint64_t xss_val;
+	int i, r;
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, 0);
+
+	list = kvm_get_msr_index_list();
+	for (i = 0; i < list->nmsrs; ++i) {
+		if (list->indices[i] == MSR_IA32_XSS) {
+			found_xss = true;
+			break;
+		}
+	}
+
+	if (kvm_get_cpuid_max_basic() < 0xd) {
+		printf("XSAVES is not supported by the CPU.\n");
+		exit(KSFT_SKIP);
+	}
+
+	entry = kvm_get_supported_cpuid_index(0xd, 1);
+	TEST_ASSERT(found_xss == !!(entry->eax & X86_FEATURE_XSAVES),
+		    "Support for IA32_XSS and support for XSAVES do not match.");
+
+	if (!found_xss) {
+		printf("IA32_XSS and XSAVES are not supported.  Skipping test.\n");
+		exit(KSFT_SKIP);
+	}
+
+	xss_val = vcpu_get_msr(vm, VCPU_ID, MSR_IA32_XSS);
+	TEST_ASSERT(xss_val == 0, "MSR_IA32_XSS should always be zero\n");
+
+	vcpu_set_msr(vm, VCPU_ID, MSR_IA32_XSS, xss_val);
+	/*
+	 * At present, KVM only supports a guest IA32_XSS value of 0. Verify
+	 * that trying to set the guest IA32_XSS to an unsupported value fails.
+	 *
+	 * Using '1' which is an illegal value since bit 0 is x87 state, which
+	 * is covered by XCR0.
+	 */
+	r = _vcpu_set_msr(vm, VCPU_ID, MSR_IA32_XSS, 1);
+	TEST_ASSERT(r == 0,
+		    "KVM_SET_MSRS IOCTL failed,\n  rc: %i errno: %i", r, errno);
+
+	free(list);
+	kvm_vm_free(vm);
+}
-- 
2.23.0.700.g56cf767bdb-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-11 19:40 ` [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS Aaron Lewis
@ 2019-10-12  0:18   ` Sean Christopherson
  2019-10-12 17:36     ` Jim Mattson
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2019-10-12  0:18 UTC (permalink / raw)
  To: Aaron Lewis
  Cc: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm,
	Paolo Bonzini, Jim Mattson

On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> Set IA32_XSS for the guest and host during VM Enter and VM Exit
> transitions rather than by using the MSR-load areas.
> 
> By moving away from using the MSR-load area we can have a unified
> solution for both AMD and Intel.
> 
> Reviewed-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/svm.c              |  7 +++++--
>  arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
>  arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
>  arch/x86/kvm/x86.h              |  4 ++--
>  5 files changed, 37 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 50eb430b0ad8..634c2598e389 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
>  	u64 smbase;
>  	u64 smi_count;
>  	bool tpr_access_reporting;
> +	bool xsaves_enabled;
>  	u64 ia32_xss;
>  	u64 microcode_version;
>  	u64 arch_capabilities;
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f8ecb6df5106..da69e95beb4d 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	svm->vmcb->save.cr2 = vcpu->arch.cr2;
>  
>  	clgi();
> -	kvm_load_guest_xcr0(vcpu);
> +	kvm_load_guest_xsave_controls(vcpu);
>  
>  	if (lapic_in_kernel(vcpu) &&
>  		vcpu->arch.apic->lapic_timer.timer_advance_ns)
> @@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
>  		kvm_before_interrupt(&svm->vcpu);
>  
> -	kvm_put_guest_xcr0(vcpu);
> +	kvm_load_host_xsave_controls(vcpu);
>  	stgi();
>  
>  	/* Any pending NMI will happen here */
> @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  
> +	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> +				    boot_cpu_has(X86_FEATURE_XSAVES);

This looks very much like a functional change to SVM, which feels wrong
for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
guest and host IA32_XSS".  Shouldn't this be unconditionally set false
in this patch, and then enabled in "kvm: svm: Add support for XSAVES on
AMD"?

> +
>  	/* Update nrips enabled cache */
>  	svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 409e9a7323f1..ce3020914c69 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
>  static bool __read_mostly nested = 1;
>  module_param(nested, bool, S_IRUGO);
>  
> -static u64 __read_mostly host_xss;
> -
>  bool __read_mostly enable_pml = 1;
>  module_param_named(pml, enable_pml, bool, S_IRUGO);
>  
> @@ -2074,11 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (data != 0)
>  			return 1;
>  		vcpu->arch.ia32_xss = data;
> -		if (vcpu->arch.ia32_xss != host_xss)
> -			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> -				vcpu->arch.ia32_xss, host_xss, false);
> -		else
> -			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);

I'm pretty sure the "vmx_xsaves_supported()" check in this case statement
can be removed after this patch.  The host support check was necessary
when the MSR load/save list was being used, but moving to xsaves_enabled
means there's no risk of writing MSR_IA32_XSS when it's not supported.

>  		break;
>  	case MSR_IA32_RTIT_CTL:
>  		if ((pt_mode != PT_MODE_HOST_GUEST) ||
> @@ -4038,6 +4031,8 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>  			guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
>  			guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
>  
> +		vcpu->arch.xsaves_enabled = xsaves_enabled;

Doesn't this conflict with the direction of the previous patch, "KVM: VMX:
Remove unneeded check for X86_FEATURE_XSAVE"?  xsaves_enabled is %true iff
both XSAVE and XSAVES are exposed to the guest.

Alternatively, couldn't KVM check arch.xsaves_enabled in the MSR handling?

> +
>  		if (!xsaves_enabled)
>  			exec_control &= ~SECONDARY_EXEC_XSAVES;
>  
> @@ -6540,7 +6535,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
>  		vmx_set_interrupt_shadow(vcpu, 0);
>  
> -	kvm_load_guest_xcr0(vcpu);
> +	kvm_load_guest_xsave_controls(vcpu);
>  
>  	if (static_cpu_has(X86_FEATURE_PKU) &&
>  	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> @@ -6647,7 +6642,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  			__write_pkru(vmx->host_pkru);
>  	}
>  
> -	kvm_put_guest_xcr0(vcpu);
> +	kvm_load_host_xsave_controls(vcpu);
>  
>  	vmx->nested.nested_run_pending = 0;
>  	vmx->idt_vectoring_info = 0;
> @@ -7091,6 +7086,12 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  
> +	/*
> +	 * This will be set back to true in vmx_compute_secondary_exec_control()
> +	 * if it should be true, so setting it to false here is okay.

Or simply:

	/* xsaves_enabled is recomputed in vmx_compute_secondary_exec_control(). */

> +	 */
> +	vcpu->arch.xsaves_enabled = false;
> +
>  	if (cpu_has_secondary_exec_ctrls()) {
>  		vmx_compute_secondary_exec_control(vmx);
>  		vmcs_set_secondary_exec_control(vmx);
> @@ -7599,9 +7600,6 @@ static __init int hardware_setup(void)
>  		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
>  	}
>  
> -	if (boot_cpu_has(X86_FEATURE_XSAVES))
> -		rdmsrl(MSR_IA32_XSS, host_xss);
> -
>  	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
>  	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
>  		enable_vpid = 0;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 661e2bf38526..a61570d7034b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -176,6 +176,8 @@ struct kvm_shared_msrs {
>  static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
>  static struct kvm_shared_msrs __percpu *shared_msrs;
>  
> +static u64 __read_mostly host_xss;
> +
>  struct kvm_stats_debugfs_item debugfs_entries[] = {
>  	{ "pf_fixed", VCPU_STAT(pf_fixed) },
>  	{ "pf_guest", VCPU_STAT(pf_guest) },
> @@ -812,27 +814,37 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
>  }
>  EXPORT_SYMBOL_GPL(kvm_lmsw);
>  
> -void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
> +void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu)
>  {
>  	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
>  			!vcpu->guest_xcr0_loaded) {
>  		/* kvm_set_xcr() also depends on this */
>  		if (vcpu->arch.xcr0 != host_xcr0)
>  			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
> +
> +		if (vcpu->arch.xsaves_enabled &&
> +		    vcpu->arch.ia32_xss != host_xss)
> +			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
> +
>  		vcpu->guest_xcr0_loaded = 1;
>  	}
>  }
> -EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
> +EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_controls);
>  
> -void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> +void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu)
>  {
>  	if (vcpu->guest_xcr0_loaded) {
>  		if (vcpu->arch.xcr0 != host_xcr0)
>  			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
> +
> +		if (vcpu->arch.xsaves_enabled &&
> +		    vcpu->arch.ia32_xss != host_xss)
> +			wrmsrl(MSR_IA32_XSS, host_xss);
> +
>  		vcpu->guest_xcr0_loaded = 0;
>  	}
>  }
> -EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
> +EXPORT_SYMBOL_GPL(kvm_load_host_xsave_controls);
>  
>  static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
>  {
> @@ -9293,6 +9305,9 @@ int kvm_arch_hardware_setup(void)
>  		kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
>  	}
>  
> +	if (boot_cpu_has(X86_FEATURE_XSAVES))
> +		rdmsrl(MSR_IA32_XSS, host_xss);

More code that goes beyond the subject of the patch, as host_xss is now
being read and potentially switched on SVM platforms.  I don't mind the
change itself being part of this patch (unlike setting xsaves_enabled),
but IMO the subject should make it more clear that this is a common x86
change.  Arguably this should be two patches, one to do what this subject
says and a second to move the code to common x86, but that seems like
overkill.

>  	kvm_init_msr_list();
>  	return 0;
>  }
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index dbf7442a822b..0d04e865665b 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -366,7 +366,7 @@ static inline bool kvm_pat_valid(u64 data)
>  	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
>  }
>  
> -void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
> -void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
> +void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu);
> +void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu);
>  
>  #endif
> -- 
> 2.23.0.700.g56cf767bdb-goog
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-12  0:18   ` Sean Christopherson
@ 2019-10-12 17:36     ` Jim Mattson
  2019-10-14 19:05       ` Sean Christopherson
  0 siblings, 1 reply; 11+ messages in thread
From: Jim Mattson @ 2019-10-12 17:36 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Aaron Lewis, Babu Moger, Yang Weijiang,
	Sebastian Andrzej Siewior, kvm list, Paolo Bonzini

On Fri, Oct 11, 2019 at 5:18 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> > Set IA32_XSS for the guest and host during VM Enter and VM Exit
> > transitions rather than by using the MSR-load areas.
> >
> > By moving away from using the MSR-load area we can have a unified
> > solution for both AMD and Intel.
> >
> > Reviewed-by: Jim Mattson <jmattson@google.com>
> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  1 +
> >  arch/x86/kvm/svm.c              |  7 +++++--
> >  arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
> >  arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
> >  arch/x86/kvm/x86.h              |  4 ++--
> >  5 files changed, 37 insertions(+), 20 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 50eb430b0ad8..634c2598e389 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
> >       u64 smbase;
> >       u64 smi_count;
> >       bool tpr_access_reporting;
> > +     bool xsaves_enabled;
> >       u64 ia32_xss;
> >       u64 microcode_version;
> >       u64 arch_capabilities;
> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > index f8ecb6df5106..da69e95beb4d 100644
> > --- a/arch/x86/kvm/svm.c
> > +++ b/arch/x86/kvm/svm.c
> > @@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> >       svm->vmcb->save.cr2 = vcpu->arch.cr2;
> >
> >       clgi();
> > -     kvm_load_guest_xcr0(vcpu);
> > +     kvm_load_guest_xsave_controls(vcpu);
> >
> >       if (lapic_in_kernel(vcpu) &&
> >               vcpu->arch.apic->lapic_timer.timer_advance_ns)
> > @@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> >       if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
> >               kvm_before_interrupt(&svm->vcpu);
> >
> > -     kvm_put_guest_xcr0(vcpu);
> > +     kvm_load_host_xsave_controls(vcpu);
> >       stgi();
> >
> >       /* Any pending NMI will happen here */
> > @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
> >  {
> >       struct vcpu_svm *svm = to_svm(vcpu);
> >
> > +     vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > +                                 boot_cpu_has(X86_FEATURE_XSAVES);
>
> This looks very much like a functional change to SVM, which feels wrong
> for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
> guest and host IA32_XSS".  Shouldn't this be unconditionally set false
> in this patch, and then enabled in "kvm: svm: Add support for XSAVES on
> AMD"?

Nothing is being enabled here; vcpu->arch.xsaves_enabled simply tells
us whether or not the guest can execute the XSAVES instruction. Any
guest with the ability to set CR4.OSXSAVE on an AMD host that supports
XSAVES can use the instruction.

> > +
> >       /* Update nrips enabled cache */
> >       svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 409e9a7323f1..ce3020914c69 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
> >  static bool __read_mostly nested = 1;
> >  module_param(nested, bool, S_IRUGO);
> >
> > -static u64 __read_mostly host_xss;
> > -
> >  bool __read_mostly enable_pml = 1;
> >  module_param_named(pml, enable_pml, bool, S_IRUGO);
> >
> > @@ -2074,11 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >               if (data != 0)
> >                       return 1;
> >               vcpu->arch.ia32_xss = data;
> > -             if (vcpu->arch.ia32_xss != host_xss)
> > -                     add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> > -                             vcpu->arch.ia32_xss, host_xss, false);
> > -             else
> > -                     clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
>
> I'm pretty sure the "vmx_xsaves_supported()" check in this case statement
> can be removed after this patch.  The host support check was necessary
> when the MSR load/save list was being used, but moving to xsaves_enabled
> means there's no risk of writing MSR_IA32_XSS when it's not supported.

I agree. vmx_xsaves_supported() only tells us if the XSAVES
instruction can be enabled. It does not tell us anything about the
existence of IA32_XSS in the guest.

> >               break;
> >       case MSR_IA32_RTIT_CTL:
> >               if ((pt_mode != PT_MODE_HOST_GUEST) ||
> > @@ -4038,6 +4031,8 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
> >                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> >                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
> >
> > +             vcpu->arch.xsaves_enabled = xsaves_enabled;
>
> Doesn't this conflict with the direction of the previous patch, "KVM: VMX:
> Remove unneeded check for X86_FEATURE_XSAVE"?  xsaves_enabled is %true iff
> both XSAVE and XSAVES are exposed to the guest.

There's no conflict, because the predicates are different. The
previous patch removed the test for XSAVE in the emulation of
RDMSR/WRMSR(IA32_XSS), because the presence of the IA32_XSS MSR
depends only on CPUID.(EAX=0DH, ECX=1):EAX.XSS[bit 3]. However,
architecturally, the XSAVES instruction will raise #UD if either
CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit
3] = 0. (Unfortunately, AMD does not allow us to enforce architectural
behavior without breaking the existing ABI, as Paolo pointed out
previously. This puts us in the bizarre situation where an AMD guest
may be able to execute XSAVES even though it cannot read or write
IA32_XSS.)
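
Expressed as a sketch (hypothetical helpers built on KVM's existing
guest_cpuid_has()), the two predicates are:

/* Architecturally, XSAVES executes only if both CPUID bits are set
 * (which, as noted above, KVM cannot enforce on AMD)... */
static bool guest_can_exec_xsaves(struct kvm_vcpu *vcpu)
{
	return guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
	       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
}

/* ...while the IA32_XSS MSR exists based on the XSS bit alone. */
static bool guest_has_ia32_xss(struct kvm_vcpu *vcpu)
{
	return guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
}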

> Alternatively, couldn't KVM check arch.xsaves_enabled in the MSR handling?

No. The existence of the IA32_XSS MSR is not the same as the ability
of the guest to execute XSAVES.

> > +
> >               if (!xsaves_enabled)
> >                       exec_control &= ~SECONDARY_EXEC_XSAVES;
> >
> > @@ -6540,7 +6535,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >       if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
> >               vmx_set_interrupt_shadow(vcpu, 0);
> >
> > -     kvm_load_guest_xcr0(vcpu);
> > +     kvm_load_guest_xsave_controls(vcpu);
> >
> >       if (static_cpu_has(X86_FEATURE_PKU) &&
> >           kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> > @@ -6647,7 +6642,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >                       __write_pkru(vmx->host_pkru);
> >       }
> >
> > -     kvm_put_guest_xcr0(vcpu);
> > +     kvm_load_host_xsave_controls(vcpu);
> >
> >       vmx->nested.nested_run_pending = 0;
> >       vmx->idt_vectoring_info = 0;
> > @@ -7091,6 +7086,12 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> >  {
> >       struct vcpu_vmx *vmx = to_vmx(vcpu);
> >
> > +     /*
> > +      * This will be set back to true in vmx_compute_secondary_exec_control()
> > +      * if it should be true, so setting it to false here is okay.
>
> Or simply:
>
>         /* xsaves_enabled is recomputed in vmx_compute_secondary_exec_control(). */
>
> > +      */
> > +     vcpu->arch.xsaves_enabled = false;
> > +
> >       if (cpu_has_secondary_exec_ctrls()) {
> >               vmx_compute_secondary_exec_control(vmx);
> >               vmcs_set_secondary_exec_control(vmx);
> > @@ -7599,9 +7600,6 @@ static __init int hardware_setup(void)
> >               WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
> >       }
> >
> > -     if (boot_cpu_has(X86_FEATURE_XSAVES))
> > -             rdmsrl(MSR_IA32_XSS, host_xss);
> > -
> >       if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
> >           !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
> >               enable_vpid = 0;
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 661e2bf38526..a61570d7034b 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -176,6 +176,8 @@ struct kvm_shared_msrs {
> >  static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
> >  static struct kvm_shared_msrs __percpu *shared_msrs;
> >
> > +static u64 __read_mostly host_xss;
> > +
> >  struct kvm_stats_debugfs_item debugfs_entries[] = {
> >       { "pf_fixed", VCPU_STAT(pf_fixed) },
> >       { "pf_guest", VCPU_STAT(pf_guest) },
> > @@ -812,27 +814,37 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_lmsw);
> >
> > -void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
> > +void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu)
> >  {
> >       if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
> >                       !vcpu->guest_xcr0_loaded) {
> >               /* kvm_set_xcr() also depends on this */
> >               if (vcpu->arch.xcr0 != host_xcr0)
> >                       xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
> > +
> > +             if (vcpu->arch.xsaves_enabled &&
> > +                 vcpu->arch.ia32_xss != host_xss)
> > +                     wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
> > +
> >               vcpu->guest_xcr0_loaded = 1;
> >       }
> >  }
> > -EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
> > +EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_controls);
> >
> > -void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
> > +void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu)
> >  {
> >       if (vcpu->guest_xcr0_loaded) {
> >               if (vcpu->arch.xcr0 != host_xcr0)
> >                       xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
> > +
> > +             if (vcpu->arch.xsaves_enabled &&
> > +                 vcpu->arch.ia32_xss != host_xss)
> > +                     wrmsrl(MSR_IA32_XSS, host_xss);
> > +
> >               vcpu->guest_xcr0_loaded = 0;
> >       }
> >  }
> > -EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
> > +EXPORT_SYMBOL_GPL(kvm_load_host_xsave_controls);
> >
> >  static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
> >  {
> > @@ -9293,6 +9305,9 @@ int kvm_arch_hardware_setup(void)
> >               kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
> >       }
> >
> > +     if (boot_cpu_has(X86_FEATURE_XSAVES))
> > +             rdmsrl(MSR_IA32_XSS, host_xss);
>
> More code that goes beyond the subject of the patch, as host_xss is now
> being read and potentially switched on SVM platforms.  I don't mind the
> change itself being part of this patch (unlike setting xsaves_enabled),
> but IMO the subject should make it more clear that this is a common x86
> change.  Arguably this should be two patches, one to do what this subject
> says and a second to move the code to common x86, but that seems like
> overkill.
>
> >       kvm_init_msr_list();
> >       return 0;
> >  }
> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > index dbf7442a822b..0d04e865665b 100644
> > --- a/arch/x86/kvm/x86.h
> > +++ b/arch/x86/kvm/x86.h
> > @@ -366,7 +366,7 @@ static inline bool kvm_pat_valid(u64 data)
> >       return (data | ((data & 0x0202020202020202ull) << 1)) == data;
> >  }
> >
> > -void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
> > -void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
> > +void kvm_load_guest_xsave_controls(struct kvm_vcpu *vcpu);
> > +void kvm_load_host_xsave_controls(struct kvm_vcpu *vcpu);
> >
> >  #endif
> > --
> > 2.23.0.700.g56cf767bdb-goog
> >

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-12 17:36     ` Jim Mattson
@ 2019-10-14 19:05       ` Sean Christopherson
  2019-10-14 21:01         ` Jim Mattson
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2019-10-14 19:05 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Aaron Lewis, Babu Moger, Yang Weijiang,
	Sebastian Andrzej Siewior, kvm list, Paolo Bonzini

On Sat, Oct 12, 2019 at 10:36:15AM -0700, Jim Mattson wrote:
> On Fri, Oct 11, 2019 at 5:18 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> > > Set IA32_XSS for the guest and host during VM Enter and VM Exit
> > > transitions rather than by using the MSR-load areas.
> > >
> > > By moving away from using the MSR-load area we can have a unified
> > > solution for both AMD and Intel.
> > >
> > > Reviewed-by: Jim Mattson <jmattson@google.com>
> > > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > > ---
> > >  arch/x86/include/asm/kvm_host.h |  1 +
> > >  arch/x86/kvm/svm.c              |  7 +++++--
> > >  arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
> > >  arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
> > >  arch/x86/kvm/x86.h              |  4 ++--
> > >  5 files changed, 37 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > index 50eb430b0ad8..634c2598e389 100644
> > > --- a/arch/x86/include/asm/kvm_host.h
> > > +++ b/arch/x86/include/asm/kvm_host.h
> > > @@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
> > >       u64 smbase;
> > >       u64 smi_count;
> > >       bool tpr_access_reporting;
> > > +     bool xsaves_enabled;
> > >       u64 ia32_xss;
> > >       u64 microcode_version;
> > >       u64 arch_capabilities;
> > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > > index f8ecb6df5106..da69e95beb4d 100644
> > > --- a/arch/x86/kvm/svm.c
> > > +++ b/arch/x86/kvm/svm.c
> > > @@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > >       svm->vmcb->save.cr2 = vcpu->arch.cr2;
> > >
> > >       clgi();
> > > -     kvm_load_guest_xcr0(vcpu);
> > > +     kvm_load_guest_xsave_controls(vcpu);
> > >
> > >       if (lapic_in_kernel(vcpu) &&
> > >               vcpu->arch.apic->lapic_timer.timer_advance_ns)
> > > @@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > >       if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
> > >               kvm_before_interrupt(&svm->vcpu);
> > >
> > > -     kvm_put_guest_xcr0(vcpu);
> > > +     kvm_load_host_xsave_controls(vcpu);
> > >       stgi();
> > >
> > >       /* Any pending NMI will happen here */
> > > @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
> > >  {
> > >       struct vcpu_svm *svm = to_svm(vcpu);
> > >
> > > +     vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > > +                                 boot_cpu_has(X86_FEATURE_XSAVES);
> >
> > This looks very much like a functional change to SVM, which feels wrong
> > for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
> > guest and host IA32_XSS".  Shouldn't this be unconditionally set false
> > in this patch, and then enabled in "kvm: svm: Add support for XSAVES on
> > AMD"?
> 
> Nothing is being enabled here. Vcpu->arch.xsaves_enabled simply tells
> us whether or not the guest can execute the XSAVES instruction. Any
> guest with the ability to set CR4.OSXSAVE on an AMD host that supports
> XSAVES can use the instruction.

Not enabling per se, but it's a functional change as it means MSR_IA32_XSS
will be written in kvm_load_{guest,host}_xsave_controls() if host_xss != 0.

> > > +
> > >       /* Update nrips enabled cache */
> > >       svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
> > >
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 409e9a7323f1..ce3020914c69 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
> > >  static bool __read_mostly nested = 1;
> > >  module_param(nested, bool, S_IRUGO);
> > >
> > > -static u64 __read_mostly host_xss;
> > > -
> > >  bool __read_mostly enable_pml = 1;
> > >  module_param_named(pml, enable_pml, bool, S_IRUGO);
> > >
> > > @@ -2074,11 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > >               if (data != 0)
> > >                       return 1;
> > >               vcpu->arch.ia32_xss = data;
> > > -             if (vcpu->arch.ia32_xss != host_xss)
> > > -                     add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> > > -                             vcpu->arch.ia32_xss, host_xss, false);
> > > -             else
> > > -                     clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
> >
> > I'm pretty sure the "vmx_xsaves_supported()" check in this case statement
> > can be removed after this patch.  The host support check was necessary
> > when the MSR load/save list was being used, but moving to xsaves_enabled
> > means there's no risk of writing MSR_IA32_XSS when it's not supported.
> 
> I agree. vmx_xsaves_supported() only tells us if the XSAVES
> instruction can be enabled. It does not tell us anything about the
> existence of IA32_XSS in the guest.
> 
> > >               break;
> > >       case MSR_IA32_RTIT_CTL:
> > >               if ((pt_mode != PT_MODE_HOST_GUEST) ||
> > > @@ -4038,6 +4031,8 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
> > >                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > >                       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
> > >
> > > +             vcpu->arch.xsaves_enabled = xsaves_enabled;
> >
> > Doesn't this conflict with the direction of the previous patch, "KVM: VMX:
> > Remove unneeded check for X86_FEATURE_XSAVE"?  xsaves_enabled is %true iff
> > both XSAVE and XSAVES are exposed to the guest.
> 
> There's no conflict, because the predicates are different. The
> previous patch removed the test for XSAVE in the emulation of
> RDMSR/WRMSR(IA32_XSS), because the presence of the IA32_XSS MSR
> depends only on CPUID.(EAX=0DH, ECX=1):EAX.XSS[bit 3]. However,
> architecturally, the XSAVES instruction will raise #UD if either
> CPUID.01H:ECX.XSAVE[bit 26] = 0 or CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit
> 3] = 0. (Unfortunately, AMD does not allow us to enforce architectural
> behavior without breaking the existing ABI, as Paolo pointed out
> previously. This puts us in the bizarre situation where an AMD guest
> may be able to execute XSAVES even though it cannot read or write
> IA32_XSS.)
> 
> > Alternatively, couldn't KVM check arch.xsaves_enabled in the MSR handling?
> 
> No. The existence of the IA32_XSS MSR is not the same as the ability
> of the guest to execute XSAVES.

Ugh, that's obnoxious.
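
To make the distinction concrete, the two predicates look roughly like
this (illustrative sketch using KVM's guest_cpuid_has(), not code from
the series):

        /* MSR_IA32_XSS exists iff CPUID.(EAX=0DH,ECX=1):EAX.XSS[bit 3] = 1. */
        bool has_xss_msr = guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);

        /*
         * Architecturally, XSAVES executes without #UD only if XSAVE is
         * *also* enumerated (and CR4.OSXSAVE is set) -- a strictly
         * stronger condition than the MSR existing.  AMD hardware does
         * not let KVM enforce this, hence the bizarre situation above.
         */
        bool can_exec_xsaves = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
                               guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);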

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-14 19:05       ` Sean Christopherson
@ 2019-10-14 21:01         ` Jim Mattson
  2019-10-14 23:55           ` Sean Christopherson
  0 siblings, 1 reply; 11+ messages in thread
From: Jim Mattson @ 2019-10-14 21:01 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Aaron Lewis, Babu Moger, Yang Weijiang,
	Sebastian Andrzej Siewior, kvm list, Paolo Bonzini

On Mon, Oct 14, 2019 at 12:05 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Sat, Oct 12, 2019 at 10:36:15AM -0700, Jim Mattson wrote:
> > On Fri, Oct 11, 2019 at 5:18 PM Sean Christopherson
> > <sean.j.christopherson@intel.com> wrote:
> > >
> > > On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> > > > Set IA32_XSS for the guest and host during VM Enter and VM Exit
> > > > transitions rather than by using the MSR-load areas.
> > > >
> > > > By moving away from using the MSR-load area we can have a unified
> > > > solution for both AMD and Intel.
> > > >
> > > > Reviewed-by: Jim Mattson <jmattson@google.com>
> > > > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> > > > ---
> > > >  arch/x86/include/asm/kvm_host.h |  1 +
> > > >  arch/x86/kvm/svm.c              |  7 +++++--
> > > >  arch/x86/kvm/vmx/vmx.c          | 22 ++++++++++------------
> > > >  arch/x86/kvm/x86.c              | 23 +++++++++++++++++++----
> > > >  arch/x86/kvm/x86.h              |  4 ++--
> > > >  5 files changed, 37 insertions(+), 20 deletions(-)
> > > >
> > > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > > index 50eb430b0ad8..634c2598e389 100644
> > > > --- a/arch/x86/include/asm/kvm_host.h
> > > > +++ b/arch/x86/include/asm/kvm_host.h
> > > > @@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
> > > >       u64 smbase;
> > > >       u64 smi_count;
> > > >       bool tpr_access_reporting;
> > > > +     bool xsaves_enabled;
> > > >       u64 ia32_xss;
> > > >       u64 microcode_version;
> > > >       u64 arch_capabilities;
> > > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > > > index f8ecb6df5106..da69e95beb4d 100644
> > > > --- a/arch/x86/kvm/svm.c
> > > > +++ b/arch/x86/kvm/svm.c
> > > > @@ -5628,7 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > > >       svm->vmcb->save.cr2 = vcpu->arch.cr2;
> > > >
> > > >       clgi();
> > > > -     kvm_load_guest_xcr0(vcpu);
> > > > +     kvm_load_guest_xsave_controls(vcpu);
> > > >
> > > >       if (lapic_in_kernel(vcpu) &&
> > > >               vcpu->arch.apic->lapic_timer.timer_advance_ns)
> > > > @@ -5778,7 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
> > > >       if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
> > > >               kvm_before_interrupt(&svm->vcpu);
> > > >
> > > > -     kvm_put_guest_xcr0(vcpu);
> > > > +     kvm_load_host_xsave_controls(vcpu);
> > > >       stgi();
> > > >
> > > >       /* Any pending NMI will happen here */
> > > > @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
> > > >  {
> > > >       struct vcpu_svm *svm = to_svm(vcpu);
> > > >
> > > > +     vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > > > +                                 boot_cpu_has(X86_FEATURE_XSAVES);
> > >
> > > This looks very much like a functional change to SVM, which feels wrong
> > > for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
> > > guest and host IA32_XSS".  Shouldn't this be unconditionally set to false
> > > in this patch, and then enabled in "kvm: svm: Add support for XSAVES on
> > > AMD"?
> >
> > Nothing is being enabled here. The vcpu->arch.xsaves_enabled flag simply
> > tells us whether or not the guest can execute the XSAVES instruction. Any
> > guest with the ability to set CR4.OSXSAVE on an AMD host that supports
> > XSAVES can use the instruction.
>
> Not enabling per se, but it's a functional change as it means MSR_IA32_XSS
> will be written in kvm_load_{guest,host}_xsave_controls() if host_xss!=0.

Fortunately, host_xss is guaranteed to be zero for the nonce. :-)

Perhaps the commit message just needs to be updated? The only
alternatives I see are:
1. Deliberately introducing buggy code to be removed later in the series, or
2. Introducing SVM-specific code first, to be removed later in the series.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  2019-10-14 21:01         ` Jim Mattson
@ 2019-10-14 23:55           ` Sean Christopherson
  0 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2019-10-14 23:55 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Aaron Lewis, Babu Moger, Yang Weijiang,
	Sebastian Andrzej Siewior, kvm list, Paolo Bonzini

On Mon, Oct 14, 2019 at 02:01:41PM -0700, Jim Mattson wrote:
> On Mon, Oct 14, 2019 at 12:05 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > On Sat, Oct 12, 2019 at 10:36:15AM -0700, Jim Mattson wrote:
> > > On Fri, Oct 11, 2019 at 5:18 PM Sean Christopherson
> > > <sean.j.christopherson@intel.com> wrote:
> > > >
> > > > On Fri, Oct 11, 2019 at 12:40:29PM -0700, Aaron Lewis wrote:
> > > > > @@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
> > > > >  {
> > > > >       struct vcpu_svm *svm = to_svm(vcpu);
> > > > >
> > > > > +     vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> > > > > +                                 boot_cpu_has(X86_FEATURE_XSAVES);
> > > >
> > > > This looks very much like a functional change to SVM, which feels wrong
> > > > for a patch with a subject of "KVM: VMX: Use wrmsr for switching between
> > > > guest and host IA32_XSS".  Shouldn't this be unconditionally set to false
> > > > in this patch, and then enabled in "kvm: svm: Add support for XSAVES on
> > > > AMD"?
> > >
> > > Nothing is being enabled here. The vcpu->arch.xsaves_enabled flag simply
> > > tells us whether or not the guest can execute the XSAVES instruction. Any
> > > guest with the ability to set CR4.OSXSAVE on an AMD host that supports
> > > XSAVES can use the instruction.
> >
> > Not enabling per se, but it's a functional change as it means MSR_IA32_XSS
> > will be written in kvm_load_{guest,host}_xsave_controls() if host_xss!=0.
> 
> Fortunately, host_xss is guaranteed to be zero for the nonce. :-)

And then someone backports something and things go sideways.
 
> Perhaps the commit message just needs to be updated? The only
> alternatives I see are:
> 1. Deliberately introducing buggy code to be removed later in the series, or
> 2. Introducing SVM-specific code first, to be removed later in the series.

I'd go with:

  2a. Make the change actually local to VMX as the subject implies, and
      introduce SVM-specific code in a separate patch, to be removed
      later.

Then the series looks like:

  KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE
  KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS
  kvm: svm: Add support for XSAVES on AMD
  KVM: x86: Move IA32_XSS handling to common x86 code
  KVM: x86: Add IA32_XSS to the emulated_msrs list

It'd mean introducing vcpu->arch.xsaves_enabled in the VMX patch and
temporarily having it unused by SVM, but that's minor and can't backfire
if host_xss suddenly is non-zero.

Side topic, I'm pretty sure vcpu->guest_xcr0_loaded is a waste of an 'int'
and a conditional branch.  VMX and SVM are the only users, and both
unconditionally pair kvm_load_guest_xcr0() with kvm_put_guest_xcr0().
Removing the useless flag as another prerequisite patch would make the
intermediate VMX and SVM patches slightly less awkward as they wouldn't
have to decide between adding a kludgy check on guest_xcr0_loaded and
ignoring it altogether.
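
A sketch of that cleanup (assuming the load/put pairing guarantee holds;
this is not a patch from the series):

        void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
        {
                /*
                 * No guest_xcr0_loaded bookkeeping: VMX and SVM always pair
                 * this with kvm_put_guest_xcr0(), so the flag carries no
                 * information.
                 */
                if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
                    vcpu->arch.xcr0 != host_xcr0)
                        xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
        }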

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-10-14 23:55 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-11 19:40 [PATCH v2 0/5] Add support for XSAVES on AMD and unify it with Intel Aaron Lewis
2019-10-11 19:40 ` [PATCH v2 1/5] KVM: VMX: Remove unneeded check for X86_FEATURE_XSAVE Aaron Lewis
2019-10-11 19:40 ` [PATCH v2 2/5] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS Aaron Lewis
2019-10-12  0:18   ` Sean Christopherson
2019-10-12 17:36     ` Jim Mattson
2019-10-14 19:05       ` Sean Christopherson
2019-10-14 21:01         ` Jim Mattson
2019-10-14 23:55           ` Sean Christopherson
2019-10-11 19:40 ` [PATCH v2 3/5] kvm: svm: Add support for XSAVES on AMD Aaron Lewis
2019-10-11 19:40 ` [PATCH v2 4/5] kvm: x86: Add IA32_XSS to the emulated_msrs list Aaron Lewis
2019-10-11 19:40 ` [PATCH v2 5/5] kvm: tests: Add test to verify MSR_IA32_XSS Aaron Lewis

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).