* [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel
@ 2019-10-21 23:30 Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 1/9] KVM: x86: Introduce vcpu->arch.xsaves_enabled Aaron Lewis
                   ` (9 more replies)
  0 siblings, 10 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Unify AMD's and Intel's approach for supporting XSAVES.  To do this,
change Intel's approach from using the MSR-load areas to writing the
guest/host values to IA32_XSS on VM-entry/VM-exit.  Switching to this
strategy allows for a common approach on both AMD and Intel.
Additionally, define svm_xsaves_supported() based on AMD's feedback, and
add vcpu->arch.xsaves_enabled to track whether XSAVES is enabled in the
guest.

This change prepares for IA32_XSS to take on a non-zero value in the
future, which may happen sooner rather than later with support for the
guest CET feature being added.

v2 -> v3:
 - Remove guest_xcr0_loaded from kvm_vcpu.
 - Add vcpu->arch.xsaves_enabled.
 - Add staged rollout to load the hardware IA32_XSS MSR with guest/host
   values on VM-entry and VM-exit:
      1) Introduce vcpu->arch.xsaves_enabled.
     2) Add svm implementation for switching between guest and host IA32_XSS.
     3) Add vmx implementation for switching between guest and host IA32_XSS.
     4) Remove svm and vmx implementation and add it to common code.

v1 -> v2:
 - Add the flag xsaves_enabled to kvm_vcpu_arch to track when XSAVES is
   enabled in the guest, whether or not XSAVES is enumerated in the
   guest CPUID.
 - Remove code that sets the X86_FEATURE_XSAVES bit in the guest CPUID
   which was added in patch "Enumerate XSAVES in guest CPUID when it is
   available to the guest".  As a result we no longer need that patch.
 - Added a comment to kvm_set_msr_common to describe how to save/restore
   PT MSRs without using XSAVES/XRSTORS.
 - Added more comments to the "Add support for XSAVES on AMD" patch.
 - Replaced vcpu_set_msr_expect_result() with _vcpu_set_msr() in the
   test library.

Aaron Lewis (9):
  KVM: x86: Introduce vcpu->arch.xsaves_enabled
  KVM: VMX: Fix conditions for guest IA32_XSS support
  KVM: x86: Remove unneeded kvm_vcpu variable, guest_xcr0_loaded
  KVM: SVM: Use wrmsr for switching between guest and host IA32_XSS on AMD
  KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS on Intel
  KVM: x86: Move IA32_XSS-swapping on VM-entry/VM-exit to common x86 code
  kvm: x86: Move IA32_XSS to kvm_{get,set}_msr_common
  kvm: svm: Update svm_xsaves_supported
  kvm: tests: Add test to verify MSR_IA32_XSS

 arch/x86/include/asm/kvm_host.h               |  1 +
 arch/x86/kvm/svm.c                            |  9 ++-
 arch/x86/kvm/vmx/vmx.c                        | 41 ++--------
 arch/x86/kvm/x86.c                            | 52 ++++++++++---
 arch/x86/kvm/x86.h                            |  4 +-
 include/linux/kvm_host.h                      |  1 -
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 72 +++++++++++++++---
 .../selftests/kvm/x86_64/xss_msr_test.c       | 76 +++++++++++++++++++
 11 files changed, 205 insertions(+), 60 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c

-- 
2.23.0.866.gb869b98d4c-goog


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 1/9] KVM: x86: Introduce vcpu->arch.xsaves_enabled
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 2/9] KVM: VMX: Fix conditions for guest IA32_XSS support Aaron Lewis
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Cache whether XSAVES is enabled in the guest by adding xsaves_enabled to
vcpu->arch.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: If4638e0901c28a4494dad2e103e2c075e8ab5d68
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/svm.c              | 3 +++
 arch/x86/kvm/vmx/vmx.c          | 5 +++++
 3 files changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 50eb430b0ad8..634c2598e389 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -562,6 +562,7 @@ struct kvm_vcpu_arch {
 	u64 smbase;
 	u64 smi_count;
 	bool tpr_access_reporting;
+	bool xsaves_enabled;
 	u64 ia32_xss;
 	u64 microcode_version;
 	u64 arch_capabilities;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f8ecb6df5106..f64041368594 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5887,6 +5887,9 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+				    boot_cpu_has(X86_FEATURE_XSAVES);
+
 	/* Update nrips enabled cache */
 	svm->nrips_enabled = !!guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..34525af44353 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4040,6 +4040,8 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 			guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
 			guest_cpuid_has(vcpu, X86_FEATURE_XSAVES);
 
+		vcpu->arch.xsaves_enabled = xsaves_enabled;
+
 		if (!xsaves_enabled)
 			exec_control &= ~SECONDARY_EXEC_XSAVES;
 
@@ -7093,6 +7095,9 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
+	/* xsaves_enabled is recomputed in vmx_compute_secondary_exec_control(). */
+	vcpu->arch.xsaves_enabled = false;
+
 	if (cpu_has_secondary_exec_ctrls()) {
 		vmx_compute_secondary_exec_control(vmx);
 		vmcs_set_secondary_exec_control(vmx);
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 2/9] KVM: VMX: Fix conditions for guest IA32_XSS support
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 1/9] KVM: x86: Introduce vcpu->arch.xsaves_enabled Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-22 13:43   ` Paolo Bonzini
  2019-10-21 23:30 ` [PATCH v3 3/9] KVM: x86: Remove unneeded kvm_vcpu variable, guest_xcr0_loaded Aaron Lewis
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Volume 4 of the SDM says that IA32_XSS is supported
if CPUID(EAX=0DH,ECX=1):EAX.XSS[bit 3] is set, so only the
X86_FEATURE_XSAVES check is necessary (X86_FEATURE_XSAVES is the Linux
name for CPUID(EAX=0DH,ECX=1):EAX.XSS[bit 3]).

Fixes: 4d763b168e9c5 ("KVM: VMX: check CPUID before allowing read/write of IA32_XSS")
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I9059b9f2e3595e4b09a4cdcf14b933b22ebad419
---
 arch/x86/kvm/vmx/vmx.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 34525af44353..a9b070001c3e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1821,10 +1821,8 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
 	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
 			return 1;
 		msr_info->data = vcpu->arch.ia32_xss;
 		break;
@@ -2064,10 +2062,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
 	case MSR_IA32_XSS:
-		if (!vmx_xsaves_supported() ||
-		    (!msr_info->host_initiated &&
-		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
-		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
 			return 1;
 		/*
 		 * The only supported bit as of Skylake is bit 8, but
@@ -2076,11 +2072,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data != 0)
 			return 1;
 		vcpu->arch.ia32_xss = data;
-		if (vcpu->arch.ia32_xss != host_xss)
-			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
-				vcpu->arch.ia32_xss, host_xss, false);
-		else
-			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+		if (vcpu->arch.xsaves_enabled) {
+			if (vcpu->arch.ia32_xss != host_xss)
+				add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+					vcpu->arch.ia32_xss, host_xss, false);
+			else
+				clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+		}
 		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 3/9] KVM: x86: Remove unneeded kvm_vcpu variable, guest_xcr0_loaded
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 1/9] KVM: x86: Introduce vcpu->arch.xsaves_enabled Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 2/9] KVM: VMX: Fix conditions for guest IA32_XSS support Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 4/9] KVM: SVM: Use wrmsr for switching between guest and host IA32_XSS on AMD Aaron Lewis
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis, Sean Christopherson

The kvm_vcpu variable, guest_xcr0_loaded, is a waste of an 'int'
and a conditional branch.  VMX and SVM are the only users, and both
unconditionally pair kvm_load_guest_xcr0() with kvm_put_guest_xcr0()
making this check unnecessary. Without this variable, the predicates in
kvm_load_guest_xcr0 and kvm_put_guest_xcr0 should match.

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I7b1eb9b62969d7bbb2850f27e42f863421641b23
---
 arch/x86/kvm/x86.c       | 16 +++++-----------
 include/linux/kvm_host.h |  1 -
 2 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 661e2bf38526..39eac7b2aa01 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -815,22 +815,16 @@ EXPORT_SYMBOL_GPL(kvm_lmsw);
 void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
 {
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-			!vcpu->guest_xcr0_loaded) {
-		/* kvm_set_xcr() also depends on this */
-		if (vcpu->arch.xcr0 != host_xcr0)
-			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
-		vcpu->guest_xcr0_loaded = 1;
-	}
+	    vcpu->arch.xcr0 != host_xcr0)
+		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
 
 void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->guest_xcr0_loaded) {
-		if (vcpu->arch.xcr0 != host_xcr0)
-			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
-		vcpu->guest_xcr0_loaded = 0;
-	}
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+	    vcpu->arch.xcr0 != host_xcr0)
+		xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
 }
 EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 719fc3e15ea4..d2017302996c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -278,7 +278,6 @@ struct kvm_vcpu {
 	struct mutex mutex;
 	struct kvm_run *run;
 
-	int guest_xcr0_loaded;
 	struct swait_queue_head wq;
 	struct pid __rcu *pid;
 	int sigset_active;
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 4/9] KVM: SVM: Use wrmsr for switching between guest and host IA32_XSS on AMD
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (2 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 3/9] KVM: x86: Remove unneeded kvm_vcpu variable, guest_xcr0_loaded Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 5/9] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS on Intel Aaron Lewis
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis, Sean Christopherson

When the guest can execute the XSAVES/XRSTORS instructions, set the
hardware IA32_XSS MSR to guest/host values on VM-entry/VM-exit.

Note that vcpu->arch.ia32_xss is currently guaranteed to be 0 on AMD,
since there is no way to change it.

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Id51a782462086e6d7a3ab621838e200f1c005afd
---
 arch/x86/kvm/svm.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f64041368594..2702ebba24ba 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -115,6 +115,8 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 
 static bool erratum_383_found __read_mostly;
 
+static u64 __read_mostly host_xss;
+
 static const u32 host_save_user_msrs[] = {
 #ifdef CONFIG_X86_64
 	MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
@@ -1400,6 +1402,9 @@ static __init int svm_hardware_setup(void)
 			pr_info("Virtual GIF supported\n");
 	}
 
+	if (boot_cpu_has(X86_FEATURE_XSAVES))
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	return 0;
 
 err:
@@ -5590,6 +5595,22 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 	svm_complete_interrupts(svm);
 }
 
+static void svm_load_guest_xss(struct kvm_vcpu *vcpu)
+{
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+	    vcpu->arch.xsaves_enabled &&
+	    vcpu->arch.ia32_xss != host_xss)
+		wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+}
+
+static void svm_load_host_xss(struct kvm_vcpu *vcpu)
+{
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+	    vcpu->arch.xsaves_enabled &&
+	    vcpu->arch.ia32_xss != host_xss)
+		wrmsrl(MSR_IA32_XSS, host_xss);
+}
+
 static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -5629,6 +5650,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	clgi();
 	kvm_load_guest_xcr0(vcpu);
+	svm_load_guest_xss(vcpu);
 
 	if (lapic_in_kernel(vcpu) &&
 		vcpu->arch.apic->lapic_timer.timer_advance_ns)
@@ -5778,6 +5800,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
+	svm_load_host_xss(vcpu);
 	kvm_put_guest_xcr0(vcpu);
 	stgi();
 
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 5/9] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS on Intel
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (3 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 4/9] KVM: SVM: Use wrmsr for switching between guest and host IA32_XSS on AMD Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 6/9] KVM: x86: Move IA32_XSS-swapping on VM-entry/VM-exit to common x86 code Aaron Lewis
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

When the guest can execute the XSAVES/XRSTORS instructions, use wrmsr to
set the hardware IA32_XSS MSR to guest/host values on VM-entry/VM-exit,
rather than the MSR-load areas. By using the same approach as AMD, we
will be able to use a common implementation for both (in the next
patch).

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I9447d104b2615c04e39e4af0c911e1e7309bf464
---
 arch/x86/kvm/vmx/vmx.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a9b070001c3e..f3cd2e372c4a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2072,13 +2072,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data != 0)
 			return 1;
 		vcpu->arch.ia32_xss = data;
-		if (vcpu->arch.xsaves_enabled) {
-			if (vcpu->arch.ia32_xss != host_xss)
-				add_atomic_switch_msr(vmx, MSR_IA32_XSS,
-					vcpu->arch.ia32_xss, host_xss, false);
-			else
-				clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
-		}
 		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
@@ -6492,6 +6485,22 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
 	}
 }
 
+static void vmx_load_guest_xss(struct kvm_vcpu *vcpu)
+{
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+	    vcpu->arch.xsaves_enabled &&
+	    vcpu->arch.ia32_xss != host_xss)
+		wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+}
+
+static void vmx_load_host_xss(struct kvm_vcpu *vcpu)
+{
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+	    vcpu->arch.xsaves_enabled &&
+	    vcpu->arch.ia32_xss != host_xss)
+		wrmsrl(MSR_IA32_XSS, host_xss);
+}
+
 bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 
 static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
@@ -6543,6 +6552,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		vmx_set_interrupt_shadow(vcpu, 0);
 
 	kvm_load_guest_xcr0(vcpu);
+	vmx_load_guest_xss(vcpu);
 
 	if (static_cpu_has(X86_FEATURE_PKU) &&
 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
@@ -6649,6 +6659,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 			__write_pkru(vmx->host_pkru);
 	}
 
+	vmx_load_host_xss(vcpu);
 	kvm_put_guest_xcr0(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 6/9] KVM: x86: Move IA32_XSS-swapping on VM-entry/VM-exit to common x86 code
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (4 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 5/9] KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS on Intel Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 7/9] kvm: x86: Move IA32_XSS to kvm_{get,set}_msr_common Aaron Lewis
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Hoist the vendor-specific code related to loading the hardware IA32_XSS
MSR with guest/host values on VM-entry/VM-exit to common x86 code.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Ic6e3430833955b98eb9b79ae6715cf2a3fdd6d82
---
 arch/x86/kvm/svm.c     | 27 ++-------------------------
 arch/x86/kvm/vmx/vmx.c | 27 ++-------------------------
 arch/x86/kvm/x86.c     | 38 ++++++++++++++++++++++++++++----------
 arch/x86/kvm/x86.h     |  4 ++--
 4 files changed, 34 insertions(+), 62 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2702ebba24ba..36d1cfd45c60 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -115,8 +115,6 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
 
 static bool erratum_383_found __read_mostly;
 
-static u64 __read_mostly host_xss;
-
 static const u32 host_save_user_msrs[] = {
 #ifdef CONFIG_X86_64
 	MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
@@ -1402,9 +1400,6 @@ static __init int svm_hardware_setup(void)
 			pr_info("Virtual GIF supported\n");
 	}
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
-
 	return 0;
 
 err:
@@ -5595,22 +5590,6 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 	svm_complete_interrupts(svm);
 }
 
-static void svm_load_guest_xss(struct kvm_vcpu *vcpu)
-{
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xsaves_enabled &&
-	    vcpu->arch.ia32_xss != host_xss)
-		wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
-}
-
-static void svm_load_host_xss(struct kvm_vcpu *vcpu)
-{
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xsaves_enabled &&
-	    vcpu->arch.ia32_xss != host_xss)
-		wrmsrl(MSR_IA32_XSS, host_xss);
-}
-
 static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -5649,8 +5628,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
 
 	clgi();
-	kvm_load_guest_xcr0(vcpu);
-	svm_load_guest_xss(vcpu);
+	kvm_load_guest_xsave_state(vcpu);
 
 	if (lapic_in_kernel(vcpu) &&
 		vcpu->arch.apic->lapic_timer.timer_advance_ns)
@@ -5800,8 +5778,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
-	svm_load_host_xss(vcpu);
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_state(vcpu);
 	stgi();
 
 	/* Any pending NMI will happen here */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f3cd2e372c4a..f7d292ac9921 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -106,8 +106,6 @@ module_param(enable_apicv, bool, S_IRUGO);
 static bool __read_mostly nested = 1;
 module_param(nested, bool, S_IRUGO);
 
-static u64 __read_mostly host_xss;
-
 bool __read_mostly enable_pml = 1;
 module_param_named(pml, enable_pml, bool, S_IRUGO);
 
@@ -6485,22 +6483,6 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
 	}
 }
 
-static void vmx_load_guest_xss(struct kvm_vcpu *vcpu)
-{
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xsaves_enabled &&
-	    vcpu->arch.ia32_xss != host_xss)
-		wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
-}
-
-static void vmx_load_host_xss(struct kvm_vcpu *vcpu)
-{
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xsaves_enabled &&
-	    vcpu->arch.ia32_xss != host_xss)
-		wrmsrl(MSR_IA32_XSS, host_xss);
-}
-
 bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 
 static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
@@ -6551,8 +6533,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		vmx_set_interrupt_shadow(vcpu, 0);
 
-	kvm_load_guest_xcr0(vcpu);
-	vmx_load_guest_xss(vcpu);
+	kvm_load_guest_xsave_state(vcpu);
 
 	if (static_cpu_has(X86_FEATURE_PKU) &&
 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
@@ -6659,8 +6640,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 			__write_pkru(vmx->host_pkru);
 	}
 
-	vmx_load_host_xss(vcpu);
-	kvm_put_guest_xcr0(vcpu);
+	kvm_load_host_xsave_state(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
 	vmx->idt_vectoring_info = 0;
@@ -7615,9 +7595,6 @@ static __init int hardware_setup(void)
 		WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
 	}
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		rdmsrl(MSR_IA32_XSS, host_xss);
-
 	if (!cpu_has_vmx_vpid() || !cpu_has_vmx_invvpid() ||
 	    !(cpu_has_vmx_invvpid_single() || cpu_has_vmx_invvpid_global()))
 		enable_vpid = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 39eac7b2aa01..259a30e4d3a9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -176,6 +176,8 @@ struct kvm_shared_msrs {
 static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
 static struct kvm_shared_msrs __percpu *shared_msrs;
 
+static u64 __read_mostly host_xss;
+
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "pf_fixed", VCPU_STAT(pf_fixed) },
 	{ "pf_guest", VCPU_STAT(pf_guest) },
@@ -812,21 +814,34 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
 }
 EXPORT_SYMBOL_GPL(kvm_lmsw);
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 {
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xcr0 != host_xcr0)
-		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
+
+		if (vcpu->arch.xcr0 != host_xcr0)
+			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
+
+		if (vcpu->arch.xsaves_enabled &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+	}
 }
-EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
 
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 {
-	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
-	    vcpu->arch.xcr0 != host_xcr0)
-		xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
+
+		if (vcpu->arch.xcr0 != host_xcr0)
+			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
+
+		if (vcpu->arch.xsaves_enabled &&
+		    vcpu->arch.ia32_xss != host_xss)
+			wrmsrl(MSR_IA32_XSS, host_xss);
+	}
+
 }
-EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
+EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);
 
 static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 {
@@ -9287,6 +9302,9 @@ int kvm_arch_hardware_setup(void)
 		kvm_default_tsc_scaling_ratio = 1ULL << kvm_tsc_scaling_ratio_frac_bits;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_XSAVES))
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	kvm_init_msr_list();
 	return 0;
 }
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index dbf7442a822b..250c2c932e46 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -366,7 +366,7 @@ static inline bool kvm_pat_valid(u64 data)
 	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
 }
 
-void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
-void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
+void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
+void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
 
 #endif
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 7/9] kvm: x86: Move IA32_XSS to kvm_{get,set}_msr_common
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (5 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 6/9] KVM: x86: Move IA32_XSS-swapping on VM-entry/VM-exit to common x86 code Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 8/9] kvm: svm: Update svm_xsaves_supported Aaron Lewis
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Hoist support for RDMSR/WRMSR of IA32_XSS from vmx into common code so
that it can be used for svm as well.

Right now, kvm only allows the guest IA32_XSS to be zero,
so the guest's usage of XSAVES will be exactly the same as XSAVEC.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Ie4b0f777d71e428fbee6e82071ac2d7618e9bb40
---
 arch/x86/kvm/vmx/vmx.c | 18 ------------------
 arch/x86/kvm/x86.c     | 20 ++++++++++++++++++++
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f7d292ac9921..b29511d63971 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1818,12 +1818,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
 				       &msr_info->data);
-	case MSR_IA32_XSS:
-		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
-			return 1;
-		msr_info->data = vcpu->arch.ia32_xss;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if (pt_mode != PT_MODE_HOST_GUEST)
 			return 1;
@@ -2059,18 +2053,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
 		return vmx_set_vmx_msr(vcpu, msr_index, data);
-	case MSR_IA32_XSS:
-		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
-			return 1;
-		/*
-		 * The only supported bit as of Skylake is bit 8, but
-		 * it is not supported on KVM.
-		 */
-		if (data != 0)
-			return 1;
-		vcpu->arch.ia32_xss = data;
-		break;
 	case MSR_IA32_RTIT_CTL:
 		if ((pt_mode != PT_MODE_HOST_GUEST) ||
 			vmx_rtit_ctl_check(vcpu, data) ||
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 259a30e4d3a9..cbbf792a04c1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2697,6 +2697,20 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr_info);
 		break;
+	case MSR_IA32_XSS:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
+			return 1;
+		/*
+		 * We do support PT if kvm_x86_ops->pt_supported(), but we do
+		 * not support IA32_XSS[bit 8]. Guests will have to use
+		 * RDMSR/WRMSR rather than XSAVES/XRSTORS to save/restore PT
+		 * MSRs.
+		 */
+		if (data != 0)
+			return 1;
+		vcpu->arch.ia32_xss = data;
+		break;
 	case MSR_SMI_COUNT:
 		if (!msr_info->host_initiated)
 			return 1;
@@ -3027,6 +3041,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
 		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
 				   msr_info->host_initiated);
+	case MSR_IA32_XSS:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
+			return 1;
+		msr_info->data = vcpu->arch.ia32_xss;
+		break;
 	case MSR_K7_CLK_CTL:
 		/*
 		 * Provide expected ramp-up count for K7. All other
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 8/9] kvm: svm: Update svm_xsaves_supported
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (6 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 7/9] kvm: x86: Move IA32_XSS to kvm_{get,set}_msr_common Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-21 23:30 ` [PATCH v3 9/9] kvm: tests: Add test to verify MSR_IA32_XSS Aaron Lewis
  2019-10-22 13:48 ` [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Paolo Bonzini
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

AMD CPUs now support XSAVES in a limited fashion (they require IA32_XSS
to be zero).

AMD has no equivalent of Intel's "Enable XSAVES/XRSTORS" VM-execution
control. Instead, XSAVES is always available to the guest when supported
on the host.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: I40dc2c682eb0d38c2208d95d5eb7bbb6c47f6317
---
 arch/x86/kvm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 36d1cfd45c60..dd8a6418d56c 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5965,7 +5965,7 @@ static bool svm_mpx_supported(void)
 
 static bool svm_xsaves_supported(void)
 {
-	return false;
+	return boot_cpu_has(X86_FEATURE_XSAVES);
 }
 
 static bool svm_umip_emulated(void)
-- 
2.23.0.866.gb869b98d4c-goog



* [PATCH v3 9/9] kvm: tests: Add test to verify MSR_IA32_XSS
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (7 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 8/9] kvm: svm: Update svm_xsaves_supported Aaron Lewis
@ 2019-10-21 23:30 ` Aaron Lewis
  2019-10-22 13:48 ` [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Paolo Bonzini
  9 siblings, 0 replies; 12+ messages in thread
From: Aaron Lewis @ 2019-10-21 23:30 UTC (permalink / raw)
  To: Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Paolo Bonzini, Jim Mattson, Aaron Lewis

Ensure that IA32_XSS appears in KVM_GET_MSR_INDEX_LIST if it can be set
to a non-zero value.

Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Ia2d644f69e2d6d8c27d7e0a7a45c2bf9c42bf5ff
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 72 +++++++++++++++---
 .../selftests/kvm/x86_64/xss_msr_test.c       | 76 +++++++++++++++++++
 5 files changed, 147 insertions(+), 10 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index b35da375530a..6e9ec34f8124 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -11,6 +11,7 @@
 /x86_64/vmx_close_while_nested_test
 /x86_64/vmx_set_nested_state_test
 /x86_64/vmx_tsc_adjust_test
+/x86_64/xss_msr_test
 /clear_dirty_log_test
 /dirty_log_test
 /kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5ec868fa1e5..3138a916574a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -25,6 +25,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
+TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index ff234018219c..635ee6c33ad2 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -308,6 +308,8 @@ struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_x86_state *state);
 
+struct kvm_msr_list *kvm_get_msr_index_list(void);
+
 struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
 void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_cpuid2 *cpuid);
@@ -322,10 +324,13 @@ kvm_get_supported_cpuid_entry(uint32_t function)
 }
 
 uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index);
+int _vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+		  uint64_t msr_value);
 void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	  	  uint64_t msr_value);
 
-uint32_t kvm_get_cpuid_max(void);
+uint32_t kvm_get_cpuid_max_basic(void);
+uint32_t kvm_get_cpuid_max_extended(void);
 void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 6698cb741e10..683d3bdb8f6a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -869,7 +869,7 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
 	return buffer.entry.data;
 }
 
-/* VCPU Set MSR
+/* _VCPU Set MSR
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -879,12 +879,12 @@ uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
  *
  * Output Args: None
  *
- * Return: On success, nothing. On failure a TEST_ASSERT is produced.
+ * Return: The result of KVM_SET_MSRS.
  *
- * Set value of MSR for VCPU.
+ * Sets the value of an MSR for the given VCPU.
  */
-void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
-	uint64_t msr_value)
+int _vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+		  uint64_t msr_value)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	struct {
@@ -899,6 +899,29 @@ void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
 	buffer.entry.index = msr_index;
 	buffer.entry.data = msr_value;
 	r = ioctl(vcpu->fd, KVM_SET_MSRS, &buffer.header);
+	return r;
+}
+
+/* VCPU Set MSR
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpuid - VCPU ID
+ *   msr_index - Index of MSR
+ *   msr_value - New value of MSR
+ *
+ * Output Args: None
+ *
+ * Return: On success, nothing. On failure a TEST_ASSERT is produced.
+ *
+ * Set value of MSR for VCPU.
+ */
+void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
+	uint64_t msr_value)
+{
+	int r;
+
+	r = _vcpu_set_msr(vm, vcpuid, msr_index, msr_value);
 	TEST_ASSERT(r == 1, "KVM_SET_MSRS IOCTL failed,\n"
 		"  rc: %i errno: %i", r, errno);
 }
@@ -1000,19 +1023,45 @@ struct kvm_x86_state {
 	struct kvm_msrs msrs;
 };
 
-static int kvm_get_num_msrs(struct kvm_vm *vm)
+static int kvm_get_num_msrs_fd(int kvm_fd)
 {
 	struct kvm_msr_list nmsrs;
 	int r;
 
 	nmsrs.nmsrs = 0;
-	r = ioctl(vm->kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &nmsrs);
 	TEST_ASSERT(r == -1 && errno == E2BIG, "Unexpected result from KVM_GET_MSR_INDEX_LIST probe, r: %i",
 		r);
 
 	return nmsrs.nmsrs;
 }
 
+static int kvm_get_num_msrs(struct kvm_vm *vm)
+{
+	return kvm_get_num_msrs_fd(vm->kvm_fd);
+}
+
+struct kvm_msr_list *kvm_get_msr_index_list(void)
+{
+	struct kvm_msr_list *list;
+	int nmsrs, r, kvm_fd;
+
+	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
+	if (kvm_fd < 0)
+		exit(KSFT_SKIP);
+
+	nmsrs = kvm_get_num_msrs_fd(kvm_fd);
+	list = malloc(sizeof(*list) + nmsrs * sizeof(list->indices[0]));
+	list->nmsrs = nmsrs;
+	r = ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);
+	close(kvm_fd);
+
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_GET_MSR_INDEX_LIST, r: %i",
+		r);
+
+	return list;
+}
+
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
@@ -1158,7 +1207,12 @@ bool is_intel_cpu(void)
 	return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
 
-uint32_t kvm_get_cpuid_max(void)
+uint32_t kvm_get_cpuid_max_basic(void)
+{
+	return kvm_get_supported_cpuid_entry(0)->eax;
+}
+
+uint32_t kvm_get_cpuid_max_extended(void)
 {
 	return kvm_get_supported_cpuid_entry(0x80000000)->eax;
 }
@@ -1169,7 +1223,7 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
 	bool pae;
 
 	/* SDM 4.1.4 */
-	if (kvm_get_cpuid_max() < 0x80000008) {
+	if (kvm_get_cpuid_max_extended() < 0x80000008) {
 		pae = kvm_get_supported_cpuid_entry(1)->edx & (1 << 6);
 		*pa_bits = pae ? 36 : 32;
 		*va_bits = 32;
diff --git a/tools/testing/selftests/kvm/x86_64/xss_msr_test.c b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
new file mode 100644
index 000000000000..851ea81b9d9f
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/xss_msr_test.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019, Google LLC.
+ *
+ * Tests for the IA32_XSS MSR.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "vmx.h"
+
+#define VCPU_ID	      1
+#define MSR_BITS      64
+
+#define X86_FEATURE_XSAVES	(1<<3)
+
+bool is_supported_msr(u32 msr_index)
+{
+	struct kvm_msr_list *list;
+	bool found = false;
+	int i;
+
+	list = kvm_get_msr_index_list();
+	for (i = 0; i < list->nmsrs; ++i) {
+		if (list->indices[i] == msr_index) {
+			found = true;
+			break;
+		}
+	}
+
+	free(list);
+	return found;
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_cpuid_entry2 *entry;
+	bool xss_supported = false;
+	struct kvm_vm *vm;
+	uint64_t xss_val;
+	int i, r;
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, 0);
+
+	if (kvm_get_cpuid_max_basic() >= 0xd) {
+		entry = kvm_get_supported_cpuid_index(0xd, 1);
+		xss_supported = entry && !!(entry->eax & X86_FEATURE_XSAVES);
+	}
+	if (!xss_supported) {
+		printf("IA32_XSS is not supported by the vCPU.\n");
+		exit(KSFT_SKIP);
+	}
+
+	xss_val = vcpu_get_msr(vm, VCPU_ID, MSR_IA32_XSS);
+	TEST_ASSERT(xss_val == 0,
+		    "MSR_IA32_XSS should be initialized to zero\n");
+
+	vcpu_set_msr(vm, VCPU_ID, MSR_IA32_XSS, xss_val);
+	/*
+	 * At present, KVM only supports a guest IA32_XSS value of 0. Verify
+	 * that trying to set the guest IA32_XSS to an unsupported value fails.
+	 * Also, in the future when a non-zero value succeeds check that
+	 * IA32_XSS is in the KVM_GET_MSR_INDEX_LIST.
+	 */
+	for (i = 0; i < MSR_BITS; ++i) {
+		r = _vcpu_set_msr(vm, VCPU_ID, MSR_IA32_XSS, 1ull << i);
+		TEST_ASSERT(r == 0 || is_supported_msr(MSR_IA32_XSS),
+			    "IA32_XSS was able to be set, but was not found in KVM_GET_MSR_INDEX_LIST.\n");
+	}
+
+	kvm_vm_free(vm);
+}
-- 
2.23.0.866.gb869b98d4c-goog



* Re: [PATCH v3 2/9] KVM: VMX: Fix conditions for guest IA32_XSS support
  2019-10-21 23:30 ` [PATCH v3 2/9] KVM: VMX: Fix conditions for guest IA32_XSS support Aaron Lewis
@ 2019-10-22 13:43   ` Paolo Bonzini
  0 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2019-10-22 13:43 UTC (permalink / raw)
  To: Aaron Lewis, Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Jim Mattson

On 22/10/19 01:30, Aaron Lewis wrote:
> Volume 4 of the SDM says that IA32_XSS is supported
> if CPUID(EAX=0DH,ECX=1):EAX.XSS[bit 3] is set, so only the
> X86_FEATURE_XSAVES check is necessary (X86_FEATURE_XSAVES is the Linux
> name for CPUID(EAX=0DH,ECX=1):EAX.XSS[bit 3]).
> 
> Fixes: 4d763b168e9c5 ("KVM: VMX: check CPUID before allowing read/write of IA32_XSS")
> Reviewed-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> Change-Id: I9059b9f2e3595e4b09a4cdcf14b933b22ebad419
> ---
>  arch/x86/kvm/vmx/vmx.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 34525af44353..a9b070001c3e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1821,10 +1821,8 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		return vmx_get_vmx_msr(&vmx->nested.msrs, msr_info->index,
>  				       &msr_info->data);
>  	case MSR_IA32_XSS:
> -		if (!vmx_xsaves_supported() ||
> -		    (!msr_info->host_initiated &&
> -		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> -		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> +		if (!msr_info->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
>  			return 1;
>  		msr_info->data = vcpu->arch.ia32_xss;
>  		break;
> @@ -2064,10 +2062,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  			return 1;
>  		return vmx_set_vmx_msr(vcpu, msr_index, data);
>  	case MSR_IA32_XSS:
> -		if (!vmx_xsaves_supported() ||
> -		    (!msr_info->host_initiated &&
> -		     !(guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
> -		       guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))))
> +		if (!msr_info->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
>  			return 1;
>  		/*
>  		 * The only supported bit as of Skylake is bit 8, but
> @@ -2076,11 +2072,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (data != 0)
>  			return 1;
>  		vcpu->arch.ia32_xss = data;
> -		if (vcpu->arch.ia32_xss != host_xss)
> -			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> -				vcpu->arch.ia32_xss, host_xss, false);
> -		else
> -			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
> +		if (vcpu->arch.xsaves_enabled) {
> +			if (vcpu->arch.ia32_xss != host_xss)
> +				add_atomic_switch_msr(vmx, MSR_IA32_XSS,
> +					vcpu->arch.ia32_xss, host_xss, false);
> +			else
> +				clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
> +		}
>  		break;
>  	case MSR_IA32_RTIT_CTL:
>  		if ((pt_mode != PT_MODE_HOST_GUEST) ||
> 

The last hunk technically doesn't belong in this patch, but okay.

Paolo



* Re: [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel
  2019-10-21 23:30 [PATCH v3 0/9] Add support for XSAVES to AMD and unify it with Intel Aaron Lewis
                   ` (8 preceding siblings ...)
  2019-10-21 23:30 ` [PATCH v3 9/9] kvm: tests: Add test to verify MSR_IA32_XSS Aaron Lewis
@ 2019-10-22 13:48 ` Paolo Bonzini
  9 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2019-10-22 13:48 UTC (permalink / raw)
  To: Aaron Lewis, Babu Moger, Yang Weijiang, Sebastian Andrzej Siewior, kvm
  Cc: Jim Mattson

On 22/10/19 01:30, Aaron Lewis wrote:
> Unify AMD's and Intel's approach for supporting XSAVES.  To do this
> change Intel's approach from using the MSR-load areas to writing
> the guest/host values to IA32_XSS on a VM-enter/VM-exit.  Switching to
> this strategy allows for a common approach between both AMD and Intel.
> Additionally, define svm_xsaves_supported() based on AMD's feedback, and add
> vcpu->arch.xsaves_enabled to track whether XSAVES is enabled in the guest.
> 
> This change sets up IA32_XSS to be a non-zero value in the future, which
> may happen sooner than later with support for guest CET feature being
> added.
> 
> v2 -> v3:
>  - Remove guest_xcr0_loaded from kvm_vcpu.
>  - Add vcpu->arch.xsaves_enabled.
>  - Add staged rollout to load the hardware IA32_XSS MSR with guest/host
>    values on VM-entry and VM-exit:
>      1) Introduce vcpu->arch->xsaves_enabled.
>      2) Add svm implementation for switching between guest and host IA32_XSS.
>      3) Add vmx implementation for switching between guest and host IA32_XSS.
>      4) Remove svm and vmx implementation and add it to common code.
> 
> v1 -> v2:
>  - Add the flag xsaves_enabled to kvm_vcpu_arch to track when XSAVES is
>    enabled in the guest, whether or not XSAVES is enumerated in the
>    guest CPUID.
>  - Remove code that sets the X86_FEATURE_XSAVES bit in the guest CPUID
>    which was added in patch "Enumerate XSAVES in guest CPUID when it is
>    available to the guest".  As a result we no longer need that patch.
>  - Added a comment to kvm_set_msr_common to describe how to save/restore
>    PT MSRS without using XSAVES/XRSTORS.
>  - Added more comments to the "Add support for XSAVES on AMD" patch.
>  - Replaced vcpu_set_msr_expect_result() with _vcpu_set_msr() in the
>    test library.
> 
> Aaron Lewis (9):
>   KVM: x86: Introduce vcpu->arch.xsaves_enabled
>   KVM: VMX: Fix conditions for guest IA32_XSS support
>   KVM: x86: Remove unneeded kvm_vcpu variable, guest_xcr0_loaded
>   KVM: SVM: Use wrmsr for switching between guest and host IA32_XSS on AMD
>   KVM: VMX: Use wrmsr for switching between guest and host IA32_XSS on Intel
>   KVM: x86: Move IA32_XSS-swapping on VM-entry/VM-exit to common x86 code
>   kvm: x86: Move IA32_XSS to kvm_{get,set}_msr_common
>   kvm: svm: Update svm_xsaves_supported
>   kvm: tests: Add test to verify MSR_IA32_XSS
> 
>  arch/x86/include/asm/kvm_host.h               |  1 +
>  arch/x86/kvm/svm.c                            |  9 ++-
>  arch/x86/kvm/vmx/vmx.c                        | 41 ++--------
>  arch/x86/kvm/x86.c                            | 52 ++++++++++---
>  arch/x86/kvm/x86.h                            |  4 +-
>  include/linux/kvm_host.h                      |  1 -
>  tools/testing/selftests/kvm/.gitignore        |  1 +
>  tools/testing/selftests/kvm/Makefile          |  1 +
>  .../selftests/kvm/include/x86_64/processor.h  |  7 +-
>  .../selftests/kvm/lib/x86_64/processor.c      | 72 +++++++++++++++---
>  .../selftests/kvm/x86_64/xss_msr_test.c       | 76 +++++++++++++++++++
>  11 files changed, 205 insertions(+), 60 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/xss_msr_test.c
> 

Queued, thanks.

Paolo


