* [PATCH 0/9] Final set of XSAVES patches
@ 2014-12-04 15:57 Paolo Bonzini
  2014-12-04 15:57 ` [PATCH 1/9] x86: export get_xsave_addr Paolo Bonzini
                   ` (9 more replies)
  0 siblings, 10 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

These are all the patches needed to support XSAVES.

Paolo Bonzini (5):
  x86: export get_xsave_addr
  KVM: x86: support XSAVES usage in the host
  KVM: x86: use F() macro throughout cpuid.c
  KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
  KVM: cpuid: mask more bits in leaf 0xd and subleaves

Wanpeng Li (4):
  kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
  kvm: x86: handle XSAVES vmcs and vmexit
  kvm: vmx: add MSR logic for XSAVES
  kvm: vmx: add nested virtualization support for xsaves

 arch/x86/include/asm/kvm_host.h |  2 +
 arch/x86/include/asm/vmx.h      |  3 ++
 arch/x86/include/uapi/asm/vmx.h |  6 ++-
 arch/x86/kernel/xsave.c         |  1 +
 arch/x86/kvm/cpuid.c            | 47 ++++++++++++++-------
 arch/x86/kvm/svm.c              |  6 +++
 arch/x86/kvm/vmx.c              | 80 +++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              | 90 +++++++++++++++++++++++++++++++++++++----
 8 files changed, 210 insertions(+), 25 deletions(-)

-- 
1.8.3.1



* [PATCH 1/9] x86: export get_xsave_addr
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 16:34   ` Greg KH
  2014-12-04 15:57 ` [PATCH 2/9] KVM: x86: support XSAVES usage in the host Paolo Bonzini
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li, stable, x86, H. Peter Anvin

get_xsave_addr is the API to access XSAVE states, and KVM would
like to use it.  Export it.
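
(For context only, not part of the patch: a minimal sketch of how a
KVM-side caller would consume the exported symbol, mirroring the copy
loops added later in this series.  XSTATE_YMM is just an illustrative
feature bit, and the usual KVM/xsave headers are assumed.)

static void *guest_ymm_state(struct kvm_vcpu *vcpu)
{
	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;

	/* get_xsave_addr() returns NULL if the component is absent. */
	return get_xsave_addr(xsave, XSTATE_YMM);
}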

Cc: stable@vger.kernel.org
Cc: x86@kernel.org
Cc: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kernel/xsave.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 4c540c4719d8..0de1fae2bdf0 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -738,3 +738,4 @@ void *get_xsave_addr(struct xsave_struct *xsave, int xstate)
 
 	return (void *)xsave + xstate_comp_offsets[feature];
 }
+EXPORT_SYMBOL_GPL(get_xsave_addr);
-- 
1.8.3.1




* [PATCH 2/9] KVM: x86: support XSAVES usage in the host
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
  2014-12-04 15:57 ` [PATCH 1/9] x86: export get_xsave_addr Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 17:56   ` Radim Krčmář
  2014-12-04 15:57 ` [PATCH 3/9] KVM: x86: use F() macro throughout cpuid.c Paolo Bonzini
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li, Fenghua Yu, stable, H. Peter Anvin

Userspace is expecting non-compacted format for KVM_GET_XSAVE, but
struct xsave_struct might be using the compacted format.  Convert
in order to preserve userspace ABI.

Likewise, userspace is passing non-compacted format for KVM_SET_XSAVE
but the kernel will pass it to XRSTORS, and we need to convert back.
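
(For reference only, not part of the patch: a minimal userspace-side
sketch of the ABI being preserved here.  It assumes the standard,
non-compacted XSAVE layout, in which the XSAVE header (and thus
XSTATE_BV) sits at a fixed byte offset of 512, and it omits error
handling.)

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vcpu_fd is an already-created KVM vCPU file descriptor. */
static uint64_t guest_xstate_bv(int vcpu_fd)
{
	struct kvm_xsave xsave;
	uint64_t xstate_bv;

	ioctl(vcpu_fd, KVM_GET_XSAVE, &xsave);
	/* Non-compacted format: XSAVE header at fixed offset 512. */
	memcpy(&xstate_bv, (uint8_t *)xsave.region + 512, sizeof(xstate_bv));
	return xstate_bv;
}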

Fixes: f31a9f7c71691569359fa7fb8b0acaa44bce0324
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Cc: H. Peter Anvin <hpa@linux.intel.com>
Reported-by: Nadav Amit <namit@cs.technion.ac.il>
Tested-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 83 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 08b5657e57ed..c259814200bd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3132,15 +3132,89 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+#define XSTATE_COMPACTION_ENABLED (1ULL << 63)
+
+static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
+{
+	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
+	u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
+	u64 valid;
+
+	/*
+	 * Copy legacy XSAVE area, to avoid complications with CPUID
+	 * leaves 0 and 1 in the loop below.
+	 */
+	memcpy(dest, xsave, XSAVE_HDR_OFFSET);
+
+	/* Set XSTATE_BV */
+	*(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv;
+
+	/*
+	 * Copy each region from the possibly compacted offset to the
+	 * non-compacted offset.
+	 */
+	valid = xstate_bv & ~XSTATE_FPSSE;
+	while (valid) {
+		u64 feature = valid & -valid;
+		int index = fls64(feature) - 1;
+		void *src = get_xsave_addr(xsave, feature);
+
+		if (src) {
+			u32 size, offset, ecx, edx;
+			cpuid_count(XSTATE_CPUID, index,
+				    &size, &offset, &ecx, &edx);
+			memcpy(dest + offset, src, size);
+		}
+
+		valid -= feature;
+	}
+}
+
+static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
+{
+	struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
+	u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET);
+	u64 valid;
+
+	/*
+	 * Copy legacy XSAVE area, to avoid complications with CPUID
+	 * leaves 0 and 1 in the loop below.
+	 */
+	memcpy(xsave, src, XSAVE_HDR_OFFSET);
+
+	/* Set XSTATE_BV and possibly XCOMP_BV.  */
+	xsave->xsave_hdr.xstate_bv = xstate_bv;
+	if (cpu_has_xsaves)
+		xsave->xsave_hdr.xcomp_bv = host_xcr0 | XSTATE_COMPACTION_ENABLED;
+
+	/*
+	 * Copy each region from the non-compacted offset to the
+	 * possibly compacted offset.
+	 */
+	valid = xstate_bv & ~XSTATE_FPSSE;
+	while (valid) {
+		u64 feature = valid & -valid;
+		int index = fls64(feature) - 1;
+		void *dest = get_xsave_addr(xsave, feature);
+
+		if (dest) {
+			u32 size, offset, ecx, edx;
+			cpuid_count(XSTATE_CPUID, index,
+				    &size, &offset, &ecx, &edx);
+			memcpy(dest, src + offset, size);
+		} else
+			WARN_ON_ONCE(1);
+
+		valid -= feature;
+	}
+}
+
 static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 					 struct kvm_xsave *guest_xsave)
 {
 	if (cpu_has_xsave) {
-		memcpy(guest_xsave->region,
-			&vcpu->arch.guest_fpu.state->xsave,
-			vcpu->arch.guest_xstate_size);
-		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] &=
-			vcpu->arch.guest_supported_xcr0 | XSTATE_FPSSE;
+		memset(guest_xsave, 0, sizeof(struct kvm_xsave));
+		fill_xsave((u8 *) guest_xsave->region, vcpu);
 	} else {
 		memcpy(guest_xsave->region,
 			&vcpu->arch.guest_fpu.state->fxsave,
@@ -3164,8 +3238,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 		 */
 		if (xstate_bv & ~kvm_supported_xcr0())
 			return -EINVAL;
-		memcpy(&vcpu->arch.guest_fpu.state->xsave,
-			guest_xsave->region, vcpu->arch.guest_xstate_size);
+		load_xsave(vcpu, (u8 *)guest_xsave->region);
 	} else {
 		if (xstate_bv & ~XSTATE_FPSSE)
 			return -EINVAL;
@@ -6882,6 +6955,9 @@ int fx_init(struct kvm_vcpu *vcpu)
 		return err;
 
 	fpu_finit(&vcpu->arch.guest_fpu);
+	if (cpu_has_xsaves)
+		vcpu->arch.guest_fpu.state->xsave.xsave_hdr.xcomp_bv =
+			host_xcr0 | XSTATE_COMPACTION_ENABLED;
 
 	/*
 	 * Ensure guest xcr0 is valid for loading
-- 
1.8.3.1




* [PATCH 3/9] KVM: x86: use F() macro throughout cpuid.c
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
  2014-12-04 15:57 ` [PATCH 1/9] x86: export get_xsave_addr Paolo Bonzini
  2014-12-04 15:57 ` [PATCH 2/9] KVM: x86: support XSAVES usage in the host Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 15:57 ` [PATCH 4/9] kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest Paolo Bonzini
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

For code that deals with cpuid, this makes things a bit more readable.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/cpuid.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a4f5ac46226c..11f09a2b31a0 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -53,6 +53,8 @@ u64 kvm_supported_xcr0(void)
 	return xcr0;
 }
 
+#define F(x) bit(X86_FEATURE_##x)
+
 int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpuid_entry2 *best;
@@ -64,13 +66,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 
 	/* Update OSXSAVE bit */
 	if (cpu_has_xsave && best->function == 0x1) {
-		best->ecx &= ~(bit(X86_FEATURE_OSXSAVE));
+		best->ecx &= ~F(OSXSAVE);
 		if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE))
-			best->ecx |= bit(X86_FEATURE_OSXSAVE);
+			best->ecx |= F(OSXSAVE);
 	}
 
 	if (apic) {
-		if (best->ecx & bit(X86_FEATURE_TSC_DEADLINE_TIMER))
+		if (best->ecx & F(TSC_DEADLINE_TIMER))
 			apic->lapic_timer.timer_mode_mask = 3 << 17;
 		else
 			apic->lapic_timer.timer_mode_mask = 1 << 17;
@@ -122,8 +124,8 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
 			break;
 		}
 	}
-	if (entry && (entry->edx & bit(X86_FEATURE_NX)) && !is_efer_nx()) {
-		entry->edx &= ~bit(X86_FEATURE_NX);
+	if (entry && (entry->edx & F(NX)) && !is_efer_nx()) {
+		entry->edx &= ~F(NX);
 		printk(KERN_INFO "kvm: guest NX capability removed\n");
 	}
 }
@@ -227,8 +229,6 @@ static void do_cpuid_1_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	entry->flags = 0;
 }
 
-#define F(x) bit(X86_FEATURE_##x)
-
 static int __do_cpuid_ent_emulated(struct kvm_cpuid_entry2 *entry,
 				   u32 func, u32 index, int *nent, int maxnent)
 {
-- 
1.8.3.1




* [PATCH 4/9] kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (2 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 3/9] KVM: x86: use F() macro throughout cpuid.c Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-05 13:32   ` Radim Krčmář
  2014-12-04 15:57 ` [PATCH 5/9] KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly Paolo Bonzini
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

From: Wanpeng Li <wanpeng.li@linux.intel.com>

Expose the XSAVES feature to the guest if the kvm_x86_ops say it is
available.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/include/asm/vmx.h      | 1 +
 arch/x86/kvm/cpuid.c            | 3 ++-
 arch/x86/kvm/svm.c              | 6 ++++++
 arch/x86/kvm/vmx.c              | 7 +++++++
 5 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2896dbc18987..0271f6bcf123 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -771,6 +771,7 @@ struct kvm_x86_ops {
 			       enum x86_intercept_stage stage);
 	void (*handle_external_intr)(struct kvm_vcpu *vcpu);
 	bool (*mpx_supported)(void);
+	bool (*xsaves_supported)(void);
 
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index bcbfade26d8d..08dc770bd340 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -69,6 +69,7 @@
 #define SECONDARY_EXEC_PAUSE_LOOP_EXITING	0x00000400
 #define SECONDARY_EXEC_ENABLE_INVPCID		0x00001000
 #define SECONDARY_EXEC_SHADOW_VMCS              0x00004000
+#define SECONDARY_EXEC_XSAVES			0x00100000
 
 
 #define PIN_BASED_EXT_INTR_MASK                 0x00000001
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 11f09a2b31a0..e24df01ab118 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -267,6 +267,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	unsigned f_rdtscp = kvm_x86_ops->rdtscp_supported() ? F(RDTSCP) : 0;
 	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_mpx = kvm_x86_ops->mpx_supported() ? F(MPX) : 0;
+	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
 
 	/* cpuid 1.edx */
 	const u32 kvm_supported_word0_x86_features =
@@ -322,7 +323,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 0xD.1.eax */
 	const u32 kvm_supported_word10_x86_features =
-		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1);
+		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | f_xsaves;
 
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6b411add23b8..41dd0387cccb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4127,6 +4127,11 @@ static bool svm_mpx_supported(void)
 	return false;
 }
 
+static bool svm_xsaves_supported(void)
+{
+	return false;
+}
+
 static bool svm_has_wbinvd_exit(void)
 {
 	return true;
@@ -4414,6 +4419,7 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.rdtscp_supported = svm_rdtscp_supported,
 	.invpcid_supported = svm_invpcid_supported,
 	.mpx_supported = svm_mpx_supported,
+	.xsaves_supported = svm_xsaves_supported,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6a951d823c82..dd4fa461454a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7553,6 +7553,12 @@ static bool vmx_mpx_supported(void)
 		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS);
 }
 
+static bool vmx_xsaves_supported(void)
+{
+	return vmcs_config.cpu_based_2nd_exec_ctrl &
+		SECONDARY_EXEC_XSAVES;
+}
+
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 {
 	u32 exit_intr_info;
@@ -9329,6 +9335,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
 	.check_intercept = vmx_check_intercept,
 	.handle_external_intr = vmx_handle_external_intr,
 	.mpx_supported = vmx_mpx_supported,
+	.xsaves_supported = vmx_xsaves_supported,
 
 	.check_nested_events = vmx_check_nested_events,
 
-- 
1.8.3.1




* [PATCH 5/9] KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (3 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 4/9] kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-05  0:40   ` Wanpeng Li
  2014-12-04 15:57 ` [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves Paolo Bonzini
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

This is the size of the XSAVES area.  This starts providing guest support
for XSAVES (with no support yet for supervisor states, i.e. XSS == 0
always in guests for now).

Wanpeng Li suggested testing XSAVEC as well as XSAVES, since in practice
no real processor exists that only has one of them, and there is no
other way for userspace programs to compute the size of the XSAVEC
save area.  CPUID(EAX=0xd,ECX=1).EBX provides an upper bound.
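
(Not part of the patch: a minimal guest/userspace sketch of the consumer
this enables, assuming GCC's <cpuid.h>.  It simply reads
CPUID(EAX=0xD,ECX=1).EBX as an upper bound on the compacted save area
size.)

#include <cpuid.h>
#include <stdint.h>

/* Upper bound on the XSAVES/XSAVEC (compacted) save area size. */
static uint32_t compacted_xsave_size(void)
{
	uint32_t eax, ebx, ecx, edx;

	__cpuid_count(0xD, 1, eax, ebx, ecx, edx);
	return ebx;
}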

Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/cpuid.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e24df01ab118..2f7bc2de9915 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -23,7 +23,7 @@
 #include "mmu.h"
 #include "trace.h"
 
-static u32 xstate_required_size(u64 xstate_bv)
+static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 {
 	int feature_bit = 0;
 	u32 ret = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
@@ -31,9 +31,10 @@ static u32 xstate_required_size(u64 xstate_bv)
 	xstate_bv &= XSTATE_EXTEND_MASK;
 	while (xstate_bv) {
 		if (xstate_bv & 0x1) {
-		        u32 eax, ebx, ecx, edx;
+		        u32 eax, ebx, ecx, edx, offset;
 		        cpuid_count(0xD, feature_bit, &eax, &ebx, &ecx, &edx);
-			ret = max(ret, eax + ebx);
+			offset = compacted ? ret : ebx;
+			ret = max(ret, offset + eax);
 		}
 
 		xstate_bv >>= 1;
@@ -87,9 +88,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
 			(best->eax | ((u64)best->edx << 32)) &
 			kvm_supported_xcr0();
 		vcpu->arch.guest_xstate_size = best->ebx =
-			xstate_required_size(vcpu->arch.xcr0);
+			xstate_required_size(vcpu->arch.xcr0, false);
 	}
 
+	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
+	if (best && (best->eax & (F(XSAVES)|F(XSAVEC))))
+		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+
 	/*
 	 * The existing code assumes virtual address is 48-bit in the canonical
 	 * address checks; exit if it is ever changed.
@@ -470,9 +475,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 				goto out;
 
 			do_cpuid_1_ent(&entry[i], function, idx);
-			if (idx == 1)
+			if (idx == 1) {
 				entry[i].eax &= kvm_supported_word10_x86_features;
-			else if (entry[i].eax == 0 || !(supported & mask))
+				entry[i].ebx = 0;
+				if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
+					entry[i].ebx =
+						xstate_required_size(supported,
+								     true);
+			} else if (entry[i].eax == 0 || !(supported & mask))
 				continue;
 			entry[i].flags |=
 			       KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
-- 
1.8.3.1




* [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (4 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 5/9] KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-05  0:34   ` Wanpeng Li
  2014-12-04 15:57 ` [PATCH 7/9] kvm: x86: handle XSAVES vmcs and vmexit Paolo Bonzini
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

- EAX=0Dh, ECX=1: output registers ECX/EDX are reserved.

- EAX=0Dh, ECX>1: output register ECX bit 0 is clear for all the CPUID
leaves we support, because variable "supported" comes from XCR0 and not
XSS.  Bits above 0 are reserved, so ECX is overall zero.  Output register
EDX is reserved.

Source: Intel Architecture Instruction Set Extensions Programming
Reference, ref. number 319433-022

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/cpuid.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 2f7bc2de9915..644bfe828ce1 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -482,8 +482,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 					entry[i].ebx =
 						xstate_required_size(supported,
 								     true);
-			} else if (entry[i].eax == 0 || !(supported & mask))
-				continue;
+			} else {
+				if (entry[i].eax == 0 || !(supported & mask))
+					continue;
+				if (WARN_ON_ONCE(entry[i].ecx & 1))
+					continue;
+			}
+			entry[i].ecx = 0;
+			entry[i].edx = 0;
 			entry[i].flags |=
 			       KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
 			++*nent;
-- 
1.8.3.1




* [PATCH 7/9] kvm: x86: handle XSAVES vmcs and vmexit
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (5 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 18:06   ` Radim Krčmář
  2014-12-04 15:57 ` [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES Paolo Bonzini
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

From: Wanpeng Li <wanpeng.li@linux.intel.com>

Initialize the XSS exit bitmap.  It is zero so there should be no XSAVES
or XRSTORS exits.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/vmx.h      |  2 ++
 arch/x86/include/uapi/asm/vmx.h |  6 +++++-
 arch/x86/kvm/vmx.c              | 21 +++++++++++++++++++++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 08dc770bd340..45afaee9555c 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -160,6 +160,8 @@ enum vmcs_field {
 	EOI_EXIT_BITMAP3_HIGH           = 0x00002023,
 	VMREAD_BITMAP                   = 0x00002026,
 	VMWRITE_BITMAP                  = 0x00002028,
+	XSS_EXIT_BITMAP                 = 0x0000202C,
+	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
 	GUEST_PHYSICAL_ADDRESS_HIGH     = 0x00002401,
 	VMCS_LINK_POINTER               = 0x00002800,
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index 990a2fe1588d..b813bf9da1e2 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -72,6 +72,8 @@
 #define EXIT_REASON_XSETBV              55
 #define EXIT_REASON_APIC_WRITE          56
 #define EXIT_REASON_INVPCID             58
+#define EXIT_REASON_XSAVES              63
+#define EXIT_REASON_XRSTORS             64
 
 #define VMX_EXIT_REASONS \
 	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
@@ -116,6 +118,8 @@
 	{ EXIT_REASON_INVALID_STATE,         "INVALID_STATE" }, \
 	{ EXIT_REASON_INVD,                  "INVD" }, \
 	{ EXIT_REASON_INVVPID,               "INVVPID" }, \
-	{ EXIT_REASON_INVPCID,               "INVPCID" }
+	{ EXIT_REASON_INVPCID,               "INVPCID" }, \
+	{ EXIT_REASON_XSAVES,                "XSAVES" }, \
+	{ EXIT_REASON_XRSTORS,               "XRSTORS" }
 
 #endif /* _UAPIVMX_H */
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index dd4fa461454a..88048c72b6b1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -761,6 +761,7 @@ static u64 construct_eptp(unsigned long root_hpa);
 static void kvm_cpu_vmxon(u64 addr);
 static void kvm_cpu_vmxoff(void);
 static bool vmx_mpx_supported(void);
+static bool vmx_xsaves_supported(void);
 static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
 static void vmx_set_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
@@ -4337,6 +4338,7 @@ static void ept_set_mmio_spte_mask(void)
 	kvm_mmu_set_mmio_spte_mask((0x3ull << 62) | 0x6ull);
 }
 
+#define VMX_XSS_EXIT_BITMAP 0
 /*
  * Sets up the vmcs for emulated real mode.
  */
@@ -4446,6 +4448,9 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx)
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~0UL);
 	set_cr4_guest_host_mask(vmx);
 
+	if (vmx_xsaves_supported())
+		vmcs_write64(XSS_EXIT_BITMAP, VMX_XSS_EXIT_BITMAP);
+
 	return 0;
 }
 
@@ -5334,6 +5339,20 @@ static int handle_xsetbv(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int handle_xsaves(struct kvm_vcpu *vcpu)
+{
+	skip_emulated_instruction(vcpu);
+	WARN(1, "this should never happen\n");
+	return 1;
+}
+
+static int handle_xrstors(struct kvm_vcpu *vcpu)
+{
+	skip_emulated_instruction(vcpu);
+	WARN(1, "this should never happen\n");
+	return 1;
+}
+
 static int handle_apic_access(struct kvm_vcpu *vcpu)
 {
 	if (likely(fasteoi)) {
@@ -6951,6 +6970,8 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_MONITOR_INSTRUCTION]     = handle_monitor,
 	[EXIT_REASON_INVEPT]                  = handle_invept,
 	[EXIT_REASON_INVVPID]                 = handle_invvpid,
+	[EXIT_REASON_XSAVES]                  = handle_xsaves,
+	[EXIT_REASON_XRSTORS]                 = handle_xrstors,
 };
 
 static const int kvm_vmx_max_exit_handlers =
-- 
1.8.3.1




* [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (6 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 7/9] kvm: x86: handle XSAVES vmcs and vmexit Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 18:05   ` Radim Krčmář
  2014-12-04 18:07   ` Radim Krčmář
  2014-12-04 15:57 ` [PATCH 9/9] kvm: vmx: add nested virtualization support for xsaves Paolo Bonzini
  2014-12-05  1:15 ` [PATCH 0/9] Final set of XSAVES patches Wanpeng Li
  9 siblings, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

From: Wanpeng Li <wanpeng.li@linux.intel.com>

Add logic to get/set the XSS model-specific register.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx.c              | 29 ++++++++++++++++++++++++++++-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0271f6bcf123..0c4c88c008ce 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -362,6 +362,7 @@ struct kvm_vcpu_arch {
 	int mp_state;
 	u64 ia32_misc_enable_msr;
 	bool tpr_access_reporting;
+	u64 ia32_xss;
 
 	/*
 	 * Paging state of the vcpu
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 88048c72b6b1..ad1153a725a2 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -99,6 +99,8 @@ module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
 static bool __read_mostly nested = 0;
 module_param(nested, bool, S_IRUGO);
 
+static u64 __read_mostly host_xss;
+
 #define KVM_GUEST_CR0_MASK (X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST (X86_CR0_WP | X86_CR0_NE)
 #define KVM_VM_CR0_ALWAYS_ON						\
@@ -2570,6 +2572,11 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
 		return vmx_get_vmx_msr(vcpu, msr_index, pdata);
+	case MSR_IA32_XSS:
+		if (!vmx_xsaves_supported())
+			return 1;
+		data = vcpu->arch.ia32_xss;
+		break;
 	case MSR_TSC_AUX:
 		if (!to_vmx(vcpu)->rdtscp_enabled)
 			return 1;
@@ -2661,6 +2668,22 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		break;
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		return 1; /* they are read-only */
+	case MSR_IA32_XSS:
+		if (!vmx_xsaves_supported())
+			return 1;
+		/*
+		 * The only supported bit as of Skylake is bit 8, but
+		 * it is not supported on KVM.
+		 */
+		if (data != 0)
+			return 1;
+		vcpu->arch.ia32_xss = data;
+		if (vcpu->arch.ia32_xss != host_xss)
+			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+				vcpu->arch.ia32_xss, host_xss);
+		else
+			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+		break;
 	case MSR_TSC_AUX:
 		if (!vmx->rdtscp_enabled)
 			return 1;
@@ -2896,7 +2919,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 			SECONDARY_EXEC_ENABLE_INVPCID |
 			SECONDARY_EXEC_APIC_REGISTER_VIRT |
 			SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
-			SECONDARY_EXEC_SHADOW_VMCS;
+			SECONDARY_EXEC_SHADOW_VMCS |
+			SECONDARY_EXEC_XSAVES;
 		if (adjust_vmx_controls(min2, opt2,
 					MSR_IA32_VMX_PROCBASED_CTLS2,
 					&_cpu_based_2nd_exec_control) < 0)
@@ -3019,6 +3043,9 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 		}
 	}
 
+	if (cpu_has_xsaves)
+		rdmsrl(MSR_IA32_XSS, host_xss);
+
 	return 0;
 }
 
-- 
1.8.3.1




* [PATCH 9/9] kvm: vmx: add nested virtualization support for xsaves
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (7 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES Paolo Bonzini
@ 2014-12-04 15:57 ` Paolo Bonzini
  2014-12-04 18:08   ` Radim Krčmář
  2014-12-05  1:15 ` [PATCH 0/9] Final set of XSAVES patches Wanpeng Li
  9 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 15:57 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar, Wanpeng Li

From: Wanpeng Li <wanpeng.li@linux.intel.com>

Add vmcs12 support for xsaves.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ad1153a725a2..9bcc871f0635 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -216,6 +216,7 @@ struct __packed vmcs12 {
 	u64 virtual_apic_page_addr;
 	u64 apic_access_addr;
 	u64 ept_pointer;
+	u64 xss_exit_bitmap;
 	u64 guest_physical_address;
 	u64 vmcs_link_pointer;
 	u64 guest_ia32_debugctl;
@@ -618,6 +619,7 @@ static const unsigned short vmcs_field_to_offset_table[] = {
 	FIELD64(VIRTUAL_APIC_PAGE_ADDR, virtual_apic_page_addr),
 	FIELD64(APIC_ACCESS_ADDR, apic_access_addr),
 	FIELD64(EPT_POINTER, ept_pointer),
+	FIELD64(XSS_EXIT_BITMAP, xss_exit_bitmap),
 	FIELD64(GUEST_PHYSICAL_ADDRESS, guest_physical_address),
 	FIELD64(VMCS_LINK_POINTER, vmcs_link_pointer),
 	FIELD64(GUEST_IA32_DEBUGCTL, guest_ia32_debugctl),
@@ -1104,6 +1106,12 @@ static inline int nested_cpu_has_ept(struct vmcs12 *vmcs12)
 	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_EPT);
 }
 
+static inline bool nested_cpu_has_xsaves(struct vmcs12 *vmcs12)
+{
+	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES) &&
+		vmx_xsaves_supported();
+}
+
 static inline bool is_exception(u32 intr_info)
 {
 	return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
@@ -2392,7 +2400,8 @@ static __init void nested_vmx_setup_ctls_msrs(void)
 	nested_vmx_secondary_ctls_high &=
 		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
 		SECONDARY_EXEC_UNRESTRICTED_GUEST |
-		SECONDARY_EXEC_WBINVD_EXITING;
+		SECONDARY_EXEC_WBINVD_EXITING |
+		SECONDARY_EXEC_XSAVES;
 
 	if (enable_ept) {
 		/* nested EPT: emulate EPT also to L1 */
@@ -7286,6 +7295,14 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
 		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_WBINVD_EXITING);
 	case EXIT_REASON_XSETBV:
 		return 1;
+	case EXIT_REASON_XSAVES: case EXIT_REASON_XRSTORS:
+		/*
+		 * This should never happen, since it is not possible to
+		 * set XSS to a non-zero value---neither in L1 nor in L2.
+		 * If it were, XSS would have to be checked against
+		 * the XSS exit bitmap in vmcs12.
+		 */
+		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
 	default:
 		return 1;
 	}
@@ -8342,6 +8359,8 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	vmcs_writel(GUEST_SYSENTER_ESP, vmcs12->guest_sysenter_esp);
 	vmcs_writel(GUEST_SYSENTER_EIP, vmcs12->guest_sysenter_eip);
 
+	if (nested_cpu_has_xsaves(vmcs12))
+		vmcs_write64(XSS_EXIT_BITMAP, vmcs12->xss_exit_bitmap);
 	vmcs_write64(VMCS_LINK_POINTER, -1ull);
 
 	exec_control = vmcs12->pin_based_vm_exec_control;
@@ -8982,6 +9001,8 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	vmcs12->guest_sysenter_eip = vmcs_readl(GUEST_SYSENTER_EIP);
 	if (vmx_mpx_supported())
 		vmcs12->guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+	if (nested_cpu_has_xsaves(vmcs12))
+		vmcs12->xss_exit_bitmap = vmcs_read64(XSS_EXIT_BITMAP);
 
 	/* update exit information fields: */
 
-- 
1.8.3.1



* Re: [PATCH 1/9] x86: export get_xsave_addr
  2014-12-04 15:57 ` [PATCH 1/9] x86: export get_xsave_addr Paolo Bonzini
@ 2014-12-04 16:34   ` Greg KH
  2014-12-04 17:29     ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Greg KH @ 2014-12-04 16:34 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, rkrcmar, Wanpeng Li, stable, x86, H. Peter Anvin

On Thu, Dec 04, 2014 at 04:57:06PM +0100, Paolo Bonzini wrote:
> get_xsave_addr is the API to access XSAVE states, and KVM would
> like to use it.  Export it.

Use it in what way?

> Cc: stable@vger.kernel.org

Why is this a stable patch?



* Re: [PATCH 1/9] x86: export get_xsave_addr
  2014-12-04 16:34   ` Greg KH
@ 2014-12-04 17:29     ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-04 17:29 UTC (permalink / raw)
  To: Greg KH
  Cc: linux-kernel, kvm, rkrcmar, Wanpeng Li, stable, x86, H. Peter Anvin



On 04/12/2014 17:34, Greg KH wrote:
> On Thu, Dec 04, 2014 at 04:57:06PM +0100, Paolo Bonzini wrote:
>> > get_xsave_addr is the API to access XSAVE states, and KVM would
>> > like to use it.  Export it.
> Use it in what way?

As in patch 2/9: to ensure that upgrading to a newer processor does not
break the userspace ABI.

>> > Cc: stable@vger.kernel.org
> Why is this a stable patch?

Because as of now, Skylake processors have a different userspace ABI
than previous generations.

Paolo


* Re: [PATCH 2/9] KVM: x86: support XSAVES usage in the host
  2014-12-04 15:57 ` [PATCH 2/9] KVM: x86: support XSAVES usage in the host Paolo Bonzini
@ 2014-12-04 17:56   ` Radim Krčmář
  0 siblings, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-04 17:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, Wanpeng Li, Fenghua Yu, stable, H. Peter Anvin

2014-12-04 16:57+0100, Paolo Bonzini:
> Userspace is expecting non-compacted format for KVM_GET_XSAVE, but
> struct xsave_struct might be using the compacted format.  Convert
> in order to preserve userspace ABI.
> 
> Likewise, userspace is passing non-compacted format for KVM_SET_XSAVE
> but the kernel will pass it to XRSTORS, and we need to convert back.
> 
> Fixes: f31a9f7c71691569359fa7fb8b0acaa44bce0324
> Cc: Fenghua Yu <fenghua.yu@intel.com>
> Cc: stable@vger.kernel.org
> Cc: H. Peter Anvin <hpa@linux.intel.com>
> Reported-by: Nadav Amit <namit@cs.technion.ac.il>
> Tested-by: Nadav Amit <namit@cs.technion.ac.il>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

> +++ b/arch/x86/kvm/x86.c
> @@ -3132,15 +3132,89 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
> +	u64 xstate_bv = xsave->xsave_hdr.xstate_bv;

(This looks like the only change since the last review.)


* Re: [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES
  2014-12-04 15:57 ` [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES Paolo Bonzini
@ 2014-12-04 18:05   ` Radim Krčmář
  2014-12-04 18:07   ` Radim Krčmář
  1 sibling, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-04 18:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Wanpeng Li

2014-12-04 16:57+0100, Paolo Bonzini:
> From: Wanpeng Li <wanpeng.li@linux.intel.com>
> 
> Add logic to get/set the XSS model-specific register.
> 
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

> @@ -2896,7 +2919,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
>  			SECONDARY_EXEC_ENABLE_INVPCID |
>  			SECONDARY_EXEC_APIC_REGISTER_VIRT |
>  			SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
> -			SECONDARY_EXEC_SHADOW_VMCS;
> +			SECONDARY_EXEC_SHADOW_VMCS |
> +			SECONDARY_EXEC_XSAVES;

(Nothing to see here, move along ...)


* Re: [PATCH 7/9] kvm: x86: handle XSAVES vmcs and vmexit
  2014-12-04 15:57 ` [PATCH 7/9] kvm: x86: handle XSAVES vmcs and vmexit Paolo Bonzini
@ 2014-12-04 18:06   ` Radim Krčmář
  0 siblings, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-04 18:06 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Wanpeng Li

2014-12-04 16:57+0100, Paolo Bonzini:
> From: Wanpeng Li <wanpeng.li@linux.intel.com>
> 
> Initialize the XSS exit bitmap.  It is zero so there should be no XSAVES
> or XRSTORS exits.
> 
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>


* Re: [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES
  2014-12-04 15:57 ` [PATCH 8/9] kvm: vmx: add MSR logic for XSAVES Paolo Bonzini
  2014-12-04 18:05   ` Radim Krčmář
@ 2014-12-04 18:07   ` Radim Krčmář
  1 sibling, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-04 18:07 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Wanpeng Li

2014-12-04 16:57+0100, Paolo Bonzini:
> From: Wanpeng Li <wanpeng.li@linux.intel.com>
> 
> Add logic to get/set the XSS model-specific register.
> 
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

> @@ -2896,7 +2919,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
>  			SECONDARY_EXEC_ENABLE_INVPCID |
>  			SECONDARY_EXEC_APIC_REGISTER_VIRT |
>  			SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
> -			SECONDARY_EXEC_SHADOW_VMCS;
> +			SECONDARY_EXEC_SHADOW_VMCS |
> +			SECONDARY_EXEC_XSAVES;
>  		if (adjust_vmx_controls(min2, opt2,
>  					MSR_IA32_VMX_PROCBASED_CTLS2,
>  					&_cpu_based_2nd_exec_control) < 0)

(Sneaky ;)


* Re: [PATCH 9/9] kvm: vmx: add nested virtualization support for xsaves
  2014-12-04 15:57 ` [PATCH 9/9] kvm: vmx: add nested virtualization support for xsaves Paolo Bonzini
@ 2014-12-04 18:08   ` Radim Krčmář
  0 siblings, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-04 18:08 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Wanpeng Li

2014-12-04 16:57+0100, Paolo Bonzini:
> From: Wanpeng Li <wanpeng.li@linux.intel.com>
> 
> Add vmcs12 support for xsaves.
> 
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

> +	case EXIT_REASON_XSAVES: case EXIT_REASON_XRSTORS:
> +		/*
> +		 * This should never happen, since it is not possible to
> +		 * set XSS to a non-zero value---neither in L1 nor in L2.
> +		 * If it were, XSS would have to be checked against
> +		 * the XSS exit bitmap in vmcs12.
> +		 */

We could WARN then.

> +		return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
>  	default:
>  		return 1;
>  	}


* Re: [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves
  2014-12-04 15:57 ` [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves Paolo Bonzini
@ 2014-12-05  0:34   ` Wanpeng Li
  2014-12-05  1:20     ` Wanpeng Li
  0 siblings, 1 reply; 23+ messages in thread
From: Wanpeng Li @ 2014-12-05  0:34 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, rkrcmar

Hi Paolo,
On Thu, Dec 04, 2014 at 04:57:11PM +0100, Paolo Bonzini wrote:
>- EAX=0Dh, ECX=1: output registers ECX/EDX are reserved.
>
>- EAX=0Dh, ECX>1: output register ECX bit 0 is clear for all the CPUID
>leaves we support, because variable "supported" comes from XCR0 and not
>XSS.  Bits above 0 are reserved, so ECX is overall zero.  Output register
>EDX is reserved.
>
>Source: Intel Architecture Instruction Set Extensions Programming
>Reference, ref. number 319433-022
>
>Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
>Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>---
> arch/x86/kvm/cpuid.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>

Did you miss this in your patch?

+       /* cpuid 0xD.1.eax */
+       const u32 kvm_supported_word10_x86_features =
+               F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1);
+

In addition, there is a bisect issue if this is added in this patch
(4/9 already takes advantage of kvm_supported_word10_x86_features).

Regards,
Wanpeng Li 

>diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>index 2f7bc2de9915..644bfe828ce1 100644
>--- a/arch/x86/kvm/cpuid.c
>+++ b/arch/x86/kvm/cpuid.c
>@@ -482,8 +482,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
> 					entry[i].ebx =
> 						xstate_required_size(supported,
> 								     true);
>-			} else if (entry[i].eax == 0 || !(supported & mask))
>-				continue;
>+			} else {
>+				if (entry[i].eax == 0 || !(supported & mask))
>+					continue;
>+				if (WARN_ON_ONCE(entry[i].ecx & 1))
>+					continue;
>+			}
>+			entry[i].ecx = 0;
>+			entry[i].edx = 0;
> 			entry[i].flags |=
> 			       KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
> 			++*nent;
>-- 
>1.8.3.1
>


* Re: [PATCH 5/9] KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
  2014-12-04 15:57 ` [PATCH 5/9] KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly Paolo Bonzini
@ 2014-12-05  0:40   ` Wanpeng Li
  0 siblings, 0 replies; 23+ messages in thread
From: Wanpeng Li @ 2014-12-05  0:40 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, rkrcmar

Hi Paolo,
On Thu, Dec 04, 2014 at 04:57:10PM +0100, Paolo Bonzini wrote:
>This is the size of the XSAVES area.  This starts providing guest support
>for XSAVES (with no support yet for supervisor states, i.e. XSS == 0
>always in guests for now).
>
>Wanpeng Li suggested testing XSAVEC as well as XSAVES, since in practice
>no real processor exists that only has one of them, and there is no
>other way for userspace programs to compute the area of the XSAVEC
>save area.  CPUID(EAX=0xd,ECX=1).EBX provides an upper bound.
>
>Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
>Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>---
> arch/x86/kvm/cpuid.c | 22 ++++++++++++++++------
> 1 file changed, 16 insertions(+), 6 deletions(-)
>
>diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>index e24df01ab118..2f7bc2de9915 100644
>--- a/arch/x86/kvm/cpuid.c
>+++ b/arch/x86/kvm/cpuid.c
>@@ -23,7 +23,7 @@
> #include "mmu.h"
> #include "trace.h"
> 
>-static u32 xstate_required_size(u64 xstate_bv)
>+static u32 xstate_required_size(u64 xstate_bv, bool compacted)
> {
> 	int feature_bit = 0;
> 	u32 ret = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
>@@ -31,9 +31,10 @@ static u32 xstate_required_size(u64 xstate_bv)
> 	xstate_bv &= XSTATE_EXTEND_MASK;
> 	while (xstate_bv) {
> 		if (xstate_bv & 0x1) {
>-		        u32 eax, ebx, ecx, edx;
>+		        u32 eax, ebx, ecx, edx, offset;
> 		        cpuid_count(0xD, feature_bit, &eax, &ebx, &ecx, &edx);
>-			ret = max(ret, eax + ebx);
>+			offset = compacted ? ret : ebx;
>+			ret = max(ret, offset + eax);
> 		}
> 
> 		xstate_bv >>= 1;
>@@ -87,9 +88,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
> 			(best->eax | ((u64)best->edx << 32)) &
> 			kvm_supported_xcr0();
> 		vcpu->arch.guest_xstate_size = best->ebx =
>-			xstate_required_size(vcpu->arch.xcr0);
>+			xstate_required_size(vcpu->arch.xcr0, false);
> 	}
> 
>+	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
>+	if (best && (best->eax & (F(XSAVES)|F(XSAVEC))))
>+		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
>+
> 	/*
> 	 * The existing code assumes virtual address is 48-bit in the canonical
> 	 * address checks; exit if it is ever changed.
>@@ -470,9 +475,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
> 				goto out;
> 
> 			do_cpuid_1_ent(&entry[i], function, idx);
>-			if (idx == 1)

Where is the if (idx == 1) check added in this patchset? I suspect that
you missed adding your patch "kvm: x86: mask out XSAVES" to this patchset.

Regards,
Wanpeng Li 

>+			if (idx == 1) {
> 				entry[i].eax &= kvm_supported_word10_x86_features;
>-			else if (entry[i].eax == 0 || !(supported & mask))
>+				entry[i].ebx = 0;
>+				if (entry[i].eax & (F(XSAVES)|F(XSAVEC)))
>+					entry[i].ebx =
>+						xstate_required_size(supported,
>+								     true);
>+			} else if (entry[i].eax == 0 || !(supported & mask))
> 				continue;
> 			entry[i].flags |=
> 			       KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
>-- 
>1.8.3.1
>


* Re: [PATCH 0/9] Final set of XSAVES patches
  2014-12-04 15:57 [PATCH 0/9] Final set of XSAVES patches Paolo Bonzini
                   ` (8 preceding siblings ...)
  2014-12-04 15:57 ` [PATCH 9/9] kvm: vmx: add nested virtualization support for xsaves Paolo Bonzini
@ 2014-12-05  1:15 ` Wanpeng Li
  2014-12-05  7:27   ` Paolo Bonzini
  9 siblings, 1 reply; 23+ messages in thread
From: Wanpeng Li @ 2014-12-05  1:15 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, rkrcmar

Hi Paolo,
On Thu, Dec 04, 2014 at 04:57:05PM +0100, Paolo Bonzini wrote:
>These are all the patches needed to support XSAVES.
>

I think you missed adding your patch "kvm: x86: mask out XSAVES" to
this patchset.  I tested the whole patchset w/ that patch applied on a
Skylake-client machine and it looks good, so please feel free to add
my tested-by to this patchset.

Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>

>Paolo Bonzini (5):
>  x86: export get_xsave_addr
>  KVM: x86: support XSAVES usage in the host
>  KVM: x86: use F() macro throughout cpuid.c
>  KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
>  KVM: cpuid: mask more bits in leaf 0xd and subleaves
>
>Wanpeng Li (4):
>  kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
>  kvm: x86: handle XSAVES vmcs and vmexit
>  kvm: vmx: add MSR logic for XSAVES
>  kvm: vmx: add nested virtualization support for xsaves
>
> arch/x86/include/asm/kvm_host.h |  2 +
> arch/x86/include/asm/vmx.h      |  3 ++
> arch/x86/include/uapi/asm/vmx.h |  6 ++-
> arch/x86/kernel/xsave.c         |  1 +
> arch/x86/kvm/cpuid.c            | 47 ++++++++++++++-------
> arch/x86/kvm/svm.c              |  6 +++
> arch/x86/kvm/vmx.c              | 80 +++++++++++++++++++++++++++++++++++-
> arch/x86/kvm/x86.c              | 90 +++++++++++++++++++++++++++++++++++++----
> 8 files changed, 210 insertions(+), 25 deletions(-)
>
>-- 
>1.8.3.1


* Re: [PATCH 6/9] KVM: cpuid: mask more bits in leaf 0xd and subleaves
  2014-12-05  0:34   ` Wanpeng Li
@ 2014-12-05  1:20     ` Wanpeng Li
  0 siblings, 0 replies; 23+ messages in thread
From: Wanpeng Li @ 2014-12-05  1:20 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, rkrcmar

On Fri, Dec 05, 2014 at 08:34:47AM +0800, Wanpeng Li wrote:
>Hi Paolo,
>On Thu, Dec 04, 2014 at 04:57:11PM +0100, Paolo Bonzini wrote:
>>- EAX=0Dh, ECX=1: output registers ECX/EDX are reserved.
>>
>>- EAX=0Dh, ECX>1: output register ECX bit 0 is clear for all the CPUID
>>leaves we support, because variable "supported" comes from XCR0 and not
>>XSS.  Bits above 0 are reserved, so ECX is overall zero.  Output register
>>EDX is reserved.
>>
>>Source: Intel Architecture Instruction Set Extensions Programming
>>Reference, ref. number 319433-022
>>
>>Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
>>Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>---
>> arch/x86/kvm/cpuid.c | 10 ++++++++--
>> 1 file changed, 8 insertions(+), 2 deletions(-)
>>
>
>Do you miss this in your patch?
>
>+       /* cpuid 0xD.1.eax */
>+       const u32 kvm_supported_word10_x86_features =
>+               F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1);
>+
>
>In addition, there is bisect issue if this is added in this patch.
>(4/9 will take advantage of kvm_supported_word10_x86_features)

Also due to "kvm: x86: mask out XSAVES".

Regards,
Wanpeng Li

>
>Regards,
>Wanpeng Li 
>
>>diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>>index 2f7bc2de9915..644bfe828ce1 100644
>>--- a/arch/x86/kvm/cpuid.c
>>+++ b/arch/x86/kvm/cpuid.c
>>@@ -482,8 +482,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>> 					entry[i].ebx =
>> 						xstate_required_size(supported,
>> 								     true);
>>-			} else if (entry[i].eax == 0 || !(supported & mask))
>>-				continue;
>>+			} else {
>>+				if (entry[i].eax == 0 || !(supported & mask))
>>+					continue;
>>+				if (WARN_ON_ONCE(entry[i].ecx & 1))
>>+					continue;
>>+			}
>>+			entry[i].ecx = 0;
>>+			entry[i].edx = 0;
>> 			entry[i].flags |=
>> 			       KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
>> 			++*nent;
>>-- 
>>1.8.3.1
>>


* Re: [PATCH 0/9] Final set of XSAVES patches
  2014-12-05  1:15 ` [PATCH 0/9] Final set of XSAVES patches Wanpeng Li
@ 2014-12-05  7:27   ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-12-05  7:27 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: linux-kernel, kvm, rkrcmar



On 05/12/2014 02:15, Wanpeng Li wrote:
> Hi Paolo,
> On Thu, Dec 04, 2014 at 04:57:05PM +0100, Paolo Bonzini wrote:
>> These are all the patches needed to support XSAVES.
>>
> 
> I think you missed adding your patch "kvm: x86: mask out XSAVES" to
> this patchset.

That one is already on kvm/next, so it is more or less frozen.

> I tested the whole patchset w/ that patch applied on a
> Skylake-client machine and it looks good, so please feel free to add
> my tested-by to this patchset.
> 
> Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>

Awesome, thanks!

Paolo

>> Paolo Bonzini (5):
>>  x86: export get_xsave_addr
>>  KVM: x86: support XSAVES usage in the host
>>  KVM: x86: use F() macro throughout cpuid.c
>>  KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
>>  KVM: cpuid: mask more bits in leaf 0xd and subleaves
>>
>> Wanpeng Li (4):
>>  kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
>>  kvm: x86: handle XSAVES vmcs and vmexit
>>  kvm: vmx: add MSR logic for XSAVES
>>  kvm: vmx: add nested virtualization support for xsaves
>>
>> arch/x86/include/asm/kvm_host.h |  2 +
>> arch/x86/include/asm/vmx.h      |  3 ++
>> arch/x86/include/uapi/asm/vmx.h |  6 ++-
>> arch/x86/kernel/xsave.c         |  1 +
>> arch/x86/kvm/cpuid.c            | 47 ++++++++++++++-------
>> arch/x86/kvm/svm.c              |  6 +++
>> arch/x86/kvm/vmx.c              | 80 +++++++++++++++++++++++++++++++++++-
>> arch/x86/kvm/x86.c              | 90 +++++++++++++++++++++++++++++++++++++----
>> 8 files changed, 210 insertions(+), 25 deletions(-)
>>
>> -- 
>> 1.8.3.1


* Re: [PATCH 4/9] kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
  2014-12-04 15:57 ` [PATCH 4/9] kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest Paolo Bonzini
@ 2014-12-05 13:32   ` Radim Krčmář
  0 siblings, 0 replies; 23+ messages in thread
From: Radim Krčmář @ 2014-12-05 13:32 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Wanpeng Li

(I somehow managed to review one patch twice instead of this one ...)

2014-12-04 16:57+0100, Paolo Bonzini:
> From: Wanpeng Li <wanpeng.li@linux.intel.com>
> 
> Expose the XSAVES feature to the guest if the kvm_x86_ops say it is
> available.
> 
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

> --- a/arch/x86/include/asm/vmx.h
> +++ b/arch/x86/include/asm/vmx.h
> @@ -69,6 +69,7 @@
>  #define SECONDARY_EXEC_PAUSE_LOOP_EXITING	0x00000400
>  #define SECONDARY_EXEC_ENABLE_INVPCID		0x00001000
>  #define SECONDARY_EXEC_SHADOW_VMCS              0x00004000
> +#define SECONDARY_EXEC_XSAVES			0x00100000

(Email exposes tab alignment nicely.)

