* [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state
@ 2021-02-02 19:01 Michael Roth
  2021-02-02 19:01 ` [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional " Michael Roth
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Michael Roth @ 2021-02-02 19:01 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, linux-kernel

Hi Sean, Paolo,

Following up from previous v3 discussion:

  https://lore.kernel.org/kvm/X%2FSfw15OWarseivB@google.com/

I got bit in internal testing by a bug in v3 of this series that Sean had
already pointed out in v3 comments, so I thought it might be good to go
ahead and send a v4 with those fixes included. I also saw that Sean's vmsave
helpers are now in kvm/queue, so I've rebased these on top of those, and
made use of the new vmsave/vmload helpers:

  https://lore.kernel.org/kvm/8880fedc-14aa-1f14-b87b-118ebe0932a2@redhat.com/

Thanks!

-Mike

= Overview =

This series re-works the SVM KVM implementation to use vmload/vmsave to
handle saving/restoring additional host MSRs rather than explicit MSR
read/writes, resulting in a significant performance improvement for some
specific workloads and simplifying some of the save/load code (PATCH 1).

With those changes some commonalities emerge between SEV-ES and normal
vcpu_load/vcpu_put paths, which we then take advantage of to share more code,
as well as refactor them in a way that more closely aligns with the VMX
implementation (PATCH 2 and 3).
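
In condensed form, PATCH 1 replaces the per-MSR handling (and the
explicit FS/GS/LDT segment juggling) with a single vmsave/vmload of the
per-CPU host save area, roughly (a sketch; see PATCH 1 for the full
diff):

  /* before: save/restore each host MSR explicitly */
  for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
          rdmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);

  /* after: one instruction each way, using the per-CPU save area */
  vmsave(__sme_page_pa(sd->save_area));   /* on vcpu_load */
  ...
  vmload(__sme_page_pa(sd->save_area));   /* just after vmexit */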

v4:
 - rebased on kvm/queue
 - use __sme_page_pa() when accessing the save area (Sean)
 - make sure vmload during host reboot is handled (Sean)
 - introduce a vmload() helper to match the existing vmsave(), and use it
   rather than moving the VMLOAD into assembly (Sean)

v3:
 - rebased on kvm-next
 - remove unneeded braces from host MSR save/load loops (Sean)
 - use page_to_phys() in place of page_to_pfn() and shifting (Sean)
 - use stack instead of struct field to cache host save area outside of
   per-cpu storage, and pass as an argument to __svm_vcpu_run() to
   handle the VMLOAD in ASM code rather than inlining ASM (Sean/Andy)
 - remove now-unneeded index/sev_es_restored fields from
   host_save_user_msrs list
 - move host-saving/guest-loading of registers to prepare_guest_switch(),
   and host-loading of registers to prepare_host_switch, for both normal
   and sev-es paths (Sean)

v2:
 - rebase on latest kvm/next
 - move VMLOAD to just after vmexit so we can use it to handle all FS/GS
   host state restoration rather than relying on loadsegment() and an
   explicit write to MSR_GS_BASE (Andy)
 - drop 'host' field from struct vcpu_svm since it is no longer needed
   for storing FS/GS/LDT state (Andy)

 arch/x86/kvm/svm/sev.c     |  30 +-----------------------------
 arch/x86/kvm/svm/svm.c     | 107 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------------------
 arch/x86/kvm/svm/svm.h     |  29 +++++------------------------
 arch/x86/kvm/svm/svm_ops.h |   5 +++++
 4 files changed, 67 insertions(+), 104 deletions(-)




* [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional host state
  2021-02-02 19:01 [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Michael Roth
@ 2021-02-02 19:01 ` Michael Roth
  2021-02-03  0:38   ` Sean Christopherson
  2021-02-02 19:01 ` [PATCH v4 2/3] KVM: SVM: remove unneeded fields from host_save_user_msrs Michael Roth
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Michael Roth @ 2021-02-02 19:01 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, linux-kernel, Tom Lendacky

Using a guest workload which simply issues 'hlt' in a tight loop to
generate VMEXITs, it was observed (on a recent EPYC processor) that a
significant amount of the VMEXIT overhead measured on the host was the
result of MSR reads/writes in svm_vcpu_load/svm_vcpu_put according to
perf:

  67.49%--kvm_arch_vcpu_ioctl_run
          |
          |--23.13%--vcpu_put
          |          kvm_arch_vcpu_put
          |          |
          |          |--21.31%--native_write_msr
          |          |
          |           --1.27%--svm_set_cr4
          |
          |--16.11%--vcpu_load
          |          |
          |           --15.58%--kvm_arch_vcpu_load
          |                     |
          |                     |--13.97%--svm_set_cr4
          |                     |          |
          |                     |          |--12.64%--native_read_msr
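
For reference, the workload boils down to a guest-side loop of roughly
this shape (a sketch; the actual test harness isn't part of this
series):

  /* Guest ring-0 loop: each 'hlt' forces a VMEXIT to the host. */
  for (;;)
          asm volatile("hlt");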

Most of these MSRs relate to 'syscall'/'sysenter' and segment bases, and
can be saved/restored using 'vmsave'/'vmload' instructions rather than
explicit MSR reads/writes. In doing so there is a significant reduction
in the svm_vcpu_load/svm_vcpu_put overhead measured for the above
workload:

  50.92%--kvm_arch_vcpu_ioctl_run
          |
          |--19.28%--disable_nmi_singlestep
          |
          |--13.68%--vcpu_load
          |          kvm_arch_vcpu_load
          |          |
          |          |--9.19%--svm_set_cr4
          |          |          |
          |          |           --6.44%--native_read_msr
          |          |
          |           --3.55%--native_write_msr
          |
          |--6.05%--kvm_inject_nmi
          |--2.80%--kvm_sev_es_mmio_read
          |--2.19%--vcpu_put
          |          |
          |           --1.25%--kvm_arch_vcpu_put
          |                     native_write_msr

Quantifying this further, if we look at the raw cycle counts for a
normal iteration of the above workload (according to 'rdtscp'),
kvm_arch_vcpu_ioctl_run() takes ~4600 cycles from start to finish with
the current behavior. Using 'vmsave'/'vmload', this is reduced to
~2800 cycles, a savings of 39%.
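
The cycle counts above can be collected with a simple rdtscp-based probe
along these lines (a sketch, not code from this series):

  static inline u64 rdtscp_cycles(void)
  {
          u32 lo, hi, aux;

          /* RDTSCP returns the TSC in EDX:EAX and IA32_TSC_AUX in ECX */
          asm volatile("rdtscp" : "=a" (lo), "=d" (hi), "=c" (aux));
          return ((u64)hi << 32) | lo;
  }

  start = rdtscp_cycles();
  /* ... kvm_arch_vcpu_ioctl_run() ... */
  cycles = rdtscp_cycles() - start;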

While this approach doesn't seem to yield any noticeable improvement for
more realistic workloads like UnixBench, netperf, and kernel builds,
likely because their exit paths generally involve IO with comparatively
high latencies, it does significantly reduce the overall overhead of
KVM_RUN, which may still be noticeable in certain situations. It also
simplifies some aspects of the code.

With this change, explicit save/restore is no longer needed for the
following host MSRs, since they are documented[1] as being part of the
VMCB State Save Area:

  MSR_STAR, MSR_LSTAR, MSR_CSTAR,
  MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
  MSR_IA32_SYSENTER_CS,
  MSR_IA32_SYSENTER_ESP,
  MSR_IA32_SYSENTER_EIP,
  MSR_FS_BASE, MSR_GS_BASE

and only the following MSR needs individual handling in
svm_vcpu_put/svm_vcpu_load:

  MSR_TSC_AUX

We could drop the host_save_user_msrs array/loop and instead handle
MSR read/write of MSR_TSC_AUX directly, but we leave that for now as
a potential follow-up.
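
That follow-up would reduce to something like the below (a sketch;
'host_tsc_aux' is a hypothetical field that would replace the
host_user_msrs[] array):

  /* svm_vcpu_load(): save the host's MSR_TSC_AUX */
  rdmsrl(MSR_TSC_AUX, svm->host_tsc_aux);

  /* svm_vcpu_put(): restore it */
  wrmsrl(MSR_TSC_AUX, svm->host_tsc_aux);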

Since 'vmsave'/'vmload' also handles the LDTR and FS/GS segment
registers (and associated hidden state)[2], some of the code
previously used to handle this is no longer needed, so we drop it
as well.

The first public release of the SVM spec[3] also documents the same
handling for the host state in question, so we make these changes
unconditionally.

Also worth noting is that we 'vmsave' to the same page that is
subsequently used by 'vmrun' to record some additional host state. This
is okay since, in accordance with the spec[2], the additional state
written to the page by 'vmrun' does not overwrite any fields written by
'vmsave'. This has also been confirmed through testing (for the above
CPU, at least).

[1] AMD64 Architecture Programmer's Manual, Rev 3.33, Volume 2, Appendix B, Table B-2
[2] AMD64 Architecture Programmer's Manual, Rev 3.31, Volume 3, Chapter 4, VMSAVE/VMLOAD
[3] Secure Virtual Machine Architecture Reference Manual, Rev 3.01

Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/svm/svm.c     | 31 +++++--------------------------
 arch/x86/kvm/svm/svm.h     | 17 -----------------
 arch/x86/kvm/svm/svm_ops.h |  5 +++++
 3 files changed, 10 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 687876211ebe..bdc1921094dc 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1422,16 +1422,11 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		sev_es_vcpu_load(svm, cpu);
 	} else {
-#ifdef CONFIG_X86_64
-		rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
-#endif
-		savesegment(fs, svm->host.fs);
-		savesegment(gs, svm->host.gs);
-		svm->host.ldt = kvm_read_ldt();
-
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 			rdmsrl(host_save_user_msrs[i].index,
 			       svm->host_user_msrs[i]);
+
+		vmsave(__sme_page_pa(sd->save_area));
 	}
 
 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
@@ -1463,17 +1458,6 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		sev_es_vcpu_put(svm);
 	} else {
-		kvm_load_ldt(svm->host.ldt);
-#ifdef CONFIG_X86_64
-		loadsegment(fs, svm->host.fs);
-		wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
-		load_gs_index(svm->host.gs);
-#else
-#ifdef CONFIG_X86_32_LAZY_GS
-		loadsegment(gs, svm->host.gs);
-#endif
-#endif
-
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 			wrmsrl(host_save_user_msrs[i].index,
 			       svm->host_user_msrs[i]);
@@ -3780,16 +3764,11 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		__svm_sev_es_vcpu_run(svm->vmcb_pa);
 	} else {
+		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
+
 		__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
 
-#ifdef CONFIG_X86_64
-		native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
-#else
-		loadsegment(fs, svm->host.fs);
-#ifndef CONFIG_X86_32_LAZY_GS
-		loadsegment(gs, svm->host.gs);
-#endif
-#endif
+		vmload(__sme_page_pa(sd->save_area));
 	}
 
 	/*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0fe874ae5498..525f1bf57917 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -27,17 +27,6 @@ static const struct svm_host_save_msrs {
 	u32 index;		/* Index of the MSR */
 	bool sev_es_restored;	/* True if MSR is restored on SEV-ES VMEXIT */
 } host_save_user_msrs[] = {
-#ifdef CONFIG_X86_64
-	{ .index = MSR_STAR,			.sev_es_restored = true },
-	{ .index = MSR_LSTAR,			.sev_es_restored = true },
-	{ .index = MSR_CSTAR,			.sev_es_restored = true },
-	{ .index = MSR_SYSCALL_MASK,		.sev_es_restored = true },
-	{ .index = MSR_KERNEL_GS_BASE,		.sev_es_restored = true },
-	{ .index = MSR_FS_BASE,			.sev_es_restored = true },
-#endif
-	{ .index = MSR_IA32_SYSENTER_CS,	.sev_es_restored = true },
-	{ .index = MSR_IA32_SYSENTER_ESP,	.sev_es_restored = true },
-	{ .index = MSR_IA32_SYSENTER_EIP,	.sev_es_restored = true },
 	{ .index = MSR_TSC_AUX,			.sev_es_restored = false },
 };
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
@@ -130,12 +119,6 @@ struct vcpu_svm {
 	u64 next_rip;
 
 	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
-	struct {
-		u16 fs;
-		u16 gs;
-		u16 ldt;
-		u64 gs_base;
-	} host;
 
 	u64 spec_ctrl;
 	/*
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..c2a05f56c8e4 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -56,4 +56,9 @@ static inline void vmsave(hpa_t pa)
 	svm_asm1(vmsave, "a" (pa), "memory");
 }
 
+static inline void vmload(hpa_t pa)
+{
+	svm_asm1(vmload, "a" (pa), "memory");
+}
+
 #endif /* __KVM_X86_SVM_OPS_H */
-- 
2.25.1



* [PATCH v4 2/3] KVM: SVM: remove unneeded fields from host_save_user_msrs
  2021-02-02 19:01 [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Michael Roth
  2021-02-02 19:01 ` [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional " Michael Roth
@ 2021-02-02 19:01 ` Michael Roth
  2021-02-02 19:01 ` [PATCH v4 3/3] KVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup Michael Roth
  2021-02-03  8:10 ` [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Paolo Bonzini
  3 siblings, 0 replies; 7+ messages in thread
From: Michael Roth @ 2021-02-02 19:01 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, linux-kernel

Now that the set of host user MSRs that need to be individually
saved/restored is the same with/without SEV-ES, we can drop the
.sev_es_restored flag and just iterate through the list unconditionally
for both cases. A subsequent patch can then move these loops to a
common path.

Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/svm/sev.c | 16 ++++------------
 arch/x86/kvm/svm/svm.c |  6 ++----
 arch/x86/kvm/svm/svm.h |  7 ++-----
 3 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a3e2b29f484d..87167ef8ca23 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2083,12 +2083,8 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
 	 * restored.
 	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) {
-		if (host_save_user_msrs[i].sev_es_restored)
-			continue;
-
-		rdmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
-	}
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
 	/* XCR0 is restored on VMEXIT, save the current host value */
 	hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400);
@@ -2109,12 +2105,8 @@ void sev_es_vcpu_put(struct vcpu_svm *svm)
 	 * Certain MSRs are restored on VMEXIT and were saved with vmsave in
 	 * sev_es_vcpu_load() above. Only restore ones that weren't.
 	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) {
-		if (host_save_user_msrs[i].sev_es_restored)
-			continue;
-
-		wrmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
-	}
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
 
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index bdc1921094dc..ae897aaa4471 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1423,8 +1423,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		sev_es_vcpu_load(svm, cpu);
 	} else {
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			rdmsrl(host_save_user_msrs[i].index,
-			       svm->host_user_msrs[i]);
+			rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
 		vmsave(__sme_page_pa(sd->save_area));
 	}
@@ -1459,8 +1458,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 		sev_es_vcpu_put(svm);
 	} else {
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			wrmsrl(host_save_user_msrs[i].index,
-			       svm->host_user_msrs[i]);
+			wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 	}
 }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 525f1bf57917..66d83dfefe18 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -23,11 +23,8 @@
 
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
 
-static const struct svm_host_save_msrs {
-	u32 index;		/* Index of the MSR */
-	bool sev_es_restored;	/* True if MSR is restored on SEV-ES VMEXIT */
-} host_save_user_msrs[] = {
-	{ .index = MSR_TSC_AUX,			.sev_es_restored = false },
+static const u32 host_save_user_msrs[] = {
+	MSR_TSC_AUX,
 };
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
 
-- 
2.25.1



* [PATCH v4 3/3] KVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup
  2021-02-02 19:01 [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Michael Roth
  2021-02-02 19:01 ` [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional " Michael Roth
  2021-02-02 19:01 ` [PATCH v4 2/3] KVM: SVM: remove unneeded fields from host_save_user_msrs Michael Roth
@ 2021-02-02 19:01 ` Michael Roth
  2021-02-03  8:10 ` [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Paolo Bonzini
  3 siblings, 0 replies; 7+ messages in thread
From: Michael Roth @ 2021-02-02 19:01 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, linux-kernel

Currently we save host state like user-visible host MSRs, and do some
initial guest register setup for MSR_TSC_AUX and MSR_AMD64_TSC_RATIO
in svm_vcpu_load(). Defer this until just before we enter the guest by
moving the handling to kvm_x86_ops.prepare_guest_switch() similarly to
how it is done for the VMX implementation.

Additionally, since handling of saving/restoring host user MSRs is the
same both with/without SEV-ES enabled, move that handling to common
code.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/svm/sev.c | 22 +-----------
 arch/x86/kvm/svm/svm.c | 76 +++++++++++++++++++++++++++++-------------
 arch/x86/kvm/svm/svm.h |  5 +--
 3 files changed, 56 insertions(+), 47 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 87167ef8ca23..874ea309279f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2066,11 +2066,10 @@ void sev_es_create_vcpu(struct vcpu_svm *svm)
 					    sev_enc_bit));
 }
 
-void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
+void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu)
 {
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 	struct vmcb_save_area *hostsa;
-	unsigned int i;
 
 	/*
 	 * As an SEV-ES guest, hardware will restore the host state on VMEXIT,
@@ -2079,13 +2078,6 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 */
 	vmsave(__sme_page_pa(sd->save_area));
 
-	/*
-	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
-	 * restored.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
 	/* XCR0 is restored on VMEXIT, save the current host value */
 	hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400);
 	hostsa->xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
@@ -2097,18 +2089,6 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	hostsa->xss = host_xss;
 }
 
-void sev_es_vcpu_put(struct vcpu_svm *svm)
-{
-	unsigned int i;
-
-	/*
-	 * Certain MSRs are restored on VMEXIT and were saved with vmsave in
-	 * sev_es_vcpu_load() above. Only restore ones that weren't.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-}
-
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ae897aaa4471..0059f1d14b82 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1361,6 +1361,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 		svm->vmsa = page_address(vmsa_page);
 
 	svm->asid_generation = 0;
+	svm->guest_state_loaded = false;
 	init_vmcb(svm);
 
 	svm_init_osvw(vcpu);
@@ -1408,23 +1409,30 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 }
 
-static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
-	int i;
+	struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
+	unsigned int i;
 
-	if (unlikely(cpu != vcpu->cpu)) {
-		svm->asid_generation = 0;
-		vmcb_mark_all_dirty(svm->vmcb);
-	}
+	if (svm->guest_state_loaded)
+		return;
+
+	/*
+	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
+	 * area (non-sev-es). Save ones that aren't so we can restore them
+	 * individually later.
+	 */
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
+	/*
+	 * Save additional host state that will be restored on VMEXIT (sev-es)
+	 * or subsequent vmload of host save area.
+	 */
 	if (sev_es_guest(svm->vcpu.kvm)) {
-		sev_es_vcpu_load(svm, cpu);
+		sev_es_prepare_guest_switch(svm, vcpu->cpu);
 	} else {
-		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
 		vmsave(__sme_page_pa(sd->save_area));
 	}
 
@@ -1435,10 +1443,42 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 			wrmsrl(MSR_AMD64_TSC_RATIO, tsc_ratio);
 		}
 	}
+
 	/* This assumes that the kernel never uses MSR_TSC_AUX */
 	if (static_cpu_has(X86_FEATURE_RDTSCP))
 		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
 
+	svm->guest_state_loaded = true;
+}
+
+static void svm_prepare_host_switch(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	unsigned int i;
+
+	if (!svm->guest_state_loaded)
+		return;
+
+	/*
+	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
+	 * area (non-sev-es). Restore the ones that weren't.
+	 */
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
+
+	svm->guest_state_loaded = false;
+}
+
+static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+
+	if (unlikely(cpu != vcpu->cpu)) {
+		svm->asid_generation = 0;
+		vmcb_mark_all_dirty(svm->vmcb);
+	}
+
 	if (sd->current_vmcb != svm->vmcb) {
 		sd->current_vmcb = svm->vmcb;
 		indirect_branch_prediction_barrier();
@@ -1448,18 +1488,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
-	int i;
-
 	avic_vcpu_put(vcpu);
+	svm_prepare_host_switch(vcpu);
 
 	++vcpu->stat.host_state_reload;
-	if (sev_es_guest(svm->vcpu.kvm)) {
-		sev_es_vcpu_put(svm);
-	} else {
-		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-	}
 }
 
 static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
@@ -3614,10 +3646,6 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
 	invlpga(gva, svm->vmcb->control.asid);
 }
 
-static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
-{
-}
-
 static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 66d83dfefe18..cfc495c71fc1 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -172,6 +172,8 @@ struct vcpu_svm {
 	u64 ghcb_sa_len;
 	bool ghcb_sa_sync;
 	bool ghcb_sa_free;
+
+	bool guest_state_loaded;
 };
 
 struct svm_cpu_data {
@@ -570,9 +572,8 @@ int sev_handle_vmgexit(struct vcpu_svm *svm);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_es_init_vmcb(struct vcpu_svm *svm);
 void sev_es_create_vcpu(struct vcpu_svm *svm);
-void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu);
-void sev_es_vcpu_put(struct vcpu_svm *svm);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu);
 
 /* vmenter.S */
 
-- 
2.25.1



* Re: [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional host state
  2021-02-02 19:01 ` [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional " Michael Roth
@ 2021-02-03  0:38   ` Sean Christopherson
  2021-02-03  8:04     ` Paolo Bonzini
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2021-02-03  0:38 UTC (permalink / raw)
  To: Michael Roth
  Cc: kvm, Paolo Bonzini, Andy Lutomirski, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H . Peter Anvin, linux-kernel,
	Tom Lendacky

One quick comment while it's on my mind; I'll give this a proper gander tomorrow.

On Tue, Feb 02, 2021, Michael Roth wrote:
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..c2a05f56c8e4 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/svm_ops.h
> @@ -56,4 +56,9 @@ static inline void vmsave(hpa_t pa)
>  	svm_asm1(vmsave, "a" (pa), "memory");
>  }
>  
> +static inline void vmload(hpa_t pa)

This needs to be 'unsigned long', using 'hpa_t' in vmsave() is wrong as the
instructions consume rAX based on effective address.  I wrote the function
comment for the vmsave() fix so that it applies to both VMSAVE and VMLOAD,
so this can be a simple fixup on application (assuming v5 isn't needed for
other reasons).

https://lkml.kernel.org/r/20210202223416.2702336-1-seanjc@google.com

> +{
> +	svm_asm1(vmload, "a" (pa), "memory");
> +}
> +
>  #endif /* __KVM_X86_SVM_OPS_H */
> -- 
> 2.25.1
> 
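
With that fixup, the helper would presumably end up as something like
this (a sketch based on the suggestion and the linked vmsave() fix; the
final form is whatever gets applied):

  /*
   * VMSAVE/VMLOAD take the save area's physical address in rAX,
   * consumed as an effective address, hence 'unsigned long' rather
   * than 'hpa_t'.
   */
  static inline void vmload(unsigned long pa)
  {
          svm_asm1(vmload, "a" (pa), "memory");
  }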


* Re: [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional host state
  2021-02-03  0:38   ` Sean Christopherson
@ 2021-02-03  8:04     ` Paolo Bonzini
  0 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2021-02-03  8:04 UTC (permalink / raw)
  To: Sean Christopherson, Michael Roth
  Cc: kvm, Andy Lutomirski, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, linux-kernel, Tom Lendacky

On 03/02/21 01:38, Sean Christopherson wrote:
>>   
>> +static inline void vmload(hpa_t pa)
> This needs to be 'unsigned long', using 'hpa_t' in vmsave() is wrong as the
> instructions consume rAX based on effective address.  I wrote the function
> comment for the vmsave() fix so that it applies to both VMSAVE and VMLOAD,
> so this can be a simple fixup on application (assuming v5 isn't needed for
> other reasons).
> 
> https://lkml.kernel.org/r/20210202223416.2702336-1-seanjc@google.com
> 

Yup, fixed.

Paolo



* Re: [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state
  2021-02-02 19:01 [PATCH v4 0/3] KVM: SVM: Refactor vcpu_load/put to use vmload/vmsave for host state Michael Roth
                   ` (2 preceding siblings ...)
  2021-02-02 19:01 ` [PATCH v4 3/3] KVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup Michael Roth
@ 2021-02-03  8:10 ` Paolo Bonzini
  3 siblings, 0 replies; 7+ messages in thread
From: Paolo Bonzini @ 2021-02-03  8:10 UTC (permalink / raw)
  To: Michael Roth, kvm
  Cc: Sean Christopherson, Andy Lutomirski, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H . Peter Anvin, linux-kernel

On 02/02/21 20:01, Michael Roth wrote:
> Hi Sean, Paolo,
> 
> Following up from previous v3 discussion:
> 
>    https://lore.kernel.org/kvm/X%2FSfw15OWarseivB@google.com/
> 
> I got bit in internal testing by a bug in v3 of this series that Sean had
> already pointed out in v3 comments, so I thought it might be good to go
> ahead and send a v4 with those fixes included. I also saw that Sean's vmsave
> helpers are now in kvm/queue, so I've rebased these on top of those, and
> made use of the new vmsave/vmload helpers:
> 
>    https://lore.kernel.org/kvm/8880fedc-14aa-1f14-b87b-118ebe0932a2@redhat.com/
> 
> Thanks!
> 
> -Mike
> 
> = Overview =
> 
> This series re-works the SVM KVM implementation to use vmload/vmsave to
> handle saving/restoring additional host MSRs rather than explicit MSR
> read/writes, resulting in a significant performance improvement for some
> specific workloads and simplifying some of the save/load code (PATCH 1).
> 
> With those changes some commonalities emerge between SEV-ES and normal
> vcpu_load/vcpu_put paths, which we then take advantage of to share more code,
> as well as refactor them in a way that more closely aligns with the VMX
> implementation (PATCH 2 and 3).

Queued, thanks.

Paolo

> v4:
>   - rebased on kvm/queue
>   - use __sme_page_pa() when accessing the save area (Sean)
>   - make sure vmload during host reboot is handled (Sean)
>   - introduce a vmload() helper to match the existing vmsave(), and use it
>     rather than moving the VMLOAD into assembly (Sean)
> 
> v3:
>   - rebased on kvm-next
>   - remove unneeded braces from host MSR save/load loops (Sean)
>   - use page_to_phys() in place of page_to_pfn() and shifting (Sean)
>   - use stack instead of struct field to cache host save area outside of
>     per-cpu storage, and pass as an argument to __svm_vcpu_run() to
>     handle the VMLOAD in ASM code rather than inlining ASM (Sean/Andy)
>   - remove now-unneeded index/sev_es_restored fields from
>     host_save_user_msrs list
>   - move host-saving/guest-loading of registers to prepare_guest_switch(),
>     and host-loading of registers to prepare_host_switch, for both normal
>     and sev-es paths (Sean)
> 
> v2:
>   - rebase on latest kvm/next
>   - move VMLOAD to just after vmexit so we can use it to handle all FS/GS
>     host state restoration rather than relying on loadsegment() and an
>     explicit write to MSR_GS_BASE (Andy)
>   - drop 'host' field from struct vcpu_svm since it is no longer needed
>     for storing FS/GS/LDT state (Andy)
> 
>   arch/x86/kvm/svm/sev.c     |  30 +-----------------------------
>   arch/x86/kvm/svm/svm.c     | 107 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------------------------------------
>   arch/x86/kvm/svm/svm.h     |  29 +++++------------------------
>   arch/x86/kvm/svm/svm_ops.h |   5 +++++
>   4 files changed, 67 insertions(+), 104 deletions(-)
> 
> 

