* [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features
@ 2024-03-18 23:33 Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 01/15] KVM: SVM: Invert handling of SEV and SEV_ES feature flags Paolo Bonzini
` (18 more replies)
0 siblings, 19 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc, Dave Hansen
[Dave: there is a small arch/x86/kernel/fpu change in patch 9;
I am CCing you in the cover letter just for context. - Paolo]
The idea that no parameter would ever be necessary when enabling SEV or
SEV-ES for a VM was decidedly optimistic. The first source of variability
that was encountered is the desired set of VMSA features, as that affects
the measurement of the VM's initial state and cannot be changed
arbitrarily by the hypervisor.
This series adds all the APIs that are needed to customize the features,
with room for future enhancements:
- a new /dev/kvm device attribute to retrieve the set of supported
features (right now, only debug swap)
- a new sub-operation for KVM_MEM_ENCRYPT_OP that can take a struct,
replacing the existing KVM_SEV_INIT and KVM_SEV_ES_INIT
It then puts the new op to work by including the VMSA features as a field
of the initialization struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT
use the full set of supported VMSA features for backwards compatibility;
but I am considering also making them use zero as the feature mask, and
will gladly adjust the patches if so requested.
In order to avoid creating *two* new KVM_MEM_ENCRYPT_OPs, I decided that
I might as well make SEV and SEV-ES use VM types. And then, why not make
a SEV-ES VM, when created with the new VM type instead of KVM_SEV_ES_INIT,
reject KVM_GET_REGS/KVM_SET_REGS and friends on the vCPU file descriptor
once the VMSA has been encrypted... which is how the API should have
always behaved.
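The intended semantics can be sketched in a few lines: once a VM created
with the protected-state VM type has encrypted its VMSA, register accessors
fail rather than returning stale plaintext. Structure and function names
below are illustrative stand-ins, not the actual KVM internals:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the new behavior: a protected-state VM type plus an
 * "encrypted already" vCPU flag gate the register accessors. Names
 * are illustrative only.
 */
struct vm_sketch   { bool has_protected_state; };
struct vcpu_sketch { struct vm_sketch *vm; bool guest_state_protected; };

static int get_regs_sketch(const struct vcpu_sketch *vcpu)
{
	if (vcpu->vm->has_protected_state && vcpu->guest_state_protected)
		return -22;	/* -EINVAL: state is no longer visible */
	return 0;		/* would copy registers out here */
}
```

Before LAUNCH_UPDATE_VMSA the accessor still works, which is what lets
userspace seed initial register state; afterwards it fails cleanly.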
Note: despite having the same number of patches as v3, #9 and #15 are new!
The series is structured as follows:
- patches 1 and 2 change sev.c so that it is compiled only if SEV is enabled
in kconfig
- patches 3 to 6 introduce the new device attribute to retrieve supported
VMSA features
- patches 7 and 8 introduce new infrastructure for VM types, replacing
the similar code in the TDX patches
- patch 9 allows setting the FPU and AVX state prior to encryption of the
VMSA
- patches 10 to 12 introduce the new VM types for SEV and
SEV-ES, and KVM_SEV_INIT2 as a new sub-operation for KVM_MEM_ENCRYPT_OP.
- patch 13 reenables DebugSwap, now that there is an API that allows doing
so without breaking backwards compatibility
- patches 14 and 15 test the new ioctl.
The idea is that SEV-SNP will only ever support KVM_SEV_INIT2. I have
placed patches for QEMU to support this new API at branch sevinit2 of
https://gitlab.com/bonzini/qemu.
I haven't fully tested patch 9, and it really deserves a selftest;
that is a bit tricky, though, without ucall infrastructure for SEV.
I will look at it tomorrow.
The series is at branch kvm-coco-queue of kvm.git, and I would like to
include it in kvm/next as soon as possible after the release of -rc1.
Thanks,
Paolo
v3->v4:
- moved patches 1 and 5 to separate "fixes" series for 6.9
- do not conditionalize prototypes for functions that are called by common
SVM code
- rebased on top of SEV selftest infrastructure from 6.9 merge window;
include new patch to drop the "subtype" concept
- rebased on top of SEV-SNP patches from 6.9 merge window
- rebased on top of patch to disable DebugSwap from 6.8 rc;
drop "warn" once SEV_ES_INIT stops enabling VMSA features and
finally re-enable DebugSwap
- simplified logic to return -EINVAL from ioctls
- also block KVM_GET/SET_FPU for protected-state guests
- move logic to set kvm->arch.has_protected_state to svm_vm_init
- fix "struct struct" in documentation
Paolo Bonzini (14):
KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
KVM: x86: use u64_to_user_addr()
KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR
KVM: SEV: publish supported VMSA features
KVM: SEV: store VMSA features in kvm_sev_info
KVM: x86: add fields to struct kvm_arch for CoCo features
KVM: x86: Add supported_vm_types to kvm_caps
KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
KVM: SEV: introduce to_kvm_sev_info
KVM: SEV: define VM types for SEV and SEV-ES
KVM: SEV: introduce KVM_SEV_INIT2 operation
KVM: SEV: allow SEV-ES DebugSwap again
selftests: kvm: add tests for KVM_SEV_INIT2
selftests: kvm: switch to using KVM_X86_*_VM
Sean Christopherson (1):
KVM: SVM: Invert handling of SEV and SEV_ES feature flags
Documentation/virt/kvm/api.rst | 2 +
.../virt/kvm/x86/amd-memory-encryption.rst | 52 +++++-
arch/x86/include/asm/fpu/api.h | 3 +
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/include/uapi/asm/kvm.h | 12 ++
arch/x86/kernel/fpu/xstate.h | 2 -
arch/x86/kvm/Makefile | 7 +-
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/sev.c | 172 ++++++++++++++----
arch/x86/kvm/svm/svm.c | 27 ++-
arch/x86/kvm/svm/svm.h | 43 ++++-
arch/x86/kvm/x86.c | 170 +++++++++++------
arch/x86/kvm/x86.h | 2 +
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/kvm_util_base.h | 11 +-
.../selftests/kvm/include/x86_64/processor.h | 6 -
.../selftests/kvm/include/x86_64/sev.h | 16 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 1 -
.../selftests/kvm/lib/x86_64/processor.c | 14 +-
tools/testing/selftests/kvm/lib/x86_64/sev.c | 30 ++-
.../selftests/kvm/set_memory_region_test.c | 8 +-
.../selftests/kvm/x86_64/sev_init2_tests.c | 149 +++++++++++++++
23 files changed, 569 insertions(+), 170 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
--
2.43.0
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v4 01/15] KVM: SVM: Invert handling of SEV and SEV_ES feature flags
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y Paolo Bonzini
` (17 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
From: Sean Christopherson <seanjc@google.com>
Leave SEV and SEV_ES '0' in kvm_cpu_caps by default, and instead set them
in sev_set_cpu_caps() if SEV and SEV-ES support are fully enabled. Aside
from the fact that sev_set_cpu_caps() is wildly misleading when it *clears*
capabilities, this will allow compiling out sev.c without falsely
advertising SEV/SEV-ES support in KVM_GET_SUPPORTED_CPUID.
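The inversion is the classic default-deny pattern: a toy model of it,
with made-up capability bits standing in for the kvm_cpu_caps entries,
might look like this. If the setup hook is compiled out entirely, the
default of zero means nothing is falsely advertised:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the inversion: leave the capability bits 0 by default
 * and set them only when support is confirmed, instead of advertising
 * them and clearing on failure. Bit values are illustrative.
 */
#define CAP_SEV    (1u << 0)
#define CAP_SEV_ES (1u << 1)

static uint32_t caps_after_setup(int sev_enabled, int sev_es_enabled)
{
	uint32_t caps = 0;	/* default: nothing advertised */

	if (sev_enabled)
		caps |= CAP_SEV;
	if (sev_es_enabled)
		caps |= CAP_SEV_ES;
	return caps;
}
```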
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/sev.c | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index adba49afb5fe..bde4df13a7e8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -761,7 +761,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_mask(CPUID_8000_000A_EDX, 0);
kvm_cpu_cap_mask(CPUID_8000_001F_EAX,
- 0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
+ 0 /* SME */ | 0 /* SEV */ | 0 /* VM_PAGE_FLUSH */ | 0 /* SEV_ES */ |
F(SME_COHERENT));
kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e5a4d9b0e79f..382c745b8ba9 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2186,10 +2186,10 @@ void sev_vm_destroy(struct kvm *kvm)
void __init sev_set_cpu_caps(void)
{
- if (!sev_enabled)
- kvm_cpu_cap_clear(X86_FEATURE_SEV);
- if (!sev_es_enabled)
- kvm_cpu_cap_clear(X86_FEATURE_SEV_ES);
+ if (sev_enabled)
+ kvm_cpu_cap_set(X86_FEATURE_SEV);
+ if (sev_es_enabled)
+ kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
}
void __init sev_hardware_setup(void)
--
2.43.0
* [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 01/15] KVM: SVM: Invert handling of SEV and SEV_ES feature flags Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-19 22:55 ` kernel test robot
2024-03-20 8:26 ` kernel test robot
2024-03-18 23:33 ` [PATCH v4 03/15] KVM: x86: use u64_to_user_addr() Paolo Bonzini
` (16 subsequent siblings)
18 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Stop compiling sev.c when CONFIG_KVM_AMD_SEV=n, as the number of #ifdefs
in sev.c is getting ridiculous, and having #ifdefs inside of SEV helpers
is quite confusing.
To minimize #ifdefs in code flows, #ifdef away only the kvm_x86_ops hooks
and the #VMGEXIT handler. Stubs are also restricted to functions that
check sev_enabled and to the destruction functions sev_free_vcpu() and
sev_vm_destroy(), whose callers invoke them unconditionally rather than
checking first. Most call sites instead rely on dead code elimination
to take care of functions that are guarded with sev_guest() or
sev_es_guest().
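The stub arrangement described above follows a common kernel pattern; a
minimal sketch, with CONFIG_DEMO_FEATURE standing in for
CONFIG_KVM_AMD_SEV and made-up function names, is:

```c
#include <assert.h>

/*
 * Sketch of the stub pattern: when the feature is compiled out, common
 * code links against trivial static inline stubs instead of wrapping
 * every call site in #ifdef. CONFIG_DEMO_FEATURE is a stand-in for
 * CONFIG_KVM_AMD_SEV; names are illustrative.
 */
#ifdef CONFIG_DEMO_FEATURE
int feature_cpu_init(void);	/* real implementation in feature.c */
#else
static inline int feature_cpu_init(void) { return 0; }	/* stub: succeed */
#endif

static int common_setup(void)
{
	/* Common code calls the hook unconditionally, no #ifdef here. */
	return feature_cpu_init();
}
```

With the config off, the compiler inlines the empty stub and the call
disappears entirely, which is exactly the dead-code-elimination effect
the commit message relies on for sev_guest()-guarded paths.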
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/Makefile | 7 ++++---
arch/x86/kvm/svm/sev.c | 23 -----------------------
arch/x86/kvm/svm/svm.c | 5 ++++-
arch/x86/kvm/svm/svm.h | 32 +++++++++++++++++++++++---------
4 files changed, 31 insertions(+), 36 deletions(-)
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 475b5fa917a6..744a1ea3ee5c 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -25,9 +25,10 @@ kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
kvm-intel-$(CONFIG_KVM_HYPERV) += vmx/hyperv.o vmx/hyperv_evmcs.o
-kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
- svm/sev.o
-kvm-amd-$(CONFIG_KVM_HYPERV) += svm/hyperv.o
+kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o
+
+kvm-amd-$(CONFIG_KVM_AMD_SEV) += svm/sev.o
+kvm-amd-$(CONFIG_KVM_HYPERV) += svm/hyperv.o
ifdef CONFIG_HYPERV
kvm-y += kvm_onhyperv.o
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 382c745b8ba9..73fee5f08391 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -32,22 +32,6 @@
#include "cpuid.h"
#include "trace.h"
-#ifndef CONFIG_KVM_AMD_SEV
-/*
- * When this config is not defined, SEV feature is not supported and APIs in
- * this file are not used but this file still gets compiled into the KVM AMD
- * module.
- *
- * We will not have MISC_CG_RES_SEV and MISC_CG_RES_SEV_ES entries in the enum
- * misc_res_type {} defined in linux/misc_cgroup.h.
- *
- * Below macros allow compilation to succeed.
- */
-#define MISC_CG_RES_SEV MISC_CG_RES_TYPES
-#define MISC_CG_RES_SEV_ES MISC_CG_RES_TYPES
-#endif
-
-#ifdef CONFIG_KVM_AMD_SEV
/* enable/disable SEV support */
static bool sev_enabled = true;
module_param_named(sev, sev_enabled, bool, 0444);
@@ -59,11 +43,6 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
/* enable/disable SEV-ES DebugSwap support */
static bool sev_es_debug_swap_enabled = false;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
-#else
-#define sev_enabled false
-#define sev_es_enabled false
-#define sev_es_debug_swap_enabled false
-#endif /* CONFIG_KVM_AMD_SEV */
static u8 sev_enc_bit;
static DECLARE_RWSEM(sev_deactivate_lock);
@@ -2194,7 +2173,6 @@ void __init sev_set_cpu_caps(void)
void __init sev_hardware_setup(void)
{
-#ifdef CONFIG_KVM_AMD_SEV
unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
bool sev_es_supported = false;
bool sev_supported = false;
@@ -2294,7 +2272,6 @@ void __init sev_hardware_setup(void)
if (!sev_es_enabled || !cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP) ||
!cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
sev_es_debug_swap_enabled = false;
-#endif
}
void sev_hardware_unsetup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d1a9f9951635..e7f47a1f3eb1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3303,7 +3303,9 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
[SVM_EXIT_RSM] = rsm_interception,
[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,
[SVM_EXIT_AVIC_UNACCELERATED_ACCESS] = avic_unaccelerated_access_interception,
+#ifdef CONFIG_KVM_AMD_SEV
[SVM_EXIT_VMGEXIT] = sev_handle_vmgexit,
+#endif
};
static void dump_vmcb(struct kvm_vcpu *vcpu)
@@ -5023,6 +5025,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.enable_smi_window = svm_enable_smi_window,
#endif
+#ifdef CONFIG_KVM_AMD_SEV
.mem_enc_ioctl = sev_mem_enc_ioctl,
.mem_enc_register_region = sev_mem_enc_register_region,
.mem_enc_unregister_region = sev_mem_enc_unregister_region,
@@ -5030,7 +5033,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.vm_copy_enc_context_from = sev_vm_copy_enc_context_from,
.vm_move_enc_context_from = sev_vm_move_enc_context_from,
-
+#endif
.check_emulate_instruction = svm_check_emulate_instruction,
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7f1fbd874c45..d20e48c31210 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -664,13 +664,10 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
/* sev.c */
+#ifdef CONFIG_KVM_AMD_SEV
#define GHCB_VERSION_MAX 1ULL
#define GHCB_VERSION_MIN 1ULL
-
-extern unsigned int max_sev_asid;
-
-void sev_vm_destroy(struct kvm *kvm);
int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
int sev_mem_enc_register_region(struct kvm *kvm,
struct kvm_enc_region *range);
@@ -681,20 +678,37 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
void sev_guest_memory_reclaimed(struct kvm *kvm);
void pre_sev_run(struct vcpu_svm *svm, int cpu);
-void __init sev_set_cpu_caps(void);
-void __init sev_hardware_setup(void);
-void sev_hardware_unsetup(void);
-int sev_cpu_init(struct svm_cpu_data *sd);
void sev_init_vmcb(struct vcpu_svm *svm);
void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
-void sev_free_vcpu(struct kvm_vcpu *vcpu);
int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
void sev_es_unmap_ghcb(struct vcpu_svm *svm);
+
+/* These symbols are used in common code and are stubbed below. */
struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu);
+void sev_free_vcpu(struct kvm_vcpu *vcpu);
+void sev_vm_destroy(struct kvm *kvm);
+void __init sev_set_cpu_caps(void);
+void __init sev_hardware_setup(void);
+void sev_hardware_unsetup(void);
+int sev_cpu_init(struct svm_cpu_data *sd);
+extern unsigned int max_sev_asid;
+#else
+static inline struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu) {
+ return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+}
+
+static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
+static inline void sev_vm_destroy(struct kvm *kvm) {}
+static inline void __init sev_set_cpu_caps(void) {}
+static inline void __init sev_hardware_setup(void) {}
+static inline void sev_hardware_unsetup(void) {}
+static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
+#define max_sev_asid 0
+#endif
/* vmenter.S */
--
2.43.0
* [PATCH v4 03/15] KVM: x86: use u64_to_user_addr()
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 01/15] KVM: SVM: Invert handling of SEV and SEV_ES feature flags Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 04/15] KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR Paolo Bonzini
` (15 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
There is no danger to the kernel if 32-bit userspace provides a 64-bit
value that has the high bits set, but for whatever reason happens to
resolve to an address that has something mapped there. KVM uses the
checked versions of get_user() and put_user(), so any faults are caught
properly.
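For reference, what u64_to_user_ptr() boils down to can be modeled in
userspace as a truncating cast; the kernel macro additionally carries the
__user annotation for sparse. This sketch is a userspace approximation,
not the kernel definition:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace model of u64_to_user_ptr(): truncate a userspace-supplied
 * u64 to the native pointer width and let the checked accessors fault
 * if the result is unmapped. (The real macro also adds __user.)
 */
static inline void *u64_to_ptr_sketch(uint64_t val)
{
	return (void *)(uintptr_t)val;
}
```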
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/x86.c | 24 +++---------------------
1 file changed, 3 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 47d9f03b7778..3d2029402513 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4842,25 +4842,13 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
return r;
}
-static inline void __user *kvm_get_attr_addr(struct kvm_device_attr *attr)
-{
- void __user *uaddr = (void __user*)(unsigned long)attr->addr;
-
- if ((u64)(unsigned long)uaddr != attr->addr)
- return ERR_PTR_USR(-EFAULT);
- return uaddr;
-}
-
static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
if (attr->group)
return -ENXIO;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_X86_XCOMP_GUEST_SUPP:
if (put_user(kvm_caps.supported_xcr0, uaddr))
@@ -5712,12 +5700,9 @@ static int kvm_arch_tsc_has_attr(struct kvm_vcpu *vcpu,
static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
int r;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_VCPU_TSC_OFFSET:
r = -EFAULT;
@@ -5735,13 +5720,10 @@ static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
static int kvm_arch_tsc_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
struct kvm *kvm = vcpu->kvm;
int r;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_VCPU_TSC_OFFSET: {
u64 offset, tsc, ns;
--
2.43.0
* [PATCH v4 04/15] KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (2 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 03/15] KVM: x86: use u64_to_user_addr() Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 05/15] KVM: SEV: publish supported VMSA features Paolo Bonzini
` (14 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Allow vendor modules to provide their own attributes on /dev/kvm.
To avoid proliferation of vendor ops, implement KVM_HAS_DEVICE_ATTR
and KVM_GET_DEVICE_ATTR in terms of the same function. You're not
supposed to use KVM_GET_DEVICE_ATTR to do complicated computations,
especially on /dev/kvm.
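The shared-helper idea can be sketched as follows: both "has attribute"
and "get attribute" funnel through one lookup, so an attribute can never
be reported as present without also being retrievable. The attribute
number mirrors KVM_X86_XCOMP_GUEST_SUPP, but the values and names here
are otherwise illustrative:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of implementing KVM_HAS_DEVICE_ATTR in terms of
 * KVM_GET_DEVICE_ATTR's lookup. Attribute 0 mirrors
 * KVM_X86_XCOMP_GUEST_SUPP; the returned value is a placeholder.
 */
#define ATTR_XCOMP_GUEST_SUPP	0
#define ENXIO_SKETCH		-6

static int dev_get_attr_sketch(uint64_t attr, uint64_t *val)
{
	switch (attr) {
	case ATTR_XCOMP_GUEST_SUPP:
		*val = 0x207;	/* placeholder for kvm_caps.supported_xcr0 */
		return 0;
	default:
		return ENXIO_SKETCH;	/* vendor dev_get_attr hook goes here */
	}
}

static int dev_has_attr_sketch(uint64_t attr)
{
	uint64_t val;

	return dev_get_attr_sketch(attr, &val);	/* same path, value dropped */
}
```

This is why the commit message cautions against expensive computations in
the get path: the "has" query runs the exact same code.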
Reviewed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 43 ++++++++++++++++++++----------
3 files changed, 31 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 110d7f29ca9a..5187fcf4b610 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -121,6 +121,7 @@ KVM_X86_OP(enter_smm)
KVM_X86_OP(leave_smm)
KVM_X86_OP(enable_smi_window)
#endif
+KVM_X86_OP_OPTIONAL(dev_get_attr)
KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
KVM_X86_OP_OPTIONAL(mem_enc_register_region)
KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 16e07a2eee19..f6cc7bfb5462 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1778,6 +1778,7 @@ struct kvm_x86_ops {
void (*enable_smi_window)(struct kvm_vcpu *vcpu);
#endif
+ int (*dev_get_attr)(u64 attr, u64 *val);
int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3d2029402513..e8253aa8ef5e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4842,34 +4842,49 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
return r;
}
-static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
+static int __kvm_x86_dev_get_attr(struct kvm_device_attr *attr, u64 *val)
{
- u64 __user *uaddr = u64_to_user_ptr(attr->addr);
+ int r;
if (attr->group)
return -ENXIO;
switch (attr->attr) {
case KVM_X86_XCOMP_GUEST_SUPP:
- if (put_user(kvm_caps.supported_xcr0, uaddr))
- return -EFAULT;
- return 0;
+ r = 0;
+ *val = kvm_caps.supported_xcr0;
+ break;
default:
- return -ENXIO;
+ r = -ENXIO;
+ if (kvm_x86_ops.dev_get_attr)
+ r = static_call(kvm_x86_dev_get_attr)(attr->attr, val);
+ break;
}
+
+ return r;
+}
+
+static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
+{
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
+ int r;
+ u64 val;
+
+ r = __kvm_x86_dev_get_attr(attr, &val);
+ if (r < 0)
+ return r;
+
+ if (put_user(val, uaddr))
+ return -EFAULT;
+
+ return 0;
}
static int kvm_x86_dev_has_attr(struct kvm_device_attr *attr)
{
- if (attr->group)
- return -ENXIO;
+ u64 val;
- switch (attr->attr) {
- case KVM_X86_XCOMP_GUEST_SUPP:
- return 0;
- default:
- return -ENXIO;
- }
+ return __kvm_x86_dev_get_attr(attr, &val);
}
long kvm_arch_dev_ioctl(struct file *filp,
--
2.43.0
* [PATCH v4 05/15] KVM: SEV: publish supported VMSA features
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (3 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 04/15] KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-25 23:59 ` Isaku Yamahata
2024-03-18 23:33 ` [PATCH v4 06/15] KVM: SEV: store VMSA features in kvm_sev_info Paolo Bonzini
` (13 subsequent siblings)
18 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Compute the set of features to be stored in the VMSA when KVM is
initialized; move it from there into kvm_sev_info when SEV is initialized,
and then into the initial VMSA.
The new variable can then be used to return the set of supported features
to userspace, via the KVM_GET_DEVICE_ATTR ioctl.
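The compute-once-at-setup step is small enough to show in isolation; a
sketch with an assumed bit position for DebugSwap (the only feature
currently exposed) is:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of assembling the supported-features word once at hardware
 * setup; it is then simply reported via the device attribute. The
 * DebugSwap bit position is an assumption for illustration.
 */
#define FEAT_DEBUG_SWAP (1ULL << 5)

static uint64_t compute_supported_vmsa_features(int debug_swap_enabled)
{
	uint64_t features = 0;

	if (debug_swap_enabled)
		features |= FEAT_DEBUG_SWAP;
	return features;
}
```

New feature bits only need to be OR-ed in here for userspace to discover
them, without any further UAPI change.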
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
.../virt/kvm/x86/amd-memory-encryption.rst | 12 +++++++++++
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/svm/sev.c | 21 +++++++++++++++++--
arch/x86/kvm/svm/svm.c | 1 +
arch/x86/kvm/svm/svm.h | 2 ++
5 files changed, 35 insertions(+), 2 deletions(-)
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index 84335d119ff1..fb41470c0310 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -425,6 +425,18 @@ issued by the hypervisor to make the guest ready for execution.
Returns: 0 on success, -negative on error
+Device attribute API
+====================
+
+Attributes of the SEV implementation can be retrieved through the
+``KVM_HAS_DEVICE_ATTR`` and ``KVM_GET_DEVICE_ATTR`` ioctls on the ``/dev/kvm``
+device node.
+
+Currently only one attribute is implemented:
+
+* group 0, attribute ``KVM_X86_SEV_VMSA_FEATURES``: return the set of all
+ bits that are accepted in the ``vmsa_features`` of ``KVM_SEV_INIT2``.
+
Firmware Management
===================
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index ef11aa4cab42..d0c1b459f7e9 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -459,6 +459,7 @@ struct kvm_sync_regs {
/* attributes for system fd (group 0) */
#define KVM_X86_XCOMP_GUEST_SUPP 0
+#define KVM_X86_SEV_VMSA_FEATURES 1
struct kvm_vmx_nested_state_data {
__u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 73fee5f08391..22c35a39c25f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -43,6 +43,7 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
/* enable/disable SEV-ES DebugSwap support */
static bool sev_es_debug_swap_enabled = false;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
+static u64 sev_supported_vmsa_features;
static u8 sev_enc_bit;
static DECLARE_RWSEM(sev_deactivate_lock);
@@ -600,8 +601,8 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->xss = svm->vcpu.arch.ia32_xss;
save->dr6 = svm->vcpu.arch.dr6;
- if (sev_es_debug_swap_enabled) {
- save->sev_features |= SVM_SEV_FEAT_DEBUG_SWAP;
+ if (sev_supported_vmsa_features) {
+ save->sev_features = sev_supported_vmsa_features;
pr_warn_once("Enabling DebugSwap with KVM_SEV_ES_INIT. "
"This will not work starting with Linux 6.10\n");
}
@@ -1840,6 +1841,18 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
return ret;
}
+int sev_dev_get_attr(u64 attr, u64 *val)
+{
+ switch (attr) {
+ case KVM_X86_SEV_VMSA_FEATURES:
+ *val = sev_supported_vmsa_features;
+ return 0;
+
+ default:
+ return -ENXIO;
+ }
+}
+
int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
{
struct kvm_sev_cmd sev_cmd;
@@ -2272,6 +2285,10 @@ void __init sev_hardware_setup(void)
if (!sev_es_enabled || !cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP) ||
!cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
sev_es_debug_swap_enabled = false;
+
+ sev_supported_vmsa_features = 0;
+ if (sev_es_debug_swap_enabled)
+ sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
}
void sev_hardware_unsetup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e7f47a1f3eb1..450535d6757f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5026,6 +5026,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
#endif
#ifdef CONFIG_KVM_AMD_SEV
+ .dev_get_attr = sev_dev_get_attr,
.mem_enc_ioctl = sev_mem_enc_ioctl,
.mem_enc_register_region = sev_mem_enc_register_region,
.mem_enc_unregister_region = sev_mem_enc_unregister_region,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d20e48c31210..864fac367424 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -695,6 +695,7 @@ void __init sev_set_cpu_caps(void);
void __init sev_hardware_setup(void);
void sev_hardware_unsetup(void);
int sev_cpu_init(struct svm_cpu_data *sd);
+int sev_dev_get_attr(u64 attr, u64 *val);
extern unsigned int max_sev_asid;
#else
static inline struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu) {
@@ -707,6 +708,7 @@ static inline void __init sev_set_cpu_caps(void) {}
static inline void __init sev_hardware_setup(void) {}
static inline void sev_hardware_unsetup(void) {}
static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
+static inline int sev_dev_get_attr(u64 attr, u64 *val) { return -ENXIO; }
#define max_sev_asid 0
#endif
--
2.43.0
* [PATCH v4 06/15] KVM: SEV: store VMSA features in kvm_sev_info
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (4 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 05/15] KVM: SEV: publish supported VMSA features Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 07/15] KVM: x86: add fields to struct kvm_arch for CoCo features Paolo Bonzini
` (12 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Right now, the set of features that are stored in the VMSA upon
initialization is fixed and depends on the module parameters for
kvm-amd.ko. However, the hypervisor cannot really change it at will
because the feature word has to match between the hypervisor and whatever
computes a measurement of the VMSA for attestation purposes.
Add a field to kvm_sev_info that holds the set of features to be stored
in the VMSA; and query it instead of referring to the module parameters.
Because KVM_SEV_INIT and KVM_SEV_ES_INIT accept no parameters, this
does not yet introduce any functional change, but it paves the way for
an API that allows customization of the features per-VM.
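The per-VM lookup that replaces the module-wide flag is one bit test; a
sketch with illustrative structure names (and an assumed bit position for
DebugSwap) shows how two VMs on the same host can now differ:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of consulting the per-VM feature word instead of the
 * module-wide sev_es_debug_swap_enabled flag. Names are illustrative;
 * the DebugSwap bit position is an assumption.
 */
#define FEAT_DEBUG_SWAP (1ULL << 5)

struct sev_info_sketch { uint64_t vmsa_features; };

static bool vm_has_debug_swap(const struct sev_info_sketch *sev)
{
	return sev->vmsa_features & FEAT_DEBUG_SWAP;
}
```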
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20240209183743.22030-6-pbonzini@redhat.com>
Reviewed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/sev.c | 29 +++++++++++++++++++++--------
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/svm.h | 3 ++-
3 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 22c35a39c25f..a8300646a280 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -96,6 +96,14 @@ static inline bool is_mirroring_enc_context(struct kvm *kvm)
return !!to_kvm_svm(kvm)->sev_info.enc_context_owner;
}
+static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
+{
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+
+ return sev->vmsa_features & SVM_SEV_FEAT_DEBUG_SWAP;
+}
+
/* Must be called with the sev_bitmap_lock held */
static bool __sev_recycle_asids(unsigned int min_asid, unsigned int max_asid)
{
@@ -245,6 +253,11 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev->active = true;
sev->es_active = argp->id == KVM_SEV_ES_INIT;
+ sev->vmsa_features = sev_supported_vmsa_features;
+ if (sev_supported_vmsa_features)
+ pr_warn_once("Enabling DebugSwap with KVM_SEV_ES_INIT. "
+ "This will not work starting with Linux 6.10\n");
+
ret = sev_asid_new(sev);
if (ret)
goto e_no_asid;
@@ -266,6 +279,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev_asid_free(sev);
sev->asid = 0;
e_no_asid:
+ sev->vmsa_features = 0;
sev->es_active = false;
sev->active = false;
return ret;
@@ -560,6 +574,8 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int sev_es_sync_vmsa(struct vcpu_svm *svm)
{
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct sev_es_save_area *save = svm->sev_es.vmsa;
/* Check some debug related fields before encrypting the VMSA */
@@ -601,11 +617,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->xss = svm->vcpu.arch.ia32_xss;
save->dr6 = svm->vcpu.arch.dr6;
- if (sev_supported_vmsa_features) {
- save->sev_features = sev_supported_vmsa_features;
- pr_warn_once("Enabling DebugSwap with KVM_SEV_ES_INIT. "
- "This will not work starting with Linux 6.10\n");
- }
+ save->sev_features = sev->vmsa_features;
pr_debug("Virtual Machine Save Area (VMSA):\n");
print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
@@ -1685,6 +1697,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
dst->pages_locked = src->pages_locked;
dst->enc_context_owner = src->enc_context_owner;
dst->es_active = src->es_active;
+ dst->vmsa_features = src->vmsa_features;
src->asid = 0;
src->active = false;
@@ -3057,7 +3070,7 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
svm_set_intercept(svm, TRAP_CR8_WRITE);
vmcb->control.intercepts[INTERCEPT_DR] = 0;
- if (!sev_es_debug_swap_enabled) {
+ if (!sev_vcpu_has_debug_swap(svm)) {
vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
recalc_intercepts(svm);
@@ -3112,7 +3125,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
sev_enc_bit));
}
-void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa)
+void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
{
/*
* All host state for SEV-ES guests is categorized into three swap types
@@ -3140,7 +3153,7 @@ void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa)
* the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both
* saves and loads debug registers (Type-A).
*/
- if (sev_es_debug_swap_enabled) {
+ if (sev_vcpu_has_debug_swap(svm)) {
hostsa->dr0 = native_get_debugreg(0);
hostsa->dr1 = native_get_debugreg(1);
hostsa->dr2 = native_get_debugreg(2);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 450535d6757f..c22e87ebf0de 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1523,7 +1523,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
struct sev_es_save_area *hostsa;
hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400);
- sev_es_prepare_switch_to_guest(hostsa);
+ sev_es_prepare_switch_to_guest(svm, hostsa);
}
if (tsc_scaling)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 864fac367424..b7707514d042 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -85,6 +85,7 @@ struct kvm_sev_info {
unsigned long pages_locked; /* Number of pages locked */
struct list_head regions_list; /* List of registered regions */
u64 ap_jump_table; /* SEV-ES AP Jump Table address */
+ u64 vmsa_features;
struct kvm *enc_context_owner; /* Owner of copied encryption context */
struct list_head mirror_vms; /* List of VMs mirroring */
struct list_head mirror_entry; /* Use as a list entry of mirrors */
@@ -684,7 +685,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
-void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
+void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa);
void sev_es_unmap_ghcb(struct vcpu_svm *svm);
/* These symbols are used in common code and are stubbed below. */
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v4 07/15] KVM: x86: add fields to struct kvm_arch for CoCo features
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (5 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 06/15] KVM: SEV: store VMSA features in kvm_sev_info Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 08/15] KVM: x86: Add supported_vm_types to kvm_caps Paolo Bonzini
` (11 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Some VM types have characteristics in common; in fact, the only use
of VM types right now is kvm_arch_has_private_mem and it assumes that
_all_ nonzero VM types have private memory.
We will soon introduce a VM type for SEV and SEV-ES VMs, and at that
point we will have two special characteristics of confidential VMs
that depend on the VM type: not just if memory is private, but
also whether guest state is protected. For the latter we have
kvm->arch.guest_state_protected, which is only set on a fully initialized
VM.
For VM types with protected guest state, this also fixes a problem in
the SEV-ES implementation, where ioctls that read or write registers
do not return an error even after the VM has been initialized and the
guest state encrypted. Make sure that, when the new VM types are used,
such ioctls fail with -EINVAL instead.
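The intended behavior can be modeled with a tiny stand-alone C sketch
(the stub struct and function names below are illustrative only, and -22
stands in for -EINVAL; none of these stubs exist in KVM):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the relevant kvm_arch / kvm_vcpu_arch fields. */
struct kvm_arch_stub {
	bool has_protected_state;   /* set per VM type at creation */
};

struct vcpu_stub {
	struct kvm_arch_stub *arch;
	bool guest_state_protected; /* set once the VMSA is encrypted */
};

/*
 * Mirrors the guard added to the register ioctls: only VM types that
 * opted in (has_protected_state) reject accesses, and only once the
 * guest state has actually been encrypted.
 */
static int get_regs_stub(struct vcpu_stub *v)
{
	if (v->arch->has_protected_state && v->guest_state_protected)
		return -22; /* -EINVAL */
	return 0;
}
```

Legacy KVM_SEV_ES_INIT VMs keep has_protected_state clear, so their
(broken but long-standing) behavior is preserved for compatibility.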
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20240209183743.22030-7-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 7 ++-
arch/x86/kvm/x86.c | 93 ++++++++++++++++++++++++++-------
2 files changed, 79 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f6cc7bfb5462..7380877bc9b5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1279,12 +1279,14 @@ enum kvm_apicv_inhibit {
};
struct kvm_arch {
- unsigned long vm_type;
unsigned long n_used_mmu_pages;
unsigned long n_requested_mmu_pages;
unsigned long n_max_mmu_pages;
unsigned int indirect_shadow_pages;
u8 mmu_valid_gen;
+ u8 vm_type;
+ bool has_private_mem;
+ bool has_protected_state;
struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
struct list_head active_mmu_pages;
struct list_head zapped_obsolete_pages;
@@ -2153,8 +2155,9 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd);
void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
int tdp_max_root_level, int tdp_huge_page_level);
+
#ifdef CONFIG_KVM_PRIVATE_MEM
-#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.vm_type != KVM_X86_DEFAULT_VM)
+#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
#else
#define kvm_arch_has_private_mem(kvm) false
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e8253aa8ef5e..98b7979b4698 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5560,11 +5560,15 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
return 0;
}
-static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
- struct kvm_debugregs *dbgregs)
+static int kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
+ struct kvm_debugregs *dbgregs)
{
unsigned int i;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
memset(dbgregs, 0, sizeof(*dbgregs));
BUILD_BUG_ON(ARRAY_SIZE(vcpu->arch.db) != ARRAY_SIZE(dbgregs->db));
@@ -5573,6 +5577,7 @@ static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
dbgregs->dr6 = vcpu->arch.dr6;
dbgregs->dr7 = vcpu->arch.dr7;
+ return 0;
}
static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
@@ -5580,6 +5585,10 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
{
unsigned int i;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (dbgregs->flags)
return -EINVAL;
@@ -5600,8 +5609,8 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
}
-static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
- u8 *state, unsigned int size)
+static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
+ u8 *state, unsigned int size)
{
/*
* Only copy state for features that are enabled for the guest. The
@@ -5619,24 +5628,25 @@ static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
XFEATURE_MASK_FPSSE;
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return;
+ return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
supported_xcr0, vcpu->arch.pkru);
+ return 0;
}
-static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
- struct kvm_xsave *guest_xsave)
+static int kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+ struct kvm_xsave *guest_xsave)
{
- kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
- sizeof(guest_xsave->region));
+ return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
+ sizeof(guest_xsave->region));
}
static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
struct kvm_xsave *guest_xsave)
{
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return 0;
+ return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
return fpu_copy_uabi_to_guest_fpstate(&vcpu->arch.guest_fpu,
guest_xsave->region,
@@ -5644,18 +5654,23 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
&vcpu->arch.pkru);
}
-static void kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu,
- struct kvm_xcrs *guest_xcrs)
+static int kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu,
+ struct kvm_xcrs *guest_xcrs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (!boot_cpu_has(X86_FEATURE_XSAVE)) {
guest_xcrs->nr_xcrs = 0;
- return;
+ return 0;
}
guest_xcrs->nr_xcrs = 1;
guest_xcrs->flags = 0;
guest_xcrs->xcrs[0].xcr = XCR_XFEATURE_ENABLED_MASK;
guest_xcrs->xcrs[0].value = vcpu->arch.xcr0;
+ return 0;
}
static int kvm_vcpu_ioctl_x86_set_xcrs(struct kvm_vcpu *vcpu,
@@ -5663,6 +5678,10 @@ static int kvm_vcpu_ioctl_x86_set_xcrs(struct kvm_vcpu *vcpu,
{
int i, r = 0;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (!boot_cpu_has(X86_FEATURE_XSAVE))
return -EINVAL;
@@ -6045,7 +6064,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
case KVM_GET_DEBUGREGS: {
struct kvm_debugregs dbgregs;
- kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs);
+ r = kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, &dbgregs,
@@ -6075,7 +6096,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xsave)
break;
- kvm_vcpu_ioctl_x86_get_xsave(vcpu, u.xsave);
+ r = kvm_vcpu_ioctl_x86_get_xsave(vcpu, u.xsave);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xsave, sizeof(struct kvm_xsave)))
@@ -6104,7 +6127,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xsave)
break;
- kvm_vcpu_ioctl_x86_get_xsave2(vcpu, u.buffer, size);
+ r = kvm_vcpu_ioctl_x86_get_xsave2(vcpu, u.buffer, size);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xsave, size))
@@ -6120,7 +6145,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xcrs)
break;
- kvm_vcpu_ioctl_x86_get_xcrs(vcpu, u.xcrs);
+ r = kvm_vcpu_ioctl_x86_get_xcrs(vcpu, u.xcrs);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xcrs,
@@ -6264,6 +6291,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
}
#endif
case KVM_GET_SREGS2: {
+ r = -EINVAL;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ goto out;
+
u.sregs2 = kzalloc(sizeof(struct kvm_sregs2), GFP_KERNEL);
r = -ENOMEM;
if (!u.sregs2)
@@ -6276,6 +6308,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
break;
}
case KVM_SET_SREGS2: {
+ r = -EINVAL;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ goto out;
+
u.sregs2 = memdup_user(argp, sizeof(struct kvm_sregs2));
if (IS_ERR(u.sregs2)) {
r = PTR_ERR(u.sregs2);
@@ -11483,6 +11520,10 @@ static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__get_regs(vcpu, regs);
vcpu_put(vcpu);
@@ -11524,6 +11565,10 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__set_regs(vcpu, regs);
vcpu_put(vcpu);
@@ -11596,6 +11641,10 @@ static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2)
int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
struct kvm_sregs *sregs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__get_sregs(vcpu, sregs);
vcpu_put(vcpu);
@@ -11863,6 +11912,10 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
{
int ret;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
ret = __set_sregs(vcpu, sregs);
vcpu_put(vcpu);
@@ -11980,7 +12033,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
struct fxregs_state *fxsave;
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return 0;
+ return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
vcpu_load(vcpu);
@@ -12003,7 +12056,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
struct fxregs_state *fxsave;
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return 0;
+ return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
vcpu_load(vcpu);
@@ -12529,6 +12582,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
return -EINVAL;
kvm->arch.vm_type = type;
+ kvm->arch.has_private_mem =
+ (type == KVM_X86_SW_PROTECTED_VM);
ret = kvm_page_track_init(kvm);
if (ret)
--
2.43.0
* [PATCH v4 08/15] KVM: x86: Add supported_vm_types to kvm_caps
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (6 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 07/15] KVM: x86: add fields to struct kvm_arch for CoCo features Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
` (10 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
This simplifies the implementation of KVM_CHECK_EXTENSION(KVM_CAP_VM_TYPES),
and also allows the vendor module to specify which VM types are supported.
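The resulting check is a plain bitmask test, which can be modeled outside
the kernel as follows (stub names only; the real mask lives in
kvm_caps.supported_vm_types and vendor modules OR in their bits at init):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

#define KVM_X86_DEFAULT_VM      0
#define KVM_X86_SW_PROTECTED_VM 1

/* Stand-in for kvm_caps.supported_vm_types. */
static uint32_t supported_vm_types = BIT(KVM_X86_DEFAULT_VM);

/* Mirrors the simplified kvm_is_vm_type_supported() from this patch;
 * the type < 32 check guards the shift before testing the mask. */
static bool vm_type_supported(unsigned long type)
{
	return type < 32 && (supported_vm_types & BIT(type));
}
```

KVM_CHECK_EXTENSION(KVM_CAP_VM_TYPES) then simply returns the mask.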
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/x86.c | 12 ++++++------
arch/x86/kvm/x86.h | 2 ++
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 98b7979b4698..8c56bcf3feb7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -94,6 +94,7 @@
struct kvm_caps kvm_caps __read_mostly = {
.supported_mce_cap = MCG_CTL_P | MCG_SER_P,
+ .supported_vm_types = BIT(KVM_X86_DEFAULT_VM),
};
EXPORT_SYMBOL_GPL(kvm_caps);
@@ -4629,9 +4630,7 @@ static int kvm_ioctl_get_supported_hv_cpuid(struct kvm_vcpu *vcpu,
static bool kvm_is_vm_type_supported(unsigned long type)
{
- return type == KVM_X86_DEFAULT_VM ||
- (type == KVM_X86_SW_PROTECTED_VM &&
- IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) && tdp_mmu_enabled);
+ return type < 32 && (kvm_caps.supported_vm_types & BIT(type));
}
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
@@ -4832,9 +4831,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = kvm_caps.has_notify_vmexit;
break;
case KVM_CAP_VM_TYPES:
- r = BIT(KVM_X86_DEFAULT_VM);
- if (kvm_is_vm_type_supported(KVM_X86_SW_PROTECTED_VM))
- r |= BIT(KVM_X86_SW_PROTECTED_VM);
+ r = kvm_caps.supported_vm_types;
break;
default:
break;
@@ -9829,6 +9826,9 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
kvm_register_perf_callbacks(ops->handle_intel_pt_intr);
+ if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) && tdp_mmu_enabled)
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SW_PROTECTED_VM);
+
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index a8b71803777b..d80a4c6b5a38 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -24,6 +24,8 @@ struct kvm_caps {
bool has_bus_lock_exit;
/* notify VM exit supported? */
bool has_notify_vmexit;
+ /* bit mask of VM types */
+ u32 supported_vm_types;
u64 supported_mce_cap;
u64 supported_xcr0;
--
2.43.0
* [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (7 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 08/15] KVM: x86: Add supported_vm_types to kvm_caps Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-19 13:42 ` Michael Roth
` (2 more replies)
2024-03-18 23:33 ` [PATCH v4 10/15] KVM: SEV: introduce to_kvm_sev_info Paolo Bonzini
` (9 subsequent siblings)
18 siblings, 3 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc, Dave Hansen
SEV-ES allows passing custom contents for x87, SSE and AVX state into the VMSA.
Allow userspace to do that with the usual KVM_SET_XSAVE API and only mark
FPU contents as confidential after it has been copied and encrypted into
the VMSA.
Since the XSAVE state for AVX is the first, it does not need the
compacted-state handling of get_xsave_addr(). However, there are other
parts of XSAVE state in the VMSA that currently are not handled, and
the validation logic of get_xsave_addr() is pointless to duplicate
in KVM, so move get_xsave_addr() to public FPU API; it is really just
a facility to operate on XSAVE state and does not expose any internal
details of arch/x86/kernel/fpu.
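The x87 register copy performed later in this patch's sev_es_sync_vmsa()
hunk boils down to repacking: each of the eight registers occupies a
16-byte slot in the XSAVE st_space, while the VMSA stores them as eight
consecutive 10-byte values. A stand-alone sketch of just that loop
(stub function, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Repack eight 16-byte XSAVE x87 slots (10 bytes of register data plus
 * padding each) into the VMSA's packed 8 x 10-byte layout.
 */
static void pack_x87(uint8_t dst[80], const uint8_t st_space[128])
{
	int i;

	for (i = 0; i < 8; i++)
		memcpy(dst + i * 10, st_space + i * 16, 10);
}
```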
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/fpu/api.h | 3 +++
arch/x86/kernel/fpu/xstate.h | 2 --
arch/x86/kvm/svm/sev.c | 36 ++++++++++++++++++++++++++++++++++
arch/x86/kvm/svm/svm.c | 8 --------
4 files changed, 39 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index a2be3aefff9f..f86ad3335529 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -143,6 +143,9 @@ extern void fpstate_clear_xstate_component(struct fpstate *fps, unsigned int xfe
extern u64 xstate_get_guest_group_perm(void);
+extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
+
+
/* KVM specific functions */
extern bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu);
extern void fpu_free_guest_fpstate(struct fpu_guest *gfpu);
diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
index 3518fb26d06b..4ff910545451 100644
--- a/arch/x86/kernel/fpu/xstate.h
+++ b/arch/x86/kernel/fpu/xstate.h
@@ -54,8 +54,6 @@ extern int copy_sigframe_from_user_to_xstate(struct task_struct *tsk, const void
extern void fpu__init_cpu_xstate(void);
extern void fpu__init_system_xstate(unsigned int legacy_size);
-extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
-
static inline u64 xfeatures_mask_supervisor(void)
{
return fpu_kernel_cfg.max_features & XFEATURE_MASK_SUPERVISOR_SUPPORTED;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a8300646a280..800e836a69fb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -23,6 +23,7 @@
#include <asm/pkru.h>
#include <asm/trapnr.h>
#include <asm/fpu/xcr.h>
+#include <asm/fpu/xstate.h>
#include <asm/debugreg.h>
#include "mmu.h"
@@ -577,6 +578,10 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
struct kvm_vcpu *vcpu = &svm->vcpu;
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct sev_es_save_area *save = svm->sev_es.vmsa;
+ struct xregs_state *xsave;
+ const u8 *s;
+ u8 *d;
+ int i;
/* Check some debug related fields before encrypting the VMSA */
if (svm->vcpu.guest_debug || (svm->vmcb->save.dr7 & ~DR7_FIXED_1))
@@ -619,6 +624,30 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->sev_features = sev->vmsa_features;
+ xsave = &vcpu->arch.guest_fpu.fpstate->regs.xsave;
+ save->x87_dp = xsave->i387.rdp;
+ save->mxcsr = xsave->i387.mxcsr;
+ save->x87_ftw = xsave->i387.twd;
+ save->x87_fsw = xsave->i387.swd;
+ save->x87_fcw = xsave->i387.cwd;
+ save->x87_fop = xsave->i387.fop;
+ save->x87_ds = 0;
+ save->x87_cs = 0;
+ save->x87_rip = xsave->i387.rip;
+
+ for (i = 0; i < 8; i++) {
+ d = save->fpreg_x87 + i * 10;
+ s = ((u8 *)xsave->i387.st_space) + i * 16;
+ memcpy(d, s, 10);
+ }
+ memcpy(save->fpreg_xmm, xsave->i387.xmm_space, 256);
+
+ s = get_xsave_addr(xsave, XFEATURE_YMM);
+ if (s)
+ memcpy(save->fpreg_ymm, s, 256);
+ else
+ memset(save->fpreg_ymm, 0, 256);
+
pr_debug("Virtual Machine Save Area (VMSA):\n");
print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
@@ -657,6 +686,13 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
if (ret)
return ret;
+ /*
+ * SEV-ES guests maintain an encrypted version of their FPU
+ * state which is restored and saved on VMRUN and VMEXIT.
+ * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
+ * do xsave/xrstor on it.
+ */
+ fpstate_set_confidential(&vcpu->arch.guest_fpu);
vcpu->arch.guest_state_protected = true;
return 0;
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c22e87ebf0de..03108055a7b0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1433,14 +1433,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
vmsa_page = snp_safe_alloc_page(vcpu);
if (!vmsa_page)
goto error_free_vmcb_page;
-
- /*
- * SEV-ES guests maintain an encrypted version of their FPU
- * state which is restored and saved on VMRUN and VMEXIT.
- * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
- * do xsave/xrstor on it.
- */
- fpstate_set_confidential(&vcpu->arch.guest_fpu);
}
err = avic_init_vcpu(svm);
--
2.43.0
* [PATCH v4 10/15] KVM: SEV: introduce to_kvm_sev_info
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (8 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 11/15] KVM: SEV: define VM types for SEV and SEV-ES Paolo Bonzini
` (8 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/sev.c | 4 ++--
arch/x86/kvm/svm/svm.h | 5 +++++
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 800e836a69fb..704cd42b4f1b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -94,7 +94,7 @@ static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
static inline bool is_mirroring_enc_context(struct kvm *kvm)
{
- return !!to_kvm_svm(kvm)->sev_info.enc_context_owner;
+ return !!to_kvm_sev_info(kvm)->enc_context_owner;
}
static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
@@ -679,7 +679,7 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE);
vmsa.reserved = 0;
- vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+ vmsa.handle = to_kvm_sev_info(kvm)->handle;
vmsa.address = __sme_pa(svm->sev_es.vmsa);
vmsa.len = PAGE_SIZE;
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b7707514d042..6313679d464b 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -319,6 +319,11 @@ static __always_inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
return container_of(kvm, struct kvm_svm, kvm);
}
+static __always_inline struct kvm_sev_info *to_kvm_sev_info(struct kvm *kvm)
+{
+ return &to_kvm_svm(kvm)->sev_info;
+}
+
static __always_inline bool sev_guest(struct kvm *kvm)
{
#ifdef CONFIG_KVM_AMD_SEV
--
2.43.0
* [PATCH v4 11/15] KVM: SEV: define VM types for SEV and SEV-ES
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (9 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 10/15] KVM: SEV: introduce to_kvm_sev_info Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 12/15] KVM: SEV: introduce KVM_SEV_INIT2 operation Paolo Bonzini
` (7 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Documentation/virt/kvm/api.rst | 2 ++
arch/x86/include/uapi/asm/kvm.h | 2 ++
arch/x86/kvm/svm/sev.c | 16 +++++++++++++---
arch/x86/kvm/svm/svm.c | 11 +++++++++++
arch/x86/kvm/svm/svm.h | 1 +
5 files changed, 29 insertions(+), 3 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 0b5a33ee71ee..f0b76ff5030d 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8819,6 +8819,8 @@ means the VM type with value @n is supported. Possible values of @n are::
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
+ #define KVM_X86_SEV_VM 2
+ #define KVM_X86_SEV_ES_VM 3
Note, KVM_X86_SW_PROTECTED_VM is currently only for development and testing.
Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d0c1b459f7e9..9d950b0b64c9 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -857,5 +857,7 @@ struct kvm_hyperv_eventfd {
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
+#define KVM_X86_SEV_VM 2
+#define KVM_X86_SEV_ES_VM 3
#endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 704cd42b4f1b..7c2b0471b92c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -249,6 +249,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (kvm->created_vcpus)
return -EINVAL;
+ if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ return -EINVAL;
+
if (unlikely(sev->active))
return -EINVAL;
@@ -270,6 +273,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
INIT_LIST_HEAD(&sev->regions_list);
INIT_LIST_HEAD(&sev->mirror_vms);
+ sev->need_init = false;
kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_SEV);
@@ -1841,7 +1845,8 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
if (ret)
goto out_fput;
- if (sev_guest(kvm) || !sev_guest(source_kvm)) {
+ if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
+ sev_guest(kvm) || !sev_guest(source_kvm)) {
ret = -EINVAL;
goto out_unlock;
}
@@ -2162,6 +2167,7 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
mirror_sev->asid = source_sev->asid;
mirror_sev->fd = source_sev->fd;
mirror_sev->es_active = source_sev->es_active;
+ mirror_sev->need_init = false;
mirror_sev->handle = source_sev->handle;
INIT_LIST_HEAD(&mirror_sev->regions_list);
INIT_LIST_HEAD(&mirror_sev->mirror_vms);
@@ -2227,10 +2233,14 @@ void sev_vm_destroy(struct kvm *kvm)
void __init sev_set_cpu_caps(void)
{
- if (sev_enabled)
+ if (sev_enabled) {
kvm_cpu_cap_set(X86_FEATURE_SEV);
- if (sev_es_enabled)
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_VM);
+ }
+ if (sev_es_enabled) {
kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_ES_VM);
+ }
}
void __init sev_hardware_setup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 03108055a7b0..0f3b59da0d4a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4078,6 +4078,9 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
{
+ if (to_kvm_sev_info(vcpu->kvm)->need_init)
+ return -EINVAL;
+
return 1;
}
@@ -4883,6 +4886,14 @@ static void svm_vm_destroy(struct kvm *kvm)
static int svm_vm_init(struct kvm *kvm)
{
+ int type = kvm->arch.vm_type;
+
+ if (type != KVM_X86_DEFAULT_VM &&
+ type != KVM_X86_SW_PROTECTED_VM) {
+ kvm->arch.has_protected_state = (type == KVM_X86_SEV_ES_VM);
+ to_kvm_sev_info(kvm)->need_init = true;
+ }
+
if (!pause_filter_count || !pause_filter_thresh)
kvm->arch.pause_in_guest = true;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6313679d464b..c08eb6095e80 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -79,6 +79,7 @@ enum {
struct kvm_sev_info {
bool active; /* SEV enabled guest */
bool es_active; /* SEV-ES enabled guest */
+ bool need_init; /* waiting for SEV_INIT2 */
unsigned int asid; /* ASID used for this guest */
unsigned int handle; /* SEV firmware handle */
int fd; /* SEV device fd */
--
2.43.0
* [PATCH v4 12/15] KVM: SEV: introduce KVM_SEV_INIT2 operation
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (10 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 11/15] KVM: SEV: define VM types for SEV and SEV-ES Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 13/15] KVM: SEV: allow SEV-ES DebugSwap again Paolo Bonzini
` (6 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
The idea that no parameter would ever be necessary when enabling SEV or
SEV-ES for a VM was decidedly optimistic. In fact, in some sense it's
already a parameter whether SEV or SEV-ES is desired. Another possible
source of variability is the desired set of VMSA features, as that affects
the measurement of the VM's initial state and cannot be changed
arbitrarily by the hypervisor.
Create a new sub-operation for KVM_MEMORY_ENCRYPT_OP that can take a struct,
and put the new op to work by including the VMSA features as a field of the
struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT use the full set of
supported VMSA features for backwards compatibility.
The struct also includes the usual bells and whistles for future
extensibility: a flags field that must be zero for now, and some padding
at the end.
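The argument validation can be modeled as a small stand-alone C function
(stub code; only the struct layout and the bit-5 DebugSwap feature value
come from this patch's documentation, -22 stands in for -EINVAL):

```c
#include <assert.h>
#include <stdint.h>

#define SVM_SEV_FEAT_DEBUG_SWAP (1ull << 5)

struct kvm_sev_init {
	uint64_t vmsa_features;
	uint32_t flags;
	uint32_t pad[9];
};

/*
 * Mirrors the checks in __sev_guest_init(): flags must be zero for now,
 * and vmsa_features must be a subset of the supported set -- which is
 * the empty set for plain SEV, since those guests have no VMSA.
 */
static int check_init2(const struct kvm_sev_init *d, int es_active,
		       uint64_t supported_vmsa_features)
{
	uint64_t valid = es_active ? supported_vmsa_features : 0;

	if (d->flags)
		return -22; /* -EINVAL */
	if (d->vmsa_features & ~valid)
		return -22;
	return 0;
}
```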
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
.../virt/kvm/x86/amd-memory-encryption.rst | 40 ++++++++++++--
arch/x86/include/uapi/asm/kvm.h | 9 ++++
arch/x86/kvm/svm/sev.c | 53 ++++++++++++++++---
3 files changed, 92 insertions(+), 10 deletions(-)
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index fb41470c0310..f7c007d34114 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -76,15 +76,49 @@ are defined in ``<linux/psp-dev.h>``.
KVM implements the following commands to support common lifecycle events of SEV
guests, such as launching, running, snapshotting, migrating and decommissioning.
-1. KVM_SEV_INIT
----------------
+1. KVM_SEV_INIT2
+----------------
-The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
+The KVM_SEV_INIT2 command is used by the hypervisor to initialize the SEV platform
context. In a typical workflow, this command should be the first command issued.
+For this command to be accepted, either KVM_X86_SEV_VM or KVM_X86_SEV_ES_VM
+must have been passed to the KVM_CREATE_VM ioctl. A virtual machine created
+with those machine types in turn cannot be run until KVM_SEV_INIT2 is invoked.
+
+Parameters: struct kvm_sev_init (in)
Returns: 0 on success, -negative on error
+::
+
+ struct kvm_sev_init {
+ __u64 vmsa_features; /* initial value of features field in VMSA */
+ __u32 flags; /* must be 0 */
+ __u32 pad[9];
+ };
+
+It is an error if ``flags`` or ``vmsa_features`` contains any bit that
+the hypervisor does not support. ``vmsa_features`` must be
+0 for SEV virtual machines, as they do not have a VMSA.
+
+This command replaces the deprecated KVM_SEV_INIT and KVM_SEV_ES_INIT commands.
+Those commands do not have any parameters (the ``data`` field is unused) and
+only work for the KVM_X86_DEFAULT_VM machine type (0).
+
+They behave as if:
+
+* the VM type is KVM_X86_SEV_VM for KVM_SEV_INIT, or KVM_X86_SEV_ES_VM for
+ KVM_SEV_ES_INIT
+
+* the ``flags`` and ``vmsa_features`` fields of ``struct kvm_sev_init`` are
+ set to zero
+
+If the ``KVM_X86_SEV_VMSA_FEATURES`` attribute does not exist, the hypervisor only
+supports KVM_SEV_INIT and KVM_SEV_ES_INIT. In that case, note that KVM_SEV_ES_INIT
+might set the debug swap VMSA feature (bit 5) depending on the value of the
+``debug_swap`` parameter of ``kvm-amd.ko``.
+
2. KVM_SEV_LAUNCH_START
-----------------------
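As a quick sanity check on the layout documented above, the nine padding
words keep ``struct kvm_sev_init`` at a stable 48 bytes, leaving room for
future fields without changing the ioctl ABI. A stand-alone sketch (this
mirrors the uapi layout described above; the real definition lives in the
kernel headers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the uapi struct described above, using fixed-width stdint
 * types; not the actual <asm/kvm.h> definition.
 */
struct kvm_sev_init {
	uint64_t vmsa_features;	/* initial value of features field in VMSA */
	uint32_t flags;		/* must be 0 */
	uint32_t pad[9];	/* reserved for future extensions */
};

size_t kvm_sev_init_size(void)
{
	return sizeof(struct kvm_sev_init);
}
```

Because the largest member is 8 bytes wide and 48 is a multiple of 8, the
compiler inserts no hidden padding, so the size is the same on all x86
ABIs.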
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 9d950b0b64c9..51b13080ed4b 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -690,6 +690,9 @@ enum sev_cmd_id {
/* Guest Migration Extension */
KVM_SEV_SEND_CANCEL,
+ /* Second time is the charm; improved versions of the above ioctls. */
+ KVM_SEV_INIT2,
+
KVM_SEV_NR_MAX,
};
@@ -701,6 +704,12 @@ struct kvm_sev_cmd {
__u32 sev_fd;
};
+struct kvm_sev_init {
+ __u64 vmsa_features;
+ __u32 flags;
+ __u32 pad[9];
+};
+
struct kvm_sev_launch_start {
__u32 handle;
__u32 policy;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7c2b0471b92c..dc22b31faebd 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -240,27 +240,31 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
sev_decommission(handle);
}
-static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
+ struct kvm_sev_init *data,
+ unsigned long vm_type)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
struct sev_platform_init_args init_args = {0};
+ bool es_active = vm_type != KVM_X86_SEV_VM;
+ u64 valid_vmsa_features = es_active ? sev_supported_vmsa_features : 0;
int ret;
if (kvm->created_vcpus)
return -EINVAL;
- if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ if (data->flags)
+ return -EINVAL;
+
+ if (data->vmsa_features & ~valid_vmsa_features)
return -EINVAL;
if (unlikely(sev->active))
return -EINVAL;
sev->active = true;
- sev->es_active = argp->id == KVM_SEV_ES_INIT;
- sev->vmsa_features = sev_supported_vmsa_features;
- if (sev_supported_vmsa_features)
- pr_warn_once("Enabling DebugSwap with KVM_SEV_ES_INIT. "
- "This will not work starting with Linux 6.10\n");
+ sev->es_active = es_active;
+ sev->vmsa_features = data->vmsa_features;
ret = sev_asid_new(sev);
if (ret)
@@ -290,6 +294,38 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
}
+static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+ struct kvm_sev_init data = {
+ .vmsa_features = 0,
+ };
+ unsigned long vm_type;
+
+ if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ return -EINVAL;
+
+ vm_type = (argp->id == KVM_SEV_INIT ? KVM_X86_SEV_VM : KVM_X86_SEV_ES_VM);
+ return __sev_guest_init(kvm, argp, &data, vm_type);
+}
+
+static int sev_guest_init2(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ struct kvm_sev_init data;
+
+ if (!sev->need_init)
+ return -EINVAL;
+
+ if (kvm->arch.vm_type != KVM_X86_SEV_VM &&
+ kvm->arch.vm_type != KVM_X86_SEV_ES_VM)
+ return -EINVAL;
+
+ if (copy_from_user(&data, u64_to_user_ptr(argp->data), sizeof(data)))
+ return -EFAULT;
+
+ return __sev_guest_init(kvm, argp, &data, kvm->arch.vm_type);
+}
+
static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
{
unsigned int asid = sev_get_asid(kvm);
@@ -1940,6 +1976,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
case KVM_SEV_INIT:
r = sev_guest_init(kvm, &sev_cmd);
break;
+ case KVM_SEV_INIT2:
+ r = sev_guest_init2(kvm, &sev_cmd);
+ break;
case KVM_SEV_LAUNCH_START:
r = sev_launch_start(kvm, &sev_cmd);
break;
--
2.43.0
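The argument checks performed by __sev_guest_init() above can be
summarized as a small pure function: ``flags`` must be zero, and
``vmsa_features`` may only contain host-supported bits, which for plain
SEV guests (no VMSA at all) means it must be zero. A sketch of that
logic follows; the VM type constants are hypothetical stand-ins for the
real uapi values:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the KVM_X86_SEV_VM/KVM_X86_SEV_ES_VM values. */
#define X86_SEV_VM	2UL
#define X86_SEV_ES_VM	3UL

/*
 * Mirror of the validation order in __sev_guest_init(): reject nonzero
 * flags, then reject any vmsa_features bit outside the valid set.
 */
int sev_init2_check(unsigned long vm_type, uint64_t vmsa_features,
		    uint32_t flags, uint64_t host_supported)
{
	bool es_active = vm_type != X86_SEV_VM;
	uint64_t valid = es_active ? host_supported : 0;

	if (flags)
		return -EINVAL;
	if (vmsa_features & ~valid)
		return -EINVAL;
	return 0;
}
```

Note how SEV-ES guests inherit the host's supported mask while plain SEV
guests get a valid mask of zero, matching the ``valid_vmsa_features``
computation in the patch.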
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v4 13/15] KVM: SEV: allow SEV-ES DebugSwap again
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (11 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 12/15] KVM: SEV: introduce KVM_SEV_INIT2 operation Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 14/15] selftests: kvm: add tests for KVM_SEV_INIT2 Paolo Bonzini
` (5 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
The DebugSwap feature of SEV-ES provides a way for confidential guests
to use data breakpoints. Its status is recorded in the VMSA, and therefore
attestation signatures depend on whether it is enabled or not. In order
to avoid invalidating the signatures depending on the host machine, it
was disabled by default (see commit 5abf6dceb066, "SEV: disable SEV-ES
DebugSwap by default", 2024-03-09).
However, we now have a new API to create SEV VMs that allows enabling
DebugSwap based on what the user tells KVM to do, and we also changed the
legacy KVM_SEV_ES_INIT API to never enable DebugSwap. It is therefore
possible to re-enable the feature without breaking compatibility with
kernels that pre-date the introduction of DebugSwap, so go ahead.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/sev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index dc22b31faebd..1a11840facfb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -42,7 +42,7 @@ static bool sev_es_enabled = true;
module_param_named(sev_es, sev_es_enabled, bool, 0444);
/* enable/disable SEV-ES DebugSwap support */
-static bool sev_es_debug_swap_enabled = false;
+static bool sev_es_debug_swap_enabled = true;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
static u64 sev_supported_vmsa_features;
* [PATCH v4 14/15] selftests: kvm: add tests for KVM_SEV_INIT2
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (12 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 13/15] KVM: SEV: allow SEV-ES DebugSwap again Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 15/15] selftests: kvm: switch to using KVM_X86_*_VM Paolo Bonzini
` (4 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/kvm_util_base.h | 6 +-
.../selftests/kvm/set_memory_region_test.c | 8 +-
.../selftests/kvm/x86_64/sev_init2_tests.c | 149 ++++++++++++++++++
4 files changed, 156 insertions(+), 8 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 741c7dc16afc..871e2de3eb05 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -120,6 +120,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_init2_tests
TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
TEST_GEN_PROGS_x86_64 += x86_64/sev_smoke_test
TEST_GEN_PROGS_x86_64 += x86_64/amx_test
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 3e0db283a46a..7c06ceb36643 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -890,17 +890,15 @@ static inline struct kvm_vm *vm_create_barebones(void)
return ____vm_create(VM_SHAPE_DEFAULT);
}
-#ifdef __x86_64__
-static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
+static inline struct kvm_vm *vm_create_barebones_type(unsigned long type)
{
const struct vm_shape shape = {
.mode = VM_MODE_DEFAULT,
- .type = KVM_X86_SW_PROTECTED_VM,
+ .type = type,
};
return ____vm_create(shape);
}
-#endif
static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
{
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 06b43ed23580..904d58793fc6 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -339,7 +339,7 @@ static void test_invalid_memory_region_flags(void)
#ifdef __x86_64__
if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
else
#endif
vm = vm_create_barebones();
@@ -462,7 +462,7 @@ static void test_add_private_memory_region(void)
pr_info("Testing ADD of KVM_MEM_GUEST_MEMFD memory regions\n");
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
test_invalid_guest_memfd(vm, vm->kvm_fd, 0, "KVM fd should fail");
test_invalid_guest_memfd(vm, vm->fd, 0, "VM's fd should fail");
@@ -471,7 +471,7 @@ static void test_add_private_memory_region(void)
test_invalid_guest_memfd(vm, memfd, 0, "Regular memfd() should fail");
close(memfd);
- vm2 = vm_create_barebones_protected_vm();
+ vm2 = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
memfd = vm_create_guest_memfd(vm2, MEM_REGION_SIZE, 0);
test_invalid_guest_memfd(vm, memfd, 0, "Other VM's guest_memfd() should fail");
@@ -499,7 +499,7 @@ static void test_add_overlapping_private_memory_regions(void)
pr_info("Testing ADD of overlapping KVM_MEM_GUEST_MEMFD memory regions\n");
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
memfd = vm_create_guest_memfd(vm, MEM_REGION_SIZE * 4, 0);
diff --git a/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c b/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
new file mode 100644
index 000000000000..fe55aa5a1b04
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+
+#define SVM_SEV_FEAT_DEBUG_SWAP 32u
+
+/*
+ * Some features may have hidden dependencies, or may only work
+ * for certain VM types. Err on the side of safety and don't
+ * expect that all supported features can be passed one by one
+ * to KVM_SEV_INIT2.
+ *
+ * (Well, right now there's only one...)
+ */
+#define KNOWN_FEATURES SVM_SEV_FEAT_DEBUG_SWAP
+
+int kvm_fd;
+u64 supported_vmsa_features;
+bool have_sev_es;
+
+static int __sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+ struct kvm_sev_cmd cmd = {
+ .id = cmd_id,
+ .data = (uint64_t)data,
+ .sev_fd = open_sev_dev_path_or_exit(),
+ };
+ int ret;
+
+ ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+ TEST_ASSERT(ret < 0 || cmd.error == SEV_RET_SUCCESS,
+ "%d failed: fw error: %d\n",
+ cmd_id, cmd.error);
+
+ return ret;
+}
+
+static void test_init2(unsigned long vm_type, struct kvm_sev_init *init)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_barebones_type(vm_type);
+ ret = __sev_ioctl(vm->fd, KVM_SEV_INIT2, init);
+ TEST_ASSERT(ret == 0,
+ "KVM_SEV_INIT2 return code is %d (expected 0), errno: %d",
+ ret, errno);
+ kvm_vm_free(vm);
+}
+
+static void test_init2_invalid(unsigned long vm_type, struct kvm_sev_init *init, const char *msg)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_barebones_type(vm_type);
+ ret = __sev_ioctl(vm->fd, KVM_SEV_INIT2, init);
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "KVM_SEV_INIT2 should fail, %s.",
+ msg);
+ kvm_vm_free(vm);
+}
+
+void test_vm_types(void)
+{
+ test_init2(KVM_X86_SEV_VM, &(struct kvm_sev_init){});
+
+ /*
+ * TODO: check that unsupported types cannot be created. Probably
+ * a separate selftest.
+ */
+ if (have_sev_es)
+ test_init2(KVM_X86_SEV_ES_VM, &(struct kvm_sev_init){});
+
+ test_init2_invalid(0, &(struct kvm_sev_init){},
+ "VM type is KVM_X86_DEFAULT_VM");
+ if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))
+ test_init2_invalid(KVM_X86_SW_PROTECTED_VM, &(struct kvm_sev_init){},
+ "VM type is KVM_X86_SW_PROTECTED_VM");
+}
+
+void test_flags(uint32_t vm_type)
+{
+ int i;
+
+ for (i = 0; i < 32; i++)
+ test_init2_invalid(vm_type,
+ &(struct kvm_sev_init){ .flags = BIT(i) },
+ "invalid flag");
+}
+
+void test_features(uint32_t vm_type, uint64_t supported_features)
+{
+ int i;
+
+ for (i = 0; i < 64; i++) {
+ if (!(supported_features & BIT_ULL(i)))
+ test_init2_invalid(vm_type,
+ &(struct kvm_sev_init){ .vmsa_features = BIT_ULL(i) },
+ "unknown feature");
+ else if (KNOWN_FEATURES & BIT_ULL(i))
+ test_init2(vm_type,
+ &(struct kvm_sev_init){ .vmsa_features = BIT_ULL(i) });
+ }
+}
+
+int main(int argc, char *argv[])
+{
+ int kvm_fd = open_kvm_dev_path_or_exit();
+ bool have_sev;
+
+ TEST_REQUIRE(__kvm_has_device_attr(kvm_fd, 0, KVM_X86_SEV_VMSA_FEATURES) == 0);
+ kvm_device_attr_get(kvm_fd, 0, KVM_X86_SEV_VMSA_FEATURES, &supported_vmsa_features);
+
+ have_sev = kvm_cpu_has(X86_FEATURE_SEV);
+ TEST_ASSERT(have_sev == !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_VM)),
+ "sev: KVM_CAP_VM_TYPES (%x) does not match cpuid (checking %x)",
+ kvm_check_cap(KVM_CAP_VM_TYPES), 1 << KVM_X86_SEV_VM);
+
+ TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_VM));
+ have_sev_es = kvm_cpu_has(X86_FEATURE_SEV_ES);
+
+ TEST_ASSERT(have_sev_es == !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_ES_VM)),
+ "sev-es: KVM_CAP_VM_TYPES (%x) does not match cpuid (checking %x)",
+ kvm_check_cap(KVM_CAP_VM_TYPES), 1 << KVM_X86_SEV_ES_VM);
+
+ test_vm_types();
+
+ test_flags(KVM_X86_SEV_VM);
+ if (have_sev_es)
+ test_flags(KVM_X86_SEV_ES_VM);
+
+ test_features(KVM_X86_SEV_VM, 0);
+ if (have_sev_es)
+ test_features(KVM_X86_SEV_ES_VM, supported_vmsa_features);
+
+ return 0;
+}
--
2.43.0
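A note on the feature loop in test_features() above: it walks all 64 bits
of ``vmsa_features``, so the per-bit masks must be built in 64-bit
arithmetic, e.g. with BIT_ULL(); a 32-bit ``1u << i`` is undefined
behavior once ``i`` reaches 32. A minimal sketch of the distinction:

```c
#include <assert.h>
#include <stdint.h>

/* Kernel-style helper: shift is performed on an unsigned long long. */
#define BIT_ULL(n)	(1ULL << (n))

/* Build a 64-bit feature mask for bit n; safe for n up to 63. */
uint64_t feature_bit(unsigned int n)
{
	return BIT_ULL(n);
}
```

Bit 5 yields 0x20, the DebugSwap feature used throughout this series, and
bits at position 32 and above stay well-defined because the constant is
an ``unsigned long long``.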
* [PATCH v4 15/15] selftests: kvm: switch to using KVM_X86_*_VM
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (13 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 14/15] selftests: kvm: add tests for KVM_SEV_INIT2 Paolo Bonzini
@ 2024-03-18 23:33 ` Paolo Bonzini
2024-03-19 2:20 ` [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Michael Roth
` (3 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-18 23:33 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, isaku.yamahata, seanjc
This removes the concept of "subtypes", instead letting the tests use the
proper VM types that were recently added. While sev_vm_init() and
sev_es_vm_init() can still operate with the legacy KVM_SEV_INIT and
KVM_SEV_ES_INIT ioctls, this is limited to VMs that are created manually
with vm_create_barebones().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
.../selftests/kvm/include/kvm_util_base.h | 5 ++--
.../selftests/kvm/include/x86_64/processor.h | 6 ----
.../selftests/kvm/include/x86_64/sev.h | 16 ++--------
tools/testing/selftests/kvm/lib/kvm_util.c | 1 -
.../selftests/kvm/lib/x86_64/processor.c | 14 +++++----
tools/testing/selftests/kvm/lib/x86_64/sev.c | 30 +++++++++++++++++--
6 files changed, 40 insertions(+), 32 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 7c06ceb36643..8acca8237687 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -93,7 +93,6 @@ enum kvm_mem_region_type {
struct kvm_vm {
int mode;
unsigned long type;
- uint8_t subtype;
int kvm_fd;
int fd;
unsigned int pgtable_levels;
@@ -200,8 +199,8 @@ enum vm_guest_mode {
struct vm_shape {
uint32_t type;
uint8_t mode;
- uint8_t subtype;
- uint16_t padding;
+ uint8_t pad0;
+ uint16_t pad1;
};
kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 3bd03b088dda..3c8dfd8180b1 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -23,12 +23,6 @@
extern bool host_cpu_is_intel;
extern bool host_cpu_is_amd;
-enum vm_guest_x86_subtype {
- VM_SUBTYPE_NONE = 0,
- VM_SUBTYPE_SEV,
- VM_SUBTYPE_SEV_ES,
-};
-
/* Forced emulation prefix, used to invoke the emulator unconditionally. */
#define KVM_FEP "ud2; .byte 'k', 'v', 'm';"
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index 8a1bf88474c9..0719f083351a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -67,20 +67,8 @@ kvm_static_assert(SEV_RET_SUCCESS == 0);
__TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \
})
-static inline void sev_vm_init(struct kvm_vm *vm)
-{
- vm->arch.sev_fd = open_sev_dev_path_or_exit();
-
- vm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
-}
-
-
-static inline void sev_es_vm_init(struct kvm_vm *vm)
-{
- vm->arch.sev_fd = open_sev_dev_path_or_exit();
-
- vm_sev_ioctl(vm, KVM_SEV_ES_INIT, NULL);
-}
+void sev_vm_init(struct kvm_vm *vm);
+void sev_es_vm_init(struct kvm_vm *vm);
static inline void sev_register_encrypted_memory(struct kvm_vm *vm,
struct userspace_mem_region *region)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b2262b5fad9e..9da388100f3a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -276,7 +276,6 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
vm->mode = shape.mode;
vm->type = shape.type;
- vm->subtype = shape.subtype;
vm->pa_bits = vm_guest_mode_params[vm->mode].pa_bits;
vm->va_bits = vm_guest_mode_params[vm->mode].va_bits;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 74a4c736c9ae..9f87ca8b7ab6 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -578,10 +578,11 @@ void kvm_arch_vm_post_create(struct kvm_vm *vm)
sync_global_to_guest(vm, host_cpu_is_intel);
sync_global_to_guest(vm, host_cpu_is_amd);
- if (vm->subtype == VM_SUBTYPE_SEV)
- sev_vm_init(vm);
- else if (vm->subtype == VM_SUBTYPE_SEV_ES)
- sev_es_vm_init(vm);
+ if (vm->type == KVM_X86_SEV_VM || vm->type == KVM_X86_SEV_ES_VM) {
+ struct kvm_sev_init init = { 0 };
+
+ vm_sev_ioctl(vm, KVM_SEV_INIT2, &init);
+ }
}
void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
@@ -1081,9 +1082,12 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
void kvm_init_vm_address_properties(struct kvm_vm *vm)
{
- if (vm->subtype == VM_SUBTYPE_SEV || vm->subtype == VM_SUBTYPE_SEV_ES) {
+ if (vm->type == KVM_X86_SEV_VM || vm->type == KVM_X86_SEV_ES_VM) {
+ vm->arch.sev_fd = open_sev_dev_path_or_exit();
vm->arch.c_bit = BIT_ULL(this_cpu_property(X86_PROPERTY_SEV_C_BIT));
vm->gpa_tag_mask = vm->arch.c_bit;
+ } else {
+ vm->arch.sev_fd = -1;
}
}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index e248d3364b9c..597994fa4f41 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -35,6 +35,32 @@ static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *regio
}
}
+void sev_vm_init(struct kvm_vm *vm)
+{
+ if (vm->type == KVM_X86_DEFAULT_VM) {
+ assert(vm->arch.sev_fd == -1);
+ vm->arch.sev_fd = open_sev_dev_path_or_exit();
+ vm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
+ } else {
+ struct kvm_sev_init init = { 0 };
+ assert(vm->type == KVM_X86_SEV_VM);
+ vm_sev_ioctl(vm, KVM_SEV_INIT2, &init);
+ }
+}
+
+void sev_es_vm_init(struct kvm_vm *vm)
+{
+ if (vm->type == KVM_X86_DEFAULT_VM) {
+ assert(vm->arch.sev_fd == -1);
+ vm->arch.sev_fd = open_sev_dev_path_or_exit();
+ vm_sev_ioctl(vm, KVM_SEV_ES_INIT, NULL);
+ } else {
+ struct kvm_sev_init init = { 0 };
+ assert(vm->type == KVM_X86_SEV_ES_VM);
+ vm_sev_ioctl(vm, KVM_SEV_INIT2, &init);
+ }
+}
+
void sev_vm_launch(struct kvm_vm *vm, uint32_t policy)
{
struct kvm_sev_launch_start launch_start = {
@@ -91,10 +117,8 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
struct kvm_vcpu **cpu)
{
struct vm_shape shape = {
- .type = VM_TYPE_DEFAULT,
.mode = VM_MODE_DEFAULT,
- .subtype = policy & SEV_POLICY_ES ? VM_SUBTYPE_SEV_ES :
- VM_SUBTYPE_SEV,
+ .type = policy & SEV_POLICY_ES ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM,
};
struct kvm_vm *vm;
struct kvm_vcpu *cpus[1];
--
2.43.0
* Re: [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (14 preceding siblings ...)
2024-03-18 23:33 ` [PATCH v4 15/15] selftests: kvm: switch to using KVM_X86_*_VM Paolo Bonzini
@ 2024-03-19 2:20 ` Michael Roth
2024-03-19 19:43 ` [PATCH v4 16/15] fixup! KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
` (2 subsequent siblings)
18 siblings, 0 replies; 29+ messages in thread
From: Michael Roth @ 2024-03-19 2:20 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm, isaku.yamahata, seanjc, Dave Hansen
On Mon, Mar 18, 2024 at 07:33:37PM -0400, Paolo Bonzini wrote:
>
> The idea is that SEV SNP will only ever support KVM_SEV_INIT2. I have
> placed patches for QEMU to support this new API at branch sevinit2 of
> https://gitlab.com/bonzini/qemu.
I don't see any references to KVM_SEV_INIT2 in the sevinit2 branch. Has
everything been pushed already?
-Mike
* Re: [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
@ 2024-03-19 13:42 ` Michael Roth
2024-03-19 19:47 ` Paolo Bonzini
2024-03-19 20:07 ` Dave Hansen
2024-03-24 23:39 ` Michael Roth
2 siblings, 1 reply; 29+ messages in thread
From: Michael Roth @ 2024-03-19 13:42 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm, isaku.yamahata, seanjc, Dave Hansen
On Mon, Mar 18, 2024 at 07:33:46PM -0400, Paolo Bonzini wrote:
> SEV-ES allows passing custom contents for x87, SSE and AVX state into the VMSA.
> Allow userspace to do that with the usual KVM_SET_XSAVE API and only mark
> FPU contents as confidential after it has been copied and encrypted into
> the VMSA.
>
> Since the XSAVE state for AVX is the first, it does not need the
> compacted-state handling of get_xsave_addr(). However, there are other
> parts of XSAVE state in the VMSA that currently are not handled, and
> the validation logic of get_xsave_addr() is pointless to duplicate
> in KVM, so move get_xsave_addr() to public FPU API; it is really just
> a facility to operate on XSAVE state and does not expose any internal
> details of arch/x86/kernel/fpu.
>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/fpu/api.h | 3 +++
> arch/x86/kernel/fpu/xstate.h | 2 --
> arch/x86/kvm/svm/sev.c | 36 ++++++++++++++++++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 8 --------
> 4 files changed, 39 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
> index a2be3aefff9f..f86ad3335529 100644
> --- a/arch/x86/include/asm/fpu/api.h
> +++ b/arch/x86/include/asm/fpu/api.h
> @@ -143,6 +143,9 @@ extern void fpstate_clear_xstate_component(struct fpstate *fps, unsigned int xfe
>
> extern u64 xstate_get_guest_group_perm(void);
>
> +extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
I get a linker error if I don't add an EXPORT_SYMBOL_GPL(get_xsave_addr)
-Mike
* [PATCH v4 16/15] fixup! KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (15 preceding siblings ...)
2024-03-19 2:20 ` [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Michael Roth
@ 2024-03-19 19:43 ` Paolo Bonzini
2024-03-19 19:43 ` [PATCH v4 17/15] selftests: kvm: split "launch" phase of SEV VM creation Paolo Bonzini
2024-03-19 19:43 ` [PATCH v4 18/15] selftests: kvm: add test for transferring FPU state into the VMSA Paolo Bonzini
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-19 19:43 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, seanjc
A small change to add EXPORT_SYMBOL_GPL, and especially to actually match
the format in which the processor expects x87 registers in the VMSA.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kernel/fpu/xstate.c | 1 +
arch/x86/kvm/svm/sev.c | 12 ++++++++++--
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 117e74c44e75..eeaf4ec9243d 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -990,6 +990,7 @@ void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr)
return __raw_xsave_addr(xsave, xfeature_nr);
}
+EXPORT_SYMBOL_GPL(get_xsave_addr);
#ifdef CONFIG_ARCH_HAS_PKEYS
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index cee6beb2bf29..66fa852b48b3 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -679,9 +679,17 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->x87_rip = xsave->i387.rip;
for (i = 0; i < 8; i++) {
- d = save->fpreg_x87 + i * 10;
+ /*
+ * The format of the x87 save area is totally undocumented,
+ * and definitely not what you would expect. It consists
+ * of an 8*8 byte area with bytes 0-7 of each register, followed
+ * by an 8*2 byte area with bytes 8-9 of each register.
+ */
+ d = save->fpreg_x87 + i * 8;
s = ((u8 *)xsave->i387.st_space) + i * 16;
- memcpy(d, s, 10);
+ memcpy(d, s, 8);
+ save->fpreg_x87[64 + i * 2] = s[8];
+ save->fpreg_x87[64 + i * 2 + 1] = s[9];
}
memcpy(save->fpreg_xmm, xsave->i387.xmm_space, 256);
--
2.43.0
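The repacking in the fixup above is easier to see as a stand-alone
function: XSAVE keeps each 10-byte x87 register in its own 16-byte slot,
while the VMSA stores bytes 0-7 of all eight registers first, followed by
bytes 8-9 of each. A sketch of that transformation (a hypothetical
helper, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Repack x87 registers from the XSAVE layout (one 10-byte register per
 * 16-byte st_space slot) into the VMSA layout: an 8*8 byte area holding
 * bytes 0-7 of each register, then an 8*2 byte area with bytes 8-9.
 */
void pack_x87(uint8_t xsave_st[8][16], uint8_t vmsa[80])
{
	int i;

	for (i = 0; i < 8; i++) {
		/* Low 8 bytes of register i go into the first 64 bytes. */
		memcpy(vmsa + i * 8, xsave_st[i], 8);
		/* High 2 bytes go into the trailing 16-byte area. */
		vmsa[64 + i * 2]     = xsave_st[i][8];
		vmsa[64 + i * 2 + 1] = xsave_st[i][9];
	}
}
```

This matches the loop in sev_es_sync_vmsa() after the fixup: index
``i * 8`` for the low halves and ``64 + i * 2`` for the high halves.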
* [PATCH v4 17/15] selftests: kvm: split "launch" phase of SEV VM creation
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (16 preceding siblings ...)
2024-03-19 19:43 ` [PATCH v4 16/15] fixup! KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
@ 2024-03-19 19:43 ` Paolo Bonzini
2024-03-19 19:43 ` [PATCH v4 18/15] selftests: kvm: add test for transferring FPU state into the VMSA Paolo Bonzini
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-19 19:43 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, seanjc
Allow the caller to set the initial state of the VM. Doing this
before sev_vm_launch() matters for SEV-ES, since that is the
place where the VMSA is updated and after which the guest state
becomes sealed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
tools/testing/selftests/kvm/include/x86_64/sev.h | 3 ++-
tools/testing/selftests/kvm/lib/x86_64/sev.c | 16 ++++++++++------
.../selftests/kvm/x86_64/sev_smoke_test.c | 7 ++++++-
3 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index 0719f083351a..82c11c81a956 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -31,8 +31,9 @@ void sev_vm_launch(struct kvm_vm *vm, uint32_t policy);
void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement);
void sev_vm_launch_finish(struct kvm_vm *vm);
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
struct kvm_vcpu **cpu);
+void vm_sev_launch(struct kvm_vm *vm, uint32_t policy, uint8_t *measurement);
kvm_static_assert(SEV_RET_SUCCESS == 0);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 597994fa4f41..d482029b6004 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -113,26 +113,30 @@ void sev_vm_launch_finish(struct kvm_vm *vm)
TEST_ASSERT_EQ(status.state, SEV_GUEST_STATE_RUNNING);
}
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
struct kvm_vcpu **cpu)
{
struct vm_shape shape = {
.mode = VM_MODE_DEFAULT,
- .type = policy & SEV_POLICY_ES ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM,
+ .type = type,
};
struct kvm_vm *vm;
struct kvm_vcpu *cpus[1];
- uint8_t measurement[512];
vm = __vm_create_with_vcpus(shape, 1, 0, guest_code, cpus);
*cpu = cpus[0];
+ return vm;
+}
+
+void vm_sev_launch(struct kvm_vm *vm, uint32_t policy, uint8_t *measurement)
+{
sev_vm_launch(vm, policy);
- /* TODO: Validate the measurement is as expected. */
+ if (!measurement)
+ measurement = alloca(256);
+
sev_vm_launch_measure(vm, measurement);
sev_vm_launch_finish(vm);
-
- return vm;
}
diff --git a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
index 026779f3ed06..234c80dd344d 100644
--- a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
@@ -41,7 +41,12 @@ static void test_sev(void *guest_code, uint64_t policy)
struct kvm_vm *vm;
struct ucall uc;
- vm = vm_sev_create_with_one_vcpu(policy, guest_code, &vcpu);
+ uint32_t type = policy & SEV_POLICY_ES ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
+
+ vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
+
+ /* TODO: Validate the measurement is as expected. */
+ vm_sev_launch(vm, policy, NULL);
for (;;) {
vcpu_run(vcpu);
--
2.43.0
* [PATCH v4 18/15] selftests: kvm: add test for transferring FPU state into the VMSA
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
` (17 preceding siblings ...)
2024-03-19 19:43 ` [PATCH v4 17/15] selftests: kvm: split "launch" phase of SEV VM creation Paolo Bonzini
@ 2024-03-19 19:43 ` Paolo Bonzini
18 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-19 19:43 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: michael.roth, seanjc
Test that CRn, XCR0 and FPU state are correctly moved from KVM's internal
state to the VMSA by SEV_LAUNCH_UPDATE_VMSA.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
.../selftests/kvm/x86_64/sev_smoke_test.c | 87 +++++++++++++++++++
1 file changed, 87 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
index 234c80dd344d..195150bc5013 100644
--- a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
@@ -4,6 +4,7 @@
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
+#include <math.h>
#include "test_util.h"
#include "kvm_util.h"
@@ -13,6 +14,8 @@
#include "sev.h"
+#define XFEATURE_MASK_X87_AVX (XFEATURE_MASK_FP | XFEATURE_MASK_SSE | XFEATURE_MASK_YMM)
+
static void guest_sev_es_code(void)
{
/* TODO: Check CPUID after GHCB-based hypercall support is added. */
@@ -35,6 +38,84 @@ static void guest_sev_code(void)
GUEST_DONE();
}
+/* Stash state passed via VMSA before any compiled code runs. */
+extern void guest_code_xsave(void);
+asm("guest_code_xsave:\n"
+ "mov $-1, %eax\n"
+ "mov $-1, %edx\n"
+ "xsave (%rdi)\n"
+ "jmp guest_sev_es_code");
+
+static void compare_xsave(u8 *from_host, u8 *from_guest)
+{
+ int i;
+ bool bad = false;
+ for (i = 0; i < 4095; i++) {
+ if (from_host[i] != from_guest[i]) {
+ printf("mismatch at %04x | %02hhx %02hhx\n", i, from_host[i], from_guest[i]);
+ bad = true;
+ }
+ }
+
+ if (bad)
+ abort();
+}
+
+static void test_sync_vmsa(uint32_t policy)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ vm_vaddr_t gva;
+ void *hva;
+
+ double x87val = M_PI;
+ struct kvm_xsave __attribute__((aligned(64))) xsave = { 0 };
+ struct kvm_sregs sregs;
+ struct kvm_xcrs xcrs = {
+ .nr_xcrs = 1,
+ .xcrs[0].xcr = 0,
+ .xcrs[0].value = XFEATURE_MASK_X87_AVX,
+ };
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SEV_ES_VM, guest_code_xsave, &vcpu);
+ gva = vm_vaddr_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,
+ MEM_REGION_TEST_DATA);
+ hva = addr_gva2hva(vm, gva);
+
+ vcpu_args_set(vcpu, 1, gva);
+
+ vcpu_sregs_get(vcpu, &sregs);
+ sregs.cr4 |= X86_CR4_OSFXSR | X86_CR4_OSXSAVE;
+ vcpu_sregs_set(vcpu, &sregs);
+
+ vcpu_xcrs_set(vcpu, &xcrs);
+ asm("fninit; fldl %3\n"
+ "vpcmpeqb %%ymm4, %%ymm4, %%ymm4\n"
+ "xsave (%2)"
+ : "=m"(xsave)
+ : "A"(XFEATURE_MASK_X87_AVX), "r"(&xsave), "m" (x87val)
+ : "ymm4", "st", "st(1)", "st(2)", "st(3)", "st(4)", "st(5)", "st(6)", "st(7)");
+ vcpu_xsave_set(vcpu, &xsave);
+
+ vm_sev_launch(vm, SEV_POLICY_ES | policy, NULL);
+
+ /* This page is shared, so make it decrypted. */
+ memset(hva, 0, 4096);
+
+ vcpu_run(vcpu);
+
+ TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_SYSTEM_EVENT,
+ "Wanted SYSTEM_EVENT, got %s",
+ exit_reason_str(vcpu->run->exit_reason));
+ TEST_ASSERT_EQ(vcpu->run->system_event.type, KVM_SYSTEM_EVENT_SEV_TERM);
+ TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 1);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[0], GHCB_MSR_TERM_REQ);
+
+ compare_xsave((u8 *)&xsave, (u8 *)hva);
+
+ kvm_vm_free(vm);
+}
+
static void test_sev(void *guest_code, uint64_t policy)
{
struct kvm_vcpu *vcpu;
@@ -87,6 +168,12 @@ int main(int argc, char *argv[])
if (kvm_cpu_has(X86_FEATURE_SEV_ES)) {
test_sev(guest_sev_es_code, SEV_POLICY_ES | SEV_POLICY_NO_DBG);
test_sev(guest_sev_es_code, SEV_POLICY_ES);
+
+ if (kvm_has_cap(KVM_CAP_XCRS) &&
+ (xgetbv(0) & XFEATURE_MASK_X87_AVX) == XFEATURE_MASK_X87_AVX) {
+ test_sync_vmsa(0);
+ test_sync_vmsa(SEV_POLICY_NO_DBG);
+ }
}
return 0;
--
2.43.0
* Re: [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-19 13:42 ` Michael Roth
@ 2024-03-19 19:47 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-03-19 19:47 UTC (permalink / raw)
To: Michael Roth; +Cc: linux-kernel, kvm, isaku.yamahata, seanjc, Dave Hansen
On Tue, Mar 19, 2024 at 2:42 PM Michael Roth <michael.roth@amd.com> wrote:
> > Since the XSAVE state for AVX is the first, it does not need the
> > compacted-state handling of get_xsave_addr(). However, there are other
> > parts of XSAVE state in the VMSA that currently are not handled, and
> > the validation logic of get_xsave_addr() is pointless to duplicate
> > in KVM, so move get_xsave_addr() to public FPU API; it is really just
> > a facility to operate on XSAVE state and does not expose any internal
> > details of arch/x86/kernel/fpu.
> >
> > Cc: Dave Hansen <dave.hansen@linux.intel.com>
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> > arch/x86/include/asm/fpu/api.h | 3 +++
> > arch/x86/kernel/fpu/xstate.h | 2 --
> > arch/x86/kvm/svm/sev.c | 36 ++++++++++++++++++++++++++++++++++
> > arch/x86/kvm/svm/svm.c | 8 --------
> > 4 files changed, 39 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
> > index a2be3aefff9f..f86ad3335529 100644
> > --- a/arch/x86/include/asm/fpu/api.h
> > +++ b/arch/x86/include/asm/fpu/api.h
> > @@ -143,6 +143,9 @@ extern void fpstate_clear_xstate_component(struct fpstate *fps, unsigned int xfe
> >
> > extern u64 xstate_get_guest_group_perm(void);
> >
> > +extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
>
> I get a linker error if I don't add an EXPORT_SYMBOL_GPL(get_xsave_addr)
Indeed, and also the format for the 10-byte x87 registers is... unusual.
I sent a follow up at the end of this thread that includes a fixup for
this patch and the FPU/XSAVE test for SEV-ES.
Paolo
* Re: [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
2024-03-19 13:42 ` Michael Roth
@ 2024-03-19 20:07 ` Dave Hansen
2024-03-24 23:39 ` Michael Roth
2 siblings, 0 replies; 29+ messages in thread
From: Dave Hansen @ 2024-03-19 20:07 UTC (permalink / raw)
To: Paolo Bonzini, linux-kernel, kvm
Cc: michael.roth, isaku.yamahata, seanjc, Dave Hansen
On 3/18/24 16:33, Paolo Bonzini wrote:
> Since the XSAVE state for AVX is the first, it does not need the
> compacted-state handling of get_xsave_addr(). However, there are other
> parts of XSAVE state in the VMSA that currently are not handled, and
> the validation logic of get_xsave_addr() is pointless to duplicate
> in KVM, so move get_xsave_addr() to public FPU API; it is really just
> a facility to operate on XSAVE state and does not expose any internal
> details of arch/x86/kernel/fpu.
We don't want to grow _too_ many users of get_xsave_addr() since it's
hard to use it right. But this seems to be a legitimate user.
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
* Re: [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
2024-03-18 23:33 ` [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y Paolo Bonzini
@ 2024-03-19 22:55 ` kernel test robot
2024-03-20 8:26 ` kernel test robot
1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2024-03-19 22:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: oe-kbuild-all
Hi Paolo,
kernel test robot noticed the following build errors:
[auto build test ERROR on kvm/queue]
[also build test ERROR on linus/master next-20240319]
[cannot apply to tip/x86/core mst-vhost/linux-next kvm/linux-next v6.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Paolo-Bonzini/KVM-SVM-Invert-handling-of-SEV-and-SEV_ES-feature-flags/20240319-074252
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/r/20240318233352.2728327-3-pbonzini%40redhat.com
patch subject: [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20240320/202403200610.90F5Upvs-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240320/202403200610.90F5Upvs-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403200610.90F5Upvs-lkp@intel.com/
All error/warnings (new ones prefixed by >>):
arch/x86/kvm/svm/svm.c: In function 'init_vmcb':
>> arch/x86/kvm/svm/svm.c:1367:17: error: implicit declaration of function 'sev_init_vmcb'; did you mean 'init_vmcb'? [-Werror=implicit-function-declaration]
1367 | sev_init_vmcb(svm);
| ^~~~~~~~~~~~~
| init_vmcb
arch/x86/kvm/svm/svm.c: In function '__svm_vcpu_reset':
>> arch/x86/kvm/svm/svm.c:1391:17: error: implicit declaration of function 'sev_es_vcpu_reset'; did you mean 'kvm_vcpu_reset'? [-Werror=implicit-function-declaration]
1391 | sev_es_vcpu_reset(svm);
| ^~~~~~~~~~~~~~~~~
| kvm_vcpu_reset
arch/x86/kvm/svm/svm.c: In function 'svm_prepare_switch_to_guest':
>> arch/x86/kvm/svm/svm.c:1512:17: error: implicit declaration of function 'sev_es_unmap_ghcb' [-Werror=implicit-function-declaration]
1512 | sev_es_unmap_ghcb(svm);
| ^~~~~~~~~~~~~~~~~
>> arch/x86/kvm/svm/svm.c:1526:17: error: implicit declaration of function 'sev_es_prepare_switch_to_guest'; did you mean 'svm_prepare_switch_to_guest'? [-Werror=implicit-function-declaration]
1526 | sev_es_prepare_switch_to_guest(hostsa);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| svm_prepare_switch_to_guest
arch/x86/kvm/svm/svm.c: In function 'io_interception':
>> arch/x86/kvm/svm/svm.c:2226:32: error: implicit declaration of function 'sev_es_string_io'; did you mean 'kvm_sev_es_string_io'? [-Werror=implicit-function-declaration]
2226 | return sev_es_string_io(svm, size, port, in);
| ^~~~~~~~~~~~~~~~
| kvm_sev_es_string_io
arch/x86/kvm/svm/svm.c: In function 'pre_svm_run':
>> arch/x86/kvm/svm/svm.c:3549:24: error: implicit declaration of function 'pre_sev_run'; did you mean 'pre_svm_run'? [-Werror=implicit-function-declaration]
3549 | return pre_sev_run(svm, vcpu->cpu);
| ^~~~~~~~~~~
| pre_svm_run
>> arch/x86/kvm/svm/svm.c:3549:24: warning: 'return' with a value, in function returning void [-Wreturn-type]
3549 | return pre_sev_run(svm, vcpu->cpu);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/svm/svm.c:3532:13: note: declared here
3532 | static void pre_svm_run(struct kvm_vcpu *vcpu)
| ^~~~~~~~~~~
arch/x86/kvm/svm/svm.c: In function 'svm_vcpu_after_set_cpuid':
>> arch/x86/kvm/svm/svm.c:4356:17: error: implicit declaration of function 'sev_vcpu_after_set_cpuid'; did you mean 'svm_vcpu_after_set_cpuid'? [-Werror=implicit-function-declaration]
4356 | sev_vcpu_after_set_cpuid(svm);
| ^~~~~~~~~~~~~~~~~~~~~~~~
| svm_vcpu_after_set_cpuid
arch/x86/kvm/svm/svm.c: In function 'svm_vcpu_deliver_sipi_vector':
>> arch/x86/kvm/svm/svm.c:4883:9: error: implicit declaration of function 'sev_vcpu_deliver_sipi_vector'; did you mean 'svm_vcpu_deliver_sipi_vector'? [-Werror=implicit-function-declaration]
4883 | sev_vcpu_deliver_sipi_vector(vcpu, vector);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
| svm_vcpu_deliver_sipi_vector
cc1: some warnings being treated as errors
vim +1367 arch/x86/kvm/svm/svm.c
36e8194dcd749c arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-09-23 1226
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1227 static void init_vmcb(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1228 {
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1229 struct vcpu_svm *svm = to_svm(vcpu);
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1230 struct vmcb *vmcb = svm->vmcb01.ptr;
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1231 struct vmcb_control_area *control = &vmcb->control;
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1232 struct vmcb_save_area *save = &vmcb->save;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1233
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1234 svm_set_intercept(svm, INTERCEPT_CR0_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1235 svm_set_intercept(svm, INTERCEPT_CR3_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1236 svm_set_intercept(svm, INTERCEPT_CR4_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1237 svm_set_intercept(svm, INTERCEPT_CR0_WRITE);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1238 svm_set_intercept(svm, INTERCEPT_CR3_WRITE);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1239 svm_set_intercept(svm, INTERCEPT_CR4_WRITE);
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1240 if (!kvm_vcpu_apicv_active(vcpu))
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1241 svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1242
5315c716b69f47 arch/x86/kvm/svm.c Paolo Bonzini 2014-03-03 1243 set_dr_intercepts(svm);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1244
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1245 set_exception_intercept(svm, PF_VECTOR);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1246 set_exception_intercept(svm, UD_VECTOR);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1247 set_exception_intercept(svm, MC_VECTOR);
54a20552e1eae0 arch/x86/kvm/svm.c Eric Northup 2015-11-03 1248 set_exception_intercept(svm, AC_VECTOR);
cbdb967af3d549 arch/x86/kvm/svm.c Paolo Bonzini 2015-11-10 1249 set_exception_intercept(svm, DB_VECTOR);
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1250 /*
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1251 * Guest access to VMware backdoor ports could legitimately
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1252 * trigger #GP because of TSS I/O permission bitmap.
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1253 * We intercept those #GP and allow access to them anyway
29de732cc95cb5 arch/x86/kvm/svm/svm.c Alexey Kardashevskiy 2023-06-15 1254 * as VMware does.
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1255 */
29de732cc95cb5 arch/x86/kvm/svm/svm.c Alexey Kardashevskiy 2023-06-15 1256 if (enable_vmware_backdoor)
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1257 set_exception_intercept(svm, GP_VECTOR);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1258
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1259 svm_set_intercept(svm, INTERCEPT_INTR);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1260 svm_set_intercept(svm, INTERCEPT_NMI);
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1261
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1262 if (intercept_smi)
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1263 svm_set_intercept(svm, INTERCEPT_SMI);
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1264
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1265 svm_set_intercept(svm, INTERCEPT_SELECTIVE_CR0);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1266 svm_set_intercept(svm, INTERCEPT_RDPMC);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1267 svm_set_intercept(svm, INTERCEPT_CPUID);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1268 svm_set_intercept(svm, INTERCEPT_INVD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1269 svm_set_intercept(svm, INTERCEPT_INVLPG);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1270 svm_set_intercept(svm, INTERCEPT_INVLPGA);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1271 svm_set_intercept(svm, INTERCEPT_IOIO_PROT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1272 svm_set_intercept(svm, INTERCEPT_MSR_PROT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1273 svm_set_intercept(svm, INTERCEPT_TASK_SWITCH);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1274 svm_set_intercept(svm, INTERCEPT_SHUTDOWN);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1275 svm_set_intercept(svm, INTERCEPT_VMRUN);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1276 svm_set_intercept(svm, INTERCEPT_VMMCALL);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1277 svm_set_intercept(svm, INTERCEPT_VMLOAD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1278 svm_set_intercept(svm, INTERCEPT_VMSAVE);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1279 svm_set_intercept(svm, INTERCEPT_STGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1280 svm_set_intercept(svm, INTERCEPT_CLGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1281 svm_set_intercept(svm, INTERCEPT_SKINIT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1282 svm_set_intercept(svm, INTERCEPT_WBINVD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1283 svm_set_intercept(svm, INTERCEPT_XSETBV);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1284 svm_set_intercept(svm, INTERCEPT_RDPRU);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1285 svm_set_intercept(svm, INTERCEPT_RSM);
668fffa3f838ed arch/x86/kvm/svm.c Michael S. Tsirkin 2017-04-21 1286
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1287 if (!kvm_mwait_in_guest(vcpu->kvm)) {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1288 svm_set_intercept(svm, INTERCEPT_MONITOR);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1289 svm_set_intercept(svm, INTERCEPT_MWAIT);
668fffa3f838ed arch/x86/kvm/svm.c Michael S. Tsirkin 2017-04-21 1290 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1291
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1292 if (!kvm_hlt_in_guest(vcpu->kvm))
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1293 svm_set_intercept(svm, INTERCEPT_HLT);
caa057a2cad647 arch/x86/kvm/svm.c Wanpeng Li 2018-03-12 1294
d0ec49d4de9080 arch/x86/kvm/svm.c Tom Lendacky 2017-07-17 1295 control->iopm_base_pa = __sme_set(iopm_base);
d0ec49d4de9080 arch/x86/kvm/svm.c Tom Lendacky 2017-07-17 1296 control->msrpm_base_pa = __sme_set(__pa(svm->msrpm));
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1297 control->int_ctl = V_INTR_MASKING_MASK;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1298
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1299 init_seg(&save->es);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1300 init_seg(&save->ss);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1301 init_seg(&save->ds);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1302 init_seg(&save->fs);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1303 init_seg(&save->gs);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1304
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1305 save->cs.selector = 0xf000;
04b66839d312d3 arch/x86/kvm/svm.c Paolo Bonzini 2013-03-19 1306 save->cs.base = 0xffff0000;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1307 /* Executable/Readable Code Segment */
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1308 save->cs.attrib = SVM_SELECTOR_READ_MASK | SVM_SELECTOR_P_MASK |
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1309 SVM_SELECTOR_S_MASK | SVM_SELECTOR_CODE_MASK;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1310 save->cs.limit = 0xffff;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1311
4f117ce4aefca0 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-07-13 1312 save->gdtr.base = 0;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1313 save->gdtr.limit = 0xffff;
4f117ce4aefca0 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-07-13 1314 save->idtr.base = 0;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1315 save->idtr.limit = 0xffff;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1316
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1317 init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1318 init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1319
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1320 if (npt_enabled) {
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1321 /* Setup VMCB for Nested Paging */
cea3a19b007a69 arch/x86/kvm/svm.c Tom Lendacky 2017-12-04 1322 control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1323 svm_clr_intercept(svm, INTERCEPT_INVLPG);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1324 clr_exception_intercept(svm, PF_VECTOR);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1325 svm_clr_intercept(svm, INTERCEPT_CR3_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1326 svm_clr_intercept(svm, INTERCEPT_CR3_WRITE);
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1327 save->g_pat = vcpu->arch.pat;
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1328 save->cr3 = 0;
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1329 }
193015adf40d04 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-12 1330 svm->current_vmcb->asid_generation = 0;
7e8e6eed75e290 arch/x86/kvm/svm/svm.c Cathy Avery 2020-10-11 1331 svm->asid = 0;
1371d90460189d arch/x86/kvm/svm.c Alexander Graf 2008-11-25 1332
c74ad08f3333db arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-05-03 1333 svm->nested.vmcb12_gpa = INVALID_GPA;
c74ad08f3333db arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-05-03 1334 svm->nested.last_vmcb12_gpa = INVALID_GPA;
2af9194d1b683f arch/x86/kvm/svm.c Joerg Roedel 2009-08-07 1335
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1336 if (!kvm_pause_in_guest(vcpu->kvm)) {
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1337 control->pause_filter_count = pause_filter_count;
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1338 if (pause_filter_thresh)
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1339 control->pause_filter_thresh = pause_filter_thresh;
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1340 svm_set_intercept(svm, INTERCEPT_PAUSE);
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1341 } else {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1342 svm_clr_intercept(svm, INTERCEPT_PAUSE);
565d0998ecac83 arch/x86/kvm/svm.c Mark Langsdorf 2009-10-06 1343 }
565d0998ecac83 arch/x86/kvm/svm.c Mark Langsdorf 2009-10-06 1344
3b195ac9260235 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-05-04 1345 svm_recalc_instruction_intercepts(vcpu, svm);
4407a797e9412a arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1346
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1347 /*
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1348 * If the host supports V_SPEC_CTRL then disable the interception
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1349 * of MSR_IA32_SPEC_CTRL.
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1350 */
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1351 if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1352 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1353
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1354 if (kvm_vcpu_apicv_active(vcpu))
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1355 avic_init_vmcb(svm, vmcb);
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1356
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1357 if (vnmi)
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1358 svm->vmcb->control.int_ctl |= V_NMI_ENABLE_MASK;
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1359
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1360 if (vgif) {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1361 svm_clr_intercept(svm, INTERCEPT_STGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1362 svm_clr_intercept(svm, INTERCEPT_CLGI);
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1363 svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1364 }
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1365
6defa24d3b12bb arch/x86/kvm/svm/svm.c Peter Gonda 2022-06-23 1366 if (sev_guest(vcpu->kvm))
6defa24d3b12bb arch/x86/kvm/svm/svm.c Peter Gonda 2022-06-23 @1367 sev_init_vmcb(svm);
1654efcbc431a3 arch/x86/kvm/svm.c Brijesh Singh 2017-12-04 1368
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1369 svm_hv_init_vmcb(vmcb);
36e8194dcd749c arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-09-23 1370 init_vmcb_after_set_cpuid(vcpu);
1e0c7d40758bcd arch/x86/kvm/svm/svm.c Vineeth Pillai 2021-06-03 1371
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1372 vmcb_mark_all_dirty(vmcb);
8d28fec406e4d5 arch/x86/kvm/svm.c Roedel, Joerg 2010-12-03 1373
2af9194d1b683f arch/x86/kvm/svm.c Joerg Roedel 2009-08-07 1374 enable_gif(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1375 }
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1376
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1377 static void __svm_vcpu_reset(struct kvm_vcpu *vcpu)
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1378 {
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1379 struct vcpu_svm *svm = to_svm(vcpu);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1380
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1381 svm_vcpu_init_msrpm(vcpu, svm->msrpm);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1382
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1383 svm_init_osvw(vcpu);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1384 vcpu->arch.microcode_version = 0x01000065;
938c8745bcf2f7 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-05-24 1385 svm->tsc_ratio_msr = kvm_caps.default_tsc_scaling_ratio;
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1386
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1387 svm->nmi_masked = false;
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1388 svm->awaiting_iret_completion = false;
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1389
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1390 if (sev_es_guest(vcpu->kvm))
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 @1391 sev_es_vcpu_reset(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1392 }
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1393
d28bc9dd25ce02 arch/x86/kvm/svm.c Nadav Amit 2015-04-13 1394 static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1395 {
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1396 struct vcpu_svm *svm = to_svm(vcpu);
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1397
b2ac58f90540e3 arch/x86/kvm/svm.c KarimAllah Ahmed 2018-02-03 1398 svm->spec_ctrl = 0;
ccbcd2674472a9 arch/x86/kvm/svm.c Thomas Gleixner 2018-05-09 1399 svm->virt_spec_ctrl = 0;
b2ac58f90540e3 arch/x86/kvm/svm.c KarimAllah Ahmed 2018-02-03 1400
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1401 init_vmcb(vcpu);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1402
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1403 if (!init_event)
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1404 __svm_vcpu_reset(vcpu);
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1405 }
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1406
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1407 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb)
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1408 {
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1409 svm->current_vmcb = target_vmcb;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1410 svm->vmcb = target_vmcb->ptr;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1411 }
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1412
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1413 static int svm_vcpu_create(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1414 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1415 struct vcpu_svm *svm;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1416 struct page *vmcb01_page;
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1417 struct page *vmsa_page = NULL;
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1418 int err;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1419
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1420 BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1421 svm = to_svm(vcpu);
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1422
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1423 err = -ENOMEM;
75253db41a467a arch/x86/kvm/svm/svm.c Brijesh Singh 2024-01-25 1424 vmcb01_page = snp_safe_alloc_page(vcpu);
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1425 if (!vmcb01_page)
987b2594ed5d12 arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1426 goto out;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1427
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1428 if (sev_es_guest(vcpu->kvm)) {
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1429 /*
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1430 * SEV-ES guests require a separate VMSA page used to contain
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1431 * the encrypted register state of the guest.
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1432 */
75253db41a467a arch/x86/kvm/svm/svm.c Brijesh Singh 2024-01-25 1433 vmsa_page = snp_safe_alloc_page(vcpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1434 if (!vmsa_page)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1435 goto error_free_vmcb_page;
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1436
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1437 /*
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1438 * SEV-ES guests maintain an encrypted version of their FPU
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1439 * state which is restored and saved on VMRUN and VMEXIT.
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1440 * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1441 * do xsave/xrstor on it.
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1442 */
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1443 fpstate_set_confidential(&vcpu->arch.guest_fpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1444 }
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1445
dfa20099e26e35 arch/x86/kvm/svm.c Suravee Suthikulpanit 2017-09-12 1446 err = avic_init_vcpu(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1447 if (err)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1448 goto error_free_vmsa_page;
411b44ba80ab00 arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-08-23 1449
476c9bd8e997b4 arch/x86/kvm/svm/svm.c Aaron Lewis 2020-09-25 1450 svm->msrpm = svm_vcpu_alloc_msrpm();
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1451 if (!svm->msrpm) {
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1452 err = -ENOMEM;
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1453 goto error_free_vmsa_page;
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1454 }
b286d5d8b0836e arch/x86/kvm/svm.c Alexander Graf 2008-11-25 1455
091abbf578f926 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-05-19 1456 svm->x2avic_msrs_intercepted = true;
091abbf578f926 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-05-19 1457
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1458 svm->vmcb01.ptr = page_address(vmcb01_page);
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1459 svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1460 svm_switch_vmcb(svm, &svm->vmcb01);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1461
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1462 if (vmsa_page)
b67a4cc35c9f72 arch/x86/kvm/svm/svm.c Peter Gonda 2021-10-21 1463 svm->sev_es.vmsa = page_address(vmsa_page);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1464
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1465 svm->guest_state_loaded = false;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1466
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1467 return 0;
36241b8c7cbcc8 drivers/kvm/svm.c Avi Kivity 2006-12-22 1468
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1469 error_free_vmsa_page:
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1470 if (vmsa_page)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1471 __free_page(vmsa_page);
8d22b90e942c26 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-08-27 1472 error_free_vmcb_page:
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1473 __free_page(vmcb01_page);
987b2594ed5d12 arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1474 out:
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1475 return err;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1476 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1477
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1478 static void svm_clear_current_vmcb(struct vmcb *vmcb)
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1479 {
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1480 int i;
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1481
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1482 for_each_online_cpu(i)
73412dfeea724e arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-09 1483 cmpxchg(per_cpu_ptr(&svm_data.current_vmcb, i), vmcb, NULL);
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1484 }
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1485
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1486 static void svm_vcpu_free(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1487 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1488 struct vcpu_svm *svm = to_svm(vcpu);
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1489
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1490 /*
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1491 * The vmcb page can be recycled, causing a false negative in
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1492 * svm_vcpu_load(). So, ensure that no logical CPU has this
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1493 * vmcb page recorded as its current vmcb.
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1494 */
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1495 svm_clear_current_vmcb(svm->vmcb);
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1496
917401f26a6af5 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-11-03 1497 svm_leave_nested(vcpu);
2fcf4876ada8a2 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-10-01 1498 svm_free_nested(svm);
2fcf4876ada8a2 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-10-01 1499
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1500 sev_free_vcpu(vcpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1501
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1502 __free_page(pfn_to_page(__sme_clr(svm->vmcb01.pa) >> PAGE_SHIFT));
47903dc10e7ebb arch/x86/kvm/svm/svm.c Krish Sadhukhan 2021-04-12 1503 __free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE));
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1504 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1505
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1506 static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1507 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1508 struct vcpu_svm *svm = to_svm(vcpu);
73412dfeea724e arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-09 1509 struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
0cc5064d335543 drivers/kvm/svm.c Avi Kivity 2007-03-25 1510
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 1511 if (sev_es_guest(vcpu->kvm))
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 @1512 sev_es_unmap_ghcb(svm);
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 1513
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1514 if (svm->guest_state_loaded)
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1515 return;
94dfbdb3894eda drivers/kvm/svm.c Anthony Liguori 2007-04-29 1516
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1517 /*
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1518 * Save additional host state that will be restored on VMEXIT (sev-es)
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1519 * or subsequent vmload of host save area.
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1520 */
e287bd005ad9d8 arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-07 1521 vmsave(sd->save_area_pa);
068f7ea61895ff arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-01-25 1522 if (sev_es_guest(vcpu->kvm)) {
3dd2775b74c9b1 arch/x86/kvm/svm/svm.c Tom Lendacky 2022-04-05 1523 struct sev_es_save_area *hostsa;
3dd2775b74c9b1 arch/x86/kvm/svm/svm.c Tom Lendacky 2022-04-05 1524 hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400);
068f7ea61895ff arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-01-25 1525
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 @1526 sev_es_prepare_switch_to_guest(hostsa);
861377730aa9db arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1527 }
fbc0db76b77125 arch/x86/kvm/svm.c Joerg Roedel 2011-03-25 1528
11d39e8cc43e1c arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-06-06 1529 if (tsc_scaling)
11d39e8cc43e1c arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-06-06 1530 __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1531
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1532 /*
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1533 * TSC_AUX is always virtualized for SEV-ES guests when the feature is
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1534 * available. The user return MSR support is not required in this case
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1535 * because TSC_AUX is restored on #VMEXIT from the host save area
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1536 * (which has been initialized in svm_hardware_enable()).
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1537 */
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1538 if (likely(tsc_aux_uret_slot >= 0) &&
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1539 (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
0caa0a77c2f6fc arch/x86/kvm/svm/svm.c Sean Christopherson 2021-05-04 1540 kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
8221c13700561b arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1541
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1542 svm->guest_state_loaded = true;
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1543 }
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1544
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
2024-03-18 23:33 ` [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y Paolo Bonzini
2024-03-19 22:55 ` kernel test robot
@ 2024-03-20 8:26 ` kernel test robot
1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2024-03-20 8:26 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: llvm, oe-kbuild-all
Hi Paolo,
kernel test robot noticed the following build errors:
[auto build test ERROR on kvm/queue]
[also build test ERROR on linus/master next-20240320]
[cannot apply to tip/x86/core mst-vhost/linux-next kvm/linux-next v6.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
When submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Paolo-Bonzini/KVM-SVM-Invert-handling-of-SEV-and-SEV_ES-feature-flags/20240319-074252
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/r/20240318233352.2728327-3-pbonzini%40redhat.com
patch subject: [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
config: x86_64-buildonly-randconfig-001-20240319 (https://download.01.org/0day-ci/archive/20240320/202403201614.EaivvQmz-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240320/202403201614.EaivvQmz-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403201614.EaivvQmz-lkp@intel.com/
All errors/warnings (new ones prefixed by >>):
>> arch/x86/kvm/svm/svm.c:1367:3: error: call to undeclared function 'sev_init_vmcb'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1367 | sev_init_vmcb(svm);
| ^
>> arch/x86/kvm/svm/svm.c:1391:3: error: call to undeclared function 'sev_es_vcpu_reset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1391 | sev_es_vcpu_reset(svm);
| ^
arch/x86/kvm/svm/svm.c:1391:3: note: did you mean 'kvm_vcpu_reset'?
arch/x86/include/asm/kvm_host.h:2222:6: note: 'kvm_vcpu_reset' declared here
2222 | void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
| ^
>> arch/x86/kvm/svm/svm.c:1512:3: error: call to undeclared function 'sev_es_unmap_ghcb'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1512 | sev_es_unmap_ghcb(svm);
| ^
>> arch/x86/kvm/svm/svm.c:1526:3: error: call to undeclared function 'sev_es_prepare_switch_to_guest'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1526 | sev_es_prepare_switch_to_guest(hostsa);
| ^
arch/x86/kvm/svm/svm.c:1526:3: note: did you mean 'svm_prepare_switch_to_guest'?
arch/x86/kvm/svm/svm.c:1506:13: note: 'svm_prepare_switch_to_guest' declared here
1506 | static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
| ^
>> arch/x86/kvm/svm/svm.c:2226:11: error: call to undeclared function 'sev_es_string_io'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
2226 | return sev_es_string_io(svm, size, port, in);
| ^
arch/x86/kvm/svm/svm.c:2226:11: note: did you mean 'kvm_sev_es_string_io'?
arch/x86/kvm/x86.h:537:5: note: 'kvm_sev_es_string_io' declared here
537 | int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
| ^
>> arch/x86/kvm/svm/svm.c:3549:10: error: call to undeclared function 'pre_sev_run'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
3549 | return pre_sev_run(svm, vcpu->cpu);
| ^
arch/x86/kvm/svm/svm.c:3549:10: note: did you mean 'pre_svm_run'?
arch/x86/kvm/svm/svm.c:3532:13: note: 'pre_svm_run' declared here
3532 | static void pre_svm_run(struct kvm_vcpu *vcpu)
| ^
>> arch/x86/kvm/svm/svm.c:3549:3: warning: void function 'pre_svm_run' should not return a value [-Wreturn-type]
3549 | return pre_sev_run(svm, vcpu->cpu);
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/x86/kvm/svm/svm.c:4356:3: error: call to undeclared function 'sev_vcpu_after_set_cpuid'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
4356 | sev_vcpu_after_set_cpuid(svm);
| ^
arch/x86/kvm/svm/svm.c:4356:3: note: did you mean 'svm_vcpu_after_set_cpuid'?
arch/x86/kvm/svm/svm.c:4309:13: note: 'svm_vcpu_after_set_cpuid' declared here
4309 | static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
| ^
>> arch/x86/kvm/svm/svm.c:4883:2: error: call to undeclared function 'sev_vcpu_deliver_sipi_vector'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
4883 | sev_vcpu_deliver_sipi_vector(vcpu, vector);
| ^
arch/x86/kvm/svm/svm.c:4883:2: note: did you mean 'svm_vcpu_deliver_sipi_vector'?
arch/x86/kvm/svm/svm.c:4878:13: note: 'svm_vcpu_deliver_sipi_vector' declared here
4878 | static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
| ^
4879 | {
4880 | if (!sev_es_guest(vcpu->kvm))
4881 | return kvm_vcpu_deliver_sipi_vector(vcpu, vector);
4882 |
4883 | sev_vcpu_deliver_sipi_vector(vcpu, vector);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| svm_vcpu_deliver_sipi_vector
1 warning and 8 errors generated.
vim +/sev_init_vmcb +1367 arch/x86/kvm/svm/svm.c
36e8194dcd749c arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-09-23 1226
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1227 static void init_vmcb(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1228 {
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1229 struct vcpu_svm *svm = to_svm(vcpu);
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1230 struct vmcb *vmcb = svm->vmcb01.ptr;
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1231 struct vmcb_control_area *control = &vmcb->control;
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1232 struct vmcb_save_area *save = &vmcb->save;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1233
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1234 svm_set_intercept(svm, INTERCEPT_CR0_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1235 svm_set_intercept(svm, INTERCEPT_CR3_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1236 svm_set_intercept(svm, INTERCEPT_CR4_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1237 svm_set_intercept(svm, INTERCEPT_CR0_WRITE);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1238 svm_set_intercept(svm, INTERCEPT_CR3_WRITE);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1239 svm_set_intercept(svm, INTERCEPT_CR4_WRITE);
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1240 if (!kvm_vcpu_apicv_active(vcpu))
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1241 svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1242
5315c716b69f47 arch/x86/kvm/svm.c Paolo Bonzini 2014-03-03 1243 set_dr_intercepts(svm);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1244
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1245 set_exception_intercept(svm, PF_VECTOR);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1246 set_exception_intercept(svm, UD_VECTOR);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1247 set_exception_intercept(svm, MC_VECTOR);
54a20552e1eae0 arch/x86/kvm/svm.c Eric Northup 2015-11-03 1248 set_exception_intercept(svm, AC_VECTOR);
cbdb967af3d549 arch/x86/kvm/svm.c Paolo Bonzini 2015-11-10 1249 set_exception_intercept(svm, DB_VECTOR);
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1250 /*
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1251 * Guest access to VMware backdoor ports could legitimately
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1252 * trigger #GP because of TSS I/O permission bitmap.
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1253 * We intercept those #GP and allow access to them anyway
29de732cc95cb5 arch/x86/kvm/svm/svm.c Alexey Kardashevskiy 2023-06-15 1254 * as VMware does.
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1255 */
29de732cc95cb5 arch/x86/kvm/svm/svm.c Alexey Kardashevskiy 2023-06-15 1256 if (enable_vmware_backdoor)
9718420e9fd462 arch/x86/kvm/svm.c Liran Alon 2018-03-12 1257 set_exception_intercept(svm, GP_VECTOR);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1258
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1259 svm_set_intercept(svm, INTERCEPT_INTR);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1260 svm_set_intercept(svm, INTERCEPT_NMI);
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1261
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1262 if (intercept_smi)
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1263 svm_set_intercept(svm, INTERCEPT_SMI);
4b639a9f82fcf1 arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-07-07 1264
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1265 svm_set_intercept(svm, INTERCEPT_SELECTIVE_CR0);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1266 svm_set_intercept(svm, INTERCEPT_RDPMC);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1267 svm_set_intercept(svm, INTERCEPT_CPUID);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1268 svm_set_intercept(svm, INTERCEPT_INVD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1269 svm_set_intercept(svm, INTERCEPT_INVLPG);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1270 svm_set_intercept(svm, INTERCEPT_INVLPGA);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1271 svm_set_intercept(svm, INTERCEPT_IOIO_PROT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1272 svm_set_intercept(svm, INTERCEPT_MSR_PROT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1273 svm_set_intercept(svm, INTERCEPT_TASK_SWITCH);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1274 svm_set_intercept(svm, INTERCEPT_SHUTDOWN);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1275 svm_set_intercept(svm, INTERCEPT_VMRUN);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1276 svm_set_intercept(svm, INTERCEPT_VMMCALL);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1277 svm_set_intercept(svm, INTERCEPT_VMLOAD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1278 svm_set_intercept(svm, INTERCEPT_VMSAVE);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1279 svm_set_intercept(svm, INTERCEPT_STGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1280 svm_set_intercept(svm, INTERCEPT_CLGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1281 svm_set_intercept(svm, INTERCEPT_SKINIT);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1282 svm_set_intercept(svm, INTERCEPT_WBINVD);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1283 svm_set_intercept(svm, INTERCEPT_XSETBV);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1284 svm_set_intercept(svm, INTERCEPT_RDPRU);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1285 svm_set_intercept(svm, INTERCEPT_RSM);
668fffa3f838ed arch/x86/kvm/svm.c Michael S. Tsirkin 2017-04-21 1286
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1287 if (!kvm_mwait_in_guest(vcpu->kvm)) {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1288 svm_set_intercept(svm, INTERCEPT_MONITOR);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1289 svm_set_intercept(svm, INTERCEPT_MWAIT);
668fffa3f838ed arch/x86/kvm/svm.c Michael S. Tsirkin 2017-04-21 1290 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1291
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1292 if (!kvm_hlt_in_guest(vcpu->kvm))
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1293 svm_set_intercept(svm, INTERCEPT_HLT);
caa057a2cad647 arch/x86/kvm/svm.c Wanpeng Li 2018-03-12 1294
d0ec49d4de9080 arch/x86/kvm/svm.c Tom Lendacky 2017-07-17 1295 control->iopm_base_pa = __sme_set(iopm_base);
d0ec49d4de9080 arch/x86/kvm/svm.c Tom Lendacky 2017-07-17 1296 control->msrpm_base_pa = __sme_set(__pa(svm->msrpm));
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1297 control->int_ctl = V_INTR_MASKING_MASK;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1298
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1299 init_seg(&save->es);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1300 init_seg(&save->ss);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1301 init_seg(&save->ds);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1302 init_seg(&save->fs);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1303 init_seg(&save->gs);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1304
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1305 save->cs.selector = 0xf000;
04b66839d312d3 arch/x86/kvm/svm.c Paolo Bonzini 2013-03-19 1306 save->cs.base = 0xffff0000;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1307 /* Executable/Readable Code Segment */
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1308 save->cs.attrib = SVM_SELECTOR_READ_MASK | SVM_SELECTOR_P_MASK |
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1309 SVM_SELECTOR_S_MASK | SVM_SELECTOR_CODE_MASK;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1310 save->cs.limit = 0xffff;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1311
4f117ce4aefca0 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-07-13 1312 save->gdtr.base = 0;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1313 save->gdtr.limit = 0xffff;
4f117ce4aefca0 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-07-13 1314 save->idtr.base = 0;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1315 save->idtr.limit = 0xffff;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1316
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1317 init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1318 init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1319
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1320 if (npt_enabled) {
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1321 /* Setup VMCB for Nested Paging */
cea3a19b007a69 arch/x86/kvm/svm.c Tom Lendacky 2017-12-04 1322 control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1323 svm_clr_intercept(svm, INTERCEPT_INVLPG);
18c918c5f59bc3 arch/x86/kvm/svm.c Joerg Roedel 2010-11-30 1324 clr_exception_intercept(svm, PF_VECTOR);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1325 svm_clr_intercept(svm, INTERCEPT_CR3_READ);
830bd71f2c0684 arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1326 svm_clr_intercept(svm, INTERCEPT_CR3_WRITE);
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1327 save->g_pat = vcpu->arch.pat;
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1328 save->cr3 = 0;
709ddebf81cb40 arch/x86/kvm/svm.c Joerg Roedel 2008-02-07 1329 }
193015adf40d04 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-12 1330 svm->current_vmcb->asid_generation = 0;
7e8e6eed75e290 arch/x86/kvm/svm/svm.c Cathy Avery 2020-10-11 1331 svm->asid = 0;
1371d90460189d arch/x86/kvm/svm.c Alexander Graf 2008-11-25 1332
c74ad08f3333db arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-05-03 1333 svm->nested.vmcb12_gpa = INVALID_GPA;
c74ad08f3333db arch/x86/kvm/svm/svm.c Maxim Levitsky 2021-05-03 1334 svm->nested.last_vmcb12_gpa = INVALID_GPA;
2af9194d1b683f arch/x86/kvm/svm.c Joerg Roedel 2009-08-07 1335
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1336 if (!kvm_pause_in_guest(vcpu->kvm)) {
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1337 control->pause_filter_count = pause_filter_count;
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1338 if (pause_filter_thresh)
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1339 control->pause_filter_thresh = pause_filter_thresh;
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1340 svm_set_intercept(svm, INTERCEPT_PAUSE);
8566ac8b8e7cac arch/x86/kvm/svm.c Babu Moger 2018-03-16 1341 } else {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1342 svm_clr_intercept(svm, INTERCEPT_PAUSE);
565d0998ecac83 arch/x86/kvm/svm.c Mark Langsdorf 2009-10-06 1343 }
565d0998ecac83 arch/x86/kvm/svm.c Mark Langsdorf 2009-10-06 1344
3b195ac9260235 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-05-04 1345 svm_recalc_instruction_intercepts(vcpu, svm);
4407a797e9412a arch/x86/kvm/svm/svm.c Babu Moger 2020-09-11 1346
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1347 /*
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1348 * If the host supports V_SPEC_CTRL then disable the interception
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1349 * of MSR_IA32_SPEC_CTRL.
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1350 */
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1351 if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1352 set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
d00b99c514b33a arch/x86/kvm/svm/svm.c Babu Moger 2021-02-17 1353
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1354 if (kvm_vcpu_apicv_active(vcpu))
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1355 avic_init_vmcb(svm, vmcb);
89c8a4984fc98e arch/x86/kvm/svm.c Janakarajan Natarajan 2017-07-06 1356
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1357 if (vnmi)
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1358 svm->vmcb->control.int_ctl |= V_NMI_ENABLE_MASK;
fa4c027a7956f5 arch/x86/kvm/svm/svm.c Santosh Shukla 2023-02-27 1359
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1360 if (vgif) {
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1361 svm_clr_intercept(svm, INTERCEPT_STGI);
a284ba56a0a4b5 arch/x86/kvm/svm/svm.c Joerg Roedel 2020-06-25 1362 svm_clr_intercept(svm, INTERCEPT_CLGI);
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1363 svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK;
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1364 }
640bd6e5752274 arch/x86/kvm/svm.c Janakarajan Natarajan 2017-08-23 1365
6defa24d3b12bb arch/x86/kvm/svm/svm.c Peter Gonda 2022-06-23 1366 if (sev_guest(vcpu->kvm))
6defa24d3b12bb arch/x86/kvm/svm/svm.c Peter Gonda 2022-06-23 @1367 sev_init_vmcb(svm);
1654efcbc431a3 arch/x86/kvm/svm.c Brijesh Singh 2017-12-04 1368
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1369 svm_hv_init_vmcb(vmcb);
36e8194dcd749c arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-09-23 1370 init_vmcb_after_set_cpuid(vcpu);
1e0c7d40758bcd arch/x86/kvm/svm/svm.c Vineeth Pillai 2021-06-03 1371
1ee73a332f80dd arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-03-22 1372 vmcb_mark_all_dirty(vmcb);
8d28fec406e4d5 arch/x86/kvm/svm.c Roedel, Joerg 2010-12-03 1373
2af9194d1b683f arch/x86/kvm/svm.c Joerg Roedel 2009-08-07 1374 enable_gif(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1375 }
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1376
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1377 static void __svm_vcpu_reset(struct kvm_vcpu *vcpu)
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1378 {
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1379 struct vcpu_svm *svm = to_svm(vcpu);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1380
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1381 svm_vcpu_init_msrpm(vcpu, svm->msrpm);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1382
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1383 svm_init_osvw(vcpu);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1384 vcpu->arch.microcode_version = 0x01000065;
938c8745bcf2f7 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-05-24 1385 svm->tsc_ratio_msr = kvm_caps.default_tsc_scaling_ratio;
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1386
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1387 svm->nmi_masked = false;
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1388 svm->awaiting_iret_completion = false;
916b54a7688b0b arch/x86/kvm/svm/svm.c Maxim Levitsky 2023-01-30 1389
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1390 if (sev_es_guest(vcpu->kvm))
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 @1391 sev_es_vcpu_reset(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1392 }
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1393
d28bc9dd25ce02 arch/x86/kvm/svm.c Nadav Amit 2015-04-13 1394 static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1395 {
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1396 struct vcpu_svm *svm = to_svm(vcpu);
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1397
b2ac58f90540e3 arch/x86/kvm/svm.c KarimAllah Ahmed 2018-02-03 1398 svm->spec_ctrl = 0;
ccbcd2674472a9 arch/x86/kvm/svm.c Thomas Gleixner 2018-05-09 1399 svm->virt_spec_ctrl = 0;
b2ac58f90540e3 arch/x86/kvm/svm.c KarimAllah Ahmed 2018-02-03 1400
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1401 init_vmcb(vcpu);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1402
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1403 if (!init_event)
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1404 __svm_vcpu_reset(vcpu);
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1405 }
04d2cc7780d48a drivers/kvm/svm.c Avi Kivity 2007-09-10 1406
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1407 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb)
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1408 {
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1409 svm->current_vmcb = target_vmcb;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1410 svm->vmcb = target_vmcb->ptr;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1411 }
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1412
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1413 static int svm_vcpu_create(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1414 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1415 struct vcpu_svm *svm;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1416 struct page *vmcb01_page;
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1417 struct page *vmsa_page = NULL;
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1418 int err;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1419
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1420 BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1421 svm = to_svm(vcpu);
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1422
fb3f0f51d92d14 drivers/kvm/svm.c Rusty Russell 2007-07-27 1423 err = -ENOMEM;
75253db41a467a arch/x86/kvm/svm/svm.c Brijesh Singh 2024-01-25 1424 vmcb01_page = snp_safe_alloc_page(vcpu);
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1425 if (!vmcb01_page)
987b2594ed5d12 arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1426 goto out;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1427
63129754178c55 arch/x86/kvm/svm/svm.c Paolo Bonzini 2021-03-02 1428 if (sev_es_guest(vcpu->kvm)) {
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1429 /*
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1430 * SEV-ES guests require a separate VMSA page used to contain
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1431 * the encrypted register state of the guest.
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1432 */
75253db41a467a arch/x86/kvm/svm/svm.c Brijesh Singh 2024-01-25 1433 vmsa_page = snp_safe_alloc_page(vcpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1434 if (!vmsa_page)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1435 goto error_free_vmcb_page;
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1436
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1437 /*
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1438 * SEV-ES guests maintain an encrypted version of their FPU
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1439 * state which is restored and saved on VMRUN and VMEXIT.
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1440 * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1441 * do xsave/xrstor on it.
ed02b213098a90 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1442 */
d69c1382e1b73a arch/x86/kvm/svm/svm.c Thomas Gleixner 2021-10-22 1443 fpstate_set_confidential(&vcpu->arch.guest_fpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1444 }
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1445
dfa20099e26e35 arch/x86/kvm/svm.c Suravee Suthikulpanit 2017-09-12 1446 err = avic_init_vcpu(svm);
44a95dae1d229a arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1447 if (err)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1448 goto error_free_vmsa_page;
411b44ba80ab00 arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-08-23 1449
476c9bd8e997b4 arch/x86/kvm/svm/svm.c Aaron Lewis 2020-09-25 1450 svm->msrpm = svm_vcpu_alloc_msrpm();
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1451 if (!svm->msrpm) {
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1452 err = -ENOMEM;
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1453 goto error_free_vmsa_page;
054409ab253d9f arch/x86/kvm/svm/svm.c Chen Zhou 2020-11-17 1454 }
b286d5d8b0836e arch/x86/kvm/svm.c Alexander Graf 2008-11-25 1455
091abbf578f926 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-05-19 1456 svm->x2avic_msrs_intercepted = true;
091abbf578f926 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-05-19 1457
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1458 svm->vmcb01.ptr = page_address(vmcb01_page);
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1459 svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
9ebe530b9f5da8 arch/x86/kvm/svm/svm.c Sean Christopherson 2021-09-20 1460 svm_switch_vmcb(svm, &svm->vmcb01);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1461
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1462 if (vmsa_page)
b67a4cc35c9f72 arch/x86/kvm/svm/svm.c Peter Gonda 2021-10-21 1463 svm->sev_es.vmsa = page_address(vmsa_page);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1464
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1465 svm->guest_state_loaded = false;
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1466
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1467 return 0;
36241b8c7cbcc8 drivers/kvm/svm.c Avi Kivity 2006-12-22 1468
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1469 error_free_vmsa_page:
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1470 if (vmsa_page)
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1471 __free_page(vmsa_page);
8d22b90e942c26 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-08-27 1472 error_free_vmcb_page:
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1473 __free_page(vmcb01_page);
987b2594ed5d12 arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1474 out:
a9dd6f09d7e54d arch/x86/kvm/svm.c Sean Christopherson 2019-12-18 1475 return err;
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1476 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1477
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1478 static void svm_clear_current_vmcb(struct vmcb *vmcb)
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1479 {
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1480 int i;
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1481
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1482 for_each_online_cpu(i)
73412dfeea724e arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-09 1483 cmpxchg(per_cpu_ptr(&svm_data.current_vmcb, i), vmcb, NULL);
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1484 }
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1485
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1486 static void svm_vcpu_free(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1487 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1488 struct vcpu_svm *svm = to_svm(vcpu);
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1489
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1490 /*
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1491 * The vmcb page can be recycled, causing a false negative in
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1492 * svm_vcpu_load(). So, ensure that no logical CPU has this
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1493 * vmcb page recorded as its current vmcb.
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1494 */
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1495 svm_clear_current_vmcb(svm->vmcb);
fd65d3142f734b arch/x86/kvm/svm.c Jim Mattson 2018-05-22 1496
917401f26a6af5 arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-11-03 1497 svm_leave_nested(vcpu);
2fcf4876ada8a2 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-10-01 1498 svm_free_nested(svm);
2fcf4876ada8a2 arch/x86/kvm/svm/svm.c Maxim Levitsky 2020-10-01 1499
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1500 sev_free_vcpu(vcpu);
add5e2f0454145 arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1501
4995a3685f1b76 arch/x86/kvm/svm/svm.c Cathy Avery 2021-01-13 1502 __free_page(pfn_to_page(__sme_clr(svm->vmcb01.pa) >> PAGE_SHIFT));
47903dc10e7ebb arch/x86/kvm/svm/svm.c Krish Sadhukhan 2021-04-12 1503 __free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE));
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1504 }
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1505
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 1506 static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
6aa8b732ca01c3 drivers/kvm/svm.c Avi Kivity 2006-12-10 1507 {
a2fa3e9f52d875 drivers/kvm/svm.c Gregory Haskins 2007-07-27 1508 struct vcpu_svm *svm = to_svm(vcpu);
73412dfeea724e arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-09 1509 struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
0cc5064d335543 drivers/kvm/svm.c Avi Kivity 2007-03-25 1510
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 1511 if (sev_es_guest(vcpu->kvm))
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 @1512 sev_es_unmap_ghcb(svm);
ce7ea0cfdc2e9f arch/x86/kvm/svm/svm.c Tom Lendacky 2021-05-06 1513
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1514 if (svm->guest_state_loaded)
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1515 return;
94dfbdb3894eda drivers/kvm/svm.c Anthony Liguori 2007-04-29 1516
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1517 /*
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1518 * Save additional host state that will be restored on VMEXIT (sev-es)
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1519 * or subsequent vmload of host save area.
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1520 */
e287bd005ad9d8 arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-11-07 1521 vmsave(sd->save_area_pa);
068f7ea61895ff arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-01-25 1522 if (sev_es_guest(vcpu->kvm)) {
3dd2775b74c9b1 arch/x86/kvm/svm/svm.c Tom Lendacky 2022-04-05 1523 struct sev_es_save_area *hostsa;
3dd2775b74c9b1 arch/x86/kvm/svm/svm.c Tom Lendacky 2022-04-05 1524 hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400);
068f7ea61895ff arch/x86/kvm/svm/svm.c Paolo Bonzini 2022-01-25 1525
23e5092b6e2ad1 arch/x86/kvm/svm/svm.c Sean Christopherson 2022-01-28 @1526 sev_es_prepare_switch_to_guest(hostsa);
861377730aa9db arch/x86/kvm/svm/svm.c Tom Lendacky 2020-12-10 1527 }
fbc0db76b77125 arch/x86/kvm/svm.c Joerg Roedel 2011-03-25 1528
11d39e8cc43e1c arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-06-06 1529 if (tsc_scaling)
11d39e8cc43e1c arch/x86/kvm/svm/svm.c Maxim Levitsky 2022-06-06 1530 __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1531
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1532 /*
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1533 * TSC_AUX is always virtualized for SEV-ES guests when the feature is
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1534 * available. The user return MSR support is not required in this case
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1535 * because TSC_AUX is restored on #VMEXIT from the host save area
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1536 * (which has been initialized in svm_hardware_enable()).
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1537 */
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1538 if (likely(tsc_aux_uret_slot >= 0) &&
916e3e5f26abc1 arch/x86/kvm/svm/svm.c Tom Lendacky 2023-09-15 1539 (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
0caa0a77c2f6fc arch/x86/kvm/svm/svm.c Sean Christopherson 2021-05-04 1540 kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
8221c13700561b arch/x86/kvm/svm.c Suravee Suthikulpanit 2016-05-04 1541
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1542 svm->guest_state_loaded = true;
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1543 }
a7fc06dd2f14f8 arch/x86/kvm/svm/svm.c Michael Roth 2021-02-02 1544
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
2024-03-19 13:42 ` Michael Roth
2024-03-19 20:07 ` Dave Hansen
@ 2024-03-24 23:39 ` Michael Roth
2024-04-04 11:53 ` Paolo Bonzini
2 siblings, 1 reply; 29+ messages in thread
From: Michael Roth @ 2024-03-24 23:39 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm, isaku.yamahata, seanjc, Dave Hansen
On Mon, Mar 18, 2024 at 07:33:46PM -0400, Paolo Bonzini wrote:
> SEV-ES allows passing custom contents for x87, SSE and AVX state into the VMSA.
> Allow userspace to do that with the usual KVM_SET_XSAVE API and only mark
> FPU contents as confidential after it has been copied and encrypted into
> the VMSA.
>
> Since the XSAVE state for AVX is the first, it does not need the
> compacted-state handling of get_xsave_addr(). However, there are other
> parts of XSAVE state in the VMSA that currently are not handled, and
> the validation logic of get_xsave_addr() is pointless to duplicate
> in KVM, so move get_xsave_addr() to public FPU API; it is really just
> a facility to operate on XSAVE state and does not expose any internal
> details of arch/x86/kernel/fpu.
>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/fpu/api.h | 3 +++
> arch/x86/kernel/fpu/xstate.h | 2 --
> arch/x86/kvm/svm/sev.c | 36 ++++++++++++++++++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 8 --------
> 4 files changed, 39 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index a8300646a280..800e836a69fb 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> + xsave = &vcpu->arch.guest_fpu.fpstate->regs.xsave;
> + save->x87_dp = xsave->i387.rdp;
> + save->mxcsr = xsave->i387.mxcsr;
> + save->x87_ftw = xsave->i387.twd;
> + save->x87_fsw = xsave->i387.swd;
> + save->x87_fcw = xsave->i387.cwd;
> + save->x87_fop = xsave->i387.fop;
> + save->x87_ds = 0;
> + save->x87_cs = 0;
> + save->x87_rip = xsave->i387.rip;
> +
> + for (i = 0; i < 8; i++) {
> + d = save->fpreg_x87 + i * 10;
> + s = ((u8 *)xsave->i387.st_space) + i * 16;
> + memcpy(d, s, 10);
> + }
> + memcpy(save->fpreg_xmm, xsave->i387.xmm_space, 256);
> +
> + s = get_xsave_addr(xsave, XFEATURE_YMM);
> + if (s)
> + memcpy(save->fpreg_ymm, s, 256);
> + else
> + memset(save->fpreg_ymm, 0, 256);
> +
> pr_debug("Virtual Machine Save Area (VMSA):\n");
> print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
>
> @@ -657,6 +686,13 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
> if (ret)
> return ret;
>
> + /*
> + * SEV-ES guests maintain an encrypted version of their FPU
> + * state which is restored and saved on VMRUN and VMEXIT.
> + * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
> + * do xsave/xrstor on it.
> + */
> + fpstate_set_confidential(&vcpu->arch.guest_fpu);
> vcpu->arch.guest_state_protected = true;
> return 0;
> }
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index c22e87ebf0de..03108055a7b0 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1433,14 +1433,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
> vmsa_page = snp_safe_alloc_page(vcpu);
> if (!vmsa_page)
> goto error_free_vmcb_page;
> -
> - /*
> - * SEV-ES guests maintain an encrypted version of their FPU
> - * state which is restored and saved on VMRUN and VMEXIT.
> - * Mark vcpu->arch.guest_fpu->fpstate as scratch so it won't
> - * do xsave/xrstor on it.
> - */
> - fpstate_set_confidential(&vcpu->arch.guest_fpu);
There may be userspaces that previously relied on KVM_SET_XSAVE
being silently ignored when calculating the expected VMSA measurement.
Granted, that's sort of buggy behavior on the part of userspace, but QEMU
for instance does this. In that case, it just so happens that QEMU's reset
values don't appear to affect the VMSA measurement/contents, but there may
be userspaces where it would.
To avoid this, and to have parity with the other interfaces where the new
behavior is gated on the new vm_type/KVM_SEV_INIT2 stuff (via
has_protected_state), maybe we should limit XSAVE/FPU syncing to
has_protected_state as well?
-Mike
> }
>
> err = avic_init_vcpu(svm);
> --
> 2.43.0
>
>
* Re: [PATCH v4 05/15] KVM: SEV: publish supported VMSA features
2024-03-18 23:33 ` [PATCH v4 05/15] KVM: SEV: publish supported VMSA features Paolo Bonzini
@ 2024-03-25 23:59 ` Isaku Yamahata
2024-04-04 11:46 ` Paolo Bonzini
0 siblings, 1 reply; 29+ messages in thread
From: Isaku Yamahata @ 2024-03-25 23:59 UTC (permalink / raw)
To: Paolo Bonzini
Cc: linux-kernel, kvm, michael.roth, isaku.yamahata, seanjc,
isaku.yamahata, rick.p.edgecombe, xiaoyao.li, kai.huang
On Mon, Mar 18, 2024 at 07:33:42PM -0400,
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Compute the set of features to be stored in the VMSA when KVM is
> initialized; move it from there into kvm_sev_info when SEV is initialized,
> and then into the initial VMSA.
>
> The new variable can then be used to return the set of supported features
> to userspace, via the KVM_GET_DEVICE_ATTR ioctl.
Hi. The current TDX KVM introduces KVM_TDX_CAPABILITIES and struct
kvm_tdx_capabilities for feature enumeration. I'm wondering if TDX should also
use/switch to KVM_GET_DEVICE_ATTR with its own group. What do you think?
Something like
#define KVM_DEVICE_ATTR_GROUP_SEV 1
#define KVM_X86_SEV_VMSA_FEATURES 1
#define KVM_X86_SEV_xxx ...
#define KVM_DEVICE_ATTR_GROUP_TDX 2
#define KVM_X86_TDX_xxx ...
Thanks,
--
Isaku Yamahata <isaku.yamahata@intel.com>
* Re: [PATCH v4 05/15] KVM: SEV: publish supported VMSA features
2024-03-25 23:59 ` Isaku Yamahata
@ 2024-04-04 11:46 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-04-04 11:46 UTC (permalink / raw)
To: Isaku Yamahata
Cc: linux-kernel, kvm, michael.roth, seanjc, isaku.yamahata,
rick.p.edgecombe, xiaoyao.li, kai.huang
On Tue, Mar 26, 2024 at 1:04 AM Isaku Yamahata <isaku.yamahata@intel.com> wrote:
>
> On Mon, Mar 18, 2024 at 07:33:42PM -0400,
> Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> > Compute the set of features to be stored in the VMSA when KVM is
> > initialized; move it from there into kvm_sev_info when SEV is initialized,
> > and then into the initial VMSA.
> >
> > The new variable can then be used to return the set of supported features
> > to userspace, via the KVM_GET_DEVICE_ATTR ioctl.
>
> Hi. The current TDX KVM introduces KVM_TDX_CAPABILITIES and struct
> kvm_tdx_capabilities for feature enumeration. I'm wondering if TDX should also
> use/switch to KVM_GET_DEVICE_ATTR with its own group. What do you think?
> Something like
>
> #define KVM_DEVICE_ATTR_GROUP_SEV 1
> #define KVM_X86_SEV_VMSA_FEATURES 1
> #define KVM_X86_SEV_xxx ...
>
> #define KVM_DEVICE_ATTR_GROUP_TDX 2
> #define KVM_X86_TDX_xxx ...
Yes, that's a very good idea. I've added the group argument in v5.
Paolo
* Re: [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time
2024-03-24 23:39 ` Michael Roth
@ 2024-04-04 11:53 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2024-04-04 11:53 UTC (permalink / raw)
To: Michael Roth; +Cc: linux-kernel, kvm, isaku.yamahata, seanjc, Dave Hansen
On Mon, Mar 25, 2024 at 12:43 AM Michael Roth <michael.roth@amd.com> wrote:
> There may be userspaces that previously relied on KVM_SET_XSAVE
> being silently ignored when calculating the expected VMSA measurement.
> Granted, that's sort of buggy behavior on the part of userspace, but QEMU
> for instance does this. In that case, it just so happens that QEMU's reset
> values don't appear to affect the VMSA measurement/contents, but there may
> be userspaces where it would.
>
> To avoid this, and to have parity with the other interfaces where the new
> behavior is gated on the new vm_type/KVM_SEV_INIT2 stuff (via
> has_protected_state), maybe we should limit XSAVE/FPU syncing to
> has_protected_state as well?
Yes, in particular I am kinda surprised that MXCSR (whose default
value after reset is 0x1F80) does not affect the measurement.
Paolo
Thread overview: 29+ messages
2024-03-18 23:33 [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 01/15] KVM: SVM: Invert handling of SEV and SEV_ES feature flags Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 02/15] KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y Paolo Bonzini
2024-03-19 22:55 ` kernel test robot
2024-03-20 8:26 ` kernel test robot
2024-03-18 23:33 ` [PATCH v4 03/15] KVM: x86: use u64_to_user_addr() Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 04/15] KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 05/15] KVM: SEV: publish supported VMSA features Paolo Bonzini
2024-03-25 23:59 ` Isaku Yamahata
2024-04-04 11:46 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 06/15] KVM: SEV: store VMSA features in kvm_sev_info Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 07/15] KVM: x86: add fields to struct kvm_arch for CoCo features Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 08/15] KVM: x86: Add supported_vm_types to kvm_caps Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 09/15] KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
2024-03-19 13:42 ` Michael Roth
2024-03-19 19:47 ` Paolo Bonzini
2024-03-19 20:07 ` Dave Hansen
2024-03-24 23:39 ` Michael Roth
2024-04-04 11:53 ` Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 10/15] KVM: SEV: introduce to_kvm_sev_info Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 11/15] KVM: SEV: define VM types for SEV and SEV-ES Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 12/15] KVM: SEV: introduce KVM_SEV_INIT2 operation Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 13/15] KVM: SEV: allow SEV-ES DebugSwap again Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 14/15] selftests: kvm: add tests for KVM_SEV_INIT2 Paolo Bonzini
2024-03-18 23:33 ` [PATCH v4 15/15] selftests: kvm: switch to using KVM_X86_*_VM Paolo Bonzini
2024-03-19 2:20 ` [PATCH v4 00/15] KVM: SEV: allow customizing VMSA features Michael Roth
2024-03-19 19:43 ` [PATCH v4 16/15] fixup! KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Paolo Bonzini
2024-03-19 19:43 ` [PATCH v4 17/15] selftests: kvm: split "launch" phase of SEV VM creation Paolo Bonzini
2024-03-19 19:43 ` [PATCH v4 18/15] selftests: kvm: add test for transferring FPU state into the VMSA Paolo Bonzini