linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 0/7] KVM: SVM: Add initial GHCB protocol version 2 support
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Joerg Roedel <jroedel@suse.de>

Hi,

here is a small set of patches, originally taken from the pending
SEV-SNP patch sets, to enable basic support for GHCB protocol
version 2. In the meantime, a couple of other patches from Sean
Christopherson have been added as well.

When SEV-SNP is not supported, only two new MSR protocol VMGEXIT calls
are required:

	- MSR-based AP-reset-hold
	- MSR-based HV-feature-request

These calls are implemented here and then the protocol is lifted to
version 2.

This is submitted separately because the MSR-based AP-reset-hold call
is required to support kexec/kdump in SEV-ES guests.
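
Both calls follow the same request/response pattern over the GHCB MSR.
A rough guest-side sketch (the wr/rd helpers and VMGEXIT() are
illustrative stand-ins for the guest's existing GHCB MSR accessors):

	/* Issue one MSR-protocol VMGEXIT call and return the response. */
	static u64 ghcb_msr_protocol_call(u64 request)
	{
		sev_es_wr_ghcb_msr(request);	/* request -> GHCB MSR   */
		VMGEXIT();			/* exit to the hypervisor */
		return sev_es_rd_ghcb_msr();	/* response <- GHCB MSR  */
	}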

The previous version can be found here:

	https://lore.kernel.org/kvm/20211020124416.24523-1-joro@8bytes.org/

Regards,

	Joerg

Changes v5->v6:

	- Rebased to v5.17-rc1
	- Added changes requested by Sean Christopherson

Brijesh Singh (2):
  KVM: SVM: Add support for Hypervisor Feature support MSR protocol
  KVM: SVM: Increase supported GHCB protocol version

Joerg Roedel (2):
  KVM: SVM: Get rid of set_ghcb_msr() and *ghcb_msr_bits() functions
  KVM: SVM: Move kvm_emulate_ap_reset_hold() to AMD specific code

Sean Christopherson (2):
  KVM: SVM: Add helper to generate GHCB MSR version info, and drop macro
  KVM: SVM: Set "released" on INIT-SIPI iff SEV-ES vCPU was in AP reset
    hold

Tom Lendacky (1):
  KVM: SVM: Add support to handle AP reset MSR protocol

 arch/x86/include/asm/kvm_host.h   |   2 +-
 arch/x86/include/asm/sev-common.h |  18 ++--
 arch/x86/include/uapi/asm/svm.h   |   1 +
 arch/x86/kvm/svm/sev.c            | 169 ++++++++++++++++++++----------
 arch/x86/kvm/svm/svm.c            |  13 ++-
 arch/x86/kvm/svm/svm.h            |  10 +-
 arch/x86/kvm/x86.c                |  12 +--
 7 files changed, 137 insertions(+), 88 deletions(-)


base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
-- 
2.34.1



* [PATCH v6 1/7] KVM: SVM: Get rid of set_ghcb_msr() and *ghcb_msr_bits() functions
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Joerg Roedel <jroedel@suse.de>

Replace the get_ghcb_msr_bits() function with macros, and open-code the
GHCB MSR setters with hypercall-specific helper macros and functions.
This avoids carrying over any previous bits in the GHCB MSR and
improves code readability.

Also get rid of the set_ghcb_msr() function and open-code it at its
call sites, again for better readability.
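
As an example of the new decode pattern (illustrative request value,
assuming the GHCB_MSR_CPUID_* field layout from sev-common.h: function
in GHCBData[63:32], register index in GHCBData[31:30]):

	/* CPUID request for fn 0x8000001f, register index 1 (RBX) */
	u64 msr = (0x8000001fULL << 32) | (1ULL << 30) | GHCB_MSR_CPUID_REQ;

	u64 fn  = GHCB_MSR_CPUID_FN(msr);	/* == 0x8000001f */
	u64 reg = GHCB_MSR_CPUID_REG(msr);	/* == 1, i.e. RBX */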

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/sev-common.h |  9 +++++
 arch/x86/kvm/svm/sev.c            | 55 +++++++++++--------------------
 2 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 1b2fd32b42fe..d49ebec1252a 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -53,6 +53,11 @@
 	/* GHCBData[63:32] */				\
 	(((unsigned long)fn) << 32))
 
+#define GHCB_MSR_CPUID_FN(msr)		\
+	(((msr) >> GHCB_MSR_CPUID_FUNC_POS) & GHCB_MSR_CPUID_FUNC_MASK)
+#define GHCB_MSR_CPUID_REG(msr)		\
+	(((msr) >> GHCB_MSR_CPUID_REG_POS) & GHCB_MSR_CPUID_REG_MASK)
+
 /* AP Reset Hold */
 #define GHCB_MSR_AP_RESET_HOLD_REQ	0x006
 #define GHCB_MSR_AP_RESET_HOLD_RESP	0x007
@@ -72,6 +77,10 @@
 	(((((u64)reason_set) &  0xf) << 12) |		\
 	 /* GHCBData[23:16] */				\
 	((((u64)reason_val) & 0xff) << 16))
+#define GHCB_MSR_TERM_REASON_SET(msr)	\
+	(((msr) >> GHCB_MSR_TERM_REASON_SET_POS) & GHCB_MSR_TERM_REASON_SET_MASK)
+#define GHCB_MSR_TERM_REASON(msr)	\
+	(((msr) >> GHCB_MSR_TERM_REASON_POS) & GHCB_MSR_TERM_REASON_MASK)
 
 #define GHCB_SEV_ES_GEN_REQ		0
 #define GHCB_SEV_ES_PROT_UNSUPPORTED	1
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6a22798eaaee..7632fc463458 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2625,21 +2625,15 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
 	return false;
 }
 
-static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
-			      unsigned int pos)
+static u64 ghcb_msr_cpuid_resp(u64 reg, u64 value)
 {
-	svm->vmcb->control.ghcb_gpa &= ~(mask << pos);
-	svm->vmcb->control.ghcb_gpa |= (value & mask) << pos;
-}
+	u64 msr;
 
-static u64 get_ghcb_msr_bits(struct vcpu_svm *svm, u64 mask, unsigned int pos)
-{
-	return (svm->vmcb->control.ghcb_gpa >> pos) & mask;
-}
+	msr  = GHCB_MSR_CPUID_RESP;
+	msr |= (reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS;
+	msr |= (value & GHCB_MSR_CPUID_VALUE_MASK) << GHCB_MSR_CPUID_VALUE_POS;
 
-static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
-{
-	svm->vmcb->control.ghcb_gpa = value;
+	return msr;
 }
 
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
@@ -2656,16 +2650,14 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 
 	switch (ghcb_info) {
 	case GHCB_MSR_SEV_INFO_REQ:
-		set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
-						    GHCB_VERSION_MIN,
-						    sev_enc_bit));
+		svm->vmcb->control.ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
+								GHCB_VERSION_MIN,
+								sev_enc_bit);
 		break;
 	case GHCB_MSR_CPUID_REQ: {
 		u64 cpuid_fn, cpuid_reg, cpuid_value;
 
-		cpuid_fn = get_ghcb_msr_bits(svm,
-					     GHCB_MSR_CPUID_FUNC_MASK,
-					     GHCB_MSR_CPUID_FUNC_POS);
+		cpuid_fn = GHCB_MSR_CPUID_FN(control->ghcb_gpa);
 
 		/* Initialize the registers needed by the CPUID intercept */
 		vcpu->arch.regs[VCPU_REGS_RAX] = cpuid_fn;
@@ -2677,9 +2669,8 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 			break;
 		}
 
-		cpuid_reg = get_ghcb_msr_bits(svm,
-					      GHCB_MSR_CPUID_REG_MASK,
-					      GHCB_MSR_CPUID_REG_POS);
+		cpuid_reg = GHCB_MSR_CPUID_REG(control->ghcb_gpa);
+
 		if (cpuid_reg == 0)
 			cpuid_value = vcpu->arch.regs[VCPU_REGS_RAX];
 		else if (cpuid_reg == 1)
@@ -2689,24 +2680,16 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 		else
 			cpuid_value = vcpu->arch.regs[VCPU_REGS_RDX];
 
-		set_ghcb_msr_bits(svm, cpuid_value,
-				  GHCB_MSR_CPUID_VALUE_MASK,
-				  GHCB_MSR_CPUID_VALUE_POS);
+		svm->vmcb->control.ghcb_gpa = ghcb_msr_cpuid_resp(cpuid_reg, cpuid_value);
 
-		set_ghcb_msr_bits(svm, GHCB_MSR_CPUID_RESP,
-				  GHCB_MSR_INFO_MASK,
-				  GHCB_MSR_INFO_POS);
 		break;
 	}
 	case GHCB_MSR_TERM_REQ: {
 		u64 reason_set, reason_code;
 
-		reason_set = get_ghcb_msr_bits(svm,
-					       GHCB_MSR_TERM_REASON_SET_MASK,
-					       GHCB_MSR_TERM_REASON_SET_POS);
-		reason_code = get_ghcb_msr_bits(svm,
-						GHCB_MSR_TERM_REASON_MASK,
-						GHCB_MSR_TERM_REASON_POS);
+		reason_set  = GHCB_MSR_TERM_REASON_SET(control->ghcb_gpa);
+		reason_code = GHCB_MSR_TERM_REASON(control->ghcb_gpa);
+
 		pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
 			reason_set, reason_code);
 
@@ -2897,9 +2880,9 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 	 * Set the GHCB MSR value as per the GHCB specification when emulating
 	 * vCPU RESET for an SEV-ES guest.
 	 */
-	set_ghcb_msr(svm, GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
-					    GHCB_VERSION_MIN,
-					    sev_enc_bit));
+	svm->vmcb->control.ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
+							GHCB_VERSION_MIN,
+							sev_enc_bit);
 }
 
 void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu)
-- 
2.34.1



* [PATCH v6 2/7] KVM: SVM: Add helper to generate GHCB MSR version info, and drop macro
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Sean Christopherson <seanjc@google.com>

Convert the GHCB_MSR_SEV_INFO macro into a helper function, and have the
helper hardcode the min/max versions instead of relying on the caller to
do the same.  Under no circumstance should different pieces of KVM define
different min/max versions.

No functional change intended.
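
As a worked example (hypothetical C-bit position 51): with
GHCB_VERSION_MAX == GHCB_VERSION_MIN == 1, the helper returns

	(1ULL << 48) | (1ULL << 32) | (51ULL << 24) | GHCB_MSR_SEV_INFO_RESP
	  == 0x0001000133000001

i.e. max version 1, min version 1, C-bit 51, SEV_INFO response code.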

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/sev-common.h |  9 ---------
 arch/x86/kvm/svm/sev.c            | 28 ++++++++++++++++++++++------
 arch/x86/kvm/svm/svm.h            |  5 -----
 3 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index d49ebec1252a..5d6b7114d8ba 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -19,15 +19,6 @@
 #define GHCB_MSR_SEV_INFO_RESP		0x001
 #define GHCB_MSR_SEV_INFO_REQ		0x002
 
-#define GHCB_MSR_SEV_INFO(_max, _min, _cbit)	\
-	/* GHCBData[63:48] */			\
-	((((_max) & 0xffff) << 48) |		\
-	 /* GHCBData[47:32] */			\
-	 (((_min) & 0xffff) << 32) |		\
-	 /* GHCBData[31:24] */			\
-	 (((_cbit) & 0xff)  << 24) |		\
-	 GHCB_MSR_SEV_INFO_RESP)
-
 #define GHCB_MSR_INFO(v)		((v) & 0xfffUL)
 #define GHCB_MSR_PROTO_MAX(v)		(((v) >> 48) & 0xffff)
 #define GHCB_MSR_PROTO_MIN(v)		(((v) >> 32) & 0xffff)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7632fc463458..d6147137a7da 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2636,6 +2636,26 @@ static u64 ghcb_msr_cpuid_resp(u64 reg, u64 value)
 	return msr;
 }
 
+/* The min/max GHCB version supported by KVM. */
+#define GHCB_VERSION_MAX	1ULL
+#define GHCB_VERSION_MIN	1ULL
+
+static u64 ghcb_msr_version_info(void)
+{
+	u64 msr;
+
+	/* GHCBData[63:48] */
+	msr  = (GHCB_VERSION_MAX & 0xffff) << 48;
+	/* GHCBData[47:32] */
+	msr |= (GHCB_VERSION_MIN & 0xffff) << 32;
+	/* GHCBData[31:24] */
+	msr |= (sev_enc_bit      &   0xff) << 24;
+
+	msr |= GHCB_MSR_SEV_INFO_RESP;
+
+	return msr;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -2650,9 +2670,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 
 	switch (ghcb_info) {
 	case GHCB_MSR_SEV_INFO_REQ:
-		svm->vmcb->control.ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
-								GHCB_VERSION_MIN,
-								sev_enc_bit);
+		control->ghcb_gpa = ghcb_msr_version_info();
 		break;
 	case GHCB_MSR_CPUID_REQ: {
 		u64 cpuid_fn, cpuid_reg, cpuid_value;
@@ -2880,9 +2898,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 	 * Set the GHCB MSR value as per the GHCB specification when emulating
 	 * vCPU RESET for an SEV-ES guest.
 	 */
-	svm->vmcb->control.ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
-							GHCB_VERSION_MIN,
-							sev_enc_bit);
+	svm->vmcb->control.ghcb_gpa = ghcb_msr_version_info();
 }
 
 void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 47ef8f4a9358..776be8ff9e50 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -596,11 +596,6 @@ void avic_vcpu_blocking(struct kvm_vcpu *vcpu);
 void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
 
 /* sev.c */
-
-#define GHCB_VERSION_MAX	1ULL
-#define GHCB_VERSION_MIN	1ULL
-
-
 extern unsigned int max_sev_asid;
 
 void sev_vm_destroy(struct kvm *kvm);
-- 
2.34.1



* [PATCH v6 3/7] KVM: SVM: Move kvm_emulate_ap_reset_hold() to AMD specific code
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Joerg Roedel <jroedel@suse.de>

The function is only used by the kvm-amd module. Move it to the
AMD-specific part of the code and rename it to
sev_emulate_ap_reset_hold().

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Reviewed-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/svm/sev.c          | 10 +++++++++-
 arch/x86/kvm/x86.c              | 12 ++----------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1384517d7709..9a1591878ff4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1701,7 +1701,7 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in);
 int kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
 int kvm_emulate_halt(struct kvm_vcpu *vcpu);
 int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu);
-int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu);
+int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason);
 int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
 
 void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index d6147137a7da..bec5b6f4f75d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2656,6 +2656,14 @@ static u64 ghcb_msr_version_info(void)
 	return msr;
 }
 
+static int sev_emulate_ap_reset_hold(struct vcpu_svm *svm)
+{
+	int ret = kvm_skip_emulated_instruction(&svm->vcpu);
+
+	return __kvm_emulate_halt(&svm->vcpu,
+				  KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -2792,7 +2800,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 		ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET);
 		break;
 	case SVM_VMGEXIT_AP_HLT_LOOP:
-		ret = kvm_emulate_ap_reset_hold(vcpu);
+		ret = sev_emulate_ap_reset_hold(svm);
 		break;
 	case SVM_VMGEXIT_AP_JUMP_TABLE: {
 		struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e43d756312f..c9cb89b2136b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8814,7 +8814,7 @@ void kvm_arch_exit(void)
 #endif
 }
 
-static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
+int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
 {
 	/*
 	 * The vCPU has halted, e.g. executed HLT.  Update the run state if the
@@ -8832,6 +8832,7 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
 		return 0;
 	}
 }
+EXPORT_SYMBOL_GPL(__kvm_emulate_halt);
 
 int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu)
 {
@@ -8850,15 +8851,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_halt);
 
-int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
-{
-	int ret = kvm_skip_emulated_instruction(vcpu);
-
-	return __kvm_emulate_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD,
-					KVM_EXIT_AP_RESET_HOLD) && ret;
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
-
 #ifdef CONFIG_X86_64
 static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
 			        unsigned long clock_type)
-- 
2.34.1



* [PATCH v6 4/7] KVM: SVM: Set "released" on INIT-SIPI iff SEV-ES vCPU was in AP reset hold
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Sean Christopherson <seanjc@google.com>

Set ghcb->sw_exit_info_2 when releasing a vCPU from an AP reset hold if
and only if the vCPU is actually in a reset hold.  Move the handling to
INIT (it was previously done on SIPI) so that KVM can check the current
MP state; by the time SIPI is received, the vCPU is already in
INIT_RECEIVED and KVM has lost track of whether or not it was in a
reset hold.

Drop the received_first_sipi flag, which was a hack to work around the
fact that KVM lost track of whether or not the vCPU was in a reset hold.
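
The intended flow, roughly (AP = application processor vCPU):

	AP guest code                    KVM
	-------------                    ---
	VMGEXIT(AP_HLT_LOOP)      -->    mp_state = AP_RESET_HOLD, vCPU halts
	BSP sends INIT            -->    sev_es_vcpu_reset(init_event=true):
	                                   in reset hold, so SW_EXIT_INFO_2 = 1
	BSP sends SIPI            -->    vCPU made runnable; guest resumes
	                                   after VMGEXIT and sees "released"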

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kvm/svm/sev.c | 34 ++++++++++++----------------------
 arch/x86/kvm/svm/svm.c | 13 ++++++++-----
 arch/x86/kvm/svm/svm.h |  4 +---
 3 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index bec5b6f4f75d..5ece46eca87f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2900,8 +2900,19 @@ void sev_es_init_vmcb(struct vcpu_svm *svm)
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
 }
 
-void sev_es_vcpu_reset(struct vcpu_svm *svm)
+void sev_es_vcpu_reset(struct vcpu_svm *svm, bool init_event)
 {
+	if (init_event) {
+		/*
+		 * If the vCPU is in a "reset" hold, signal via SW_EXIT_INFO_2
+		 * that, assuming it receives a SIPI, the vCPU was "released".
+		 */
+		if (svm->vcpu.arch.mp_state == KVM_MP_STATE_AP_RESET_HOLD &&
+		    svm->sev_es.ghcb)
+			ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
+		return;
+	}
+
 	/*
 	 * Set the GHCB MSR value as per the GHCB specification when emulating
 	 * vCPU RESET for an SEV-ES guest.
@@ -2931,24 +2942,3 @@ void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu)
 	/* MSR_IA32_XSS is restored on VMEXIT, save the current host value */
 	hostsa->xss = host_xss;
 }
-
-void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	/* First SIPI: Use the values as initially set by the VMM */
-	if (!svm->sev_es.received_first_sipi) {
-		svm->sev_es.received_first_sipi = true;
-		return;
-	}
-
-	/*
-	 * Subsequent SIPI: Return from an AP Reset Hold VMGEXIT, where
-	 * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
-	 * non-zero value.
-	 */
-	if (!svm->sev_es.ghcb)
-		return;
-
-	ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
-}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2c99b18d76c0..1fd662c0ab14 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1146,9 +1146,6 @@ static void __svm_vcpu_reset(struct kvm_vcpu *vcpu)
 	svm_init_osvw(vcpu);
 	vcpu->arch.microcode_version = 0x01000065;
 	svm->tsc_ratio_msr = kvm_default_tsc_scaling_ratio;
-
-	if (sev_es_guest(vcpu->kvm))
-		sev_es_vcpu_reset(svm);
 }
 
 static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -1162,6 +1159,9 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	if (!init_event)
 		__svm_vcpu_reset(vcpu);
+
+	if (sev_es_guest(vcpu->kvm))
+		sev_es_vcpu_reset(svm, init_event);
 }
 
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb)
@@ -4345,10 +4345,13 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
 
 static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 {
+	/*
+	 * SEV-ES (and later derivatives) use INIT-SIPI to bring up APs, but
+	 * the guest is responsible for transitioning to Real Mode and setting
+	 * CS:RIP, GPRs, etc...  KVM just needs to make the vCPU runnable.
+	 */
 	if (!sev_es_guest(vcpu->kvm))
 		return kvm_vcpu_deliver_sipi_vector(vcpu, vector);
-
-	sev_vcpu_deliver_sipi_vector(vcpu, vector);
 }
 
 static void svm_vm_destroy(struct kvm *kvm)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 776be8ff9e50..17812418d346 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -170,7 +170,6 @@ struct vcpu_sev_es_state {
 	struct vmcb_save_area *vmsa;
 	struct ghcb *ghcb;
 	struct kvm_host_map ghcb_map;
-	bool received_first_sipi;
 
 	/* SEV-ES scratch area support */
 	void *ghcb_sa;
@@ -615,8 +614,7 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu);
 int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_es_init_vmcb(struct vcpu_svm *svm);
-void sev_es_vcpu_reset(struct vcpu_svm *svm);
-void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void sev_es_vcpu_reset(struct vcpu_svm *svm, bool init_event);
 void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu);
 void sev_es_unmap_ghcb(struct vcpu_svm *svm);
 
-- 
2.34.1



* [PATCH v6 5/7] KVM: SVM: Add support to handle AP reset MSR protocol
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Tom Lendacky <thomas.lendacky@amd.com>

Add support for AP Reset Hold being invoked using the GHCB MSR protocol,
available in version 2 of the GHCB specification.
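
From the guest side, the MSR-protocol variant looks roughly like this
(a sketch; the wr/rd helpers stand in for the guest's GHCB MSR
accessors, and GHCB_DATA_LOW is the 12-bit shift past the info field):

	u64 msr;

	sev_es_wr_ghcb_msr(GHCB_MSR_AP_RESET_HOLD_REQ);
	VMGEXIT();				/* parks the AP in KVM */

	/* Resumes here after INIT-SIPI. */
	msr = sev_es_rd_ghcb_msr();
	if (GHCB_MSR_INFO(msr) != GHCB_MSR_AP_RESET_HOLD_RESP ||
	    !(msr >> GHCB_DATA_LOW))
		;	/* not released by a SIPI, treat as an error */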

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kvm/svm/sev.c | 52 +++++++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/svm/svm.h |  1 +
 2 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5ece46eca87f..1219c1771895 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2656,9 +2656,34 @@ static u64 ghcb_msr_version_info(void)
 	return msr;
 }
 
-static int sev_emulate_ap_reset_hold(struct vcpu_svm *svm)
+
+static u64 ghcb_msr_ap_rst_resp(u64 value)
+{
+	return (u64)GHCB_MSR_AP_RESET_HOLD_RESP | (value << GHCB_DATA_LOW);
+}
+
+static int sev_emulate_ap_reset_hold(struct vcpu_svm *svm, u64 hold_type)
 {
 	int ret = kvm_skip_emulated_instruction(&svm->vcpu);
+	if (hold_type == GHCB_MSR_AP_RESET_HOLD_REQ) {
+		/*
+		 * Preset the result to a non-SIPI return and then only set
+		 * the result to non-zero when delivering a SIPI.
+		 */
+		svm->vmcb->control.ghcb_gpa = ghcb_msr_ap_rst_resp(0);
+		svm->reset_hold_msr_protocol = true;
+	} else {
+		WARN_ON_ONCE(hold_type != SVM_VMGEXIT_AP_HLT_LOOP);
+		svm->reset_hold_msr_protocol = false;
+	}
+
+	/*
+	 * Ensure the writes to ghcb_gpa and reset_hold_msr_protocol are visible
+	 * before the MP state change so that the INIT-SIPI doesn't misread
+	 * reset_hold_msr_protocol or write ghcb_gpa before this.  Pairs with
+	 * the smp_rmb() in sev_vcpu_reset().
+	 */
+	smp_wmb();
 
 	return __kvm_emulate_halt(&svm->vcpu,
 				  KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret;
@@ -2710,6 +2735,9 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 
 		break;
 	}
+	case GHCB_MSR_AP_RESET_HOLD_REQ:
+		ret = sev_emulate_ap_reset_hold(svm, GHCB_MSR_AP_RESET_HOLD_REQ);
+		break;
 	case GHCB_MSR_TERM_REQ: {
 		u64 reason_set, reason_code;
 
@@ -2800,7 +2828,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 		ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET);
 		break;
 	case SVM_VMGEXIT_AP_HLT_LOOP:
-		ret = sev_emulate_ap_reset_hold(svm);
+		ret = sev_emulate_ap_reset_hold(svm, SVM_VMGEXIT_AP_HLT_LOOP);
 		break;
 	case SVM_VMGEXIT_AP_JUMP_TABLE: {
 		struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
@@ -2905,11 +2933,23 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm, bool init_event)
 	if (init_event) {
 		/*
 		 * If the vCPU is in a "reset" hold, signal via SW_EXIT_INFO_2
-		 * that, assuming it receives a SIPI, the vCPU was "released".
+		 * (or the GHCB_GPA for the MSR protocol) that, assuming it
+		 * receives a SIPI, the vCPU was "released".
 		 */
-		if (svm->vcpu.arch.mp_state == KVM_MP_STATE_AP_RESET_HOLD &&
-		    svm->sev_es.ghcb)
-			ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
+		if (svm->vcpu.arch.mp_state == KVM_MP_STATE_AP_RESET_HOLD) {
+			/*
+			 * Ensure mp_state is read before reset_hold_msr_protocol
+			 * and before writing ghcb_gpa to ensure KVM consumes the
+			 * correct protocol.  Pairs with the smp_wmb() in
+			 * sev_emulate_ap_reset_hold().
+			 */
+			smp_rmb();
+			if (svm->reset_hold_msr_protocol)
+				svm->vmcb->control.ghcb_gpa = ghcb_msr_ap_rst_resp(1);
+			else if (svm->sev_es.ghcb)
+				ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
+			svm->reset_hold_msr_protocol = false;
+		}
 		return;
 	}
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 17812418d346..dbecafc25574 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -243,6 +243,7 @@ struct vcpu_svm {
 	struct vcpu_sev_es_state sev_es;
 
 	bool guest_state_loaded;
+	bool reset_hold_msr_protocol;
 };
 
 struct svm_cpu_data {
-- 
2.34.1



* [PATCH v6 6/7] KVM: SVM: Add support for Hypervisor Feature support MSR protocol
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Brijesh Singh <brijesh.singh@amd.com>

Version 2 of the GHCB specification introduced the advertisement of
supported hypervisor SEV features. Handling this request is required to
support the GHCB version 2 protocol.
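
A guest would query the feature bitmap roughly as follows (a sketch;
the helper names stand in for the guest's GHCB MSR accessors):

	u64 msr, hv_features = 0;

	sev_es_wr_ghcb_msr(GHCB_MSR_HV_FT_REQ);
	VMGEXIT();
	msr = sev_es_rd_ghcb_msr();
	if (GHCB_MSR_INFO(msr) == GHCB_MSR_HV_FT_RESP)
		hv_features = msr >> GHCB_DATA_LOW;
	/* KVM currently advertises no features (bitmap == 0). */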

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/uapi/asm/svm.h |  1 +
 arch/x86/kvm/svm/sev.c          | 22 ++++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index efa969325ede..67cf153fe580 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE		0
 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE		1
+#define SVM_VMGEXIT_HYPERVISOR_FEATURES		0x8000fffd
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
 /* Exit code reserved for hypervisor/software use */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1219c1771895..403f24498020 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2452,6 +2452,7 @@ static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
 	case SVM_VMGEXIT_AP_HLT_LOOP:
 	case SVM_VMGEXIT_AP_JUMP_TABLE:
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+	case SVM_VMGEXIT_HYPERVISOR_FEATURES:
 		break;
 	default:
 		reason = GHCB_ERR_INVALID_EVENT;
@@ -2689,6 +2690,19 @@ static int sev_emulate_ap_reset_hold(struct vcpu_svm *svm, u64 hold_type)
 				  KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret;
 }
 
+/* Hypervisor GHCB Features supported by KVM */
+#define KVM_SUPPORTED_GHCB_HV_FEATURES		0UL
+
+static u64 ghcb_msr_hv_feat_resp(void)
+{
+	u64 msr;
+
+	msr  = GHCB_MSR_HV_FT_RESP;
+	msr |= (KVM_SUPPORTED_GHCB_HV_FEATURES << GHCB_DATA_LOW);
+
+	return msr;
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -2738,6 +2752,9 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 	case GHCB_MSR_AP_RESET_HOLD_REQ:
 		ret = sev_emulate_ap_reset_hold(svm, GHCB_MSR_AP_RESET_HOLD_REQ);
 		break;
+	case GHCB_MSR_HV_FT_REQ:
+		svm->vmcb->control.ghcb_gpa = ghcb_msr_hv_feat_resp();
+		break;
 	case GHCB_MSR_TERM_REQ: {
 		u64 reason_set, reason_code;
 
@@ -2851,6 +2868,11 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 
 		break;
 	}
+	case SVM_VMGEXIT_HYPERVISOR_FEATURES:
+		ghcb_set_sw_exit_info_2(ghcb, KVM_SUPPORTED_GHCB_HV_FEATURES);
+
+		ret = 1;
+		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
-- 
2.34.1



* [PATCH v6 7/7] KVM: SVM: Increase supported GHCB protocol version
From: Joerg Roedel @ 2022-01-25 14:16 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, x86, Brijesh Singh, Tom Lendacky, kvm,
	linux-kernel, Joerg Roedel

From: Brijesh Singh <brijesh.singh@amd.com>

Now that KVM has basic support for version 2 of the GHCB specification,
bump the maximum supported protocol version. The SNP-specific functions
are still missing, but those are only required when the hypervisor
supports running SNP guests.
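
A guest negotiates the protocol version via the SEV_INFO request,
roughly as follows (a sketch; GHCB_MSR_PROTO_MAX()/MIN() as defined in
sev-common.h):

	u64 msr;

	sev_es_wr_ghcb_msr(GHCB_MSR_SEV_INFO_REQ);
	VMGEXIT();
	msr = sev_es_rd_ghcb_msr();
	if (GHCB_MSR_INFO(msr) != GHCB_MSR_SEV_INFO_RESP ||
	    GHCB_MSR_PROTO_MIN(msr) > 2 || GHCB_MSR_PROTO_MAX(msr) < 2)
		;	/* version 2 not supported, fall back to version 1 */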

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kvm/svm/sev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 403f24498020..acf224fd29a2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2638,7 +2638,7 @@ static u64 ghcb_msr_cpuid_resp(u64 reg, u64 value)
 }
 
 /* The min/max GHCB version supported by KVM. */
-#define GHCB_VERSION_MAX	1ULL
+#define GHCB_VERSION_MAX	2ULL
 #define GHCB_VERSION_MIN	1ULL
 
 static u64 ghcb_msr_version_info(void)
-- 
2.34.1



* Re: [PATCH v6 0/7] KVM: SVM: Add initial GHCB protocol version 2 support
From: Joerg Roedel @ 2022-02-25 10:40 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	x86, Brijesh Singh, Tom Lendacky, kvm, linux-kernel,
	Joerg Roedel

Gentle ping. Any comments on these patches?

On Tue, Jan 25, 2022 at 03:16:19PM +0100, Joerg Roedel wrote:
> [...]