KVM Archive on lore.kernel.org
* [PATCH v5 00/11] KVM SGX virtualization support (KVM part)
@ 2021-04-12  4:21 Kai Huang
  2021-04-12  4:21 ` [PATCH v5 01/11] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX) Kai Huang
                   ` (12 more replies)
  0 siblings, 13 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang

Hi Paolo, Sean,

Boris has merged the x86 part patches into tip/x86/sgx. This series contains the
KVM part patches. Due to code changes in the x86 part patches, two KVM patches
needed updating, so this is a new version. Please review. Thanks!

Specifically, the x86 patch (x86/sgx: Add helpers to expose ECREATE and EINIT to
KVM) was changed to return -EINVAL directly, without setting trapnr, when
access_ok() fails on any user pointer, so the KVM patches:

KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)

were updated to handle this case.

This series was originally based on tip/x86/sgx, and has since been rebased onto
the latest kvm/queue, so it can be applied to kvm/queue directly now.

Changelog:

(Please see each individual patch for its specific changelog.)

v4->v5:
 - Addressed Sean's comments (patch 06, 07, 09 were slightly updated).
 - Rebased to latest kvm/queue (patch 08, 11 were updated to resolve conflict).

Sean Christopherson (11):
  KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX)
  KVM: x86: Define new #PF SGX error code bit
  KVM: x86: Add support for reverse CPUID lookup of scattered features
  KVM: x86: Add reverse-CPUID lookup support for scattered SGX features
  KVM: VMX: Add basic handling of VM-Exit from SGX enclave
  KVM: VMX: Frame in ENCLS handler for SGX virtualization
  KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
  KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
  KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
  KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  KVM: x86: Add capability to grant VM access to privileged SGX
    attribute

 Documentation/virt/kvm/api.rst  |  23 ++
 arch/x86/include/asm/kvm_host.h |   5 +
 arch/x86/include/asm/vmx.h      |   1 +
 arch/x86/include/uapi/asm/vmx.h |   1 +
 arch/x86/kvm/Makefile           |   2 +
 arch/x86/kvm/cpuid.c            |  89 +++++-
 arch/x86/kvm/cpuid.h            |  50 +++-
 arch/x86/kvm/vmx/nested.c       |  28 +-
 arch/x86/kvm/vmx/nested.h       |   5 +
 arch/x86/kvm/vmx/sgx.c          | 502 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/sgx.h          |  34 +++
 arch/x86/kvm/vmx/vmcs12.c       |   1 +
 arch/x86/kvm/vmx/vmcs12.h       |   4 +-
 arch/x86/kvm/vmx/vmx.c          | 109 ++++++-
 arch/x86/kvm/vmx/vmx.h          |   3 +
 arch/x86/kvm/x86.c              |  23 ++
 include/uapi/linux/kvm.h        |   1 +
 17 files changed, 858 insertions(+), 23 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/sgx.c
 create mode 100644 arch/x86/kvm/vmx/sgx.h

-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5 01/11] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX)
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 02/11] KVM: x86: Define new #PF SGX error code bit Kai Huang
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Export the gva_to_gpa() helpers for use by SGX virtualization when
executing ENCLS[ECREATE] and ENCLS[EINIT] on behalf of the guest.
To execute ECREATE and EINIT, KVM must obtain the GPA of the target
Secure Enclave Control Structure (SECS) in order to get its
corresponding HVA.

Because the SECS must reside in the Enclave Page Cache (EPC), copying
the SECS's data to a host-controlled buffer via existing exported
helpers is not a viable option as the EPC is not readable or writable
by the kernel.

SGX virtualization will also use gva_to_gpa() to obtain HVAs for
non-EPC pages in order to pass user pointers directly to ECREATE and
EINIT, which avoids having to copy pages worth of data into the kernel.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/x86.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 16fb39503296..b9600540508e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6007,6 +6007,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 
  gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva,
 				struct x86_exception *exception)
@@ -6023,6 +6024,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 	access |= PFERR_WRITE_MASK;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_write);
 
 /* uses this to access any guest's mapped memory without checking CPL */
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread
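The GVA -> GPA -> HVA flow this patch enables can be pictured with a minimal
user-space sketch. This is not KVM code: `toy_gva_to_gpa()`, `toy_gpa_to_hva()`
and their identity-plus-offset mappings are invented stand-ins for the real
kvm_mmu_gva_to_gpa_read()/write() helpers and memslot lookup, shown only to
illustrate the two-step translation the commit message describes.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gva_t;
typedef uint64_t gpa_t;
typedef uint64_t hva_t;

/* Toy GVA->GPA walk: pretend the guest maps GVA x at GPA x + 0x1000.
 * A real walk goes through vcpu->arch.walk_mmu->gva_to_gpa(). */
static gpa_t toy_gva_to_gpa(gva_t gva)
{
	return gva + 0x1000;
}

/* Toy GPA->HVA lookup: pretend all guest memory is backed by one host
 * mapping at 0x7f0000000000.  A real lookup goes through memslots. */
static hva_t toy_gpa_to_hva(gpa_t gpa)
{
	return 0x7f0000000000ull + gpa;
}

/* The flow the commit message describes: resolve the SECS GVA to a GPA,
 * then to an HVA that can be handed to ECREATE/EINIT, since the EPC
 * contents themselves cannot be copied into a kernel buffer. */
static hva_t toy_secs_gva_to_hva(gva_t secs_gva)
{
	return toy_gpa_to_hva(toy_gva_to_gpa(secs_gva));
}
```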

* [PATCH v5 02/11] KVM: x86: Define new #PF SGX error code bit
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
  2021-04-12  4:21 ` [PATCH v5 01/11] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX) Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features Kai Huang
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Page faults that are signaled by the SGX Enclave Page Cache Map (EPCM),
as opposed to the traditional IA32/EPT page tables, set an SGX bit in
the error code to indicate that the #PF was induced by SGX.  KVM will
need to emulate this behavior as part of its trap-and-execute scheme for
virtualizing SGX Launch Control, e.g. to inject SGX-induced #PFs if
EINIT faults in the host, and to support live migration.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 44f893043a3c..5368ef719709 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -227,6 +227,7 @@ enum x86_intercept_stage;
 #define PFERR_RSVD_BIT 3
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
+#define PFERR_SGX_BIT 15
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 
@@ -236,6 +237,7 @@ enum x86_intercept_stage;
 #define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
 #define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
 #define PFERR_PK_MASK (1U << PFERR_PK_BIT)
+#define PFERR_SGX_MASK (1U << PFERR_SGX_BIT)
 #define PFERR_GUEST_FINAL_MASK (1ULL << PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK (1ULL << PFERR_GUEST_PAGE_BIT)
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread
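As a quick illustration of the new bit, here is a self-contained sketch of how
a #PF error code would be tested for EPCM origin. The bit positions are
restated from the patch above; `pf_is_sgx_induced()` is an invented helper
name, not a KVM function.

```c
#include <assert.h>
#include <stdint.h>

#define PFERR_PRESENT_BIT	0
#define PFERR_WRITE_BIT		1
#define PFERR_SGX_BIT		15	/* SGX-defined position, per the patch */

#define PFERR_PRESENT_MASK	(1u << PFERR_PRESENT_BIT)
#define PFERR_WRITE_MASK	(1u << PFERR_WRITE_BIT)
#define PFERR_SGX_MASK		(1u << PFERR_SGX_BIT)

/* Was this #PF signaled by the EPCM rather than the paging structures? */
static int pf_is_sgx_induced(uint32_t error_code)
{
	return (error_code & PFERR_SGX_MASK) != 0;
}
```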

* [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
  2021-04-12  4:21 ` [PATCH v5 01/11] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX) Kai Huang
  2021-04-12  4:21 ` [PATCH v5 02/11] KVM: x86: Define new #PF SGX error code bit Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-17 13:39   ` Paolo Bonzini
  2021-04-12  4:21 ` [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features Kai Huang
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <seanjc@google.com>

Introduce a scheme that allows KVM's CPUID magic to support features
that are scattered in the kernel's feature words.  To advertise and/or
query guest support for CPUID-based features, KVM requires the bit
number of an X86_FEATURE_* to match the bit number in its associated
CPUID entry.  For scattered features, this does not hold true.

Add a framework to allow defining KVM-only words, stored in
kvm_cpu_caps after the shared kernel caps, that can be used to gather
the scattered feature bits by translating X86_FEATURE_* flags into their
KVM-defined feature.

Note, because reverse_cpuid_check() effectively forces kvm_cpu_caps
lookups to be resolved at compile time, there is no runtime cost for
translating from kernel-defined to KVM-defined features.

More details here:  https://lkml.kernel.org/r/X/jxCOLG+HUO4QlZ@google.com

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/cpuid.c | 32 +++++++++++++++++++++++++++-----
 arch/x86/kvm/cpuid.h | 39 ++++++++++++++++++++++++++++++++++-----
 2 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6bd2f8b830e4..a0e7be9ed449 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -28,7 +28,7 @@
  * Unlike "struct cpuinfo_x86.x86_capability", kvm_cpu_caps doesn't need to be
  * aligned to sizeof(unsigned long) because it's not accessed via bitops.
  */
-u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
+u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_cpu_caps);
 
 static u32 xstate_required_size(u64 xstate_bv, bool compacted)
@@ -53,6 +53,7 @@ static u32 xstate_required_size(u64 xstate_bv, bool compacted)
 }
 
 #define F feature_bit
+#define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
 
 static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
 	struct kvm_cpuid_entry2 *entries, int nent, u32 function, u32 index)
@@ -347,13 +348,13 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
+/* Mask kvm_cpu_caps for @leaf with the raw CPUID capabilities of this CPU. */
+static __always_inline void __kvm_cpu_cap_mask(enum cpuid_leafs leaf)
 {
 	const struct cpuid_reg cpuid = x86_feature_cpuid(leaf * 32);
 	struct kvm_cpuid_entry2 entry;
 
 	reverse_cpuid_check(leaf);
-	kvm_cpu_caps[leaf] &= mask;
 
 	cpuid_count(cpuid.function, cpuid.index,
 		    &entry.eax, &entry.ebx, &entry.ecx, &entry.edx);
@@ -361,6 +362,26 @@ static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
 	kvm_cpu_caps[leaf] &= *__cpuid_entry_get_reg(&entry, cpuid.reg);
 }
 
+static __always_inline void kvm_cpu_cap_mask(enum cpuid_leafs leaf, u32 mask)
+{
+	/* Use the "init" variant for scattered leafs. */
+	BUILD_BUG_ON(leaf >= NCAPINTS);
+
+	kvm_cpu_caps[leaf] &= mask;
+
+	__kvm_cpu_cap_mask(leaf);
+}
+
+static __always_inline void kvm_cpu_cap_init(enum cpuid_leafs leaf, u32 mask)
+{
+	/* Use the "mask" variant for hardware-defined leafs. */
+	BUILD_BUG_ON(leaf < NCAPINTS);
+
+	kvm_cpu_caps[leaf] = mask;
+
+	__kvm_cpu_cap_mask(leaf);
+}
+
 void kvm_set_cpu_caps(void)
 {
 	unsigned int f_nx = is_efer_nx() ? F(NX) : 0;
@@ -371,12 +392,13 @@ void kvm_set_cpu_caps(void)
 	unsigned int f_gbpages = 0;
 	unsigned int f_lm = 0;
 #endif
+	memset(kvm_cpu_caps, 0, sizeof(kvm_cpu_caps));
 
-	BUILD_BUG_ON(sizeof(kvm_cpu_caps) >
+	BUILD_BUG_ON(sizeof(kvm_cpu_caps) - (NKVMCAPINTS * sizeof(*kvm_cpu_caps)) >
 		     sizeof(boot_cpu_data.x86_capability));
 
 	memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
-	       sizeof(kvm_cpu_caps));
+	       sizeof(kvm_cpu_caps) - (NKVMCAPINTS * sizeof(*kvm_cpu_caps)));
 
 	kvm_cpu_cap_mask(CPUID_1_ECX,
 		/*
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index ded84d244f19..315fa45eb7c8 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -7,7 +7,20 @@
 #include <asm/processor.h>
 #include <uapi/asm/kvm_para.h>
 
-extern u32 kvm_cpu_caps[NCAPINTS] __read_mostly;
+/*
+ * Hardware-defined CPUID leafs that are scattered in the kernel, but need to
+ * be directly used by KVM.  Note, these word values conflict with the kernel's
+ * "bug" caps, but KVM doesn't use those.
+ */
+enum kvm_only_cpuid_leafs {
+	NR_KVM_CPU_CAPS = NCAPINTS,
+
+	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+};
+
+#define X86_KVM_FEATURE(w, f)		((w)*32 + (f))
+
+extern u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
 void kvm_set_cpu_caps(void);
 
 void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu);
@@ -100,6 +113,20 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
 	BUILD_BUG_ON(reverse_cpuid[x86_leaf].function == 0);
 }
 
+/*
+ * Translate feature bits that are scattered in the kernel's cpufeatures word
+ * into KVM feature words that align with hardware's definitions.
+ */
+static __always_inline u32 __feature_translate(int x86_feature)
+{
+	return x86_feature;
+}
+
+static __always_inline u32 __feature_leaf(int x86_feature)
+{
+	return __feature_translate(x86_feature) / 32;
+}
+
 /*
  * Retrieve the bit mask from an X86_FEATURE_* definition.  Features contain
  * the hardware defined bit number (stored in bits 4:0) and a software defined
@@ -108,6 +135,8 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
  */
 static __always_inline u32 __feature_bit(int x86_feature)
 {
+	x86_feature = __feature_translate(x86_feature);
+
 	reverse_cpuid_check(x86_feature / 32);
 	return 1 << (x86_feature & 31);
 }
@@ -116,7 +145,7 @@ static __always_inline u32 __feature_bit(int x86_feature)
 
 static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned int x86_feature)
 {
-	unsigned int x86_leaf = x86_feature / 32;
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
 	reverse_cpuid_check(x86_leaf);
 	return reverse_cpuid[x86_leaf];
@@ -316,7 +345,7 @@ static inline bool cpuid_fault_enabled(struct kvm_vcpu *vcpu)
 
 static __always_inline void kvm_cpu_cap_clear(unsigned int x86_feature)
 {
-	unsigned int x86_leaf = x86_feature / 32;
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
 	reverse_cpuid_check(x86_leaf);
 	kvm_cpu_caps[x86_leaf] &= ~__feature_bit(x86_feature);
@@ -324,7 +353,7 @@ static __always_inline void kvm_cpu_cap_clear(unsigned int x86_feature)
 
 static __always_inline void kvm_cpu_cap_set(unsigned int x86_feature)
 {
-	unsigned int x86_leaf = x86_feature / 32;
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
 	reverse_cpuid_check(x86_leaf);
 	kvm_cpu_caps[x86_leaf] |= __feature_bit(x86_feature);
@@ -332,7 +361,7 @@ static __always_inline void kvm_cpu_cap_set(unsigned int x86_feature)
 
 static __always_inline u32 kvm_cpu_cap_get(unsigned int x86_feature)
 {
-	unsigned int x86_leaf = x86_feature / 32;
+	unsigned int x86_leaf = __feature_leaf(x86_feature);
 
 	reverse_cpuid_check(x86_leaf);
 	return kvm_cpu_caps[x86_leaf] & __feature_bit(x86_feature);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread
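The translation scheme above can be demonstrated with a user-space toy. All
word and feature numbers here are made up for illustration (the real kernel's
NCAPINTS and word assignments differ); the point is only that a scattered
feature is remapped to a KVM-only word appended after the kernel words, so the
usual `/ 32` and `& 31` decomposition then yields hardware-aligned leaf and bit
numbers.

```c
#include <assert.h>
#include <stdint.h>

#define NCAPINTS		20		/* assumed kernel word count */
#define CPUID_TOY_EAX		NCAPINTS	/* first KVM-only word */
#define NR_KVM_CPU_CAPS		(NCAPINTS + 1)

#define X86_KVM_FEATURE(w, f)	((w) * 32 + (f))

#define X86_FEATURE_SCATTERED	(7 * 32 + 5)	/* lives in kernel word 7 */
#define __X86_FEATURE_TOY	X86_KVM_FEATURE(CPUID_TOY_EAX, 0)

/* Mirror of __feature_translate(): scattered features map to their
 * KVM-defined equivalents; everything else passes through unchanged,
 * so there is no cost for non-scattered features. */
static uint32_t feature_translate(uint32_t x86_feature)
{
	if (x86_feature == X86_FEATURE_SCATTERED)
		return __X86_FEATURE_TOY;
	return x86_feature;
}

/* Mirrors of __feature_leaf() and __feature_bit(). */
static uint32_t feature_leaf(uint32_t x86_feature)
{
	return feature_translate(x86_feature) / 32;
}

static uint32_t feature_bit_mask(uint32_t x86_feature)
{
	return 1u << (feature_translate(x86_feature) & 31);
}
```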

* [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (2 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-17 13:39   ` Paolo Bonzini
  2021-04-12  4:21 ` [PATCH v5 05/11] KVM: VMX: Add basic handling of VM-Exit from SGX enclave Kai Huang
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <seanjc@google.com>

Define a new KVM-only feature word for advertising and querying SGX
sub-features in CPUID.0x12.0x0.EAX.  Because SGX1 and SGX2 are scattered
in the kernel's feature word, they need to be translated so that the
bit numbers match those of hardware.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/cpuid.h | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 315fa45eb7c8..40aec1443430 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -13,13 +13,18 @@
  * "bug" caps, but KVM doesn't use those.
  */
 enum kvm_only_cpuid_leafs {
-	NR_KVM_CPU_CAPS = NCAPINTS,
+	CPUID_12_EAX	 = NCAPINTS,
+	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
 };
 
 #define X86_KVM_FEATURE(w, f)		((w)*32 + (f))
 
+/* Intel-defined SGX sub-features, CPUID level 0x12 (EAX). */
+#define __X86_FEATURE_SGX1		X86_KVM_FEATURE(CPUID_12_EAX, 0)
+#define __X86_FEATURE_SGX2		X86_KVM_FEATURE(CPUID_12_EAX, 1)
+
 extern u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
 void kvm_set_cpu_caps(void);
 
@@ -93,6 +98,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
 	[CPUID_8000_0007_EBX] = {0x80000007, 0, CPUID_EBX},
 	[CPUID_7_EDX]         = {         7, 0, CPUID_EDX},
 	[CPUID_7_1_EAX]       = {         7, 1, CPUID_EAX},
+	[CPUID_12_EAX]        = {0x00000012, 0, CPUID_EAX},
 };
 
 /*
@@ -119,6 +125,11 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
  */
 static __always_inline u32 __feature_translate(int x86_feature)
 {
+	if (x86_feature == X86_FEATURE_SGX1)
+		return __X86_FEATURE_SGX1;
+	else if (x86_feature == X86_FEATURE_SGX2)
+		return __X86_FEATURE_SGX2;
+
 	return x86_feature;
 }
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5 05/11] KVM: VMX: Add basic handling of VM-Exit from SGX enclave
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (3 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 06/11] KVM: VMX: Frame in ENCLS handler for SGX virtualization Kai Huang
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add support for handling VM-Exits that originate from a guest SGX
enclave.  In SGX, an "enclave" is a new CPL3-only execution environment,
wherein the CPU and memory state is protected by hardware to make the
state inaccessible to code running outside of the enclave.  When exiting
an enclave due to an asynchronous event (from the perspective of the
enclave), e.g. exceptions, interrupts, and VM-Exits, the enclave's state
is automatically saved and scrubbed (the CPU loads synthetic state), and
then reloaded when re-entering the enclave.  E.g. after an
instruction-based VM-Exit from an enclave, vmcs.GUEST_RIP will not contain
the RIP of the enclave instruction that triggered the VM-Exit, but will
instead point
to a RIP in the enclave's untrusted runtime (the guest userspace code
that coordinates entry/exit to/from the enclave).

To help a VMM recognize and handle exits from enclaves, SGX adds bits to
existing VMCS fields, VM_EXIT_REASON.VMX_EXIT_REASON_FROM_ENCLAVE and
GUEST_INTERRUPTIBILITY_INFO.GUEST_INTR_STATE_ENCLAVE_INTR.  Define the
new architectural bits, and add a boolean to struct vcpu_vmx to cache
VMX_EXIT_REASON_FROM_ENCLAVE.  Clear the bit in exit_reason so that
checks against exit_reason do not need to account for SGX, e.g.
"if (exit_reason == EXIT_REASON_EXCEPTION_NMI)" continues to work.

KVM is largely a passive observer of the new bits, e.g. KVM needs to
account for the bits when propagating information to a nested VMM, but
otherwise doesn't need to act differently for the majority of VM-Exits
from enclaves.

The one scenario that is directly impacted is emulation, which is for
all intents and purposes impossible[1] since KVM does not have access to
the RIP or instruction stream that triggered the VM-Exit.  The inability
to emulate is a non-issue for KVM, as most instructions that might
trigger VM-Exit unconditionally #UD in an enclave (before the VM-Exit
check).  For the few instructions that conditionally #UD, KVM either never
sets the exiting control, e.g. PAUSE_EXITING[2], or sets it if and only
if the feature is not exposed to the guest in order to inject a #UD,
e.g. RDRAND_EXITING.

But, because it is still possible for a guest to trigger emulation,
e.g. MMIO, inject a #UD if KVM ever attempts emulation after a VM-Exit
from an enclave.  This is architecturally accurate for instruction
VM-Exits, and for MMIO it's the least bad choice, e.g. it's preferable
to killing the VM.  In practice, only broken or particularly stupid
guests should ever encounter this behavior.

Add a WARN in skip_emulated_instruction to detect any attempt to
modify the guest's RIP during an SGX enclave VM-Exit as all such flows
should either be unreachable or must handle exits from enclaves before
getting to skip_emulated_instruction.

[1] Impossible for all practical purposes.  Not truly impossible
    since KVM could implement some form of para-virtualization scheme.

[2] PAUSE_LOOP_EXITING only affects CPL0 and enclaves exist only at
    CPL3, so we also don't need to worry about that interaction.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/include/asm/vmx.h      |  1 +
 arch/x86/include/uapi/asm/vmx.h |  1 +
 arch/x86/kvm/vmx/nested.c       |  2 ++
 arch/x86/kvm/vmx/vmx.c          | 45 +++++++++++++++++++++++++++++++--
 4 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 358707f60d99..0ffaa3156a4e 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -373,6 +373,7 @@ enum vmcs_field {
 #define GUEST_INTR_STATE_MOV_SS		0x00000002
 #define GUEST_INTR_STATE_SMI		0x00000004
 #define GUEST_INTR_STATE_NMI		0x00000008
+#define GUEST_INTR_STATE_ENCLAVE_INTR	0x00000010
 
 /* GUEST_ACTIVITY_STATE flags */
 #define GUEST_ACTIVITY_ACTIVE		0
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b8e650a985e3..946d761adbd3 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -27,6 +27,7 @@
 
 
 #define VMX_EXIT_REASONS_FAILED_VMENTRY         0x80000000
+#define VMX_EXIT_REASONS_SGX_ENCLAVE_MODE	0x08000000
 
 #define EXIT_REASON_EXCEPTION_NMI       0
 #define EXIT_REASON_EXTERNAL_INTERRUPT  1
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e02ebc28270e..baaea9bbb8e5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4111,6 +4111,8 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 {
 	/* update exit information fields: */
 	vmcs12->vm_exit_reason = vm_exit_reason;
+	if (to_vmx(vcpu)->exit_reason.enclave_mode)
+		vmcs12->vm_exit_reason |= VMX_EXIT_REASONS_SGX_ENCLAVE_MODE;
 	vmcs12->exit_qualification = exit_qualification;
 	vmcs12->vm_exit_intr_info = exit_intr_info;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c05e6e2854b5..09f2c9ec4468 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1588,12 +1588,25 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 
 static bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, void *insn, int insn_len)
 {
+	/*
+	 * Emulation of instructions in SGX enclaves is impossible as RIP does
+	 * not point at the failing instruction, and even if it did, the code
+	 * stream is inaccessible.  Inject #UD instead of exiting to userspace
+	 * so that guest userspace can't DoS the guest simply by triggering
+	 * emulation (enclaves are CPL3 only).
+	 */
+	if (to_vmx(vcpu)->exit_reason.enclave_mode) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+		return false;
+	}
 	return true;
 }
 
 static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
+	union vmx_exit_reason exit_reason = to_vmx(vcpu)->exit_reason;
 	unsigned long rip, orig_rip;
+	u32 instr_len;
 
 	/*
 	 * Using VMCS.VM_EXIT_INSTRUCTION_LEN on EPT misconfig depends on
@@ -1604,9 +1617,33 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 	 * i.e. we end up advancing IP with some random value.
 	 */
 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
-	    to_vmx(vcpu)->exit_reason.basic != EXIT_REASON_EPT_MISCONFIG) {
+	    exit_reason.basic != EXIT_REASON_EPT_MISCONFIG) {
+		instr_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+
+		/*
+		 * Emulating an enclave's instructions isn't supported as KVM
+		 * cannot access the enclave's memory or its true RIP, e.g. the
+		 * vmcs.GUEST_RIP points at the exit point of the enclave, not
+		 * the RIP that actually triggered the VM-Exit.  But, because
+		 * most instructions that cause VM-Exit will #UD in an enclave,
+		 * most instruction-based VM-Exits simply do not occur.
+		 *
+		 * There are a few exceptions, notably the debug instructions
+		 * INT1ICEBRK and INT3, as they are allowed in debug enclaves
+		 * and generate #DB/#BP as expected, which KVM might intercept.
+		 * But again, the CPU does the dirty work and saves an instr
+		 * length of zero so VMMs don't shoot themselves in the foot.
+		 * WARN if KVM tries to skip a non-zero length instruction on
+		 * a VM-Exit from an enclave.
+		 */
+		if (!instr_len)
+			goto rip_updated;
+
+		WARN(exit_reason.enclave_mode,
+		     "KVM: skipping instruction after SGX enclave VM-Exit");
+
 		orig_rip = kvm_rip_read(vcpu);
-		rip = orig_rip + vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+		rip = orig_rip + instr_len;
 #ifdef CONFIG_X86_64
 		/*
 		 * We need to mask out the high 32 bits of RIP if not in 64-bit
@@ -1622,6 +1659,7 @@ static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 			return 0;
 	}
 
+rip_updated:
 	/* skipping an emulated instruction also counts */
 	vmx_set_interrupt_shadow(vcpu, 0);
 
@@ -5353,6 +5391,9 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 {
 	gpa_t gpa;
 
+	if (!vmx_can_emulate_instruction(vcpu, NULL, 0))
+		return 1;
+
 	/*
 	 * A nested guest cannot optimize MMIO vmexits, because we have an
 	 * nGPA here instead of the required GPA.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread
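The zero-length-skip behavior described in the patch can be modeled with a toy
function. The struct and field names below are invented for this sketch; the
real logic lives in skip_emulated_instruction() and also handles the interrupt
shadow and 32-bit RIP masking.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: on a VM-Exit from an enclave the CPU saves an instruction
 * length of zero, so advancing RIP becomes a no-op rather than adding
 * a bogus length to the untrusted-runtime RIP. */
struct toy_vcpu {
	uint64_t rip;
	int enclave_mode;	/* cached SGX enclave-mode exit bit */
};

static uint64_t toy_skip_instruction(struct toy_vcpu *vcpu, uint32_t instr_len)
{
	if (instr_len == 0)	/* enclave exits report zero length */
		return vcpu->rip;

	/* The real code WARNs here if enclave_mode is set with a
	 * non-zero length, as that flow should be unreachable. */
	vcpu->rip += instr_len;
	return vcpu->rip;
}
```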

* [PATCH v5 06/11] KVM: VMX: Frame in ENCLS handler for SGX virtualization
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (4 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 05/11] KVM: VMX: Add basic handling of VM-Exit from SGX enclave Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 07/11] KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions Kai Huang
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Introduce sgx.c and sgx.h, along with the framework for handling ENCLS
VM-Exits.  Add a bool, enable_sgx, that will eventually be wired up to a
module param to control whether or not SGX virtualization is enabled at
runtime.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
v4->v5:
 - In handle_encls(), use kvm_rax_read() instead of explicit access to
   data structure.

---
 arch/x86/kvm/Makefile  |  2 ++
 arch/x86/kvm/vmx/sgx.c | 50 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/sgx.h | 15 +++++++++++++
 arch/x86/kvm/vmx/vmx.c |  9 +++++---
 4 files changed, 73 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/sgx.c
 create mode 100644 arch/x86/kvm/vmx/sgx.h

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 1b4766fe1de2..87f514c36eae 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -23,6 +23,8 @@ kvm-$(CONFIG_KVM_XEN)	+= xen.o
 
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
+kvm-intel-$(CONFIG_X86_SGX_KVM)	+= vmx/sgx.o
+
 kvm-amd-y		+= svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o svm/sev.o
 
 obj-$(CONFIG_KVM)	+= kvm.o
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
new file mode 100644
index 000000000000..d874eb180b7d
--- /dev/null
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+/*  Copyright(c) 2021 Intel Corporation. */
+
+#include <asm/sgx.h>
+
+#include "cpuid.h"
+#include "kvm_cache_regs.h"
+#include "sgx.h"
+#include "vmx.h"
+#include "x86.h"
+
+bool __read_mostly enable_sgx;
+
+static inline bool encls_leaf_enabled_in_guest(struct kvm_vcpu *vcpu, u32 leaf)
+{
+	if (!enable_sgx || !guest_cpuid_has(vcpu, X86_FEATURE_SGX))
+		return false;
+
+	if (leaf >= ECREATE && leaf <= ETRACK)
+		return guest_cpuid_has(vcpu, X86_FEATURE_SGX1);
+
+	if (leaf >= EAUG && leaf <= EMODT)
+		return guest_cpuid_has(vcpu, X86_FEATURE_SGX2);
+
+	return false;
+}
+
+static inline bool sgx_enabled_in_guest_bios(struct kvm_vcpu *vcpu)
+{
+	const u64 bits = FEAT_CTL_SGX_ENABLED | FEAT_CTL_LOCKED;
+
+	return (to_vmx(vcpu)->msr_ia32_feature_control & bits) == bits;
+}
+
+int handle_encls(struct kvm_vcpu *vcpu)
+{
+	u32 leaf = (u32)kvm_rax_read(vcpu);
+
+	if (!encls_leaf_enabled_in_guest(vcpu, leaf)) {
+		kvm_queue_exception(vcpu, UD_VECTOR);
+	} else if (!sgx_enabled_in_guest_bios(vcpu)) {
+		kvm_inject_gp(vcpu, 0);
+	} else {
+		WARN(1, "KVM: unexpected exit on ENCLS[%u]", leaf);
+		vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
+		vcpu->run->hw.hardware_exit_reason = EXIT_REASON_ENCLS;
+		return 0;
+	}
+	return 1;
+}
diff --git a/arch/x86/kvm/vmx/sgx.h b/arch/x86/kvm/vmx/sgx.h
new file mode 100644
index 000000000000..6e17ecd4aca3
--- /dev/null
+++ b/arch/x86/kvm/vmx/sgx.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_SGX_H
+#define __KVM_X86_SGX_H
+
+#include <linux/kvm_host.h>
+
+#ifdef CONFIG_X86_SGX_KVM
+extern bool __read_mostly enable_sgx;
+
+int handle_encls(struct kvm_vcpu *vcpu);
+#else
+#define enable_sgx 0
+#endif
+
+#endif /* __KVM_X86_SGX_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 09f2c9ec4468..d21036457c40 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -57,6 +57,7 @@
 #include "mmu.h"
 #include "nested.h"
 #include "pmu.h"
+#include "sgx.h"
 #include "trace.h"
 #include "vmcs.h"
 #include "vmcs12.h"
@@ -5607,16 +5608,18 @@ static int handle_vmx_instruction(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+#ifndef CONFIG_X86_SGX_KVM
 static int handle_encls(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * SGX virtualization is not yet supported.  There is no software
-	 * enable bit for SGX, so we have to trap ENCLS and inject a #UD
-	 * to prevent the guest from executing ENCLS.
+	 * SGX virtualization is disabled.  There is no software enable bit for
+	 * SGX, so KVM intercepts all ENCLS leafs and injects a #UD to prevent
+	 * the guest from executing ENCLS (when SGX is supported by hardware).
 	 */
 	kvm_queue_exception(vcpu, UD_VECTOR);
 	return 1;
 }
+#endif /* CONFIG_X86_SGX_KVM */
 
 static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu)
 {
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5 07/11] KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (5 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 06/11] KVM: VMX: Frame in ENCLS handler for SGX virtualization Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs Kai Huang
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add an ECREATE handler that will be used to intercept ECREATE for the
purpose of enforcing an enclave's MISCSELECT, ATTRIBUTES and XFRM, i.e.
to allow userspace to restrict SGX features via CPUID.  ECREATE will be
intercepted when any of the aforementioned masks diverges from hardware
in order to enforce the desired CPUID model, i.e. inject #GP if the
guest attempts to set a bit that hasn't been enumerated as allowed-1 in
CPUID.

Note, access to the PROVISIONKEY is not yet supported.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
v4->v5:
 - Use GFP_KERNEL_ACCOUNT when allocating page in handle_encls_ecreate().
 - Improve comments around sgx_virt_ecreate().

---
 arch/x86/include/asm/kvm_host.h |   3 +
 arch/x86/kvm/vmx/sgx.c          | 275 ++++++++++++++++++++++++++++++++
 2 files changed, 278 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5368ef719709..05fe7d2b12d9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1038,6 +1038,9 @@ struct kvm_arch {
 
 	bool bus_lock_detection_enabled;
 
+	/* Guest can access the SGX PROVISIONKEY. */
+	bool sgx_provisioning_allowed;
+
 	struct kvm_pmu_event_filter __rcu *pmu_event_filter;
 	struct task_struct *nx_lpage_recovery_thread;
 
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index d874eb180b7d..2cf9d7cf818c 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -11,6 +11,279 @@
 
 bool __read_mostly enable_sgx;
 
+/*
+ * ENCLS's memory operands use a fixed segment (DS) and a fixed
+ * address size based on the mode.  Related prefixes are ignored.
+ */
+static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
+			     int size, int alignment, gva_t *gva)
+{
+	struct kvm_segment s;
+	bool fault;
+
+	/* Skip vmcs.GUEST_DS retrieval for 64-bit mode to avoid VMREADs. */
+	*gva = offset;
+	if (!is_long_mode(vcpu)) {
+		vmx_get_segment(vcpu, &s, VCPU_SREG_DS);
+		*gva += s.base;
+	}
+
+	if (!IS_ALIGNED(*gva, alignment)) {
+		fault = true;
+	} else if (likely(is_long_mode(vcpu))) {
+		fault = is_noncanonical_address(*gva, vcpu);
+	} else {
+		*gva &= 0xffffffff;
+		fault = (s.unusable) ||
+			(s.type != 2 && s.type != 3) ||
+			(*gva > s.limit) ||
+			((s.base != 0 || s.limit != 0xffffffff) &&
+			(((u64)*gva + size - 1) > s.limit + 1));
+	}
+	if (fault)
+		kvm_inject_gp(vcpu, 0);
+	return fault ? -EINVAL : 0;
+}
+
+static void sgx_handle_emulation_failure(struct kvm_vcpu *vcpu, u64 addr,
+					 unsigned int size)
+{
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+	vcpu->run->internal.ndata = 2;
+	vcpu->run->internal.data[0] = addr;
+	vcpu->run->internal.data[1] = size;
+}
+
+static int sgx_read_hva(struct kvm_vcpu *vcpu, unsigned long hva, void *data,
+			unsigned int size)
+{
+	if (__copy_from_user(data, (void __user *)hva, size)) {
+		sgx_handle_emulation_failure(vcpu, hva, size);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int sgx_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t gva, bool write,
+			  gpa_t *gpa)
+{
+	struct x86_exception ex;
+
+	if (write)
+		*gpa = kvm_mmu_gva_to_gpa_write(vcpu, gva, &ex);
+	else
+		*gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, &ex);
+
+	if (*gpa == UNMAPPED_GVA) {
+		kvm_inject_emulated_page_fault(vcpu, &ex);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int sgx_gpa_to_hva(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long *hva)
+{
+	*hva = kvm_vcpu_gfn_to_hva(vcpu, PFN_DOWN(gpa));
+	if (kvm_is_error_hva(*hva)) {
+		sgx_handle_emulation_failure(vcpu, gpa, 1);
+		return -EFAULT;
+	}
+
+	*hva |= gpa & ~PAGE_MASK;
+
+	return 0;
+}
+
+static int sgx_inject_fault(struct kvm_vcpu *vcpu, gva_t gva, int trapnr)
+{
+	struct x86_exception ex;
+
+	/*
+	 * A non-EPCM #PF indicates a bad userspace HVA.  This *should* check
+	 * for PFEC.SGX and not assume any #PF on SGX2 originated in the EPC,
+	 * but the error code isn't (yet) plumbed through the ENCLS helpers.
+	 */
+	if (trapnr == PF_VECTOR && !boot_cpu_has(X86_FEATURE_SGX2)) {
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+		vcpu->run->internal.ndata = 0;
+		return 0;
+	}
+
+	/*
+	 * If the guest thinks it's running on SGX2 hardware, inject an SGX
+	 * #PF if the fault matches an EPCM fault signature (#GP on SGX1,
+	 * #PF on SGX2).  The assumption is that EPCM faults are much more
+	 * likely than a bad userspace address.
+	 */
+	if ((trapnr == PF_VECTOR || !boot_cpu_has(X86_FEATURE_SGX2)) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_SGX2)) {
+		memset(&ex, 0, sizeof(ex));
+		ex.vector = PF_VECTOR;
+		ex.error_code = PFERR_PRESENT_MASK | PFERR_WRITE_MASK |
+				PFERR_SGX_MASK;
+		ex.address = gva;
+		ex.error_code_valid = true;
+		ex.nested_page_fault = false;
+		kvm_inject_page_fault(vcpu, &ex);
+	} else {
+		kvm_inject_gp(vcpu, 0);
+	}
+	return 1;
+}
+
+static int __handle_encls_ecreate(struct kvm_vcpu *vcpu,
+				  struct sgx_pageinfo *pageinfo,
+				  unsigned long secs_hva,
+				  gva_t secs_gva)
+{
+	struct sgx_secs *contents = (struct sgx_secs *)pageinfo->contents;
+	struct kvm_cpuid_entry2 *sgx_12_0, *sgx_12_1;
+	u64 attributes, xfrm, size;
+	u32 miscselect;
+	u8 max_size_log2;
+	int trapnr, ret;
+
+	sgx_12_0 = kvm_find_cpuid_entry(vcpu, 0x12, 0);
+	sgx_12_1 = kvm_find_cpuid_entry(vcpu, 0x12, 1);
+	if (!sgx_12_0 || !sgx_12_1) {
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+		vcpu->run->internal.ndata = 0;
+		return 0;
+	}
+
+	miscselect = contents->miscselect;
+	attributes = contents->attributes;
+	xfrm = contents->xfrm;
+	size = contents->size;
+
+	/* Enforce restriction of access to the PROVISIONKEY. */
+	if (!vcpu->kvm->arch.sgx_provisioning_allowed &&
+	    (attributes & SGX_ATTR_PROVISIONKEY)) {
+		if (sgx_12_1->eax & SGX_ATTR_PROVISIONKEY)
+			pr_warn_once("KVM: SGX PROVISIONKEY advertised but not allowed\n");
+		kvm_inject_gp(vcpu, 0);
+		return 1;
+	}
+
+	/* Enforce CPUID restrictions on MISCSELECT, ATTRIBUTES and XFRM. */
+	if ((u32)miscselect & ~sgx_12_0->ebx ||
+	    (u32)attributes & ~sgx_12_1->eax ||
+	    (u32)(attributes >> 32) & ~sgx_12_1->ebx ||
+	    (u32)xfrm & ~sgx_12_1->ecx ||
+	    (u32)(xfrm >> 32) & ~sgx_12_1->edx) {
+		kvm_inject_gp(vcpu, 0);
+		return 1;
+	}
+
+	/* Enforce CPUID restriction on max enclave size. */
+	max_size_log2 = (attributes & SGX_ATTR_MODE64BIT) ? sgx_12_0->edx >> 8 :
+							    sgx_12_0->edx;
+	if (size >= BIT_ULL(max_size_log2)) {
+		kvm_inject_gp(vcpu, 0);
+		return 1;
+	}
+
+	/*
+	 * sgx_virt_ecreate() returns:
+	 *  1) 0:	ECREATE was successful
+	 *  2) -EFAULT:	ECREATE was run but faulted, and trapnr was set to the
+	 *		exception number.
+	 *  3) -EINVAL:	access_ok() on @secs_hva failed. This should never
+	 *		happen as KVM checks host addresses at memslot creation.
+	 *		sgx_virt_ecreate() has already warned in this case.
+	 */
+	ret = sgx_virt_ecreate(pageinfo, (void __user *)secs_hva, &trapnr);
+	if (!ret)
+		return kvm_skip_emulated_instruction(vcpu);
+	if (ret == -EFAULT)
+		return sgx_inject_fault(vcpu, secs_gva, trapnr);
+
+	return ret;
+}
+
+static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
+{
+	gva_t pageinfo_gva, secs_gva;
+	gva_t metadata_gva, contents_gva;
+	gpa_t metadata_gpa, contents_gpa, secs_gpa;
+	unsigned long metadata_hva, contents_hva, secs_hva;
+	struct sgx_pageinfo pageinfo;
+	struct sgx_secs *contents;
+	struct x86_exception ex;
+	int r;
+
+	if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 32, 32, &pageinfo_gva) ||
+	    sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva))
+		return 1;
+
+	/*
+	 * Copy the PAGEINFO to local memory, its pointers need to be
+	 * translated, i.e. we need to do a deep copy/translate.
+	 */
+	r = kvm_read_guest_virt(vcpu, pageinfo_gva, &pageinfo,
+				sizeof(pageinfo), &ex);
+	if (r == X86EMUL_PROPAGATE_FAULT) {
+		kvm_inject_emulated_page_fault(vcpu, &ex);
+		return 1;
+	} else if (r != X86EMUL_CONTINUE) {
+		sgx_handle_emulation_failure(vcpu, pageinfo_gva,
+					     sizeof(pageinfo));
+		return 0;
+	}
+
+	if (sgx_get_encls_gva(vcpu, pageinfo.metadata, 64, 64, &metadata_gva) ||
+	    sgx_get_encls_gva(vcpu, pageinfo.contents, 4096, 4096,
+			      &contents_gva))
+		return 1;
+
+	/*
+	 * Translate the SECINFO, SOURCE and SECS pointers from GVA to GPA.
+	 * Resume the guest on failure to inject a #PF.
+	 */
+	if (sgx_gva_to_gpa(vcpu, metadata_gva, false, &metadata_gpa) ||
+	    sgx_gva_to_gpa(vcpu, contents_gva, false, &contents_gpa) ||
+	    sgx_gva_to_gpa(vcpu, secs_gva, true, &secs_gpa))
+		return 1;
+
+	/*
+	 * ...and then to HVA.  The order of accesses isn't architectural, i.e.
+	 * KVM doesn't have to fully process one address at a time.  Exit to
+	 * userspace if a GPA is invalid.
+	 */
+	if (sgx_gpa_to_hva(vcpu, metadata_gpa, &metadata_hva) ||
+	    sgx_gpa_to_hva(vcpu, contents_gpa, &contents_hva) ||
+	    sgx_gpa_to_hva(vcpu, secs_gpa, &secs_hva))
+		return 0;
+
+	/*
+	 * Copy contents into kernel memory to prevent TOCTOU attack. E.g. the
+	 * guest could do ECREATE w/ SECS.SGX_ATTR_PROVISIONKEY=0, and
+	 * simultaneously set SGX_ATTR_PROVISIONKEY to bypass the check to
+	 * enforce restriction of access to the PROVISIONKEY.
+	 */
+	contents = (struct sgx_secs *)__get_free_page(GFP_KERNEL_ACCOUNT);
+	if (!contents)
+		return -ENOMEM;
+
+	/* Exit to userspace if copying from a host userspace address fails. */
+	if (sgx_read_hva(vcpu, contents_hva, (void *)contents, PAGE_SIZE)) {
+		free_page((unsigned long)contents);
+		return 0;
+	}
+
+	pageinfo.metadata = metadata_hva;
+	pageinfo.contents = (u64)contents;
+
+	r = __handle_encls_ecreate(vcpu, &pageinfo, secs_hva, secs_gva);
+
+	free_page((unsigned long)contents);
+
+	return r;
+}
+
 static inline bool encls_leaf_enabled_in_guest(struct kvm_vcpu *vcpu, u32 leaf)
 {
 	if (!enable_sgx || !guest_cpuid_has(vcpu, X86_FEATURE_SGX))
@@ -41,6 +314,8 @@ int handle_encls(struct kvm_vcpu *vcpu)
 	} else if (!sgx_enabled_in_guest_bios(vcpu)) {
 		kvm_inject_gp(vcpu, 0);
 	} else {
+		if (leaf == ECREATE)
+			return handle_encls_ecreate(vcpu);
 		WARN(1, "KVM: unexpected exit on ENCLS[%u]", leaf);
 		vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
 		vcpu->run->hw.hardware_exit_reason = EXIT_REASON_ENCLS;
-- 
2.30.2



* [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (6 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 07/11] KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-17 13:55   ` Paolo Bonzini
  2021-04-12  4:21 ` [PATCH v5 09/11] KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC) Kai Huang
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Emulate the four Launch Enclave public key hash MSRs (LE hash MSRs) that
exist on CPUs that support SGX Launch Control (LC).  SGX LC modifies the
behavior of ENCLS[EINIT] to use the LE hash MSRs when verifying the key
used to sign an enclave.  On CPUs without LC support, the LE hash is
hardwired into the CPU to an Intel controlled key (the Intel key is also
the reset value of the LE hash MSRs). Track the guest's desired hash so
that a future patch can stuff the hash into the hardware MSRs when
executing EINIT on behalf of the guest, when those MSRs are writable in
the host.

Note, KVM allows writes to the LE hash MSRs if IA32_FEATURE_CONTROL is
unlocked.  This is technically not architectural behavior, but it's
roughly equivalent to the arch behavior of the MSRs being writable prior
to activating SGX[1].  Emulating SGX activation is feasible, but adds no
tangible benefits and would just create extra work for KVM and guest
firmware.

[1] SGX related bits in IA32_FEATURE_CONTROL cannot be set until SGX
    is activated, e.g. by firmware.  SGX activation is triggered by
    setting bit 0 in MSR 0x7a.  Until SGX is activated, the LE hash
    MSRs are writable, e.g. to allow firmware to lock down the LE
    root key with a non-Intel value.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
v4->v5:
 - rebase to latest kvm/queue.

---
 arch/x86/kvm/vmx/sgx.c | 35 +++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/sgx.h |  6 ++++++
 arch/x86/kvm/vmx/vmx.c | 20 ++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h |  3 +++
 4 files changed, 64 insertions(+)

diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 2cf9d7cf818c..d53e6b17b320 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -11,6 +11,9 @@
 
 bool __read_mostly enable_sgx;
 
+/* Initial value of guest's virtual SGX_LEPUBKEYHASHn MSRs */
+static u64 sgx_pubkey_hash[4] __ro_after_init;
+
 /*
  * ENCLS's memory operands use a fixed segment (DS) and a fixed
  * address size based on the mode.  Related prefixes are ignored.
@@ -323,3 +326,35 @@ int handle_encls(struct kvm_vcpu *vcpu)
 	}
 	return 1;
 }
+
+void setup_default_sgx_lepubkeyhash(void)
+{
+	/*
+	 * Use Intel's default value for Skylake hardware if Launch Control is
+	 * not supported, i.e. Intel's hash is hardcoded into silicon, or if
+	 * Launch Control is supported and enabled, i.e. mimic the reset value
+	 * and let the guest write the MSRs at will.  If Launch Control is
+	 * supported but disabled, then use the current MSR values as the hash
+	 * MSRs exist but are read-only (locked and not writable).
+	 */
+	if (!enable_sgx || boot_cpu_has(X86_FEATURE_SGX_LC) ||
+	    rdmsrl_safe(MSR_IA32_SGXLEPUBKEYHASH0, &sgx_pubkey_hash[0])) {
+		sgx_pubkey_hash[0] = 0xa6053e051270b7acULL;
+		sgx_pubkey_hash[1] = 0x6cfbe8ba8b3b413dULL;
+		sgx_pubkey_hash[2] = 0xc4916d99f2b3735dULL;
+		sgx_pubkey_hash[3] = 0xd4f8c05909f9bb3bULL;
+	} else {
+		/* MSR_IA32_SGXLEPUBKEYHASH0 is read above */
+		rdmsrl(MSR_IA32_SGXLEPUBKEYHASH1, sgx_pubkey_hash[1]);
+		rdmsrl(MSR_IA32_SGXLEPUBKEYHASH2, sgx_pubkey_hash[2]);
+		rdmsrl(MSR_IA32_SGXLEPUBKEYHASH3, sgx_pubkey_hash[3]);
+	}
+}
+
+void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	memcpy(vmx->msr_ia32_sgxlepubkeyhash, sgx_pubkey_hash,
+	       sizeof(sgx_pubkey_hash));
+}
diff --git a/arch/x86/kvm/vmx/sgx.h b/arch/x86/kvm/vmx/sgx.h
index 6e17ecd4aca3..6502fa52c7e9 100644
--- a/arch/x86/kvm/vmx/sgx.h
+++ b/arch/x86/kvm/vmx/sgx.h
@@ -8,8 +8,14 @@
 extern bool __read_mostly enable_sgx;
 
 int handle_encls(struct kvm_vcpu *vcpu);
+
+void setup_default_sgx_lepubkeyhash(void);
+void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu);
 #else
 #define enable_sgx 0
+
+static inline void setup_default_sgx_lepubkeyhash(void) { }
+static inline void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu) { }
 #endif
 
 #endif /* __KVM_X86_SGX_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d21036457c40..ccf49f596f28 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1922,6 +1922,13 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_FEAT_CTL:
 		msr_info->data = vmx->msr_ia32_feature_control;
 		break;
+	case MSR_IA32_SGXLEPUBKEYHASH0 ... MSR_IA32_SGXLEPUBKEYHASH3:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC))
+			return 1;
+		msr_info->data = to_vmx(vcpu)->msr_ia32_sgxlepubkeyhash
+			[msr_info->index - MSR_IA32_SGXLEPUBKEYHASH0];
+		break;
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		if (!nested_vmx_allowed(vcpu))
 			return 1;
@@ -2216,6 +2223,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (msr_info->host_initiated && data == 0)
 			vmx_leave_nested(vcpu);
 		break;
+	case MSR_IA32_SGXLEPUBKEYHASH0 ... MSR_IA32_SGXLEPUBKEYHASH3:
+		if (!msr_info->host_initiated &&
+		    (!guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC) ||
+		    ((vmx->msr_ia32_feature_control & FEAT_CTL_LOCKED) &&
+		    !(vmx->msr_ia32_feature_control & FEAT_CTL_SGX_LC_ENABLED))))
+			return 1;
+		vmx->msr_ia32_sgxlepubkeyhash
+			[msr_index - MSR_IA32_SGXLEPUBKEYHASH0] = data;
+		break;
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		if (!msr_info->host_initiated)
 			return 1; /* they are read-only */
@@ -6978,6 +6994,8 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 	else
 		memset(&vmx->nested.msrs, 0, sizeof(vmx->nested.msrs));
 
+	vcpu_setup_sgx_lepubkeyhash(vcpu);
+
 	vmx->nested.posted_intr_nv = -1;
 	vmx->nested.current_vmptr = -1ull;
 
@@ -7915,6 +7933,8 @@ static __init int hardware_setup(void)
 	if (!enable_ept || !cpu_has_vmx_intel_pt())
 		pt_mode = PT_MODE_SYSTEM;
 
+	setup_default_sgx_lepubkeyhash();
+
 	if (nested) {
 		nested_vmx_setup_ctls_msrs(&vmcs_config.nested,
 					   vmx_capability.ept);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 7886a08505cc..19fe09fad2fe 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -325,6 +325,9 @@ struct vcpu_vmx {
 	 */
 	u64 msr_ia32_feature_control;
 	u64 msr_ia32_feature_control_valid_bits;
+	/* SGX Launch Control public key hash */
+	u64 msr_ia32_sgxlepubkeyhash[4];
+
 #if IS_ENABLED(CONFIG_HYPERV)
 	u64 hv_root_ept;
 #endif
-- 
2.30.2



* [PATCH v5 09/11] KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (7 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  4:21 ` [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC Kai Huang
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add a VM-Exit handler to trap-and-execute EINIT when SGX LC is enabled
in the host.  When SGX LC is enabled, the host kernel may rewrite the
hardware values at will, e.g. to launch enclaves with different signers,
thus KVM needs to intercept EINIT to ensure it is executed with the
correct LE hash (even if the guest sees a hardwired hash).

Switching the LE hash MSRs on VM-Enter/VM-Exit is not a viable option as
writing the MSRs is prohibitively expensive, e.g. on SKL hardware each
WRMSR is ~400 cycles.  And because EINIT takes tens of thousands of
cycles to execute, the ~1500 cycle overhead to trap-and-execute EINIT is
unlikely to be noticed by the guest, let alone impact its overall SGX
performance.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
v4->v5:
 - Improve comment around sgx_virt_einit().

---
 arch/x86/kvm/vmx/sgx.c | 64 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index d53e6b17b320..2eed5da91698 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -287,6 +287,68 @@ static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
 	return r;
 }
 
+static int handle_encls_einit(struct kvm_vcpu *vcpu)
+{
+	unsigned long sig_hva, secs_hva, token_hva, rflags;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t sig_gva, secs_gva, token_gva;
+	gpa_t sig_gpa, secs_gpa, token_gpa;
+	int ret, trapnr;
+
+	if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 1808, 4096, &sig_gva) ||
+	    sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva) ||
+	    sgx_get_encls_gva(vcpu, kvm_rdx_read(vcpu), 304, 512, &token_gva))
+		return 1;
+
+	/*
+	 * Translate the SIGSTRUCT, SECS and TOKEN pointers from GVA to GPA.
+	 * Resume the guest on failure to inject a #PF.
+	 */
+	if (sgx_gva_to_gpa(vcpu, sig_gva, false, &sig_gpa) ||
+	    sgx_gva_to_gpa(vcpu, secs_gva, true, &secs_gpa) ||
+	    sgx_gva_to_gpa(vcpu, token_gva, false, &token_gpa))
+		return 1;
+
+	/*
+	 * ...and then to HVA.  The order of accesses isn't architectural, i.e.
+	 * KVM doesn't have to fully process one address at a time.  Exit to
+	 * userspace if a GPA is invalid.  Note, all structures are aligned and
+	 * cannot split pages.
+	 */
+	if (sgx_gpa_to_hva(vcpu, sig_gpa, &sig_hva) ||
+	    sgx_gpa_to_hva(vcpu, secs_gpa, &secs_hva) ||
+	    sgx_gpa_to_hva(vcpu, token_gpa, &token_hva))
+		return 0;
+
+	ret = sgx_virt_einit((void __user *)sig_hva, (void __user *)token_hva,
+			     (void __user *)secs_hva,
+			     vmx->msr_ia32_sgxlepubkeyhash, &trapnr);
+
+	if (ret == -EFAULT)
+		return sgx_inject_fault(vcpu, secs_gva, trapnr);
+
+	/*
+	 * sgx_virt_einit() returns -EINVAL when access_ok() fails on @sig_hva,
+	 * @token_hva or @secs_hva. This should never happen as KVM checks host
+	 * addresses at memslot creation. sgx_virt_einit() has already warned
+	 * in this case, so just return.
+	 */
+	if (ret < 0)
+		return ret;
+
+	rflags = vmx_get_rflags(vcpu) & ~(X86_EFLAGS_CF | X86_EFLAGS_PF |
+					  X86_EFLAGS_AF | X86_EFLAGS_SF |
+					  X86_EFLAGS_OF);
+	if (ret)
+		rflags |= X86_EFLAGS_ZF;
+	else
+		rflags &= ~X86_EFLAGS_ZF;
+	vmx_set_rflags(vcpu, rflags);
+
+	kvm_rax_write(vcpu, ret);
+	return kvm_skip_emulated_instruction(vcpu);
+}
+
 static inline bool encls_leaf_enabled_in_guest(struct kvm_vcpu *vcpu, u32 leaf)
 {
 	if (!enable_sgx || !guest_cpuid_has(vcpu, X86_FEATURE_SGX))
@@ -319,6 +381,8 @@ int handle_encls(struct kvm_vcpu *vcpu)
 	} else {
 		if (leaf == ECREATE)
 			return handle_encls_ecreate(vcpu);
+		if (leaf == EINIT)
+			return handle_encls_einit(vcpu);
 		WARN(1, "KVM: unexpected exit on ENCLS[%u]", leaf);
 		vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
 		vcpu->run->hw.hardware_exit_reason = EXIT_REASON_ENCLS;
-- 
2.30.2



* [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (8 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 09/11] KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC) Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12  9:51   ` kernel test robot
  2021-04-17 14:11   ` Paolo Bonzini
  2021-04-12  4:21 ` [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute Kai Huang
                   ` (2 subsequent siblings)
  12 siblings, 2 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Enable SGX virtualization now that KVM has the VM-Exit handlers needed
to trap-and-execute ENCLS to ensure correctness and/or enforce the CPU
model exposed to the guest.  Add a KVM module param, "sgx", to allow an
admin to disable SGX virtualization independent of the kernel.

When supported in hardware and the kernel, advertise SGX1, SGX2 and SGX
LC to userspace via CPUID and wire up the ENCLS_EXITING bitmap based on
the guest's SGX capabilities, i.e. to allow ENCLS to be executed in an
SGX-enabled guest.  With the exception of the provision key, all SGX
attribute bits may be exposed to the guest.  Guest access to the
provision key, which is controlled via securityfs, will be added in a
future patch.

Note, KVM does not yet support exposing ENCLS_C leafs or ENCLV leafs.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/kvm/cpuid.c      | 57 +++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/nested.c | 26 +++++++++++--
 arch/x86/kvm/vmx/nested.h |  5 +++
 arch/x86/kvm/vmx/sgx.c    | 80 ++++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/sgx.h    | 13 +++++++
 arch/x86/kvm/vmx/vmcs12.c |  1 +
 arch/x86/kvm/vmx/vmcs12.h |  4 +-
 arch/x86/kvm/vmx/vmx.c    | 35 ++++++++++++++++-
 8 files changed, 212 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a0e7be9ed449..a0d45607b702 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -18,6 +18,7 @@
 #include <asm/processor.h>
 #include <asm/user.h>
 #include <asm/fpu/xstate.h>
+#include <asm/sgx.h>
 #include "cpuid.h"
 #include "lapic.h"
 #include "mmu.h"
@@ -171,6 +172,21 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vcpu->arch.guest_supported_xcr0 =
 			(best->eax | ((u64)best->edx << 32)) & supported_xcr0;
 
+	/*
+	 * Bits 127:0 of the allowed SECS.ATTRIBUTES (CPUID.0x12.0x1) enumerate
+	 * the supported XSAVE Feature Request Mask (XFRM), i.e. the enclave's
+	 * requested XCR0 value.  The enclave's XFRM must be a subset of XCR0
+	 * at the time of EENTER, thus adjust the allowed XFRM by the guest's
+	 * supported XCR0.  Similar to XCR0 handling, FP and SSE are forced to
+	 * '1' even on CPUs that don't support XSAVE.
+	 */
+	best = kvm_find_cpuid_entry(vcpu, 0x12, 0x1);
+	if (best) {
+		best->ecx &= vcpu->arch.guest_supported_xcr0 & 0xffffffff;
+		best->edx &= vcpu->arch.guest_supported_xcr0 >> 32;
+		best->ecx |= XFEATURE_MASK_FPSSE;
+	}
+
 	kvm_update_pv_runtime(vcpu);
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
@@ -429,7 +445,7 @@ void kvm_set_cpu_caps(void)
 	);
 
 	kvm_cpu_cap_mask(CPUID_7_0_EBX,
-		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
+		F(FSGSBASE) | F(SGX) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
 		F(BMI2) | F(ERMS) | F(INVPCID) | F(RTM) | 0 /*MPX*/ | F(RDSEED) |
 		F(ADX) | F(SMAP) | F(AVX512IFMA) | F(AVX512F) | F(AVX512PF) |
 		F(AVX512ER) | F(AVX512CD) | F(CLFLUSHOPT) | F(CLWB) | F(AVX512DQ) |
@@ -440,7 +456,8 @@ void kvm_set_cpu_caps(void)
 		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
-		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/
+		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ |
+		F(SGX_LC)
 	);
 	/* Set LA57 based on hardware capability. */
 	if (cpuid_ecx(7) & F(LA57))
@@ -479,6 +496,10 @@ void kvm_set_cpu_caps(void)
 		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES)
 	);
 
+	kvm_cpu_cap_init(CPUID_12_EAX,
+		SF(SGX1) | SF(SGX2)
+	);
+
 	kvm_cpu_cap_mask(CPUID_8000_0001_ECX,
 		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
 		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
@@ -800,6 +821,38 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			entry->edx = 0;
 		}
 		break;
+	case 0x12:
+		/* Intel SGX */
+		if (!kvm_cpu_cap_has(X86_FEATURE_SGX)) {
+			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+			break;
+		}
+
+		/*
+		 * Index 0: Sub-features, MISCSELECT (a.k.a extended features)
+		 * and max enclave sizes.   The SGX sub-features and MISCSELECT
+		 * are restricted by kernel and KVM capabilities (like most
+		 * feature flags), while enclave size is unrestricted.
+		 */
+		cpuid_entry_override(entry, CPUID_12_EAX);
+		entry->ebx &= SGX_MISC_EXINFO;
+
+		entry = do_host_cpuid(array, function, 1);
+		if (!entry)
+			goto out;
+
+		/*
+		 * Index 1: SECS.ATTRIBUTES.  ATTRIBUTES are restricted a la
+		 * feature flags.  Advertise all supported flags, including
+		 * privileged attributes that require explicit opt-in from
+		 * userspace.  ATTRIBUTES.XFRM is not adjusted as userspace is
+		 * expected to derive it from supported XCR0.
+		 */
+		entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
+			      /* PROVISIONKEY | */ SGX_ATTR_EINITTOKENKEY |
+			      SGX_ATTR_KSS;
+		entry->ebx &= 0;
+		break;
 	/* Intel PT */
 	case 0x14:
 		if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT)) {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index baaea9bbb8e5..8de1920da4f8 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -11,6 +11,7 @@
 #include "mmu.h"
 #include "nested.h"
 #include "pmu.h"
+#include "sgx.h"
 #include "trace.h"
 #include "vmx.h"
 #include "x86.h"
@@ -2300,6 +2301,9 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
 		    exec_control &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
 
+		if (exec_control & SECONDARY_EXEC_ENCLS_EXITING)
+			vmx_write_encls_bitmap(&vmx->vcpu, vmcs12);
+
 		secondary_exec_controls_set(vmx, exec_control);
 	}
 
@@ -5716,6 +5720,20 @@ static bool nested_vmx_exit_handled_cr(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+static bool nested_vmx_exit_handled_encls(struct kvm_vcpu *vcpu,
+					  struct vmcs12 *vmcs12)
+{
+	u32 encls_leaf;
+
+	if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
+		return false;
+
+	encls_leaf = kvm_rax_read(vcpu);
+	if (encls_leaf > 62)
+		encls_leaf = 63;
+	return vmcs12->encls_exiting_bitmap & BIT_ULL(encls_leaf);
+}
+
 static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
 	struct vmcs12 *vmcs12, gpa_t bitmap)
 {
@@ -5812,9 +5830,6 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
 	case EXIT_REASON_VMFUNC:
 		/* VM functions are emulated through L2->L0 vmexits. */
 		return true;
-	case EXIT_REASON_ENCLS:
-		/* SGX is never exposed to L1 */
-		return true;
 	default:
 		break;
 	}
@@ -5938,6 +5953,8 @@ static bool nested_vmx_l1_wants_exit(struct kvm_vcpu *vcpu,
 	case EXIT_REASON_TPAUSE:
 		return nested_cpu_has2(vmcs12,
 			SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE);
+	case EXIT_REASON_ENCLS:
+		return nested_vmx_exit_handled_encls(vcpu, vmcs12);
 	default:
 		return true;
 	}
@@ -6513,6 +6530,9 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 		msrs->secondary_ctls_high |=
 			SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 
+	if (enable_sgx)
+		msrs->secondary_ctls_high |= SECONDARY_EXEC_ENCLS_EXITING;
+
 	/* miscellaneous data */
 	rdmsr(MSR_IA32_VMX_MISC,
 		msrs->misc_low,
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 197148d76b8f..184418baeb3c 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -244,6 +244,11 @@ static inline bool nested_exit_on_intr(struct kvm_vcpu *vcpu)
 		PIN_BASED_EXT_INTR_MASK;
 }
 
+static inline bool nested_cpu_has_encls_exit(struct vmcs12 *vmcs12)
+{
+	return nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING);
+}
+
 /*
  * if fixed0[i] == 1: val[i] must be 1
  * if fixed1[i] == 0: val[i] must be 0
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 2eed5da91698..6693ebdc0770 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -5,11 +5,13 @@
 
 #include "cpuid.h"
 #include "kvm_cache_regs.h"
+#include "nested.h"
 #include "sgx.h"
 #include "vmx.h"
 #include "x86.h"
 
-bool __read_mostly enable_sgx;
+bool __read_mostly enable_sgx = 1;
+module_param_named(sgx, enable_sgx, bool, 0444);
 
 /* Initial value of guest's virtual SGX_LEPUBKEYHASHn MSRs */
 static u64 sgx_pubkey_hash[4] __ro_after_init;
@@ -422,3 +424,79 @@ void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu)
 	memcpy(vmx->msr_ia32_sgxlepubkeyhash, sgx_pubkey_hash,
 	       sizeof(sgx_pubkey_hash));
 }
+
+/*
+ * ECREATE must be intercepted to enforce MISCSELECT, ATTRIBUTES and XFRM
+ * restrictions if the guest's allowed-1 settings diverge from hardware.
+ */
+static bool sgx_intercept_encls_ecreate(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpuid_entry2 *guest_cpuid;
+	u32 eax, ebx, ecx, edx;
+
+	if (!vcpu->kvm->arch.sgx_provisioning_allowed)
+		return true;
+
+	guest_cpuid = kvm_find_cpuid_entry(vcpu, 0x12, 0);
+	if (!guest_cpuid)
+		return true;
+
+	cpuid_count(0x12, 0, &eax, &ebx, &ecx, &edx);
+	if (guest_cpuid->ebx != ebx || guest_cpuid->edx != edx)
+		return true;
+
+	guest_cpuid = kvm_find_cpuid_entry(vcpu, 0x12, 1);
+	if (!guest_cpuid)
+		return true;
+
+	cpuid_count(0x12, 1, &eax, &ebx, &ecx, &edx);
+	if (guest_cpuid->eax != eax || guest_cpuid->ebx != ebx ||
+	    guest_cpuid->ecx != ecx || guest_cpuid->edx != edx)
+		return true;
+
+	return false;
+}
+
+void vmx_write_encls_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+{
+	/*
+	 * There is no software enable bit for SGX that is virtualized by
+	 * hardware, e.g. there's no CR4.SGXE, so when SGX is disabled in the
+	 * guest (either by the host or by the guest's BIOS) but enabled in the
+	 * host, trap all ENCLS leafs and inject #UD/#GP as needed to emulate
+	 * the expected system behavior for ENCLS.
+	 */
+	u64 bitmap = -1ull;
+
+	/* Nothing to do if hardware doesn't support SGX */
+	if (!cpu_has_vmx_encls_vmexit())
+		return;
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_SGX) &&
+	    sgx_enabled_in_guest_bios(vcpu)) {
+		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX1)) {
+			bitmap &= ~GENMASK_ULL(ETRACK, ECREATE);
+			if (sgx_intercept_encls_ecreate(vcpu))
+				bitmap |= (1 << ECREATE);
+		}
+
+		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX2))
+			bitmap &= ~GENMASK_ULL(EMODT, EAUG);
+
+		/*
+		 * Trap and execute EINIT if launch control is enabled in the
+		 * host using the guest's values for launch control MSRs, even
+		 * if the guest's values are fixed to hardware default values.
+		 * The MSRs are not loaded/saved on VM-Enter/VM-Exit as writing
+		 * the MSRs is extraordinarily expensive.
+		 */
+		if (boot_cpu_has(X86_FEATURE_SGX_LC))
+			bitmap |= (1 << EINIT);
+
+		if (!vmcs12 && is_guest_mode(vcpu))
+			vmcs12 = get_vmcs12(vcpu);
+		if (vmcs12 && nested_cpu_has_encls_exit(vmcs12))
+			bitmap |= vmcs12->encls_exiting_bitmap;
+	}
+	vmcs_write64(ENCLS_EXITING_BITMAP, bitmap);
+}
diff --git a/arch/x86/kvm/vmx/sgx.h b/arch/x86/kvm/vmx/sgx.h
index 6502fa52c7e9..a400888b376d 100644
--- a/arch/x86/kvm/vmx/sgx.h
+++ b/arch/x86/kvm/vmx/sgx.h
@@ -4,6 +4,9 @@
 
 #include <linux/kvm_host.h>
 
+#include "capabilities.h"
+#include "vmx_ops.h"
+
 #ifdef CONFIG_X86_SGX_KVM
 extern bool __read_mostly enable_sgx;
 
@@ -11,11 +14,21 @@ int handle_encls(struct kvm_vcpu *vcpu);
 
 void setup_default_sgx_lepubkeyhash(void);
 void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu);
+
+void vmx_write_encls_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12);
 #else
 #define enable_sgx 0
 
 static inline void setup_default_sgx_lepubkeyhash(void) { }
 static inline void vcpu_setup_sgx_lepubkeyhash(struct kvm_vcpu *vcpu) { }
+
+static inline void vmx_write_encls_bitmap(struct kvm_vcpu *vcpu,
+					  struct vmcs12 *vmcs12)
+{
+	/* Nothing to do if hardware doesn't support SGX */
+	if (cpu_has_vmx_encls_vmexit())
+		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
+}
 #endif
 
 #endif /* __KVM_X86_SGX_H */
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index c8e51c004f78..034adb6404dc 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -50,6 +50,7 @@ const unsigned short vmcs_field_to_offset_table[] = {
 	FIELD64(VMREAD_BITMAP, vmread_bitmap),
 	FIELD64(VMWRITE_BITMAP, vmwrite_bitmap),
 	FIELD64(XSS_EXIT_BITMAP, xss_exit_bitmap),
+	FIELD64(ENCLS_EXITING_BITMAP, encls_exiting_bitmap),
 	FIELD64(GUEST_PHYSICAL_ADDRESS, guest_physical_address),
 	FIELD64(VMCS_LINK_POINTER, vmcs_link_pointer),
 	FIELD64(GUEST_IA32_DEBUGCTL, guest_ia32_debugctl),
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 80232daf00ff..13494956d0e9 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -69,7 +69,8 @@ struct __packed vmcs12 {
 	u64 vm_function_control;
 	u64 eptp_list_address;
 	u64 pml_address;
-	u64 padding64[3]; /* room for future expansion */
+	u64 encls_exiting_bitmap;
+	u64 padding64[2]; /* room for future expansion */
 	/*
 	 * To allow migration of L1 (complete with its L2 guests) between
 	 * machines of different natural widths (32 or 64 bit), we cannot have
@@ -256,6 +257,7 @@ static inline void vmx_check_vmcs12_offsets(void)
 	CHECK_OFFSET(vm_function_control, 296);
 	CHECK_OFFSET(eptp_list_address, 304);
 	CHECK_OFFSET(pml_address, 312);
+	CHECK_OFFSET(encls_exiting_bitmap, 320);
 	CHECK_OFFSET(cr0_guest_host_mask, 344);
 	CHECK_OFFSET(cr4_guest_host_mask, 352);
 	CHECK_OFFSET(cr0_read_shadow, 360);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ccf49f596f28..184dbfbca348 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2222,6 +2222,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vmx->msr_ia32_feature_control = data;
 		if (msr_info->host_initiated && data == 0)
 			vmx_leave_nested(vcpu);
+
+		/* SGX may be enabled/disabled by guest's firmware */
+		vmx_write_encls_bitmap(vcpu, NULL);
 		break;
 	case MSR_IA32_SGXLEPUBKEYHASH0 ... MSR_IA32_SGXLEPUBKEYHASH3:
 		if (!msr_info->host_initiated &&
@@ -4377,6 +4380,15 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!vcpu->kvm->arch.bus_lock_detection_enabled)
 		exec_control &= ~SECONDARY_EXEC_BUS_LOCK_DETECTION;
 
+	if (cpu_has_vmx_encls_vmexit() && nested) {
+		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
+			vmx->nested.msrs.secondary_ctls_high |=
+				SECONDARY_EXEC_ENCLS_EXITING;
+		else
+			vmx->nested.msrs.secondary_ctls_high &=
+				~SECONDARY_EXEC_ENCLS_EXITING;
+	}
+
 	vmx->secondary_exec_control = exec_control;
 }
 
@@ -4467,8 +4479,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 		vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
 	}
 
-	if (cpu_has_vmx_encls_vmexit())
-		vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
+	vmx_write_encls_bitmap(&vmx->vcpu, NULL);
 
 	if (vmx_pt_mode_is_host_guest()) {
 		memset(&vmx->pt_desc, 0, sizeof(vmx->pt_desc));
@@ -7325,6 +7336,19 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	set_cr4_guest_host_mask(vmx);
 
+	vmx_write_encls_bitmap(vcpu, NULL);
+	if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
+		vmx->msr_ia32_feature_control_valid_bits |= FEAT_CTL_SGX_ENABLED;
+	else
+		vmx->msr_ia32_feature_control_valid_bits &= ~FEAT_CTL_SGX_ENABLED;
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_SGX_LC))
+		vmx->msr_ia32_feature_control_valid_bits |=
+			FEAT_CTL_SGX_LC_ENABLED;
+	else
+		vmx->msr_ia32_feature_control_valid_bits &=
+			~FEAT_CTL_SGX_LC_ENABLED;
+
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	vmx_update_exception_bitmap(vcpu);
 }
@@ -7345,6 +7369,13 @@ static __init void vmx_set_cpu_caps(void)
 	if (vmx_pt_mode_is_host_guest())
 		kvm_cpu_cap_check_and_set(X86_FEATURE_INTEL_PT);
 
+	if (!enable_sgx) {
+		kvm_cpu_cap_clear(X86_FEATURE_SGX);
+		kvm_cpu_cap_clear(X86_FEATURE_SGX_LC);
+		kvm_cpu_cap_clear(X86_FEATURE_SGX1);
+		kvm_cpu_cap_clear(X86_FEATURE_SGX2);
+	}
+
 	if (vmx_umip_emulated())
 		kvm_cpu_cap_set(X86_FEATURE_UMIP);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (9 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC Kai Huang
@ 2021-04-12  4:21 ` Kai Huang
  2021-04-12 11:28   ` kernel test robot
  2021-04-13 14:51 ` [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Paolo Bonzini
  2021-04-17 14:15 ` Paolo Bonzini
  12 siblings, 1 reply; 26+ messages in thread
From: Kai Huang @ 2021-04-12  4:21 UTC (permalink / raw)
  To: kvm, linux-sgx
  Cc: seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang, Andy Lutomirski, Kai Huang

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add a capability, KVM_CAP_SGX_ATTRIBUTE, that can be used by userspace
to grant a VM access to a privileged attribute, with args[0] holding a
file handle to a valid SGX attribute file.

The SGX subsystem restricts access to a subset of enclave attributes to
provide additional security for an uncompromised kernel, e.g. to prevent
malware from using the PROVISIONKEY to ensure its nodes are running
inside a genuine SGX enclave and/or to obtain a stable fingerprint.

To prevent userspace from circumventing such restrictions by running an
enclave in a VM, KVM restricts guest access to privileged attributes by
default.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---
v4->v5:
 - rebase to latest kvm/queue.

---
 Documentation/virt/kvm/api.rst | 23 +++++++++++++++++++++++
 arch/x86/kvm/cpuid.c           |  2 +-
 arch/x86/kvm/x86.c             | 21 +++++++++++++++++++++
 include/uapi/linux/kvm.h       |  1 +
 4 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 2c4253718881..1c073588cf0b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6246,6 +6246,29 @@ the two vms from accidentally clobbering each other through interrupts and
 MSRs.
 
 
+7.25 KVM_CAP_SGX_ATTRIBUTE
+--------------------------
+
+:Architectures: x86
+:Target: VM
+:Parameters: args[0] is a file handle of a SGX attribute file in securityfs
+:Returns: 0 on success, -EINVAL if the file handle is invalid or if a requested
+          attribute is not supported by KVM.
+
+KVM_CAP_SGX_ATTRIBUTE enables a userspace VMM to grant a VM access to one or
+more privileged enclave attributes.  args[0] must hold a file handle to a valid
+SGX attribute file corresponding to an attribute that is supported/restricted
+by KVM (currently only PROVISIONKEY).
+
+The SGX subsystem restricts access to a subset of enclave attributes to provide
+additional security for an uncompromised kernel, e.g. use of the PROVISIONKEY
+is restricted to deter malware from using the PROVISIONKEY to obtain a stable
+system fingerprint.  To prevent userspace from circumventing such restrictions
+by running an enclave in a VM, KVM prevents access to privileged attributes by
+default.
+
+See Documentation/x86/sgx/2.Kernel-internals.rst for more details.
+
 8. Other capabilities.
 ======================
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index a0d45607b702..6dc12d949f86 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -849,7 +849,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		 * expected to derive it from supported XCR0.
 		 */
 		entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
-			      /* PROVISIONKEY | */ SGX_ATTR_EINITTOKENKEY |
+			      SGX_ATTR_PROVISIONKEY | SGX_ATTR_EINITTOKENKEY |
 			      SGX_ATTR_KSS;
 		entry->ebx &= 0;
 		break;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9600540508e..aab07334e1d4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -75,6 +75,7 @@
 #include <asm/tlbflush.h>
 #include <asm/intel_pt.h>
 #include <asm/emulate_prefix.h>
+#include <asm/sgx.h>
 #include <clocksource/hyperv_timer.h>
 
 #define CREATE_TRACE_POINTS
@@ -3803,6 +3804,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_X86_MSR_FILTER:
 	case KVM_CAP_ENFORCE_PV_FEATURE_CPUID:
 	case KVM_CAP_VM_COPY_ENC_CONTEXT_FROM:
+#ifdef CONFIG_X86_SGX_KVM
+	case KVM_CAP_SGX_ATTRIBUTE:
+#endif
 		r = 1;
 		break;
 #ifdef CONFIG_KVM_XEN
@@ -5393,6 +5397,23 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (kvm_x86_ops.vm_copy_enc_context_from)
 			r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
 		return r;
+#ifdef CONFIG_X86_SGX_KVM
+	case KVM_CAP_SGX_ATTRIBUTE: {
+		unsigned long allowed_attributes = 0;
+
+		r = sgx_set_attribute(&allowed_attributes, cap->args[0]);
+		if (r)
+			break;
+
+		/* KVM only supports the PROVISIONKEY privileged attribute. */
+		if ((allowed_attributes & SGX_ATTR_PROVISIONKEY) &&
+		    !(allowed_attributes & ~SGX_ATTR_PROVISIONKEY))
+			kvm->arch.sgx_provisioning_allowed = true;
+		else
+			r = -EINVAL;
+		break;
+	}
+#endif
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 424b12658923..130f756c696d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1079,6 +1079,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_X86_BUS_LOCK_EXIT 193
 #define KVM_CAP_PPC_DAWR1 194
 #define KVM_CAP_VM_COPY_ENC_CONTEXT_FROM 195
+#define KVM_CAP_SGX_ATTRIBUTE 196
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.30.2



* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-12  4:21 ` [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC Kai Huang
@ 2021-04-12  9:51   ` kernel test robot
  2021-04-12 10:47     ` Kai Huang
  2021-04-17 14:11   ` Paolo Bonzini
  1 sibling, 1 reply; 26+ messages in thread
From: kernel test robot @ 2021-04-12  9:51 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: kbuild-all, seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang



Hi Kai,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/queue]
[also build test ERROR on next-20210409]
[cannot apply to vhost/linux-next v5.12-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Kai-Huang/KVM-SGX-virtualization-support-KVM-part/20210412-122425
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
config: x86_64-rhel-8.3-kselftests (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/66e235131b59a03ed48f6f6343de43ba9786e32d
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Kai-Huang/KVM-SGX-virtualization-support-KVM-part/20210412-122425
        git checkout 66e235131b59a03ed48f6f6343de43ba9786e32d
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from arch/x86/kvm/cpuid.c:22:
   arch/x86/kvm/cpuid.h: In function '__feature_translate':
   arch/x86/kvm/cpuid.h:128:21: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     128 |  if (x86_feature == X86_FEATURE_SGX1)
         |                     ^~~~~~~~~~~~~~~~
         |                     X86_FEATURE_SGX
   arch/x86/kvm/cpuid.h:128:21: note: each undeclared identifier is reported only once for each function it appears in
   arch/x86/kvm/cpuid.h:130:26: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     130 |  else if (x86_feature == X86_FEATURE_SGX2)
         |                          ^~~~~~~~~~~~~~~~
         |                          X86_FEATURE_SGX
   In file included from arch/x86/include/asm/thread_info.h:53,
                    from include/linux/thread_info.h:58,
                    from arch/x86/include/asm/preempt.h:7,
                    from include/linux/preempt.h:78,
                    from include/linux/percpu.h:6,
                    from include/linux/context_tracking_state.h:5,
                    from include/linux/hardirq.h:5,
                    from include/linux/kvm_host.h:7,
                    from arch/x86/kvm/cpuid.c:12:
   arch/x86/kvm/cpuid.c: In function 'kvm_set_cpu_caps':
>> arch/x86/kvm/cpuid.c:57:32: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                                ^~~~~~~~~~~~
   arch/x86/include/asm/cpufeature.h:121:24: note: in definition of macro 'cpu_has'
     121 |  (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
         |                        ^~~
   arch/x86/kvm/cpuid.c:57:19: note: in expansion of macro 'boot_cpu_has'
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                   ^~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:500:3: note: in expansion of macro 'SF'
     500 |   SF(SGX1) | SF(SGX2)
         |   ^~
>> arch/x86/kvm/cpuid.c:57:32: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                                ^~~~~~~~~~~~
   arch/x86/include/asm/cpufeature.h:121:24: note: in definition of macro 'cpu_has'
     121 |  (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
         |                        ^~~
   arch/x86/kvm/cpuid.c:57:19: note: in expansion of macro 'boot_cpu_has'
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                   ^~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:500:14: note: in expansion of macro 'SF'
     500 |   SF(SGX1) | SF(SGX2)
         |              ^~
   arch/x86/kvm/cpuid.c: In function '__do_cpuid_func':
>> arch/x86/kvm/cpuid.c:838:17: error: 'SGX_MISC_EXINFO' undeclared (first use in this function)
     838 |   entry->ebx &= SGX_MISC_EXINFO;
         |                 ^~~~~~~~~~~~~~~
>> arch/x86/kvm/cpuid.c:851:17: error: 'SGX_ATTR_DEBUG' undeclared (first use in this function)
     851 |   entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
         |                 ^~~~~~~~~~~~~~
>> arch/x86/kvm/cpuid.c:851:34: error: 'SGX_ATTR_MODE64BIT' undeclared (first use in this function)
     851 |   entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
         |                                  ^~~~~~~~~~~~~~~~~~
>> arch/x86/kvm/cpuid.c:852:31: error: 'SGX_ATTR_EINITTOKENKEY' undeclared (first use in this function)
     852 |          /* PROVISIONKEY | */ SGX_ATTR_EINITTOKENKEY |
         |                               ^~~~~~~~~~~~~~~~~~~~~~
>> arch/x86/kvm/cpuid.c:853:10: error: 'SGX_ATTR_KSS' undeclared (first use in this function)
     853 |          SGX_ATTR_KSS;
         |          ^~~~~~~~~~~~
--
   In file included from arch/x86/kvm/vmx/vmx.c:51:
   arch/x86/kvm/cpuid.h: In function '__feature_translate':
   arch/x86/kvm/cpuid.h:128:21: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     128 |  if (x86_feature == X86_FEATURE_SGX1)
         |                     ^~~~~~~~~~~~~~~~
         |                     X86_FEATURE_SGX
   arch/x86/kvm/cpuid.h:128:21: note: each undeclared identifier is reported only once for each function it appears in
   arch/x86/kvm/cpuid.h:130:26: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     130 |  else if (x86_feature == X86_FEATURE_SGX2)
         |                          ^~~~~~~~~~~~~~~~
         |                          X86_FEATURE_SGX
   arch/x86/kvm/vmx/vmx.c: In function 'vmx_set_cpu_caps':
>> arch/x86/kvm/vmx/vmx.c:7375:21: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
    7375 |   kvm_cpu_cap_clear(X86_FEATURE_SGX1);
         |                     ^~~~~~~~~~~~~~~~
         |                     X86_FEATURE_SGX
>> arch/x86/kvm/vmx/vmx.c:7376:21: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
    7376 |   kvm_cpu_cap_clear(X86_FEATURE_SGX2);
         |                     ^~~~~~~~~~~~~~~~
         |                     X86_FEATURE_SGX


vim +57 arch/x86/kvm/cpuid.c

4344ee981e2199 Paolo Bonzini       2013-10-02  55  
87382003e35559 Sean Christopherson 2019-12-17  56  #define F feature_bit
cb4de96e0cca64 Sean Christopherson 2021-04-12 @57  #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
5c404cabd1b5c1 Paolo Bonzini       2014-12-03  58  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org



* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-12  9:51   ` kernel test robot
@ 2021-04-12 10:47     ` Kai Huang
  0 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-12 10:47 UTC (permalink / raw)
  To: kernel test robot, kvm, linux-sgx
  Cc: kbuild-all, seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang

Hi Paolo,

Obviously this series requires the x86 part patches (which are in tip/x86/sgx) to
work. Please let me know how you would like to proceed.

On Mon, 2021-04-12 at 17:51 +0800, kernel test robot wrote:
> [... full build error report quoted verbatim, snipped; see the kernel test
> robot message above ...]



* Re: [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute
  2021-04-12  4:21 ` [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute Kai Huang
@ 2021-04-12 11:28   ` kernel test robot
  0 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2021-04-12 11:28 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: kbuild-all, seanjc, pbonzini, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang



Hi Kai,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/queue]
[cannot apply to vhost/linux-next v5.12-rc7 next-20210409]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Kai-Huang/KVM-SGX-virtualization-support-KVM-part/20210412-122425
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
config: x86_64-rhel-8.3-kselftests (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/ee406a5de64531c5ec7886a5097f5a832ad2b1e4
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Kai-Huang/KVM-SGX-virtualization-support-KVM-part/20210412-122425
        git checkout ee406a5de64531c5ec7886a5097f5a832ad2b1e4
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from arch/x86/kvm/cpuid.c:22:
   arch/x86/kvm/cpuid.h: In function '__feature_translate':
   arch/x86/kvm/cpuid.h:128:21: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     128 |  if (x86_feature == X86_FEATURE_SGX1)
         |                     ^~~~~~~~~~~~~~~~
         |                     X86_FEATURE_SGX
   arch/x86/kvm/cpuid.h:128:21: note: each undeclared identifier is reported only once for each function it appears in
   arch/x86/kvm/cpuid.h:130:26: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
     130 |  else if (x86_feature == X86_FEATURE_SGX2)
         |                          ^~~~~~~~~~~~~~~~
         |                          X86_FEATURE_SGX
   In file included from arch/x86/include/asm/thread_info.h:53,
                    from include/linux/thread_info.h:58,
                    from arch/x86/include/asm/preempt.h:7,
                    from include/linux/preempt.h:78,
                    from include/linux/percpu.h:6,
                    from include/linux/context_tracking_state.h:5,
                    from include/linux/hardirq.h:5,
                    from include/linux/kvm_host.h:7,
                    from arch/x86/kvm/cpuid.c:12:
   arch/x86/kvm/cpuid.c: In function 'kvm_set_cpu_caps':
   arch/x86/kvm/cpuid.c:57:32: error: 'X86_FEATURE_SGX1' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                                ^~~~~~~~~~~~
   arch/x86/include/asm/cpufeature.h:121:24: note: in definition of macro 'cpu_has'
     121 |  (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
         |                        ^~~
   arch/x86/kvm/cpuid.c:57:19: note: in expansion of macro 'boot_cpu_has'
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                   ^~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:500:3: note: in expansion of macro 'SF'
     500 |   SF(SGX1) | SF(SGX2)
         |   ^~
   arch/x86/kvm/cpuid.c:57:32: error: 'X86_FEATURE_SGX2' undeclared (first use in this function); did you mean 'X86_FEATURE_SGX'?
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                                ^~~~~~~~~~~~
   arch/x86/include/asm/cpufeature.h:121:24: note: in definition of macro 'cpu_has'
     121 |  (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
         |                        ^~~
   arch/x86/kvm/cpuid.c:57:19: note: in expansion of macro 'boot_cpu_has'
      57 | #define SF(name) (boot_cpu_has(X86_FEATURE_##name) ? F(name) : 0)
         |                   ^~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:500:14: note: in expansion of macro 'SF'
     500 |   SF(SGX1) | SF(SGX2)
         |              ^~
   arch/x86/kvm/cpuid.c: In function '__do_cpuid_func':
   arch/x86/kvm/cpuid.c:838:17: error: 'SGX_MISC_EXINFO' undeclared (first use in this function)
     838 |   entry->ebx &= SGX_MISC_EXINFO;
         |                 ^~~~~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:851:17: error: 'SGX_ATTR_DEBUG' undeclared (first use in this function)
     851 |   entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
         |                 ^~~~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:851:34: error: 'SGX_ATTR_MODE64BIT' undeclared (first use in this function)
     851 |   entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
         |                                  ^~~~~~~~~~~~~~~~~~
>> arch/x86/kvm/cpuid.c:852:10: error: 'SGX_ATTR_PROVISIONKEY' undeclared (first use in this function)
     852 |          SGX_ATTR_PROVISIONKEY | SGX_ATTR_EINITTOKENKEY |
         |          ^~~~~~~~~~~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:852:34: error: 'SGX_ATTR_EINITTOKENKEY' undeclared (first use in this function)
     852 |          SGX_ATTR_PROVISIONKEY | SGX_ATTR_EINITTOKENKEY |
         |                                  ^~~~~~~~~~~~~~~~~~~~~~
   arch/x86/kvm/cpuid.c:853:10: error: 'SGX_ATTR_KSS' undeclared (first use in this function)
     853 |          SGX_ATTR_KSS;
         |          ^~~~~~~~~~~~


vim +/SGX_ATTR_PROVISIONKEY +852 arch/x86/kvm/cpuid.c

   643	
   644	static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
   645	{
   646		struct kvm_cpuid_entry2 *entry;
   647		int r, i, max_idx;
   648	
   649		/* all calls to cpuid_count() should be made on the same cpu */
   650		get_cpu();
   651	
   652		r = -E2BIG;
   653	
   654		entry = do_host_cpuid(array, function, 0);
   655		if (!entry)
   656			goto out;
   657	
   658		switch (function) {
   659		case 0:
   660			/* Limited to the highest leaf implemented in KVM. */
   661			entry->eax = min(entry->eax, 0x1fU);
   662			break;
   663		case 1:
   664			cpuid_entry_override(entry, CPUID_1_EDX);
   665			cpuid_entry_override(entry, CPUID_1_ECX);
   666			break;
   667		case 2:
   668			/*
   669			 * On ancient CPUs, function 2 entries are STATEFUL.  That is,
   670			 * CPUID(function=2, index=0) may return different results each
   671			 * time, with the least-significant byte in EAX enumerating the
   672			 * number of times software should do CPUID(2, 0).
   673			 *
   674			 * Modern CPUs, i.e. every CPU KVM has *ever* run on are less
   675			 * idiotic.  Intel's SDM states that EAX & 0xff "will always
   676			 * return 01H. Software should ignore this value and not
   677			 * interpret it as an informational descriptor", while AMD's
   678			 * APM states that CPUID(2) is reserved.
   679			 *
   680			 * WARN if a frankenstein CPU that supports virtualization and
   681			 * a stateful CPUID.0x2 is encountered.
   682			 */
   683			WARN_ON_ONCE((entry->eax & 0xff) > 1);
   684			break;
   685		/* functions 4 and 0x8000001d have additional index. */
   686		case 4:
   687		case 0x8000001d:
   688			/*
   689			 * Read entries until the cache type in the previous entry is
   690			 * zero, i.e. indicates an invalid entry.
   691			 */
   692			for (i = 1; entry->eax & 0x1f; ++i) {
   693				entry = do_host_cpuid(array, function, i);
   694				if (!entry)
   695					goto out;
   696			}
   697			break;
   698		case 6: /* Thermal management */
   699			entry->eax = 0x4; /* allow ARAT */
   700			entry->ebx = 0;
   701			entry->ecx = 0;
   702			entry->edx = 0;
   703			break;
   704		/* function 7 has additional index. */
   705		case 7:
   706			entry->eax = min(entry->eax, 1u);
   707			cpuid_entry_override(entry, CPUID_7_0_EBX);
   708			cpuid_entry_override(entry, CPUID_7_ECX);
   709			cpuid_entry_override(entry, CPUID_7_EDX);
   710	
   711			/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
   712			if (entry->eax == 1) {
   713				entry = do_host_cpuid(array, function, 1);
   714				if (!entry)
   715					goto out;
   716	
   717				cpuid_entry_override(entry, CPUID_7_1_EAX);
   718				entry->ebx = 0;
   719				entry->ecx = 0;
   720				entry->edx = 0;
   721			}
   722			break;
   723		case 9:
   724			break;
   725		case 0xa: { /* Architectural Performance Monitoring */
   726			struct x86_pmu_capability cap;
   727			union cpuid10_eax eax;
   728			union cpuid10_edx edx;
   729	
   730			perf_get_x86_pmu_capability(&cap);
   731	
   732			/*
   733			 * Only support guest architectural pmu on a host
   734			 * with architectural pmu.
   735			 */
   736			if (!cap.version)
   737				memset(&cap, 0, sizeof(cap));
   738	
   739			eax.split.version_id = min(cap.version, 2);
   740			eax.split.num_counters = cap.num_counters_gp;
   741			eax.split.bit_width = cap.bit_width_gp;
   742			eax.split.mask_length = cap.events_mask_len;
   743	
   744			edx.split.num_counters_fixed = min(cap.num_counters_fixed, MAX_FIXED_COUNTERS);
   745			edx.split.bit_width_fixed = cap.bit_width_fixed;
   746			edx.split.anythread_deprecated = 1;
   747			edx.split.reserved1 = 0;
   748			edx.split.reserved2 = 0;
   749	
   750			entry->eax = eax.full;
   751			entry->ebx = cap.events_mask;
   752			entry->ecx = 0;
   753			entry->edx = edx.full;
   754			break;
   755		}
   756		/*
   757		 * Per Intel's SDM, the 0x1f is a superset of 0xb,
   758		 * thus they can be handled by common code.
   759		 */
   760		case 0x1f:
   761		case 0xb:
   762			/*
   763			 * Populate entries until the level type (ECX[15:8]) of the
   764			 * previous entry is zero.  Note, CPUID EAX.{0x1f,0xb}.0 is
   765			 * the starting entry, filled by the primary do_host_cpuid().
   766			 */
   767			for (i = 1; entry->ecx & 0xff00; ++i) {
   768				entry = do_host_cpuid(array, function, i);
   769				if (!entry)
   770					goto out;
   771			}
   772			break;
   773		case 0xd:
   774			entry->eax &= supported_xcr0;
   775			entry->ebx = xstate_required_size(supported_xcr0, false);
   776			entry->ecx = entry->ebx;
   777			entry->edx &= supported_xcr0 >> 32;
   778			if (!supported_xcr0)
   779				break;
   780	
   781			entry = do_host_cpuid(array, function, 1);
   782			if (!entry)
   783				goto out;
   784	
   785			cpuid_entry_override(entry, CPUID_D_1_EAX);
   786			if (entry->eax & (F(XSAVES)|F(XSAVEC)))
   787				entry->ebx = xstate_required_size(supported_xcr0 | supported_xss,
   788								  true);
   789			else {
   790				WARN_ON_ONCE(supported_xss != 0);
   791				entry->ebx = 0;
   792			}
   793			entry->ecx &= supported_xss;
   794			entry->edx &= supported_xss >> 32;
   795	
   796			for (i = 2; i < 64; ++i) {
   797				bool s_state;
   798				if (supported_xcr0 & BIT_ULL(i))
   799					s_state = false;
   800				else if (supported_xss & BIT_ULL(i))
   801					s_state = true;
   802				else
   803					continue;
   804	
   805				entry = do_host_cpuid(array, function, i);
   806				if (!entry)
   807					goto out;
   808	
   809				/*
   810				 * The supported check above should have filtered out
   811				 * invalid sub-leafs.  Only valid sub-leafs should
   812				 * reach this point, and they should have a non-zero
   813				 * save state size.  Furthermore, check whether the
   814				 * processor agrees with supported_xcr0/supported_xss
   815				 * on whether this is an XCR0- or IA32_XSS-managed area.
   816				 */
   817				if (WARN_ON_ONCE(!entry->eax || (entry->ecx & 0x1) != s_state)) {
   818					--array->nent;
   819					continue;
   820				}
   821				entry->edx = 0;
   822			}
   823			break;
   824		case 0x12:
   825			/* Intel SGX */
   826			if (!kvm_cpu_cap_has(X86_FEATURE_SGX)) {
   827				entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
   828				break;
   829			}
   830	
   831			/*
   832			 * Index 0: Sub-features, MISCSELECT (a.k.a extended features)
   833			 * and max enclave sizes.   The SGX sub-features and MISCSELECT
   834			 * are restricted by kernel and KVM capabilities (like most
   835			 * feature flags), while enclave size is unrestricted.
   836			 */
   837			cpuid_entry_override(entry, CPUID_12_EAX);
   838			entry->ebx &= SGX_MISC_EXINFO;
   839	
   840			entry = do_host_cpuid(array, function, 1);
   841			if (!entry)
   842				goto out;
   843	
   844			/*
   845			 * Index 1: SECS.ATTRIBUTES.  ATTRIBUTES are restricted a la
   846			 * feature flags.  Advertise all supported flags, including
   847			 * privileged attributes that require explicit opt-in from
   848			 * userspace.  ATTRIBUTES.XFRM is not adjusted as userspace is
   849			 * expected to derive it from supported XCR0.
   850			 */
   851			entry->eax &= SGX_ATTR_DEBUG | SGX_ATTR_MODE64BIT |
 > 852				      SGX_ATTR_PROVISIONKEY | SGX_ATTR_EINITTOKENKEY |
   853				      SGX_ATTR_KSS;
   854			entry->ebx &= 0;
   855			break;
   856		/* Intel PT */
   857		case 0x14:
   858			if (!kvm_cpu_cap_has(X86_FEATURE_INTEL_PT)) {
   859				entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
   860				break;
   861			}
   862	
   863			for (i = 1, max_idx = entry->eax; i <= max_idx; ++i) {
   864				if (!do_host_cpuid(array, function, i))
   865					goto out;
   866			}
   867			break;
   868		case KVM_CPUID_SIGNATURE: {
   869			static const char signature[12] = "KVMKVMKVM\0\0";
   870			const u32 *sigptr = (const u32 *)signature;
   871			entry->eax = KVM_CPUID_FEATURES;
   872			entry->ebx = sigptr[0];
   873			entry->ecx = sigptr[1];
   874			entry->edx = sigptr[2];
   875			break;
   876		}
   877		case KVM_CPUID_FEATURES:
   878			entry->eax = (1 << KVM_FEATURE_CLOCKSOURCE) |
   879				     (1 << KVM_FEATURE_NOP_IO_DELAY) |
   880				     (1 << KVM_FEATURE_CLOCKSOURCE2) |
   881				     (1 << KVM_FEATURE_ASYNC_PF) |
   882				     (1 << KVM_FEATURE_PV_EOI) |
   883				     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
   884				     (1 << KVM_FEATURE_PV_UNHALT) |
   885				     (1 << KVM_FEATURE_PV_TLB_FLUSH) |
   886				     (1 << KVM_FEATURE_ASYNC_PF_VMEXIT) |
   887				     (1 << KVM_FEATURE_PV_SEND_IPI) |
   888				     (1 << KVM_FEATURE_POLL_CONTROL) |
   889				     (1 << KVM_FEATURE_PV_SCHED_YIELD) |
   890				     (1 << KVM_FEATURE_ASYNC_PF_INT);
   891	
   892			if (sched_info_on())
   893				entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
   894	
   895			entry->ebx = 0;
   896			entry->ecx = 0;
   897			entry->edx = 0;
   898			break;
   899		case 0x80000000:
   900			entry->eax = min(entry->eax, 0x8000001f);
   901			break;
   902		case 0x80000001:
   903			cpuid_entry_override(entry, CPUID_8000_0001_EDX);
   904			cpuid_entry_override(entry, CPUID_8000_0001_ECX);
   905			break;
   906		case 0x80000006:
   907			/* L2 cache and TLB: pass through host info. */
   908			break;
   909		case 0x80000007: /* Advanced power management */
   910			/* invariant TSC is CPUID.80000007H:EDX[8] */
   911			entry->edx &= (1 << 8);
   912			/* mask against host */
   913			entry->edx &= boot_cpu_data.x86_power;
   914			entry->eax = entry->ebx = entry->ecx = 0;
   915			break;
   916		case 0x80000008: {
   917			unsigned g_phys_as = (entry->eax >> 16) & 0xff;
   918			unsigned virt_as = max((entry->eax >> 8) & 0xff, 48U);
   919			unsigned phys_as = entry->eax & 0xff;
   920	
   921			if (!g_phys_as)
   922				g_phys_as = phys_as;
   923			entry->eax = g_phys_as | (virt_as << 8);
   924			entry->edx = 0;
   925			cpuid_entry_override(entry, CPUID_8000_0008_EBX);
   926			break;
   927		}
   928		case 0x8000000A:
   929			if (!kvm_cpu_cap_has(X86_FEATURE_SVM)) {
   930				entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
   931				break;
   932			}
   933			entry->eax = 1; /* SVM revision 1 */
   934			entry->ebx = 8; /* Lets support 8 ASIDs in case we add proper
   935					   ASID emulation to nested SVM */
   936			entry->ecx = 0; /* Reserved */
   937			cpuid_entry_override(entry, CPUID_8000_000A_EDX);
   938			break;
   939		case 0x80000019:
   940			entry->ecx = entry->edx = 0;
   941			break;
   942		case 0x8000001a:
   943		case 0x8000001e:
   944			break;
   945		/* Support memory encryption cpuid if host supports it */
   946		case 0x8000001F:
   947			if (!boot_cpu_has(X86_FEATURE_SEV))
   948				entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
   949			break;
   950		/*Add support for Centaur's CPUID instruction*/
   951		case 0xC0000000:
   952			/*Just support up to 0xC0000004 now*/
   953			entry->eax = min(entry->eax, 0xC0000004);
   954			break;
   955		case 0xC0000001:
   956			cpuid_entry_override(entry, CPUID_C000_0001_EDX);
   957			break;
   958		case 3: /* Processor serial number */
   959		case 5: /* MONITOR/MWAIT */
   960		case 0xC0000002:
   961		case 0xC0000003:
   962		case 0xC0000004:
   963		default:
   964			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
   965			break;
   966		}
   967	
   968		r = 0;
   969	
   970	out:
   971		put_cpu();
   972	
   973		return r;
   974	}
   975	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 41481 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v5 00/11] KVM SGX virtualization support (KVM part)
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (10 preceding siblings ...)
  2021-04-12  4:21 ` [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute Kai Huang
@ 2021-04-13 14:51 ` Paolo Bonzini
  2021-04-13 15:01   ` Borislav Petkov
  2021-04-17 14:15 ` Paolo Bonzini
  12 siblings, 1 reply; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-13 14:51 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
> Hi Paolo, Sean,
> 
> Boris has merged the x86 part patches to tip/x86/sgx. This series is the KVM
> part. Due to some code changes in the x86 part patches, two KVM patches needed
> updating, so this is a new version. Please help to review. Thanks!
> 
> Specifically, x86 patch (x86/sgx: Add helpers to expose ECREATE and EINIT to
> KVM) was changed to return -EINVAL directly w/o setting trapnr when
> access_ok()s fail on any user pointers, so KVM patches:
> 
> KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
> KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
> 
> were updated to handle this case.
> 
> This series was initially based on tip/x86/sgx, and then rebased to the latest
> kvm/queue, so it can be applied to kvm/queue directly now.

Boris, can you confirm that tip/x86/sgx has stable commit hashes?

Thanks,

Paolo

> Changelog:
> 
> (Please see individual patch for changelog for specific patch)
> 
> v4->v5:
>   - Addressed Sean's comments (patch 06, 07, 09 were slightly updated).
>   - Rebased to latest kvm/queue (patch 08, 11 were updated to resolve conflict).
> 
> Sean Christopherson (11):
>    KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX)
>    KVM: x86: Define new #PF SGX error code bit
>    KVM: x86: Add support for reverse CPUID lookup of scattered features
>    KVM: x86: Add reverse-CPUID lookup support for scattered SGX features
>    KVM: VMX: Add basic handling of VM-Exit from SGX enclave
>    KVM: VMX: Frame in ENCLS handler for SGX virtualization
>    KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
>    KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
>    KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
>    KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
>    KVM: x86: Add capability to grant VM access to privileged SGX
>      attribute
> 
>   Documentation/virt/kvm/api.rst  |  23 ++
>   arch/x86/include/asm/kvm_host.h |   5 +
>   arch/x86/include/asm/vmx.h      |   1 +
>   arch/x86/include/uapi/asm/vmx.h |   1 +
>   arch/x86/kvm/Makefile           |   2 +
>   arch/x86/kvm/cpuid.c            |  89 +++++-
>   arch/x86/kvm/cpuid.h            |  50 +++-
>   arch/x86/kvm/vmx/nested.c       |  28 +-
>   arch/x86/kvm/vmx/nested.h       |   5 +
>   arch/x86/kvm/vmx/sgx.c          | 502 ++++++++++++++++++++++++++++++++
>   arch/x86/kvm/vmx/sgx.h          |  34 +++
>   arch/x86/kvm/vmx/vmcs12.c       |   1 +
>   arch/x86/kvm/vmx/vmcs12.h       |   4 +-
>   arch/x86/kvm/vmx/vmx.c          | 109 ++++++-
>   arch/x86/kvm/vmx/vmx.h          |   3 +
>   arch/x86/kvm/x86.c              |  23 ++
>   include/uapi/linux/kvm.h        |   1 +
>   17 files changed, 858 insertions(+), 23 deletions(-)
>   create mode 100644 arch/x86/kvm/vmx/sgx.c
>   create mode 100644 arch/x86/kvm/vmx/sgx.h
> 



* Re: [PATCH v5 00/11] KVM SGX virtualization support (KVM part)
  2021-04-13 14:51 ` [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Paolo Bonzini
@ 2021-04-13 15:01   ` Borislav Petkov
  2021-04-13 21:47     ` Kai Huang
  0 siblings, 1 reply; 26+ messages in thread
From: Borislav Petkov @ 2021-04-13 15:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kai Huang, kvm, linux-sgx, seanjc, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang

On Tue, Apr 13, 2021 at 04:51:50PM +0200, Paolo Bonzini wrote:
> Boris, can you confirm that tip/x86/sgx has stable commit hashes?

Yap, you can go ahead and merge it.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH v5 00/11] KVM SGX virtualization support (KVM part)
  2021-04-13 15:01   ` Borislav Petkov
@ 2021-04-13 21:47     ` Kai Huang
  0 siblings, 0 replies; 26+ messages in thread
From: Kai Huang @ 2021-04-13 21:47 UTC (permalink / raw)
  To: Borislav Petkov, Paolo Bonzini
  Cc: kvm, linux-sgx, seanjc, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang

On Tue, 2021-04-13 at 17:01 +0200, Borislav Petkov wrote:
> On Tue, Apr 13, 2021 at 04:51:50PM +0200, Paolo Bonzini wrote:
> > Boris, can you confirm that tip/x86/sgx has stable commit hashes?
> 
> Yap, you can go ahead and merge it.
> 
> Thx.
> 
Thank you Boris, Paolo!



* Re: [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features
  2021-04-12  4:21 ` [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features Kai Huang
@ 2021-04-17 13:39   ` Paolo Bonzini
  0 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-17 13:39 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
> +static __always_inline void kvm_cpu_cap_init(enum cpuid_leafs leaf, u32 mask)

Just a small nit, I renamed this to kvm_cpu_cap_init_scattered.

Paolo



* Re: [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features
  2021-04-12  4:21 ` [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features Kai Huang
@ 2021-04-17 13:39   ` Paolo Bonzini
  0 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-17 13:39 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
>   #define X86_KVM_FEATURE(w, f)		((w)*32 + (f))
>   
> +/* Intel-defined SGX sub-features, CPUID level 0x12 (EAX). */
> +#define __X86_FEATURE_SGX1		X86_KVM_FEATURE(CPUID_12_EAX, 0)
> +#define __X86_FEATURE_SGX2		X86_KVM_FEATURE(CPUID_12_EAX, 1)

And these to KVM_X86_FEATURE and KVM_X86_FEATURE_SGX{1,2}.

Paolo



* Re: [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
  2021-04-12  4:21 ` [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs Kai Huang
@ 2021-04-17 13:55   ` Paolo Bonzini
  0 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-17 13:55 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
> Note, KVM allows writes to the LE hash MSRs if IA32_FEATURE_CONTROL is
> unlocked.  This is technically not architectural behavior, but it's
> roughly equivalent to the arch behavior of the MSRs being writable prior
> to activating SGX[1].  Emulating SGX activation is feasible, but adds no
> tangible benefits and would just create extra work for KVM and guest
> firmware.
> 
> [1] SGX related bits in IA32_FEATURE_CONTROL cannot be set until SGX
>      is activated, e.g. by firmware.  SGX activation is triggered by
>      setting bit 0 in MSR 0x7a.  Until SGX is activated, the LE hash
>      MSRs are writable, e.g. to allow firmware to lock down the LE
>      root key with a non-Intel value.

I turned these into a comment in vmx_set_msr:

                 /*
                  * On real hardware, the LE hash MSRs are writable before
                  * the firmware sets bit 0 in MSR 0x7a ("activating" SGX),
                  * at which point SGX related bits in IA32_FEATURE_CONTROL
                  * become writable.
                  *
                  * KVM does not emulate SGX activation for simplicity, so
                  * allow writes to the LE hash MSRs if IA32_FEATURE_CONTROL
                  * is unlocked.  This is technically not architectural
                  * behavior, but close enough.
                  */

Paolo



* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-12  4:21 ` [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC Kai Huang
  2021-04-12  9:51   ` kernel test robot
@ 2021-04-17 14:11   ` Paolo Bonzini
  2021-04-19 11:44     ` Kai Huang
  1 sibling, 1 reply; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-17 14:11 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
> @@ -4377,6 +4380,15 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>   	if (!vcpu->kvm->arch.bus_lock_detection_enabled)
>   		exec_control &= ~SECONDARY_EXEC_BUS_LOCK_DETECTION;
>   
> +	if (cpu_has_vmx_encls_vmexit() && nested) {
> +		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
> +			vmx->nested.msrs.secondary_ctls_high |=
> +				SECONDARY_EXEC_ENCLS_EXITING;
> +		else
> +			vmx->nested.msrs.secondary_ctls_high &=
> +				~SECONDARY_EXEC_ENCLS_EXITING;
> +	}
> +

This is incorrect, I've removed it.  The MSRs can only be written by 
userspace.

If SGX is disabled in the guest CPUID, nested_vmx_exit_handled_encls can 
just do:

	if (!guest_cpuid_has(vcpu, X86_FEATURE_SGX) ||
	    !nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
		return false;

and the useless ENCLS exiting bitmap in vmcs12 will be ignored.

Paolo



* Re: [PATCH v5 00/11] KVM SGX virtualization support (KVM part)
  2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
                   ` (11 preceding siblings ...)
  2021-04-13 14:51 ` [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Paolo Bonzini
@ 2021-04-17 14:15 ` Paolo Bonzini
  12 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-17 14:15 UTC (permalink / raw)
  To: Kai Huang, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On 12/04/21 06:21, Kai Huang wrote:
> Hi Paolo, Sean,
> 
> Boris has merged the x86 part patches to tip/x86/sgx. This series is the KVM
> part. Due to some code changes in the x86 part patches, two KVM patches needed
> updating, so this is a new version. Please help to review. Thanks!
> 
> Specifically, x86 patch (x86/sgx: Add helpers to expose ECREATE and EINIT to
> KVM) was changed to return -EINVAL directly w/o setting trapnr when
> access_ok()s fail on any user pointers, so KVM patches:
> 
> KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
> KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
> 
> were updated to handle this case.
> 
> This series was initially based on tip/x86/sgx, and then rebased to the latest
> kvm/queue, so it can be applied to kvm/queue directly now.
> 
> Changelog:
> 
> (Please see individual patch for changelog for specific patch)
> 
> v4->v5:
>   - Addressed Sean's comments (patch 06, 07, 09 were slightly updated).
>   - Rebased to latest kvm/queue (patch 08, 11 were updated to resolve conflict).
> 
> Sean Christopherson (11):
>    KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX)
>    KVM: x86: Define new #PF SGX error code bit
>    KVM: x86: Add support for reverse CPUID lookup of scattered features
>    KVM: x86: Add reverse-CPUID lookup support for scattered SGX features
>    KVM: VMX: Add basic handling of VM-Exit from SGX enclave
>    KVM: VMX: Frame in ENCLS handler for SGX virtualization
>    KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
>    KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
>    KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
>    KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
>    KVM: x86: Add capability to grant VM access to privileged SGX
>      attribute
> 
>   Documentation/virt/kvm/api.rst  |  23 ++
>   arch/x86/include/asm/kvm_host.h |   5 +
>   arch/x86/include/asm/vmx.h      |   1 +
>   arch/x86/include/uapi/asm/vmx.h |   1 +
>   arch/x86/kvm/Makefile           |   2 +
>   arch/x86/kvm/cpuid.c            |  89 +++++-
>   arch/x86/kvm/cpuid.h            |  50 +++-
>   arch/x86/kvm/vmx/nested.c       |  28 +-
>   arch/x86/kvm/vmx/nested.h       |   5 +
>   arch/x86/kvm/vmx/sgx.c          | 502 ++++++++++++++++++++++++++++++++
>   arch/x86/kvm/vmx/sgx.h          |  34 +++
>   arch/x86/kvm/vmx/vmcs12.c       |   1 +
>   arch/x86/kvm/vmx/vmcs12.h       |   4 +-
>   arch/x86/kvm/vmx/vmx.c          | 109 ++++++-
>   arch/x86/kvm/vmx/vmx.h          |   3 +
>   arch/x86/kvm/x86.c              |  23 ++
>   include/uapi/linux/kvm.h        |   1 +
>   17 files changed, 858 insertions(+), 23 deletions(-)
>   create mode 100644 arch/x86/kvm/vmx/sgx.c
>   create mode 100644 arch/x86/kvm/vmx/sgx.h
> 

Queued, thanks.

Paolo



* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-17 14:11   ` Paolo Bonzini
@ 2021-04-19 11:44     ` Kai Huang
  2021-04-19 15:16       ` Sean Christopherson
  0 siblings, 1 reply; 26+ messages in thread
From: Kai Huang @ 2021-04-19 11:44 UTC (permalink / raw)
  To: Paolo Bonzini, kvm, linux-sgx
  Cc: seanjc, bp, jarkko, dave.hansen, luto, rick.p.edgecombe, haitao.huang

On Sat, 2021-04-17 at 16:11 +0200, Paolo Bonzini wrote:
> On 12/04/21 06:21, Kai Huang wrote:
> > @@ -4377,6 +4380,15 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
> >   	if (!vcpu->kvm->arch.bus_lock_detection_enabled)
> >   		exec_control &= ~SECONDARY_EXEC_BUS_LOCK_DETECTION;
> >   
> > 
> > +	if (cpu_has_vmx_encls_vmexit() && nested) {
> > +		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
> > +			vmx->nested.msrs.secondary_ctls_high |=
> > +				SECONDARY_EXEC_ENCLS_EXITING;
> > +		else
> > +			vmx->nested.msrs.secondary_ctls_high &=
> > +				~SECONDARY_EXEC_ENCLS_EXITING;
> > +	}
> > +
> 
> This is incorrect, I've removed it.  The MSRs can only be written by 
> userspace.
> 
> If SGX is disabled in the guest CPUID, nested_vmx_exit_handled_encls can 
> just do:
> 
> 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SGX) ||
> 	    !nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
> 		return false;
> 
> and the useless ENCLS exiting bitmap in vmcs12 will be ignored.
> 
> Paolo
> 

Thanks for queuing this series!

Looks good to me. However, if I read the code correctly, one side effect of this approach
is that vmx->nested.msrs.secondary_ctls_high will always have the SECONDARY_EXEC_ENCLS_EXITING
bit set, even when SGX is not exposed to the guest. That means a guest can set this control
even when SGX is not present, but I think that is OK, since the ENCLS exiting bitmap in vmcs12
will be ignored anyway in nested_vmx_exit_handled_encls(), as you mentioned above.

Anyway, I have tested this code and it works on my side (by creating an L2 with SGX support
and running SGX workloads inside it).

Sean, please also comment if you have any.




* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-19 11:44     ` Kai Huang
@ 2021-04-19 15:16       ` Sean Christopherson
  2021-04-19 17:14         ` Paolo Bonzini
  0 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2021-04-19 15:16 UTC (permalink / raw)
  To: Kai Huang
  Cc: Paolo Bonzini, kvm, linux-sgx, bp, jarkko, dave.hansen, luto,
	rick.p.edgecombe, haitao.huang

On Mon, Apr 19, 2021, Kai Huang wrote:
> On Sat, 2021-04-17 at 16:11 +0200, Paolo Bonzini wrote:
> > On 12/04/21 06:21, Kai Huang wrote:
> > > @@ -4377,6 +4380,15 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
> > >   	if (!vcpu->kvm->arch.bus_lock_detection_enabled)
> > >   		exec_control &= ~SECONDARY_EXEC_BUS_LOCK_DETECTION;
> > >   
> > > 
> > > +	if (cpu_has_vmx_encls_vmexit() && nested) {
> > > +		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
> > > +			vmx->nested.msrs.secondary_ctls_high |=
> > > +				SECONDARY_EXEC_ENCLS_EXITING;
> > > +		else
> > > +			vmx->nested.msrs.secondary_ctls_high &=
> > > +				~SECONDARY_EXEC_ENCLS_EXITING;
> > > +	}
> > > +
> > 
> > This is incorrect, I've removed it.  The MSRs can only be written by 
> > userspace.

vmx_compute_secondary_exec_control() violates that left, right, and center, it's
just buried down in vmx_adjust_secondary_exec_control().  This is an open coded
version of that helper, sans the actual update to exec_control since ENCLS needs
to be conditionally intercepted even when it's exposed to the guest.

> > If SGX is disabled in the guest CPUID, nested_vmx_exit_handled_encls can 
> > just do:
> > 
> > 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SGX) ||
> > 	    !nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
> > 		return false;
> > 
> > and the useless ENCLS exiting bitmap in vmcs12 will be ignored.
> > 
> > Paolo
> > 
> 
> Thanks for queuing this series!
> 
> Looks good to me. However, if I read the code correctly, a side effect of this approach is that
> vmx->nested.msrs.secondary_ctls_high will always have the SECONDARY_EXEC_ENCLS_EXITING bit
> set even when SGX is not exposed to the guest, which means a guest can set this control even
> though SGX is not present. I think that is OK, though, since the ENCLS exiting bitmap in vmcs12
> will be ignored anyway in nested_vmx_exit_handled_encls(), as you mentioned above.
> 
> Anyway, I have tested this code and it works on my side (by creating an L2 with SGX support
> and running SGX workloads inside it).
> 
> Sean, please also comment if you have anything to add.
> 
> 


* Re: [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  2021-04-19 15:16       ` Sean Christopherson
@ 2021-04-19 17:14         ` Paolo Bonzini
  0 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2021-04-19 17:14 UTC (permalink / raw)
  To: Sean Christopherson, Kai Huang
  Cc: kvm, linux-sgx, bp, jarkko, dave.hansen, luto, rick.p.edgecombe,
	haitao.huang

On 19/04/21 17:16, Sean Christopherson wrote:
> On Mon, Apr 19, 2021, Kai Huang wrote:
>> On Sat, 2021-04-17 at 16:11 +0200, Paolo Bonzini wrote:
>>> On 12/04/21 06:21, Kai Huang wrote:
>>>> @@ -4377,6 +4380,15 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>>>>    	if (!vcpu->kvm->arch.bus_lock_detection_enabled)
>>>>    		exec_control &= ~SECONDARY_EXEC_BUS_LOCK_DETECTION;
>>>>    
>>>>
>>>> +	if (cpu_has_vmx_encls_vmexit() && nested) {
>>>> +		if (guest_cpuid_has(vcpu, X86_FEATURE_SGX))
>>>> +			vmx->nested.msrs.secondary_ctls_high |=
>>>> +				SECONDARY_EXEC_ENCLS_EXITING;
>>>> +		else
>>>> +			vmx->nested.msrs.secondary_ctls_high &=
>>>> +				~SECONDARY_EXEC_ENCLS_EXITING;
>>>> +	}
>>>> +
>>>
>>> This is incorrect, I've removed it.  The MSRs can only be written by
>>> userspace.
> 
> vmx_compute_secondary_exec_control() violates that left, right, and center, it's
> just buried down in vmx_adjust_secondary_exec_control().  This is an open coded
> version of that helper, sans the actual update to exec_control since ENCLS needs
> to be conditionally intercepted even when it's exposed to the guest.

Hmm, that's true.  I'll put back the version that you had.

Paolo

>>> If SGX is disabled in the guest CPUID, nested_vmx_exit_handled_encls can
>>> just do:
>>>
>>> 	if (!guest_cpuid_has(vcpu, X86_FEATURE_SGX) ||
>>> 	    !nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
>>> 		return false;
>>>
>>> and the useless ENCLS exiting bitmap in vmcs12 will be ignored.
>>>
>>> Paolo
>>>
>>
>> Thanks for queuing this series!
>>
>> Looks good to me. However, if I read the code correctly, a side effect of this approach is that
>> vmx->nested.msrs.secondary_ctls_high will always have the SECONDARY_EXEC_ENCLS_EXITING bit
>> set even when SGX is not exposed to the guest, which means a guest can set this control even
>> though SGX is not present. I think that is OK, though, since the ENCLS exiting bitmap in vmcs12
>> will be ignored anyway in nested_vmx_exit_handled_encls(), as you mentioned above.
>>
>> Anyway, I have tested this code and it works on my side (by creating an L2 with SGX support
>> and running SGX workloads inside it).
>>
>> Sean, please also comment if you have anything to add.
>>
>>
> 



Thread overview: 26+ messages
2021-04-12  4:21 [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Kai Huang
2021-04-12  4:21 ` [PATCH v5 01/11] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX) Kai Huang
2021-04-12  4:21 ` [PATCH v5 02/11] KVM: x86: Define new #PF SGX error code bit Kai Huang
2021-04-12  4:21 ` [PATCH v5 03/11] KVM: x86: Add support for reverse CPUID lookup of scattered features Kai Huang
2021-04-17 13:39   ` Paolo Bonzini
2021-04-12  4:21 ` [PATCH v5 04/11] KVM: x86: Add reverse-CPUID lookup support for scattered SGX features Kai Huang
2021-04-17 13:39   ` Paolo Bonzini
2021-04-12  4:21 ` [PATCH v5 05/11] KVM: VMX: Add basic handling of VM-Exit from SGX enclave Kai Huang
2021-04-12  4:21 ` [PATCH v5 06/11] KVM: VMX: Frame in ENCLS handler for SGX virtualization Kai Huang
2021-04-12  4:21 ` [PATCH v5 07/11] KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions Kai Huang
2021-04-12  4:21 ` [PATCH v5 08/11] KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs Kai Huang
2021-04-17 13:55   ` Paolo Bonzini
2021-04-12  4:21 ` [PATCH v5 09/11] KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC) Kai Huang
2021-04-12  4:21 ` [PATCH v5 10/11] KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC Kai Huang
2021-04-12  9:51   ` kernel test robot
2021-04-12 10:47     ` Kai Huang
2021-04-17 14:11   ` Paolo Bonzini
2021-04-19 11:44     ` Kai Huang
2021-04-19 15:16       ` Sean Christopherson
2021-04-19 17:14         ` Paolo Bonzini
2021-04-12  4:21 ` [PATCH v5 11/11] KVM: x86: Add capability to grant VM access to privileged SGX attribute Kai Huang
2021-04-12 11:28   ` kernel test robot
2021-04-13 14:51 ` [PATCH v5 00/11] KVM SGX virtualization support (KVM part) Paolo Bonzini
2021-04-13 15:01   ` Borislav Petkov
2021-04-13 21:47     ` Kai Huang
2021-04-17 14:15 ` Paolo Bonzini
