* [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors
@ 2017-11-13 14:40 Paolo Bonzini
  2017-11-13 14:40 ` [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP Paolo Bonzini
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

The User-Mode Instruction Prevention (UMIP) feature present in recent Intel
processors prevents a group of instructions (sgdt, sidt, sldt, smsw, and
str) from being executed with CPL > 0; if one of them is executed anyway,
a general protection fault is issued.

Add support for UMIP in virtual machines, and also allow emulation of
UMIP on older processors by enabling descriptor-table vmexits.  This
emulation is not perfect, because SMSW cannot be trapped.  However,
this is not an issue in practice: Linux is _also_ emulating SMSW
instructions on behalf of the programs that execute them, because some
16-bit programs expect to use SMSW to detect vm86 mode.
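
For reference (not part of this series), a minimal guest user-space probe
along these lines shows the behaviour being virtualized; the SIGSEGV
handling is only an assumption about how the resulting #GP surfaces to
the process, and a guest kernel that emulates sgdt for user space will
make the instruction appear to succeed instead:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void on_sigsegv(int sig)
{
	siglongjmp(env, 1);
}

int main(void)
{
	struct __attribute__((packed)) {
		unsigned short limit;
		unsigned long base;
	} gdt;

	signal(SIGSEGV, on_sigsegv);
	if (sigsetjmp(env, 1) == 0) {
		/* sgdt at CPL 3 is forbidden once CR4.UMIP is set */
		asm volatile("sgdt %0" : "=m" (gdt));
		printf("sgdt executed, gdt limit=%#hx\n", gdt.limit);
	} else {
		printf("sgdt raised #GP (delivered as SIGSEGV)\n");
	}
	return 0;
}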

Paolo

Paolo Bonzini (5):
  KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP
  KVM: x86: add support for UMIP
  KVM: x86: emulate sldt and str
  KVM: x86: add support for emulating UMIP
  KVM: vmx: add support for emulating UMIP

 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/cpuid.c            |  6 ++++--
 arch/x86/kvm/emulate.c          | 40 ++++++++++++++++++++++++++++++++++------
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx.c              | 36 +++++++++++++++++++++++++++++++++---
 arch/x86/kvm/x86.c              |  3 +++
 6 files changed, 82 insertions(+), 12 deletions(-)

-- 
1.8.3.1

* [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP
  2017-11-13 14:40 [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors Paolo Bonzini
@ 2017-11-13 14:40 ` Paolo Bonzini
  2017-11-14  9:30   ` Wanpeng Li
  2017-11-13 14:40 ` [PATCH 2/5] KVM: x86: add support for UMIP Paolo Bonzini
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

These bits were not previously defined in common code, but they are
now, since the kernel itself supports UMIP.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a6f4f095f8f4..8917e100ddeb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9732,8 +9732,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 	cr4_fixed1_update(X86_CR4_SMEP,       ebx, bit(X86_FEATURE_SMEP));
 	cr4_fixed1_update(X86_CR4_SMAP,       ebx, bit(X86_FEATURE_SMAP));
 	cr4_fixed1_update(X86_CR4_PKE,        ecx, bit(X86_FEATURE_PKU));
-	/* TODO: Use X86_CR4_UMIP and X86_FEATURE_UMIP macros */
-	cr4_fixed1_update(bit(11),            ecx, bit(2));
+	cr4_fixed1_update(X86_CR4_UMIP,       ecx, bit(X86_FEATURE_UMIP));
 
 #undef cr4_fixed1_update
 }
-- 
1.8.3.1

* [PATCH 2/5] KVM: x86: add support for UMIP
  2017-11-13 14:40 [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors Paolo Bonzini
  2017-11-13 14:40 ` [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP Paolo Bonzini
@ 2017-11-13 14:40 ` Paolo Bonzini
  2017-11-15  0:40   ` Wanpeng Li
  2018-02-06  2:45   ` Wanpeng Li
  2017-11-13 14:40 ` [PATCH 3/5] KVM: x86: emulate sldt and str Paolo Bonzini
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

Add the CPUID bits, make the CR4.UMIP bit not reserved anymore, and
add UMIP support for instructions that are already emulated by KVM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/cpuid.c            | 4 ++--
 arch/x86/kvm/emulate.c          | 8 ++++++++
 arch/x86/kvm/x86.c              | 3 +++
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c73e493adf07..1b005ccf4d0b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -86,7 +86,7 @@
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-			  | X86_CR4_SMAP | X86_CR4_PKE))
+			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 0099e10eb045..77fb8732b47b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -387,8 +387,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 
 	/* cpuid 7.0.ecx*/
 	const u32 kvm_cpuid_7_0_ecx_x86_features =
-		F(AVX512VBMI) | F(LA57) | F(PKU) |
-		0 /*OSPKE*/ | F(AVX512_VPOPCNTDQ);
+		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
+		F(AVX512_VPOPCNTDQ) | F(UMIP);
 
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index d90cdc77e077..d27339332ac8 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -3725,6 +3725,10 @@ static int emulate_store_desc_ptr(struct x86_emulate_ctxt *ctxt,
 {
 	struct desc_ptr desc_ptr;
 
+	if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
+	    ctxt->ops->cpl(ctxt) > 0)
+		return emulate_gp(ctxt, 0);
+
 	if (ctxt->mode == X86EMUL_MODE_PROT64)
 		ctxt->op_bytes = 8;
 	get(ctxt, &desc_ptr);
@@ -3784,6 +3788,10 @@ static int em_lidt(struct x86_emulate_ctxt *ctxt)
 
 static int em_smsw(struct x86_emulate_ctxt *ctxt)
 {
+	if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
+	    ctxt->ops->cpl(ctxt) > 0)
+		return emulate_gp(ctxt, 0);
+
 	if (ctxt->dst.type == OP_MEM)
 		ctxt->dst.bytes = 2;
 	ctxt->dst.val = ctxt->ops->get_cr(ctxt, 0);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 03869eb7fcd6..cda567aadd28 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -791,6 +791,9 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
 		return 1;
 
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
+		return 1;
+
 	if (is_long_mode(vcpu)) {
 		if (!(cr4 & X86_CR4_PAE))
 			return 1;
-- 
1.8.3.1

* [PATCH 3/5] KVM: x86: emulate sldt and str
  2017-11-13 14:40 [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors Paolo Bonzini
  2017-11-13 14:40 ` [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP Paolo Bonzini
  2017-11-13 14:40 ` [PATCH 2/5] KVM: x86: add support for UMIP Paolo Bonzini
@ 2017-11-13 14:40 ` Paolo Bonzini
  2017-11-15  0:41   ` Wanpeng Li
  2017-11-13 14:40 ` [PATCH 4/5] KVM: x86: add support for emulating UMIP Paolo Bonzini
  2017-11-13 14:40 ` [PATCH 5/5] KVM: vmx: " Paolo Bonzini
  4 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

These are needed to handle the descriptor table vmexits when emulating
UMIP.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/emulate.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index d27339332ac8..da2c5590240a 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -3638,17 +3638,27 @@ static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
 	return X86EMUL_CONTINUE;
 }
 
-static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
+static int em_store_sreg(struct x86_emulate_ctxt *ctxt, int segment)
 {
-	if (ctxt->modrm_reg > VCPU_SREG_GS)
-		return emulate_ud(ctxt);
+	if (segment > VCPU_SREG_GS &&
+	    (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
+	    ctxt->ops->cpl(ctxt) > 0)
+		return emulate_gp(ctxt, 0);
 
-	ctxt->dst.val = get_segment_selector(ctxt, ctxt->modrm_reg);
+	ctxt->dst.val = get_segment_selector(ctxt, segment);
 	if (ctxt->dst.bytes == 4 && ctxt->dst.type == OP_MEM)
 		ctxt->dst.bytes = 2;
 	return X86EMUL_CONTINUE;
 }
 
+static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
+{
+	if (ctxt->modrm_reg > VCPU_SREG_GS)
+		return emulate_ud(ctxt);
+
+	return em_store_sreg(ctxt, ctxt->modrm_reg);
+}
+
 static int em_mov_sreg_rm(struct x86_emulate_ctxt *ctxt)
 {
 	u16 sel = ctxt->src.val;
@@ -3664,6 +3674,11 @@ static int em_mov_sreg_rm(struct x86_emulate_ctxt *ctxt)
 	return load_segment_descriptor(ctxt, sel, ctxt->modrm_reg);
 }
 
+static int em_sldt(struct x86_emulate_ctxt *ctxt)
+{
+	return em_store_sreg(ctxt, VCPU_SREG_LDTR);
+}
+
 static int em_lldt(struct x86_emulate_ctxt *ctxt)
 {
 	u16 sel = ctxt->src.val;
@@ -3673,6 +3688,11 @@ static int em_lldt(struct x86_emulate_ctxt *ctxt)
 	return load_segment_descriptor(ctxt, sel, VCPU_SREG_LDTR);
 }
 
+static int em_str(struct x86_emulate_ctxt *ctxt)
+{
+	return em_store_sreg(ctxt, VCPU_SREG_TR);
+}
+
 static int em_ltr(struct x86_emulate_ctxt *ctxt)
 {
 	u16 sel = ctxt->src.val;
@@ -4365,8 +4385,8 @@ static int check_perm_out(struct x86_emulate_ctxt *ctxt)
 };
 
 static const struct opcode group6[] = {
-	DI(Prot | DstMem,	sldt),
-	DI(Prot | DstMem,	str),
+	II(Prot | DstMem,	   em_sldt, sldt),
+	II(Prot | DstMem,	   em_str, str),
 	II(Prot | Priv | SrcMem16, em_lldt, lldt),
 	II(Prot | Priv | SrcMem16, em_ltr, ltr),
 	N, N, N, N,
-- 
1.8.3.1

* [PATCH 4/5] KVM: x86: add support for emulating UMIP
  2017-11-13 14:40 [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors Paolo Bonzini
                   ` (2 preceding siblings ...)
  2017-11-13 14:40 ` [PATCH 3/5] KVM: x86: emulate sldt and str Paolo Bonzini
@ 2017-11-13 14:40 ` Paolo Bonzini
  2017-11-15  0:42   ` Wanpeng Li
  2017-11-13 14:40 ` [PATCH 5/5] KVM: vmx: " Paolo Bonzini
  4 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

The User-Mode Instruction Prevention (UMIP) feature present in recent Intel
processors prevents a group of instructions (sgdt, sidt, sldt, smsw, and
str) from being executed with CPL > 0; if one of them is executed anyway,
a general protection fault is issued.

UMIP instructions in general are also able to trigger vmexits, so we can
actually emulate UMIP on older processors.  This commit sets up the
infrastructure so that kvm-intel.ko and kvm-amd.ko can set the UMIP
feature bit for CPUID even if the feature is not actually available
in hardware.
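
(Illustration only, not from the series: once the bit is plumbed through
this way, a guest can observe it with an ordinary CPUID query, i.e. leaf 7,
subleaf 0, ECX bit 2, the same bit(2) used in patch 1.)

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.(EAX=7,ECX=0):ECX[2] is the UMIP feature bit */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 1;
	printf("UMIP %s\n", (ecx & (1u << 2)) ? "advertised" : "not advertised");
	return 0;
}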

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/cpuid.c            | 2 ++
 arch/x86/kvm/svm.c              | 6 ++++++
 arch/x86/kvm/vmx.c              | 6 ++++++
 4 files changed, 15 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1b005ccf4d0b..f0a4f107a97f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1004,6 +1004,7 @@ struct kvm_x86_ops {
 	void (*handle_external_intr)(struct kvm_vcpu *vcpu);
 	bool (*mpx_supported)(void);
 	bool (*xsaves_supported)(void);
+	bool (*umip_emulated)(void);
 
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 77fb8732b47b..2b3b06458f6f 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -327,6 +327,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
 	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
+	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 
 	/* cpuid 1.edx */
 	const u32 kvm_cpuid_1_edx_x86_features =
@@ -473,6 +474,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			entry->ebx |= F(TSC_ADJUST);
 			entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
 			cpuid_mask(&entry->ecx, CPUID_7_ECX);
+			entry->ecx |= f_umip;
 			/* PKU is not yet implemented for shadow paging. */
 			if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
 				entry->ecx &= ~F(PKU);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0e68f0b3cbf7..be7fc7e5ee7e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5174,6 +5174,11 @@ static bool svm_xsaves_supported(void)
 	return false;
 }
 
+static bool svm_umip_emulated(void)
+{
+	return false;
+}
+
 static bool svm_has_wbinvd_exit(void)
 {
 	return true;
@@ -5485,6 +5490,7 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
 	.invpcid_supported = svm_invpcid_supported,
 	.mpx_supported = svm_mpx_supported,
 	.xsaves_supported = svm_xsaves_supported,
+	.umip_emulated = svm_umip_emulated,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 8917e100ddeb..6c474c94e154 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9095,6 +9095,11 @@ static bool vmx_xsaves_supported(void)
 		SECONDARY_EXEC_XSAVES;
 }
 
+static bool vmx_umip_emulated(void)
+{
+	return false;
+}
+
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
 {
 	u32 exit_intr_info;
@@ -12038,6 +12043,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
 	.handle_external_intr = vmx_handle_external_intr,
 	.mpx_supported = vmx_mpx_supported,
 	.xsaves_supported = vmx_xsaves_supported,
+	.umip_emulated = vmx_umip_emulated,
 
 	.check_nested_events = vmx_check_nested_events,
 
-- 
1.8.3.1

* [PATCH 5/5] KVM: vmx: add support for emulating UMIP
  2017-11-13 14:40 [PATCH for-4.15 0/5] KVM: (almost) emulate UMIP on current processors Paolo Bonzini
                   ` (3 preceding siblings ...)
  2017-11-13 14:40 ` [PATCH 4/5] KVM: x86: add support for emulating UMIP Paolo Bonzini
@ 2017-11-13 14:40 ` Paolo Bonzini
  2017-11-15  0:42   ` Wanpeng Li
  4 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2017-11-13 14:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: rkrcmar

UMIP can be emulated almost perfectly on Intel processors by enabling
descriptor-table exits.  SMSW does not cause a vmexit and hence it
cannot be changed into a #GP fault, but all in all it's the most
"innocuous" of the unprivileged instructions that UMIP blocks.

In fact, Linux is _also_ emulating SMSW instructions on behalf of the
program that executes them, because some 16-bit programs expect to use
SMSW to detect vm86 mode, so this is an even smaller issue.
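
(Illustrative sketch, not from the series: the kind of MSW read such
programs rely on.  With UMIP enforced this faults at CPL > 0 unless the
kernel emulates it on the program's behalf, which is the case described
above.)

#include <stdio.h>

int main(void)
{
	unsigned short msw;

	/* smsw stores the low 16 bits of CR0, the Machine Status Word */
	asm volatile("smsw %0" : "=r" (msw));

	/* bit 0 is PE; old 16-bit code checks it to tell real mode from
	   protected/vm86 mode */
	printf("MSW = %#hx, PE = %d\n", msw, msw & 1);
	return 0;
}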

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6c474c94e154..a257ddc644d1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3651,6 +3651,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
 			SECONDARY_EXEC_ENABLE_EPT |
 			SECONDARY_EXEC_UNRESTRICTED_GUEST |
 			SECONDARY_EXEC_PAUSE_LOOP_EXITING |
+			SECONDARY_EXEC_DESC |
 			SECONDARY_EXEC_RDTSCP |
 			SECONDARY_EXEC_ENABLE_INVPCID |
 			SECONDARY_EXEC_APIC_REGISTER_VIRT |
@@ -4347,6 +4348,14 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 		(to_vmx(vcpu)->rmode.vm86_active ?
 		 KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
 
+	if ((cr4 & X86_CR4_UMIP) && !boot_cpu_has(X86_FEATURE_UMIP)) {
+		vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL,
+			      SECONDARY_EXEC_DESC);
+		hw_cr4 &= ~X86_CR4_UMIP;
+	} else
+		vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL,
+				SECONDARY_EXEC_DESC);
+
 	if (cr4 & X86_CR4_VMXE) {
 		/*
 		 * To use VMXON (and later other VMX instructions), a guest
@@ -5296,6 +5305,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	struct kvm_vcpu *vcpu = &vmx->vcpu;
 
 	u32 exec_control = vmcs_config.cpu_based_2nd_exec_ctrl;
+
 	if (!cpu_need_virtualize_apic_accesses(vcpu))
 		exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
 	if (vmx->vpid == 0)
@@ -5314,6 +5324,11 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 		exec_control &= ~(SECONDARY_EXEC_APIC_REGISTER_VIRT |
 				  SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
 	exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
+
+	/* SECONDARY_EXEC_DESC is enabled/disabled on writes to CR4.UMIP,
+	 * in vmx_set_cr4.  */
+	exec_control &= ~SECONDARY_EXEC_DESC;
+
 	/* SECONDARY_EXEC_SHADOW_VMCS is enabled when L1 executes VMPTRLD
 	   (handle_vmptrld).
 	   We can NOT enable shadow_vmcs here because we don't have yet
@@ -6064,6 +6079,12 @@ static int handle_set_cr4(struct kvm_vcpu *vcpu, unsigned long val)
 		return kvm_set_cr4(vcpu, val);
 }
 
+static int handle_desc(struct kvm_vcpu *vcpu)
+{
+	WARN_ON(!(vcpu->arch.cr4 & X86_CR4_UMIP));
+	return emulate_instruction(vcpu, 0) == EMULATE_DONE;
+}
+
 static int handle_cr(struct kvm_vcpu *vcpu)
 {
 	unsigned long exit_qualification, val;
@@ -8152,6 +8173,8 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_XSETBV]                  = handle_xsetbv,
 	[EXIT_REASON_TASK_SWITCH]             = handle_task_switch,
 	[EXIT_REASON_MCE_DURING_VMENTRY]      = handle_machine_check,
+	[EXIT_REASON_GDTR_IDTR]		      = handle_desc,
+	[EXIT_REASON_LDTR_TR]		      = handle_desc,
 	[EXIT_REASON_EPT_VIOLATION]	      = handle_ept_violation,
 	[EXIT_REASON_EPT_MISCONFIG]           = handle_ept_misconfig,
 	[EXIT_REASON_PAUSE_INSTRUCTION]       = handle_pause,
@@ -9097,7 +9120,8 @@ static bool vmx_xsaves_supported(void)
 
 static bool vmx_umip_emulated(void)
 {
-	return false;
+	return vmcs_config.cpu_based_2nd_exec_ctrl &
+		SECONDARY_EXEC_DESC;
 }
 
 static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
@@ -9691,7 +9715,8 @@ static void vmcs_set_secondary_exec_control(u32 new_ctl)
 	u32 mask =
 		SECONDARY_EXEC_SHADOW_VMCS |
 		SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
-		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+		SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+		SECONDARY_EXEC_DESC;
 
 	u32 cur_ctl = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
 
-- 
1.8.3.1

* Re: [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP
  2017-11-13 14:40 ` [PATCH 1/5] KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP Paolo Bonzini
@ 2017-11-14  9:30   ` Wanpeng Li
  0 siblings, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2017-11-14  9:30 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> These bits were not previously defined in common code, but they are
> now, since the kernel itself supports UMIP.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/kvm/vmx.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index a6f4f095f8f4..8917e100ddeb 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -9732,8 +9732,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
>         cr4_fixed1_update(X86_CR4_SMEP,       ebx, bit(X86_FEATURE_SMEP));
>         cr4_fixed1_update(X86_CR4_SMAP,       ebx, bit(X86_FEATURE_SMAP));
>         cr4_fixed1_update(X86_CR4_PKE,        ecx, bit(X86_FEATURE_PKU));
> -       /* TODO: Use X86_CR4_UMIP and X86_FEATURE_UMIP macros */
> -       cr4_fixed1_update(bit(11),            ecx, bit(2));
> +       cr4_fixed1_update(X86_CR4_UMIP,       ecx, bit(X86_FEATURE_UMIP));
>
>  #undef cr4_fixed1_update
>  }
> --
> 1.8.3.1
>
>

* Re: [PATCH 2/5] KVM: x86: add support for UMIP
  2017-11-13 14:40 ` [PATCH 2/5] KVM: x86: add support for UMIP Paolo Bonzini
@ 2017-11-15  0:40   ` Wanpeng Li
  2018-02-06  2:45   ` Wanpeng Li
  1 sibling, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2017-11-15  0:40 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> Add the CPUID bits, make the CR4.UMIP bit not reserved anymore, and
> add UMIP support for instructions that are already emulated by KVM.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/cpuid.c            | 4 ++--
>  arch/x86/kvm/emulate.c          | 8 ++++++++
>  arch/x86/kvm/x86.c              | 3 +++
>  4 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index c73e493adf07..1b005ccf4d0b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -86,7 +86,7 @@
>                           | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>                           | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>                           | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> -                         | X86_CR4_SMAP | X86_CR4_PKE))
> +                         | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
>
>  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 0099e10eb045..77fb8732b47b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -387,8 +387,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>
>         /* cpuid 7.0.ecx*/
>         const u32 kvm_cpuid_7_0_ecx_x86_features =
> -               F(AVX512VBMI) | F(LA57) | F(PKU) |
> -               0 /*OSPKE*/ | F(AVX512_VPOPCNTDQ);
> +               F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
> +               F(AVX512_VPOPCNTDQ) | F(UMIP);
>
>         /* cpuid 7.0.edx*/
>         const u32 kvm_cpuid_7_0_edx_x86_features =
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index d90cdc77e077..d27339332ac8 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -3725,6 +3725,10 @@ static int emulate_store_desc_ptr(struct x86_emulate_ctxt *ctxt,
>  {
>         struct desc_ptr desc_ptr;
>
> +       if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
> +           ctxt->ops->cpl(ctxt) > 0)
> +               return emulate_gp(ctxt, 0);
> +
>         if (ctxt->mode == X86EMUL_MODE_PROT64)
>                 ctxt->op_bytes = 8;
>         get(ctxt, &desc_ptr);
> @@ -3784,6 +3788,10 @@ static int em_lidt(struct x86_emulate_ctxt *ctxt)
>
>  static int em_smsw(struct x86_emulate_ctxt *ctxt)
>  {
> +       if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
> +           ctxt->ops->cpl(ctxt) > 0)
> +               return emulate_gp(ctxt, 0);
> +
>         if (ctxt->dst.type == OP_MEM)
>                 ctxt->dst.bytes = 2;
>         ctxt->dst.val = ctxt->ops->get_cr(ctxt, 0);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb7fcd6..cda567aadd28 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -791,6 +791,9 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>         if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
>                 return 1;
>
> +       if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
> +               return 1;
> +
>         if (is_long_mode(vcpu)) {
>                 if (!(cr4 & X86_CR4_PAE))
>                         return 1;
> --
> 1.8.3.1
>
>

* Re: [PATCH 3/5] KVM: x86: emulate sldt and str
  2017-11-13 14:40 ` [PATCH 3/5] KVM: x86: emulate sldt and str Paolo Bonzini
@ 2017-11-15  0:41   ` Wanpeng Li
  0 siblings, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2017-11-15  0:41 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> These are needed to handle the descriptor table vmexits when emulating
> UMIP.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/kvm/emulate.c | 32 ++++++++++++++++++++++++++------
>  1 file changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index d27339332ac8..da2c5590240a 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -3638,17 +3638,27 @@ static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
>         return X86EMUL_CONTINUE;
>  }
>
> -static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
> +static int em_store_sreg(struct x86_emulate_ctxt *ctxt, int segment)
>  {
> -       if (ctxt->modrm_reg > VCPU_SREG_GS)
> -               return emulate_ud(ctxt);
> +       if (segment > VCPU_SREG_GS &&
> +           (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
> +           ctxt->ops->cpl(ctxt) > 0)
> +               return emulate_gp(ctxt, 0);
>
> -       ctxt->dst.val = get_segment_selector(ctxt, ctxt->modrm_reg);
> +       ctxt->dst.val = get_segment_selector(ctxt, segment);
>         if (ctxt->dst.bytes == 4 && ctxt->dst.type == OP_MEM)
>                 ctxt->dst.bytes = 2;
>         return X86EMUL_CONTINUE;
>  }
>
> +static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
> +{
> +       if (ctxt->modrm_reg > VCPU_SREG_GS)
> +               return emulate_ud(ctxt);
> +
> +       return em_store_sreg(ctxt, ctxt->modrm_reg);
> +}
> +
>  static int em_mov_sreg_rm(struct x86_emulate_ctxt *ctxt)
>  {
>         u16 sel = ctxt->src.val;
> @@ -3664,6 +3674,11 @@ static int em_mov_sreg_rm(struct x86_emulate_ctxt *ctxt)
>         return load_segment_descriptor(ctxt, sel, ctxt->modrm_reg);
>  }
>
> +static int em_sldt(struct x86_emulate_ctxt *ctxt)
> +{
> +       return em_store_sreg(ctxt, VCPU_SREG_LDTR);
> +}
> +
>  static int em_lldt(struct x86_emulate_ctxt *ctxt)
>  {
>         u16 sel = ctxt->src.val;
> @@ -3673,6 +3688,11 @@ static int em_lldt(struct x86_emulate_ctxt *ctxt)
>         return load_segment_descriptor(ctxt, sel, VCPU_SREG_LDTR);
>  }
>
> +static int em_str(struct x86_emulate_ctxt *ctxt)
> +{
> +       return em_store_sreg(ctxt, VCPU_SREG_TR);
> +}
> +
>  static int em_ltr(struct x86_emulate_ctxt *ctxt)
>  {
>         u16 sel = ctxt->src.val;
> @@ -4365,8 +4385,8 @@ static int check_perm_out(struct x86_emulate_ctxt *ctxt)
>  };
>
>  static const struct opcode group6[] = {
> -       DI(Prot | DstMem,       sldt),
> -       DI(Prot | DstMem,       str),
> +       II(Prot | DstMem,          em_sldt, sldt),
> +       II(Prot | DstMem,          em_str, str),
>         II(Prot | Priv | SrcMem16, em_lldt, lldt),
>         II(Prot | Priv | SrcMem16, em_ltr, ltr),
>         N, N, N, N,
> --
> 1.8.3.1
>
>

* Re: [PATCH 4/5] KVM: x86: add support for emulating UMIP
  2017-11-13 14:40 ` [PATCH 4/5] KVM: x86: add support for emulating UMIP Paolo Bonzini
@ 2017-11-15  0:42   ` Wanpeng Li
  0 siblings, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2017-11-15  0:42 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> The User-Mode Instruction Prevention (UMIP) feature present in recent Intel
> processors prevents a group of instructions (sgdt, sidt, sldt, smsw, and
> str) from being executed with CPL > 0; if one of them is executed anyway,
> a general protection fault is issued.
>
> UMIP instructions in general are also able to trigger vmexits, so we can
> actually emulate UMIP on older processors.  This commit sets up the
> infrastructure so that kvm-intel.ko and kvm-amd.ko can set the UMIP
> feature bit for CPUID even if the feature is not actually available
> in hardware.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/cpuid.c            | 2 ++
>  arch/x86/kvm/svm.c              | 6 ++++++
>  arch/x86/kvm/vmx.c              | 6 ++++++
>  4 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1b005ccf4d0b..f0a4f107a97f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1004,6 +1004,7 @@ struct kvm_x86_ops {
>         void (*handle_external_intr)(struct kvm_vcpu *vcpu);
>         bool (*mpx_supported)(void);
>         bool (*xsaves_supported)(void);
> +       bool (*umip_emulated)(void);
>
>         int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 77fb8732b47b..2b3b06458f6f 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -327,6 +327,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>         unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
>         unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
>         unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
> +       unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
>
>         /* cpuid 1.edx */
>         const u32 kvm_cpuid_1_edx_x86_features =
> @@ -473,6 +474,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>                         entry->ebx |= F(TSC_ADJUST);
>                         entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
>                         cpuid_mask(&entry->ecx, CPUID_7_ECX);
> +                       entry->ecx |= f_umip;
>                         /* PKU is not yet implemented for shadow paging. */
>                         if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
>                                 entry->ecx &= ~F(PKU);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0e68f0b3cbf7..be7fc7e5ee7e 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -5174,6 +5174,11 @@ static bool svm_xsaves_supported(void)
>         return false;
>  }
>
> +static bool svm_umip_emulated(void)
> +{
> +       return false;
> +}
> +
>  static bool svm_has_wbinvd_exit(void)
>  {
>         return true;
> @@ -5485,6 +5490,7 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
>         .invpcid_supported = svm_invpcid_supported,
>         .mpx_supported = svm_mpx_supported,
>         .xsaves_supported = svm_xsaves_supported,
> +       .umip_emulated = svm_umip_emulated,
>
>         .set_supported_cpuid = svm_set_supported_cpuid,
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 8917e100ddeb..6c474c94e154 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -9095,6 +9095,11 @@ static bool vmx_xsaves_supported(void)
>                 SECONDARY_EXEC_XSAVES;
>  }
>
> +static bool vmx_umip_emulated(void)
> +{
> +       return false;
> +}
> +
>  static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>  {
>         u32 exit_intr_info;
> @@ -12038,6 +12043,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
>         .handle_external_intr = vmx_handle_external_intr,
>         .mpx_supported = vmx_mpx_supported,
>         .xsaves_supported = vmx_xsaves_supported,
> +       .umip_emulated = vmx_umip_emulated,
>
>         .check_nested_events = vmx_check_nested_events,
>
> --
> 1.8.3.1
>
>

* Re: [PATCH 5/5] KVM: vmx: add support for emulating UMIP
  2017-11-13 14:40 ` [PATCH 5/5] KVM: vmx: " Paolo Bonzini
@ 2017-11-15  0:42   ` Wanpeng Li
  0 siblings, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2017-11-15  0:42 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> UMIP can be emulated almost perfectly on Intel processors by enabling
> descriptor-table exits.  SMSW does not cause a vmexit and hence it
> cannot be changed into a #GP fault, but all in all it's the most
> "innocuous" of the unprivileged instructions that UMIP blocks.
>
> In fact, Linux is _also_ emulating SMSW instructions on behalf of the
> program that executes them, because some 16-bit programs expect to use
> SMSW to detect vm86 mode, so this is an even smaller issue.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>

> ---
>  arch/x86/kvm/vmx.c | 29 +++++++++++++++++++++++++++--
>  1 file changed, 27 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 6c474c94e154..a257ddc644d1 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3651,6 +3651,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
>                         SECONDARY_EXEC_ENABLE_EPT |
>                         SECONDARY_EXEC_UNRESTRICTED_GUEST |
>                         SECONDARY_EXEC_PAUSE_LOOP_EXITING |
> +                       SECONDARY_EXEC_DESC |
>                         SECONDARY_EXEC_RDTSCP |
>                         SECONDARY_EXEC_ENABLE_INVPCID |
>                         SECONDARY_EXEC_APIC_REGISTER_VIRT |
> @@ -4347,6 +4348,14 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>                 (to_vmx(vcpu)->rmode.vm86_active ?
>                  KVM_RMODE_VM_CR4_ALWAYS_ON : KVM_PMODE_VM_CR4_ALWAYS_ON);
>
> +       if ((cr4 & X86_CR4_UMIP) && !boot_cpu_has(X86_FEATURE_UMIP)) {
> +               vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL,
> +                             SECONDARY_EXEC_DESC);
> +               hw_cr4 &= ~X86_CR4_UMIP;
> +       } else
> +               vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL,
> +                               SECONDARY_EXEC_DESC);
> +
>         if (cr4 & X86_CR4_VMXE) {
>                 /*
>                  * To use VMXON (and later other VMX instructions), a guest
> @@ -5296,6 +5305,7 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>         struct kvm_vcpu *vcpu = &vmx->vcpu;
>
>         u32 exec_control = vmcs_config.cpu_based_2nd_exec_ctrl;
> +
>         if (!cpu_need_virtualize_apic_accesses(vcpu))
>                 exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
>         if (vmx->vpid == 0)
> @@ -5314,6 +5324,11 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
>                 exec_control &= ~(SECONDARY_EXEC_APIC_REGISTER_VIRT |
>                                   SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
>         exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
> +
> +       /* SECONDARY_EXEC_DESC is enabled/disabled on writes to CR4.UMIP,
> +        * in vmx_set_cr4.  */
> +       exec_control &= ~SECONDARY_EXEC_DESC;
> +
>         /* SECONDARY_EXEC_SHADOW_VMCS is enabled when L1 executes VMPTRLD
>            (handle_vmptrld).
>            We can NOT enable shadow_vmcs here because we don't have yet
> @@ -6064,6 +6079,12 @@ static int handle_set_cr4(struct kvm_vcpu *vcpu, unsigned long val)
>                 return kvm_set_cr4(vcpu, val);
>  }
>
> +static int handle_desc(struct kvm_vcpu *vcpu)
> +{
> +       WARN_ON(!(vcpu->arch.cr4 & X86_CR4_UMIP));
> +       return emulate_instruction(vcpu, 0) == EMULATE_DONE;
> +}
> +
>  static int handle_cr(struct kvm_vcpu *vcpu)
>  {
>         unsigned long exit_qualification, val;
> @@ -8152,6 +8173,8 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>         [EXIT_REASON_XSETBV]                  = handle_xsetbv,
>         [EXIT_REASON_TASK_SWITCH]             = handle_task_switch,
>         [EXIT_REASON_MCE_DURING_VMENTRY]      = handle_machine_check,
> +       [EXIT_REASON_GDTR_IDTR]               = handle_desc,
> +       [EXIT_REASON_LDTR_TR]                 = handle_desc,
>         [EXIT_REASON_EPT_VIOLATION]           = handle_ept_violation,
>         [EXIT_REASON_EPT_MISCONFIG]           = handle_ept_misconfig,
>         [EXIT_REASON_PAUSE_INSTRUCTION]       = handle_pause,
> @@ -9097,7 +9120,8 @@ static bool vmx_xsaves_supported(void)
>
>  static bool vmx_umip_emulated(void)
>  {
> -       return false;
> +       return vmcs_config.cpu_based_2nd_exec_ctrl &
> +               SECONDARY_EXEC_DESC;
>  }
>
>  static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
> @@ -9691,7 +9715,8 @@ static void vmcs_set_secondary_exec_control(u32 new_ctl)
>         u32 mask =
>                 SECONDARY_EXEC_SHADOW_VMCS |
>                 SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
> -               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> +               SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
> +               SECONDARY_EXEC_DESC;
>
>         u32 cur_ctl = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
>
> --
> 1.8.3.1
>

* Re: [PATCH 2/5] KVM: x86: add support for UMIP
  2017-11-13 14:40 ` [PATCH 2/5] KVM: x86: add support for UMIP Paolo Bonzini
  2017-11-15  0:40   ` Wanpeng Li
@ 2018-02-06  2:45   ` Wanpeng Li
  1 sibling, 0 replies; 12+ messages in thread
From: Wanpeng Li @ 2018-02-06  2:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: LKML, kvm, Radim Krcmar

2017-11-13 22:40 GMT+08:00 Paolo Bonzini <pbonzini@redhat.com>:
> Add the CPUID bits, make the CR4.UMIP bit not reserved anymore, and
> add UMIP support for instructions that are already emulated by KVM.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/cpuid.c            | 4 ++--
>  arch/x86/kvm/emulate.c          | 8 ++++++++
>  arch/x86/kvm/x86.c              | 3 +++
>  4 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index c73e493adf07..1b005ccf4d0b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -86,7 +86,7 @@
>                           | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>                           | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>                           | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> -                         | X86_CR4_SMAP | X86_CR4_PKE))
> +                         | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
>
>  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 0099e10eb045..77fb8732b47b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -387,8 +387,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>
>         /* cpuid 7.0.ecx*/
>         const u32 kvm_cpuid_7_0_ecx_x86_features =
> -               F(AVX512VBMI) | F(LA57) | F(PKU) |
> -               0 /*OSPKE*/ | F(AVX512_VPOPCNTDQ);
> +               F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
> +               F(AVX512_VPOPCNTDQ) | F(UMIP);
>
>         /* cpuid 7.0.edx*/
>         const u32 kvm_cpuid_7_0_edx_x86_features =
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index d90cdc77e077..d27339332ac8 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -3725,6 +3725,10 @@ static int emulate_store_desc_ptr(struct x86_emulate_ctxt *ctxt,
>  {
>         struct desc_ptr desc_ptr;
>
> +       if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
> +           ctxt->ops->cpl(ctxt) > 0)
> +               return emulate_gp(ctxt, 0);
> +
>         if (ctxt->mode == X86EMUL_MODE_PROT64)
>                 ctxt->op_bytes = 8;
>         get(ctxt, &desc_ptr);
> @@ -3784,6 +3788,10 @@ static int em_lidt(struct x86_emulate_ctxt *ctxt)
>
>  static int em_smsw(struct x86_emulate_ctxt *ctxt)
>  {
> +       if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
> +           ctxt->ops->cpl(ctxt) > 0)
> +               return emulate_gp(ctxt, 0);
> +
>         if (ctxt->dst.type == OP_MEM)
>                 ctxt->dst.bytes = 2;
>         ctxt->dst.val = ctxt->ops->get_cr(ctxt, 0);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 03869eb7fcd6..cda567aadd28 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -791,6 +791,9 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>         if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
>                 return 1;
>
> +       if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
> +               return 1;

There is a scenario here: the UMIP CPUID bit is not exposed to the guest
since it depends on SECONDARY_EXEC_DESC being set, but SECONDARY_EXEC_DESC
in turn depends on the guest setting the X86_CR4_UMIP bit.  As a result,
kvm_set_cr4() injects a #GP and fails to set the X86_CR4_UMIP bit, because
the UMIP CPUID bit is not exposed to the guest.  This scenario can be
observed when running kvm-unit-tests/umip.flat in L1.

Regards,
Wanpeng Li
