* [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
@ 2017-04-21 16:53 Jim Mattson
2017-04-22 0:24 ` kbuild test robot
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Jim Mattson @ 2017-04-21 16:53 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson
According to the Intel SDM, "Certain exceptions have priority over VM
exits. These include invalid-opcode exceptions, faults based on
privilege level*, and general-protection exceptions that are based on
checking I/O permission bits in the task-state segment (TSS)."
There is no need to check for faulting conditions that the hardware
has already checked.
One of the constraints on the VMX instructions is that they are not
allowed in real-address mode. Though the hardware checks for this
condition as well, when real-address mode is emulated, the faulting
condition does have to be checked in software.
* These include faults generated by attempts to execute, in
virtual-8086 mode, privileged instructions that are not recognized
in that mode.
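As a rough model of the check ordering described above (a sketch only; the
helper and parameter names here are illustrative, not KVM's actual code), the
sole condition left for software is the emulated real-address-mode case,
since hardware delivers #UD/#GP for CPL, CS.L, and virtual-8086 violations
before the VM exit is ever taken:

```python
def vmx_insn_fault(vmxon_done, unrestricted_guest, cr0_pe):
    """Model of the reduced software-side check for VMX instructions
    other than VMXON. Hardware has already raised #UD/#GP for the
    privilege-level, CS.L, and virtual-8086 cases, so they never
    reach the emulator."""
    if not vmxon_done:
        # Not in VMX operation: #UD, per the instruction reference.
        return "#UD"
    if not unrestricted_guest and not cr0_pe:
        # CR0.PE=0 means real-address mode is being emulated by KVM;
        # hardware could not have checked this, so software must.
        return "#UD"
    return None  # no fault; proceed with emulation
```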
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/vmx.c | 58 ++++++++++++++----------------------------------------
1 file changed, 15 insertions(+), 43 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 259e9b28ccf8..1a975e942b87 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7115,25 +7115,14 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
/* The Intel VMX Instruction Reference lists a bunch of bits that
* are prerequisite to running VMXON, most notably cr4.VMXE must be
* set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
- * Otherwise, we should fail with #UD. We test these now:
+ * Otherwise, we should fail with #UD. Hardware has already tested
+ * most or all of these conditions, with the exception of real-address
+ * mode, when real-address mode is emulated.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
- !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
- (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if (is_long_mode(vcpu) && !cs.l) {
+ if ((!enable_unrestricted_guest &&
+ !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
if (vmx->nested.vmxon) {
nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
@@ -7161,30 +7150,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
* Intel's VMX Instruction Reference specifies a common set of prerequisites
* for running VMX instructions (except VMXON, whose prerequisites are
* slightly different). It also specifies what exception to inject otherwise.
+ * Note that many of these exceptions have priority over VM exits, so they
+ * don't have to be checked again here.
*/
-static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
{
- struct kvm_segment cs;
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
- if (!vmx->nested.vmxon) {
+ if (!to_vmx(vcpu)->nested.vmxon ||
+ (!enable_unrestricted_guest &&
+ !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if ((vmx_get_rflags(vcpu) & X86_EFLAGS_VM) ||
- (is_long_mode(vcpu) && !cs.l)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 0;
+ return false;
}
-
- return 1;
+ return true;
}
static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
@@ -7527,7 +7504,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &gva))
return 1;
- /* _system ok, as nested_vmx_check_permission verified cpl=0 */
+ /* _system ok, as hardware has verified cpl=0 */
kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
&field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
}
@@ -7660,7 +7637,7 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &vmcs_gva))
return 1;
- /* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
+ /* ok to use *_system, as hardware has verified cpl=0 */
if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
(void *)&to_vmx(vcpu)->nested.current_vmptr,
sizeof(u64), &e)) {
@@ -7693,11 +7670,6 @@ static int handle_invept(struct kvm_vcpu *vcpu)
if (!nested_vmx_check_permission(vcpu))
return 1;
- if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
--
2.12.2.816.g2cccc81164-goog
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-21 16:53 [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks Jim Mattson
@ 2017-04-22 0:24 ` kbuild test robot
2017-04-22 1:35 ` kbuild test robot
` (2 subsequent siblings)
3 siblings, 0 replies; 19+ messages in thread
From: kbuild test robot @ 2017-04-22 0:24 UTC (permalink / raw)
To: Jim Mattson; +Cc: kbuild-all, kvm, Jim Mattson
[-- Attachment #1: Type: text/plain, Size: 11496 bytes --]
Hi Jim,
[auto build test ERROR on kvm/linux-next]
[also build test ERROR on v4.11-rc7 next-20170421]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Jim-Mattson/kvm-nVMX-Remove-superfluous-VMX-instruction-fault-checks/20170422-073002
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: i386-randconfig-x073-201716 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All error/warnings (new ones prefixed by >>):
arch/x86//kvm/vmx.c: In function 'handle_vmon':
>> arch/x86//kvm/vmx.c:7099:13: error: invalid storage class for function 'nested_vmx_check_permission'
static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7099:1: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
^~~~~~
>> arch/x86//kvm/vmx.c:7110:20: error: invalid storage class for function 'nested_release_vmcs12'
static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
^~~~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7144:13: error: invalid storage class for function 'free_nested'
static void free_nested(struct vcpu_vmx *vmx)
^~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7182:12: error: invalid storage class for function 'handle_vmoff'
static int handle_vmoff(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7192:12: error: invalid storage class for function 'handle_vmclear'
static int handle_vmclear(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7217:12: error: invalid storage class for function 'nested_vmx_run'
static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch);
^~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7220:12: error: invalid storage class for function 'handle_vmlaunch'
static int handle_vmlaunch(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~~
arch/x86//kvm/vmx.c: In function 'handle_vmlaunch':
>> arch/x86//kvm/vmx.c:7222:9: error: implicit declaration of function 'nested_vmx_run' [-Werror=implicit-function-declaration]
return nested_vmx_run(vcpu, true);
^~~~~~~~~~~~~~
arch/x86//kvm/vmx.c: In function 'handle_vmon':
>> arch/x86//kvm/vmx.c:7226:12: error: invalid storage class for function 'handle_vmresume'
static int handle_vmresume(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7239:19: error: invalid storage class for function 'vmcs_field_type'
static inline int vmcs_field_type(unsigned long field)
^~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7246:19: error: invalid storage class for function 'vmcs_field_readonly'
static inline int vmcs_field_readonly(unsigned long field)
^~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7258:19: error: invalid storage class for function 'vmcs12_read_any'
static inline int vmcs12_read_any(struct kvm_vcpu *vcpu,
^~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7289:19: error: invalid storage class for function 'vmcs12_write_any'
static inline int vmcs12_write_any(struct kvm_vcpu *vcpu,
^~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7316:13: error: invalid storage class for function 'copy_shadow_to_vmcs12'
static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
^~~~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7357:13: error: invalid storage class for function 'copy_vmcs12_to_shadow'
static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
^~~~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7407:12: error: invalid storage class for function 'nested_vmx_check_vmcs12'
static int nested_vmx_check_vmcs12(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7417:12: error: invalid storage class for function 'handle_vmread'
static int handle_vmread(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7460:12: error: invalid storage class for function 'handle_vmwrite'
static int handle_vmwrite(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~
>> arch/x86//kvm/vmx.c:7512:13: error: invalid storage class for function 'set_current_vmptr'
static void set_current_vmptr(struct vcpu_vmx *vmx, gpa_t vmptr)
^~~~~~~~~~~~~~~~~
vim +/nested_vmx_check_permission +7099 arch/x86//kvm/vmx.c
7093 * Intel's VMX Instruction Reference specifies a common set of prerequisites
7094 * for running VMX instructions (except VMXON, whose prerequisites are
7095 * slightly different). It also specifies what exception to inject otherwise.
7096 * Note that many of these exceptions have priority over VM exits, so they
7097 * don't have to be checked again here.
7098 */
> 7099 static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
7100 {
7101 if (!to_vmx(vcpu)->nested.vmxon ||
7102 (!enable_unrestricted_guest &&
7103 !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
7104 kvm_queue_exception(vcpu, UD_VECTOR);
7105 return false;
7106 }
7107 return true;
7108 }
7109
> 7110 static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
7111 {
7112 if (vmx->nested.current_vmptr == -1ull)
7113 return;
7114
7115 /* current_vmptr and current_vmcs12 are always set/reset together */
7116 if (WARN_ON(vmx->nested.current_vmcs12 == NULL))
7117 return;
7118
7119 if (enable_shadow_vmcs) {
7120 /* copy to memory all shadowed fields in case
7121 they were modified */
7122 copy_shadow_to_vmcs12(vmx);
7123 vmx->nested.sync_shadow_vmcs = false;
7124 vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL,
7125 SECONDARY_EXEC_SHADOW_VMCS);
7126 vmcs_write64(VMCS_LINK_POINTER, -1ull);
7127 }
7128 vmx->nested.posted_intr_nv = -1;
7129
7130 /* Flush VMCS12 to guest memory */
7131 memcpy(vmx->nested.current_vmcs12, vmx->nested.cached_vmcs12,
7132 VMCS12_SIZE);
7133
7134 kunmap(vmx->nested.current_vmcs12_page);
7135 nested_release_page(vmx->nested.current_vmcs12_page);
7136 vmx->nested.current_vmptr = -1ull;
7137 vmx->nested.current_vmcs12 = NULL;
7138 }
7139
7140 /*
7141 * Free whatever needs to be freed from vmx->nested when L1 goes down, or
7142 * just stops using VMX.
7143 */
> 7144 static void free_nested(struct vcpu_vmx *vmx)
7145 {
7146 if (!vmx->nested.vmxon)
7147 return;
7148
7149 vmx->nested.vmxon = false;
7150 free_vpid(vmx->nested.vpid02);
7151 nested_release_vmcs12(vmx);
7152 if (vmx->nested.msr_bitmap) {
7153 free_page((unsigned long)vmx->nested.msr_bitmap);
7154 vmx->nested.msr_bitmap = NULL;
7155 }
7156 if (enable_shadow_vmcs) {
7157 vmcs_clear(vmx->vmcs01.shadow_vmcs);
7158 free_vmcs(vmx->vmcs01.shadow_vmcs);
7159 vmx->vmcs01.shadow_vmcs = NULL;
7160 }
7161 kfree(vmx->nested.cached_vmcs12);
7162 /* Unpin physical memory we referred to in current vmcs02 */
7163 if (vmx->nested.apic_access_page) {
7164 nested_release_page(vmx->nested.apic_access_page);
7165 vmx->nested.apic_access_page = NULL;
7166 }
7167 if (vmx->nested.virtual_apic_page) {
7168 nested_release_page(vmx->nested.virtual_apic_page);
7169 vmx->nested.virtual_apic_page = NULL;
7170 }
7171 if (vmx->nested.pi_desc_page) {
7172 kunmap(vmx->nested.pi_desc_page);
7173 nested_release_page(vmx->nested.pi_desc_page);
7174 vmx->nested.pi_desc_page = NULL;
7175 vmx->nested.pi_desc = NULL;
7176 }
7177
7178 nested_free_all_saved_vmcss(vmx);
7179 }
7180
7181 /* Emulate the VMXOFF instruction */
> 7182 static int handle_vmoff(struct kvm_vcpu *vcpu)
7183 {
7184 if (!nested_vmx_check_permission(vcpu))
7185 return 1;
7186 free_nested(to_vmx(vcpu));
7187 nested_vmx_succeed(vcpu);
7188 return kvm_skip_emulated_instruction(vcpu);
7189 }
7190
7191 /* Emulate the VMCLEAR instruction */
> 7192 static int handle_vmclear(struct kvm_vcpu *vcpu)
7193 {
7194 struct vcpu_vmx *vmx = to_vmx(vcpu);
7195 u32 zero = 0;
7196 gpa_t vmptr;
7197
7198 if (!nested_vmx_check_permission(vcpu))
7199 return 1;
7200
7201 if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMCLEAR, &vmptr))
7202 return 1;
7203
7204 if (vmptr == vmx->nested.current_vmptr)
7205 nested_release_vmcs12(vmx);
7206
7207 kvm_vcpu_write_guest(vcpu,
7208 vmptr + offsetof(struct vmcs12, launch_state),
7209 &zero, sizeof(zero));
7210
7211 nested_free_vmcs02(vmx, vmptr);
7212
7213 nested_vmx_succeed(vcpu);
7214 return kvm_skip_emulated_instruction(vcpu);
7215 }
7216
> 7217 static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch);
7218
7219 /* Emulate the VMLAUNCH instruction */
> 7220 static int handle_vmlaunch(struct kvm_vcpu *vcpu)
7221 {
> 7222 return nested_vmx_run(vcpu, true);
7223 }
7224
7225 /* Emulate the VMRESUME instruction */
> 7226 static int handle_vmresume(struct kvm_vcpu *vcpu)
7227 {
7228
7229 return nested_vmx_run(vcpu, false);
7230 }
7231
7232 enum vmcs_field_type {
7233 VMCS_FIELD_TYPE_U16 = 0,
7234 VMCS_FIELD_TYPE_U64 = 1,
7235 VMCS_FIELD_TYPE_U32 = 2,
7236 VMCS_FIELD_TYPE_NATURAL_WIDTH = 3
7237 };
7238
> 7239 static inline int vmcs_field_type(unsigned long field)
7240 {
7241 if (0x1 & field) /* the *_HIGH fields are all 32 bit */
7242 return VMCS_FIELD_TYPE_U32;
7243 return (field >> 13) & 0x3 ;
7244 }
7245
> 7246 static inline int vmcs_field_readonly(unsigned long field)
7247 {
7248 return (((field >> 10) & 0x3) == 1);
7249 }
7250
7251 /*
7252 * Read a vmcs12 field. Since these can have varying lengths and we return
7253 * one type, we chose the biggest type (u64) and zero-extend the return value
7254 * to that size. Note that the caller, handle_vmread, might need to use only
7255 * some of the bits we return here (e.g., on 32-bit guests, only 32 bits of
7256 * 64-bit fields are to be returned).
7257 */
> 7258 static inline int vmcs12_read_any(struct kvm_vcpu *vcpu,
7259 unsigned long field, u64 *ret)
7260 {
7261 short offset = vmcs_field_to_offset(field);
7262 char *p;
7263
7264 if (offset < 0)
7265 return offset;
7266
7267 p = ((char *)(get_vmcs12(vcpu))) + offset;
7268
7269 switch (vmcs_field_type(field)) {
7270 case VMCS_FIELD_TYPE_NATURAL_WIDTH:
7271 *ret = *((natural_width *)p);
7272 return 0;
7273 case VMCS_FIELD_TYPE_U16:
7274 *ret = *((u16 *)p);
7275 return 0;
7276 case VMCS_FIELD_TYPE_U32:
7277 *ret = *((u32 *)p);
7278 return 0;
7279 case VMCS_FIELD_TYPE_U64:
7280 *ret = *((u64 *)p);
7281 return 0;
7282 default:
7283 WARN_ON(1);
7284 return -ENOENT;
7285 }
7286 }
7287
7288
> 7289 static inline int vmcs12_write_any(struct kvm_vcpu *vcpu,
7290 unsigned long field, u64 field_value){
7291 short offset = vmcs_field_to_offset(field);
7292 char *p = ((char *) get_vmcs12(vcpu)) + offset;
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25718 bytes --]
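The `vmcs_field_type` and `vmcs_field_readonly` helpers quoted in the build
log above decode the SDM's VMCS field-encoding layout: the field width lives
in bits 14:13 and the access type in bits 11:10, with odd encodings naming
the 32-bit *_HIGH half of a 64-bit field. A standalone sketch of the same
bit extraction (field constants taken from the SDM encodings):

```python
# Width values per bits 14:13 of a VMCS field encoding.
WIDTHS = {0: "u16", 1: "u64", 2: "u32", 3: "natural"}

def vmcs_field_type(field):
    # Odd encodings are the *_HIGH companions: always 32 bits.
    if field & 1:
        return "u32"
    return WIDTHS[(field >> 13) & 3]

def vmcs_field_readonly(field):
    # Access type (bits 11:10) == 1 marks a read-only data field.
    return ((field >> 10) & 3) == 1

VM_EXIT_REASON = 0x4402     # 32-bit read-only data field
VMCS_LINK_POINTER = 0x2800  # 64-bit control field

print(vmcs_field_type(VM_EXIT_REASON))          # u32
print(vmcs_field_readonly(VM_EXIT_REASON))      # True
print(vmcs_field_type(VMCS_LINK_POINTER))       # u64
print(vmcs_field_type(VMCS_LINK_POINTER + 1))   # u32 (the *_HIGH half)
```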
* Re: [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-21 16:53 [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks Jim Mattson
2017-04-22 0:24 ` kbuild test robot
@ 2017-04-22 1:35 ` kbuild test robot
2017-04-24 15:28 ` [PATCH v2] " Jim Mattson
2017-04-26 9:26 ` [PATCH] " Paolo Bonzini
3 siblings, 0 replies; 19+ messages in thread
From: kbuild test robot @ 2017-04-22 1:35 UTC (permalink / raw)
To: Jim Mattson; +Cc: kbuild-all, kvm, Jim Mattson
[-- Attachment #1: Type: text/plain, Size: 11614 bytes --]
Hi Jim,
[auto build test WARNING on kvm/linux-next]
[also build test WARNING on v4.11-rc7 next-20170421]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Jim-Mattson/kvm-nVMX-Remove-superfluous-VMX-instruction-fault-checks/20170422-073002
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: x86_64-kexec (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
All warnings (new ones prefixed by >>):
.get_tdp_level = get_ept_level,
^~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11459:19: note: (near initialization for 'vmx_x86_ops.get_tdp_level')
arch/x86/kvm/vmx.c:11460:17: error: initializer element is not constant
.get_mt_mask = vmx_get_mt_mask,
^~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11460:17: note: (near initialization for 'vmx_x86_ops.get_mt_mask')
arch/x86/kvm/vmx.c:11462:19: error: initializer element is not constant
.get_exit_info = vmx_get_exit_info,
^~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11462:19: note: (near initialization for 'vmx_x86_ops.get_exit_info')
arch/x86/kvm/vmx.c:11464:21: error: initializer element is not constant
.get_lpage_level = vmx_get_lpage_level,
^~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11464:21: note: (near initialization for 'vmx_x86_ops.get_lpage_level')
arch/x86/kvm/vmx.c:11466:18: error: initializer element is not constant
.cpuid_update = vmx_cpuid_update,
^~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11466:18: note: (near initialization for 'vmx_x86_ops.cpuid_update')
arch/x86/kvm/vmx.c:11471:25: error: initializer element is not constant
.set_supported_cpuid = vmx_set_supported_cpuid,
^~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11471:25: note: (near initialization for 'vmx_x86_ops.set_supported_cpuid')
arch/x86/kvm/vmx.c:11479:21: error: initializer element is not constant
.check_intercept = vmx_check_intercept,
^~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11479:21: note: (near initialization for 'vmx_x86_ops.check_intercept')
arch/x86/kvm/vmx.c:11480:26: error: initializer element is not constant
.handle_external_intr = vmx_handle_external_intr,
^~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11480:26: note: (near initialization for 'vmx_x86_ops.handle_external_intr')
arch/x86/kvm/vmx.c:11481:19: error: initializer element is not constant
.mpx_supported = vmx_mpx_supported,
^~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11481:19: note: (near initialization for 'vmx_x86_ops.mpx_supported')
arch/x86/kvm/vmx.c:11482:22: error: initializer element is not constant
.xsaves_supported = vmx_xsaves_supported,
^~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11482:22: note: (near initialization for 'vmx_x86_ops.xsaves_supported')
arch/x86/kvm/vmx.c:11484:25: error: initializer element is not constant
.check_nested_events = vmx_check_nested_events,
^~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11484:25: note: (near initialization for 'vmx_x86_ops.check_nested_events')
arch/x86/kvm/vmx.c:11486:14: error: initializer element is not constant
.sched_in = vmx_sched_in,
^~~~~~~~~~~~
arch/x86/kvm/vmx.c:11486:14: note: (near initialization for 'vmx_x86_ops.sched_in')
arch/x86/kvm/vmx.c:11488:27: error: initializer element is not constant
.slot_enable_log_dirty = vmx_slot_enable_log_dirty,
^~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11488:27: note: (near initialization for 'vmx_x86_ops.slot_enable_log_dirty')
arch/x86/kvm/vmx.c:11489:28: error: initializer element is not constant
.slot_disable_log_dirty = vmx_slot_disable_log_dirty,
^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11489:28: note: (near initialization for 'vmx_x86_ops.slot_disable_log_dirty')
arch/x86/kvm/vmx.c:11490:21: error: initializer element is not constant
.flush_log_dirty = vmx_flush_log_dirty,
^~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11490:21: note: (near initialization for 'vmx_x86_ops.flush_log_dirty')
arch/x86/kvm/vmx.c:11491:32: error: initializer element is not constant
.enable_log_dirty_pt_masked = vmx_enable_log_dirty_pt_masked,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11491:32: note: (near initialization for 'vmx_x86_ops.enable_log_dirty_pt_masked')
arch/x86/kvm/vmx.c:11493:15: error: initializer element is not constant
.pre_block = vmx_pre_block,
^~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11493:15: note: (near initialization for 'vmx_x86_ops.pre_block')
arch/x86/kvm/vmx.c:11494:16: error: initializer element is not constant
.post_block = vmx_post_block,
^~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11494:16: note: (near initialization for 'vmx_x86_ops.post_block')
arch/x86/kvm/vmx.c:11498:20: error: initializer element is not constant
.update_pi_irte = vmx_update_pi_irte,
^~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11498:20: note: (near initialization for 'vmx_x86_ops.update_pi_irte')
arch/x86/kvm/vmx.c:11501:18: error: initializer element is not constant
.set_hv_timer = vmx_set_hv_timer,
^~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11501:18: note: (near initialization for 'vmx_x86_ops.set_hv_timer')
arch/x86/kvm/vmx.c:11502:21: error: initializer element is not constant
.cancel_hv_timer = vmx_cancel_hv_timer,
^~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11502:21: note: (near initialization for 'vmx_x86_ops.cancel_hv_timer')
arch/x86/kvm/vmx.c:11505:15: error: initializer element is not constant
.setup_mce = vmx_setup_mce,
^~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11505:15: note: (near initialization for 'vmx_x86_ops.setup_mce')
arch/x86/kvm/vmx.c:11508:19: error: invalid storage class for function 'vmx_init'
static int __init vmx_init(void)
^~~~~~~~
arch/x86/kvm/vmx.c:11523:20: error: invalid storage class for function 'vmx_exit'
static void __exit vmx_exit(void)
^~~~~~~~
In file included from arch/x86/kvm/vmx.c:25:0:
include/linux/module.h:129:42: error: invalid storage class for function '__inittest'
static inline initcall_t __maybe_unused __inittest(void) \
^
arch/x86/kvm/vmx.c:11533:1: note: in expansion of macro 'module_init'
module_init(vmx_init)
^~~~~~~~~~~
>> arch/x86/kvm/vmx.c:11533:1: warning: 'alias' attribute ignored [-Wattributes]
In file included from arch/x86/kvm/vmx.c:25:0:
include/linux/module.h:135:42: error: invalid storage class for function '__exittest'
static inline exitcall_t __maybe_unused __exittest(void) \
^
arch/x86/kvm/vmx.c:11534:1: note: in expansion of macro 'module_exit'
module_exit(vmx_exit)
^~~~~~~~~~~
arch/x86/kvm/vmx.c:11534:1: warning: 'alias' attribute ignored [-Wattributes]
arch/x86/kvm/vmx.c:11534:1: error: expected declaration or statement at end of input
arch/x86/kvm/vmx.c:7053:21: warning: unused variable 'cs' [-Wunused-variable]
struct kvm_segment cs;
^~
arch/x86/kvm/vmx.c: At top level:
arch/x86/kvm/vmx.c:908:22: warning: 'nested_ept_get_cr3' used but never defined
static unsigned long nested_ept_get_cr3(struct kvm_vcpu *vcpu);
^~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:910:13: warning: 'vmx_xsaves_supported' used but never defined
static bool vmx_xsaves_supported(void);
^~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:918:13: warning: 'copy_vmcs12_to_shadow' declared 'static' but never defined [-Wunused-function]
static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx);
^~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:919:13: warning: 'copy_shadow_to_vmcs12' used but never defined
static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
^~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:1383:13: warning: 'nested_vmx_vmexit' used but never defined
static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
^~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:1386:13: warning: 'nested_vmx_entry_failure' used but never defined
static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu,
^~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:3221:13: warning: 'vmx_leave_nested' used but never defined
static void vmx_leave_nested(struct kvm_vcpu *vcpu);
^~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11032:13: warning: 'nested_vmx_entry_failure' defined but not used [-Wunused-function]
static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu,
^~~~~~~~~~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:11016:13: warning: 'vmx_leave_nested' defined but not used [-Wunused-function]
static void vmx_leave_nested(struct kvm_vcpu *vcpu)
^~~~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:10374:12: warning: 'nested_vmx_run' defined but not used [-Wunused-function]
static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
^~~~~~~~~~~~~~
arch/x86/kvm/vmx.c:7050:12: warning: 'handle_vmon' defined but not used [-Wunused-function]
static int handle_vmon(struct kvm_vcpu *vcpu)
^~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/alias +11533 arch/x86/kvm/vmx.c
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11517 crash_vmclear_local_loaded_vmcss);
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11518 #endif
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11519
fdef3ad1 drivers/kvm/vmx.c He, Qing 2007-04-30 11520 return 0;
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11521 }
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11522
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11523 static void __exit vmx_exit(void)
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11524 {
2965faa5 arch/x86/kvm/vmx.c Dave Young 2015-09-09 11525 #ifdef CONFIG_KEXEC_CORE
3b63a43f arch/x86/kvm/vmx.c Monam Agarwal 2014-03-22 11526 RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11527 synchronize_rcu();
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11528 #endif
8f536b76 arch/x86/kvm/vmx.c Zhang Yanfei 2012-12-06 11529
cb498ea2 drivers/kvm/vmx.c Zhang Xiantao 2007-11-14 11530 kvm_exit();
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11531 }
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11532
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 @11533 module_init(vmx_init)
6aa8b732 drivers/kvm/vmx.c Avi Kivity 2006-12-10 11534 module_exit(vmx_exit)
:::::: The code at line 11533 was first introduced by commit
:::::: 6aa8b732ca01c3d7a54e93f4d701b8aabbe60fb7 [PATCH] kvm: userspace interface
:::::: TO: Avi Kivity <avi@qumranet.com>
:::::: CC: Linus Torvalds <torvalds@woody.osdl.org>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25132 bytes --]
* [PATCH v2] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-21 16:53 [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks Jim Mattson
2017-04-22 0:24 ` kbuild test robot
2017-04-22 1:35 ` kbuild test robot
@ 2017-04-24 15:28 ` Jim Mattson
2017-04-25 12:38 ` David Hildenbrand
2017-04-26 9:26 ` [PATCH] " Paolo Bonzini
3 siblings, 1 reply; 19+ messages in thread
From: Jim Mattson @ 2017-04-24 15:28 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson
According to the Intel SDM, "Certain exceptions have priority over VM
exits. These include invalid-opcode exceptions, faults based on
privilege level*, and general-protection exceptions that are based on
checking I/O permission bits in the task-state segment (TSS)."
There is no need to check for faulting conditions that the hardware
has already checked.
One of the constraints on the VMX instructions is that they are not
allowed in real-address mode. Though the hardware checks for this
condition as well, when real-address mode is emulated, the faulting
condition does have to be checked in software.
* These include faults generated by attempts to execute, in
virtual-8086 mode, privileged instructions that are not recognized
in that mode.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/vmx.c | 57 ++++++++++++++----------------------------------------
1 file changed, 15 insertions(+), 42 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 259e9b28ccf8..e7a7edb4cdb0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7107,7 +7107,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
static int handle_vmon(struct kvm_vcpu *vcpu)
{
int ret;
- struct kvm_segment cs;
struct vcpu_vmx *vmx = to_vmx(vcpu);
const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
| FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
@@ -7115,26 +7114,17 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
/* The Intel VMX Instruction Reference lists a bunch of bits that
* are prerequisite to running VMXON, most notably cr4.VMXE must be
* set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
- * Otherwise, we should fail with #UD. We test these now:
+ * Otherwise, we should fail with #UD. Hardware has already tested
+ * most or all of these conditions, with the exception of real-address
+ * mode, when real-address mode is emulated.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
- !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
- (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if (is_long_mode(vcpu) && !cs.l) {
+ if ((!enable_unrestricted_guest &&
+ !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
-
if (vmx->nested.vmxon) {
nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
return kvm_skip_emulated_instruction(vcpu);
@@ -7161,30 +7151,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
* Intel's VMX Instruction Reference specifies a common set of prerequisites
* for running VMX instructions (except VMXON, whose prerequisites are
* slightly different). It also specifies what exception to inject otherwise.
+ * Note that many of these exceptions have priority over VM exits, so they
+ * don't have to be checked again here.
*/
-static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
{
- struct kvm_segment cs;
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
- if (!vmx->nested.vmxon) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if ((vmx_get_rflags(vcpu) & X86_EFLAGS_VM) ||
- (is_long_mode(vcpu) && !cs.l)) {
+ if (!to_vmx(vcpu)->nested.vmxon ||
+ (!enable_unrestricted_guest &&
+ !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 0;
+ return false;
}
-
- return 1;
+ return true;
}
static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
@@ -7527,7 +7505,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &gva))
return 1;
- /* _system ok, as nested_vmx_check_permission verified cpl=0 */
+ /* _system ok, as hardware has verified cpl=0 */
kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
&field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
}
@@ -7660,7 +7638,7 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &vmcs_gva))
return 1;
- /* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
+ /* ok to use *_system, as hardware has verified cpl=0 */
if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
(void *)&to_vmx(vcpu)->nested.current_vmptr,
sizeof(u64), &e)) {
@@ -7693,11 +7671,6 @@ static int handle_invept(struct kvm_vcpu *vcpu)
if (!nested_vmx_check_permission(vcpu))
return 1;
- if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
--
2.12.2.816.g2cccc81164-goog
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v2] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-24 15:28 ` [PATCH v2] " Jim Mattson
@ 2017-04-25 12:38 ` David Hildenbrand
2017-04-25 14:10 ` Jim Mattson
0 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand @ 2017-04-25 12:38 UTC (permalink / raw)
To: Jim Mattson, kvm
On 24.04.2017 17:28, Jim Mattson wrote:
> According to the Intel SDM, "Certain exceptions have priority over VM
> exits. These include invalid-opcode exceptions, faults based on
> privilege level*, and general-protection exceptions that are based on
> checking I/O permission bits in the task-state segment (TSS)."
>
> There is no need to check for faulting conditions that the hardware
> has already checked.
>
> One of the constraints on the VMX instructions is that they are not
> allowed in real-address mode. Though the hardware checks for this
> condition as well, when real-address mode is emulated, the faulting
> condition does have to be checked in software.
>
> * These include faults generated by attempts to execute, in
> virtual-8086 mode, privileged instructions that are not recognized
> in that mode.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> arch/x86/kvm/vmx.c | 57 ++++++++++++++----------------------------------------
> 1 file changed, 15 insertions(+), 42 deletions(-)
Sounds perfectly sane to me and looks like a nice cleanup. Wonder if we
have a test to verify that the checks are actually done in HW. (I've
seen the strangest special cases related to virtualization mode in CPUs)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 259e9b28ccf8..e7a7edb4cdb0 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -7107,7 +7107,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
> static int handle_vmon(struct kvm_vcpu *vcpu)
> {
> int ret;
> - struct kvm_segment cs;
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
> | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
> @@ -7115,26 +7114,17 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
> /* The Intel VMX Instruction Reference lists a bunch of bits that
> * are prerequisite to running VMXON, most notably cr4.VMXE must be
> * set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
> - * Otherwise, we should fail with #UD. We test these now:
> + * Otherwise, we should fail with #UD. Hardware has already tested
> + * most or all of these conditions, with the exception of real-address
> + * mode, when real-address mode is emulated.
> */
> - if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
> - !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
> - (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
> - kvm_queue_exception(vcpu, UD_VECTOR);
> - return 1;
> - }
>
> - vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
> - if (is_long_mode(vcpu) && !cs.l) {
> + if ((!enable_unrestricted_guest &&
> + !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
> kvm_queue_exception(vcpu, UD_VECTOR);
> return 1;
> }
>
> - if (vmx_get_cpl(vcpu)) {
> - kvm_inject_gp(vcpu, 0);
> - return 1;
> - }
> -
> if (vmx->nested.vmxon) {
> nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
> return kvm_skip_emulated_instruction(vcpu);
> @@ -7161,30 +7151,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
> * Intel's VMX Instruction Reference specifies a common set of prerequisites
> * for running VMX instructions (except VMXON, whose prerequisites are
> * slightly different). It also specifies what exception to inject otherwise.
> + * Note that many of these exceptions have priority over VM exits, so they
> + * don't have to be checked again here.
> */
> -static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
> +static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
I'm a fan of splitting such changes out. (less noise for reviewing the
actual magic).
--
Thanks,
David
* Re: [PATCH v2] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-25 12:38 ` David Hildenbrand
@ 2017-04-25 14:10 ` Jim Mattson
2017-04-25 16:30 ` David Hildenbrand
0 siblings, 1 reply; 19+ messages in thread
From: Jim Mattson @ 2017-04-25 14:10 UTC (permalink / raw)
To: David Hildenbrand; +Cc: kvm list
On Tue, Apr 25, 2017 at 5:38 AM, David Hildenbrand <david@redhat.com> wrote:
> On 24.04.2017 17:28, Jim Mattson wrote:
>> According to the Intel SDM, "Certain exceptions have priority over VM
>> exits. These include invalid-opcode exceptions, faults based on
>> privilege level*, and general-protection exceptions that are based on
>> checking I/O permission bits in the task-state segment (TSS)."
>>
>> There is no need to check for faulting conditions that the hardware
>> has already checked.
>>
>> One of the constraints on the VMX instructions is that they are not
>> allowed in real-address mode. Though the hardware checks for this
>> condition as well, when real-address mode is emulated, the faulting
>> condition does have to be checked in software.
>>
>> * These include faults generated by attempts to execute, in
>> virtual-8086 mode, privileged instructions that are not recognized
>> in that mode.
>>
>> Signed-off-by: Jim Mattson <jmattson@google.com>
>> ---
>> arch/x86/kvm/vmx.c | 57 ++++++++++++++----------------------------------------
>> 1 file changed, 15 insertions(+), 42 deletions(-)
>
> Sounds perfectly sane to me and looks like a nice cleanup. Wonder if we
> have a test to verify that the checks are actually done in HW. (I've
> seen the strangest special cases related to virtualization mode in CPUs)
I do have a compatibility-mode test for INVEPT that I can try to
crossport to the upstream kvm-unit-tests.
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 259e9b28ccf8..e7a7edb4cdb0 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -7107,7 +7107,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
>> static int handle_vmon(struct kvm_vcpu *vcpu)
>> {
>> int ret;
>> - struct kvm_segment cs;
>> struct vcpu_vmx *vmx = to_vmx(vcpu);
>> const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
>> | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
>> @@ -7115,26 +7114,17 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
>> /* The Intel VMX Instruction Reference lists a bunch of bits that
>> * are prerequisite to running VMXON, most notably cr4.VMXE must be
>> * set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
>> - * Otherwise, we should fail with #UD. We test these now:
>> + * Otherwise, we should fail with #UD. Hardware has already tested
>> + * most or all of these conditions, with the exception of real-address
>> + * mode, when real-address mode is emulated.
>> */
>> - if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
>> - !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
>> - (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
>> - kvm_queue_exception(vcpu, UD_VECTOR);
>> - return 1;
>> - }
>>
>> - vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
>> - if (is_long_mode(vcpu) && !cs.l) {
>> + if ((!enable_unrestricted_guest &&
>> + !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
>> kvm_queue_exception(vcpu, UD_VECTOR);
>> return 1;
>> }
>>
>> - if (vmx_get_cpl(vcpu)) {
>> - kvm_inject_gp(vcpu, 0);
>> - return 1;
>> - }
>> -
>> if (vmx->nested.vmxon) {
>> nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
>> return kvm_skip_emulated_instruction(vcpu);
>> @@ -7161,30 +7151,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
>> * Intel's VMX Instruction Reference specifies a common set of prerequisites
>> * for running VMX instructions (except VMXON, whose prerequisites are
>> * slightly different). It also specifies what exception to inject otherwise.
>> + * Note that many of these exceptions have priority over VM exits, so they
>> + * don't have to be checked again here.
>> */
>> -static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
>> +static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
>
> I'm a fan of splitting such changes out. (less noise for reviewing the
> actual magic).
Do you mean you'd like to see separate patches for handle_vmon() and
nested_vmx_check_permission()?
>
> --
>
> Thanks,
>
> David
* Re: [PATCH v2] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-25 14:10 ` Jim Mattson
@ 2017-04-25 16:30 ` David Hildenbrand
0 siblings, 0 replies; 19+ messages in thread
From: David Hildenbrand @ 2017-04-25 16:30 UTC (permalink / raw)
To: Jim Mattson; +Cc: kvm list
>>> -
>>> if (vmx->nested.vmxon) {
>>> nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
>>> return kvm_skip_emulated_instruction(vcpu);
>>> @@ -7161,30 +7151,18 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
>>> * Intel's VMX Instruction Reference specifies a common set of prerequisites
>>> * for running VMX instructions (except VMXON, whose prerequisites are
>>> * slightly different). It also specifies what exception to inject otherwise.
>>> + * Note that many of these exceptions have priority over VM exits, so they
>>> + * don't have to be checked again here.
>>> */
>>> -static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
>>> +static bool nested_vmx_check_permission(struct kvm_vcpu *vcpu)
>>
>> I'm a fan of splitting such changes out. (less noise for reviewing the
>> actual magic).
>
> Do you mean you'd like to see separate patches for handle_vmon() and
> nested_vmx_check_permission()?
I was talking about the int -> bool change. But that is just my
preference for reviewing. And by far not worth a resend :)
>
>>
>> --
>>
>> Thanks,
>>
>> David
--
Thanks,
David
* Re: [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-21 16:53 [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks Jim Mattson
` (2 preceding siblings ...)
2017-04-24 15:28 ` [PATCH v2] " Jim Mattson
@ 2017-04-26 9:26 ` Paolo Bonzini
2017-04-26 15:38 ` [PATCH v3] " Jim Mattson
2017-04-26 15:46 ` [PATCH] " Jim Mattson
3 siblings, 2 replies; 19+ messages in thread
From: Paolo Bonzini @ 2017-04-26 9:26 UTC (permalink / raw)
To: Jim Mattson, kvm
On 21/04/2017 18:53, Jim Mattson wrote:
> One of the constraints on the VMX instructions is that they are not
> allowed in real-address mode. Though the hardware checks for this
> condition as well, when real-address mode is emulated, the faulting
> condition does have to be checked in software.
Emulated real mode is virtual-8086 mode, so that should be checked by
the processor too, right?
VMX instructions are never called from the emulator, so they cannot be
reached from the emulate_invalid_guest_state path. And if they could,
you'd have to keep the CPL checks and all the others. So I think that
you can remove the checks for CR0.PE as well.
Paolo
* [PATCH v3] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-26 9:26 ` [PATCH] " Paolo Bonzini
@ 2017-04-26 15:38 ` Jim Mattson
2017-04-26 15:46 ` [PATCH] " Jim Mattson
1 sibling, 0 replies; 19+ messages in thread
From: Jim Mattson @ 2017-04-26 15:38 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson
According to the Intel SDM, "Certain exceptions have priority over VM
exits. These include invalid-opcode exceptions, faults based on
privilege level*, and general-protection exceptions that are based on
checking I/O permission bits in the task-state segment (TSS)."
There is no need to check for faulting conditions that the hardware
has already checked.
* These include faults generated by attempts to execute, in
virtual-8086 mode, privileged instructions that are not recognized
in that mode.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/vmx.c | 49 ++++++++++---------------------------------------
1 file changed, 10 insertions(+), 39 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 259e9b28ccf8..5dae87bcfff8 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7107,7 +7107,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
static int handle_vmon(struct kvm_vcpu *vcpu)
{
int ret;
- struct kvm_segment cs;
struct vcpu_vmx *vmx = to_vmx(vcpu);
const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
| FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
@@ -7115,26 +7114,17 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
/* The Intel VMX Instruction Reference lists a bunch of bits that
* are prerequisite to running VMXON, most notably cr4.VMXE must be
* set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
- * Otherwise, we should fail with #UD. We test these now:
+ * Otherwise, we should fail with #UD. Hardware has already tested
+ * most or all of these conditions, with the exception of real-address
+ * mode, when real-address mode is emulated.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
- !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
- (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if (is_long_mode(vcpu) && !cs.l) {
+ if ((!enable_unrestricted_guest &&
+ !kvm_read_cr0_bits(vcpu, X86_CR0_PE))) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
-
if (vmx->nested.vmxon) {
nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
return kvm_skip_emulated_instruction(vcpu);
@@ -7161,29 +7151,15 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
* Intel's VMX Instruction Reference specifies a common set of prerequisites
* for running VMX instructions (except VMXON, whose prerequisites are
* slightly different). It also specifies what exception to inject otherwise.
+ * Note that many of these exceptions have priority over VM exits, so they
+ * don't have to be checked again here.
*/
static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
{
- struct kvm_segment cs;
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
- if (!vmx->nested.vmxon) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if ((vmx_get_rflags(vcpu) & X86_EFLAGS_VM) ||
- (is_long_mode(vcpu) && !cs.l)) {
+ if (!to_vmx(vcpu)->nested.vmxon) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 0;
}
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 0;
- }
-
return 1;
}
@@ -7527,7 +7503,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &gva))
return 1;
- /* _system ok, as nested_vmx_check_permission verified cpl=0 */
+ /* _system ok, as hardware has verified cpl=0 */
kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
&field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
}
@@ -7660,7 +7636,7 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &vmcs_gva))
return 1;
- /* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
+ /* ok to use *_system, as hardware has verified cpl=0 */
if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
(void *)&to_vmx(vcpu)->nested.current_vmptr,
sizeof(u64), &e)) {
@@ -7693,11 +7669,6 @@ static int handle_invept(struct kvm_vcpu *vcpu)
if (!nested_vmx_check_permission(vcpu))
return 1;
- if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
--
2.13.0.rc0.306.g87b477812d-goog
* Re: [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-26 9:26 ` [PATCH] " Paolo Bonzini
2017-04-26 15:38 ` [PATCH v3] " Jim Mattson
@ 2017-04-26 15:46 ` Jim Mattson
2017-04-26 15:53 ` [PATCH v4] " Jim Mattson
1 sibling, 1 reply; 19+ messages in thread
From: Jim Mattson @ 2017-04-26 15:46 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm list
Yes, I agree.
On Wed, Apr 26, 2017 at 2:26 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 21/04/2017 18:53, Jim Mattson wrote:
>> One of the constraints on the VMX instructions is that they are not
>> allowed in real-address mode. Though the hardware checks for this
>> condition as well, when real-address mode is emulated, the faulting
>> condition does have to be checked in software.
>
> Emulated real mode is virtual-8086 mode, so that should be checked by
> the processor too, right?
>
> VMX instructions are never called from the emulator, so they cannot be
> reached from the emulate_invalid_guest_state path. And if they could,
> you'd have to keep the CPL checks and all the others. So I think that
> you can remove the checks for CR0.PE as well.
>
> Paolo
* [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-26 15:46 ` [PATCH] " Jim Mattson
@ 2017-04-26 15:53 ` Jim Mattson
2017-04-27 8:29 ` David Hildenbrand
0 siblings, 1 reply; 19+ messages in thread
From: Jim Mattson @ 2017-04-26 15:53 UTC (permalink / raw)
To: kvm; +Cc: Jim Mattson
According to the Intel SDM, "Certain exceptions have priority over VM
exits. These include invalid-opcode exceptions, faults based on
privilege level*, and general-protection exceptions that are based on
checking I/O permission bits in the task-state segment (TSS)."
There is no need to check for faulting conditions that the hardware
has already checked.
* These include faults generated by attempts to execute, in
virtual-8086 mode, privileged instructions that are not recognized
in that mode.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
1 file changed, 8 insertions(+), 47 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 259e9b28ccf8..a6c4af687006 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7107,34 +7107,14 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
static int handle_vmon(struct kvm_vcpu *vcpu)
{
int ret;
- struct kvm_segment cs;
struct vcpu_vmx *vmx = to_vmx(vcpu);
const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED
| FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
- /* The Intel VMX Instruction Reference lists a bunch of bits that
- * are prerequisite to running VMXON, most notably cr4.VMXE must be
- * set to 1 (see vmx_set_cr4() for when we allow the guest to set this).
- * Otherwise, we should fail with #UD. We test these now:
+ /*
+ * Most of the faulting conditions have already been checked by
+ * hardware, prior to the VM-exit for VMXON.
*/
- if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE) ||
- !kvm_read_cr0_bits(vcpu, X86_CR0_PE) ||
- (vmx_get_rflags(vcpu) & X86_EFLAGS_VM)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if (is_long_mode(vcpu) && !cs.l) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
-
if (vmx->nested.vmxon) {
nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
return kvm_skip_emulated_instruction(vcpu);
@@ -7161,29 +7141,15 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
* Intel's VMX Instruction Reference specifies a common set of prerequisites
* for running VMX instructions (except VMXON, whose prerequisites are
* slightly different). It also specifies what exception to inject otherwise.
+ * Note that many of these exceptions have priority over VM exits, so they
+ * don't have to be checked again here.
*/
static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
{
- struct kvm_segment cs;
- struct vcpu_vmx *vmx = to_vmx(vcpu);
-
- if (!vmx->nested.vmxon) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 0;
- }
-
- vmx_get_segment(vcpu, &cs, VCPU_SREG_CS);
- if ((vmx_get_rflags(vcpu) & X86_EFLAGS_VM) ||
- (is_long_mode(vcpu) && !cs.l)) {
+ if (!to_vmx(vcpu)->nested.vmxon) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 0;
}
-
- if (vmx_get_cpl(vcpu)) {
- kvm_inject_gp(vcpu, 0);
- return 0;
- }
-
return 1;
}
@@ -7527,7 +7493,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &gva))
return 1;
- /* _system ok, as nested_vmx_check_permission verified cpl=0 */
+ /* _system ok, as hardware has verified cpl=0 */
kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva,
&field_value, (is_long_mode(vcpu) ? 8 : 4), NULL);
}
@@ -7660,7 +7626,7 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
if (get_vmx_mem_address(vcpu, exit_qualification,
vmx_instruction_info, true, &vmcs_gva))
return 1;
- /* ok to use *_system, as nested_vmx_check_permission verified cpl=0 */
+ /* ok to use *_system, as hardware has verified cpl=0 */
if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva,
(void *)&to_vmx(vcpu)->nested.current_vmptr,
sizeof(u64), &e)) {
@@ -7693,11 +7659,6 @@ static int handle_invept(struct kvm_vcpu *vcpu)
if (!nested_vmx_check_permission(vcpu))
return 1;
- if (!kvm_read_cr0_bits(vcpu, X86_CR0_PE)) {
- kvm_queue_exception(vcpu, UD_VECTOR);
- return 1;
- }
-
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);
--
2.13.0.rc0.306.g87b477812d-goog
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-26 15:53 ` [PATCH v4] " Jim Mattson
@ 2017-04-27 8:29 ` David Hildenbrand
2017-04-27 11:25 ` Paolo Bonzini
0 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand @ 2017-04-27 8:29 UTC (permalink / raw)
To: Jim Mattson, kvm
On 26.04.2017 17:53, Jim Mattson wrote:
> According to the Intel SDM, "Certain exceptions have priority over VM
> exits. These include invalid-opcode exceptions, faults based on
> privilege level*, and general-protection exceptions that are based on
> checking I/O permission bits in the task-state segment (TSS)."
>
> There is no need to check for faulting conditions that the hardware
> has already checked.
>
> * These include faults generated by attempts to execute, in
> virtual-8086 mode, privileged instructions that are not recognized
> in that mode.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
> 1 file changed, 8 insertions(+), 47 deletions(-)
Nice! So we really only have to check vmxon / pointer / features for
vmxon and for the others only vmxon.
Reviewed-by: David Hildenbrand <david@redhat.com>
[...]
> if (vmx->nested.vmxon) {
> nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
> return kvm_skip_emulated_instruction(vcpu);
> @@ -7161,29 +7141,15 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
> * Intel's VMX Instruction Reference specifies a common set of prerequisites
> * for running VMX instructions (except VMXON, whose prerequisites are
> * slightly different). It also specifies what exception to inject otherwise.
> + * Note that many of these exceptions have priority over VM exits, so they
> + * don't have to be checked again here.
> */
I think we could rename that one to nested_vmx_check_vmxon() now and
drop the complete comment. Maybe as a cleanup patch. Do we have some
nVMX documentation where we could put such information (so we could also
remove the comment from handle_vmon())?
> static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
> {
--
Thanks,
David
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 8:29 ` David Hildenbrand
@ 2017-04-27 11:25 ` Paolo Bonzini
2017-04-27 12:09 ` David Hildenbrand
2017-04-27 15:00 ` Jim Mattson
0 siblings, 2 replies; 19+ messages in thread
From: Paolo Bonzini @ 2017-04-27 11:25 UTC (permalink / raw)
To: David Hildenbrand, Jim Mattson, kvm
On 27/04/2017 10:29, David Hildenbrand wrote:
>> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
>> 1 file changed, 8 insertions(+), 47 deletions(-)
> Nice! So we really only have to check vmxon / pointer / features for
> vmxon and for the others only vmxon.
Still not good, CR4.VMXE has to be checked because we always run the
guest with CR4.VMXE set (see section 23.8 in the SDM).
Paolo
> Reviewed-by: David Hildenbrand <david@redhat.com>
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 11:25 ` Paolo Bonzini
@ 2017-04-27 12:09 ` David Hildenbrand
2017-04-27 15:00 ` Jim Mattson
1 sibling, 0 replies; 19+ messages in thread
From: David Hildenbrand @ 2017-04-27 12:09 UTC (permalink / raw)
To: Paolo Bonzini, Jim Mattson, kvm
On 27.04.2017 13:25, Paolo Bonzini wrote:
>
>
> On 27/04/2017 10:29, David Hildenbrand wrote:
>>> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
>>> 1 file changed, 8 insertions(+), 47 deletions(-)
>> Nice! So we really only have to check vmxon / pointer / features for
>> vmxon and for the others only vmxon.
>
> Still not good, CR4.VMXE has to be checked because we always run the
> guest with CR4.VMXE set (see section 23.8 in the SDM).
>
> Paolo
>
>> Reviewed-by: David Hildenbrand <david@redhat.com>
Looking at the pseudocode of VMXON (30-27) (and friends), it looks like
the hardware performs the following checks before testing for non-root
operation.
(register operand) -> #UD
(CR4.VMXE = 0) -> #UD
(RFLAGS.VM = 1) -> #UD
(IA32_EFER.LMA = 1 and CS.L = 0) -> #UD
I was assuming cr4.vmxe might come from the read shadow, but as I
learned, the shadows only affect MOV from CR0/CR4 and SMSW executed in
the guest. So you're of course right, VMXE would always be active for us.
The check for being in VMX non-root operation (which triggers the VM
exit) is done before checking CPL > 0. So maybe the CPL check also has
to stay.
I would really like to see a test case for that. And a brain dump of
Paolo :)
--
Thanks,
David
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 11:25 ` Paolo Bonzini
2017-04-27 12:09 ` David Hildenbrand
@ 2017-04-27 15:00 ` Jim Mattson
2017-04-27 15:31 ` Paolo Bonzini
2017-04-27 15:32 ` Jim Mattson
1 sibling, 2 replies; 19+ messages in thread
From: Jim Mattson @ 2017-04-27 15:00 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: David Hildenbrand, kvm list
On Thu, Apr 27, 2017 at 4:25 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 27/04/2017 10:29, David Hildenbrand wrote:
>>> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
>>> 1 file changed, 8 insertions(+), 47 deletions(-)
>> Nice! So we really only have to check vmxon / pointer / features for
>> vmxon and for the others only vmxon.
>
> Still not good, CR4.VMXE has to be checked because we always run the
> guest with CR4.VMXE set (see section 23.8 in the SDM).
Ah, yes, of course.
>
> Paolo
>
>> Reviewed-by: David Hildenbrand <david@redhat.com>
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 15:00 ` Jim Mattson
@ 2017-04-27 15:31 ` Paolo Bonzini
2017-04-27 15:32 ` Jim Mattson
1 sibling, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2017-04-27 15:31 UTC (permalink / raw)
To: Jim Mattson; +Cc: David Hildenbrand, kvm list
On 27/04/2017 17:00, Jim Mattson wrote:
> On Thu, Apr 27, 2017 at 4:25 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>>
>> On 27/04/2017 10:29, David Hildenbrand wrote:
>>>> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
>>>> 1 file changed, 8 insertions(+), 47 deletions(-)
>>> Nice! So we really only have to check vmxon / pointer / features for
>>> vmxon and for the others only vmxon.
>>
>> Still not good, CR4.VMXE has to be checked because we always run the
>> guest with CR4.VMXE set (see section 23.8 in the SDM).
>
> Ah, yes, of course.
I fixed this up and will push the patch shortly.
Paolo
>>
>> Paolo
>>
>>> Reviewed-by: David Hildenbrand <david@redhat.com>
>
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 15:00 ` Jim Mattson
2017-04-27 15:31 ` Paolo Bonzini
@ 2017-04-27 15:32 ` Jim Mattson
2017-04-27 15:49 ` Paolo Bonzini
1 sibling, 1 reply; 19+ messages in thread
From: Jim Mattson @ 2017-04-27 15:32 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: David Hildenbrand, kvm list
Also, the allowed/required CR0/CR4 bits may not be the same for the
vCPU as for the physical hardware, so more than just CR4.VMXE may have
to be checked.
On Thu, Apr 27, 2017 at 8:00 AM, Jim Mattson <jmattson@google.com> wrote:
> On Thu, Apr 27, 2017 at 4:25 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>>
>> On 27/04/2017 10:29, David Hildenbrand wrote:
>>>> arch/x86/kvm/vmx.c | 55 ++++++++----------------------------------------------
>>>> 1 file changed, 8 insertions(+), 47 deletions(-)
>>> Nice! So we really only have to check vmxon / pointer / features for
>>> vmxon and for the others only vmxon.
>>
>> Still not good, CR4.VMXE has to be checked because we always run the
>> guest with CR4.VMXE set (see section 23.8 in the SDM).
>
> Ah, yes, of course.
>
>>
>> Paolo
>>
>>> Reviewed-by: David Hildenbrand <david@redhat.com>
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 15:32 ` Jim Mattson
@ 2017-04-27 15:49 ` Paolo Bonzini
2017-04-27 17:29 ` Jim Mattson
0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2017-04-27 15:49 UTC (permalink / raw)
To: Jim Mattson; +Cc: David Hildenbrand, kvm list
On 27/04/2017 17:32, Jim Mattson wrote:
> Also, the allowed/required CR0/CR4 bits may not be the same for the
> vCPU as for the physical hardware, so more than just CR4.VMXE may have
> to be checked.
I think we can pass on that. For CR0 we know the two cases are
fundamentally unrestricted guest and !unrestricted guest, and they both
are covered (via CR0.PE and EFLAGS.VM respectively).
For CR4, we also pretty much know the only FIXED1 bit is VMXE, and
FIXED0 bits match the values that are checked by MOV to CR4.
Paolo
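The FIXED0/FIXED1 rule can be sketched in plain C (illustrative only; per the SDM's VMX capability MSRs, every bit set in IA32_VMX_CR4_FIXED0 must be 1 in CR4, and every bit clear in IA32_VMX_CR4_FIXED1 must be 0 — the function and macro names here are invented for the example):

```c
#include <assert.h>
#include <stdint.h>

#define CR4_VMXE (1ULL << 13)  /* CR4.VMXE bit position */

/*
 * CR4 is acceptable for VMX operation iff every must-be-one bit
 * (set in fixed0) is set, and no must-be-zero bit (clear in fixed1)
 * is set. The same rule applies to CR0 with its own FIXED MSR pair.
 */
static int cr4_valid_for_vmx(uint64_t cr4, uint64_t fixed0, uint64_t fixed1)
{
    return (cr4 & fixed0) == fixed0 && (cr4 & ~fixed1) == 0;
}
```

On typical hardware VMXE is the only bit fixed to 1, matching the observation above; a vCPU, however, may advertise different FIXED values than the host.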
* Re: [PATCH v4] kvm: nVMX: Remove superfluous VMX instruction fault checks
2017-04-27 15:49 ` Paolo Bonzini
@ 2017-04-27 17:29 ` Jim Mattson
0 siblings, 0 replies; 19+ messages in thread
From: Jim Mattson @ 2017-04-27 17:29 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: David Hildenbrand, kvm list
What about CR0.NE, which, like CR4.VMXE, will always be set while
running the guest, but which may not be set in the vCPU?
On Thu, Apr 27, 2017 at 8:49 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>
> On 27/04/2017 17:32, Jim Mattson wrote:
>> Also, the allowed/required CR0/CR4 bits may not be the same for the
>> vCPU as for the physical hardware, so more than just CR4.VMXE may have
>> to be checked.
>
> I think we can pass on that. For CR0 we know the two cases are
> fundamentally unrestricted guest and !unrestricted guest, and they both
> are covered (via CR0.PE and EFLAGS.VM respectively).
>
> For CR4, we also pretty much know the only FIXED1 bit is VMXE, and
> FIXED0 bits match the values that are checked by MOV to CR4.
>
> Paolo
Thread overview: 19+ messages
2017-04-21 16:53 [PATCH] kvm: nVMX: Remove superfluous VMX instruction fault checks Jim Mattson
2017-04-22 0:24 ` kbuild test robot
2017-04-22 1:35 ` kbuild test robot
2017-04-24 15:28 ` [PATCH v2] " Jim Mattson
2017-04-25 12:38 ` David Hildenbrand
2017-04-25 14:10 ` Jim Mattson
2017-04-25 16:30 ` David Hildenbrand
2017-04-26 9:26 ` [PATCH] " Paolo Bonzini
2017-04-26 15:38 ` [PATCH v3] " Jim Mattson
2017-04-26 15:46 ` [PATCH] " Jim Mattson
2017-04-26 15:53 ` [PATCH v4] " Jim Mattson
2017-04-27 8:29 ` David Hildenbrand
2017-04-27 11:25 ` Paolo Bonzini
2017-04-27 12:09 ` David Hildenbrand
2017-04-27 15:00 ` Jim Mattson
2017-04-27 15:31 ` Paolo Bonzini
2017-04-27 15:32 ` Jim Mattson
2017-04-27 15:49 ` Paolo Bonzini
2017-04-27 17:29 ` Jim Mattson