* [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes
@ 2017-04-18 10:25 Jan Beulich
  2017-04-18 10:29 ` [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs Jan Beulich
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Jan Beulich @ 2017-04-18 10:25 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall

Patch 2 brings recently added code in line with what we switched
other code to during this dev cycle. Patch 3 is at least a latent bug fix.
Patch 4 merely improves debuggability, so other than the first two
I'm not sure it qualifies for 4.9.

This is all fallout from re-basing my UMIP emulation patch over
d0a699a389 ("x86/monitor: add support for descriptor access events").

1: x86emul: always fill x86_insn_modrm()'s outputs
2: x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
3: VMX: don't blindly enable descriptor table exiting control
4: x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New prereq patch 1. Patch 3 almost entirely reworked.



* [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs
  2017-04-18 10:25 [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
@ 2017-04-18 10:29 ` Jan Beulich
  2017-04-18 10:33   ` Andrew Cooper
  2017-04-18 10:31 ` [PATCH v2 2/4] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2017-04-18 10:29 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall

The function is rather unlikely to be called for insns which don't have
ModRM bytes, and hence Coverity's recurring complaint, about callers
potentially consuming uninitialized data when they know that certain
opcodes have ModRM bytes, can be suppressed this way without unduly
adding overhead to fast paths.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -8017,8 +8017,14 @@ x86_insn_modrm(const struct x86_emulate_
 {
     check_state(state);
 
-    if ( state->modrm_mod > 3 )
+    if ( unlikely(state->modrm_mod > 3) )
+    {
+        if ( rm )
+            *rm = ~0U;
+        if ( reg )
+            *reg = ~0U;
         return -EINVAL;
+    }
 
     if ( rm )
         *rm = state->modrm_rm;
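
For illustration (not part of the patch; the caller and its check below
are hypothetical), a minimal sketch of the consumer pattern this change
makes safe:

    /*
     * Hypothetical caller: with the change above, rm/reg get written on
     * every path, so consuming them after an error can no longer read
     * uninitialized stack contents.
     */
    unsigned int rm, reg;
    int mode = x86_insn_modrm(state, &rm, &reg);

    if ( mode < 0 )                 /* no ModRM byte: rm/reg hold ~0U */
        return false;

    return mode != 3 && !(reg & 4); /* e.g. a memory-form group check */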





* [PATCH v2 2/4] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-18 10:25 [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
  2017-04-18 10:29 ` [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs Jan Beulich
@ 2017-04-18 10:31 ` Jan Beulich
  2017-04-18 10:35   ` Andrew Cooper
  2017-04-18 10:32 ` [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control Jan Beulich
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2017-04-18 10:31 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall

While I did review d0a699a389 ("x86/monitor: add support for descriptor
access events") it didn't really occur to me that someone could be this
blunt and add unguarded emulation again just a few weeks after we
guarded all special purpose emulator invocations. Fix this.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3598,6 +3598,28 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
+static bool is_sysdesc_access(const struct x86_emulate_state *state,
+                              const struct x86_emulate_ctxt *ctxt)
+{
+    unsigned int ext;
+    int mode = x86_insn_modrm(state, NULL, &ext);
+
+    switch ( ctxt->opcode )
+    {
+    case X86EMUL_OPC(0x0f, 0x00):
+        if ( !(ext & 4) ) /* SLDT / STR / LLDT / LTR */
+            return true;
+        break;
+
+    case X86EMUL_OPC(0x0f, 0x01):
+        if ( mode != 3 && !(ext & 4) ) /* SGDT / SIDT / LGDT / LIDT */
+            return true;
+        break;
+    }
+
+    return false;
+}
+
 int hvm_descriptor_access_intercept(uint64_t exit_info,
                                     uint64_t vmx_exit_qualification,
                                     unsigned int descriptor, bool is_write)
@@ -3611,24 +3633,8 @@ int hvm_descriptor_access_intercept(uint
         hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
                                       descriptor, is_write);
     }
-    else
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        switch ( hvm_emulate_one(&ctxt) )
-        {
-        case X86EMUL_UNHANDLEABLE:
-            domain_crash(currd);
-            return X86EMUL_UNHANDLEABLE;
-        case X86EMUL_EXCEPTION:
-            hvm_inject_event(&ctxt.ctxt.event);
-            /* fall through */
-        default:
-            hvm_emulate_writeback(&ctxt);
-            break;
-        }
-    }
+    else if ( !hvm_emulate_one_insn(is_sysdesc_access) )
+        domain_crash(currd);
 
     return X86EMUL_OKAY;
 }
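
For reference (not part of the patch), the checks in is_sysdesc_access()
restate the standard x86 opcode map for the two instruction groups
involved; summarized as a comment:

    /*
     * 0F 00, ModRM.reg (ext): /0 SLDT  /1 STR  /2 LLDT  /3 LTR
     *                         /4 VERR  /5 VERW
     *   -> !(ext & 4) selects exactly the LDT/TR accessors.
     *
     * 0F 01, ModRM.reg (ext): /0 SGDT  /1 SIDT  /2 LGDT  /3 LIDT
     *                         /4 SMSW  /6 LMSW  /7 INVLPG
     *   -> !(ext & 4) selects the GDT/IDT accessors, and mode != 3
     *      excludes the register-operand encodings (VMCALL, MONITOR,
     *      SWAPGS, etc.), which don't access any descriptor table.
     */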





* [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control
  2017-04-18 10:25 [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
  2017-04-18 10:29 ` [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs Jan Beulich
  2017-04-18 10:31 ` [PATCH v2 2/4] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
@ 2017-04-18 10:32 ` Jan Beulich
  2017-04-18 10:37   ` Razvan Cojocaru
                     ` (2 more replies)
  2017-04-18 10:34 ` [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
  2017-04-19  8:59 ` [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Julien Grall
  4 siblings, 3 replies; 12+ messages in thread
From: Jan Beulich @ 2017-04-18 10:32 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Razvan Cojocaru, Andrew Cooper, Julien Grall,
	Jun Nakajima, Tamas K Lengyel

This is an optional feature and hence we should check for it before
use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-do detection of availability, resulting in almost all of the
    changes done here being different than in v1.

--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -226,6 +226,7 @@ static int vmx_init_vmcs_config(void)
         opt = (SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
                SECONDARY_EXEC_WBINVD_EXITING |
                SECONDARY_EXEC_ENABLE_EPT |
+               SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_ENABLE_RDTSCP |
                SECONDARY_EXEC_PAUSE_LOOP_EXITING |
                SECONDARY_EXEC_ENABLE_INVPCID |
@@ -1020,6 +1021,13 @@ static int construct_vmcs(struct vcpu *v
 
     v->arch.hvm_vmx.secondary_exec_control = vmx_secondary_exec_control;
 
+    /*
+     * Disable descriptor table exiting: It's controlled by the VM event
+     * monitor requesting it.
+     */
+    v->arch.hvm_vmx.secondary_exec_control &=
+        ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+
     /* Disable VPID for now: we decide when to enable it on VMENTER. */
     v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2325,7 +2325,6 @@ static struct hvm_function_table __initd
     .handle_cd            = vmx_handle_cd,
     .set_info_guest       = vmx_set_info_guest,
     .set_rdtsc_exiting    = vmx_set_rdtsc_exiting,
-    .set_descriptor_access_exiting = vmx_set_descriptor_access_exiting,
     .nhvm_vcpu_initialise = nvmx_vcpu_initialise,
     .nhvm_vcpu_destroy    = nvmx_vcpu_destroy,
     .nhvm_vcpu_reset      = nvmx_vcpu_reset,
@@ -2446,6 +2445,10 @@ const struct hvm_function_table * __init
         return NULL;
     }
 
+    if ( cpu_has_vmx_dt_exiting )
+        vmx_function_table.set_descriptor_access_exiting =
+            vmx_set_descriptor_access_exiting;
+
     /*
      * Do not enable EPT when (!cpu_has_vmx_pat), to prevent security hole
      * (refer to http://xenbits.xen.org/xsa/advisory-60.html).
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -219,6 +219,9 @@ int arch_monitor_domctl_event(struct dom
         if ( unlikely(old_status == requested_status) )
             return -EEXIST;
 
+        if ( !hvm_funcs.set_descriptor_access_exiting )
+            return -EOPNOTSUPP;
+
         domain_pause(d);
         ad->monitor.descriptor_access_enabled = requested_status;
 
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -274,6 +274,8 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
 #define cpu_has_vmx_ept \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT)
+#define cpu_has_vmx_dt_exiting \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING)
 #define cpu_has_vmx_vpid \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 #define cpu_has_monitor_trap_flag \
--- a/xen/include/asm-x86/monitor.h
+++ b/xen/include/asm-x86/monitor.h
@@ -77,13 +77,15 @@ static inline uint32_t arch_monitor_get_
                    (1U << XEN_DOMCTL_MONITOR_EVENT_GUEST_REQUEST) |
                    (1U << XEN_DOMCTL_MONITOR_EVENT_DEBUG_EXCEPTION) |
                    (1U << XEN_DOMCTL_MONITOR_EVENT_CPUID) |
-                   (1U << XEN_DOMCTL_MONITOR_EVENT_INTERRUPT) |
-                   (1U << XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS);
+                   (1U << XEN_DOMCTL_MONITOR_EVENT_INTERRUPT);
 
     /* Since we know this is on VMX, we can just call the hvm func */
     if ( hvm_is_singlestep_supported() )
         capabilities |= (1U << XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP);
 
+    if ( hvm_funcs.set_descriptor_access_exiting )
+        capabilities |= (1U << XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS);
+
     return capabilities;
 }
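
For illustration (not part of the patch): a monitor consumer is now
expected to probe the capability before enabling the event. A minimal
toolstack-side sketch, assuming the libxc helper names from the
descriptor access event series (error handling elided):

    uint32_t caps = 0;
    int rc;

    if ( xc_monitor_get_capabilities(xch, domid, &caps) == 0 &&
         (caps & (1U << XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS)) )
        rc = xc_monitor_descriptor_access(xch, domid, true);
    else
        rc = -EOPNOTSUPP; /* hardware lacks descriptor table exiting */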
 





* Re: [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs
  2017-04-18 10:29 ` [PATCH v2 1/4] x86emul: always fill x86_insn_modrm()'s outputs Jan Beulich
@ 2017-04-18 10:33   ` Andrew Cooper
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-04-18 10:33 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Julien Grall

On 18/04/17 11:29, Jan Beulich wrote:
> The function is rather unlikely to be called for insns which don't have
> ModRM bytes, and hence Coverity's recurring complaint, about callers
> potentially consuming uninitialized data when they know that certain
> opcodes have ModRM bytes, can be suppressed this way without unduly
> adding overhead to fast paths.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-18 10:25 [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
                   ` (2 preceding siblings ...)
  2017-04-18 10:32 ` [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control Jan Beulich
@ 2017-04-18 10:34 ` Jan Beulich
  2017-04-18 13:12   ` Paul Durrant
  2017-04-19  8:59 ` [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Julien Grall
  4 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2017-04-18 10:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Andrew Cooper, Julien Grall,
	Paul Durrant, Jun Nakajima, Boris Ostrovsky

This helps distinguish the call paths leading there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2033,7 +2033,7 @@ int hvm_emulate_one_mmio(unsigned long m
     switch ( rc )
     {
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_WARNING "MMCFG", &ctxt);
+        hvm_dump_emulation_state(XENLOG_G_WARNING, "MMCFG", &ctxt);
         break;
     case X86EMUL_EXCEPTION:
         if ( ctxt.ctxt.event_pending )
@@ -2092,7 +2092,7 @@ void hvm_emulate_one_vm_event(enum emul_
          */
         return;
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_DEBUG "Mem event", &ctx);
+        hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx);
         hvm_inject_hw_exception(trapnr, errcode);
         break;
     case X86EMUL_EXCEPTION:
@@ -2220,7 +2220,7 @@ static const char *guest_x86_mode_to_str
     }
 }
 
-void hvm_dump_emulation_state(const char *prefix,
+void hvm_dump_emulation_state(const char *loglvl, const char *prefix,
                               struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     struct vcpu *curr = current;
@@ -2228,8 +2228,8 @@ void hvm_dump_emulation_state(const char
     const struct segment_register *cs =
         hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
 
-    printk("%s emulation failed: %pv %s @ %04x:%08lx -> %*ph\n",
-           prefix, curr, mode_str, cs->sel, hvmemul_ctxt->insn_buf_eip,
+    printk("%s%s emulation failed: %pv %s @ %04x:%08lx -> %*ph\n",
+           loglvl, prefix, curr, mode_str, cs->sel, hvmemul_ctxt->insn_buf_eip,
            hvmemul_ctxt->insn_buf_bytes, hvmemul_ctxt->insn_buf);
 }
 
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3633,7 +3633,7 @@ int hvm_descriptor_access_intercept(uint
         hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
                                       descriptor, is_write);
     }
-    else if ( !hvm_emulate_one_insn(is_sysdesc_access) )
+    else if ( !hvm_emulate_one_insn(is_sysdesc_access, "sysdesc access") )
         domain_crash(currd);
 
     return X86EMUL_OKAY;
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -77,7 +77,7 @@ void send_invalidate_req(void)
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
-bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate)
+bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
     struct vcpu *curr = current;
@@ -96,7 +96,7 @@ bool hvm_emulate_one_insn(hvm_emulate_va
     switch ( rc )
     {
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_WARNING "MMIO", &ctxt);
+        hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt);
         return false;
 
     case X86EMUL_EXCEPTION:
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2676,7 +2676,7 @@ void svm_vmexit_handler(struct cpu_user_
             if ( handle_pio(port, bytes, dir) )
                 __update_guest_eip(regs, vmcb->exitinfo2 - vmcb->rip);
         }
-        else if ( !hvm_emulate_one_insn(x86_insn_is_portio) )
+        else if ( !hvm_emulate_one_insn(x86_insn_is_portio, "port I/O") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
@@ -2684,7 +2684,7 @@ void svm_vmexit_handler(struct cpu_user_
     case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
         if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
             svm_vmexit_do_cr_access(vmcb, regs);
-        else if ( !hvm_emulate_one_insn(x86_insn_is_cr_access) )
+        else if ( !hvm_emulate_one_insn(x86_insn_is_cr_access, "CR access") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
@@ -2694,7 +2694,7 @@ void svm_vmexit_handler(struct cpu_user_
             svm_invlpg_intercept(vmcb->exitinfo1);
             __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
         }
-        else if ( !hvm_emulate_one_insn(is_invlpg) )
+        else if ( !hvm_emulate_one_insn(is_invlpg, "invlpg") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -153,7 +153,7 @@ void vmx_realmode_emulate_one(struct hvm
     return;
 
  fail:
-    hvm_dump_emulation_state(XENLOG_G_ERR "Real-mode", hvmemul_ctxt);
+    hvm_dump_emulation_state(XENLOG_G_ERR, "Real-mode", hvmemul_ctxt);
     domain_crash(curr->domain);
 }
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4013,7 +4013,7 @@ void vmx_vmexit_handler(struct cpu_user_
         if ( exit_qualification & 0x10 )
         {
             /* INS, OUTS */
-            if ( !hvm_emulate_one_insn(x86_insn_is_portio) )
+            if ( !hvm_emulate_one_insn(x86_insn_is_portio, "port I/O") )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
         }
         else
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -49,8 +49,9 @@ enum emul_kind {
     EMUL_KIND_SET_CONTEXT_INSN
 };
 
-bool __nonnull(1) hvm_emulate_one_insn(
-    hvm_emulate_validate_t *validate);
+bool __nonnull(1, 2) hvm_emulate_one_insn(
+    hvm_emulate_validate_t *validate,
+    const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
@@ -77,7 +78,7 @@ int hvm_emulate_one_mmio(unsigned long m
 
 static inline bool handle_mmio(void)
 {
-    return hvm_emulate_one_insn(x86_insn_is_mem_access);
+    return hvm_emulate_one_insn(x86_insn_is_mem_access, "MMIO");
 }
 
 int hvmemul_insn_fetch(enum x86_segment seg,
@@ -90,7 +91,7 @@ int hvmemul_do_pio_buffer(uint16_t port,
                           uint8_t dir,
                           void *buffer);
 
-void hvm_dump_emulation_state(const char *prefix,
+void hvm_dump_emulation_state(const char *loglvl, const char *prefix,
                               struct hvm_emulate_ctxt *hvmemul_ctxt);
 
 #endif /* __ASM_X86_HVM_EMULATE_H__ */
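
For illustration (not part of the patch; the validator and description
string below are hypothetical): any new intercept path would now name
itself, and that name becomes the prefix of the "... emulation failed:"
message printed by hvm_dump_emulation_state():

    /* Hypothetical future call site in a vmexit handler: */
    else if ( !hvm_emulate_one_insn(is_some_intercept, "some intercept") )
        hvm_inject_hw_exception(TRAP_gp_fault, 0);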




* Re: [PATCH v2 2/4] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-18 10:31 ` [PATCH v2 2/4] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
@ 2017-04-18 10:35   ` Andrew Cooper
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-04-18 10:35 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Julien Grall

On 18/04/17 11:31, Jan Beulich wrote:
> While I did review d0a699a389 ("x86/monitor: add support for descriptor
> access events") it didn't really occur to me that someone could be this
> blunt and add unguarded emulation again just a few weeks after we
> guarded all special purpose emulator invocations. Fix this.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control
  2017-04-18 10:32 ` [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control Jan Beulich
@ 2017-04-18 10:37   ` Razvan Cojocaru
  2017-04-18 10:43   ` Andrew Cooper
  2017-04-18 13:36   ` Tian, Kevin
  2 siblings, 0 replies; 12+ messages in thread
From: Razvan Cojocaru @ 2017-04-18 10:37 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Julien Grall, Kevin Tian, Jun Nakajima, Tamas K Lengyel

On 04/18/2017 01:32 PM, Jan Beulich wrote:
> This is an optional feature and hence we should check for it before
> use.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Re-do detection of availability, resulting in almost all of the
>     changes done here being different than in v1.

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control
  2017-04-18 10:32 ` [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control Jan Beulich
  2017-04-18 10:37   ` Razvan Cojocaru
@ 2017-04-18 10:43   ` Andrew Cooper
  2017-04-18 13:36   ` Tian, Kevin
  2 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2017-04-18 10:43 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Julien Grall, Jun Nakajima, Razvan Cojocaru

On 18/04/17 11:32, Jan Beulich wrote:
> This is an optional feature and hence we should check for it before
> use.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-18 10:34 ` [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
@ 2017-04-18 13:12   ` Paul Durrant
  0 siblings, 0 replies; 12+ messages in thread
From: Paul Durrant @ 2017-04-18 13:12 UTC (permalink / raw)
  To: 'Jan Beulich', xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Andrew Cooper, Julien Grall,
	Jun Nakajima, Boris Ostrovsky

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 18 April 2017 11:34
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>; Julien Grall
> <julien.grall@arm.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> Paul Durrant <Paul.Durrant@citrix.com>; Jun Nakajima
> <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Boris
> Ostrovsky <boris.ostrovsky@oracle.com>
> Subject: [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for
> various forms of failed emulation
> 
> This helps distinguish the call paths leading there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>


* Re: [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control
  2017-04-18 10:32 ` [PATCH v2 3/4] VMX: don't blindly enable descriptor table exiting control Jan Beulich
  2017-04-18 10:37   ` Razvan Cojocaru
  2017-04-18 10:43   ` Andrew Cooper
@ 2017-04-18 13:36   ` Tian, Kevin
  2 siblings, 0 replies; 12+ messages in thread
From: Tian, Kevin @ 2017-04-18 13:36 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Julien Grall, Nakajima, Jun, Razvan Cojocaru,
	Tamas K Lengyel

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, April 18, 2017 6:33 PM
> 
> This is an optional feature and hence we should check for it before
> use.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes
  2017-04-18 10:25 [PATCH v2 0/4] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
                   ` (3 preceding siblings ...)
  2017-04-18 10:34 ` [PATCH v2 4/4] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
@ 2017-04-19  8:59 ` Julien Grall
  4 siblings, 0 replies; 12+ messages in thread
From: Julien Grall @ 2017-04-19  8:59 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper

Hi Jan,

On 04/18/2017 11:25 AM, Jan Beulich wrote:
> Patch 2 brings recently added code in line with what we switched
> other code to during this dev cycle. Patch 3 is at least a latent bug fix.
> Patch 4 merely improves debuggability, so other than the first two
> I'm not sure it qualifies for 4.9.
>
> This is all fallout from re-basing my UMIP emulation patch over
> d0a699a389 ("x86/monitor: add support for descriptor access events").
>
> 1: x86emul: always fill x86_insn_modrm()'s outputs
> 2: x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
> 3: VMX: don't blindly enable descriptor table exiting control

For patch #3 you said it is at least a latent bug fix, and I guess the
bug could be uncovered by a backport later on. So I am fine with #1-#3:

Release-acked-by: Julien Grall <julien.grall@arm.com>

> 4: x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation

In another thread I mentioned that I will start enforcing the
bug-fix-only rule from rc2. So:

Release-acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall

