* [PATCH 0/3] x86/HVM: misc descriptor table access exiting related fixes
@ 2017-04-13 14:42 Jan Beulich
  2017-04-13 14:51 ` [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 14:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall

Patch 1 brings recently added code in line with what we switched other
code over to during this dev cycle. Patch 2 is at least a latent bug fix.
Patch 3 merely improves debuggability, so unlike the first two
I'm not sure it qualifies for 4.9.

This is all fallout from re-basing my UMIP emulation patch over
d0a699a389 ("x86/monitor: add support for descriptor access events").

1: x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
2: VMX: don't blindly enable descriptor table exiting control
3: x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation

Signed-off-by: Jan Beulich <jbeulich@suse.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-13 14:42 [PATCH 0/3] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
@ 2017-04-13 14:51 ` Jan Beulich
  2017-04-13 14:59   ` Andrew Cooper
  2017-04-13 14:53 ` [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control Jan Beulich
  2017-04-13 14:54 ` [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
  2 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 14:51 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Julien Grall, Adrian Pop

While I did review d0a699a389 ("x86/monitor: add support for descriptor
access events"), it didn't really occur to me that someone could be this
blunt and add unguarded emulation again just a few weeks after we
guarded all special purpose emulator invocations. Fix this.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3598,6 +3598,28 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
+static bool is_sysdesc_access(const struct x86_emulate_state *state,
+                              const struct x86_emulate_ctxt *ctxt)
+{
+    unsigned int ext;
+    int mode = x86_insn_modrm(state, NULL, &ext);
+
+    switch ( ctxt->opcode )
+    {
+    case X86EMUL_OPC(0x0f, 0x00):
+        if ( !(ext & 4) ) /* SLDT / STR / LLDT / LTR */
+            return true;
+        break;
+
+    case X86EMUL_OPC(0x0f, 0x01):
+        if ( mode != 3 && !(ext & 4) ) /* SGDT / SIDT / LGDT / LIDT */
+            return true;
+        break;
+    }
+
+    return false;
+}
+
 int hvm_descriptor_access_intercept(uint64_t exit_info,
                                     uint64_t vmx_exit_qualification,
                                     unsigned int descriptor, bool is_write)
@@ -3611,24 +3633,8 @@ int hvm_descriptor_access_intercept(uint
         hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
                                       descriptor, is_write);
     }
-    else
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        switch ( hvm_emulate_one(&ctxt) )
-        {
-        case X86EMUL_UNHANDLEABLE:
-            domain_crash(currd);
-            return X86EMUL_UNHANDLEABLE;
-        case X86EMUL_EXCEPTION:
-            hvm_inject_event(&ctxt.ctxt.event);
-            /* fall through */
-        default:
-            hvm_emulate_writeback(&ctxt);
-            break;
-        }
-    }
+    else if ( !hvm_emulate_one_insn(is_sysdesc_access) )
+        domain_crash(currd);
 
     return X86EMUL_OKAY;
 }





* [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control
  2017-04-13 14:42 [PATCH 0/3] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
  2017-04-13 14:51 ` [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
@ 2017-04-13 14:53 ` Jan Beulich
  2017-04-13 14:59   ` Razvan Cojocaru
                     ` (2 more replies)
  2017-04-13 14:54 ` [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
  2 siblings, 3 replies; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 14:53 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Razvan Cojocaru, Adrian Pop,
	Andrew Cooper, Julien Grall, Jun Nakajima, Tamas K Lengyel,
	Boris Ostrovsky

This is an optional feature and hence we should check for it before
use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -866,7 +866,7 @@ static void svm_set_rdtsc_exiting(struct
     vmcb_set_general2_intercepts(vmcb, general2_intercepts);
 }
 
-static void svm_set_descriptor_access_exiting(struct vcpu *v, bool enable)
+static bool svm_set_descriptor_access_exiting(struct vcpu *v, bool enable)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
@@ -881,6 +881,8 @@ static void svm_set_descriptor_access_ex
         general1_intercepts &= ~mask;
 
     vmcb_set_general1_intercepts(vmcb, general1_intercepts);
+
+    return true;
 }
 
 static unsigned int svm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -226,6 +226,7 @@ static int vmx_init_vmcs_config(void)
         opt = (SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
                SECONDARY_EXEC_WBINVD_EXITING |
                SECONDARY_EXEC_ENABLE_EPT |
+               SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING |
                SECONDARY_EXEC_ENABLE_RDTSCP |
                SECONDARY_EXEC_PAUSE_LOOP_EXITING |
                SECONDARY_EXEC_ENABLE_INVPCID |
@@ -1020,6 +1021,13 @@ static int construct_vmcs(struct vcpu *v
 
     v->arch.hvm_vmx.secondary_exec_control = vmx_secondary_exec_control;
 
+    /*
+     * Disable descriptor table exiting: It's controlled by the VM event
+     * monitor requesting it.
+     */
+    v->arch.hvm_vmx.secondary_exec_control &=
+        ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+
     /* Disable VPID for now: we decide when to enable it on VMENTER. */
     v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1423,11 +1423,15 @@ static void vmx_set_rdtsc_exiting(struct
     vmx_vmcs_exit(v);
 }
 
-static void vmx_set_descriptor_access_exiting(struct vcpu *v, bool enable)
+static bool vmx_set_descriptor_access_exiting(struct vcpu *v, bool enable)
 {
     if ( enable )
+    {
+        if ( !cpu_has_vmx_dt_exiting )
+            return false;
         v->arch.hvm_vmx.secondary_exec_control |=
             SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
+    }
     else
         v->arch.hvm_vmx.secondary_exec_control &=
             ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
@@ -1435,6 +1439,8 @@ static void vmx_set_descriptor_access_ex
     vmx_vmcs_enter(v);
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
+
+    return true;
 }
 
 static void vmx_init_hypercall_page(struct domain *d, void *hypercall_page)
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -213,7 +213,7 @@ int arch_monitor_domctl_event(struct dom
 
     case XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS:
     {
-        bool old_status = ad->monitor.descriptor_access_enabled;
+        bool old_status = ad->monitor.descriptor_access_enabled, ok = true;
         struct vcpu *v;
 
         if ( unlikely(old_status == requested_status) )
@@ -223,9 +223,15 @@ int arch_monitor_domctl_event(struct dom
         ad->monitor.descriptor_access_enabled = requested_status;
 
         for_each_vcpu ( d, v )
-            hvm_funcs.set_descriptor_access_exiting(v, requested_status);
+        {
+            ok = hvm_funcs.set_descriptor_access_exiting(v, requested_status);
+            if ( !ok )
+                break;
+        }
 
         domain_unpause(d);
+        if ( !ok )
+            return -EOPNOTSUPP;
         break;
     }
 
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -173,7 +173,7 @@ struct hvm_function_table {
     void (*handle_cd)(struct vcpu *v, unsigned long value);
     void (*set_info_guest)(struct vcpu *v);
     void (*set_rdtsc_exiting)(struct vcpu *v, bool_t);
-    void (*set_descriptor_access_exiting)(struct vcpu *v, bool);
+    bool (*set_descriptor_access_exiting)(struct vcpu *v, bool);
 
     /* Nested HVM */
     int (*nhvm_vcpu_initialise)(struct vcpu *v);
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -274,6 +274,8 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
 #define cpu_has_vmx_ept \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT)
+#define cpu_has_vmx_dt_exiting \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING)
 #define cpu_has_vmx_vpid \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 #define cpu_has_monitor_trap_flag \
--- a/xen/include/asm-x86/monitor.h
+++ b/xen/include/asm-x86/monitor.h
@@ -77,13 +77,15 @@ static inline uint32_t arch_monitor_get_
                    (1U << XEN_DOMCTL_MONITOR_EVENT_GUEST_REQUEST) |
                    (1U << XEN_DOMCTL_MONITOR_EVENT_DEBUG_EXCEPTION) |
                    (1U << XEN_DOMCTL_MONITOR_EVENT_CPUID) |
-                   (1U << XEN_DOMCTL_MONITOR_EVENT_INTERRUPT) |
-                   (1U << XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS);
+                   (1U << XEN_DOMCTL_MONITOR_EVENT_INTERRUPT);
 
     /* Since we know this is on VMX, we can just call the hvm func */
     if ( hvm_is_singlestep_supported() )
         capabilities |= (1U << XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP);
 
+    if ( cpu_has_vmx_dt_exiting || cpu_has_svm )
+        capabilities |= (1U << XEN_DOMCTL_MONITOR_EVENT_DESC_ACCESS);
+
     return capabilities;
 }
 




* [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-13 14:42 [PATCH 0/3] x86/HVM: misc descriptor table access exiting related fixes Jan Beulich
  2017-04-13 14:51 ` [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
  2017-04-13 14:53 ` [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control Jan Beulich
@ 2017-04-13 14:54 ` Jan Beulich
  2017-04-13 15:12   ` Boris Ostrovsky
                     ` (2 more replies)
  2 siblings, 3 replies; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 14:54 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Andrew Cooper, Julien Grall,
	Jun Nakajima, Boris Ostrovsky

This helps distinguish the call paths leading there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2033,7 +2033,7 @@ int hvm_emulate_one_mmio(unsigned long m
     switch ( rc )
     {
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_WARNING "MMCFG", &ctxt);
+        hvm_dump_emulation_state(XENLOG_G_WARNING, "MMCFG", &ctxt);
         break;
     case X86EMUL_EXCEPTION:
         if ( ctxt.ctxt.event_pending )
@@ -2092,7 +2092,7 @@ void hvm_emulate_one_vm_event(enum emul_
          */
         return;
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_DEBUG "Mem event", &ctx);
+        hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx);
         hvm_inject_hw_exception(trapnr, errcode);
         break;
     case X86EMUL_EXCEPTION:
@@ -2220,7 +2220,7 @@ static const char *guest_x86_mode_to_str
     }
 }
 
-void hvm_dump_emulation_state(const char *prefix,
+void hvm_dump_emulation_state(const char *loglvl, const char *prefix,
                               struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     struct vcpu *curr = current;
@@ -2228,8 +2228,8 @@ void hvm_dump_emulation_state(const char
     const struct segment_register *cs =
         hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
 
-    printk("%s emulation failed: %pv %s @ %04x:%08lx -> %*ph\n",
-           prefix, curr, mode_str, cs->sel, hvmemul_ctxt->insn_buf_eip,
+    printk("%s%s emulation failed: %pv %s @ %04x:%08lx -> %*ph\n",
+           loglvl, prefix, curr, mode_str, cs->sel, hvmemul_ctxt->insn_buf_eip,
            hvmemul_ctxt->insn_buf_bytes, hvmemul_ctxt->insn_buf);
 }
 
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3633,7 +3633,7 @@ int hvm_descriptor_access_intercept(uint
         hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
                                       descriptor, is_write);
     }
-    else if ( !hvm_emulate_one_insn(is_sysdesc_access) )
+    else if ( !hvm_emulate_one_insn(is_sysdesc_access, "sysdesc access") )
         domain_crash(currd);
 
     return X86EMUL_OKAY;
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -77,7 +77,7 @@ void send_invalidate_req(void)
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
-bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate)
+bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
     struct vcpu *curr = current;
@@ -96,7 +96,7 @@ bool hvm_emulate_one_insn(hvm_emulate_va
     switch ( rc )
     {
     case X86EMUL_UNHANDLEABLE:
-        hvm_dump_emulation_state(XENLOG_G_WARNING "MMIO", &ctxt);
+        hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt);
         return false;
 
     case X86EMUL_EXCEPTION:
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2678,7 +2678,7 @@ void svm_vmexit_handler(struct cpu_user_
             if ( handle_pio(port, bytes, dir) )
                 __update_guest_eip(regs, vmcb->exitinfo2 - vmcb->rip);
         }
-        else if ( !hvm_emulate_one_insn(x86_insn_is_portio) )
+        else if ( !hvm_emulate_one_insn(x86_insn_is_portio, "port I/O") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
@@ -2686,7 +2686,7 @@ void svm_vmexit_handler(struct cpu_user_
     case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
         if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
             svm_vmexit_do_cr_access(vmcb, regs);
-        else if ( !hvm_emulate_one_insn(x86_insn_is_cr_access) )
+        else if ( !hvm_emulate_one_insn(x86_insn_is_cr_access, "CR access") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
@@ -2696,7 +2696,7 @@ void svm_vmexit_handler(struct cpu_user_
             svm_invlpg_intercept(vmcb->exitinfo1);
             __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
         }
-        else if ( !hvm_emulate_one_insn(is_invlpg) )
+        else if ( !hvm_emulate_one_insn(is_invlpg, "invlpg") )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;
 
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -153,7 +153,7 @@ void vmx_realmode_emulate_one(struct hvm
     return;
 
  fail:
-    hvm_dump_emulation_state(XENLOG_G_ERR "Real-mode", hvmemul_ctxt);
+    hvm_dump_emulation_state(XENLOG_G_ERR, "Real-mode", hvmemul_ctxt);
     domain_crash(curr->domain);
 }
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4016,7 +4016,7 @@ void vmx_vmexit_handler(struct cpu_user_
         if ( exit_qualification & 0x10 )
         {
             /* INS, OUTS */
-            if ( !hvm_emulate_one_insn(x86_insn_is_portio) )
+            if ( !hvm_emulate_one_insn(x86_insn_is_portio, "port I/O") )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
         }
         else
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -49,8 +49,9 @@ enum emul_kind {
     EMUL_KIND_SET_CONTEXT_INSN
 };
 
-bool __nonnull(1) hvm_emulate_one_insn(
-    hvm_emulate_validate_t *validate);
+bool __nonnull(1, 2) hvm_emulate_one_insn(
+    hvm_emulate_validate_t *validate,
+    const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
@@ -77,7 +78,7 @@ int hvm_emulate_one_mmio(unsigned long m
 
 static inline bool handle_mmio(void)
 {
-    return hvm_emulate_one_insn(x86_insn_is_mem_access);
+    return hvm_emulate_one_insn(x86_insn_is_mem_access, "MMIO");
 }
 
 int hvmemul_insn_fetch(enum x86_segment seg,
@@ -90,7 +91,7 @@ int hvmemul_do_pio_buffer(uint16_t port,
                           uint8_t dir,
                           void *buffer);
 
-void hvm_dump_emulation_state(const char *prefix,
+void hvm_dump_emulation_state(const char *loglvl, const char *prefix,
                               struct hvm_emulate_ctxt *hvmemul_ctxt);
 
 #endif /* __ASM_X86_HVM_EMULATE_H__ */




* Re: [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-13 14:51 ` [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept() Jan Beulich
@ 2017-04-13 14:59   ` Andrew Cooper
  2017-04-13 15:28     ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-04-13 14:59 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Julien Grall, Adrian Pop

On 13/04/17 15:51, Jan Beulich wrote:
> While I did review d0a699a389 ("x86/monitor: add support for descriptor
> access events") it didn't really occur to me that someone could be this
> blunt and add unguarded emulation again just a few weeks after we
> guarded all special purpose emulator invocations. Fix this.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Oops - I also omitted that point from my review checklist.  I will
endeavour not to make the same mistake again.

>
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3598,6 +3598,28 @@ gp_fault:
>      return X86EMUL_EXCEPTION;
>  }
>  
> +static bool is_sysdesc_access(const struct x86_emulate_state *state,
> +                              const struct x86_emulate_ctxt *ctxt)
> +{
> +    unsigned int ext;
> +    int mode = x86_insn_modrm(state, NULL, &ext);

Unfortunately, this is another example which Coverity will pick up on,
along with the use of x86_insn_modrm() in is_invlpg().

In the case that we return -EINVAL, ext is uninitialised when it gets
used below.

Other than that, the change looks good.

~Andrew

> +
> +    switch ( ctxt->opcode )
> +    {
> +    case X86EMUL_OPC(0x0f, 0x00):
> +        if ( !(ext & 4) ) /* SLDT / STR / LLDT / LTR */
> +            return true;
> +        break;
> +
> +    case X86EMUL_OPC(0x0f, 0x01):
> +        if ( mode != 3 && !(ext & 4) ) /* SGDT / SIDT / LGDT / LIDT */
> +            return true;
> +        break;
> +    }
> +
> +    return false;
> +}
> +
>  int hvm_descriptor_access_intercept(uint64_t exit_info,
>                                      uint64_t vmx_exit_qualification,
>                                      unsigned int descriptor, bool is_write)



* Re: [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control
  2017-04-13 14:53 ` [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control Jan Beulich
@ 2017-04-13 14:59   ` Razvan Cojocaru
  2017-04-13 15:16   ` Andrew Cooper
  2017-04-14  6:22   ` Tian, Kevin
  2 siblings, 0 replies; 15+ messages in thread
From: Razvan Cojocaru @ 2017-04-13 14:59 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Adrian Pop, Andrew Cooper,
	Julien Grall, Jun Nakajima, Tamas K Lengyel, Boris Ostrovsky

On 04/13/2017 05:53 PM, Jan Beulich wrote:
> This is an optional feature and hence we should check for it before
> use.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-13 14:54 ` [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
@ 2017-04-13 15:12   ` Boris Ostrovsky
  2017-04-13 15:18   ` Andrew Cooper
  2017-04-14  6:23   ` Tian, Kevin
  2 siblings, 0 replies; 15+ messages in thread
From: Boris Ostrovsky @ 2017-04-13 15:12 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Andrew Cooper, Julien Grall, Jun Nakajima,
	Suravee Suthikulpanit

On 04/13/2017 10:54 AM, Jan Beulich wrote:
> This helps distinguish the call paths leading there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




* Re: [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control
  2017-04-13 14:53 ` [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control Jan Beulich
  2017-04-13 14:59   ` Razvan Cojocaru
@ 2017-04-13 15:16   ` Andrew Cooper
  2017-04-13 15:30     ` Jan Beulich
  2017-04-14  6:22   ` Tian, Kevin
  2 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-04-13 15:16 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Suravee Suthikulpanit, Razvan Cojocaru, Adrian Pop,
	Tamas K Lengyel, Julien Grall, Jun Nakajima, Boris Ostrovsky

On 13/04/17 15:53, Jan Beulich wrote:
> @@ -223,9 +223,15 @@ int arch_monitor_domctl_event(struct dom
>          ad->monitor.descriptor_access_enabled = requested_status;
>  
>          for_each_vcpu ( d, v )
> -            hvm_funcs.set_descriptor_access_exiting(v, requested_status);
> +        {
> +            ok = hvm_funcs.set_descriptor_access_exiting(v, requested_status);
> +            if ( !ok )
> +                break;
> +        }
>  
>          domain_unpause(d);
> +        if ( !ok )
> +            return -EOPNOTSUPP;

While the overall purpose of the patch is clearly a good thing, this
implementation isn't ideal.

In principle, this structure could hit a failure mid way through the
vcpus, leaving them in an inconsistent state.

Instead, why not clobber the set_descriptor_access_exiting() function
pointer if support isn't available, and use that (before the
domain_pause()) to fail with EOPNOTSUPP?

~Andrew


* Re: [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-13 14:54 ` [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
  2017-04-13 15:12   ` Boris Ostrovsky
@ 2017-04-13 15:18   ` Andrew Cooper
  2017-04-14  6:23   ` Tian, Kevin
  2 siblings, 0 replies; 15+ messages in thread
From: Andrew Cooper @ 2017-04-13 15:18 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Kevin Tian, Julien Grall, Boris Ostrovsky, Jun Nakajima,
	Suravee Suthikulpanit

On 13/04/17 15:54, Jan Beulich wrote:
> This helps distinguish the call paths leading there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-13 14:59   ` Andrew Cooper
@ 2017-04-13 15:28     ` Jan Beulich
  2017-04-13 16:00       ` Andrew Cooper
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 15:28 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Julien Grall, Adrian Pop

>>> On 13.04.17 at 16:59, <andrew.cooper3@citrix.com> wrote:
> On 13/04/17 15:51, Jan Beulich wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3598,6 +3598,28 @@ gp_fault:
>>      return X86EMUL_EXCEPTION;
>>  }
>>  
>> +static bool is_sysdesc_access(const struct x86_emulate_state *state,
>> +                              const struct x86_emulate_ctxt *ctxt)
>> +{
>> +    unsigned int ext;
>> +    int mode = x86_insn_modrm(state, NULL, &ext);
> 
> Unfortunately, this is another example which Coverity will pick up on,
> along with the use of x86_insn_modrm() in is_invlpg().
> 
> In the case that we return -EINVAL, ext is uninitialised when it gets
> used below.

Sigh. Yes, we can work around this tool limitation, but honestly I
don't really agree doing so in cases like this (where the code is
clearly correct, as -EINVAL can't come back). Plus we have
precedent to the above other than just in is_invlpg():
priv_op_validate() does the same, as does emulate_gate_op().
If there was a way to work around the warnings without growing
generated code, I'd be less opposed (but still not entirely in
agreement), but all I can see is either making the return value
check more involved or adding initializers. In neither case the
compiler will be able to eliminate the resulting insns.

Jan



* Re: [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control
  2017-04-13 15:16   ` Andrew Cooper
@ 2017-04-13 15:30     ` Jan Beulich
  0 siblings, 0 replies; 15+ messages in thread
From: Jan Beulich @ 2017-04-13 15:30 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Suravee Suthikulpanit, Razvan Cojocaru, Adrian Pop,
	Tamas K Lengyel, Julien Grall, Jun Nakajima, xen-devel,
	Boris Ostrovsky

>>> On 13.04.17 at 17:16, <andrew.cooper3@citrix.com> wrote:
> On 13/04/17 15:53, Jan Beulich wrote:
>> @@ -223,9 +223,15 @@ int arch_monitor_domctl_event(struct dom
>>          ad->monitor.descriptor_access_enabled = requested_status;
>>  
>>          for_each_vcpu ( d, v )
>> -            hvm_funcs.set_descriptor_access_exiting(v, requested_status);
>> +        {
>> +            ok = hvm_funcs.set_descriptor_access_exiting(v, requested_status);
>> +            if ( !ok )
>> +                break;
>> +        }
>>  
>>          domain_unpause(d);
>> +        if ( !ok )
>> +            return -EOPNOTSUPP;
> 
> While the overall purpose of the patch is clearly a good thing, this
> implementation isn't ideal.
> 
> In principle, this structure could hit a failure mid way through the
> vcpus, leaving them in an inconsistent state.
> 
> Instead, why not clobber the set_descriptor_access_exiting() function
> pointer if support isn't available, and use that (before the
> domain_pause()) to fail with EOPNOTSUPP?

Ah, yes, I'll do that. Don't know why it didn't occur to me right away
(as I didn't like this property of the code above either).

Jan



* Re: [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-13 15:28     ` Jan Beulich
@ 2017-04-13 16:00       ` Andrew Cooper
  2017-04-18  8:59         ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2017-04-13 16:00 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Julien Grall, Adrian Pop

On 13/04/17 16:28, Jan Beulich wrote:
>>>> On 13.04.17 at 16:59, <andrew.cooper3@citrix.com> wrote:
>> On 13/04/17 15:51, Jan Beulich wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -3598,6 +3598,28 @@ gp_fault:
>>>      return X86EMUL_EXCEPTION;
>>>  }
>>>  
>>> +static bool is_sysdesc_access(const struct x86_emulate_state *state,
>>> +                              const struct x86_emulate_ctxt *ctxt)
>>> +{
>>> +    unsigned int ext;
>>> +    int mode = x86_insn_modrm(state, NULL, &ext);
>> Unfortunately, this is another example which Coverity will pick up on,
>> along with the use of x86_insn_modrm() in is_invlpg().
>>
>> In the case that we return -EINVAL, ext is uninitialised when it gets
>> used below.
> Sigh. Yes, we can work around this tool limitation

It is not a tools limitation.  It is an entirely legitimate complaint
about the state of the current code.

ext not being undefined depends on all 8k lines of emulator code not
being buggy and accidentally clearing d & ModRM (or is it Vsib for that
matter), or accidentally clobbering ->modrm_mod.

> , but honestly I
> don't really agree doing so in cases like this (where the code is
> clearly correct, as -EINVAL can't come back). Plus we have
> precedent to the above other than just in is_invlpg():
> priv_op_validate() does the same, as does emulate_gate_op().
> If there was a way to work around the warnings without growing
> generated code, I'd be less opposed (but still not entirely in
> agreement), but all I can see is either making the return value
> check more involved or adding initializers. In neither case the
> compiler will be able to eliminate the resulting insns.

How about returning

enum {
    modrm_reg,
    modrm_mem,
    modrm_none
};

The users of x86_insn_modrm() don't care about values between 0 and 2. 
They are only looking for "does it have a memory reference", or "does it
have extra opcode encoded in the reg field".

~Andrew


* Re: [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control
  2017-04-13 14:53 ` [PATCH 2/3] VMX: don't blindly enable descriptor table exiting control Jan Beulich
  2017-04-13 14:59   ` Razvan Cojocaru
  2017-04-13 15:16   ` Andrew Cooper
@ 2017-04-14  6:22   ` Tian, Kevin
  2 siblings, 0 replies; 15+ messages in thread
From: Tian, Kevin @ 2017-04-14  6:22 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Suravee Suthikulpanit, Razvan Cojocaru, Adrian Pop,
	Andrew Cooper, Julien Grall, Nakajima, Jun, Tamas K Lengyel,
	Boris Ostrovsky

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, April 13, 2017 10:53 PM
> 
> This is an optional feature and hence we should check for it before
> use.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation
  2017-04-13 14:54 ` [PATCH 3/3] x86/HVM: don't uniformly report "MMIO" for various forms of failed emulation Jan Beulich
  2017-04-13 15:12   ` Boris Ostrovsky
  2017-04-13 15:18   ` Andrew Cooper
@ 2017-04-14  6:23   ` Tian, Kevin
  2 siblings, 0 replies; 15+ messages in thread
From: Tian, Kevin @ 2017-04-14  6:23 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Boris Ostrovsky, Andrew Cooper, Julien Grall, Nakajima, Jun,
	Suravee Suthikulpanit

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, April 13, 2017 10:54 PM
> 
> This helps distinguish the call paths leading there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH 1/3] x86/HVM: restrict emulation in hvm_descriptor_access_intercept()
  2017-04-13 16:00       ` Andrew Cooper
@ 2017-04-18  8:59         ` Jan Beulich
  0 siblings, 0 replies; 15+ messages in thread
From: Jan Beulich @ 2017-04-18  8:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Julien Grall, Adrian Pop

>>> On 13.04.17 at 18:00, <andrew.cooper3@citrix.com> wrote:
> On 13/04/17 16:28, Jan Beulich wrote:
>>>>> On 13.04.17 at 16:59, <andrew.cooper3@citrix.com> wrote:
>>> On 13/04/17 15:51, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -3598,6 +3598,28 @@ gp_fault:
>>>>      return X86EMUL_EXCEPTION;
>>>>  }
>>>>  
>>>> +static bool is_sysdesc_access(const struct x86_emulate_state *state,
>>>> +                              const struct x86_emulate_ctxt *ctxt)
>>>> +{
>>>> +    unsigned int ext;
>>>> +    int mode = x86_insn_modrm(state, NULL, &ext);
>>> Unfortunately, this is another example which Coverity will pick up on,
>>> along with the use of x86_insn_modrm() in is_invlpg().
>>>
>>> In the case that we return -EINVAL, ext is uninitialised when it gets
>>> used below.
>> Sigh. Yes, we can work around this tool limitation
> 
> It is not a tools limitation.  It is an entirely legitimate complaint
> about the state of the current code.
> 
> ext not being undefined depends on all 8k lines of emulator code not
> being buggy and accidentally clearing d & ModRM (or is it Vsib for that
> matter), or accidentally clobbering ->modrm_mod.

Come on - such a fundamental bug would have worse
consequences than uninitialized variables here. I agree this being
a tool limitation is a matter of perspective, as the tool can't possibly
know what we know. But the tool also doesn't offer a way to give
it the missing piece of information.

>> , but honestly I
>> don't really agree doing so in cases like this (where the code is
>> clearly correct, as -EINVAL can't come back). Plus we have
>> precedent to the above other than just in is_invlpg():
>> priv_op_validate() does the same, as does emulate_gate_op().
>> If there was a way to work around the warnings without growing
>> generated code, I'd be less opposed (but still not entirely in
>> agreement), but all I can see is either making the return value
>> check more involved or adding initializers. In neither case the
>> compiler will be able to eliminate the resulting insns.
> 
> How about returning
> 
> enum {
>     modrm_reg,
>     modrm_mem,
>     modrm_none
> };
> 
> The users of x86_insn_modrm() don't care about values between 0 and 2. 
> They are only looking for "does it have a memory reference", or "does it
> have extra opcode encoded in the reg field".

I don't see how this would help - the variables we're talking about
here would still be uninitialized. Instead, how about this as a prereq
patch to deal with the problem once and for all?

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -8017,8 +8017,14 @@ x86_insn_modrm(const struct x86_emulate_
 {
     check_state(state);
 
-    if ( state->modrm_mod > 3 )
+    if ( unlikely(state->modrm_mod > 3) )
+    {
+        if ( rm )
+            *rm = ~0U;
+        if ( reg )
+            *reg = ~0U;
         return -EINVAL;
+    }
 
     if ( rm )
         *rm = state->modrm_rm;

Jan


