* [PATCH 1/2] vm_event: Sanitize vm_event response handling
@ 2016-09-13 18:12 Tamas K Lengyel
  2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
                   ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-13 18:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Tamas K Lengyel, Julien Grall, Jan Beulich, Andrew Cooper

Setting response flags in vm_event is only ever safe if the vCPUs are paused.
To reflect this we move all checks within the if block that already checks
whether this is the case. For checks that are only supported on one
architecture, we relocate the bitmask operations to the arch-specific
handlers, avoiding the overhead on architectures that don't support them.

Furthermore, we clean up the emulation checks so that they more clearly
represent the decision logic governing when emulation should take place. As
part of this we also set the stage to allow emulation in response to other
types of events, not just mem_access violations.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/x86/mm/p2m.c          | 81 +++++++++++++++++++-----------------------
 xen/arch/x86/vm_event.c        | 35 +++++++++++++++++-
 xen/common/vm_event.c          | 53 ++++++++++++++-------------
 xen/include/asm-arm/p2m.h      |  4 +--
 xen/include/asm-arm/vm_event.h |  9 ++++-
 xen/include/asm-x86/p2m.h      |  4 +--
 xen/include/asm-x86/vm_event.h |  5 ++-
 xen/include/xen/mem_access.h   | 12 -------
 8 files changed, 113 insertions(+), 90 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7d14c3b..6c01868 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
     }
 }
 
-void p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp)
+bool_t p2m_mem_access_emulate_check(struct vcpu *v,
+                                    const vm_event_response_t *rsp)
 {
-    /* Mark vcpu for skipping one instruction upon rescheduling. */
-    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
-    {
-        xenmem_access_t access;
-        bool_t violation = 1;
-        const struct vm_event_mem_access *data = &rsp->u.mem_access;
+    xenmem_access_t access;
+    bool_t violation = 1;
+    const struct vm_event_mem_access *data = &rsp->u.mem_access;
 
-        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+    {
+        switch ( access )
         {
-            switch ( access )
-            {
-            case XENMEM_access_n:
-            case XENMEM_access_n2rwx:
-            default:
-                violation = data->flags & MEM_ACCESS_RWX;
-                break;
+        case XENMEM_access_n:
+        case XENMEM_access_n2rwx:
+        default:
+            violation = data->flags & MEM_ACCESS_RWX;
+            break;
 
-            case XENMEM_access_r:
-                violation = data->flags & MEM_ACCESS_WX;
-                break;
+        case XENMEM_access_r:
+            violation = data->flags & MEM_ACCESS_WX;
+            break;
 
-            case XENMEM_access_w:
-                violation = data->flags & MEM_ACCESS_RX;
-                break;
+        case XENMEM_access_w:
+            violation = data->flags & MEM_ACCESS_RX;
+            break;
 
-            case XENMEM_access_x:
-                violation = data->flags & MEM_ACCESS_RW;
-                break;
+        case XENMEM_access_x:
+            violation = data->flags & MEM_ACCESS_RW;
+            break;
 
-            case XENMEM_access_rx:
-            case XENMEM_access_rx2rw:
-                violation = data->flags & MEM_ACCESS_W;
-                break;
+        case XENMEM_access_rx:
+        case XENMEM_access_rx2rw:
+            violation = data->flags & MEM_ACCESS_W;
+            break;
 
-            case XENMEM_access_wx:
-                violation = data->flags & MEM_ACCESS_R;
-                break;
+        case XENMEM_access_wx:
+            violation = data->flags & MEM_ACCESS_R;
+            break;
 
-            case XENMEM_access_rw:
-                violation = data->flags & MEM_ACCESS_X;
-                break;
+        case XENMEM_access_rw:
+            violation = data->flags & MEM_ACCESS_X;
+            break;
 
-            case XENMEM_access_rwx:
-                violation = 0;
-                break;
-            }
+        case XENMEM_access_rwx:
+            violation = 0;
+            break;
         }
-
-        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
-
-        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
-            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
     }
+
+    return violation;
 }
 
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index e938ca3..343b9c8 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -18,6 +18,7 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <asm/p2m.h>
 #include <asm/vm_event.h>
 
 /* Implicitly serialized by the domctl lock. */
@@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
     d->arch.mem_access_emulate_each_rep = 0;
 }
 
-void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                vm_event_response_t *rsp)
 {
+    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
+        return;
+
     if ( !is_hvm_domain(d) )
         return;
 
@@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
     req->data.regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
+    {
+        v->arch.vm_event->emulate_flags = 0;
+        return;
+    }
+
+    switch ( rsp->reason )
+    {
+    case VM_EVENT_REASON_MEM_ACCESS:
+        /*
+         * Emulate iff this is a response to a mem_access violation and there
+         * are still conflicting mem_access permissions in-place.
+         */
+        if ( p2m_mem_access_emulate_check(v, rsp) )
+        {
+            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
+                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
+
+            v->arch.vm_event->emulate_flags = rsp->flags;
+        }
+        break;
+    default:
+        break;
+    };
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 8398af7..907ab40 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
          */
-        switch ( rsp.reason )
-        {
-#ifdef CONFIG_X86
-        case VM_EVENT_REASON_MOV_TO_MSR:
-#endif
-        case VM_EVENT_REASON_WRITE_CTRLREG:
-            vm_event_register_write_resume(v, &rsp);
-            break;
-
-#ifdef CONFIG_HAS_MEM_ACCESS
-        case VM_EVENT_REASON_MEM_ACCESS:
-            mem_access_resume(v, &rsp);
-            break;
-#endif
 
+        /* Check flags which apply only when the vCPU is paused */
+        if ( atomic_read(&v->vm_event_pause_count) )
+        {
 #ifdef CONFIG_HAS_MEM_PAGING
-        case VM_EVENT_REASON_MEM_PAGING:
-            p2m_mem_paging_resume(d, &rsp);
-            break;
+            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
+                p2m_mem_paging_resume(d, &rsp);
 #endif
 
-        };
+            /*
+             * Check emulation flags in the arch-specific handler only, as it
+             * has to set arch-specific flags when supported, and to avoid
+             * bitmask overhead when it isn't supported.
+             */
+            vm_event_emulate_check(v, &rsp);
+
+            /*
+             * Check in arch-specific handler to avoid bitmask overhead when
+             * not supported.
+             */
+            vm_event_register_write_resume(v, &rsp);
 
-        /* Check for altp2m switch */
-        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
-            p2m_altp2m_check(v, rsp.altp2m_idx);
+            /*
+             * Check in arch-specific handler to avoid bitmask overhead when
+             * not supported.
+             */
+            vm_event_toggle_singlestep(d, v, &rsp);
+
+            /* Check for altp2m switch */
+            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
+                p2m_altp2m_check(v, rsp.altp2m_idx);
 
-        /* Check flags which apply only when the vCPU is paused */
-        if ( atomic_read(&v->vm_event_pause_count) )
-        {
             if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
                 vm_event_set_registers(v, &rsp);
 
-            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
-                vm_event_toggle_singlestep(d, v);
-
             if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                 vm_event_vcpu_unpause(v);
         }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53c4d78..5e9bc54 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -121,10 +121,10 @@ typedef enum {
                              p2m_to_mask(p2m_map_foreign)))
 
 static inline
-void p2m_mem_access_emulate_check(struct vcpu *v,
+bool_t p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp)
 {
-    /* Not supported on ARM. */
+    return false;
 }
 
 static inline
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 9482636..66f2474 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
     memset(&d->monitor, 0, sizeof(d->monitor));
 }
 
-static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
+static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                              vm_event_response_t *rsp)
 {
     /* Not supported on ARM. */
 }
@@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
     /* Not supported on ARM. */
 }
 
+static inline
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+    /* Not supported on ARM. */
+}
+
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9fc9ead..1897def 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -677,8 +677,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
 
 /* Check for emulation and mark vcpu for skipping one instruction
  * upon rescheduling if required. */
-void p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp);
+bool_t p2m_mem_access_emulate_check(struct vcpu *v,
+                                    const vm_event_response_t *rsp);
 
 /* Sanity check for mem_access hardware support */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 294def6..ebb5d88 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -35,8 +35,11 @@ int vm_event_init_domain(struct domain *d);
 
 void vm_event_cleanup_domain(struct domain *d);
 
-void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v);
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                vm_event_response_t *rsp);
 
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
 
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 3d054e0..da36e07 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -30,12 +30,6 @@
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 
-static inline
-void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
-{
-    p2m_mem_access_emulate_check(v, rsp);
-}
-
 #else
 
 static inline
@@ -45,12 +39,6 @@ int mem_access_memop(unsigned long cmd,
     return -ENOSYS;
 }
 
-static inline
-void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
-{
-    /* Nothing to do. */
-}
-
 #endif /* HAS_MEM_ACCESS */
 
 #endif /* _XEN_ASM_MEM_ACCESS_H */
-- 
2.9.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
@ 2016-09-13 18:12 ` Tamas K Lengyel
  2016-09-14  7:58   ` Razvan Cojocaru
  2016-09-14 15:55   ` Jan Beulich
  2016-09-14  7:49 ` [PATCH 1/2] vm_event: Sanitize vm_event response handling Razvan Cojocaru
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-13 18:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, Razvan Cojocaru,
	George Dunlap, Tamas K Lengyel, Julien Grall, Paul Durrant,
	Jun Nakajima, Andrew Cooper

When emulating instructions the emulator maintains a small i-cache fetched
from guest memory. This patch extends the vm_event interface to allow
returning this i-cache via the vm_event response instead.

When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
normally has to remove the INT3 from memory, singlestep, then place the INT3
back to allow the guest to continue execution. This routine however is
susceptible to a race condition on multi-vCPU guests. By allowing the
subscriber to return the i-cache to be used for emulation, it can side-step
the problem by returning a clean buffer without the INT3 present.

As part of this patch we rename hvm_mem_access_emulate_one to
hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
scenarios now, not just in response to mem_access events.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>

Note: This patch has only been compile-tested.
---
 xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
 xen/arch/x86/hvm/hvm.c            |  9 +++++---
 xen/arch/x86/hvm/vmx/vmx.c        |  1 +
 xen/arch/x86/vm_event.c           |  9 +++++++-
 xen/common/vm_event.c             |  1 -
 xen/include/asm-x86/hvm/emulate.h |  8 ++++---
 xen/include/asm-x86/vm_event.h    |  3 ++-
 xen/include/public/vm_event.h     | 16 +++++++++++++-
 8 files changed, 67 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index cc25676..504ed35 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
     if ( curr->arch.vm_event )
     {
         unsigned int safe_size =
-            min(size, curr->arch.vm_event->emul_read_data.size);
+            min(size, curr->arch.vm_event->emul_read.size);
 
-        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
+        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
         memset(buffer + safe_size, 0, size - safe_size);
         return X86EMUL_OKAY;
     }
@@ -827,7 +827,7 @@ static int hvmemul_read(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
 
-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return set_context_data(p_data, bytes);
 
     return __hvmemul_read(
@@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
 
-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
     {
         int rc = set_context_data(p_new, bytes);
 
@@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
     p2m_type_t p2mt;
     int rc;
 
-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
                                             bytes_per_rep, reps, ctxt);
 
@@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
     if ( buf == NULL )
         return X86EMUL_UNHANDLEABLE;
 
-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
     {
         rc = set_context_data(buf, bytes);
 
@@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
 
     *val = 0;
 
-    if ( unlikely(hvmemul_ctxt->set_context) )
+    if ( unlikely(hvmemul_ctxt->set_context_data) )
         return set_context_data(val, bytes);
 
     return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
@@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
         pfec |= PFEC_user_mode;
 
     hvmemul_ctxt->insn_buf_eip = regs->eip;
-    if ( !vio->mmio_insn_bytes )
+
+    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
+    {
+        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
+        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
+               hvmemul_ctxt->insn_buf_bytes);
+    }
+    else if ( !vio->mmio_insn_bytes )
     {
         hvmemul_ctxt->insn_buf_bytes =
             hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
@@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
     return rc;
 }
 
-void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
+void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     unsigned int errcode)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
@@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
     case EMUL_KIND_NOWRITE:
         rc = hvm_emulate_one_no_write(&ctx);
         break;
-    case EMUL_KIND_SET_CONTEXT:
-        ctx.set_context = 1;
-        /* Intentional fall-through. */
-    default:
+    case EMUL_KIND_SET_CONTEXT_DATA:
+        ctx.set_context_data = 1;
+        rc = hvm_emulate_one(&ctx);
+        break;
+    case EMUL_KIND_SET_CONTEXT_INSN:
+        ctx.set_context_insn = 1;
         rc = hvm_emulate_one(&ctx);
+        break;
+    case EMUL_KIND_NORMAL:
+        rc = hvm_emulate_one(&ctx);
+        break;
+    default:
+        return;
     }
 
     switch ( rc )
@@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
     hvmemul_ctxt->ctxt.force_writeback = 1;
     hvmemul_ctxt->seg_reg_accessed = 0;
     hvmemul_ctxt->seg_reg_dirty = 0;
-    hvmemul_ctxt->set_context = 0;
+    hvmemul_ctxt->set_context_data = 0;
+    hvmemul_ctxt->set_context_insn = 0;
     hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
     hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ca96643..7462794 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
 
             if ( v->arch.vm_event->emulate_flags &
                  VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                kind = EMUL_KIND_SET_CONTEXT;
+                kind = EMUL_KIND_SET_CONTEXT_DATA;
             else if ( v->arch.vm_event->emulate_flags &
                       VM_EVENT_FLAG_EMULATE_NOWRITE )
                 kind = EMUL_KIND_NOWRITE;
+            else if ( v->arch.vm_event->emulate_flags &
+                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+                kind = EMUL_KIND_SET_CONTEXT_INSN;
 
-            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
-                                       HVM_DELIVER_NO_ERROR_CODE);
+            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
+                                     HVM_DELIVER_NO_ERROR_CODE);
 
             v->arch.vm_event->emulate_flags = 0;
         }
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2759e6f..d214716 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -57,6 +57,7 @@
 #include <asm/altp2m.h>
 #include <asm/event.h>
 #include <asm/monitor.h>
+#include <asm/vm_event.h>
 #include <public/arch-x86/cpuid.h>
 
 static bool_t __initdata opt_force_ept;
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 343b9c8..03beed3 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
         if ( p2m_mem_access_emulate_check(v, rsp) )
         {
             if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
-                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
+                v->arch.vm_event->emul_read = rsp->data.emul.read;
 
             v->arch.vm_event->emulate_flags = rsp->flags;
         }
         break;
+    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
+        {
+            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
+            v->arch.vm_event->emulate_flags = rsp->flags;
+        }
+        break;
     default:
         break;
     };
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 907ab40..d8ee7f3 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
          */
-
         /* Check flags which apply only when the vCPU is paused */
         if ( atomic_read(&v->vm_event_pause_count) )
         {
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 3aabcbe..b52f99e 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
 
     uint32_t intr_shadow;
 
-    bool_t set_context;
+    bool_t set_context_data;
+    bool_t set_context_insn;
 };
 
 enum emul_kind {
     EMUL_KIND_NORMAL,
     EMUL_KIND_NOWRITE,
-    EMUL_KIND_SET_CONTEXT
+    EMUL_KIND_SET_CONTEXT_DATA,
+    EMUL_KIND_SET_CONTEXT_INSN
 };
 
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 int hvm_emulate_one_no_write(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-void hvm_mem_access_emulate_one(enum emul_kind kind,
+void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);
 void hvm_emulate_prepare(
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index ebb5d88..a672784 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -27,7 +27,8 @@
  */
 struct arch_vm_event {
     uint32_t emulate_flags;
-    struct vm_event_emul_read_data emul_read_data;
+    struct vm_event_emul_read_data emul_read;
+    struct vm_event_emul_insn_data emul_insn;
     struct monitor_write_data write_data;
 };
 
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index f756126..ef62932 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -97,6 +97,13 @@
  * Requires the vCPU to be paused already (synchronous events only).
  */
 #define VM_EVENT_FLAG_SET_REGISTERS      (1 << 8)
+/*
+ * Instruction cache is being sent back to the hypervisor in the event response
+ * to be used by the emulator. This flag is only useful when combined with
+ * VM_EVENT_FLAG_EMULATE and is incompatible with also setting
+ * VM_EVENT_FLAG_EMULATE_NOWRITE or VM_EVENT_FLAG_SET_EMUL_READ_DATA.
+ */
+#define VM_EVENT_FLAG_SET_EMUL_INSN_DATA (1 << 9)
 
 /*
  * Reasons for the vm event request
@@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
     uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
 };
 
+struct vm_event_emul_insn_data {
+    uint8_t data[16]; /* Has to be completely filled */
+};
+
 typedef struct vm_event_st {
     uint32_t version;   /* VM_EVENT_INTERFACE_VERSION */
     uint32_t flags;     /* VM_EVENT_FLAG_* */
@@ -291,7 +302,10 @@ typedef struct vm_event_st {
             struct vm_event_regs_arm arm;
         } regs;
 
-        struct vm_event_emul_read_data emul_read_data;
+        union {
+            struct vm_event_emul_read_data read;
+            struct vm_event_emul_insn_data insn;
+        } emul;
     } data;
 } vm_event_request_t, vm_event_response_t;
 
-- 
2.9.3




* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
  2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
@ 2016-09-14  7:49 ` Razvan Cojocaru
  2016-09-14  9:33 ` Julien Grall
  2016-09-14 13:38 ` George Dunlap
  3 siblings, 0 replies; 21+ messages in thread
From: Razvan Cojocaru @ 2016-09-14  7:49 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: George Dunlap, Andrew Cooper, Julien Grall, Stefano Stabellini,
	Jan Beulich

On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
> Setting response flags in vm_event is only ever safe if the vCPUs are paused.
> To reflect this we move all checks within the if block that already checks
> whether this is the case. For checks that are only supported on one
> architecture, we relocate the bitmask operations to the arch-specific
> handlers, avoiding the overhead on architectures that don't support them.
> 
> Furthermore, we clean up the emulation checks so that they more clearly
> represent the decision logic governing when emulation should take place. As
> part of this we also set the stage to allow emulation in response to other
> types of events, not just mem_access violations.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/x86/mm/p2m.c          | 81 +++++++++++++++++++-----------------------
>  xen/arch/x86/vm_event.c        | 35 +++++++++++++++++-
>  xen/common/vm_event.c          | 53 ++++++++++++++-------------
>  xen/include/asm-arm/p2m.h      |  4 +--
>  xen/include/asm-arm/vm_event.h |  9 ++++-
>  xen/include/asm-x86/p2m.h      |  4 +--
>  xen/include/asm-x86/vm_event.h |  5 ++-
>  xen/include/xen/mem_access.h   | 12 -------
>  8 files changed, 113 insertions(+), 90 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7d14c3b..6c01868 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>      }
>  }
>  
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,

Judging by the reviews for my pending patch, I believe you'll be asked
to switch to bool / true / false here.

> +                                    const vm_event_response_t *rsp)
>  {
> -    /* Mark vcpu for skipping one instruction upon rescheduling. */
> -    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
> -    {
> -        xenmem_access_t access;
> -        bool_t violation = 1;
> -        const struct vm_event_mem_access *data = &rsp->u.mem_access;
> +    xenmem_access_t access;
> +    bool_t violation = 1;
> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
>  
> -        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    {
> +        switch ( access )
>          {
> -            switch ( access )
> -            {
> -            case XENMEM_access_n:
> -            case XENMEM_access_n2rwx:
> -            default:
> -                violation = data->flags & MEM_ACCESS_RWX;
> -                break;
> +        case XENMEM_access_n:
> +        case XENMEM_access_n2rwx:
> +        default:
> +            violation = data->flags & MEM_ACCESS_RWX;
> +            break;
>  
> -            case XENMEM_access_r:
> -                violation = data->flags & MEM_ACCESS_WX;
> -                break;
> +        case XENMEM_access_r:
> +            violation = data->flags & MEM_ACCESS_WX;
> +            break;
>  
> -            case XENMEM_access_w:
> -                violation = data->flags & MEM_ACCESS_RX;
> -                break;
> +        case XENMEM_access_w:
> +            violation = data->flags & MEM_ACCESS_RX;
> +            break;
>  
> -            case XENMEM_access_x:
> -                violation = data->flags & MEM_ACCESS_RW;
> -                break;
> +        case XENMEM_access_x:
> +            violation = data->flags & MEM_ACCESS_RW;
> +            break;
>  
> -            case XENMEM_access_rx:
> -            case XENMEM_access_rx2rw:
> -                violation = data->flags & MEM_ACCESS_W;
> -                break;
> +        case XENMEM_access_rx:
> +        case XENMEM_access_rx2rw:
> +            violation = data->flags & MEM_ACCESS_W;
> +            break;
>  
> -            case XENMEM_access_wx:
> -                violation = data->flags & MEM_ACCESS_R;
> -                break;
> +        case XENMEM_access_wx:
> +            violation = data->flags & MEM_ACCESS_R;
> +            break;
>  
> -            case XENMEM_access_rw:
> -                violation = data->flags & MEM_ACCESS_X;
> -                break;
> +        case XENMEM_access_rw:
> +            violation = data->flags & MEM_ACCESS_X;
> +            break;
>  
> -            case XENMEM_access_rwx:
> -                violation = 0;
> -                break;
> -            }
> +        case XENMEM_access_rwx:
> +            violation = 0;
> +            break;
>          }
> -
> -        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
> -
> -        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
> -            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>      }
> +
> +    return violation;
>  }
>  
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index e938ca3..343b9c8 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -18,6 +18,7 @@
>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <asm/p2m.h>
>  #include <asm/vm_event.h>
>  
>  /* Implicitly serialized by the domctl lock. */
> @@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
>      d->arch.mem_access_emulate_each_rep = 0;
>  }
>  
> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                vm_event_response_t *rsp)
>  {
> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
> +        return;
> +
>      if ( !is_hvm_domain(d) )
>          return;
>  
> @@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
>      req->data.regs.x86.cs_arbytes = seg.attr.bytes;
>  }
>  
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
> +    {
> +        v->arch.vm_event->emulate_flags = 0;
> +        return;
> +    }
> +
> +    switch ( rsp->reason )
> +    {
> +    case VM_EVENT_REASON_MEM_ACCESS:
> +        /*
> +         * Emulate iff this is a response to a mem_access violation and there
> +         * are still conflicting mem_access permissions in-place.
> +         */
> +        if ( p2m_mem_access_emulate_check(v, rsp) )
> +        {
> +            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> +                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
> +
> +            v->arch.vm_event->emulate_flags = rsp->flags;
> +        }
> +        break;
> +    default:
> +        break;
> +    };
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 8398af7..907ab40 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>           * In some cases the response type needs extra handling, so here
>           * we call the appropriate handlers.
>           */
> -        switch ( rsp.reason )
> -        {
> -#ifdef CONFIG_X86
> -        case VM_EVENT_REASON_MOV_TO_MSR:
> -#endif
> -        case VM_EVENT_REASON_WRITE_CTRLREG:
> -            vm_event_register_write_resume(v, &rsp);
> -            break;
> -
> -#ifdef CONFIG_HAS_MEM_ACCESS
> -        case VM_EVENT_REASON_MEM_ACCESS:
> -            mem_access_resume(v, &rsp);
> -            break;
> -#endif
>  
> +        /* Check flags which apply only when the vCPU is paused */
> +        if ( atomic_read(&v->vm_event_pause_count) )
> +        {
>  #ifdef CONFIG_HAS_MEM_PAGING
> -        case VM_EVENT_REASON_MEM_PAGING:
> -            p2m_mem_paging_resume(d, &rsp);
> -            break;
> +            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
> +                p2m_mem_paging_resume(d, &rsp);
>  #endif
>  
> -        };
> +            /*
> +             * Check emulation flags in the arch-specific handler only, as it
> +             * has to set arch-specific flags when supported, and to avoid
> +             * bitmask overhead when it isn't supported.
> +             */
> +            vm_event_emulate_check(v, &rsp);
> +
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_register_write_resume(v, &rsp);
>  
> -        /* Check for altp2m switch */
> -        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> -            p2m_altp2m_check(v, rsp.altp2m_idx);
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_toggle_singlestep(d, v, &rsp);
> +
> +            /* Check for altp2m switch */
> +            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> +                p2m_altp2m_check(v, rsp.altp2m_idx);
>  
> -        /* Check flags which apply only when the vCPU is paused */
> -        if ( atomic_read(&v->vm_event_pause_count) )
> -        {
>              if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
>                  vm_event_set_registers(v, &rsp);
>  
> -            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
> -                vm_event_toggle_singlestep(d, v);
> -
>              if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
>                  vm_event_vcpu_unpause(v);
>          }
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53c4d78..5e9bc54 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -121,10 +121,10 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>                                    const vm_event_response_t *rsp)

This returns bool_t ...

>  {
> -    /* Not supported on ARM. */
> +    return false;
>  }

... but you return false. Please switch to bool.

>  
>  static inline
> diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
> index 9482636..66f2474 100644
> --- a/xen/include/asm-arm/vm_event.h
> +++ b/xen/include/asm-arm/vm_event.h
> @@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
>      memset(&d->monitor, 0, sizeof(d->monitor));
>  }
>  
> -static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                              vm_event_response_t *rsp)
>  {
>      /* Not supported on ARM. */
>  }
> @@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
>      /* Not supported on ARM. */
>  }
>  
> +static inline
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    /* Not supported on ARM. */
> +}
> +
>  #endif /* __ASM_ARM_VM_EVENT_H__ */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 9fc9ead..1897def 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -677,8 +677,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>  
>  /* Check for emulation and mark vcpu for skipping one instruction
>   * upon rescheduling if required. */
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp);
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
> +                                    const vm_event_response_t *rsp);

Bool.


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
@ 2016-09-14  7:58   ` Razvan Cojocaru
  2016-09-15 16:36     ` Tamas K Lengyel
  2016-09-14 15:55   ` Jan Beulich
  1 sibling, 1 reply; 21+ messages in thread
From: Razvan Cojocaru @ 2016-09-14  7:58 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima

On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
> When emulating instructions the emulator maintains a small i-cache fetched
> from the guest memory. This patch extends the vm_event interface to allow
> returning this i-cache via the vm_event response instead.
> 
> When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
> normally has to remove the INT3 from memory - singlestep - place back INT3
> to allow the guest to continue execution. This routine however is susceptible
> to a race-condition on multi-vCPU guests. By allowing the subscriber to return
> the i-cache to be used for emulation it can side-step the problem by returning
> a clean buffer without the INT3 present.
> 
> As part of this patch we rename hvm_mem_access_emulate_one to
> hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
> scenarios now, not just in response to mem_access events.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> 
> Note: This patch only has been compile-tested.
> ---
>  xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
>  xen/arch/x86/hvm/hvm.c            |  9 +++++---
>  xen/arch/x86/hvm/vmx/vmx.c        |  1 +
>  xen/arch/x86/vm_event.c           |  9 +++++++-
>  xen/common/vm_event.c             |  1 -
>  xen/include/asm-x86/hvm/emulate.h |  8 ++++---
>  xen/include/asm-x86/vm_event.h    |  3 ++-
>  xen/include/public/vm_event.h     | 16 +++++++++++++-
>  8 files changed, 67 insertions(+), 24 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index cc25676..504ed35 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
>      if ( curr->arch.vm_event )
>      {
>          unsigned int safe_size =
> -            min(size, curr->arch.vm_event->emul_read_data.size);
> +            min(size, curr->arch.vm_event->emul_read.size);
>  
> -        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
> +        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
>          memset(buffer + safe_size, 0, size - safe_size);
>          return X86EMUL_OKAY;
>      }
> @@ -827,7 +827,7 @@ static int hvmemul_read(
>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>  
> -    if ( unlikely(hvmemul_ctxt->set_context) )
> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>          return set_context_data(p_data, bytes);
>  
>      return __hvmemul_read(
> @@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>  
> -    if ( unlikely(hvmemul_ctxt->set_context) )
> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>      {
>          int rc = set_context_data(p_new, bytes);
>  
> @@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
>      p2m_type_t p2mt;
>      int rc;
>  
> -    if ( unlikely(hvmemul_ctxt->set_context) )
> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>          return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
>                                              bytes_per_rep, reps, ctxt);
>  
> @@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
>      if ( buf == NULL )
>          return X86EMUL_UNHANDLEABLE;
>  
> -    if ( unlikely(hvmemul_ctxt->set_context) )
> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>      {
>          rc = set_context_data(buf, bytes);
>  
> @@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
>  
>      *val = 0;
>  
> -    if ( unlikely(hvmemul_ctxt->set_context) )
> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>          return set_context_data(val, bytes);
>  
>      return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>          pfec |= PFEC_user_mode;
>  
>      hvmemul_ctxt->insn_buf_eip = regs->eip;
> -    if ( !vio->mmio_insn_bytes )
> +
> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
> +    {
> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
> +               hvmemul_ctxt->insn_buf_bytes);
> +    }
> +    else if ( !vio->mmio_insn_bytes )
>      {
>          hvmemul_ctxt->insn_buf_bytes =
>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
> @@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>      return rc;
>  }
>  
> -void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
> +void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>      unsigned int errcode)
>  {
>      struct hvm_emulate_ctxt ctx = {{ 0 }};
> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>      case EMUL_KIND_NOWRITE:
>          rc = hvm_emulate_one_no_write(&ctx);
>          break;
> -    case EMUL_KIND_SET_CONTEXT:
> -        ctx.set_context = 1;
> -        /* Intentional fall-through. */
> -    default:
> +    case EMUL_KIND_SET_CONTEXT_DATA:
> +        ctx.set_context_data = 1;
> +        rc = hvm_emulate_one(&ctx);
> +        break;
> +    case EMUL_KIND_SET_CONTEXT_INSN:
> +        ctx.set_context_insn = 1;
>          rc = hvm_emulate_one(&ctx);
> +        break;
> +    case EMUL_KIND_NORMAL:
> +        rc = hvm_emulate_one(&ctx);
> +        break;
> +    default:
> +        return;
>      }
>  
>      switch ( rc )
> @@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
>      hvmemul_ctxt->ctxt.force_writeback = 1;
>      hvmemul_ctxt->seg_reg_accessed = 0;
>      hvmemul_ctxt->seg_reg_dirty = 0;
> -    hvmemul_ctxt->set_context = 0;
> +    hvmemul_ctxt->set_context_data = 0;
> +    hvmemul_ctxt->set_context_insn = 0;
>      hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
>      hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
>  }
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index ca96643..7462794 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
>  
>              if ( v->arch.vm_event->emulate_flags &
>                   VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> -                kind = EMUL_KIND_SET_CONTEXT;
> +                kind = EMUL_KIND_SET_CONTEXT_DATA;
>              else if ( v->arch.vm_event->emulate_flags &
>                        VM_EVENT_FLAG_EMULATE_NOWRITE )
>                  kind = EMUL_KIND_NOWRITE;
> +            else if ( v->arch.vm_event->emulate_flags &
> +                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
> +                kind = EMUL_KIND_SET_CONTEXT_INSN;
>  
> -            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
> -                                       HVM_DELIVER_NO_ERROR_CODE);
> +            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
> +                                     HVM_DELIVER_NO_ERROR_CODE);
>  
>              v->arch.vm_event->emulate_flags = 0;
>          }
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 2759e6f..d214716 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -57,6 +57,7 @@
>  #include <asm/altp2m.h>
>  #include <asm/event.h>
>  #include <asm/monitor.h>
> +#include <asm/vm_event.h>
>  #include <public/arch-x86/cpuid.h>
>  
>  static bool_t __initdata opt_force_ept;
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index 343b9c8..03beed3 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>          if ( p2m_mem_access_emulate_check(v, rsp) )
>          {
>              if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> -                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
> +                v->arch.vm_event->emul_read = rsp->data.emul.read;
>  
>              v->arch.vm_event->emulate_flags = rsp->flags;
>          }
>          break;
> +    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
> +        {
> +            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
> +            v->arch.vm_event->emulate_flags = rsp->flags;
> +        }
> +        break;
>      default:
>          break;
>      };
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 907ab40..d8ee7f3 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>           * In some cases the response type needs extra handling, so here
>           * we call the appropriate handlers.
>           */
> -
>          /* Check flags which apply only when the vCPU is paused */
>          if ( atomic_read(&v->vm_event_pause_count) )
>          {
> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
> index 3aabcbe..b52f99e 100644
> --- a/xen/include/asm-x86/hvm/emulate.h
> +++ b/xen/include/asm-x86/hvm/emulate.h
> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>  
>      uint32_t intr_shadow;
>  
> -    bool_t set_context;
> +    bool_t set_context_data;
> +    bool_t set_context_insn;
>  };
>  
>  enum emul_kind {
>      EMUL_KIND_NORMAL,
>      EMUL_KIND_NOWRITE,
> -    EMUL_KIND_SET_CONTEXT
> +    EMUL_KIND_SET_CONTEXT_DATA,
> +    EMUL_KIND_SET_CONTEXT_INSN

Since this breaks compilation of existing clients, I think we should
increment some macro to allow for compile-time switching (maybe
VM_EVENT_INTERFACE_VERSION?).


Thanks,
Razvan


* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
  2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
  2016-09-14  7:49 ` [PATCH 1/2] vm_event: Sanitize vm_event response handling Razvan Cojocaru
@ 2016-09-14  9:33 ` Julien Grall
  2016-09-14 15:14   ` Tamas K Lengyel
  2016-09-14 13:38 ` George Dunlap
  3 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2016-09-14  9:33 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini,
	Razvan Cojocaru, Jan Beulich

Hello Tamas,

On 13/09/16 19:12, Tamas K Lengyel wrote:
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53c4d78..5e9bc54 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -121,10 +121,10 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>
>  static inline
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>                                    const vm_event_response_t *rsp)

s/bool_t/bool/ and please indent properly the second line.

Regards,

-- 
Julien Grall


* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2016-09-14  9:33 ` Julien Grall
@ 2016-09-14 13:38 ` George Dunlap
  2016-09-14 15:11   ` Tamas K Lengyel
  3 siblings, 1 reply; 21+ messages in thread
From: George Dunlap @ 2016-09-14 13:38 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Jan Beulich

On 13/09/16 19:12, Tamas K Lengyel wrote:
> Setting response flags in vm_event are only ever safe if the vCPUs are paused.
> To reflect this we move all checks within the if block that already checks
> whether this is the case. Checks that are only supported on one architecture
> we relocate the bitmask operations to the arch-specific handlers to avoid
> the overhead on architectures that don't support it.
> 
> Furthermore, we clean-up the emulation checks so it more clearly represents the
> decision-logic when emulation should take place. As part of this we also
> set the stage to allow emulation in response to other types of events, not just
> mem_access violations.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Tamas,

Would you like a detailed review from me for this?  I'm happy to ack the
p2m bits on the basis that they're only touching mem_access code.  A
full review may get stuck behind a pretty long review backlog. :-(

 -George

> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/x86/mm/p2m.c          | 81 +++++++++++++++++++-----------------------
>  xen/arch/x86/vm_event.c        | 35 +++++++++++++++++-
>  xen/common/vm_event.c          | 53 ++++++++++++++-------------
>  xen/include/asm-arm/p2m.h      |  4 +--
>  xen/include/asm-arm/vm_event.h |  9 ++++-
>  xen/include/asm-x86/p2m.h      |  4 +--
>  xen/include/asm-x86/vm_event.h |  5 ++-
>  xen/include/xen/mem_access.h   | 12 -------
>  8 files changed, 113 insertions(+), 90 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7d14c3b..6c01868 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>      }
>  }
>  
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
> +                                    const vm_event_response_t *rsp)
>  {
> -    /* Mark vcpu for skipping one instruction upon rescheduling. */
> -    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
> -    {
> -        xenmem_access_t access;
> -        bool_t violation = 1;
> -        const struct vm_event_mem_access *data = &rsp->u.mem_access;
> +    xenmem_access_t access;
> +    bool_t violation = 1;
> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
>  
> -        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    {
> +        switch ( access )
>          {
> -            switch ( access )
> -            {
> -            case XENMEM_access_n:
> -            case XENMEM_access_n2rwx:
> -            default:
> -                violation = data->flags & MEM_ACCESS_RWX;
> -                break;
> +        case XENMEM_access_n:
> +        case XENMEM_access_n2rwx:
> +        default:
> +            violation = data->flags & MEM_ACCESS_RWX;
> +            break;
>  
> -            case XENMEM_access_r:
> -                violation = data->flags & MEM_ACCESS_WX;
> -                break;
> +        case XENMEM_access_r:
> +            violation = data->flags & MEM_ACCESS_WX;
> +            break;
>  
> -            case XENMEM_access_w:
> -                violation = data->flags & MEM_ACCESS_RX;
> -                break;
> +        case XENMEM_access_w:
> +            violation = data->flags & MEM_ACCESS_RX;
> +            break;
>  
> -            case XENMEM_access_x:
> -                violation = data->flags & MEM_ACCESS_RW;
> -                break;
> +        case XENMEM_access_x:
> +            violation = data->flags & MEM_ACCESS_RW;
> +            break;
>  
> -            case XENMEM_access_rx:
> -            case XENMEM_access_rx2rw:
> -                violation = data->flags & MEM_ACCESS_W;
> -                break;
> +        case XENMEM_access_rx:
> +        case XENMEM_access_rx2rw:
> +            violation = data->flags & MEM_ACCESS_W;
> +            break;
>  
> -            case XENMEM_access_wx:
> -                violation = data->flags & MEM_ACCESS_R;
> -                break;
> +        case XENMEM_access_wx:
> +            violation = data->flags & MEM_ACCESS_R;
> +            break;
>  
> -            case XENMEM_access_rw:
> -                violation = data->flags & MEM_ACCESS_X;
> -                break;
> +        case XENMEM_access_rw:
> +            violation = data->flags & MEM_ACCESS_X;
> +            break;
>  
> -            case XENMEM_access_rwx:
> -                violation = 0;
> -                break;
> -            }
> +        case XENMEM_access_rwx:
> +            violation = 0;
> +            break;
>          }
> -
> -        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
> -
> -        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
> -            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>      }
> +
> +    return violation;
>  }
>  
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index e938ca3..343b9c8 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -18,6 +18,7 @@
>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <asm/p2m.h>
>  #include <asm/vm_event.h>
>  
>  /* Implicitly serialized by the domctl lock. */
> @@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
>      d->arch.mem_access_emulate_each_rep = 0;
>  }
>  
> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                vm_event_response_t *rsp)
>  {
> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
> +        return;
> +
>      if ( !is_hvm_domain(d) )
>          return;
>  
> @@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
>      req->data.regs.x86.cs_arbytes = seg.attr.bytes;
>  }
>  
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
> +    {
> +        v->arch.vm_event->emulate_flags = 0;
> +        return;
> +    }
> +
> +    switch ( rsp->reason )
> +    {
> +    case VM_EVENT_REASON_MEM_ACCESS:
> +        /*
> +         * Emulate iff this is a response to a mem_access violation and there
> +         * are still conflicting mem_access permissions in-place.
> +         */
> +        if ( p2m_mem_access_emulate_check(v, rsp) )
> +        {
> +            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> +                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
> +
> +            v->arch.vm_event->emulate_flags = rsp->flags;
> +        }
> +        break;
> +    default:
> +        break;
> +    };
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 8398af7..907ab40 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>           * In some cases the response type needs extra handling, so here
>           * we call the appropriate handlers.
>           */
> -        switch ( rsp.reason )
> -        {
> -#ifdef CONFIG_X86
> -        case VM_EVENT_REASON_MOV_TO_MSR:
> -#endif
> -        case VM_EVENT_REASON_WRITE_CTRLREG:
> -            vm_event_register_write_resume(v, &rsp);
> -            break;
> -
> -#ifdef CONFIG_HAS_MEM_ACCESS
> -        case VM_EVENT_REASON_MEM_ACCESS:
> -            mem_access_resume(v, &rsp);
> -            break;
> -#endif
>  
> +        /* Check flags which apply only when the vCPU is paused */
> +        if ( atomic_read(&v->vm_event_pause_count) )
> +        {
>  #ifdef CONFIG_HAS_MEM_PAGING
> -        case VM_EVENT_REASON_MEM_PAGING:
> -            p2m_mem_paging_resume(d, &rsp);
> -            break;
> +            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
> +                p2m_mem_paging_resume(d, &rsp);
>  #endif
>  
> -        };
> +            /*
> +             * Check emulation flags in the arch-specific handler only, as it
> +             * has to set arch-specific flags when supported, and to avoid
> +             * bitmask overhead when it isn't supported.
> +             */
> +            vm_event_emulate_check(v, &rsp);
> +
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_register_write_resume(v, &rsp);
>  
> -        /* Check for altp2m switch */
> -        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> -            p2m_altp2m_check(v, rsp.altp2m_idx);
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_toggle_singlestep(d, v, &rsp);
> +
> +            /* Check for altp2m switch */
> +            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> +                p2m_altp2m_check(v, rsp.altp2m_idx);
>  
> -        /* Check flags which apply only when the vCPU is paused */
> -        if ( atomic_read(&v->vm_event_pause_count) )
> -        {
>              if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
>                  vm_event_set_registers(v, &rsp);
>  
> -            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
> -                vm_event_toggle_singlestep(d, v);
> -
>              if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
>                  vm_event_vcpu_unpause(v);
>          }
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53c4d78..5e9bc54 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -121,10 +121,10 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>                                    const vm_event_response_t *rsp)
>  {
> -    /* Not supported on ARM. */
> +    return false;
>  }
>  
>  static inline
> diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
> index 9482636..66f2474 100644
> --- a/xen/include/asm-arm/vm_event.h
> +++ b/xen/include/asm-arm/vm_event.h
> @@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
>      memset(&d->monitor, 0, sizeof(d->monitor));
>  }
>  
> -static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                              vm_event_response_t *rsp)
>  {
>      /* Not supported on ARM. */
>  }
> @@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
>      /* Not supported on ARM. */
>  }
>  
> +static inline
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    /* Not supported on ARM. */
> +}
> +
>  #endif /* __ASM_ARM_VM_EVENT_H__ */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 9fc9ead..1897def 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -677,8 +677,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>  
>  /* Check for emulation and mark vcpu for skipping one instruction
>   * upon rescheduling if required. */
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp);
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
> +                                    const vm_event_response_t *rsp);
>  
>  /* Sanity check for mem_access hardware support */
>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
> diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
> index 294def6..ebb5d88 100644
> --- a/xen/include/asm-x86/vm_event.h
> +++ b/xen/include/asm-x86/vm_event.h
> @@ -35,8 +35,11 @@ int vm_event_init_domain(struct domain *d);
>  
>  void vm_event_cleanup_domain(struct domain *d);
>  
> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v);
> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                vm_event_response_t *rsp);
>  
>  void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
>  
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
> +
>  #endif /* __ASM_X86_VM_EVENT_H__ */
> diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
> index 3d054e0..da36e07 100644
> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -30,12 +30,6 @@
>  int mem_access_memop(unsigned long cmd,
>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
>  
> -static inline
> -void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
> -{
> -    p2m_mem_access_emulate_check(v, rsp);
> -}
> -
>  #else
>  
>  static inline
> @@ -45,12 +39,6 @@ int mem_access_memop(unsigned long cmd,
>      return -ENOSYS;
>  }
>  
> -static inline
> -void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
> -{
> -    /* Nothing to do. */
> -}
> -
>  #endif /* HAS_MEM_ACCESS */
>  
>  #endif /* _XEN_ASM_MEM_ACCESS_H */
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-14 13:38 ` George Dunlap
@ 2016-09-14 15:11   ` Tamas K Lengyel
  2016-09-15 10:35     ` George Dunlap
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-14 15:11 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Jan Beulich, xen-devel

On Wed, Sep 14, 2016 at 7:38 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> On 13/09/16 19:12, Tamas K Lengyel wrote:
>> Setting response flags in vm_event are only ever safe if the vCPUs are paused.
>> To reflect this we move all checks within the if block that already checks
>> whether this is the case. Checks that are only supported on one architecture
>> we relocate the bitmask operations to the arch-specific handlers to avoid
>> the overhead on architectures that don't support it.
>>
>> Furthermore, we clean-up the emulation checks so it more clearly represents the
>> decision-logic when emulation should take place. As part of this we also
>> set the stage to allow emulation in response to other types of events, not just
>> mem_access violations.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>
> Tamas,
>
> Would you like a detailed review from me for this?  I'm happy to ack the
> p2m bits on the basis that they're only touching mem_access code.  A
> full review may get stuck behind a pretty long review backlog. :-(
>
>  -George

Hi George,
acking just the p2m bits should suffice!

Thanks,
Tamas

>> ---
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/x86/mm/p2m.c          | 81 +++++++++++++++++++-----------------------
>>  xen/arch/x86/vm_event.c        | 35 +++++++++++++++++-
>>  xen/common/vm_event.c          | 53 ++++++++++++++-------------
>>  xen/include/asm-arm/p2m.h      |  4 +--
>>  xen/include/asm-arm/vm_event.h |  9 ++++-
>>  xen/include/asm-x86/p2m.h      |  4 +--
>>  xen/include/asm-x86/vm_event.h |  5 ++-
>>  xen/include/xen/mem_access.h   | 12 -------
>>  8 files changed, 113 insertions(+), 90 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 7d14c3b..6c01868 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>>      }
>>  }
>>
>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>> -                                  const vm_event_response_t *rsp)
>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>> +                                    const vm_event_response_t *rsp)
>>  {
>> -    /* Mark vcpu for skipping one instruction upon rescheduling. */
>> -    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
>> -    {
>> -        xenmem_access_t access;
>> -        bool_t violation = 1;
>> -        const struct vm_event_mem_access *data = &rsp->u.mem_access;
>> +    xenmem_access_t access;
>> +    bool_t violation = 1;
>> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
>>
>> -        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
>> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
>> +    {
>> +        switch ( access )
>>          {
>> -            switch ( access )
>> -            {
>> -            case XENMEM_access_n:
>> -            case XENMEM_access_n2rwx:
>> -            default:
>> -                violation = data->flags & MEM_ACCESS_RWX;
>> -                break;
>> +        case XENMEM_access_n:
>> +        case XENMEM_access_n2rwx:
>> +        default:
>> +            violation = data->flags & MEM_ACCESS_RWX;
>> +            break;
>>
>> -            case XENMEM_access_r:
>> -                violation = data->flags & MEM_ACCESS_WX;
>> -                break;
>> +        case XENMEM_access_r:
>> +            violation = data->flags & MEM_ACCESS_WX;
>> +            break;
>>
>> -            case XENMEM_access_w:
>> -                violation = data->flags & MEM_ACCESS_RX;
>> -                break;
>> +        case XENMEM_access_w:
>> +            violation = data->flags & MEM_ACCESS_RX;
>> +            break;
>>
>> -            case XENMEM_access_x:
>> -                violation = data->flags & MEM_ACCESS_RW;
>> -                break;
>> +        case XENMEM_access_x:
>> +            violation = data->flags & MEM_ACCESS_RW;
>> +            break;
>>
>> -            case XENMEM_access_rx:
>> -            case XENMEM_access_rx2rw:
>> -                violation = data->flags & MEM_ACCESS_W;
>> -                break;
>> +        case XENMEM_access_rx:
>> +        case XENMEM_access_rx2rw:
>> +            violation = data->flags & MEM_ACCESS_W;
>> +            break;
>>
>> -            case XENMEM_access_wx:
>> -                violation = data->flags & MEM_ACCESS_R;
>> -                break;
>> +        case XENMEM_access_wx:
>> +            violation = data->flags & MEM_ACCESS_R;
>> +            break;
>>
>> -            case XENMEM_access_rw:
>> -                violation = data->flags & MEM_ACCESS_X;
>> -                break;
>> +        case XENMEM_access_rw:
>> +            violation = data->flags & MEM_ACCESS_X;
>> +            break;
>>
>> -            case XENMEM_access_rwx:
>> -                violation = 0;
>> -                break;
>> -            }
>> +        case XENMEM_access_rwx:
>> +            violation = 0;
>> +            break;
>>          }
>> -
>> -        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
>> -
>> -        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
>> -            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>>      }
>> +
>> +    return violation;
>>  }
>>
>>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>> index e938ca3..343b9c8 100644
>> --- a/xen/arch/x86/vm_event.c
>> +++ b/xen/arch/x86/vm_event.c
>> @@ -18,6 +18,7 @@
>>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>>   */
>>
>> +#include <asm/p2m.h>
>>  #include <asm/vm_event.h>
>>
>>  /* Implicitly serialized by the domctl lock. */
>> @@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
>>      d->arch.mem_access_emulate_each_rep = 0;
>>  }
>>
>> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
>> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
>> +                                vm_event_response_t *rsp)
>>  {
>> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
>> +        return;
>> +
>>      if ( !is_hvm_domain(d) )
>>          return;
>>
>> @@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
>>      req->data.regs.x86.cs_arbytes = seg.attr.bytes;
>>  }
>>
>> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>> +{
>> +    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
>> +    {
>> +        v->arch.vm_event->emulate_flags = 0;
>> +        return;
>> +    }
>> +
>> +    switch ( rsp->reason )
>> +    {
>> +    case VM_EVENT_REASON_MEM_ACCESS:
>> +        /*
>> +         * Emulate iff this is a response to a mem_access violation and there
>> +         * are still conflicting mem_access permissions in-place.
>> +         */
>> +        if ( p2m_mem_access_emulate_check(v, rsp) )
>> +        {
>> +            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>> +                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>> +
>> +            v->arch.vm_event->emulate_flags = rsp->flags;
>> +        }
>> +        break;
>> +    default:
>> +        break;
>> +    };
>> +}
>> +
>>  /*
>>   * Local variables:
>>   * mode: C
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index 8398af7..907ab40 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>           * In some cases the response type needs extra handling, so here
>>           * we call the appropriate handlers.
>>           */
>> -        switch ( rsp.reason )
>> -        {
>> -#ifdef CONFIG_X86
>> -        case VM_EVENT_REASON_MOV_TO_MSR:
>> -#endif
>> -        case VM_EVENT_REASON_WRITE_CTRLREG:
>> -            vm_event_register_write_resume(v, &rsp);
>> -            break;
>> -
>> -#ifdef CONFIG_HAS_MEM_ACCESS
>> -        case VM_EVENT_REASON_MEM_ACCESS:
>> -            mem_access_resume(v, &rsp);
>> -            break;
>> -#endif
>>
>> +        /* Check flags which apply only when the vCPU is paused */
>> +        if ( atomic_read(&v->vm_event_pause_count) )
>> +        {
>>  #ifdef CONFIG_HAS_MEM_PAGING
>> -        case VM_EVENT_REASON_MEM_PAGING:
>> -            p2m_mem_paging_resume(d, &rsp);
>> -            break;
>> +            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
>> +                p2m_mem_paging_resume(d, &rsp);
>>  #endif
>>
>> -        };
>> +            /*
>> +             * Check emulation flags in the arch-specific handler only, as it
>> +             * has to set arch-specific flags when supported, and to avoid
>> +             * bitmask overhead when it isn't supported.
>> +             */
>> +            vm_event_emulate_check(v, &rsp);
>> +
>> +            /*
>> +             * Check in arch-specific handler to avoid bitmask overhead when
>> +             * not supported.
>> +             */
>> +            vm_event_register_write_resume(v, &rsp);
>>
>> -        /* Check for altp2m switch */
>> -        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
>> -            p2m_altp2m_check(v, rsp.altp2m_idx);
>> +            /*
>> +             * Check in arch-specific handler to avoid bitmask overhead when
>> +             * not supported.
>> +             */
>> +            vm_event_toggle_singlestep(d, v, &rsp);
>> +
>> +            /* Check for altp2m switch */
>> +            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
>> +                p2m_altp2m_check(v, rsp.altp2m_idx);
>>
>> -        /* Check flags which apply only when the vCPU is paused */
>> -        if ( atomic_read(&v->vm_event_pause_count) )
>> -        {
>>              if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
>>                  vm_event_set_registers(v, &rsp);
>>
>> -            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
>> -                vm_event_toggle_singlestep(d, v);
>> -
>>              if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
>>                  vm_event_vcpu_unpause(v);
>>          }
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 53c4d78..5e9bc54 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -121,10 +121,10 @@ typedef enum {
>>                               p2m_to_mask(p2m_map_foreign)))
>>
>>  static inline
>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>>                                    const vm_event_response_t *rsp)
>>  {
>> -    /* Not supported on ARM. */
>> +    return false;
>>  }
>>
>>  static inline
>> diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
>> index 9482636..66f2474 100644
>> --- a/xen/include/asm-arm/vm_event.h
>> +++ b/xen/include/asm-arm/vm_event.h
>> @@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
>>      memset(&d->monitor, 0, sizeof(d->monitor));
>>  }
>>
>> -static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
>> +static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
>> +                                              vm_event_response_t *rsp)
>>  {
>>      /* Not supported on ARM. */
>>  }
>> @@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
>>      /* Not supported on ARM. */
>>  }
>>
>> +static inline
>> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>> +{
>> +    /* Not supported on ARM. */
>> +}
>> +
>>  #endif /* __ASM_ARM_VM_EVENT_H__ */
>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>> index 9fc9ead..1897def 100644
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -677,8 +677,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>
>>  /* Check for emulation and mark vcpu for skipping one instruction
>>   * upon rescheduling if required. */
>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>> -                                  const vm_event_response_t *rsp);
>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>> +                                    const vm_event_response_t *rsp);
>>
>>  /* Sanity check for mem_access hardware support */
>>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
>> diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
>> index 294def6..ebb5d88 100644
>> --- a/xen/include/asm-x86/vm_event.h
>> +++ b/xen/include/asm-x86/vm_event.h
>> @@ -35,8 +35,11 @@ int vm_event_init_domain(struct domain *d);
>>
>>  void vm_event_cleanup_domain(struct domain *d);
>>
>> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v);
>> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
>> +                                vm_event_response_t *rsp);
>>
>>  void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
>>
>> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
>> +
>>  #endif /* __ASM_X86_VM_EVENT_H__ */
>> diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
>> index 3d054e0..da36e07 100644
>> --- a/xen/include/xen/mem_access.h
>> +++ b/xen/include/xen/mem_access.h
>> @@ -30,12 +30,6 @@
>>  int mem_access_memop(unsigned long cmd,
>>                       XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
>>
>> -static inline
>> -void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
>> -{
>> -    p2m_mem_access_emulate_check(v, rsp);
>> -}
>> -
>>  #else
>>
>>  static inline
>> @@ -45,12 +39,6 @@ int mem_access_memop(unsigned long cmd,
>>      return -ENOSYS;
>>  }
>>
>> -static inline
>> -void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
>> -{
>> -    /* Nothing to do. */
>> -}
>> -
>>  #endif /* HAS_MEM_ACCESS */
>>
>>  #endif /* _XEN_ASM_MEM_ACCESS_H */
>>
>


* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-14  9:33 ` Julien Grall
@ 2016-09-14 15:14   ` Tamas K Lengyel
  2016-09-14 15:15     ` Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-14 15:14 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Jan Beulich, xen-devel

On Wed, Sep 14, 2016 at 3:33 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Tamas,
>
> On 13/09/16 19:12, Tamas K Lengyel wrote:
>>
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 53c4d78..5e9bc54 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -121,10 +121,10 @@ typedef enum {
>>                               p2m_to_mask(p2m_map_foreign)))
>>
>>  static inline
>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>>                                    const vm_event_response_t *rsp)
>
>
> s/bool_t/bool/ and please indent properly the second line.

Fine by me but bool_t is used throughout p2m.h and I see no use of
just bool. Is there actually any difference between the two to warrant
enforcing one over the other?

>
> Please indent properly the second line.

Ack.

>
> Regards,
>
> --
> Julien Grall

Thanks,
Tamas


* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-14 15:14   ` Tamas K Lengyel
@ 2016-09-14 15:15     ` Julien Grall
  2016-09-14 15:19       ` Tamas K Lengyel
  0 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2016-09-14 15:15 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Jan Beulich, xen-devel



On 14/09/16 16:14, Tamas K Lengyel wrote:
> On Wed, Sep 14, 2016 at 3:33 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hello Tamas,
>>
>> On 13/09/16 19:12, Tamas K Lengyel wrote:
>>>
>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>> index 53c4d78..5e9bc54 100644
>>> --- a/xen/include/asm-arm/p2m.h
>>> +++ b/xen/include/asm-arm/p2m.h
>>> @@ -121,10 +121,10 @@ typedef enum {
>>>                               p2m_to_mask(p2m_map_foreign)))
>>>
>>>  static inline
>>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>>>                                    const vm_event_response_t *rsp)
>>
>>
>> s/bool_t/bool/ and please indent properly the second line.
>
> Fine by me but bool_t is used throughout p2m.h and I see no use of
> just bool. Is there actually any difference between the two to warrant
> enforcing one over the other?

bool_t has been turned into an alias of bool recently. Moving all the 
source code from one to the other is a long task (very similar to the 
mfn/gfn typesafe conversion).

However, new code should use bool and not bool_t.

Regards,

-- 
Julien Grall


* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-14 15:15     ` Julien Grall
@ 2016-09-14 15:19       ` Tamas K Lengyel
  0 siblings, 0 replies; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-14 15:19 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Jan Beulich, xen-devel

On Wed, Sep 14, 2016 at 9:15 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 14/09/16 16:14, Tamas K Lengyel wrote:
>>
>> On Wed, Sep 14, 2016 at 3:33 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> Hello Tamas,
>>>
>>> On 13/09/16 19:12, Tamas K Lengyel wrote:
>>>>
>>>>
>>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>>> index 53c4d78..5e9bc54 100644
>>>> --- a/xen/include/asm-arm/p2m.h
>>>> +++ b/xen/include/asm-arm/p2m.h
>>>> @@ -121,10 +121,10 @@ typedef enum {
>>>>                               p2m_to_mask(p2m_map_foreign)))
>>>>
>>>>  static inline
>>>> -void p2m_mem_access_emulate_check(struct vcpu *v,
>>>> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>>>>                                    const vm_event_response_t *rsp)
>>>
>>>
>>>
>>> s/bool_t/bool/ and please indent properly the second line.
>>
>>
>> Fine by me but bool_t is used throughout p2m.h and I see no use of
>> just bool. Is there actually any difference between the two to warrant
>> enforcing one over the other?
>
>
> bool_t has been turned into an alias to bool recently. Moving all the source
> code from one to another is a long task (very similar to mfn/gfn typesafe).
>
> However, new code should use bool and not bool_t.

Ack.

Tamas


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
  2016-09-14  7:58   ` Razvan Cojocaru
@ 2016-09-14 15:55   ` Jan Beulich
  2016-09-14 16:20     ` Tamas K Lengyel
  1 sibling, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2016-09-14 15:55 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

>>> On 13.09.16 at 20:12, <tamas.lengyel@zentific.com> wrote:
> When emulating instructions the emulator maintains a small i-cache fetched
> from the guest memory. This patch extends the vm_event interface to allow
> returning this i-cache via the vm_event response instead.

> I guess I'm slightly confused: Isn't the purpose to have the monitoring
app _write_ to the cache maintained in Xen? Or else, who is
"emulator" here? If that's the app, then I think subject and description
for hypervisor patches would better be written taking the perspective
of the hypervisor, especially when using potentially ambiguous terms.

> Note: This patch only has been compile-tested.

This certainly needs to change before this can be committed.

> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>          pfec |= PFEC_user_mode;
>  
>      hvmemul_ctxt->insn_buf_eip = regs->eip;
> -    if ( !vio->mmio_insn_bytes )
> +
> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
> +    {
> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
> +               hvmemul_ctxt->insn_buf_bytes);
> +    }
> +    else if ( !vio->mmio_insn_bytes )
>      {
>          hvmemul_ctxt->insn_buf_bytes =
>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:



> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>      case EMUL_KIND_NOWRITE:
>          rc = hvm_emulate_one_no_write(&ctx);
>          break;
> -    case EMUL_KIND_SET_CONTEXT:
> -        ctx.set_context = 1;
> -        /* Intentional fall-through. */
> -    default:
> +    case EMUL_KIND_SET_CONTEXT_DATA:
> +        ctx.set_context_data = 1;
> +        rc = hvm_emulate_one(&ctx);
> +        break;
> +    case EMUL_KIND_SET_CONTEXT_INSN:
> +        ctx.set_context_insn = 1;
>          rc = hvm_emulate_one(&ctx);
> +        break;
> +    case EMUL_KIND_NORMAL:
> +        rc = hvm_emulate_one(&ctx);
> +        break;

One of the former two surely can be made fall through into the latter?

> +    default:
> +        return;

Why don't you retain the previous default handling?

> --- a/xen/include/asm-x86/hvm/emulate.h
> +++ b/xen/include/asm-x86/hvm/emulate.h
> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>  
>      uint32_t intr_shadow;
>  
> -    bool_t set_context;
> +    bool_t set_context_data;
> +    bool_t set_context_insn;

As you have been told by others on patch 1 already - please take
the opportunity to switch to plain bool.

> --- a/xen/include/asm-x86/vm_event.h
> +++ b/xen/include/asm-x86/vm_event.h
> @@ -27,7 +27,8 @@
>   */
>  struct arch_vm_event {
>      uint32_t emulate_flags;
> -    struct vm_event_emul_read_data emul_read_data;
> +    struct vm_event_emul_read_data emul_read;
> +    struct vm_event_emul_insn_data emul_insn;

Why don't these get put in a union, when ...

> --- a/xen/include/public/vm_event.h
> +++ b/xen/include/public/vm_event.h
> @@ -97,6 +97,13 @@
>   * Requires the vCPU to be paused already (synchronous events only).
>   */
>  #define VM_EVENT_FLAG_SET_REGISTERS      (1 << 8)
> +/*
> + * Instruction cache is being sent back to the hypervisor in the event response
> + * to be used by the emulator. This flag is only useful when combined with
> + * VM_EVENT_FLAG_EMULATE and is incompatible with also setting
> + * VM_EVENT_FLAG_EMULATE_NOWRITE or VM_EVENT_FLAG_SET_EMUL_READ_DATA.
> + */
> +#define VM_EVENT_FLAG_SET_EMUL_INSN_DATA (1 << 9)

> ... place these restrictions and use a union in the public header?

> @@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
>      uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
>  };
>  
> +struct vm_event_emul_insn_data {
> +    uint8_t data[16]; /* Has to be completely filled */
> +};

Any reason for the 16 (rather than the architectural 15) here?

Jan



* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-14 15:55   ` Jan Beulich
@ 2016-09-14 16:20     ` Tamas K Lengyel
  2016-09-15  5:58       ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-14 16:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On Wed, Sep 14, 2016 at 9:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.09.16 at 20:12, <tamas.lengyel@zentific.com> wrote:
>> When emulating instructions the emulator maintains a small i-cache fetched
>> from the guest memory. This patch extends the vm_event interface to allow
>> returning this i-cache via the vm_event response instead.
>
> I guess I'm slightly confused: Isn't the purpose to have the monitoring
> app _write_ to the cache maintained in Xen? Or else, who is
> "emulator" here? If that's the app, then I think subject and description
> for hypervisor patches would better be written taking the perspective
> of the hypervisor, especially when using potentially ambiguous terms.

Well, I thought it was obvious that the built-in emulator in Xen is
what this patch is referring to. Otherwise none of this makes sense.

>
>> Note: This patch only has been compile-tested.
>
> This certainly needs to change before this can be committed.

I know, it's in progress. That's why I put this note here, so we don't
get too hasty about merging it.

>> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>          pfec |= PFEC_user_mode;
>>
>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>> -    if ( !vio->mmio_insn_bytes )
>> +
>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>> +    {
>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
>> +               hvmemul_ctxt->insn_buf_bytes);
>> +    }
>> +    else if ( !vio->mmio_insn_bytes )
>>      {
>>          hvmemul_ctxt->insn_buf_bytes =
>>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>
>
>
>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>      case EMUL_KIND_NOWRITE:
>>          rc = hvm_emulate_one_no_write(&ctx);
>>          break;
>> -    case EMUL_KIND_SET_CONTEXT:
>> -        ctx.set_context = 1;
>> -        /* Intentional fall-through. */
>> -    default:
>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>> +        ctx.set_context_data = 1;
>> +        rc = hvm_emulate_one(&ctx);
>> +        break;
>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>> +        ctx.set_context_insn = 1;
>>          rc = hvm_emulate_one(&ctx);
>> +        break;
>> +    case EMUL_KIND_NORMAL:
>> +        rc = hvm_emulate_one(&ctx);
>> +        break;
>
> One of the former two surely can be made fall through into the latter?

That's what I did before by turning this into if-elses, and you
requested to go back to a switch. With a switch, though, I'd rather
make the cases explicit, as a fall-through just makes it harder to read
for no real benefit.

>
>> +    default:
>> +        return;
>
> Why don't you retain the previous default handling?

The default label is unused and this makes it more apparent when
reading the code.

>
>> --- a/xen/include/asm-x86/hvm/emulate.h
>> +++ b/xen/include/asm-x86/hvm/emulate.h
>> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>>
>>      uint32_t intr_shadow;
>>
>> -    bool_t set_context;
>> +    bool_t set_context_data;
>> +    bool_t set_context_insn;
>
> As you have been told by others on patch 1 already - please take
> the opportunity to switch to plain bool.

Ack.

>
>> --- a/xen/include/asm-x86/vm_event.h
>> +++ b/xen/include/asm-x86/vm_event.h
>> @@ -27,7 +27,8 @@
>>   */
>>  struct arch_vm_event {
>>      uint32_t emulate_flags;
>> -    struct vm_event_emul_read_data emul_read_data;
>> +    struct vm_event_emul_read_data emul_read;
>> +    struct vm_event_emul_insn_data emul_insn;
>
> Why don't these get put in a union, when ...
>
>> --- a/xen/include/public/vm_event.h
>> +++ b/xen/include/public/vm_event.h
>> @@ -97,6 +97,13 @@
>>   * Requires the vCPU to be paused already (synchronous events only).
>>   */
>>  #define VM_EVENT_FLAG_SET_REGISTERS      (1 << 8)
>> +/*
>> + * Instruction cache is being sent back to the hypervisor in the event response
>> + * to be used by the emulator. This flag is only useful when combined with
>> + * VM_EVENT_FLAG_EMULATE and is incompatible with also setting
>> + * VM_EVENT_FLAG_EMULATE_NOWRITE or VM_EVENT_FLAG_SET_EMUL_READ_DATA.
>> + */
>> +#define VM_EVENT_FLAG_SET_EMUL_INSN_DATA (1 << 9)
>
> ... place these restrictions and use a union in the public header?

True, they can be put into a union there as well.

>
>> @@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
>>      uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
>>  };
>>
>> +struct vm_event_emul_insn_data {
>> +    uint8_t data[16]; /* Has to be completely filled */
>> +};
>
> Any reason for the 16 (rather than the architectural 15) here?

Yes, see the definition in include/asm-x86/hvm/emulate.h:

struct hvm_emulate_ctxt {
    struct x86_emulate_ctxt ctxt;

    /* Cache of 16 bytes of instruction. */
    uint8_t insn_buf[16];

Tamas


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-14 16:20     ` Tamas K Lengyel
@ 2016-09-15  5:58       ` Jan Beulich
  2016-09-15 15:27         ` Tamas K Lengyel
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2016-09-15  5:58 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

>>> On 14.09.16 at 18:20, <tamas.lengyel@zentific.com> wrote:
> On Wed, Sep 14, 2016 at 9:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.09.16 at 20:12, <tamas.lengyel@zentific.com> wrote:
>>> When emulating instructions the emulator maintains a small i-cache fetched
>>> from the guest memory. This patch extends the vm_event interface to allow
>>> returning this i-cache via the vm_event response instead.
>>
>> I guess I'm slightly confused: Isn't the purpose to have the monitoring
>> app _write_ to the cache maintained in Xen? Or else, who is
>> "emulator" here? If that's the app, then I think subject and description
>> for hypervisor patches would better be written taking the perspective
>> of the hypervisor, especially when using potentially ambiguous terms.
> 
> Well, I thought it was obvious that the built-in emulator in Xen is
> what this patch is referring to. Otherwise none of this makes sense.

It would have been if the sentence didn't say "returning". The
internal emulator's cache gets effectively overwritten, not
returned to anything (unless I'm still misunderstanding something).

>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>      case EMUL_KIND_NOWRITE:
>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>          break;
>>> -    case EMUL_KIND_SET_CONTEXT:
>>> -        ctx.set_context = 1;
>>> -        /* Intentional fall-through. */
>>> -    default:
>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>> +        ctx.set_context_data = 1;
>>> +        rc = hvm_emulate_one(&ctx);
>>> +        break;
>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>> +        ctx.set_context_insn = 1;
>>>          rc = hvm_emulate_one(&ctx);
>>> +        break;
>>> +    case EMUL_KIND_NORMAL:
>>> +        rc = hvm_emulate_one(&ctx);
>>> +        break;
>>
>> One of the former two surely can be made fall through into the latter?
> 
> That's what I did before by turning this into if-else's, and you
> requested to go back to a switch. With a switch, though, I'd rather
> make the cases explicit, since a fall-through just makes it harder
> to read for no real benefit.

I disagree.

>>> +    default:
>>> +        return;
>>
>> Why don't you retain the previous default handling?
> 
> The default label is unused and this makes it more apparent when
> reading the code.

Just like before, imo there shouldn't be any EMUL_KIND_NORMAL
case; that should be the default label instead.

>>> @@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
>>>      uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
>>>  };
>>>
>>> +struct vm_event_emul_insn_data {
>>> +    uint8_t data[16]; /* Has to be completely filled */
>>> +};
>>
>> Any reason for the 16 (rather than the architectural 15) here?
> 
> Yes, see the definition in include/asm-x86/hvm/emulate.h:
> 
> struct hvm_emulate_ctxt {
>     struct x86_emulate_ctxt ctxt;
> 
>     /* Cache of 16 bytes of instruction. */
>     uint8_t insn_buf[16];

Ah, I didn't recall we have an oversized cache there too. But such a
connection would better be documented by a BUILD_BUG_ON()
comparing the two sizeof()s.
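A rough shape for that check (the macro is a minimal stand-in in the spirit of Xen's BUILD_BUG_ON, and the ctxt fragment is a placeholder, not the real `struct hvm_emulate_ctxt`):

```c
#include <stdint.h>

/* Minimal BUILD_BUG_ON in the spirit of Xen's: the array size goes
 * negative, i.e. compilation fails, when the condition is true. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

struct vm_event_emul_insn_data {
    uint8_t data[16]; /* Has to be completely filled */
};

/* Stand-in fragment of struct hvm_emulate_ctxt. */
struct hvm_emulate_ctxt_fragment {
    /* Cache of 16 bytes of instruction. */
    uint8_t insn_buf[16];
};

/* Documents the size coupling: if either buffer is ever resized on
 * its own, the build breaks here instead of truncating at runtime. */
static inline void vm_event_insn_size_check(void)
{
    BUILD_BUG_ON(sizeof(struct vm_event_emul_insn_data) !=
                 sizeof(((struct hvm_emulate_ctxt_fragment *)0)->insn_buf));
}
```

In the real tree the comparison would of course be against `insn_buf` in `struct hvm_emulate_ctxt` itself, so a change to either side fails the build rather than silently diverging.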

Jan



* Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
  2016-09-14 15:11   ` Tamas K Lengyel
@ 2016-09-15 10:35     ` George Dunlap
  0 siblings, 0 replies; 21+ messages in thread
From: George Dunlap @ 2016-09-15 10:35 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Jan Beulich, xen-devel

On 14/09/16 16:11, Tamas K Lengyel wrote:
> On Wed, Sep 14, 2016 at 7:38 AM, George Dunlap <george.dunlap@citrix.com> wrote:
>> On 13/09/16 19:12, Tamas K Lengyel wrote:
>>> Setting response flags in vm_event are only ever safe if the vCPUs are paused.
>>> To reflect this we move all checks within the if block that already checks
>>> whether this is the case. Checks that are only supported on one architecture
>>> we relocate the bitmask operations to the arch-specific handlers to avoid
>>> the overhead on architectures that don't support it.
>>>
>>> Furthermore, we clean-up the emulation checks so it more clearly represents the
>>> decision-logic when emulation should take place. As part of this we also
>>> set the stage to allow emulation in response to other types of events, not just
>>> mem_access violations.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>
>> Tamas,
>>
>> Would you like a detailed review from me for this?  I'm happy to ack the
>> p2m bits on the basis that they're only touching mem_access code.  A
>> full review may get stuck behind a pretty long review backlog. :-(
>>
>>  -George
> 
> Hi George,
> acking just the p2m bits should suffice!

I should have given that in the first e-mail really. :-)

p2m bits:

Acked-by: George Dunlap <george.dunlap@citrix.com>



* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-15  5:58       ` Jan Beulich
@ 2016-09-15 15:27         ` Tamas K Lengyel
  2016-09-15 16:09           ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-15 15:27 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On Wed, Sep 14, 2016 at 11:58 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 14.09.16 at 18:20, <tamas.lengyel@zentific.com> wrote:
>> On Wed, Sep 14, 2016 at 9:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 13.09.16 at 20:12, <tamas.lengyel@zentific.com> wrote:
>>>> When emulating instructions the emulator maintains a small i-cache fetched
>>>> from the guest memory. This patch extends the vm_event interface to allow
>>>> returning this i-cache via the vm_event response instead.
>>>
>>> I guess I'm slightly confused: Isn't the purpose to have the monitoring
>>> app _write_ to the cache maintained in Xen? Or else, who is
>>> "emulator" here? If that's the app, then I think subject and description
>>> for hypervisor patches would better be written taking the perspective
>>> of the hypervisor, especially when using potentially ambiguous terms.
>>
>> Well, I thought it was obvious that the built-in emulator in Xen is
>> what this patch is referring to. Otherwise none of this makes sense.
>
> It would have been if the sentence didn't say "returning". The
> internal emulator's cache gets effectively overwritten, not
> returned to anything (unless I'm still misunderstanding something).

It's "returning" the i-cache in the sense that it's part of a vm_event response.

>
>>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>      case EMUL_KIND_NOWRITE:
>>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>>          break;
>>>> -    case EMUL_KIND_SET_CONTEXT:
>>>> -        ctx.set_context = 1;
>>>> -        /* Intentional fall-through. */
>>>> -    default:
>>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>>> +        ctx.set_context_data = 1;
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>>> +        ctx.set_context_insn = 1;
>>>>          rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_NORMAL:
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>
>>> One of the former two surely can be made fall through into the latter?
>>
>> That's what I did before by turning this into if-else's, and you
>> requested to go back to a switch. With a switch, though, I'd rather
>> make the cases explicit, since a fall-through just makes it harder
>> to read for no real benefit.
>
> I disagree.

OK, I don't really care about it too much, so if you feel that strongly
about it then fine.

>
>>>> +    default:
>>>> +        return;
>>>
>>> Why don't you retain the previous default handling?
>>
>> The default label is unused and this makes it more apparent when
>> reading the code.
>
> Just like before, imo there shouldn't be any EMUL_KIND_NORMAL
> case; that should be the default label instead.

But there is an EMUL_KIND_NORMAL case. All calls to this function must
specify a pre-defined kind; the function is never called with an
undefined kind, so the default label is really just
EMUL_KIND_NORMAL. Why is it better to keep it under the default label
instead of explicitly showing that it's actually just that one
case?

>
>>>> @@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
>>>>      uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
>>>>  };
>>>>
>>>> +struct vm_event_emul_insn_data {
>>>> +    uint8_t data[16]; /* Has to be completely filled */
>>>> +};
>>>
>>> Any reason for the 16 (rather than the architectural 15) here?
>>
>> Yes, see the definition in include/asm-x86/hvm/emulate.h:
>>
>> struct hvm_emulate_ctxt {
>>     struct x86_emulate_ctxt ctxt;
>>
>>     /* Cache of 16 bytes of instruction. */
>>     uint8_t insn_buf[16];
>
> Ah, I didn't recall we have an oversized cache there too. But such a
> connection would better be documented by a BUILD_BUG_ON()
> comparing the two sizeof()s.

Sure.

Thanks,
Tamas


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-15 15:27         ` Tamas K Lengyel
@ 2016-09-15 16:09           ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2016-09-15 16:09 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Razvan Cojocaru, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

>>> On 15.09.16 at 17:27, <tamas.lengyel@zentific.com> wrote:
> On Wed, Sep 14, 2016 at 11:58 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 14.09.16 at 18:20, <tamas.lengyel@zentific.com> wrote:
>>> On Wed, Sep 14, 2016 at 9:55 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 13.09.16 at 20:12, <tamas.lengyel@zentific.com> wrote:
>>>>> When emulating instructions the emulator maintains a small i-cache fetched
>>>>> from the guest memory. This patch extends the vm_event interface to allow
>>>>> returning this i-cache via the vm_event response instead.
>>>>
>>>> I guess I'm slightly confused: Isn't the purpose to have the monitoring
>>>> app _write_ to the cache maintained in Xen? Or else, who is
>>>> "emulator" here? If that's the app, then I think subject and description
>>>> for hypervisor patches would better be written taking the perspective
>>>> of the hypervisor, especially when using potentially ambiguous terms.
>>>
>>> Well, I thought it was obvious that the built-in emulator in Xen is
>>> what this patch is referring to. Otherwise none of this makes sense.
>>
>> It would have been if the sentence didn't say "returning". The
>> internal emulator's cache gets effectively overwritten, not
>> returned to anything (unless I'm still misunderstanding something).
> 
> It's "returning" the i-cache in the sense that it's part of a vm_event 
> response.

Well, I can only express my desire for this to get expressed in a less
confusing way. Maybe it's just me ...

>>>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>>      case EMUL_KIND_NOWRITE:
>>>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>>>          break;
>>>>> -    case EMUL_KIND_SET_CONTEXT:
>>>>> -        ctx.set_context = 1;
>>>>> -        /* Intentional fall-through. */
>>>>> -    default:
>>>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>>>> +        ctx.set_context_data = 1;
>>>>> +        rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>>>> +        ctx.set_context_insn = 1;
>>>>>          rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    case EMUL_KIND_NORMAL:
>>>>> +        rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    default:
>>>>> +        return;
>>>>
>>>> Why don't you retain the previous default handling?
>>>
>>> The default label is unused and this makes it more apparent when
>>> reading the code.
>>
>> Just like before, imo there shouldn't be any EMUL_KIND_NORMAL
>> case; that should be the default label instead.
> 
> But there is an EMUL_KIND_NORMAL case. All calls to this function must
> specify a pre-defined kind; the function is never called with an
> undefined kind, so the default label is really just
> EMUL_KIND_NORMAL. Why is it better to keep it under the default label
> instead of explicitly showing that it's actually just that one
> case?

Because changing that aspect is not the subject of your patch.

And btw - blank lines between non-fall-through cases please.
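Combining these requests (no explicit EMUL_KIND_NORMAL case, one intentional fall-through into the default, blank lines between the non-fall-through cases), the switch could be shaped roughly as below. The toy `struct ctx` and the stub entry points are placeholders, not the real hvm code:

```c
enum emul_kind {
    EMUL_KIND_NORMAL,
    EMUL_KIND_NOWRITE,
    EMUL_KIND_SET_CONTEXT_DATA,
    EMUL_KIND_SET_CONTEXT_INSN
};

/* Toy context and stubs standing in for the real emulation entry points. */
struct ctx {
    int set_context_data;
    int set_context_insn;
};

static int hvm_emulate_one(struct ctx *c)          { return 0; }
static int hvm_emulate_one_no_write(struct ctx *c) { return 1; }

static int emulate(enum emul_kind kind)
{
    struct ctx ctx = { 0, 0 };
    int rc;

    switch ( kind )
    {
    case EMUL_KIND_NOWRITE:
        rc = hvm_emulate_one_no_write(&ctx);
        break;

    case EMUL_KIND_SET_CONTEXT_DATA:
        ctx.set_context_data = 1;
        rc = hvm_emulate_one(&ctx);
        break;

    case EMUL_KIND_SET_CONTEXT_INSN:
        ctx.set_context_insn = 1;
        /* Intentional fall-through: same entry point as the default. */
    default:
        rc = hvm_emulate_one(&ctx);
        break;
    }

    return rc;
}
```

With this shape, EMUL_KIND_NORMAL (and any future kind) reaches plain hvm_emulate_one() via the default label, which matches the pre-patch default handling being asked for.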

Jan



* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-14  7:58   ` Razvan Cojocaru
@ 2016-09-15 16:36     ` Tamas K Lengyel
  2016-09-15 17:08       ` Razvan Cojocaru
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-15 16:36 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On Wed, Sep 14, 2016 at 1:58 AM, Razvan Cojocaru
<rcojocaru@bitdefender.com> wrote:
> On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
>> When emulating instructions the emulator maintains a small i-cache fetched
>> from the guest memory. This patch extends the vm_event interface to allow
>> returning this i-cache via the vm_event response instead.
>>
>> When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
>> normally has to remove the INT3 from memory - singlestep - place back INT3
>> to allow the guest to continue execution. This routine however is susceptible
>> to a race-condition on multi-vCPU guests. By allowing the subscriber to return
>> the i-cache to be used for emulation it can side-step the problem by returning
>> a clean buffer without the INT3 present.
>>
>> As part of this patch we rename hvm_mem_access_emulate_one to
>> hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
>> scenarios now, not just in response to mem_access events.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>> ---
>> Cc: Paul Durrant <paul.durrant@citrix.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>>
>> Note: This patch only has been compile-tested.
>> ---
>>  xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
>>  xen/arch/x86/hvm/hvm.c            |  9 +++++---
>>  xen/arch/x86/hvm/vmx/vmx.c        |  1 +
>>  xen/arch/x86/vm_event.c           |  9 +++++++-
>>  xen/common/vm_event.c             |  1 -
>>  xen/include/asm-x86/hvm/emulate.h |  8 ++++---
>>  xen/include/asm-x86/vm_event.h    |  3 ++-
>>  xen/include/public/vm_event.h     | 16 +++++++++++++-
>>  8 files changed, 67 insertions(+), 24 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>> index cc25676..504ed35 100644
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
>>      if ( curr->arch.vm_event )
>>      {
>>          unsigned int safe_size =
>> -            min(size, curr->arch.vm_event->emul_read_data.size);
>> +            min(size, curr->arch.vm_event->emul_read.size);
>>
>> -        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
>> +        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
>>          memset(buffer + safe_size, 0, size - safe_size);
>>          return X86EMUL_OKAY;
>>      }
>> @@ -827,7 +827,7 @@ static int hvmemul_read(
>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>
>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>          return set_context_data(p_data, bytes);
>>
>>      return __hvmemul_read(
>> @@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>
>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>      {
>>          int rc = set_context_data(p_new, bytes);
>>
>> @@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
>>      p2m_type_t p2mt;
>>      int rc;
>>
>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>          return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
>>                                              bytes_per_rep, reps, ctxt);
>>
>> @@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
>>      if ( buf == NULL )
>>          return X86EMUL_UNHANDLEABLE;
>>
>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>      {
>>          rc = set_context_data(buf, bytes);
>>
>> @@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
>>
>>      *val = 0;
>>
>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>          return set_context_data(val, bytes);
>>
>>      return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
>> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>          pfec |= PFEC_user_mode;
>>
>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>> -    if ( !vio->mmio_insn_bytes )
>> +
>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>> +    {
>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
>> +               hvmemul_ctxt->insn_buf_bytes);
>> +    }
>> +    else if ( !vio->mmio_insn_bytes )
>>      {
>>          hvmemul_ctxt->insn_buf_bytes =
>>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>> @@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>      return rc;
>>  }
>>
>> -void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>> +void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>>      unsigned int errcode)
>>  {
>>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>      case EMUL_KIND_NOWRITE:
>>          rc = hvm_emulate_one_no_write(&ctx);
>>          break;
>> -    case EMUL_KIND_SET_CONTEXT:
>> -        ctx.set_context = 1;
>> -        /* Intentional fall-through. */
>> -    default:
>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>> +        ctx.set_context_data = 1;
>> +        rc = hvm_emulate_one(&ctx);
>> +        break;
>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>> +        ctx.set_context_insn = 1;
>>          rc = hvm_emulate_one(&ctx);
>> +        break;
>> +    case EMUL_KIND_NORMAL:
>> +        rc = hvm_emulate_one(&ctx);
>> +        break;
>> +    default:
>> +        return;
>>      }
>>
>>      switch ( rc )
>> @@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
>>      hvmemul_ctxt->ctxt.force_writeback = 1;
>>      hvmemul_ctxt->seg_reg_accessed = 0;
>>      hvmemul_ctxt->seg_reg_dirty = 0;
>> -    hvmemul_ctxt->set_context = 0;
>> +    hvmemul_ctxt->set_context_data = 0;
>> +    hvmemul_ctxt->set_context_insn = 0;
>>      hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
>>      hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
>>  }
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index ca96643..7462794 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
>>
>>              if ( v->arch.vm_event->emulate_flags &
>>                   VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>> -                kind = EMUL_KIND_SET_CONTEXT;
>> +                kind = EMUL_KIND_SET_CONTEXT_DATA;
>>              else if ( v->arch.vm_event->emulate_flags &
>>                        VM_EVENT_FLAG_EMULATE_NOWRITE )
>>                  kind = EMUL_KIND_NOWRITE;
>> +            else if ( v->arch.vm_event->emulate_flags &
>> +                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>> +                kind = EMUL_KIND_SET_CONTEXT_INSN;
>>
>> -            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
>> -                                       HVM_DELIVER_NO_ERROR_CODE);
>> +            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
>> +                                     HVM_DELIVER_NO_ERROR_CODE);
>>
>>              v->arch.vm_event->emulate_flags = 0;
>>          }
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index 2759e6f..d214716 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -57,6 +57,7 @@
>>  #include <asm/altp2m.h>
>>  #include <asm/event.h>
>>  #include <asm/monitor.h>
>> +#include <asm/vm_event.h>
>>  #include <public/arch-x86/cpuid.h>
>>
>>  static bool_t __initdata opt_force_ept;
>> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>> index 343b9c8..03beed3 100644
>> --- a/xen/arch/x86/vm_event.c
>> +++ b/xen/arch/x86/vm_event.c
>> @@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>>          if ( p2m_mem_access_emulate_check(v, rsp) )
>>          {
>>              if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>> -                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>> +                v->arch.vm_event->emul_read = rsp->data.emul.read;
>>
>>              v->arch.vm_event->emulate_flags = rsp->flags;
>>          }
>>          break;
>> +    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
>> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>> +        {
>> +            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
>> +            v->arch.vm_event->emulate_flags = rsp->flags;
>> +        }
>> +        break;
>>      default:
>>          break;
>>      };
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index 907ab40..d8ee7f3 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>           * In some cases the response type needs extra handling, so here
>>           * we call the appropriate handlers.
>>           */
>> -
>>          /* Check flags which apply only when the vCPU is paused */
>>          if ( atomic_read(&v->vm_event_pause_count) )
>>          {
>> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
>> index 3aabcbe..b52f99e 100644
>> --- a/xen/include/asm-x86/hvm/emulate.h
>> +++ b/xen/include/asm-x86/hvm/emulate.h
>> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>>
>>      uint32_t intr_shadow;
>>
>> -    bool_t set_context;
>> +    bool_t set_context_data;
>> +    bool_t set_context_insn;
>>  };
>>
>>  enum emul_kind {
>>      EMUL_KIND_NORMAL,
>>      EMUL_KIND_NOWRITE,
>> -    EMUL_KIND_SET_CONTEXT
>> +    EMUL_KIND_SET_CONTEXT_DATA,
>> +    EMUL_KIND_SET_CONTEXT_INSN
>
> Since this breaks compilation of existing clients, I think we should
> increment some macro to allow for compile-time switching (maybe
> VM_EVENT_INTERFACE_VERSION?).

I'm not sure I follow - this is a Xen internal-only enumeration. What
kind of clients are you referring to?

Tamas


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-15 16:36     ` Tamas K Lengyel
@ 2016-09-15 17:08       ` Razvan Cojocaru
  2016-09-16  7:21         ` Razvan Cojocaru
  0 siblings, 1 reply; 21+ messages in thread
From: Razvan Cojocaru @ 2016-09-15 17:08 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Jun Nakajima, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jan Beulich,
	xen-devel

On 09/15/16 19:36, Tamas K Lengyel wrote:
> On Wed, Sep 14, 2016 at 1:58 AM, Razvan Cojocaru
> <rcojocaru@bitdefender.com> wrote:
>> On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
>>> When emulating instructions the emulator maintains a small i-cache fetched
>>> from the guest memory. This patch extends the vm_event interface to allow
>>> returning this i-cache via the vm_event response instead.
>>>
>>> When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
>>> normally has to remove the INT3 from memory - singlestep - place back INT3
>>> to allow the guest to continue execution. This routine however is susceptible
>>> to a race-condition on multi-vCPU guests. By allowing the subscriber to return
>>> the i-cache to be used for emulation it can side-step the problem by returning
>>> a clean buffer without the INT3 present.
>>>
>>> As part of this patch we rename hvm_mem_access_emulate_one to
>>> hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
>>> scenarios now, not just in response to mem_access events.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>> ---
>>> Cc: Paul Durrant <paul.durrant@citrix.com>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>>
>>> Note: This patch only has been compile-tested.
>>> ---
>>>  xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
>>>  xen/arch/x86/hvm/hvm.c            |  9 +++++---
>>>  xen/arch/x86/hvm/vmx/vmx.c        |  1 +
>>>  xen/arch/x86/vm_event.c           |  9 +++++++-
>>>  xen/common/vm_event.c             |  1 -
>>>  xen/include/asm-x86/hvm/emulate.h |  8 ++++---
>>>  xen/include/asm-x86/vm_event.h    |  3 ++-
>>>  xen/include/public/vm_event.h     | 16 +++++++++++++-
>>>  8 files changed, 67 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>>> index cc25676..504ed35 100644
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
>>>      if ( curr->arch.vm_event )
>>>      {
>>>          unsigned int safe_size =
>>> -            min(size, curr->arch.vm_event->emul_read_data.size);
>>> +            min(size, curr->arch.vm_event->emul_read.size);
>>>
>>> -        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
>>> +        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
>>>          memset(buffer + safe_size, 0, size - safe_size);
>>>          return X86EMUL_OKAY;
>>>      }
>>> @@ -827,7 +827,7 @@ static int hvmemul_read(
>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>
>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>          return set_context_data(p_data, bytes);
>>>
>>>      return __hvmemul_read(
>>> @@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>
>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>      {
>>>          int rc = set_context_data(p_new, bytes);
>>>
>>> @@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
>>>      p2m_type_t p2mt;
>>>      int rc;
>>>
>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>          return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
>>>                                              bytes_per_rep, reps, ctxt);
>>>
>>> @@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
>>>      if ( buf == NULL )
>>>          return X86EMUL_UNHANDLEABLE;
>>>
>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>      {
>>>          rc = set_context_data(buf, bytes);
>>>
>>> @@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
>>>
>>>      *val = 0;
>>>
>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>          return set_context_data(val, bytes);
>>>
>>>      return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
>>> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>          pfec |= PFEC_user_mode;
>>>
>>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>>> -    if ( !vio->mmio_insn_bytes )
>>> +
>>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>> +    {
>>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
>>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
>>> +               hvmemul_ctxt->insn_buf_bytes);
>>> +    }
>>> +    else if ( !vio->mmio_insn_bytes )
>>>      {
>>>          hvmemul_ctxt->insn_buf_bytes =
>>>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>>> @@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>>      return rc;
>>>  }
>>>
>>> -void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>> +void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>>>      unsigned int errcode)
>>>  {
>>>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>      case EMUL_KIND_NOWRITE:
>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>          break;
>>> -    case EMUL_KIND_SET_CONTEXT:
>>> -        ctx.set_context = 1;
>>> -        /* Intentional fall-through. */
>>> -    default:
>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>> +        ctx.set_context_data = 1;
>>> +        rc = hvm_emulate_one(&ctx);
>>> +        break;
>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>> +        ctx.set_context_insn = 1;
>>>          rc = hvm_emulate_one(&ctx);
>>> +        break;
>>> +    case EMUL_KIND_NORMAL:
>>> +        rc = hvm_emulate_one(&ctx);
>>> +        break;
>>> +    default:
>>> +        return;
>>>      }
>>>
>>>      switch ( rc )
>>> @@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
>>>      hvmemul_ctxt->ctxt.force_writeback = 1;
>>>      hvmemul_ctxt->seg_reg_accessed = 0;
>>>      hvmemul_ctxt->seg_reg_dirty = 0;
>>> -    hvmemul_ctxt->set_context = 0;
>>> +    hvmemul_ctxt->set_context_data = 0;
>>> +    hvmemul_ctxt->set_context_insn = 0;
>>>      hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
>>>      hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
>>>  }
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index ca96643..7462794 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
>>>
>>>              if ( v->arch.vm_event->emulate_flags &
>>>                   VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>> -                kind = EMUL_KIND_SET_CONTEXT;
>>> +                kind = EMUL_KIND_SET_CONTEXT_DATA;
>>>              else if ( v->arch.vm_event->emulate_flags &
>>>                        VM_EVENT_FLAG_EMULATE_NOWRITE )
>>>                  kind = EMUL_KIND_NOWRITE;
>>> +            else if ( v->arch.vm_event->emulate_flags &
>>> +                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>> +                kind = EMUL_KIND_SET_CONTEXT_INSN;
>>>
>>> -            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
>>> -                                       HVM_DELIVER_NO_ERROR_CODE);
>>> +            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
>>> +                                     HVM_DELIVER_NO_ERROR_CODE);
>>>
>>>              v->arch.vm_event->emulate_flags = 0;
>>>          }
>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>> index 2759e6f..d214716 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -57,6 +57,7 @@
>>>  #include <asm/altp2m.h>
>>>  #include <asm/event.h>
>>>  #include <asm/monitor.h>
>>> +#include <asm/vm_event.h>
>>>  #include <public/arch-x86/cpuid.h>
>>>
>>>  static bool_t __initdata opt_force_ept;
>>> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>>> index 343b9c8..03beed3 100644
>>> --- a/xen/arch/x86/vm_event.c
>>> +++ b/xen/arch/x86/vm_event.c
>>> @@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>>>          if ( p2m_mem_access_emulate_check(v, rsp) )
>>>          {
>>>              if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>> -                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>>> +                v->arch.vm_event->emul_read = rsp->data.emul.read;
>>>
>>>              v->arch.vm_event->emulate_flags = rsp->flags;
>>>          }
>>>          break;
>>> +    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
>>> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>> +        {
>>> +            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
>>> +            v->arch.vm_event->emulate_flags = rsp->flags;
>>> +        }
>>> +        break;
>>>      default:
>>>          break;
>>>      };
>>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>> index 907ab40..d8ee7f3 100644
>>> --- a/xen/common/vm_event.c
>>> +++ b/xen/common/vm_event.c
>>> @@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>>           * In some cases the response type needs extra handling, so here
>>>           * we call the appropriate handlers.
>>>           */
>>> -
>>>          /* Check flags which apply only when the vCPU is paused */
>>>          if ( atomic_read(&v->vm_event_pause_count) )
>>>          {
>>> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
>>> index 3aabcbe..b52f99e 100644
>>> --- a/xen/include/asm-x86/hvm/emulate.h
>>> +++ b/xen/include/asm-x86/hvm/emulate.h
>>> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>>>
>>>      uint32_t intr_shadow;
>>>
>>> -    bool_t set_context;
>>> +    bool_t set_context_data;
>>> +    bool_t set_context_insn;
>>>  };
>>>
>>>  enum emul_kind {
>>>      EMUL_KIND_NORMAL,
>>>      EMUL_KIND_NOWRITE,
>>> -    EMUL_KIND_SET_CONTEXT
>>> +    EMUL_KIND_SET_CONTEXT_DATA,
>>> +    EMUL_KIND_SET_CONTEXT_INSN
>>
>> Since this breaks compilation of existing clients, I think we should
>> increment some macro to allow for compile-time switching (maybe
>> VM_EVENT_INTERFACE_VERSION?).
> 
> I'm not sure I follow - this is a Xen internal-only enumeration. What
> kind-of clients are you referring to?

Nevermind, I thought these changes propagate to the toolstack headers.
Sorry for the confusion.


Thanks,
Razvan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-15 17:08       ` Razvan Cojocaru
@ 2016-09-16  7:21         ` Razvan Cojocaru
  2016-09-16 15:37           ` Tamas K Lengyel
  0 siblings, 1 reply; 21+ messages in thread
From: Razvan Cojocaru @ 2016-09-16  7:21 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On 09/15/16 20:08, Razvan Cojocaru wrote:
> On 09/15/16 19:36, Tamas K Lengyel wrote:
>> On Wed, Sep 14, 2016 at 1:58 AM, Razvan Cojocaru
>> <rcojocaru@bitdefender.com> wrote:
>>> On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
>>>> When emulating instructions the emulator maintains a small i-cache fetched
>>>> from the guest memory. This patch extends the vm_event interface to allow
>>>> returning this i-cache via the vm_event response instead.
>>>>
>>>> When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
>>>> normally has to remove the INT3 from memory - singlestep - place back INT3
>>>> to allow the guest to continue execution. This routine however is susceptible
>>>> to a race-condition on multi-vCPU guests. By allowing the subscriber to return
>>>> the i-cache to be used for emulation it can side-step the problem by returning
>>>> a clean buffer without the INT3 present.
>>>>
>>>> As part of this patch we rename hvm_mem_access_emulate_one to
>>>> hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
>>>> scenarios now, not just in response to mem_access events.
>>>>
>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>>> ---
>>>> Cc: Paul Durrant <paul.durrant@citrix.com>
>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>>
>>>> Note: This patch only has been compile-tested.
>>>> ---
>>>>  xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
>>>>  xen/arch/x86/hvm/hvm.c            |  9 +++++---
>>>>  xen/arch/x86/hvm/vmx/vmx.c        |  1 +
>>>>  xen/arch/x86/vm_event.c           |  9 +++++++-
>>>>  xen/common/vm_event.c             |  1 -
>>>>  xen/include/asm-x86/hvm/emulate.h |  8 ++++---
>>>>  xen/include/asm-x86/vm_event.h    |  3 ++-
>>>>  xen/include/public/vm_event.h     | 16 +++++++++++++-
>>>>  8 files changed, 67 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>>>> index cc25676..504ed35 100644
>>>> --- a/xen/arch/x86/hvm/emulate.c
>>>> +++ b/xen/arch/x86/hvm/emulate.c
>>>> @@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
>>>>      if ( curr->arch.vm_event )
>>>>      {
>>>>          unsigned int safe_size =
>>>> -            min(size, curr->arch.vm_event->emul_read_data.size);
>>>> +            min(size, curr->arch.vm_event->emul_read.size);
>>>>
>>>> -        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
>>>> +        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
>>>>          memset(buffer + safe_size, 0, size - safe_size);
>>>>          return X86EMUL_OKAY;
>>>>      }
>>>> @@ -827,7 +827,7 @@ static int hvmemul_read(
>>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>>
>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>          return set_context_data(p_data, bytes);
>>>>
>>>>      return __hvmemul_read(
>>>> @@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
>>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>>
>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>      {
>>>>          int rc = set_context_data(p_new, bytes);
>>>>
>>>> @@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
>>>>      p2m_type_t p2mt;
>>>>      int rc;
>>>>
>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>          return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
>>>>                                              bytes_per_rep, reps, ctxt);
>>>>
>>>> @@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
>>>>      if ( buf == NULL )
>>>>          return X86EMUL_UNHANDLEABLE;
>>>>
>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>      {
>>>>          rc = set_context_data(buf, bytes);
>>>>
>>>> @@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
>>>>
>>>>      *val = 0;
>>>>
>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>          return set_context_data(val, bytes);
>>>>
>>>>      return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
>>>> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>>          pfec |= PFEC_user_mode;
>>>>
>>>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>>>> -    if ( !vio->mmio_insn_bytes )
>>>> +
>>>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>>> +    {
>>>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
>>>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
>>>> +               hvmemul_ctxt->insn_buf_bytes);
>>>> +    }
>>>> +    else if ( !vio->mmio_insn_bytes )
>>>>      {
>>>>          hvmemul_ctxt->insn_buf_bytes =
>>>>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>>>> @@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>>>      return rc;
>>>>  }
>>>>
>>>> -void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>> +void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>>>>      unsigned int errcode)
>>>>  {
>>>>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>      case EMUL_KIND_NOWRITE:
>>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>>          break;
>>>> -    case EMUL_KIND_SET_CONTEXT:
>>>> -        ctx.set_context = 1;
>>>> -        /* Intentional fall-through. */
>>>> -    default:
>>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>>> +        ctx.set_context_data = 1;
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>>> +        ctx.set_context_insn = 1;
>>>>          rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_NORMAL:
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    default:
>>>> +        return;
>>>>      }
>>>>
>>>>      switch ( rc )
>>>> @@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
>>>>      hvmemul_ctxt->ctxt.force_writeback = 1;
>>>>      hvmemul_ctxt->seg_reg_accessed = 0;
>>>>      hvmemul_ctxt->seg_reg_dirty = 0;
>>>> -    hvmemul_ctxt->set_context = 0;
>>>> +    hvmemul_ctxt->set_context_data = 0;
>>>> +    hvmemul_ctxt->set_context_insn = 0;
>>>>      hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
>>>>      hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
>>>>  }
>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>> index ca96643..7462794 100644
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
>>>>
>>>>              if ( v->arch.vm_event->emulate_flags &
>>>>                   VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>>> -                kind = EMUL_KIND_SET_CONTEXT;
>>>> +                kind = EMUL_KIND_SET_CONTEXT_DATA;
>>>>              else if ( v->arch.vm_event->emulate_flags &
>>>>                        VM_EVENT_FLAG_EMULATE_NOWRITE )
>>>>                  kind = EMUL_KIND_NOWRITE;
>>>> +            else if ( v->arch.vm_event->emulate_flags &
>>>> +                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>>> +                kind = EMUL_KIND_SET_CONTEXT_INSN;
>>>>
>>>> -            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
>>>> -                                       HVM_DELIVER_NO_ERROR_CODE);
>>>> +            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
>>>> +                                     HVM_DELIVER_NO_ERROR_CODE);
>>>>
>>>>              v->arch.vm_event->emulate_flags = 0;
>>>>          }
>>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>>> index 2759e6f..d214716 100644
>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>> @@ -57,6 +57,7 @@
>>>>  #include <asm/altp2m.h>
>>>>  #include <asm/event.h>
>>>>  #include <asm/monitor.h>
>>>> +#include <asm/vm_event.h>
>>>>  #include <public/arch-x86/cpuid.h>
>>>>
>>>>  static bool_t __initdata opt_force_ept;
>>>> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>>>> index 343b9c8..03beed3 100644
>>>> --- a/xen/arch/x86/vm_event.c
>>>> +++ b/xen/arch/x86/vm_event.c
>>>> @@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>>>>          if ( p2m_mem_access_emulate_check(v, rsp) )
>>>>          {
>>>>              if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>>> -                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>>>> +                v->arch.vm_event->emul_read = rsp->data.emul.read;
>>>>
>>>>              v->arch.vm_event->emulate_flags = rsp->flags;
>>>>          }
>>>>          break;
>>>> +    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
>>>> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>>> +        {
>>>> +            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
>>>> +            v->arch.vm_event->emulate_flags = rsp->flags;
>>>> +        }
>>>> +        break;
>>>>      default:
>>>>          break;
>>>>      };
>>>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>>> index 907ab40..d8ee7f3 100644
>>>> --- a/xen/common/vm_event.c
>>>> +++ b/xen/common/vm_event.c
>>>> @@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>>>           * In some cases the response type needs extra handling, so here
>>>>           * we call the appropriate handlers.
>>>>           */
>>>> -
>>>>          /* Check flags which apply only when the vCPU is paused */
>>>>          if ( atomic_read(&v->vm_event_pause_count) )
>>>>          {
>>>> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
>>>> index 3aabcbe..b52f99e 100644
>>>> --- a/xen/include/asm-x86/hvm/emulate.h
>>>> +++ b/xen/include/asm-x86/hvm/emulate.h
>>>> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>>>>
>>>>      uint32_t intr_shadow;
>>>>
>>>> -    bool_t set_context;
>>>> +    bool_t set_context_data;
>>>> +    bool_t set_context_insn;
>>>>  };
>>>>
>>>>  enum emul_kind {
>>>>      EMUL_KIND_NORMAL,
>>>>      EMUL_KIND_NOWRITE,
>>>> -    EMUL_KIND_SET_CONTEXT
>>>> +    EMUL_KIND_SET_CONTEXT_DATA,
>>>> +    EMUL_KIND_SET_CONTEXT_INSN
>>>
>>> Since this breaks compilation of existing clients, I think we should
>>> increment some macro to allow for compile-time switching (maybe
>>> VM_EVENT_INTERFACE_VERSION?).
>>
>> I'm not sure I follow - this is a Xen internal-only enumeration. What
>> kind-of clients are you referring to?
> 
> Nevermind, I thought these changes propagate to the toolstack headers.
> Sorry for the confusion.

On second thought (and a night's rest), the problem is real. I've made
it a point to test the patches today before re-reviewing them, and this
happened:

bdvmixeneventmanager.cpp:359:46: error: ‘union vm_event_st::<anonymous>’
has no member named ‘emul_read_data’
      uint32_t rspDataSize = sizeof( rsp.data.emul_read_data.data );
                                              ^
bdvmixeneventmanager.cpp:386:44: error: ‘union vm_event_st::<anonymous>’
has no member named ‘emul_read_data’
                           action, rsp.data.emul_read_data.data,
rspDataSize,
                                            ^
bdvmixeneventmanager.cpp:389:16: error: ‘union vm_event_st::<anonymous>’
has no member named ‘emul_read_data’
       rsp.data.emul_read_data.size = rspDataSize;
                ^
make: *** [bdvmixeneventmanager.o] Error 1

So, as you can see, existing clients really don't compile without
modification.


Thanks,
Razvan


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-16  7:21         ` Razvan Cojocaru
@ 2016-09-16 15:37           ` Tamas K Lengyel
  2016-09-16 16:09             ` Razvan Cojocaru
  0 siblings, 1 reply; 21+ messages in thread
From: Tamas K Lengyel @ 2016-09-16 15:37 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On Fri, Sep 16, 2016 at 1:21 AM, Razvan Cojocaru
<rcojocaru@bitdefender.com> wrote:
> On 09/15/16 20:08, Razvan Cojocaru wrote:
>> On 09/15/16 19:36, Tamas K Lengyel wrote:
>>> On Wed, Sep 14, 2016 at 1:58 AM, Razvan Cojocaru
>>> <rcojocaru@bitdefender.com> wrote:
>>>> On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
>>>>> When emulating instructions the emulator maintains a small i-cache fetched
>>>>> from the guest memory. This patch extends the vm_event interface to allow
>>>>> returning this i-cache via the vm_event response instead.
>>>>>
>>>>> When responding to a SOFTWARE_BREAKPOINT event (INT3) the monitor subscriber
>>>>> normally has to remove the INT3 from memory - singlestep - place back INT3
>>>>> to allow the guest to continue execution. This routine however is susceptible
>>>>> to a race-condition on multi-vCPU guests. By allowing the subscriber to return
>>>>> the i-cache to be used for emulation it can side-step the problem by returning
>>>>> a clean buffer without the INT3 present.
>>>>>
>>>>> As part of this patch we rename hvm_mem_access_emulate_one to
>>>>> hvm_emulate_one_vm_event to better reflect that it is used in various vm_event
>>>>> scenarios now, not just in response to mem_access events.
>>>>>
>>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>>>> ---
>>>>> Cc: Paul Durrant <paul.durrant@citrix.com>
>>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>>>
>>>>> Note: This patch only has been compile-tested.
>>>>> ---
>>>>>  xen/arch/x86/hvm/emulate.c        | 44 ++++++++++++++++++++++++++-------------
>>>>>  xen/arch/x86/hvm/hvm.c            |  9 +++++---
>>>>>  xen/arch/x86/hvm/vmx/vmx.c        |  1 +
>>>>>  xen/arch/x86/vm_event.c           |  9 +++++++-
>>>>>  xen/common/vm_event.c             |  1 -
>>>>>  xen/include/asm-x86/hvm/emulate.h |  8 ++++---
>>>>>  xen/include/asm-x86/vm_event.h    |  3 ++-
>>>>>  xen/include/public/vm_event.h     | 16 +++++++++++++-
>>>>>  8 files changed, 67 insertions(+), 24 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>>>>> index cc25676..504ed35 100644
>>>>> --- a/xen/arch/x86/hvm/emulate.c
>>>>> +++ b/xen/arch/x86/hvm/emulate.c
>>>>> @@ -76,9 +76,9 @@ static int set_context_data(void *buffer, unsigned int size)
>>>>>      if ( curr->arch.vm_event )
>>>>>      {
>>>>>          unsigned int safe_size =
>>>>> -            min(size, curr->arch.vm_event->emul_read_data.size);
>>>>> +            min(size, curr->arch.vm_event->emul_read.size);
>>>>>
>>>>> -        memcpy(buffer, curr->arch.vm_event->emul_read_data.data, safe_size);
>>>>> +        memcpy(buffer, curr->arch.vm_event->emul_read.data, safe_size);
>>>>>          memset(buffer + safe_size, 0, size - safe_size);
>>>>>          return X86EMUL_OKAY;
>>>>>      }
>>>>> @@ -827,7 +827,7 @@ static int hvmemul_read(
>>>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>>>
>>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>>          return set_context_data(p_data, bytes);
>>>>>
>>>>>      return __hvmemul_read(
>>>>> @@ -1029,7 +1029,7 @@ static int hvmemul_cmpxchg(
>>>>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>>>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>>>>
>>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>>      {
>>>>>          int rc = set_context_data(p_new, bytes);
>>>>>
>>>>> @@ -1122,7 +1122,7 @@ static int hvmemul_rep_outs(
>>>>>      p2m_type_t p2mt;
>>>>>      int rc;
>>>>>
>>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>>          return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
>>>>>                                              bytes_per_rep, reps, ctxt);
>>>>>
>>>>> @@ -1264,7 +1264,7 @@ static int hvmemul_rep_movs(
>>>>>      if ( buf == NULL )
>>>>>          return X86EMUL_UNHANDLEABLE;
>>>>>
>>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>>      {
>>>>>          rc = set_context_data(buf, bytes);
>>>>>
>>>>> @@ -1470,7 +1470,7 @@ static int hvmemul_read_io(
>>>>>
>>>>>      *val = 0;
>>>>>
>>>>> -    if ( unlikely(hvmemul_ctxt->set_context) )
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_data) )
>>>>>          return set_context_data(val, bytes);
>>>>>
>>>>>      return hvmemul_do_pio_buffer(port, bytes, IOREQ_READ, val);
>>>>> @@ -1793,7 +1793,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>>>          pfec |= PFEC_user_mode;
>>>>>
>>>>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>>>>> -    if ( !vio->mmio_insn_bytes )
>>>>> +
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>>>> +    {
>>>>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul_insn);
>>>>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul_insn,
>>>>> +               hvmemul_ctxt->insn_buf_bytes);
>>>>> +    }
>>>>> +    else if ( !vio->mmio_insn_bytes )
>>>>>      {
>>>>>          hvmemul_ctxt->insn_buf_bytes =
>>>>>              hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
>>>>> @@ -1931,7 +1938,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>>>>      return rc;
>>>>>  }
>>>>>
>>>>> -void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>> +void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>>>>>      unsigned int errcode)
>>>>>  {
>>>>>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>>>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>>      case EMUL_KIND_NOWRITE:
>>>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>>>          break;
>>>>> -    case EMUL_KIND_SET_CONTEXT:
>>>>> -        ctx.set_context = 1;
>>>>> -        /* Intentional fall-through. */
>>>>> -    default:
>>>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>>>> +        ctx.set_context_data = 1;
>>>>> +        rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>>>> +        ctx.set_context_insn = 1;
>>>>>          rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    case EMUL_KIND_NORMAL:
>>>>> +        rc = hvm_emulate_one(&ctx);
>>>>> +        break;
>>>>> +    default:
>>>>> +        return;
>>>>>      }
>>>>>
>>>>>      switch ( rc )
>>>>> @@ -1983,7 +1998,8 @@ void hvm_emulate_prepare(
>>>>>      hvmemul_ctxt->ctxt.force_writeback = 1;
>>>>>      hvmemul_ctxt->seg_reg_accessed = 0;
>>>>>      hvmemul_ctxt->seg_reg_dirty = 0;
>>>>> -    hvmemul_ctxt->set_context = 0;
>>>>> +    hvmemul_ctxt->set_context_data = 0;
>>>>> +    hvmemul_ctxt->set_context_insn = 0;
>>>>>      hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt);
>>>>>      hvmemul_get_seg_reg(x86_seg_ss, hvmemul_ctxt);
>>>>>  }
>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>> index ca96643..7462794 100644
>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>> @@ -489,13 +489,16 @@ void hvm_do_resume(struct vcpu *v)
>>>>>
>>>>>              if ( v->arch.vm_event->emulate_flags &
>>>>>                   VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>>>> -                kind = EMUL_KIND_SET_CONTEXT;
>>>>> +                kind = EMUL_KIND_SET_CONTEXT_DATA;
>>>>>              else if ( v->arch.vm_event->emulate_flags &
>>>>>                        VM_EVENT_FLAG_EMULATE_NOWRITE )
>>>>>                  kind = EMUL_KIND_NOWRITE;
>>>>> +            else if ( v->arch.vm_event->emulate_flags &
>>>>> +                 VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>>>> +                kind = EMUL_KIND_SET_CONTEXT_INSN;
>>>>>
>>>>> -            hvm_mem_access_emulate_one(kind, TRAP_invalid_op,
>>>>> -                                       HVM_DELIVER_NO_ERROR_CODE);
>>>>> +            hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
>>>>> +                                     HVM_DELIVER_NO_ERROR_CODE);
>>>>>
>>>>>              v->arch.vm_event->emulate_flags = 0;
>>>>>          }
>>>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>>>> index 2759e6f..d214716 100644
>>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>>> @@ -57,6 +57,7 @@
>>>>>  #include <asm/altp2m.h>
>>>>>  #include <asm/event.h>
>>>>>  #include <asm/monitor.h>
>>>>> +#include <asm/vm_event.h>
>>>>>  #include <public/arch-x86/cpuid.h>
>>>>>
>>>>>  static bool_t __initdata opt_force_ept;
>>>>> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
>>>>> index 343b9c8..03beed3 100644
>>>>> --- a/xen/arch/x86/vm_event.c
>>>>> +++ b/xen/arch/x86/vm_event.c
>>>>> @@ -209,11 +209,18 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
>>>>>          if ( p2m_mem_access_emulate_check(v, rsp) )
>>>>>          {
>>>>>              if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
>>>>> -                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>>>>> +                v->arch.vm_event->emul_read = rsp->data.emul.read;
>>>>>
>>>>>              v->arch.vm_event->emulate_flags = rsp->flags;
>>>>>          }
>>>>>          break;
>>>>> +    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
>>>>> +        if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA )
>>>>> +        {
>>>>> +            v->arch.vm_event->emul_insn = rsp->data.emul.insn;
>>>>> +            v->arch.vm_event->emulate_flags = rsp->flags;
>>>>> +        }
>>>>> +        break;
>>>>>      default:
>>>>>          break;
>>>>>      };
>>>>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>>>>> index 907ab40..d8ee7f3 100644
>>>>> --- a/xen/common/vm_event.c
>>>>> +++ b/xen/common/vm_event.c
>>>>> @@ -398,7 +398,6 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>>>>           * In some cases the response type needs extra handling, so here
>>>>>           * we call the appropriate handlers.
>>>>>           */
>>>>> -
>>>>>          /* Check flags which apply only when the vCPU is paused */
>>>>>          if ( atomic_read(&v->vm_event_pause_count) )
>>>>>          {
>>>>> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
>>>>> index 3aabcbe..b52f99e 100644
>>>>> --- a/xen/include/asm-x86/hvm/emulate.h
>>>>> +++ b/xen/include/asm-x86/hvm/emulate.h
>>>>> @@ -34,20 +34,22 @@ struct hvm_emulate_ctxt {
>>>>>
>>>>>      uint32_t intr_shadow;
>>>>>
>>>>> -    bool_t set_context;
>>>>> +    bool_t set_context_data;
>>>>> +    bool_t set_context_insn;
>>>>>  };
>>>>>
>>>>>  enum emul_kind {
>>>>>      EMUL_KIND_NORMAL,
>>>>>      EMUL_KIND_NOWRITE,
>>>>> -    EMUL_KIND_SET_CONTEXT
>>>>> +    EMUL_KIND_SET_CONTEXT_DATA,
>>>>> +    EMUL_KIND_SET_CONTEXT_INSN
>>>>
>>>> Since this breaks compilation of existing clients, I think we should
>>>> increment some macro to allow for compile-time switching (maybe
>>>> VM_EVENT_INTERFACE_VERSION?).
>>>
>>> I'm not sure I follow - this is a Xen internal-only enumeration. What
>>> kind-of clients are you referring to?
>>
>> Nevermind, I thought these changes propagate to the toolstack headers.
>> Sorry for the confusion.
>
> On second thought (and a night's rest), the problem is real. I've made
> it a point to test the patches today before re-reviewing them, and this
> happened:
>
> bdvmixeneventmanager.cpp:359:46: error: ‘union vm_event_st::<anonymous>’
> has no member named ‘emul_read_data’
>       uint32_t rspDataSize = sizeof( rsp.data.emul_read_data.data );
>                                               ^
> bdvmixeneventmanager.cpp:386:44: error: ‘union vm_event_st::<anonymous>’
> has no member named ‘emul_read_data’
>                            action, rsp.data.emul_read_data.data,
> rspDataSize,
>                                             ^
> bdvmixeneventmanager.cpp:389:16: error: ‘union vm_event_st::<anonymous>’
> has no member named ‘emul_read_data’
>        rsp.data.emul_read_data.size = rspDataSize;
>                 ^
> make: *** [bdvmixeneventmanager.o] Error 1
>
> So, as you can see, existing clients really don't compile without
> modification.
>

Hi Razvan,
yes, for Xen 4.8 we already bumped the VM_EVENT_INTERFACE_VERSION, so
any client building on it needs to adapt accordingly. That's why we
have the versioning in place. The way we handle this in LibVMI is by
defining a local copy of all supported Xen events ABI versions and
building with that rather than with xen/vm_event.h, so we have backwards
compatibility. See
https://github.com/libvmi/libvmi/blob/master/libvmi/driver/xen/xen_events_abi.h.

Cheers,
Tamas


* Re: [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation
  2016-09-16 15:37           ` Tamas K Lengyel
@ 2016-09-16 16:09             ` Razvan Cojocaru
  0 siblings, 0 replies; 21+ messages in thread
From: Razvan Cojocaru @ 2016-09-16 16:09 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Kevin Tian, Stefano Stabellini, Jan Beulich, George Dunlap,
	Andrew Cooper, Julien Grall, Paul Durrant, Jun Nakajima,
	xen-devel

On 09/16/16 18:37, Tamas K Lengyel wrote:
>>>>> Since this breaks compilation of existing clients, I think we should
>>>>> >>>> increment some macro to allow for compile-time switching (maybe
>>>>> >>>> VM_EVENT_INTERFACE_VERSION?).
>>>> >>>
>>>> >>> I'm not sure I follow - this is a Xen internal-only enumeration. What
>>>> >>> kind-of clients are you referring to?
>>> >>
>>> >> Nevermind, I thought these changes propagate to the toolstack headers.
>>> >> Sorry for the confusion.
>> >
>> > On second thought (and a night's rest), the problem is real. I've made
>> > it a point to test the patches today before re-reviewing them, and this
>> > happened:
>> >
>> > bdvmixeneventmanager.cpp:359:46: error: ‘union vm_event_st::<anonymous>’
>> > has no member named ‘emul_read_data’
>> >       uint32_t rspDataSize = sizeof( rsp.data.emul_read_data.data );
>> >                                               ^
>> > bdvmixeneventmanager.cpp:386:44: error: ‘union vm_event_st::<anonymous>’
>> > has no member named ‘emul_read_data’
>> >                            action, rsp.data.emul_read_data.data,
>> > rspDataSize,
>> >                                             ^
>> > bdvmixeneventmanager.cpp:389:16: error: ‘union vm_event_st::<anonymous>’
>> > has no member named ‘emul_read_data’
>> >        rsp.data.emul_read_data.size = rspDataSize;
>> >                 ^
>> > make: *** [bdvmixeneventmanager.o] Error 1
>> >
>> > So, as you can see, existing clients really don't compile without
>> > modification.
>> >
> Hi Razvan,
> yes, for Xen 4.8 we already bumped the VM_EVENT_INTERFACE_VERSION, so
> any client building on it needs to adapt accordingly.

Fair enough. That's all I was asking for / about: that we should make
sure that VM_EVENT_INTERFACE_VERSION reflects these changes. If it's
already been incremented for this pending release, that's perfect.


Thanks,
Razvan



end of thread, other threads:[~2016-09-16 16:10 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
2016-09-14  7:58   ` Razvan Cojocaru
2016-09-15 16:36     ` Tamas K Lengyel
2016-09-15 17:08       ` Razvan Cojocaru
2016-09-16  7:21         ` Razvan Cojocaru
2016-09-16 15:37           ` Tamas K Lengyel
2016-09-16 16:09             ` Razvan Cojocaru
2016-09-14 15:55   ` Jan Beulich
2016-09-14 16:20     ` Tamas K Lengyel
2016-09-15  5:58       ` Jan Beulich
2016-09-15 15:27         ` Tamas K Lengyel
2016-09-15 16:09           ` Jan Beulich
2016-09-14  7:49 ` [PATCH 1/2] vm_event: Sanitize vm_event response handling Razvan Cojocaru
2016-09-14  9:33 ` Julien Grall
2016-09-14 15:14   ` Tamas K Lengyel
2016-09-14 15:15     ` Julien Grall
2016-09-14 15:19       ` Tamas K Lengyel
2016-09-14 13:38 ` George Dunlap
2016-09-14 15:11   ` Tamas K Lengyel
2016-09-15 10:35     ` George Dunlap
