* [PATCH V5 00/12] xen: Clean-up of mem_event subsystem
@ 2015-02-13 16:33 Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
                   ` (11 more replies)
  0 siblings, 12 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

This patch series aims to clean up the mem_event subsystem within Xen. The
original use-case for this system was to allow external helper applications
running in privileged domains to control various memory operations performed
by Xen. Among these were paging, sharing, and access control. The subsystem
has since been extended to also deliver non-memory related events, namely
various HVM debugging events (INT3, MTF, MOV-TO-CR, MOV-TO-MSR). The structures
and naming of related functions, however, have not caught up to these new
use-cases, leaving many ambiguities in the code. Furthermore, future
use-cases envisioned for this subsystem include PV domains and ARM domains,
thus there is a need to establish a common base to build on.

In this series we convert the mem_event system to vm_event, in which we clearly
define the scope of information that is transmitted via the event
delivery mechanism. Afterwards, we clean up the naming of the structures and
related functions to bring them in line with their actual operations.
Finally, the control of monitor events is moved to a new domctl, monitor_op.
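
The request/response layout the series converges on is sketched below for
reference (field names match patch 01's diff, which patch 04 renames to
vm_event; the union members are abbreviated here):

    typedef struct mem_event_st {
        uint32_t version;   /* MEM_EVENT_INTERFACE_VERSION */
        uint32_t flags;     /* MEM_EVENT_FLAG_* */
        uint32_t reason;    /* MEM_EVENT_REASON_*, selects the union member */
        uint32_t vcpu_id;
        union {
            struct mem_event_paging     mem_paging;
            struct mem_event_sharing    mem_sharing;
            struct mem_event_mem_access mem_access;
            /* ... mov_to_cr, mov_to_msr, debug events ... */
        } u;
        union {
            struct mem_event_regs_x86 x86;  /* arch-specific register state */
        } regs;
    } mem_event_request_t, mem_event_response_t;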

Each patch in the series has been build-tested on x86 and ARM, both with
and without XSM enabled.

This patch series is also available at:
https://github.com/tklengyel/xen/tree/mem_event_cleanup5

Tamas K Lengyel (12):
  xen/mem_event: Cleanup of mem_event structures
  xen/mem_event: Cleanup mem_event ring names and domctls
  xen/mem_paging: Convert mem_event_op to mem_paging_op
  xen: Rename mem_event to vm_event
  tools/tests: Clean-up tools/tests/xen-access
  x86/hvm: factor out and rename vm_event related functions
  xen: Introduce monitor_op domctl
  xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  xen/vm_event: Decouple vm_event and mem_access.
  xen/vm_event: Relocate memop checks
  xen/xsm: Split vm_event_op into three separate labels
  xen/vm_event: Add RESUME option to vm_event_op domctl

 MAINTAINERS                                   |   4 +-
 docs/misc/xsm-flask.txt                       |   2 +-
 tools/libxc/Makefile                          |   3 +-
 tools/libxc/include/xenctrl.h                 |  41 ++-
 tools/libxc/xc_domain_restore.c               |  14 +-
 tools/libxc/xc_domain_save.c                  |   4 +-
 tools/libxc/xc_hvm_build_x86.c                |   2 +-
 tools/libxc/xc_mem_access.c                   |  32 --
 tools/libxc/xc_mem_paging.c                   |  52 ++-
 tools/libxc/xc_memshr.c                       |  29 +-
 tools/libxc/xc_monitor.c                      | 140 ++++++++
 tools/libxc/xc_private.h                      |  15 +-
 tools/libxc/{xc_mem_event.c => xc_vm_event.c} |  59 ++-
 tools/libxc/xg_save_restore.h                 |   2 +-
 tools/tests/xen-access/xen-access.c           | 252 +++++--------
 tools/xenpaging/pagein.c                      |   2 +-
 tools/xenpaging/xenpaging.c                   | 157 ++++----
 tools/xenpaging/xenpaging.h                   |   8 +-
 xen/arch/x86/Makefile                         |   1 +
 xen/arch/x86/domain.c                         |   2 +-
 xen/arch/x86/domctl.c                         |   4 +-
 xen/arch/x86/hvm/Makefile                     |   3 +-
 xen/arch/x86/hvm/emulate.c                    |   9 +-
 xen/arch/x86/hvm/event.c                      | 190 ++++++++++
 xen/arch/x86/hvm/hvm.c                        | 192 +---------
 xen/arch/x86/hvm/vmx/vmcs.c                   |  10 +-
 xen/arch/x86/hvm/vmx/vmx.c                    |   9 +-
 xen/arch/x86/mm/hap/nested_ept.c              |   4 +-
 xen/arch/x86/mm/hap/nested_hap.c              |   4 +-
 xen/arch/x86/mm/mem_paging.c                  |  58 +--
 xen/arch/x86/mm/mem_sharing.c                 | 136 ++++---
 xen/arch/x86/mm/p2m-pod.c                     |   4 +-
 xen/arch/x86/mm/p2m-pt.c                      |   4 +-
 xen/arch/x86/mm/p2m.c                         | 269 +++++++-------
 xen/arch/x86/monitor.c                        | 210 +++++++++++
 xen/arch/x86/x86_64/compat/mm.c               |  26 +-
 xen/arch/x86/x86_64/mm.c                      |  26 +-
 xen/common/Makefile                           |  18 +-
 xen/common/domain.c                           |  12 +-
 xen/common/domctl.c                           |  17 +-
 xen/common/mem_access.c                       |  47 +--
 xen/common/{mem_event.c => vm_event.c}        | 495 ++++++++++++++------------
 xen/drivers/passthrough/pci.c                 |   2 +-
 xen/include/asm-arm/monitor.h                 |  13 +
 xen/include/asm-arm/p2m.h                     |   6 +-
 xen/include/asm-x86/domain.h                  |  32 +-
 xen/include/asm-x86/hvm/domain.h              |   1 -
 xen/include/asm-x86/hvm/emulate.h             |   2 +-
 xen/include/asm-x86/hvm/event.h               |  40 +++
 xen/include/asm-x86/hvm/hvm.h                 |  11 -
 xen/include/asm-x86/mem_paging.h              |   7 +-
 xen/include/asm-x86/mem_sharing.h             |   5 +-
 xen/include/asm-x86/monitor.h                 |  30 ++
 xen/include/asm-x86/p2m.h                     |  18 +-
 xen/include/public/domctl.h                   | 115 ++++--
 xen/include/public/hvm/params.h               |  17 +-
 xen/include/public/mem_event.h                | 134 -------
 xen/include/public/memory.h                   |  36 +-
 xen/include/public/vm_event.h                 | 222 ++++++++++++
 xen/include/xen/mem_access.h                  |  18 +-
 xen/include/xen/mem_event.h                   | 143 --------
 xen/include/xen/p2m-common.h                  |   4 +-
 xen/include/xen/sched.h                       |  28 +-
 xen/include/xen/vm_event.h                    |  88 +++++
 xen/include/xsm/dummy.h                       |  22 +-
 xen/include/xsm/xsm.h                         |  35 +-
 xen/xsm/dummy.c                               |  13 +-
 xen/xsm/flask/hooks.c                         |  66 ++--
 xen/xsm/flask/policy/access_vectors           |  12 +-
 69 files changed, 2099 insertions(+), 1589 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 rename tools/libxc/{xc_mem_event.c => xc_vm_event.c} (70%)
 create mode 100644 xen/arch/x86/hvm/event.c
 create mode 100644 xen/arch/x86/monitor.c
 rename xen/common/{mem_event.c => vm_event.c} (52%)
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/hvm/event.h
 create mode 100644 xen/include/asm-x86/monitor.h
 delete mode 100644 xen/include/public/mem_event.h
 create mode 100644 xen/include/public/vm_event.h
 delete mode 100644 xen/include/xen/mem_event.h
 create mode 100644 xen/include/xen/vm_event.h

-- 
2.1.4

* [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 17:23   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy,
	Razvan Cojocaru

The public mem_event structures, used to communicate with helper applications via
shared rings, serve several different settings. However, the member names within
these structures have not reflected this fact, resulting in the reuse of fields
to mean different things under different scenarios.

This patch remedies the issue by clearly defining the structure members based on
the actual context within which the structure is used.
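
For illustration, a helper application consuming events under the new layout
follows this pattern (a minimal sketch based on the xen-access changes below;
get_request() and the ring setup are assumed from the existing helper code):

    mem_event_request_t req;

    get_request(&mem_event, &req);              /* assumed ring helper */

    if ( req.version != MEM_EVENT_INTERFACE_VERSION )
        return -1;                              /* ABI mismatch */

    switch ( req.reason )
    {
    case MEM_EVENT_REASON_MEM_ACCESS:
        /* Context-specific data now lives in a per-reason union member. */
        printf("violation at gfn %"PRIx32" (%c%c%c)\n",
               req.u.mem_access.gfn,
               (req.u.mem_access.flags & MEM_ACCESS_R) ? 'r' : '-',
               (req.u.mem_access.flags & MEM_ACCESS_W) ? 'w' : '-',
               (req.u.mem_access.flags & MEM_ACCESS_X) ? 'x' : '-');
        break;
    }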

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
v5: Style fixes
    Convert gfn to uint32_t and define mem_access flag bits, as this now saves
        space on the ring
    Split non-mem_event flags into access/paging flags
v4: Attach mem_event version to each outgoing request directly in mem_event.
v3: Add padding to mem_event structures.
    Add version field to mem_event structures and checks for it.
---
 tools/libxc/xc_mem_event.c          |   2 +-
 tools/libxc/xc_private.h            |   2 +-
 tools/tests/xen-access/xen-access.c |  45 +++++----
 tools/xenpaging/xenpaging.c         |  51 ++++++-----
 xen/arch/x86/hvm/hvm.c              | 177 +++++++++++++++++++-----------------
 xen/arch/x86/mm/mem_sharing.c       |  16 +++-
 xen/arch/x86/mm/p2m.c               | 163 ++++++++++++++++++---------------
 xen/common/mem_access.c             |   6 ++
 xen/common/mem_event.c              |   2 +
 xen/include/public/mem_event.h      | 173 ++++++++++++++++++++++++++---------
 xen/include/public/memory.h         |  11 ++-
 11 files changed, 401 insertions(+), 247 deletions(-)

diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
index 8c0be4e..1b5f7c3 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -42,7 +42,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 
 int xc_mem_event_memop(xc_interface *xch, domid_t domain_id, 
                         unsigned int op, unsigned int mode,
-                        uint64_t gfn, void *buffer)
+                        uint32_t gfn, void *buffer)
 {
     xen_mem_event_op_t meo;
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 45b8644..bc021b8 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -427,7 +427,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
                          unsigned int mode, uint32_t *port);
 int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
                         unsigned int op, unsigned int mode,
-                        uint64_t gfn, void *buffer);
+                        uint32_t gfn, void *buffer);
 /*
  * Enables mem_event and returns the mapped ring page indicated by param.
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6cb382d..68f05db 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -551,13 +551,21 @@ int main(int argc, char *argv[])
                 continue;
             }
 
+            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
+            {
+                ERROR("Error: mem_event interface version mismatch!\n");
+                interrupted = -1;
+                continue;
+            }
+
             memset( &rsp, 0, sizeof (rsp) );
+            rsp.version = MEM_EVENT_INTERFACE_VERSION;
             rsp.vcpu_id = req.vcpu_id;
             rsp.flags = req.flags;
 
             switch (req.reason) {
-            case MEM_EVENT_REASON_VIOLATION:
-                rc = xc_get_mem_access(xch, domain_id, req.gfn, &access);
+            case MEM_EVENT_REASON_MEM_ACCESS:
+                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
                 if (rc < 0)
                 {
                     ERROR("Error %d getting mem_access event\n", rc);
@@ -565,23 +573,23 @@ int main(int argc, char *argv[])
                     continue;
                 }
 
-                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
+                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx32" (offset %06"
                        PRIx64") gla %016"PRIx64" (valid: %c; fault in gpt: %c; fault with gla: %c) (vcpu %u)\n",
-                       req.access_r ? 'r' : '-',
-                       req.access_w ? 'w' : '-',
-                       req.access_x ? 'x' : '-',
-                       req.gfn,
-                       req.offset,
-                       req.gla,
-                       req.gla_valid ? 'y' : 'n',
-                       req.fault_in_gpt ? 'y' : 'n',
-                       req.fault_with_gla ? 'y': 'n',
+                       (req.u.mem_access.flags & MEM_ACCESS_R) ? 'r' : '-',
+                       (req.u.mem_access.flags & MEM_ACCESS_W) ? 'w' : '-',
+                       (req.u.mem_access.flags & MEM_ACCESS_X) ? 'x' : '-',
+                       req.u.mem_access.gfn,
+                       req.u.mem_access.offset,
+                       req.u.mem_access.gla,
+                       (req.u.mem_access.flags & MEM_ACCESS_GLA_VALID) ? 'y' : 'n',
+                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_IN_GPT) ? 'y' : 'n',
+                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_WITH_GLA) ? 'y': 'n',
                        req.vcpu_id);
 
                 if ( default_access != after_first_access )
                 {
                     rc = xc_set_mem_access(xch, domain_id, after_first_access,
-                                           req.gfn, 1);
+                                           req.u.mem_access.gfn, 1);
                     if (rc < 0)
                     {
                         ERROR("Error %d setting gfn to access_type %d\n", rc,
@@ -592,13 +600,12 @@ int main(int argc, char *argv[])
                 }
 
 
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
+                rsp.u.mem_access.gfn = req.u.mem_access.gfn;
                 break;
-            case MEM_EVENT_REASON_INT3:
-                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n", 
-                       req.gla, 
-                       req.gfn,
+            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx32" (vcpu %d)\n",
+                       req.regs.x86.rip,
+                       req.u.software_breakpoint.gfn,
                        req.vcpu_id);
 
                 /* Reinject */
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 82c1ee4..29ca7c7 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -684,9 +684,9 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
          * This allows page-out of these gfns if the target grows again.
          */
         if (paging->num_paged_out > paging->policy_mru_size)
-            policy_notify_paged_in(rsp->gfn);
+            policy_notify_paged_in(rsp->u.mem_paging.gfn);
         else
-            policy_notify_paged_in_nomru(rsp->gfn);
+            policy_notify_paged_in_nomru(rsp->u.mem_paging.gfn);
 
        /* Record number of resumed pages */
        paging->num_paged_out--;
@@ -874,7 +874,8 @@ int main(int argc, char *argv[])
     }
     xch = paging->xc_handle;
 
-    DPRINTF("starting %s for domain_id %u with pagefile %s\n", argv[0], paging->mem_event.domain_id, filename);
+    DPRINTF("starting %s for domain_id %u with pagefile %s\n",
+            argv[0], paging->mem_event.domain_id, filename);
 
     /* ensure that if we get a signal, we'll do cleanup, then exit */
     act.sa_handler = close_handler;
@@ -910,49 +911,52 @@ int main(int argc, char *argv[])
 
             get_request(&paging->mem_event, &req);
 
-            if ( req.gfn > paging->max_pages )
+            if ( req.u.mem_paging.gfn > paging->max_pages )
             {
-                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.gfn, paging->max_pages);
+                ERROR("Requested gfn %"PRIx32" higher than max_pages %x\n",
+                      req.u.mem_paging.gfn, paging->max_pages);
                 goto out;
             }
 
             /* Check if the page has already been paged in */
-            if ( test_and_clear_bit(req.gfn, paging->bitmap) )
+            if ( test_and_clear_bit(req.u.mem_paging.gfn, paging->bitmap) )
             {
                 /* Find where in the paging file to read from */
-                slot = paging->gfn_to_slot[req.gfn];
+                slot = paging->gfn_to_slot[req.u.mem_paging.gfn];
 
                 /* Sanity check */
-                if ( paging->slot_to_gfn[slot] != req.gfn )
+                if ( paging->slot_to_gfn[slot] != req.u.mem_paging.gfn )
                 {
-                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.gfn, slot, paging->slot_to_gfn[slot]);
+                    ERROR("Expected gfn %"PRIx32" in slot %d, but found gfn %lx\n",
+                          req.u.mem_paging.gfn, slot, paging->slot_to_gfn[slot]);
                     goto out;
                 }
 
-                if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
+                if ( req.u.mem_paging.flags & MEM_PAGING_DROP_PAGE )
                 {
-                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.gfn, slot);
+                    DPRINTF("drop_page ^ gfn %"PRIx32" pageslot %d\n",
+                            req.u.mem_paging.gfn, slot);
                     /* Notify policy of page being dropped */
-                    policy_notify_dropped(req.gfn);
+                    policy_notify_dropped(req.u.mem_paging.gfn);
                 }
                 else
                 {
                     /* Populate the page */
-                    if ( xenpaging_populate_page(paging, req.gfn, slot) < 0 )
+                    if ( xenpaging_populate_page(paging, req.u.mem_paging.gfn, slot) < 0 )
                     {
-                        ERROR("Error populating page %"PRIx64"", req.gfn);
+                        ERROR("Error populating page %"PRIx32"", req.u.mem_paging.gfn);
                         goto out;
                     }
                 }
 
                 /* Prepare the response */
-                rsp.gfn = req.gfn;
+                rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
                 rsp.vcpu_id = req.vcpu_id;
                 rsp.flags = req.flags;
 
                 if ( xenpaging_resume_page(paging, &rsp, 1) < 0 )
                 {
-                    PERROR("Error resuming page %"PRIx64"", req.gfn);
+                    PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
                     goto out;
                 }
 
@@ -965,23 +969,24 @@ int main(int argc, char *argv[])
             else
             {
                 DPRINTF("page %s populated (domain = %d; vcpu = %d;"
-                        " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
-                        req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
-                        paging->mem_event.domain_id, req.vcpu_id, req.gfn,
+                        " gfn = %"PRIx32"; paused = %d; evict_fail = %d)\n",
+                        req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ? "not" : "already",
+                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
                         !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
-                        !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
+                        !!(req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL) );
 
                 /* Tell Xen to resume the vcpu */
-                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
+                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) ||
+                    ( req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ))
                 {
                     /* Prepare the response */
-                    rsp.gfn = req.gfn;
+                    rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
                     rsp.vcpu_id = req.vcpu_id;
                     rsp.flags = req.flags;
 
                     if ( xenpaging_resume_page(paging, &rsp, 0) < 0 )
                     {
-                        PERROR("Error resuming page %"PRIx64"", req.gfn);
+                        PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
                         goto out;
                     }
                 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b03ee4e..fe5f568 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6324,48 +6324,42 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     const struct vcpu *curr = current;
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->x86_regs.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->x86_regs.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->x86_regs.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
-static int hvm_memory_event_traps(long p, uint32_t reason,
-                                  unsigned long value, unsigned long old, 
-                                  bool_t gla_valid, unsigned long gla) 
-{
-    struct vcpu* v = current;
-    struct domain *d = v->domain;
-    mem_event_request_t req = { .reason = reason };
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
+    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
+    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
+    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
+}
+
+static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
+{
     int rc;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
 
-    if ( !(p & HVMPME_MODE_MASK) ) 
+    if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    if ( (p & HVMPME_onchangeonly) && (value == old) )
-        return 1;
-
     rc = mem_event_claim_slot(d, &d->mem_event->access);
     if ( rc == -ENOSYS )
     {
@@ -6376,85 +6370,106 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
     else if ( rc < 0 )
         return rc;
 
-    if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync ) 
+    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
     {
-        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;    
+        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
         mem_event_vcpu_pause(v);
     }
 
-    req.gfn = value;
-    req.vcpu_id = v->vcpu_id;
-    if ( gla_valid ) 
-    {
-        req.offset = gla & ((1 << PAGE_SHIFT) - 1);
-        req.gla = gla;
-        req.gla_valid = 1;
-    }
-    else
-    {
-        req.gla = old;
-    }
-    
-    hvm_mem_event_fill_regs(&req);
-    mem_event_put_request(d, &d->mem_event->access, &req);
-    
+    hvm_mem_event_fill_regs(req);
+    mem_event_put_request(d, &d->mem_event->access, req);
+
     return 1;
 }
 
+static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
+                                unsigned long old)
+{
+    mem_event_request_t req = {
+        .reason = reason,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_cr.new_value = value,
+        .u.mov_to_cr.old_value = old
+    };
+    uint64_t parameters = 0;
+
+    switch(reason)
+    {
+    case MEM_EVENT_REASON_MOV_TO_CR0:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
+        break;
+    case MEM_EVENT_REASON_MOV_TO_CR3:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
+        break;
+    case MEM_EVENT_REASON_MOV_TO_CR4:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
+        break;
+    };
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_memory_event_traps(parameters, &req);
+}
+
 void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR0],
-                           MEM_EVENT_REASON_CR0,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
 }
 
 void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR3],
-                           MEM_EVENT_REASON_CR3,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
 }
 
 void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR4],
-                           MEM_EVENT_REASON_CR4,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
 }
 
 void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 {
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_msr.msr = msr,
+        .u.mov_to_msr.value = value,
+    };
+
     hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
-                           MEM_EVENT_REASON_MSR,
-                           value, ~value, 1, msr);
+                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
+                           &req);
 }
 
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+        .vcpu_id = current->vcpu_id,
+        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                    .params[HVM_PARAM_MEMORY_EVENT_INT3],
-                                  MEM_EVENT_REASON_INT3,
-                                  gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
+                                  &req);
 }
 
 int hvm_memory_event_single_step(unsigned long gla)
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SINGLESTEP,
+        .vcpu_id = current->vcpu_id,
+        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-            .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
-            MEM_EVENT_REASON_SINGLESTEP,
-            gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
+                                  &req);
 }
 
 int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7c0fc7d..8a192ef 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -559,7 +559,12 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_SHARING,
+        .vcpu_id = v->vcpu_id,
+        .u.mem_sharing.gfn = gfn,
+        .u.mem_sharing.p2mt = p2m_ram_shared
+    };
 
     if ( (rc = __mem_event_claim_slot(d, 
                         &d->mem_event->share, allow_sleep)) < 0 )
@@ -571,9 +576,6 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
         mem_event_vcpu_pause(v);
     }
 
-    req.p2mt = p2m_ram_shared;
-    req.vcpu_id = v->vcpu_id;
-
     mem_event_put_request(d, &d->mem_event->share, &req);
 
     return 0;
@@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6a06e9f..339f8fe 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1081,7 +1081,10 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
                                 p2m_type_t p2mt)
 {
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .u.mem_paging.gfn = gfn
+    };
 
     /* We allow no ring in this unique case, because it won't affect
      * correctness of the guest execution at this point.  If this is the only
@@ -1092,14 +1095,14 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
         return;
 
     /* Send release notification to pager */
-    req.flags = MEM_EVENT_FLAG_DROP_PAGE;
+    req.u.mem_paging.flags = MEM_PAGING_DROP_PAGE;
 
     /* Update stats unless the page hasn't yet been evicted */
     if ( p2mt != p2m_ram_paging_out )
         atomic_dec(&d->paged_pages);
     else
         /* Evict will fail now, tag this request for pager */
-        req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
+        req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
 
     mem_event_put_request(d, &d->mem_event->paging, &req);
 }
@@ -1128,7 +1131,10 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
 void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .u.mem_paging.gfn = gfn
+    };
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
@@ -1157,7 +1163,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     {
         /* Evict will fail now, tag this request for pager */
         if ( p2mt == p2m_ram_paging_out )
-            req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
+            req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
 
         p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
     }
@@ -1178,7 +1184,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     }
 
     /* Send request to pager */
-    req.p2mt = p2mt;
+    req.u.mem_paging.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &d->mem_event->paging, &req);
@@ -1300,6 +1306,12 @@ void p2m_mem_paging_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
@@ -1310,20 +1322,21 @@ void p2m_mem_paging_resume(struct domain *d)
         v = d->vcpu[rsp.vcpu_id];
 
         /* Fix p2m entry if the page was not dropped */
-        if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
+        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
         {
-            gfn_lock(p2m, rsp.gfn, 0);
-            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
+            uint64_t gfn = rsp.u.mem_access.gfn;
+            gfn_lock(p2m, gfn, 0);
+            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
             /* Allow only pages which were prepared properly, or pages which
              * were nominated but not evicted */
             if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
             {
-                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
+                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                               paging_mode_log_dirty(d) ? p2m_ram_logdirty :
                               p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+                set_gpfn_from_mfn(mfn_x(mfn), gfn);
             }
-            gfn_unlock(p2m, rsp.gfn, 0);
+            gfn_unlock(p2m, gfn, 0);
         }
         /* Unpause domain */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
@@ -1341,92 +1354,94 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
     /* Architecture-specific vmcs/vmcb bits */
     hvm_funcs.save_cpu_ctxt(curr, &ctxt);
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.dr7 = curr->arch.debugreg[7];
-    req->x86_regs.cr0 = ctxt.cr0;
-    req->x86_regs.cr2 = ctxt.cr2;
-    req->x86_regs.cr3 = ctxt.cr3;
-    req->x86_regs.cr4 = ctxt.cr4;
-
-    req->x86_regs.sysenter_cs = ctxt.sysenter_cs;
-    req->x86_regs.sysenter_esp = ctxt.sysenter_esp;
-    req->x86_regs.sysenter_eip = ctxt.sysenter_eip;
-
-    req->x86_regs.msr_efer = ctxt.msr_efer;
-    req->x86_regs.msr_star = ctxt.msr_star;
-    req->x86_regs.msr_lstar = ctxt.msr_lstar;
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.dr7 = curr->arch.debugreg[7];
+    req->regs.x86.cr0 = ctxt.cr0;
+    req->regs.x86.cr2 = ctxt.cr2;
+    req->regs.x86.cr3 = ctxt.cr3;
+    req->regs.x86.cr4 = ctxt.cr4;
+
+    req->regs.x86.sysenter_cs = ctxt.sysenter_cs;
+    req->regs.x86.sysenter_esp = ctxt.sysenter_esp;
+    req->regs.x86.sysenter_eip = ctxt.sysenter_eip;
+
+    req->regs.x86.msr_efer = ctxt.msr_efer;
+    req->regs.x86.msr_star = ctxt.msr_star;
+    req->regs.x86.msr_lstar = ctxt.msr_lstar;
 
     hvm_get_segment_register(curr, x86_seg_fs, &seg);
-    req->x86_regs.fs_base = seg.base;
+    req->regs.x86.fs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_gs, &seg);
-    req->x86_regs.gs_base = seg.base;
+    req->regs.x86.gs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_cs, &seg);
-    req->x86_regs.cs_arbytes = seg.attr.bytes;
+    req->regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
-void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp)
+void p2m_mem_event_emulate_check(struct vcpu *v,
+                                 const mem_event_response_t *rsp)
 {
     /* Mark vcpu for skipping one instruction upon rescheduling. */
-    if ( rsp->flags & MEM_EVENT_FLAG_EMULATE )
+    if ( rsp->flags & MEM_ACCESS_EMULATE )
     {
         xenmem_access_t access;
         bool_t violation = 1;
+        const struct mem_event_mem_access *data = &rsp->u.mem_access;
 
-        if ( p2m_get_mem_access(v->domain, rsp->gfn, &access) == 0 )
+        if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
         {
             switch ( access )
             {
             case XENMEM_access_n:
             case XENMEM_access_n2rwx:
             default:
-                violation = rsp->access_r || rsp->access_w || rsp->access_x;
+                violation = data->flags & MEM_ACCESS_RWX;
                 break;
 
             case XENMEM_access_r:
-                violation = rsp->access_w || rsp->access_x;
+                violation = data->flags & MEM_ACCESS_WX;
                 break;
 
             case XENMEM_access_w:
-                violation = rsp->access_r || rsp->access_x;
+                violation = data->flags & MEM_ACCESS_RX;
                 break;
 
             case XENMEM_access_x:
-                violation = rsp->access_r || rsp->access_w;
+                violation = data->flags & MEM_ACCESS_RW;
                 break;
 
             case XENMEM_access_rx:
             case XENMEM_access_rx2rw:
-                violation = rsp->access_w;
+                violation = data->flags & MEM_ACCESS_W;
                 break;
 
             case XENMEM_access_wx:
-                violation = rsp->access_r;
+                violation = data->flags & MEM_ACCESS_R;
                 break;
 
             case XENMEM_access_rw:
-                violation = rsp->access_x;
+                violation = data->flags & MEM_ACCESS_X;
                 break;
 
             case XENMEM_access_rwx:
@@ -1532,7 +1547,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     if ( v->arch.mem_event.emulate_flags )
     {
         hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
-                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
+                                   MEM_ACCESS_EMULATE_NOWRITE) != 0,
                                   TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
 
         v->arch.mem_event.emulate_flags = 0;
@@ -1544,24 +1559,28 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     if ( req )
     {
         *req_ptr = req;
-        req->reason = MEM_EVENT_REASON_VIOLATION;
+        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
 
         /* Pause the current VCPU */
         if ( p2ma != p2m_access_n2rwx )
             req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
 
         /* Send request to mem event */
-        req->gfn = gfn;
-        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
-        req->gla_valid = npfec.gla_valid;
-        req->gla = gla;
-        if ( npfec.kind == npfec_kind_with_gla )
-            req->fault_with_gla = 1;
-        else if ( npfec.kind == npfec_kind_in_gpt )
-            req->fault_in_gpt = 1;
-        req->access_r = npfec.read_access;
-        req->access_w = npfec.write_access;
-        req->access_x = npfec.insn_fetch;
+        req->u.mem_access.gfn = gfn;
+        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        if ( npfec.gla_valid )
+        {
+            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
+            req->u.mem_access.gla = gla;
+
+            if ( npfec.kind == npfec_kind_with_gla )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
+            else if ( npfec.kind == npfec_kind_in_gpt )
+                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
+        }
+        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
+        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
+        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
         p2m_mem_event_fill_regs(req);
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index d8aac5f..9c5b7a6 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -38,6 +38,12 @@ void mem_access_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 7cfbe8e..8ab06ce 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -291,6 +291,8 @@ void mem_event_put_request(struct domain *d,
 #endif
     }
 
+    req->version = MEM_EVENT_INTERFACE_VERSION;
+
     mem_event_ring_lock(med);
 
     /* Due to the reservations, this step must succeed. */
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index 599f9e8..1ef65d3 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -28,39 +28,59 @@
 #define _XEN_PUBLIC_MEM_EVENT_H
 
 #include "xen.h"
+
+#define MEM_EVENT_INTERFACE_VERSION 0x00000001
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
 #include "io/ring.h"
 
-/* Memory event flags */
+/*
+ * Memory event flags
+ */
+
+/*
+ * VCPU_PAUSED in a request signals that the vCPU triggering the event has been
+ *  paused
+ * VCPU_PAUSED in a response signals to unpause the vCPU
+ */
 #define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
-#define MEM_EVENT_FLAG_DROP_PAGE       (1 << 1)
-#define MEM_EVENT_FLAG_EVICT_FAIL      (1 << 2)
-#define MEM_EVENT_FLAG_FOREIGN         (1 << 3)
-#define MEM_EVENT_FLAG_DUMMY           (1 << 4)
+
 /*
- * Emulate the fault-causing instruction (if set in the event response flags).
- * This will allow the guest to continue execution without lifting the page
- * access restrictions.
+ * Flags to aid debugging mem_event
+ */
+#define MEM_EVENT_FLAG_FOREIGN         (1 << 1)
+#define MEM_EVENT_FLAG_DUMMY           (1 << 2)
+
+/*
+ * Reasons for the vm event request
  */
-#define MEM_EVENT_FLAG_EMULATE         (1 << 5)
+
+/* Default case */
+#define MEM_EVENT_REASON_UNKNOWN                 0
+/* Memory access violation */
+#define MEM_EVENT_REASON_MEM_ACCESS              1
+/* Memory sharing event */
+#define MEM_EVENT_REASON_MEM_SHARING             2
+/* Memory paging event */
+#define MEM_EVENT_REASON_MEM_PAGING              3
+/* CR0 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR0              4
+/* CR3 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR3              5
+/* CR4 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR4              6
+/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+#define MEM_EVENT_REASON_MOV_TO_MSR              7
+/* Debug operation executed (e.g. int3) */
+#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
+/* Single-step (e.g. MTF) */
+#define MEM_EVENT_REASON_SINGLESTEP              9
+
 /*
- * Same as MEM_EVENT_FLAG_EMULATE, but with write operations or operations
- * potentially having side effects (like memory mapped or port I/O) disabled.
+ * Using a custom struct (not hvm_hw_cpu) so as to not fill
+ * the mem_event ring buffer too quickly.
  */
-#define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
-
-/* Reasons for the memory event request */
-#define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
-#define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
-#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
-#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
-#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
-#define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
-#define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
-#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
-                                             does NOT honour HVMPME_onchangeonly */
-
-/* Using a custom struct (not hvm_hw_cpu) so as to not fill
- * the mem_event ring buffer too quickly. */
 struct mem_event_regs_x86 {
     uint64_t rax;
     uint64_t rcx;
@@ -97,31 +117,102 @@ struct mem_event_regs_x86 {
     uint32_t _pad;
 };
 
-typedef struct mem_event_st {
-    uint32_t flags;
-    uint32_t vcpu_id;
+/*
+ * mem_access flag definitions
+ *
+ * These flags are set only as part of a mem_event request.
+ *
+ * R/W/X: Defines the type of violation that has triggered the event
+ *        Multiple types can be set in a single violation!
+ * GLA_VALID: If the gla field holds a guest VA associated with the event
+ * FAULT_WITH_GLA: If the violation was triggered by accessing gla
+ * FAULT_IN_GPT: If the violation was triggered during translating gla
+ */
+#define MEM_ACCESS_R                    (1 << 0)
+#define MEM_ACCESS_W                    (1 << 1)
+#define MEM_ACCESS_X                    (1 << 2)
+#define MEM_ACCESS_RWX                  (MEM_ACCESS_R | MEM_ACCESS_W | MEM_ACCESS_X)
+#define MEM_ACCESS_RW                   (MEM_ACCESS_R | MEM_ACCESS_W)
+#define MEM_ACCESS_RX                   (MEM_ACCESS_R | MEM_ACCESS_X)
+#define MEM_ACCESS_WX                   (MEM_ACCESS_W | MEM_ACCESS_X)
+#define MEM_ACCESS_GLA_VALID            (1 << 3)
+#define MEM_ACCESS_FAULT_WITH_GLA       (1 << 4)
+#define MEM_ACCESS_FAULT_IN_GPT         (1 << 5)
+/*
+ * The following flags can be set in the response.
+ *
+ * Emulate the fault-causing instruction (if set in the event response flags).
+ * This will allow the guest to continue execution without lifting the page
+ * access restrictions.
+ */
+#define MEM_ACCESS_EMULATE              (1 << 6)
+/*
+ * Same as MEM_ACCESS_EMULATE, but with write operations or operations
+ * potentially having side effects (like memory mapped or port I/O) disabled.
+ */
+#define MEM_ACCESS_EMULATE_NOWRITE      (1 << 7)
 
-    uint64_t gfn;
+struct mem_event_mem_access {
+    uint32_t gfn;
+    uint32_t flags; /* MEM_ACCESS_* */
     uint64_t offset;
-    uint64_t gla; /* if gla_valid */
+    uint64_t gla;   /* if flags has MEM_ACCESS_GLA_VALID set */
+};
+
+struct mem_event_mov_to_cr {
+    uint64_t new_value;
+    uint64_t old_value;
+};
 
+struct mem_event_debug {
+    uint32_t gfn;
+    uint32_t _pad;
+};
+
+struct mem_event_mov_to_msr {
+    uint64_t msr;
+    uint64_t value;
+};
+
+#define MEM_PAGING_DROP_PAGE       (1 << 0)
+#define MEM_PAGING_EVICT_FAIL      (1 << 1)
+struct mem_event_paging {
+    uint32_t gfn;
+    uint32_t p2mt;
+    uint32_t flags;
+    uint32_t _pad;
+};
+
+struct mem_event_sharing {
+    uint32_t gfn;
     uint32_t p2mt;
+};
+
+typedef struct mem_event_st {
+    uint32_t version;   /* MEM_EVENT_INTERFACE_VERSION */
+    uint32_t flags;     /* MEM_EVENT_FLAG_* */
+    uint32_t reason;    /* MEM_EVENT_REASON_* */
+    uint32_t vcpu_id;
 
-    uint16_t access_r:1;
-    uint16_t access_w:1;
-    uint16_t access_x:1;
-    uint16_t gla_valid:1;
-    uint16_t fault_with_gla:1;
-    uint16_t fault_in_gpt:1;
-    uint16_t available:10;
+    union {
+        struct mem_event_paging                mem_paging;
+        struct mem_event_sharing               mem_sharing;
+        struct mem_event_mem_access            mem_access;
+        struct mem_event_mov_to_cr             mov_to_cr;
+        struct mem_event_mov_to_msr            mov_to_msr;
+        struct mem_event_debug                 software_breakpoint;
+        struct mem_event_debug                 singlestep;
+    } u;
 
-    uint16_t reason;
-    struct mem_event_regs_x86 x86_regs;
+    union {
+        struct mem_event_regs_x86 x86;
+    } regs;
 } mem_event_request_t, mem_event_response_t;
 
 DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
 
-#endif
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+#endif /* _XEN_PUBLIC_MEM_EVENT_H */
 
 /*
  * Local variables:
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 595f953..2ef1728 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -380,7 +380,8 @@ struct xen_mem_event_op {
     /* PAGING_PREP IN: buffer to immediately fill page in */
     uint64_aligned_t    buffer;
     /* Other OPs */
-    uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
+    uint32_t    gfn;           /* IN:  gfn of page being operated on */
+    uint32_t    _pad;
 };
 typedef struct xen_mem_event_op xen_mem_event_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
@@ -469,21 +470,21 @@ struct xen_mem_sharing_op {
     union {
         struct mem_sharing_op_nominate {  /* OP_NOMINATE_xxx           */
             union {
-                uint64_aligned_t gfn;     /* IN: gfn to nominate       */
+                uint32_t      gfn;        /* IN: gfn to nominate       */
                 uint32_t      grant_ref;  /* IN: grant ref to nominate */
             } u;
             uint64_aligned_t  handle;     /* OUT: the handle           */
         } nominate;
         struct mem_sharing_op_share {     /* OP_SHARE/ADD_PHYSMAP */
-            uint64_aligned_t source_gfn;    /* IN: the gfn of the source page */
+            uint32_t source_gfn;          /* IN: the gfn of the source page */
+            uint32_t client_gfn;          /* IN: the client gfn */
             uint64_aligned_t source_handle; /* IN: handle to the source page */
-            uint64_aligned_t client_gfn;    /* IN: the client gfn */
             uint64_aligned_t client_handle; /* IN: handle to the client page */
             domid_t  client_domain; /* IN: the client domain id */
         } share; 
         struct mem_sharing_op_debug {     /* OP_DEBUG_xxx */
             union {
-                uint64_aligned_t gfn;      /* IN: gfn to debug          */
+                uint32_t gfn;              /* IN: gfn to debug          */
                 uint64_aligned_t mfn;      /* IN: mfn to debug          */
                 uint32_t gref;     /* IN: gref to debug         */
             } u;
-- 
2.1.4

* [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 17:53   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The name of one of the mem_event rings still implies that it is used only
for memory accesses, which is no longer the case. It is also used to
deliver various HVM events, so the name "monitor" is more appropriate
in this setting.

The mem_event subop definitions are also shortened to be more meaningful.

The tool-side changes are purely mechanical renames to match the new names.
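
For example, the rename turns the toolstack call that tears down the ring from
the old "access" spelling into the "monitor" one (both spellings appear in the
xc_mem_access.c hunk below):

    /* Before this patch: */
    xc_mem_event_control(xch, domain_id,
                         XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE,
                         XEN_DOMCTL_MEM_EVENT_OP_ACCESS, NULL);

    /* After: the ring and its subops are consistently named "monitor". */
    xc_mem_event_control(xch, domain_id,
                         XEN_MEM_EVENT_MONITOR_DISABLE,
                         XEN_DOMCTL_MEM_EVENT_OP_MONITOR, NULL);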

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v5: Style fixes
v4: Shorted mem_event domctl subops.
v3: Style and comment fixes.
---
 tools/libxc/xc_domain_restore.c | 14 +++++++-------
 tools/libxc/xc_domain_save.c    |  4 ++--
 tools/libxc/xc_hvm_build_x86.c  |  2 +-
 tools/libxc/xc_mem_access.c     |  8 ++++----
 tools/libxc/xc_mem_event.c      | 12 ++++++------
 tools/libxc/xc_mem_paging.c     |  4 ++--
 tools/libxc/xc_memshr.c         |  4 ++--
 tools/libxc/xg_save_restore.h   |  2 +-
 xen/arch/x86/hvm/hvm.c          |  4 ++--
 xen/arch/x86/hvm/vmx/vmcs.c     |  2 +-
 xen/arch/x86/mm/p2m.c           |  2 +-
 xen/common/mem_access.c         |  8 ++++----
 xen/common/mem_event.c          | 37 +++++++++++++++++++----------------
 xen/include/public/domctl.h     | 43 ++++++++++++++++++++++++-----------------
 xen/include/public/hvm/params.h |  2 +-
 xen/include/xen/sched.h         |  4 ++--
 16 files changed, 81 insertions(+), 71 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index a382701..2ab9f46 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -734,7 +734,7 @@ typedef struct {
     uint64_t vcpumap[XC_SR_MAX_VCPUS/64];
     uint64_t identpt;
     uint64_t paging_ring_pfn;
-    uint64_t access_ring_pfn;
+    uint64_t monitor_ring_pfn;
     uint64_t sharing_ring_pfn;
     uint64_t vm86_tss;
     uint64_t console_pfn;
@@ -828,15 +828,15 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
         // DPRINTF("paging ring pfn address: %llx\n", buf->paging_ring_pfn);
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
-    case XC_SAVE_ID_HVM_ACCESS_RING_PFN:
+    case XC_SAVE_ID_HVM_MONITOR_RING_PFN:
         /* Skip padding 4 bytes then read the mem access ring location. */
-        if ( RDEXACT(fd, &buf->access_ring_pfn, sizeof(uint32_t)) ||
-             RDEXACT(fd, &buf->access_ring_pfn, sizeof(uint64_t)) )
+        if ( RDEXACT(fd, &buf->monitor_ring_pfn, sizeof(uint32_t)) ||
+             RDEXACT(fd, &buf->monitor_ring_pfn, sizeof(uint64_t)) )
         {
             PERROR("error read the access ring pfn");
             return -1;
         }
-        // DPRINTF("access ring pfn address: %llx\n", buf->access_ring_pfn);
+        // DPRINTF("monitor ring pfn address: %llx\n", buf->monitor_ring_pfn);
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
     case XC_SAVE_ID_HVM_SHARING_RING_PFN:
@@ -1660,8 +1660,8 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                 xc_hvm_param_set(xch, dom, HVM_PARAM_IDENT_PT, pagebuf.identpt);
             if ( pagebuf.paging_ring_pfn )
                 xc_hvm_param_set(xch, dom, HVM_PARAM_PAGING_RING_PFN, pagebuf.paging_ring_pfn);
-            if ( pagebuf.access_ring_pfn )
-                xc_hvm_param_set(xch, dom, HVM_PARAM_ACCESS_RING_PFN, pagebuf.access_ring_pfn);
+            if ( pagebuf.monitor_ring_pfn )
+                xc_hvm_param_set(xch, dom, HVM_PARAM_MONITOR_RING_PFN, pagebuf.monitor_ring_pfn);
             if ( pagebuf.sharing_ring_pfn )
                 xc_hvm_param_set(xch, dom, HVM_PARAM_SHARING_RING_PFN, pagebuf.sharing_ring_pfn);
             if ( pagebuf.vm86_tss )
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 254fdb3..949ef64 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -1664,9 +1664,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             goto out;
         }
 
-        chunk.id = XC_SAVE_ID_HVM_ACCESS_RING_PFN;
+        chunk.id = XC_SAVE_ID_HVM_MONITOR_RING_PFN;
         chunk.data = 0;
-        xc_hvm_param_get(xch, dom, HVM_PARAM_ACCESS_RING_PFN, &chunk.data);
+        xc_hvm_param_get(xch, dom, HVM_PARAM_MONITOR_RING_PFN, &chunk.data);
 
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index c81a25b..30a929d 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -497,7 +497,7 @@ static int setup_guest(xc_interface *xch,
                      special_pfn(SPECIALPAGE_CONSOLE));
     xc_hvm_param_set(xch, dom, HVM_PARAM_PAGING_RING_PFN,
                      special_pfn(SPECIALPAGE_PAGING));
-    xc_hvm_param_set(xch, dom, HVM_PARAM_ACCESS_RING_PFN,
+    xc_hvm_param_set(xch, dom, HVM_PARAM_MONITOR_RING_PFN,
                      special_pfn(SPECIALPAGE_ACCESS));
     xc_hvm_param_set(xch, dom, HVM_PARAM_SHARING_RING_PFN,
                      special_pfn(SPECIALPAGE_SHARING));
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 55d0e9f..446394b 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -26,22 +26,22 @@
 
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_ACCESS_RING_PFN,
+    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
                                port, 0);
 }
 
 void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
                                          uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_ACCESS_RING_PFN,
+    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
                                port, 1);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS,
+                                XEN_MEM_EVENT_MONITOR_DISABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_MONITOR,
                                 NULL);
 }
 
diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
index 1b5f7c3..ee25cdd 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -115,20 +115,20 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE;
+        op = XEN_MEM_EVENT_PAGING_ENABLE;
         mode = XEN_DOMCTL_MEM_EVENT_OP_PAGING;
         break;
 
-    case HVM_PARAM_ACCESS_RING_PFN:
+    case HVM_PARAM_MONITOR_RING_PFN:
         if ( enable_introspection )
-            op = XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION;
+            op = XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION;
         else
-            op = XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_ACCESS;
+            op = XEN_MEM_EVENT_MONITOR_ENABLE;
+        mode = XEN_DOMCTL_MEM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE;
+        op = XEN_MEM_EVENT_SHARING_ENABLE;
         mode = XEN_DOMCTL_MEM_EVENT_OP_SHARING;
         break;
 
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 8aa7d4d..5194423 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -34,7 +34,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
     }
         
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE,
+                                XEN_MEM_EVENT_PAGING_ENABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING,
                                 port);
 }
@@ -42,7 +42,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE,
+                                XEN_MEM_EVENT_PAGING_DISABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING,
                                 NULL);
 }
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index d6a9539..4398630 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     }
         
     return xc_mem_event_control(xch, domid,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE,
+                                XEN_MEM_EVENT_SHARING_ENABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_SHARING,
                                 port);
 }
@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
                            domid_t domid)
 {
     return xc_mem_event_control(xch, domid,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE,
+                                XEN_MEM_EVENT_SHARING_DISABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_SHARING,
                                 NULL);
 }
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index bdd9009..10348aa 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -256,7 +256,7 @@
 #define XC_SAVE_ID_HVM_GENERATION_ID_ADDR -14
 /* Markers for the pfn's hosting these mem event rings */
 #define XC_SAVE_ID_HVM_PAGING_RING_PFN  -15
-#define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
+#define XC_SAVE_ID_HVM_MONITOR_RING_PFN -16
 #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
 #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
 /* These are a pair; it is an error for one to exist without the other */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fe5f568..0379de7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6360,7 +6360,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    rc = mem_event_claim_slot(d, &d->mem_event->access);
+    rc = mem_event_claim_slot(d, &d->mem_event->monitor);
     if ( rc == -ENOSYS )
     {
         /* If there was no ring to handle the event, then
@@ -6377,7 +6377,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     }
 
     hvm_mem_event_fill_regs(req);
-    mem_event_put_request(d, &d->mem_event->access, req);
+    mem_event_put_request(d, &d->mem_event->monitor, req);
 
     return 1;
 }
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index d614638..e0a33e3 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -715,7 +715,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
         return;
 
     if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
-         mem_event_check_ring(&d->mem_event->access) )
+         mem_event_check_ring(&d->mem_event->monitor) )
     {
         unsigned int i;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 339f8fe..5851c66 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1501,7 +1501,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     gfn_unlock(p2m, gfn, 0);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    if ( !mem_event_check_ring(&d->mem_event->access) || !req_ptr ) 
+    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr ) 
     {
         /* No listener */
         if ( p2m->access_required ) 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 9c5b7a6..0e83d70 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -34,7 +34,7 @@ void mem_access_resume(struct domain *d)
     mem_event_response_t rsp;
 
     /* Pull all responses off the ring. */
-    while ( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+    while ( mem_event_get_response(d, &d->mem_event->monitor, &rsp) )
     {
         struct vcpu *v;
 
@@ -85,7 +85,7 @@ int mem_access_memop(unsigned long cmd,
         goto out;
 
     rc = -ENODEV;
-    if ( unlikely(!d->mem_event->access.ring_page) )
+    if ( unlikely(!d->mem_event->monitor.ring_page) )
         goto out;
 
     switch ( mao.op )
@@ -152,11 +152,11 @@ int mem_access_memop(unsigned long cmd,
 
 int mem_access_send_req(struct domain *d, mem_event_request_t *req)
 {
-    int rc = mem_event_claim_slot(d, &d->mem_event->access);
+    int rc = mem_event_claim_slot(d, &d->mem_event->monitor);
     if ( rc < 0 )
         return rc;
 
-    mem_event_put_request(d, &d->mem_event->access, req);
+    mem_event_put_request(d, &d->mem_event->monitor, req);
 
     return 0;
 }
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 8ab06ce..b96d9fb 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -444,7 +444,7 @@ static void mem_paging_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+    if ( likely(v->domain->mem_event->monitor.ring_page != NULL) )
         mem_access_resume(v->domain);
 }
 #endif
@@ -496,7 +496,8 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
 void mem_event_cleanup(struct domain *d)
 {
 #ifdef HAS_MEM_PAGING
-    if ( d->mem_event->paging.ring_page ) {
+    if ( d->mem_event->paging.ring_page )
+    {
         /* Destroying the wait queue head means waking up all
          * queued vcpus. This will drain the list, allowing
          * the disable routine to complete. It will also drop
@@ -509,13 +510,15 @@ void mem_event_cleanup(struct domain *d)
     }
 #endif
 #ifdef HAS_MEM_ACCESS
-    if ( d->mem_event->access.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->access.wq);
-        (void)mem_event_disable(d, &d->mem_event->access);
+    if ( d->mem_event->monitor.ring_page )
+    {
+        destroy_waitqueue_head(&d->mem_event->monitor.wq);
+        (void)mem_event_disable(d, &d->mem_event->monitor);
     }
 #endif
 #ifdef HAS_MEM_SHARING
-    if ( d->mem_event->share.ring_page ) {
+    if ( d->mem_event->share.ring_page )
+    {
         destroy_waitqueue_head(&d->mem_event->share.wq);
         (void)mem_event_disable(d, &d->mem_event->share);
     }
@@ -564,7 +567,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+        case XEN_MEM_EVENT_PAGING_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -594,7 +597,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+        case XEN_MEM_EVENT_PAGING_DISABLE:
         {
             if ( med->ring_page )
                 rc = mem_event_disable(d, med);
@@ -610,32 +613,32 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
+    case XEN_DOMCTL_MEM_EVENT_OP_MONITOR:
     {
-        struct mem_event_domain *med = &d->mem_event->access;
+        struct mem_event_domain *med = &d->mem_event->monitor;
         rc = -EINVAL;
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION:
+        case XEN_MEM_EVENT_MONITOR_ENABLE:
+        case XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION:
         {
             rc = -ENODEV;
             if ( !p2m_mem_event_sanity_check(d) )
                 break;
 
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
-                                    HVM_PARAM_ACCESS_RING_PFN,
+                                    HVM_PARAM_MONITOR_RING_PFN,
                                     mem_access_notification);
 
-            if ( mec->op == XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION
+            if ( mec->op == XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
                  && !rc )
                 p2m_setup_introspection(d);
 
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        case XEN_MEM_EVENT_MONITOR_DISABLE:
         {
             if ( med->ring_page )
             {
@@ -661,7 +664,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+        case XEN_MEM_EVENT_SHARING_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -679,7 +682,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+        case XEN_MEM_EVENT_SHARING_DISABLE:
         {
             if ( med->ring_page )
                 rc = mem_event_disable(d, med);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b3413a2..813e81d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -762,7 +762,7 @@ struct xen_domctl_gdbsx_domstatus {
  * pager<->hypervisor interface. Use XENMEM_paging_op*
  * to perform per-page operations.
  *
- * The XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE domctl returns several
+ * The XEN_MEM_EVENT_PAGING_ENABLE domctl returns several
  * non-standard error codes to indicate why paging could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EMLINK - guest has iommu passthrough enabled
@@ -771,33 +771,40 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING            1
 
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE     0
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE    1
+#define XEN_MEM_EVENT_PAGING_ENABLE               0
+#define XEN_MEM_EVENT_PAGING_DISABLE              1
 
 /*
- * Access permissions.
+ * Monitor helper.
  *
  * As with paging, use the domctl for teardown/setup of the
  * helper<->hypervisor interface.
  *
- * There are HVM hypercalls to set the per-page access permissions of every
- * page in a domain.  When one of these permissions--independent, read, 
- * write, and execute--is violated, the VCPU is paused and a memory event 
- * is sent with what happened.  (See public/mem_event.h) .
+ * The monitor interface can be used to register for various VM events. For
+ * example, there are HVM hypercalls to set the per-page access permissions
+ * of every page in a domain.  When one of these permissions--independent,
+ * read, write, and execute--is violated, the VCPU is paused and a memory event
+ * is sent with what happened. The memory event handler can then resume the
+ * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
  *
- * The memory event handler can then resume the VCPU and redo the access 
- * with a XENMEM_access_op_resume hypercall.
+ * See public/mem_event.h for the list of available events that can be
+ * subscribed to via the monitor interface.
  *
- * The XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE domctl returns several
+ * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
+ * interface with the XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+ * operator.
+ *
+ * The XEN_MEM_EVENT_MONITOR_ENABLE* domctls return several
  * non-standard error codes to indicate why access could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
+ *
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS                        2
+#define XEN_DOMCTL_MEM_EVENT_OP_MONITOR                        2
 
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE                 0
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE                1
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION   2
+#define XEN_MEM_EVENT_MONITOR_ENABLE                           0
+#define XEN_MEM_EVENT_MONITOR_DISABLE                          1
+#define XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -814,13 +821,13 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_SHARING           3
 
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE    0
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE   1
+#define XEN_MEM_EVENT_SHARING_ENABLE              0
+#define XEN_MEM_EVENT_SHARING_DISABLE             1
 
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
 struct xen_domctl_mem_event_op {
-    uint32_t       op;           /* XEN_DOMCTL_MEM_EVENT_OP_*_* */
+    uint32_t       op;           /* XEN_MEM_EVENT_*_* */
     uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index a2d43bc..6efcc0b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -182,7 +182,7 @@
 
 /* Params for the mem event rings */
 #define HVM_PARAM_PAGING_RING_PFN   27
-#define HVM_PARAM_ACCESS_RING_PFN   28
+#define HVM_PARAM_MONITOR_RING_PFN  28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
 /* SHUTDOWN_* action in case of a triple fault */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ccd7ed8..76e41f3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -288,8 +288,8 @@ struct mem_event_per_domain
     struct mem_event_domain share;
     /* Memory paging support */
     struct mem_event_domain paging;
-    /* Memory access support */
-    struct mem_event_domain access;
+    /* VM event monitor support */
+    struct mem_event_domain monitor;
 };
 
 struct evtchn_port_ops;
-- 
2.1.4


* [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 18:17   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 04/12] xen: Rename mem_event to vm_event Tamas K Lengyel
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The only use-case of the mem_event_op structure has been in mem_paging, so
renaming the structure to mem_paging_op and relocating its associated
functions clarifies its actual usage.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v5: Style fixes
v4: Style fixes
v3: Style fixes
---
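For orientation, a minimal pager fragment built on the reworked wrappers
could look like this (a sketch only: page_out_one() is a hypothetical
helper, and error handling is trimmed; the per-page wrappers now route
through the new static xc_mem_paging_memop() helper shown below, while
enable/disable still go through the domctl path):

    /* Hypothetical pager helper: nominate a gfn for paging, then evict
     * its backing page.  xch, domid and gfn come from the caller. */
    static int page_out_one(xc_interface *xch, domid_t domid,
                            unsigned long gfn)
    {
        if ( xc_mem_paging_nominate(xch, domid, gfn) ) /* mark gfn pageable */
            return -1;
        if ( xc_mem_paging_evict(xch, domid, gfn) )    /* drop backing page */
            return -1;
        return 0;
    }

Later, before resuming a faulting vcpu, the page is repopulated with
xc_mem_paging_prep(xch, domid, gfn), which after this patch is a thin call
into xc_mem_paging_memop() rather than the removed xc_mem_event_memop().
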
 tools/libxc/xc_mem_event.c       | 16 ----------------
 tools/libxc/xc_mem_paging.c      | 26 ++++++++++++++++++--------
 tools/libxc/xc_private.h         |  3 ---
 xen/arch/x86/mm/mem_paging.c     | 32 +++++++++++++-------------------
 xen/arch/x86/x86_64/compat/mm.c  | 10 ++++++----
 xen/arch/x86/x86_64/mm.c         |  8 ++++----
 xen/common/mem_event.c           |  4 ++--
 xen/include/asm-x86/mem_paging.h |  2 +-
 xen/include/public/memory.h      |  9 ++++-----
 9 files changed, 48 insertions(+), 62 deletions(-)

diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
index ee25cdd..487fcee 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -40,22 +40,6 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
     return rc;
 }
 
-int xc_mem_event_memop(xc_interface *xch, domid_t domain_id, 
-                        unsigned int op, unsigned int mode,
-                        uint32_t gfn, void *buffer)
-{
-    xen_mem_event_op_t meo;
-
-    memset(&meo, 0, sizeof(meo));
-
-    meo.op      = op;
-    meo.domain  = domain_id;
-    meo.gfn     = gfn;
-    meo.buffer  = (unsigned long) buffer;
-
-    return do_memory_op(xch, mode, &meo, sizeof(meo));
-}
-
 void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
                           uint32_t *port, int enable_introspection)
 {
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 5194423..049aff4 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -23,6 +23,20 @@
 
 #include "xc_private.h"
 
+static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
+                               unsigned int op, uint32_t gfn, void *buffer)
+{
+    xen_mem_paging_op_t mpo;
+
+    memset(&mpo, 0, sizeof(mpo));
+
+    mpo.op      = op;
+    mpo.domain  = domain_id;
+    mpo.gfn     = gfn;
+    mpo.buffer  = (unsigned long) buffer;
+
+    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
+}
 
 int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
                          uint32_t *port)
@@ -49,25 +63,22 @@ int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_nominate,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_evict,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_prep,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
@@ -87,9 +98,8 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
     if ( mlock(buffer, XC_PAGE_SIZE) )
         return -1;
         
-    rc = xc_mem_event_memop(xch, domain_id,
+    rc = xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_prep,
-                                XENMEM_paging_op,
                                 gfn, buffer);
 
     old_errno = errno;
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index bc021b8..f1f601c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -425,9 +425,6 @@ int xc_ffs64(uint64_t x);
  */
 int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
                          unsigned int mode, uint32_t *port);
-int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
-                        unsigned int op, unsigned int mode,
-                        uint32_t gfn, void *buffer);
 /*
  * Enables mem_event and returns the mapped ring page indicated by param.
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 65f6a3d..87a7b72 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -25,38 +25,32 @@
 #include <xen/mem_event.h>
 
 
-int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
+int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
 {
+    int rc = -ENODEV;
     if ( unlikely(!d->mem_event->paging.ring_page) )
-        return -ENODEV;
+        return rc;
 
-    switch( mec->op )
+    switch( mpo->op )
     {
     case XENMEM_paging_op_nominate:
-    {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_nominate(d, gfn);
-    }
-    break;
+        rc = p2m_mem_paging_nominate(d, mpo->gfn);
+        break;
 
     case XENMEM_paging_op_evict:
-    {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_evict(d, gfn);
-    }
-    break;
+        rc = p2m_mem_paging_evict(d, mpo->gfn);
+        break;
 
     case XENMEM_paging_op_prep:
-    {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_prep(d, gfn, mec->buffer);
-    }
-    break;
+        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
+        break;
 
     default:
-        return -ENOSYS;
+        rc = -ENOSYS;
         break;
     }
+
+    return rc;
 }
 
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f90f611..96cec31 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -188,11 +188,12 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENMEM_paging_op:
     {
-        xen_mem_event_op_t meo;
-        if ( copy_from_guest(&meo, arg, 1) )
+        xen_mem_paging_op_t mpo;
+
+        if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, meo.domain, &meo);
-        if ( !rc && __copy_to_guest(arg, &meo, 1) )
+        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
     }
@@ -200,6 +201,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
+
         if ( copy_from_guest(&mso, arg, 1) )
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d631aee..2fa1f67 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -985,11 +985,11 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENMEM_paging_op:
     {
-        xen_mem_event_op_t meo;
-        if ( copy_from_guest(&meo, arg, 1) )
+        xen_mem_paging_op_t mpo;
+        if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, meo.domain, &meo);
-        if ( !rc && __copy_to_guest(arg, &meo, 1) )
+        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
     }
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index b96d9fb..ae60c10 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -475,12 +475,12 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
     {
 #ifdef HAS_MEM_PAGING
         case XENMEM_paging_op:
-            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+            ret = mem_paging_memop(d, arg);
             break;
 #endif
 #ifdef HAS_MEM_SHARING
         case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+            ret = mem_sharing_memop(d, arg);
             break;
 #endif
         default:
diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
index 6b7a1fe..92ed2fa 100644
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -21,7 +21,7 @@
  */
 
 
-int mem_paging_memop(struct domain *d, xen_mem_event_op_t *meo);
+int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);
 
 
 /*
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 2ef1728..9c41f5d 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -372,10 +372,9 @@ typedef struct xen_pod_target xen_pod_target_t;
 #define XENMEM_paging_op_evict              1
 #define XENMEM_paging_op_prep               2
 
-struct xen_mem_event_op {
-    uint8_t     op;         /* XENMEM_*_op_* */
+struct xen_mem_paging_op {
+    uint8_t     op;         /* XENMEM_paging_op_* */
     domid_t     domain;
-    
 
     /* PAGING_PREP IN: buffer to immediately fill page in */
     uint64_aligned_t    buffer;
@@ -383,8 +382,8 @@ struct xen_mem_event_op {
     uint32_t    gfn;           /* IN:  gfn of page being operated on */
     uint32_t    _pad;
 };
-typedef struct xen_mem_event_op xen_mem_event_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
+typedef struct xen_mem_paging_op xen_mem_paging_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 
 #define XENMEM_access_op                    21
 #define XENMEM_access_op_resume             0
-- 
2.1.4


* [PATCH V5 04/12] xen: Rename mem_event to vm_event
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 18:31   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 05/12] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

In this patch we mechanically rename mem_event to vm_event; no logic changes
are introduced. The name vm_event better describes the intended use of this
subsystem, which is not limited to memory events: it can be used to off-load
decision-making logic to helper applications when various events are
encountered during a VM's execution.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
v5: hvm_mem_event_emulate_one is renamed hvm_mem_access_emulate_one
    Rebased on master
v4: Using git -M option for patch to improve readability
    Note that the style problems in include/xen/vm_event.h are fixed in a
     later patch in the series so that git can track the relocation here.
---
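As a usage sketch of the renamed interface (condensed from the xen-access.c
hunks below; process() is a hypothetical handler and error handling is
trimmed), a consumer now maps the ring and drains it with the vm_event_*
types:

    /* Sketch of a monitor consumer after the rename. */
    static void drain_ring(xc_interface *xch, domid_t domain_id)
    {
        uint32_t evtchn_port;
        vm_event_back_ring_t back_ring;
        vm_event_request_t req;
        vm_event_response_t rsp;
        void *ring_page;

        /* Map the monitor ring (HVM_PARAM_MONITOR_RING_PFN underneath). */
        ring_page = xc_mem_access_enable(xch, domain_id, &evtchn_port);

        SHARED_RING_INIT((vm_event_sring_t *)ring_page);
        BACK_RING_INIT(&back_ring, (vm_event_sring_t *)ring_page,
                       XC_PAGE_SIZE);

        while ( RING_HAS_UNCONSUMED_REQUESTS(&back_ring) )
        {
            RING_IDX cons = back_ring.req_cons;

            /* Pull one request off the shared ring. */
            memcpy(&req, RING_GET_REQUEST(&back_ring, cons), sizeof(req));
            back_ring.req_cons = ++cons;
            back_ring.sring->req_event = cons + 1;

            process(&req, &rsp);           /* fill in rsp for this event */

            /* Put the response back and publish it to Xen. */
            memcpy(RING_GET_RESPONSE(&back_ring, back_ring.rsp_prod_pvt),
                   &rsp, sizeof(rsp));
            back_ring.rsp_prod_pvt++;
            RING_PUSH_RESPONSES(&back_ring);
        }
    }

In the real tool the loop additionally binds the event channel and calls
xc_mem_access_resume() plus xc_evtchn_notify() after pushing a response, as
the xen-access.c hunks below show.
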
 MAINTAINERS                                    |   4 +-
 docs/misc/xsm-flask.txt                        |   2 +-
 tools/libxc/Makefile                           |   2 +-
 tools/libxc/xc_mem_access.c                    |  16 +-
 tools/libxc/xc_mem_paging.c                    |  18 +-
 tools/libxc/xc_memshr.c                        |  18 +-
 tools/libxc/xc_private.h                       |  12 +-
 tools/libxc/{xc_mem_event.c => xc_vm_event.c}  |  40 +--
 tools/tests/xen-access/xen-access.c            | 110 ++++----
 tools/xenpaging/pagein.c                       |   2 +-
 tools/xenpaging/xenpaging.c                    | 112 ++++----
 tools/xenpaging/xenpaging.h                    |   8 +-
 xen/arch/x86/domain.c                          |   2 +-
 xen/arch/x86/domctl.c                          |   4 +-
 xen/arch/x86/hvm/emulate.c                     |   6 +-
 xen/arch/x86/hvm/hvm.c                         |  46 ++--
 xen/arch/x86/hvm/vmx/vmcs.c                    |   4 +-
 xen/arch/x86/mm/hap/nested_ept.c               |   4 +-
 xen/arch/x86/mm/hap/nested_hap.c               |   4 +-
 xen/arch/x86/mm/mem_paging.c                   |   4 +-
 xen/arch/x86/mm/mem_sharing.c                  |  32 +--
 xen/arch/x86/mm/p2m-pod.c                      |   4 +-
 xen/arch/x86/mm/p2m-pt.c                       |   4 +-
 xen/arch/x86/mm/p2m.c                          |  91 ++++---
 xen/arch/x86/x86_64/compat/mm.c                |   6 +-
 xen/arch/x86/x86_64/mm.c                       |   6 +-
 xen/common/Makefile                            |   2 +-
 xen/common/domain.c                            |  12 +-
 xen/common/domctl.c                            |   8 +-
 xen/common/mem_access.c                        |  28 +-
 xen/common/{mem_event.c => vm_event.c}         | 338 ++++++++++++-------------
 xen/drivers/passthrough/pci.c                  |   2 +-
 xen/include/asm-arm/p2m.h                      |   6 +-
 xen/include/asm-x86/domain.h                   |   4 +-
 xen/include/asm-x86/hvm/emulate.h              |   2 +-
 xen/include/asm-x86/p2m.h                      |  12 +-
 xen/include/public/domctl.h                    |  46 ++--
 xen/include/public/{mem_event.h => vm_event.h} |  83 +++---
 xen/include/xen/mem_access.h                   |   4 +-
 xen/include/xen/p2m-common.h                   |   4 +-
 xen/include/xen/sched.h                        |  26 +-
 xen/include/xen/{mem_event.h => vm_event.h}    |  74 +++---
 xen/include/xsm/dummy.h                        |   4 +-
 xen/include/xsm/xsm.h                          |  12 +-
 xen/xsm/dummy.c                                |   4 +-
 xen/xsm/flask/hooks.c                          |  16 +-
 xen/xsm/flask/policy/access_vectors            |   2 +-
 47 files changed, 623 insertions(+), 627 deletions(-)
 rename tools/libxc/{xc_mem_event.c => xc_vm_event.c} (79%)
 rename xen/common/{mem_event.c => vm_event.c} (59%)
 rename xen/include/public/{mem_event.h => vm_event.h} (73%)
 rename xen/include/xen/{mem_event.h => vm_event.h} (50%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3bbac9e..3d09d15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -361,10 +361,10 @@ F:	xen/arch/x86/mm/mem_sharing.c
 F:	xen/arch/x86/mm/mem_paging.c
 F:	tools/memshr
 
-MEMORY EVENT AND ACCESS
+VM EVENT AND MEM ACCESS
 M:	Tim Deegan <tim@xen.org>
 S:	Supported
-F:	xen/common/mem_event.c
+F:	xen/common/vm_event.c
 F:	xen/common/mem_access.c
 
 XENTRACE
diff --git a/docs/misc/xsm-flask.txt b/docs/misc/xsm-flask.txt
index 9559028..13ce498 100644
--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -87,7 +87,7 @@ __HYPERVISOR_domctl (xen/include/public/domctl.h)
  * XEN_DOMCTL_set_machine_address_size
  * XEN_DOMCTL_debug_op
  * XEN_DOMCTL_gethvmcontext_partial
- * XEN_DOMCTL_mem_event_op
+ * XEN_DOMCTL_vm_event_op
  * XEN_DOMCTL_mem_sharing_op
  * XEN_DOMCTL_setvcpuextstate
  * XEN_DOMCTL_getvcpuextstate
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 6fa88c7..22ba2a1 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -31,7 +31,7 @@ CTRL_SRCS-y       += xc_pm.c
 CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
 CTRL_SRCS-y       += xc_tmem.c
-CTRL_SRCS-y       += xc_mem_event.c
+CTRL_SRCS-y       += xc_vm_event.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_mem_access.c
 CTRL_SRCS-y       += xc_memshr.c
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 446394b..0a3f0e6 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -26,23 +26,23 @@
 
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                               port, 0);
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port, 0);
 }
 
 void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
                                          uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                               port, 1);
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port, 1);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
 {
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_MONITOR_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_MONITOR,
-                                NULL);
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_MONITOR_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
 }
 
 int xc_mem_access_resume(xc_interface *xch, domid_t domain_id)
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 049aff4..083ab60 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -46,19 +46,19 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
         errno = EINVAL;
         return -1;
     }
-        
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_PAGING_ENABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
-                                port);
+
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_PAGING_ENABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               port);
 }
 
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_PAGING_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
-                                NULL);
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_PAGING_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               NULL);
 }
 
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 4398630..14cc1ce 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -51,20 +51,20 @@ int xc_memshr_ring_enable(xc_interface *xch,
         errno = EINVAL;
         return -1;
     }
-        
-    return xc_mem_event_control(xch, domid,
-                                XEN_MEM_EVENT_SHARING_ENABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING,
-                                port);
+
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_SHARING_ENABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               port);
 }
 
 int xc_memshr_ring_disable(xc_interface *xch, 
                            domid_t domid)
 {
-    return xc_mem_event_control(xch, domid,
-                                XEN_MEM_EVENT_SHARING_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING,
-                                NULL);
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_SHARING_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               NULL);
 }
 
 static int xc_memshr_memop(xc_interface *xch, domid_t domid, 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index f1f601c..843540c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -421,15 +421,15 @@ int xc_ffs64(uint64_t x);
 #define DOMPRINTF_CALLED(xch) xc_dom_printf((xch), "%s: called", __FUNCTION__)
 
 /**
- * mem_event operations. Internal use only.
+ * vm_event operations. Internal use only.
  */
-int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
-                         unsigned int mode, uint32_t *port);
+int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
+                        unsigned int mode, uint32_t *port);
 /*
- * Enables mem_event and returns the mapped ring page indicated by param.
+ * Enables vm_event and returns the mapped ring page indicated by param.
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
  */
-void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                          uint32_t *port, int enable_introspection);
+void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
+                         uint32_t *port, int enable_introspection);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_vm_event.c
similarity index 79%
rename from tools/libxc/xc_mem_event.c
rename to tools/libxc/xc_vm_event.c
index 487fcee..d458b9a 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -1,6 +1,6 @@
 /******************************************************************************
  *
- * xc_mem_event.c
+ * xc_vm_event.c
  *
  * Interface to low-level memory event functionality.
  *
@@ -23,25 +23,25 @@
 
 #include "xc_private.h"
 
-int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
-                         unsigned int mode, uint32_t *port)
+int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
+                        unsigned int mode, uint32_t *port)
 {
     DECLARE_DOMCTL;
     int rc;
 
-    domctl.cmd = XEN_DOMCTL_mem_event_op;
+    domctl.cmd = XEN_DOMCTL_vm_event_op;
     domctl.domain = domain_id;
-    domctl.u.mem_event_op.op = op;
-    domctl.u.mem_event_op.mode = mode;
-    
+    domctl.u.vm_event_op.op = op;
+    domctl.u.vm_event_op.mode = mode;
+
     rc = do_domctl(xch, &domctl);
     if ( !rc && port )
-        *port = domctl.u.mem_event_op.port;
+        *port = domctl.u.vm_event_op.port;
     return rc;
 }
 
-void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                          uint32_t *port, int enable_introspection)
+void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
+                         uint32_t *port, int enable_introspection)
 {
     void *ring_page = NULL;
     uint64_t pfn;
@@ -99,26 +99,26 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_MEM_EVENT_PAGING_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_PAGING;
+        op = XEN_VM_EVENT_PAGING_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
         if ( enable_introspection )
-            op = XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION;
+            op = XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION;
         else
-            op = XEN_MEM_EVENT_MONITOR_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_MONITOR;
+            op = XEN_VM_EVENT_MONITOR_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_MEM_EVENT_SHARING_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_SHARING;
+        op = XEN_VM_EVENT_SHARING_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
         break;
 
     /*
      * This is for the outside chance that the HVM_PARAM is valid but is invalid
-     * as far as mem_event goes.
+     * as far as vm_event goes.
      */
     default:
         errno = EINVAL;
@@ -126,10 +126,10 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
         goto out;
     }
 
-    rc1 = xc_mem_event_control(xch, domain_id, op, mode, port);
+    rc1 = xc_vm_event_control(xch, domain_id, op, mode, port);
     if ( rc1 != 0 )
     {
-        PERROR("Failed to enable mem_event\n");
+        PERROR("Failed to enable vm_event\n");
         goto out;
     }
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 68f05db..0d4f190 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -39,7 +39,7 @@
 #include <sys/poll.h>
 
 #include <xenctrl.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
@@ -91,26 +91,26 @@ static inline int spin_trylock(spinlock_t *lock)
     return !test_and_set_bit(1, lock);
 }
 
-#define mem_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
-#define mem_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
-#define mem_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
+#define vm_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
+#define vm_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
+#define vm_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
 
-typedef struct mem_event {
+typedef struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
     int port;
-    mem_event_back_ring_t back_ring;
+    vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
     spinlock_t ring_lock;
-} mem_event_t;
+} vm_event_t;
 
 typedef struct xenaccess {
     xc_interface *xc_handle;
 
     xc_domaininfo_t    *domain_info;
 
-    mem_event_t mem_event;
+    vm_event_t vm_event;
 } xenaccess_t;
 
 static int interrupted;
@@ -170,13 +170,13 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         return 0;
 
     /* Tear down domain xenaccess in Xen */
-    if ( xenaccess->mem_event.ring_page )
-        munmap(xenaccess->mem_event.ring_page, XC_PAGE_SIZE);
+    if ( xenaccess->vm_event.ring_page )
+        munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE);
 
     if ( mem_access_enable )
     {
         rc = xc_mem_access_disable(xenaccess->xc_handle,
-                                   xenaccess->mem_event.domain_id);
+                                   xenaccess->vm_event.domain_id);
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
@@ -186,8 +186,8 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     /* Unbind VIRQ */
     if ( evtchn_bind )
     {
-        rc = xc_evtchn_unbind(xenaccess->mem_event.xce_handle,
-                              xenaccess->mem_event.port);
+        rc = xc_evtchn_unbind(xenaccess->vm_event.xce_handle,
+                              xenaccess->vm_event.port);
         if ( rc != 0 )
         {
             ERROR("Error unbinding event port");
@@ -197,7 +197,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     /* Close event channel */
     if ( evtchn_open )
     {
-        rc = xc_evtchn_close(xenaccess->mem_event.xce_handle);
+        rc = xc_evtchn_close(xenaccess->vm_event.xce_handle);
         if ( rc != 0 )
         {
             ERROR("Error closing event channel");
@@ -239,17 +239,17 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     xenaccess->xc_handle = xch;
 
     /* Set domain id */
-    xenaccess->mem_event.domain_id = domain_id;
+    xenaccess->vm_event.domain_id = domain_id;
 
     /* Initialise lock */
-    mem_event_ring_lock_init(&xenaccess->mem_event);
+    vm_event_ring_lock_init(&xenaccess->vm_event);
 
     /* Enable mem_access */
-    xenaccess->mem_event.ring_page =
+    xenaccess->vm_event.ring_page =
             xc_mem_access_enable(xenaccess->xc_handle,
-                                 xenaccess->mem_event.domain_id,
-                                 &xenaccess->mem_event.evtchn_port);
-    if ( xenaccess->mem_event.ring_page == NULL )
+                                 xenaccess->vm_event.domain_id,
+                                 &xenaccess->vm_event.evtchn_port);
+    if ( xenaccess->vm_event.ring_page == NULL )
     {
         switch ( errno ) {
             case EBUSY:
@@ -267,8 +267,8 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     mem_access_enable = 1;
 
     /* Open event channel */
-    xenaccess->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( xenaccess->mem_event.xce_handle == NULL )
+    xenaccess->vm_event.xce_handle = xc_evtchn_open(NULL, 0);
+    if ( xenaccess->vm_event.xce_handle == NULL )
     {
         ERROR("Failed to open event channel");
         goto err;
@@ -276,21 +276,21 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     evtchn_open = 1;
 
     /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(xenaccess->mem_event.xce_handle,
-                                    xenaccess->mem_event.domain_id,
-                                    xenaccess->mem_event.evtchn_port);
+    rc = xc_evtchn_bind_interdomain(xenaccess->vm_event.xce_handle,
+                                    xenaccess->vm_event.domain_id,
+                                    xenaccess->vm_event.evtchn_port);
     if ( rc < 0 )
     {
         ERROR("Failed to bind event channel");
         goto err;
     }
     evtchn_bind = 1;
-    xenaccess->mem_event.port = rc;
+    xenaccess->vm_event.port = rc;
 
     /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)xenaccess->mem_event.ring_page);
-    BACK_RING_INIT(&xenaccess->mem_event.back_ring,
-                   (mem_event_sring_t *)xenaccess->mem_event.ring_page,
+    SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_page);
+    BACK_RING_INIT(&xenaccess->vm_event.back_ring,
+                   (vm_event_sring_t *)xenaccess->vm_event.ring_page,
                    XC_PAGE_SIZE);
 
     /* Get domaininfo */
@@ -320,14 +320,14 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     return NULL;
 }
 
-int get_request(mem_event_t *mem_event, mem_event_request_t *req)
+int get_request(vm_event_t *vm_event, vm_event_request_t *req)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    mem_event_ring_lock(mem_event);
+    vm_event_ring_lock(vm_event);
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
     /* Copy request */
@@ -338,19 +338,19 @@ int get_request(mem_event_t *mem_event, mem_event_request_t *req)
     back_ring->req_cons = req_cons;
     back_ring->sring->req_event = req_cons + 1;
 
-    mem_event_ring_unlock(mem_event);
+    vm_event_ring_unlock(vm_event);
 
     return 0;
 }
 
-static int put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
+static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    mem_event_ring_lock(mem_event);
+    vm_event_ring_lock(vm_event);
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
     /* Copy response */
@@ -361,24 +361,24 @@ static int put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
     back_ring->rsp_prod_pvt = rsp_prod;
     RING_PUSH_RESPONSES(back_ring);
 
-    mem_event_ring_unlock(mem_event);
+    vm_event_ring_unlock(vm_event);
 
     return 0;
 }
 
-static int xenaccess_resume_page(xenaccess_t *paging, mem_event_response_t *rsp)
+static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
 {
     int ret;
 
     /* Put the page info on the ring */
-    ret = put_response(&paging->mem_event, rsp);
+    ret = put_response(&paging->vm_event, rsp);
     if ( ret != 0 )
         goto out;
 
     /* Tell Xen page is ready */
-    ret = xc_mem_access_resume(paging->xc_handle, paging->mem_event.domain_id);
-    ret = xc_evtchn_notify(paging->mem_event.xce_handle,
-                           paging->mem_event.port);
+    ret = xc_mem_access_resume(paging->xc_handle, paging->vm_event.domain_id);
+    ret = xc_evtchn_notify(paging->vm_event.xce_handle,
+                           paging->vm_event.port);
 
  out:
     return ret;
@@ -400,8 +400,8 @@ int main(int argc, char *argv[])
     struct sigaction act;
     domid_t domain_id;
     xenaccess_t *xenaccess;
-    mem_event_request_t req;
-    mem_event_response_t rsp;
+    vm_event_request_t req;
+    vm_event_response_t rsp;
     int rc = -1;
     int rc1;
     xc_interface *xch;
@@ -507,7 +507,7 @@ int main(int argc, char *argv[])
         rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
     if ( rc < 0 )
     {
-        ERROR("Error %d setting int3 mem_event\n", rc);
+        ERROR("Error %d setting int3 vm_event\n", rc);
         goto exit;
     }
 
@@ -527,7 +527,7 @@ int main(int argc, char *argv[])
             shutting_down = 1;
         }
 
-        rc = xc_wait_for_event_or_timeout(xch, xenaccess->mem_event.xce_handle, 100);
+        rc = xc_wait_for_event_or_timeout(xch, xenaccess->vm_event.xce_handle, 100);
         if ( rc < -1 )
         {
             ERROR("Error getting event");
@@ -539,11 +539,11 @@ int main(int argc, char *argv[])
             DPRINTF("Got event from Xen\n");
         }
 
-        while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->mem_event.back_ring) )
+        while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->vm_event.back_ring) )
         {
             xenmem_access_t access;
 
-            rc = get_request(&xenaccess->mem_event, &req);
+            rc = get_request(&xenaccess->vm_event, &req);
             if ( rc != 0 )
             {
                 ERROR("Error getting request");
@@ -551,20 +551,20 @@ int main(int argc, char *argv[])
                 continue;
             }
 
-            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
+            if ( req.version != VM_EVENT_INTERFACE_VERSION )
             {
-                ERROR("Error: mem_event interface version mismatch!\n");
+                ERROR("Error: vm_event interface version mismatch!\n");
                 interrupted = -1;
                 continue;
             }
 
             memset( &rsp, 0, sizeof (rsp) );
-            rsp.version = MEM_EVENT_INTERFACE_VERSION;
+            rsp.version = VM_EVENT_INTERFACE_VERSION;
             rsp.vcpu_id = req.vcpu_id;
             rsp.flags = req.flags;
 
             switch (req.reason) {
-            case MEM_EVENT_REASON_MEM_ACCESS:
+            case VM_EVENT_REASON_MEM_ACCESS:
                 rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
                 if (rc < 0)
                 {
@@ -602,7 +602,7 @@ int main(int argc, char *argv[])
 
                 rsp.u.mem_access.gfn = req.u.mem_access.gfn;
                 break;
-            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+            case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
                 printf("INT3: rip=%016"PRIx64", gfn=%"PRIx32" (vcpu %d)\n",
                        req.regs.x86.rip,
                        req.u.software_breakpoint.gfn,
diff --git a/tools/xenpaging/pagein.c b/tools/xenpaging/pagein.c
index b3bcef7..7cb0f33 100644
--- a/tools/xenpaging/pagein.c
+++ b/tools/xenpaging/pagein.c
@@ -63,7 +63,7 @@ void page_in_trigger(void)
 
 void create_page_in_thread(struct xenpaging *paging)
 {
-    page_in_args.dom = paging->mem_event.domain_id;
+    page_in_args.dom = paging->vm_event.domain_id;
     page_in_args.pagein_queue = paging->pagein_queue;
     page_in_args.xch = paging->xc_handle;
     if (pthread_create(&page_in_thread, NULL, page_in, &page_in_args) == 0)
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 29ca7c7..723da0b 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -63,7 +63,7 @@ static void close_handler(int sig)
 static void xenpaging_mem_paging_flush_ioemu_cache(struct xenpaging *paging)
 {
     struct xs_handle *xsh = paging->xs_handle;
-    domid_t domain_id = paging->mem_event.domain_id;
+    domid_t domain_id = paging->vm_event.domain_id;
     char path[80];
 
     sprintf(path, "/local/domain/0/device-model/%u/command", domain_id);
@@ -74,7 +74,7 @@ static void xenpaging_mem_paging_flush_ioemu_cache(struct xenpaging *paging)
 static int xenpaging_wait_for_event_or_timeout(struct xenpaging *paging)
 {
     xc_interface *xch = paging->xc_handle;
-    xc_evtchn *xce = paging->mem_event.xce_handle;
+    xc_evtchn *xce = paging->vm_event.xce_handle;
     char **vec, *val;
     unsigned int num;
     struct pollfd fd[2];
@@ -111,7 +111,7 @@ static int xenpaging_wait_for_event_or_timeout(struct xenpaging *paging)
             if ( strcmp(vec[XS_WATCH_TOKEN], watch_token) == 0 )
             {
                 /* If our guest disappeared, set interrupt flag and fall through */
-                if ( xs_is_domain_introduced(paging->xs_handle, paging->mem_event.domain_id) == false )
+                if ( xs_is_domain_introduced(paging->xs_handle, paging->vm_event.domain_id) == false )
                 {
                     xs_unwatch(paging->xs_handle, "@releaseDomain", watch_token);
                     interrupted = SIGQUIT;
@@ -171,7 +171,7 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->mem_event.domain_id, 1, &domain_info);
+    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
     if ( rc != 1 )
     {
         PERROR("Error getting domain info");
@@ -231,7 +231,7 @@ static int xenpaging_getopts(struct xenpaging *paging, int argc, char *argv[])
     {
         switch(ch) {
         case 'd':
-            paging->mem_event.domain_id = atoi(optarg);
+            paging->vm_event.domain_id = atoi(optarg);
             break;
         case 'f':
             filename = strdup(optarg);
@@ -264,7 +264,7 @@ static int xenpaging_getopts(struct xenpaging *paging, int argc, char *argv[])
     }
 
     /* Set domain id */
-    if ( !paging->mem_event.domain_id )
+    if ( !paging->vm_event.domain_id )
     {
         printf("Numerical <domain_id> missing!\n");
         return 1;
@@ -312,7 +312,7 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* write domain ID to watch so we can ignore other domain shutdowns */
-    snprintf(watch_token, sizeof(watch_token), "%u", paging->mem_event.domain_id);
+    snprintf(watch_token, sizeof(watch_token), "%u", paging->vm_event.domain_id);
     if ( xs_watch(paging->xs_handle, "@releaseDomain", watch_token) == false )
     {
         PERROR("Could not bind to shutdown watch\n");
@@ -320,7 +320,7 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Watch xenpaging's working target */
-    dom_path = xs_get_domain_path(paging->xs_handle, paging->mem_event.domain_id);
+    dom_path = xs_get_domain_path(paging->xs_handle, paging->vm_event.domain_id);
     if ( !dom_path )
     {
         PERROR("Could not find domain path\n");
@@ -339,17 +339,17 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
+    xc_get_hvm_param(xch, paging->vm_event.domain_id, 
                         HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
     mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
+    paging->vm_event.ring_page = 
+        xc_map_foreign_batch(xch, paging->vm_event.domain_id, 
                                 PROT_READ | PROT_WRITE, &mmap_pfn, 1);
     if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
     {
         /* Map failed, populate ring page */
         rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
+                                              paging->vm_event.domain_id,
                                               1, 0, 0, &ring_pfn);
         if ( rc != 0 )
         {
@@ -358,8 +358,8 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         }
 
         mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
+        paging->vm_event.ring_page = 
+            xc_map_foreign_batch(xch, paging->vm_event.domain_id, 
                                     PROT_READ | PROT_WRITE, &mmap_pfn, 1);
         if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
         {
@@ -369,8 +369,8 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
     
     /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
+    rc = xc_mem_paging_enable(xch, paging->vm_event.domain_id,
+                             &paging->vm_event.evtchn_port);
     if ( rc != 0 )
     {
         switch ( errno ) {
@@ -394,40 +394,40 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Open event channel */
-    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( paging->mem_event.xce_handle == NULL )
+    paging->vm_event.xce_handle = xc_evtchn_open(NULL, 0);
+    if ( paging->vm_event.xce_handle == NULL )
     {
         PERROR("Failed to open event channel");
         goto err;
     }
 
     /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
-                                    paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
+    rc = xc_evtchn_bind_interdomain(paging->vm_event.xce_handle,
+                                    paging->vm_event.domain_id,
+                                    paging->vm_event.evtchn_port);
     if ( rc < 0 )
     {
         PERROR("Failed to bind event channel");
         goto err;
     }
 
-    paging->mem_event.port = rc;
+    paging->vm_event.port = rc;
 
     /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
+    SHARED_RING_INIT((vm_event_sring_t *)paging->vm_event.ring_page);
+    BACK_RING_INIT(&paging->vm_event.back_ring,
+                   (vm_event_sring_t *)paging->vm_event.ring_page,
                    PAGE_SIZE);
 
     /* Now that the ring is set, remove it from the guest's physmap */
     if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
+                    paging->vm_event.domain_id, 1, 0, &ring_pfn) )
         PERROR("Failed to remove ring from guest physmap");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->mem_event.domain_id, 1,
+        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
                                    &domain_info);
         if ( rc != 1 )
         {
@@ -497,9 +497,9 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
             free(paging->paging_buffer);
         }
 
-        if ( paging->mem_event.ring_page )
+        if ( paging->vm_event.ring_page )
         {
-            munmap(paging->mem_event.ring_page, PAGE_SIZE);
+            munmap(paging->vm_event.ring_page, PAGE_SIZE);
         }
 
         free(dom_path);
@@ -524,28 +524,28 @@ static void xenpaging_teardown(struct xenpaging *paging)
 
     paging->xc_handle = NULL;
     /* Tear down domain paging in Xen */
-    munmap(paging->mem_event.ring_page, PAGE_SIZE);
-    rc = xc_mem_paging_disable(xch, paging->mem_event.domain_id);
+    munmap(paging->vm_event.ring_page, PAGE_SIZE);
+    rc = xc_mem_paging_disable(xch, paging->vm_event.domain_id);
     if ( rc != 0 )
     {
         PERROR("Error tearing down domain paging in xen");
     }
 
     /* Unbind VIRQ */
-    rc = xc_evtchn_unbind(paging->mem_event.xce_handle, paging->mem_event.port);
+    rc = xc_evtchn_unbind(paging->vm_event.xce_handle, paging->vm_event.port);
     if ( rc != 0 )
     {
         PERROR("Error unbinding event port");
     }
-    paging->mem_event.port = -1;
+    paging->vm_event.port = -1;
 
     /* Close event channel */
-    rc = xc_evtchn_close(paging->mem_event.xce_handle);
+    rc = xc_evtchn_close(paging->vm_event.xce_handle);
     if ( rc != 0 )
     {
         PERROR("Error closing event channel");
     }
-    paging->mem_event.xce_handle = NULL;
+    paging->vm_event.xce_handle = NULL;
     
     /* Close connection to xenstore */
     xs_close(paging->xs_handle);
@@ -558,12 +558,12 @@ static void xenpaging_teardown(struct xenpaging *paging)
     }
 }
 
-static void get_request(struct mem_event *mem_event, mem_event_request_t *req)
+static void get_request(struct vm_event *vm_event, vm_event_request_t *req)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
     /* Copy request */
@@ -575,12 +575,12 @@ static void get_request(struct mem_event *mem_event, mem_event_request_t *req)
     back_ring->sring->req_event = req_cons + 1;
 }
 
-static void put_response(struct mem_event *mem_event, mem_event_response_t *rsp)
+static void put_response(struct vm_event *vm_event, vm_event_response_t *rsp)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
     /* Copy response */
@@ -607,7 +607,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     DECLARE_DOMCTL;
 
     /* Nominate page */
-    ret = xc_mem_paging_nominate(xch, paging->mem_event.domain_id, gfn);
+    ret = xc_mem_paging_nominate(xch, paging->vm_event.domain_id, gfn);
     if ( ret < 0 )
     {
         /* unpageable gfn is indicated by EBUSY */
@@ -619,7 +619,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     }
 
     /* Map page */
-    page = xc_map_foreign_pages(xch, paging->mem_event.domain_id, PROT_READ, &victim, 1);
+    page = xc_map_foreign_pages(xch, paging->vm_event.domain_id, PROT_READ, &victim, 1);
     if ( page == NULL )
     {
         PERROR("Error mapping page %lx", gfn);
@@ -641,7 +641,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     munmap(page, PAGE_SIZE);
 
     /* Tell Xen to evict page */
-    ret = xc_mem_paging_evict(xch, paging->mem_event.domain_id, gfn);
+    ret = xc_mem_paging_evict(xch, paging->vm_event.domain_id, gfn);
     if ( ret < 0 )
     {
         /* A gfn in use is indicated by EBUSY */
@@ -671,10 +671,10 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     return ret;
 }
 
-static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t *rsp, int notify_policy)
+static int xenpaging_resume_page(struct xenpaging *paging, vm_event_response_t *rsp, int notify_policy)
 {
     /* Put the page info on the ring */
-    put_response(&paging->mem_event, rsp);
+    put_response(&paging->vm_event, rsp);
 
     /* Notify policy of page being paged in */
     if ( notify_policy )
@@ -693,7 +693,7 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
     }
 
     /* Tell Xen page is ready */
-    return xc_evtchn_notify(paging->mem_event.xce_handle, paging->mem_event.port);
+    return xc_evtchn_notify(paging->vm_event.xce_handle, paging->vm_event.port);
 }
 
 static int xenpaging_populate_page(struct xenpaging *paging, unsigned long gfn, int i)
@@ -715,7 +715,7 @@ static int xenpaging_populate_page(struct xenpaging *paging, unsigned long gfn,
     do
     {
         /* Tell Xen to allocate a page for the domain */
-        ret = xc_mem_paging_load(xch, paging->mem_event.domain_id, gfn, paging->paging_buffer);
+        ret = xc_mem_paging_load(xch, paging->vm_event.domain_id, gfn, paging->paging_buffer);
         if ( ret < 0 )
         {
             if ( errno == ENOMEM )
@@ -857,8 +857,8 @@ int main(int argc, char *argv[])
 {
     struct sigaction act;
     struct xenpaging *paging;
-    mem_event_request_t req;
-    mem_event_response_t rsp;
+    vm_event_request_t req;
+    vm_event_response_t rsp;
     int num, prev_num = 0;
     int slot;
     int tot_pages;
@@ -875,7 +875,7 @@ int main(int argc, char *argv[])
     xch = paging->xc_handle;
 
     DPRINTF("starting %s for domain_id %u with pagefile %s\n",
-            argv[0], paging->mem_event.domain_id, filename);
+            argv[0], paging->vm_event.domain_id, filename);
 
     /* ensure that if we get a signal, we'll do cleanup, then exit */
     act.sa_handler = close_handler;
@@ -904,12 +904,12 @@ int main(int argc, char *argv[])
             DPRINTF("Got event from Xen\n");
         }
 
-        while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
+        while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->vm_event.back_ring) )
         {
             /* Indicate possible error */
             rc = 1;
 
-            get_request(&paging->mem_event, &req);
+            get_request(&paging->vm_event, &req);
 
             if ( req.u.mem_paging.gfn > paging->max_pages )
             {
@@ -971,12 +971,12 @@ int main(int argc, char *argv[])
                 DPRINTF("page %s populated (domain = %d; vcpu = %d;"
                         " gfn = %"PRIx32"; paused = %d; evict_fail = %d)\n",
                         req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ? "not" : "already",
-                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
-                        !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
+                        paging->vm_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
+                        !!(req.flags & VM_EVENT_FLAG_VCPU_PAUSED) ,
                         !!(req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL) );
 
                 /* Tell Xen to resume the vcpu */
-                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) ||
+                if (( req.flags & VM_EVENT_FLAG_VCPU_PAUSED ) ||
                     ( req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ))
                 {
                     /* Prepare the response */
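
Taken together, the xenpaging hunks above leave the main loop's shape unchanged: drain the back ring, service each request, and reply only when Xen is waiting on us. A condensed sketch of that loop (handle_request() is a hypothetical stand-in for the paging logic; get_request()/put_response() are the helpers shown earlier in this file):

    /* Sketch: one drain pass over the vm_event back ring, mirroring the
     * xenpaging main loop above.  handle_request() is hypothetical. */
    while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->vm_event.back_ring) )
    {
        vm_event_request_t req;
        vm_event_response_t rsp;

        get_request(&paging->vm_event, &req);
        handle_request(paging, &req, &rsp);

        /* Reply (and kick the event channel) only if Xen paused a vcpu
         * or tagged the request for the pager. */
        if ( (req.flags & VM_EVENT_FLAG_VCPU_PAUSED) ||
             (req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL) )
        {
            put_response(&paging->vm_event, &rsp);
            xc_evtchn_notify(paging->vm_event.xce_handle,
                             paging->vm_event.port);
        }
    }
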
diff --git a/tools/xenpaging/xenpaging.h b/tools/xenpaging/xenpaging.h
index 877db2f..25d511d 100644
--- a/tools/xenpaging/xenpaging.h
+++ b/tools/xenpaging/xenpaging.h
@@ -27,15 +27,15 @@
 
 #include <xc_private.h>
 #include <xen/event_channel.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 #define XENPAGING_PAGEIN_QUEUE_SIZE 64
 
-struct mem_event {
+struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
     int port;
-    mem_event_back_ring_t back_ring;
+    vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
 };
@@ -51,7 +51,7 @@ struct xenpaging {
 
     void *paging_buffer;
 
-    struct mem_event mem_event;
+    struct vm_event vm_event;
     int fd;
     /* number of pages for which data structures were allocated */
     int max_pages;
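
The renamed struct above bundles everything a ring consumer tracks. For orientation, the setup sequence xenpaging_init() performs against it condenses to the following sketch (error paths and the populate-on-XTAB retry are omitted; xch is the open xc_interface):

    /* Sketch: minimal vm_event ring setup, condensed from xenpaging_init(). */
    struct vm_event *ve = &paging->vm_event;
    unsigned long ring_pfn, mmap_pfn;

    /* Locate and map the ring page advertised via the HVM param. */
    xc_get_hvm_param(xch, ve->domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
    mmap_pfn = ring_pfn;
    ve->ring_page = xc_map_foreign_batch(xch, ve->domain_id,
                                         PROT_READ | PROT_WRITE, &mmap_pfn, 1);

    /* Enable paging in Xen; this hands back the event channel port. */
    xc_mem_paging_enable(xch, ve->domain_id, &ve->evtchn_port);

    /* Bind a local event channel to Xen's port and initialise the ring. */
    ve->xce_handle = xc_evtchn_open(NULL, 0);
    ve->port = xc_evtchn_bind_interdomain(ve->xce_handle, ve->domain_id,
                                          ve->evtchn_port);
    SHARED_RING_INIT((vm_event_sring_t *)ve->ring_page);
    BACK_RING_INIT(&ve->back_ring, (vm_event_sring_t *)ve->ring_page,
                   PAGE_SIZE);
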
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index cfe7945..97fa25c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -421,7 +421,7 @@ int vcpu_initialise(struct vcpu *v)
     v->arch.flags = TF_kernel_mode;
 
     /* By default, do not emulate */
-    v->arch.mem_event.emulate_flags = 0;
+    v->arch.vm_event.emulate_flags = 0;
 
     rc = mapcache_vcpu_init(v);
     if ( rc )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index a1c5db0..2a30f50 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,8 +30,8 @@
 #include <xen/hypercall.h> /* for arch_do_domctl */
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
-#include <public/mem_event.h>
+#include <xen/vm_event.h>
+#include <public/vm_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 2ed4344..a2b3088 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -407,7 +407,7 @@ static int hvmemul_virtual_to_linear(
      * The chosen maximum is very conservative but it's what we use in
      * hvmemul_linear_to_phys() so there is no point in using a larger value.
      * If introspection has been enabled for this domain, *reps should be
-     * at most 1, since optimization might otherwise cause a single mem_event
+     * at most 1, since optimization might otherwise cause a single vm_event
      * being triggered for repeated writes to a whole page.
      */
     *reps = min_t(unsigned long, *reps,
@@ -1521,7 +1521,7 @@ int hvm_emulate_one_no_write(
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops_no_write);
 }
 
-void hvm_mem_event_emulate_one(bool_t nowrite, unsigned int trapnr,
+void hvm_mem_access_emulate_one(bool_t nowrite, unsigned int trapnr,
     unsigned int errcode)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
@@ -1538,7 +1538,7 @@ void hvm_mem_event_emulate_one(bool_t nowrite, unsigned int trapnr,
     {
     case X86EMUL_RETRY:
         /*
-         * This function is called when handling an EPT-related mem_event
+         * This function is called when handling an EPT-related vm_event
          * reply. As such, nothing else needs to be done here, since simply
          * returning makes the current instruction cause a page fault again,
          * consistent with X86EMUL_RETRY.
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0379de7..427da2c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -35,7 +35,7 @@
 #include <xen/paging.h>
 #include <xen/cpu.h>
 #include <xen/wait.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/rangeset.h>
 #include <asm/shadow.h>
@@ -66,7 +66,7 @@
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
 #include <public/memory.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <public/arch-x86/cpuid.h>
 
 bool_t __read_mostly hvm_enabled;
@@ -2774,7 +2774,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct p2m_domain *p2m;
     int rc, fall_through = 0, paged = 0;
     int sharing_enomem = 0;
-    mem_event_request_t *req_ptr = NULL;
+    vm_event_request_t *req_ptr = NULL;
 
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -2844,7 +2844,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     {
         bool_t violation;
 
-        /* If the access is against the permissions, then send to mem_event */
+        /* If the access is against the permissions, then send to vm_event */
         switch (p2ma)
         {
         case p2m_access_n:
@@ -6319,7 +6319,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     return rc;
 }
 
-static void hvm_mem_event_fill_regs(mem_event_request_t *req)
+static void hvm_mem_event_fill_regs(vm_event_request_t *req)
 {
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     const struct vcpu *curr = current;
@@ -6351,7 +6351,7 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
     req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
 }
 
-static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
+static int hvm_memory_event_traps(uint64_t parameters, vm_event_request_t *req)
 {
     int rc;
     struct vcpu *v = current;
@@ -6360,7 +6360,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    rc = mem_event_claim_slot(d, &d->mem_event->monitor);
+    rc = vm_event_claim_slot(d, &d->vm_event->monitor);
     if ( rc == -ENOSYS )
     {
         /* If there was no ring to handle the event, then
@@ -6372,12 +6372,12 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
 
     if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
     {
-        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
-        mem_event_vcpu_pause(v);
+        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
     }
 
     hvm_mem_event_fill_regs(req);
-    mem_event_put_request(d, &d->mem_event->monitor, req);
+    vm_event_put_request(d, &d->vm_event->monitor, req);
 
     return 1;
 }
@@ -6385,7 +6385,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
 static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
                                 unsigned long old)
 {
-    mem_event_request_t req = {
+    vm_event_request_t req = {
         .reason = reason,
         .vcpu_id = current->vcpu_id,
         .u.mov_to_cr.new_value = value,
@@ -6395,15 +6395,15 @@ static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
 
     switch(reason)
     {
-    case MEM_EVENT_REASON_MOV_TO_CR0:
+    case VM_EVENT_REASON_MOV_TO_CR0:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR0];
         break;
-    case MEM_EVENT_REASON_MOV_TO_CR3:
+    case VM_EVENT_REASON_MOV_TO_CR3:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR3];
         break;
-    case MEM_EVENT_REASON_MOV_TO_CR4:
+    case VM_EVENT_REASON_MOV_TO_CR4:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR4];
         break;
@@ -6417,23 +6417,23 @@ static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
 
 void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
 }
 
 void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
 }
 
 void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
 }
 
 void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 {
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MOV_TO_MSR,
         .vcpu_id = current->vcpu_id,
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
@@ -6447,8 +6447,8 @@ void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
         .vcpu_id = current->vcpu_id,
         .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
     };
@@ -6461,8 +6461,8 @@ int hvm_memory_event_int3(unsigned long gla)
 int hvm_memory_event_single_step(unsigned long gla)
 {
     uint32_t pfec = PFEC_page_present;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_SINGLESTEP,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SINGLESTEP,
         .vcpu_id = current->vcpu_id,
         .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
     };
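
On the producer side, every hvm_memory_event_* helper above funnels through the same claim-fill-put sequence. A condensed sketch of that contract, following hvm_memory_event_traps() in this file ('sync' stands in for the HVMPME_mode_sync test in the real code):

    /* Sketch: synchronous delivery of a monitor event. */
    struct vcpu *v = current;
    struct domain *d = v->domain;
    vm_event_request_t req = {
        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
        .vcpu_id = v->vcpu_id,
    };
    int rc = vm_event_claim_slot(d, &d->vm_event->monitor);

    if ( rc == -ENOSYS )
        return 0;               /* no listener ring registered */
    else if ( rc < 0 )
        return rc;              /* ring exists but no room */

    if ( sync )
    {
        req.flags |= VM_EVENT_FLAG_VCPU_PAUSED;
        vm_event_vcpu_pause(v); /* vcpu sleeps until the listener replies */
    }

    vm_event_put_request(d, &d->vm_event->monitor, &req);
    return 1;
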
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e0a33e3..63007a9 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -25,7 +25,7 @@
 #include <xen/event.h>
 #include <xen/kernel.h>
 #include <xen/keyhandler.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/current.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
@@ -715,7 +715,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
         return;
 
     if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
-         mem_event_check_ring(&d->mem_event->monitor) )
+         vm_event_check_ring(&d->vm_event->monitor) )
     {
         unsigned int i;
 
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index cbbc4e9..40adac3 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -17,9 +17,9 @@
  * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
  * Place - Suite 330, Boston, MA 02111-1307 USA.
  */
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 9c1ec11..cb28943 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -19,9 +19,9 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 87a7b72..e63d8c1 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,13 +22,13 @@
 
 
 #include <asm/p2m.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 
 int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
 {
     int rc = -ENODEV;
-    if ( unlikely(!d->mem_event->paging.ring_page) )
+    if ( unlikely(!d->vm_event->paging.ring_page) )
         return rc;
 
     switch( mpo->op )
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 8a192ef..4e5477a 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -28,7 +28,7 @@
 #include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/rcupdate.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
@@ -559,24 +559,24 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_SHARING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_SHARING,
         .vcpu_id = v->vcpu_id,
         .u.mem_sharing.gfn = gfn,
         .u.mem_sharing.p2mt = p2m_ram_shared
     };
 
-    if ( (rc = __mem_event_claim_slot(d, 
-                        &d->mem_event->share, allow_sleep)) < 0 )
+    if ( (rc = __vm_event_claim_slot(d, 
+                        &d->vm_event->share, allow_sleep)) < 0 )
         return rc;
 
     if ( v->domain == d )
     {
-        req.flags = MEM_EVENT_FLAG_VCPU_PAUSED;
-        mem_event_vcpu_pause(v);
+        req.flags = VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
     }
 
-    mem_event_put_request(d, &d->mem_event->share, &req);
+    vm_event_put_request(d, &d->vm_event->share, &req);
 
     return 0;
 }
@@ -593,20 +593,20 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
 
 int mem_sharing_sharing_resume(struct domain *d)
 {
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
 
     /* Get all requests off the ring */
-    while ( mem_event_get_response(d, &d->mem_event->share, &rsp) )
+    while ( vm_event_get_response(d, &d->vm_event->share, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -616,8 +616,8 @@ int mem_sharing_sharing_resume(struct domain *d)
         v = d->vcpu[rsp.vcpu_id];
 
         /* Unpause domain/vcpu */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 
     return 0;
@@ -1144,7 +1144,7 @@ err_out:
 
 /* A note on the rationale for unshare error handling:
  *  1. Unshare can only fail with ENOMEM. Any other error conditions BUG_ON()'s
- *  2. We notify a potential dom0 helper through a mem_event ring. But we
+ *  2. We notify a potential dom0 helper through a vm_event ring. But we
  *     allow the notification to not go to sleep. If the event ring is full 
  *     of ENOMEM warnings, then it's on the ball.
  *  3. We cannot go to sleep until the unshare is resolved, because we might
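
The slot discipline behind this rationale is worth spelling out, since the renamed helpers keep it unchanged: a claimed slot is a reservation, and must either be consumed by a request or handed back, never leaked. A sketch (still_needed is a hypothetical predicate):

    /* Sketch: the claim/put/cancel protocol around a vm_event ring. */
    if ( __vm_event_claim_slot(d, &d->vm_event->share, allow_sleep) < 0 )
        return;     /* no ring, or no room and sleeping not allowed */

    if ( still_needed )
        vm_event_put_request(d, &d->vm_event->share, &req);  /* consumes slot */
    else
        vm_event_cancel_slot(d, &d->vm_event->share);        /* returns slot */
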
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 43f507c..0679f00 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -21,9 +21,9 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 26fb18d..e50b6fa 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -26,10 +26,10 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
 #include <xen/trace.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 5851c66..1974bfb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -25,9 +25,9 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
@@ -1081,8 +1081,8 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
                                 p2m_type_t p2mt)
 {
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_PAGING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_PAGING,
         .u.mem_paging.gfn = gfn
     };
 
@@ -1090,7 +1090,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
      * correctness of the guest execution at this point.  If this is the only
      * page that happens to be paged out, we'll be okay, but it's likely the
      * guest will crash shortly anyway. */
-    int rc = mem_event_claim_slot(d, &d->mem_event->paging);
+    int rc = vm_event_claim_slot(d, &d->vm_event->paging);
     if ( rc < 0 )
         return;
 
@@ -1104,7 +1104,7 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
         /* Evict will fail now, tag this request for pager */
         req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
 
-    mem_event_put_request(d, &d->mem_event->paging, &req);
+    vm_event_put_request(d, &d->vm_event->paging, &req);
 }
 
 /**
@@ -1131,8 +1131,8 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
 void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_PAGING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_PAGING,
         .u.mem_paging.gfn = gfn
     };
     p2m_type_t p2mt;
@@ -1141,7 +1141,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* We're paging. There should be a ring */
-    int rc = mem_event_claim_slot(d, &d->mem_event->paging);
+    int rc = vm_event_claim_slot(d, &d->vm_event->paging);
     if ( rc == -ENOSYS )
     {
         gdprintk(XENLOG_ERR, "Domain %hu paging gfn %lx yet no ring "
@@ -1172,14 +1172,14 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     /* Pause domain if request came from guest and gfn has paging type */
     if ( p2m_is_paging(p2mt) && v->domain == d )
     {
-        mem_event_vcpu_pause(v);
-        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
+        req.flags |= VM_EVENT_FLAG_VCPU_PAUSED;
     }
     /* No need to inform pager if the gfn is not in the page-out path */
     else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
     {
         /* gfn is already on its way back and vcpu is not paused */
-        mem_event_cancel_slot(d, &d->mem_event->paging);
+        vm_event_cancel_slot(d, &d->vm_event->paging);
         return;
     }
 
@@ -1187,7 +1187,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     req.u.mem_paging.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &d->mem_event->paging, &req);
+    vm_event_put_request(d, &d->vm_event->paging, &req);
 }
 
 /**
@@ -1296,23 +1296,23 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 void p2m_mem_paging_resume(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
 
     /* Pull all responses off the ring */
-    while( mem_event_get_response(d, &d->mem_event->paging, &rsp) )
+    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -1339,12 +1339,12 @@ void p2m_mem_paging_resume(struct domain *d)
             gfn_unlock(p2m, gfn, 0);
         }
         /* Unpause domain */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 }
 
-static void p2m_mem_event_fill_regs(mem_event_request_t *req)
+static void p2m_vm_event_fill_regs(vm_event_request_t *req)
 {
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     struct segment_register seg;
@@ -1399,15 +1399,14 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
     req->regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp)
+void p2m_vm_event_emulate_check(struct vcpu *v, const vm_event_response_t *rsp)
 {
     /* Mark vcpu for skipping one instruction upon rescheduling. */
     if ( rsp->flags & MEM_ACCESS_EMULATE )
     {
         xenmem_access_t access;
         bool_t violation = 1;
-        const struct mem_event_mem_access *data = &rsp->u.mem_access;
+        const struct vm_event_mem_access *data = &rsp->u.mem_access;
 
         if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
         {
@@ -1450,7 +1449,7 @@ void p2m_mem_event_emulate_check(struct vcpu *v,
             }
         }
 
-        v->arch.mem_event.emulate_flags = violation ? rsp->flags : 0;
+        v->arch.vm_event.emulate_flags = violation ? rsp->flags : 0;
     }
 }
 
@@ -1465,7 +1464,7 @@ void p2m_setup_introspection(struct domain *d)
 
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
-                            mem_event_request_t **req_ptr)
+                            vm_event_request_t **req_ptr)
 {
     struct vcpu *v = current;
     unsigned long gfn = gpa >> PAGE_SHIFT;
@@ -1474,7 +1473,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     mfn_t mfn;
     p2m_type_t p2mt;
     p2m_access_t p2ma;
-    mem_event_request_t *req;
+    vm_event_request_t *req;
     int rc;
     unsigned long eip = guest_cpu_user_regs()->eip;
 
@@ -1501,13 +1500,13 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     gfn_unlock(p2m, gfn, 0);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr ) 
+    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr ) 
     {
         /* No listener */
         if ( p2m->access_required ) 
         {
             gdprintk(XENLOG_INFO, "Memory access permissions failure, "
-                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  "no vm_event listener VCPU %d, dom %d\n",
                                   v->vcpu_id, d->domain_id);
             domain_crash(v->domain);
             return 0;
@@ -1530,40 +1529,40 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         }
     }
 
-    /* The previous mem_event reply does not match the current state. */
-    if ( v->arch.mem_event.gpa != gpa || v->arch.mem_event.eip != eip )
+    /* The previous vm_event reply does not match the current state. */
+    if ( v->arch.vm_event.gpa != gpa || v->arch.vm_event.eip != eip )
     {
-        /* Don't emulate the current instruction, send a new mem_event. */
-        v->arch.mem_event.emulate_flags = 0;
+        /* Don't emulate the current instruction, send a new vm_event. */
+        v->arch.vm_event.emulate_flags = 0;
 
         /*
          * Make sure to mark the current state to match it again against
-         * the new mem_event about to be sent.
+         * the new vm_event about to be sent.
          */
-        v->arch.mem_event.gpa = gpa;
-        v->arch.mem_event.eip = eip;
+        v->arch.vm_event.gpa = gpa;
+        v->arch.vm_event.eip = eip;
     }
 
-    if ( v->arch.mem_event.emulate_flags )
+    if ( v->arch.vm_event.emulate_flags )
     {
-        hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
-                                   MEM_ACCESS_EMULATE_NOWRITE) != 0,
-                                  TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
+        hvm_mem_access_emulate_one((v->arch.vm_event.emulate_flags &
+                                    MEM_ACCESS_EMULATE_NOWRITE) != 0,
+                                   TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
 
-        v->arch.mem_event.emulate_flags = 0;
+        v->arch.vm_event.emulate_flags = 0;
         return 1;
     }
 
     *req_ptr = NULL;
-    req = xzalloc(mem_event_request_t);
+    req = xzalloc(vm_event_request_t);
     if ( req )
     {
         *req_ptr = req;
-        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
+        req->reason = VM_EVENT_REASON_MEM_ACCESS;
 
         /* Pause the current VCPU */
         if ( p2ma != p2m_access_n2rwx )
-            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+            req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
 
         /* Send request to mem event */
         req->u.mem_access.gfn = gfn;
@@ -1583,12 +1582,12 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
-        p2m_mem_event_fill_regs(req);
+        p2m_vm_event_fill_regs(req);
     }
 
     /* Pause the current VCPU */
     if ( p2ma != p2m_access_n2rwx )
-        mem_event_vcpu_pause(v);
+        vm_event_vcpu_pause(v);
 
     /* VCPU may be paused, return whether we promoted automatically */
     return (p2ma == p2m_access_n2rwx);
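
The response-drain loop implemented by p2m_mem_paging_resume() above recurs nearly verbatim in mem_sharing_sharing_resume() and mem_access_resume(); only the per-ring handling differs. The shared skeleton looks roughly as follows (the explicit vcpu_id bound check is an assumption here, standing in for the validation the real code performs):

    /* Sketch: the common drain pattern of the *_resume() paths. */
    vm_event_response_t rsp;

    while ( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
    {
        struct vcpu *v;

        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
            continue;                       /* producer/consumer mismatch */
        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
            continue;                       /* placeholder response */
        if ( rsp.vcpu_id >= d->max_vcpus )  /* assumed validation */
            continue;

        v = d->vcpu[rsp.vcpu_id];
        /* ... per-ring handling (p2m fixup, emulate check, etc.) ... */

        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
            vm_event_vcpu_unpause(v);
    }
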
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 96cec31..85f138b 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,5 +1,5 @@
 #include <xen/event.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
@@ -192,7 +192,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
         if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
@@ -206,7 +206,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit(); 
-        rc = do_mem_event_op(cmd, mso.domain, &mso);
+        rc = do_vm_event_op(cmd, mso.domain, &mso);
         if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 2fa1f67..1e2bd1a 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,7 +26,7 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
@@ -988,7 +988,7 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         xen_mem_paging_op_t mpo;
         if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
         if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
@@ -1001,7 +1001,7 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit(); 
-        rc = do_mem_event_op(cmd, mso.domain, &mso);
+        rc = do_vm_event_op(cmd, mso.domain, &mso);
         if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 1956091..e5bd75b 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -54,7 +54,7 @@ obj-y += rbtree.o
 obj-y += lzo.o
 obj-$(HAS_PDX) += pdx.o
 obj-$(HAS_MEM_ACCESS) += mem_access.o
-obj-$(HAS_MEM_ACCESS) += mem_event.o
+obj-$(HAS_MEM_ACCESS) += vm_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0b05681..60bf00f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,7 +15,7 @@
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/time.h>
 #include <xen/console.h>
 #include <xen/softirq.h>
@@ -344,8 +344,8 @@ struct domain *domain_create(
         poolid = 0;
 
         err = -ENOMEM;
-        d->mem_event = xzalloc(struct mem_event_per_domain);
-        if ( !d->mem_event )
+        d->vm_event = xzalloc(struct vm_event_per_domain);
+        if ( !d->vm_event )
             goto fail;
 
         d->pbuf = xzalloc_array(char, DOMAIN_PBUF_SIZE);
@@ -387,7 +387,7 @@ struct domain *domain_create(
     if ( hardware_domain == d )
         hardware_domain = old_hwdom;
     atomic_set(&d->refcnt, DOMAIN_DESTROYED);
-    xfree(d->mem_event);
+    xfree(d->vm_event);
     xfree(d->pbuf);
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
@@ -629,7 +629,7 @@ int domain_kill(struct domain *d)
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
-        mem_event_cleanup(d);
+        vm_event_cleanup(d);
         put_domain(d);
         send_global_virq(VIRQ_DOM_EXC);
         /* fallthrough */
@@ -808,7 +808,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     free_xenoprof_pages(d);
 #endif
 
-    xfree(d->mem_event);
+    xfree(d->vm_event);
     xfree(d->pbuf);
 
     for ( i = d->max_vcpus - 1; i >= 0; i-- )
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 33ecd45..85afd68 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -24,7 +24,7 @@
 #include <xen/bitmap.h>
 #include <xen/paging.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -1114,9 +1114,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
-    case XEN_DOMCTL_mem_event_op:
-        ret = mem_event_domctl(d, &op->u.mem_event_op,
-                               guest_handle_cast(u_domctl, void));
+    case XEN_DOMCTL_vm_event_op:
+        ret = vm_event_domctl(d, &op->u.vm_event_op,
+                              guest_handle_cast(u_domctl, void));
         copyback = 1;
         break;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 0e83d70..a6d82d1 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -24,27 +24,27 @@
 #include <xen/sched.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <public/memory.h>
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
 void mem_access_resume(struct domain *d)
 {
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
 
     /* Pull all responses off the ring. */
-    while ( mem_event_get_response(d, &d->mem_event->monitor, &rsp) )
+    while ( vm_event_get_response(d, &d->vm_event->monitor, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
+            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -53,11 +53,11 @@ void mem_access_resume(struct domain *d)
 
         v = d->vcpu[rsp.vcpu_id];
 
-        p2m_mem_event_emulate_check(v, &rsp);
+        p2m_vm_event_emulate_check(v, &rsp);
 
         /* Unpause domain. */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 }
 
@@ -80,12 +80,12 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
     if ( rc )
         goto out;
 
     rc = -ENODEV;
-    if ( unlikely(!d->mem_event->monitor.ring_page) )
+    if ( unlikely(!d->vm_event->monitor.ring_page) )
         goto out;
 
     switch ( mao.op )
@@ -150,13 +150,13 @@ int mem_access_memop(unsigned long cmd,
     return rc;
 }
 
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+int mem_access_send_req(struct domain *d, vm_event_request_t *req)
 {
-    int rc = mem_event_claim_slot(d, &d->mem_event->monitor);
+    int rc = vm_event_claim_slot(d, &d->vm_event->monitor);
     if ( rc < 0 )
         return rc;
 
-    mem_event_put_request(d, &d->mem_event->monitor, req);
+    vm_event_put_request(d, &d->vm_event->monitor, req);
 
     return 0;
 }
diff --git a/xen/common/mem_event.c b/xen/common/vm_event.c
similarity index 59%
rename from xen/common/mem_event.c
rename to xen/common/vm_event.c
index ae60c10..195739e 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/vm_event.c
@@ -1,7 +1,7 @@
 /******************************************************************************
- * mem_event.c
+ * vm_event.c
  *
- * Memory event support.
+ * VM event support.
  *
  * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
  *
@@ -24,7 +24,7 @@
 #include <xen/sched.h>
 #include <xen/event.h>
 #include <xen/wait.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
 
@@ -43,14 +43,14 @@
 #define xen_rmb()  rmb()
 #define xen_wmb()  wmb()
 
-#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
+#define vm_event_ring_lock_init(_ved)  spin_lock_init(&(_ved)->ring_lock)
+#define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
+#define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)
 
-static int mem_event_enable(
+static int vm_event_enable(
     struct domain *d,
-    xen_domctl_mem_event_op_t *mec,
-    struct mem_event_domain *med,
+    xen_domctl_vm_event_op_t *vec,
+    struct vm_event_domain *ved,
     int pause_flag,
     int param,
     xen_event_channel_notification_t notification_fn)
@@ -61,7 +61,7 @@ static int mem_event_enable(
     /* Only one helper at a time. If the helper crashed,
      * the ring is in an undefined state and so is the guest.
      */
-    if ( med->ring_page )
+    if ( ved->ring_page )
         return -EBUSY;
 
     /* The parameter defaults to zero, and it should be
@@ -69,16 +69,16 @@ static int mem_event_enable(
     if ( ring_gfn == 0 )
         return -ENOSYS;
 
-    mem_event_ring_lock_init(med);
-    mem_event_ring_lock(med);
+    vm_event_ring_lock_init(ved);
+    vm_event_ring_lock(ved);
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
-                                    &med->ring_page);
+    rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct,
+                                    &ved->ring_page);
     if ( rc < 0 )
         goto err;
 
     /* Set the number of currently blocked vCPUs to 0. */
-    med->blocked = 0;
+    ved->blocked = 0;
 
     /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id,
@@ -86,35 +86,35 @@ static int mem_event_enable(
     if ( rc < 0 )
         goto err;
 
-    med->xen_port = mec->port = rc;
+    ved->xen_port = vec->port = rc;
 
     /* Prepare ring buffer */
-    FRONT_RING_INIT(&med->front_ring,
-                    (mem_event_sring_t *)med->ring_page,
+    FRONT_RING_INIT(&ved->front_ring,
+                    (vm_event_sring_t *)ved->ring_page,
                     PAGE_SIZE);
 
     /* Save the pause flag for this particular ring. */
-    med->pause_flag = pause_flag;
+    ved->pause_flag = pause_flag;
 
     /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&med->wq);
+    init_waitqueue_head(&ved->wq);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
     return 0;
 
  err:
-    destroy_ring_for_helper(&med->ring_page,
-                            med->ring_pg_struct);
-    mem_event_ring_unlock(med);
+    destroy_ring_for_helper(&ved->ring_page,
+                            ved->ring_pg_struct);
+    vm_event_ring_unlock(ved);
 
     return rc;
 }
 
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 {
-    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
-    avail_req -= med->target_producers;
-    avail_req -= med->foreign_producers;
+    int avail_req = RING_FREE_REQUESTS(&ved->front_ring);
+    avail_req -= ved->target_producers;
+    avail_req -= ved->foreign_producers;
 
     BUG_ON(avail_req < 0);
 
@@ -122,18 +122,18 @@ static unsigned int mem_event_ring_available(struct mem_event_domain *med)
 }
 
 /*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * vm_event_wake_blocked() will wake up vcpus waiting for room in the
  * ring. These vCPUs were paused on their way out after placing an event,
  * but need to be resumed where the ring is capable of processing at least
  * one event from them.
  */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved)
 {
     struct vcpu *v;
     int online = d->max_vcpus;
-    unsigned int avail_req = mem_event_ring_available(med);
+    unsigned int avail_req = vm_event_ring_available(ved);
 
-    if ( avail_req == 0 || med->blocked == 0 )
+    if ( avail_req == 0 || ved->blocked == 0 )
         return;
 
     /*
@@ -142,13 +142,13 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
      * memory events are lost (due to the fact that certain types of events
      * cannot be replayed, we need to ensure that there is space in the ring
      * for when they are hit).
-     * See comment below in mem_event_put_request().
+     * See comment below in vm_event_put_request().
      */
     for_each_vcpu ( d, v )
-        if ( test_bit(med->pause_flag, &v->pause_flags) )
+        if ( test_bit(ved->pause_flag, &v->pause_flags) )
             online--;
 
-    ASSERT(online == (d->max_vcpus - med->blocked));
+    ASSERT(online == (d->max_vcpus - ved->blocked));
 
     /* We remember which vcpu last woke up to avoid always scanning linearly
      * from zero and starving higher-numbered vcpus under high load */
@@ -156,22 +156,22 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
     {
         int i, j, k;
 
-        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        for (i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
         {
             k = i % d->max_vcpus;
             v = d->vcpu[k];
             if ( !v )
                 continue;
 
-            if ( !(med->blocked) || online >= avail_req )
+            if ( !(ved->blocked) || online >= avail_req )
                break;
 
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
                 online++;
-                med->blocked--;
-                med->last_vcpu_wake_up = k;
+                ved->blocked--;
+                ved->last_vcpu_wake_up = k;
             }
         }
     }
@@ -182,87 +182,87 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
  * was unable to do so, it is queued on a wait queue.  These are woken as
  * needed, and take precedence over the blocked vCPUs.
  */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
 {
-    unsigned int avail_req = mem_event_ring_available(med);
+    unsigned int avail_req = vm_event_ring_available(ved);
 
     if ( avail_req > 0 )
-        wake_up_nr(&med->wq, avail_req);
+        wake_up_nr(&ved->wq, avail_req);
 }
 
 /*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
+ * vm_event_wake() will wake up all vcpus waiting for the ring to
  * become available.  If we have queued vCPUs, they get top priority. We
  * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * call vm_event_wake() again, ensuring that any blocked vCPUs will get
  * unpaused once all the queued vCPUs have made it through.
  */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
 {
-    if (!list_empty(&med->wq.list))
-        mem_event_wake_queued(d, med);
+    if (!list_empty(&ved->wq.list))
+        vm_event_wake_queued(d, ved);
     else
-        mem_event_wake_blocked(d, med);
+        vm_event_wake_blocked(d, ved);
 }
 
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+static int vm_event_disable(struct domain *d, struct vm_event_domain *ved)
 {
-    if ( med->ring_page )
+    if ( ved->ring_page )
     {
         struct vcpu *v;
 
-        mem_event_ring_lock(med);
+        vm_event_ring_lock(ved);
 
-        if ( !list_empty(&med->wq.list) )
+        if ( !list_empty(&ved->wq.list) )
         {
-            mem_event_ring_unlock(med);
+            vm_event_ring_unlock(ved);
             return -EBUSY;
         }
 
         /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, med->xen_port);
+        free_xen_event_channel(d, ved->xen_port);
 
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
-                med->blocked--;
+                ved->blocked--;
             }
         }
 
-        destroy_ring_for_helper(&med->ring_page,
-                                med->ring_pg_struct);
-        mem_event_ring_unlock(med);
+        destroy_ring_for_helper(&ved->ring_page,
+                                ved->ring_pg_struct);
+        vm_event_ring_unlock(ved);
     }
 
     return 0;
 }
 
-static inline void mem_event_release_slot(struct domain *d,
-                                          struct mem_event_domain *med)
+static inline void vm_event_release_slot(struct domain *d,
+                                         struct vm_event_domain *ved)
 {
     /* Update the accounting */
     if ( current->domain == d )
-        med->target_producers--;
+        ved->target_producers--;
     else
-        med->foreign_producers--;
+        ved->foreign_producers--;
 
     /* Kick any waiters */
-    mem_event_wake(d, med);
+    vm_event_wake(d, ved);
 }
 
 /*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
+ * vm_event_mark_and_pause() tags the vcpu and puts it to sleep.
+ * The vcpu will resume execution in vm_event_wake_waiters().
  */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved)
 {
-    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+    if ( !test_and_set_bit(ved->pause_flag, &v->pause_flags) )
     {
         vcpu_pause_nosync(v);
-        med->blocked++;
+        ved->blocked++;
     }
 }
 
@@ -272,31 +272,31 @@ void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
  * overly full and its continued execution would cause stalling and excessive
  * waiting.  The vCPU will be automatically unpaused when the ring clears.
  */
-void mem_event_put_request(struct domain *d,
-                           struct mem_event_domain *med,
-                           mem_event_request_t *req)
+void vm_event_put_request(struct domain *d,
+                          struct vm_event_domain *ved,
+                          vm_event_request_t *req)
 {
-    mem_event_front_ring_t *front_ring;
+    vm_event_front_ring_t *front_ring;
     int free_req;
     unsigned int avail_req;
     RING_IDX req_prod;
 
     if ( current->domain != d )
     {
-        req->flags |= MEM_EVENT_FLAG_FOREIGN;
+        req->flags |= VM_EVENT_FLAG_FOREIGN;
 #ifndef NDEBUG
-        if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
+        if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) )
             gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
                      d->domain_id, req->vcpu_id);
 #endif
     }
 
-    req->version = MEM_EVENT_INTERFACE_VERSION;
+    req->version = VM_EVENT_INTERFACE_VERSION;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
     /* Due to the reservations, this step must succeed. */
-    front_ring = &med->front_ring;
+    front_ring = &ved->front_ring;
     free_req = RING_FREE_REQUESTS(front_ring);
     ASSERT(free_req > 0);
 
@@ -310,33 +310,33 @@ void mem_event_put_request(struct domain *d,
     RING_PUSH_REQUESTS(front_ring);
 
     /* We've actually *used* our reservation, so release the slot. */
-    mem_event_release_slot(d, med);
+    vm_event_release_slot(d, ved);
 
     /* Give this vCPU a black eye if necessary, on the way out.
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
-    avail_req = mem_event_ring_available(med);
+    avail_req = vm_event_ring_available(ved);
     if( current->domain == d && avail_req < d->max_vcpus )
-        mem_event_mark_and_pause(current, med);
+        vm_event_mark_and_pause(current, ved);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
-    notify_via_xen_event_channel(d, med->xen_port);
+    notify_via_xen_event_channel(d, ved->xen_port);
 }
 
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_event_response_t *rsp)
 {
-    mem_event_front_ring_t *front_ring;
+    vm_event_front_ring_t *front_ring;
     RING_IDX rsp_cons;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
-    front_ring = &med->front_ring;
+    front_ring = &ved->front_ring;
     rsp_cons = front_ring->rsp_cons;
 
     if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
     {
-        mem_event_ring_unlock(med);
+        vm_event_ring_unlock(ved);
         return 0;
     }
 
@@ -350,70 +350,70 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_e
 
     /* Kick any waiters -- since we've just consumed an event,
      * there may be additional space available in the ring. */
-    mem_event_wake(d, med);
+    vm_event_wake(d, ved);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
     return 1;
 }
 
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
 {
-    mem_event_ring_lock(med);
-    mem_event_release_slot(d, med);
-    mem_event_ring_unlock(med);
+    vm_event_ring_lock(ved);
+    vm_event_release_slot(d, ved);
+    vm_event_ring_unlock(ved);
 }
 
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
 {
     unsigned int avail_req;
 
-    if ( !med->ring_page )
+    if ( !ved->ring_page )
         return -ENOSYS;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
-    avail_req = mem_event_ring_available(med);
+    avail_req = vm_event_ring_available(ved);
     if ( avail_req == 0 )
     {
-        mem_event_ring_unlock(med);
+        vm_event_ring_unlock(ved);
         return -EBUSY;
     }
 
     if ( !foreign )
-        med->target_producers++;
+        ved->target_producers++;
     else
-        med->foreign_producers++;
+        ved->foreign_producers++;
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
     return 0;
 }
 
 /* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc)
 {
-    *rc = mem_event_grab_slot(med, 0);
+    *rc = vm_event_grab_slot(ved, 0);
     return *rc;
 }
 
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
+/* Call vm_event_grab_slot() until the ring doesn't exist, or is available. */
+static int vm_event_wait_slot(struct vm_event_domain *ved)
 {
     int rc = -EBUSY;
-    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+    wait_event(ved->wq, vm_event_wait_try_grab(ved, &rc) != -EBUSY);
     return rc;
 }
 
-bool_t mem_event_check_ring(struct mem_event_domain *med)
+bool_t vm_event_check_ring(struct vm_event_domain *ved)
 {
-    return (med->ring_page != NULL);
+    return (ved->ring_page != NULL);
 }
 
 /*
  * Determines whether or not the current vCPU belongs to the target domain,
  * and calls the appropriate wait function.  If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
+ * use vm_event_wait_slot() to reserve a slot.  As long as there is a ring,
  * this function will always return 0 for a guest.  For a non-guest, we check
  * for space and return -EBUSY if the ring is not available.
  *
@@ -422,20 +422,20 @@ bool_t mem_event_check_ring(struct mem_event_domain *med)
  *               0: a spot has been reserved
  *
  */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep)
+int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
+                          bool_t allow_sleep)
 {
     if ( (current->domain == d) && allow_sleep )
-        return mem_event_wait_slot(med);
+        return vm_event_wait_slot(ved);
     else
-        return mem_event_grab_slot(med, (current->domain != d));
+        return vm_event_grab_slot(ved, (current->domain != d));
 }
 
 #ifdef HAS_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->paging.ring_page != NULL) )
         p2m_mem_paging_resume(v->domain);
 }
 #endif
@@ -444,7 +444,7 @@ static void mem_paging_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->monitor.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->monitor.ring_page != NULL) )
         mem_access_resume(v->domain);
 }
 #endif
@@ -453,12 +453,12 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->share.ring_page != NULL) )
         mem_sharing_sharing_resume(v->domain);
 }
 #endif
 
-int do_mem_event_op(int op, uint32_t domain, void *arg)
+int do_vm_event_op(int op, uint32_t domain, void *arg)
 {
     int ret;
     struct domain *d;
@@ -467,7 +467,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
     if ( ret )
         return ret;
 
-    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+    ret = xsm_vm_event_op(XSM_DM_PRIV, d, op);
     if ( ret )
         goto out;
 
@@ -493,10 +493,10 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
 }
 
 /* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
+void vm_event_cleanup(struct domain *d)
 {
 #ifdef HAS_MEM_PAGING
-    if ( d->mem_event->paging.ring_page )
+    if ( d->vm_event->paging.ring_page )
     {
         /* Destroying the wait queue head means waking up all
          * queued vcpus. This will drain the list, allowing
@@ -505,32 +505,32 @@ void mem_event_cleanup(struct domain *d)
          * Finally, because this code path involves previously
          * pausing the domain (domain_kill), unpausing the
          * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->mem_event->paging.wq);
-        (void)mem_event_disable(d, &d->mem_event->paging);
+        destroy_waitqueue_head(&d->vm_event->paging.wq);
+        (void)vm_event_disable(d, &d->vm_event->paging);
     }
 #endif
 #ifdef HAS_MEM_ACCESS
-    if ( d->mem_event->monitor.ring_page )
+    if ( d->vm_event->monitor.ring_page )
     {
-        destroy_waitqueue_head(&d->mem_event->monitor.wq);
-        (void)mem_event_disable(d, &d->mem_event->monitor);
+        destroy_waitqueue_head(&d->vm_event->monitor.wq);
+        (void)vm_event_disable(d, &d->vm_event->monitor);
     }
 #endif
 #ifdef HAS_MEM_SHARING
-    if ( d->mem_event->share.ring_page )
+    if ( d->vm_event->share.ring_page )
     {
-        destroy_waitqueue_head(&d->mem_event->share.wq);
-        (void)mem_event_disable(d, &d->mem_event->share);
+        destroy_waitqueue_head(&d->vm_event->share.wq);
+        (void)vm_event_disable(d, &d->vm_event->share);
     }
 #endif
 }
 
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
+                    XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
-    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+    rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
     if ( rc )
         return rc;
 
@@ -557,17 +557,17 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
     rc = -ENOSYS;
 
-    switch ( mec->mode )
+    switch ( vec->mode )
     {
 #ifdef HAS_MEM_PAGING
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+    case XEN_DOMCTL_VM_EVENT_OP_PAGING:
     {
-        struct mem_event_domain *med = &d->mem_event->paging;
+        struct vm_event_domain *ved = &d->vm_event->paging;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_PAGING_ENABLE:
+        case XEN_VM_EVENT_PAGING_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -591,16 +591,16 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( p2m->pod.entry_count )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
-                                    HVM_PARAM_PAGING_RING_PFN,
-                                    mem_paging_notification);
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_paging,
+                                 HVM_PARAM_PAGING_RING_PFN,
+                                 mem_paging_notification);
         }
         break;
 
-        case XEN_MEM_EVENT_PAGING_DISABLE:
+        case XEN_VM_EVENT_PAGING_DISABLE:
         {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
+            if ( ved->ring_page )
+                rc = vm_event_disable(d, ved);
         }
         break;
 
@@ -613,36 +613,36 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_MEM_EVENT_OP_MONITOR:
+    case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
     {
-        struct mem_event_domain *med = &d->mem_event->monitor;
+        struct vm_event_domain *ved = &d->vm_event->monitor;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_MONITOR_ENABLE:
-        case XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION:
+        case XEN_VM_EVENT_MONITOR_ENABLE:
+        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
         {
             rc = -ENODEV;
-            if ( !p2m_mem_event_sanity_check(d) )
+            if ( !p2m_vm_event_sanity_check(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access,
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                     HVM_PARAM_MONITOR_RING_PFN,
                                     mem_access_notification);
 
-            if ( mec->op == XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
                  && !rc )
                 p2m_setup_introspection(d);
 
         }
         break;
 
-        case XEN_MEM_EVENT_MONITOR_DISABLE:
+        case XEN_VM_EVENT_MONITOR_DISABLE:
         {
-            if ( med->ring_page )
+            if ( ved->ring_page )
             {
-                rc = mem_event_disable(d, med);
+                rc = vm_event_disable(d, ved);
                 d->arch.hvm_domain.introspection_enabled = 0;
             }
         }
@@ -657,14 +657,14 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_SHARING
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
+    case XEN_DOMCTL_VM_EVENT_OP_SHARING:
     {
-        struct mem_event_domain *med = &d->mem_event->share;
+        struct vm_event_domain *ved = &d->vm_event->share;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_SHARING_ENABLE:
+        case XEN_VM_EVENT_SHARING_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -676,16 +676,16 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !hap_enabled(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_sharing,
                                     HVM_PARAM_SHARING_RING_PFN,
                                     mem_sharing_notification);
         }
         break;
 
-        case XEN_MEM_EVENT_SHARING_DISABLE:
+        case XEN_VM_EVENT_SHARING_DISABLE:
         {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
+            if ( ved->ring_page )
+                rc = vm_event_disable(d, ved);
         }
         break;
 
@@ -704,17 +704,17 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     return rc;
 }
 
-void mem_event_vcpu_pause(struct vcpu *v)
+void vm_event_vcpu_pause(struct vcpu *v)
 {
     ASSERT(v == current);
 
-    atomic_inc(&v->mem_event_pause_count);
+    atomic_inc(&v->vm_event_pause_count);
     vcpu_pause_nosync(v);
 }
 
-void mem_event_vcpu_unpause(struct vcpu *v)
+void vm_event_vcpu_unpause(struct vcpu *v)
 {
-    int old, new, prev = v->mem_event_pause_count.counter;
+    int old, new, prev = v->vm_event_pause_count.counter;
 
     /* All unpause requests as a result of toolstack responses.  Prevent
      * underflow of the vcpu pause count. */
@@ -726,11 +726,11 @@ void mem_event_vcpu_unpause(struct vcpu *v)
         if ( new < 0 )
         {
             printk(XENLOG_G_WARNING
-                   "%pv mem_event: Too many unpause attempts\n", v);
+                   "%pv vm_event: Too many unpause attempts\n", v);
             return;
         }
 
-        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+        prev = cmpxchg(&v->vm_event_pause_count.counter, old, new);
     } while ( prev != old );
 
     vcpu_unpause(v);
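
Taken together, these renamed functions form the producer-side ring protocol:
claim a slot (sleeping when the caller is the target domain), then either put
a request or cancel the claim. A minimal sketch of a hypothetical hypervisor-
side caller follows; send_monitor_event() is illustrative only, not a function
introduced by this series, and error handling is abbreviated:

    /* Hypothetical producer: vm_event_claim_slot() may sleep when the
     * caller is the target domain; vm_event_put_request() consumes the
     * claimed slot and notifies the event channel port itself. */
    static int send_monitor_event(struct domain *d, vm_event_request_t *req)
    {
        int rc = vm_event_claim_slot(d, &d->vm_event->monitor);

        if ( rc )
            return rc; /* -ENOSYS: no ring; -EBUSY: no space (foreign) */

        if ( req->flags & VM_EVENT_FLAG_VCPU_PAUSED )
            vm_event_vcpu_pause(current); /* unpaused by the response */

        vm_event_put_request(d, &d->vm_event->monitor, req);
        return 0;
    }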
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index b93e7d8..1cc2934 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1346,7 +1346,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
      * enabled for this domain */
     if ( unlikely(!need_iommu(d) &&
             (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page ||
+             d->vm_event->paging.ring_page ||
              p2m_get_hostp2m(d)->global_logdirty)) )
         return -EXDEV;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index da36504..21a8d71 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -45,7 +45,7 @@ struct p2m_domain {
         unsigned long shattered[4];
     } stats;
 
-    /* If true, and an access fault comes in and there is no mem_event listener,
+    /* If true, and an access fault comes in and there is no vm_event listener,
      * pause domain. Otherwise, remove access restrictions. */
     bool_t access_required;
 };
@@ -71,8 +71,8 @@ typedef enum {
 } p2m_type_t;
 
 static inline
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp)
+void p2m_vm_event_emulate_check(struct vcpu *v,
+                                const vm_event_response_t *rsp)
 {
     /* Not supported on ARM. */
 };
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 00a8606..85f05a5 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -483,13 +483,13 @@ struct arch_vcpu
 
     /*
      * Should we emulate the next matching instruction on VCPU resume
-     * after a mem_event?
+     * after a vm_event?
      */
     struct {
         uint32_t emulate_flags;
         unsigned long gpa;
         unsigned long eip;
-    } mem_event;
+    } vm_event;
 
 } __cacheline_aligned;
 
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 5411302..b3971c8 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -38,7 +38,7 @@ int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 int hvm_emulate_one_no_write(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-void hvm_mem_event_emulate_one(bool_t nowrite,
+void hvm_mem_access_emulate_one(bool_t nowrite,
     unsigned int trapnr,
     unsigned int errcode);
 void hvm_emulate_prepare(
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index e93c551..6266f9a 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -252,7 +252,7 @@ struct p2m_domain {
      * retyped get this access type.  See definition of p2m_access_t. */
     p2m_access_t default_access;
 
-    /* If true, and an access fault comes in and there is no mem_event listener, 
+    /* If true, and an access fault comes in and there is no vm_event listener, 
      * pause domain.  Otherwise, remove access restrictions. */
     bool_t       access_required;
 
@@ -595,7 +595,7 @@ void p2m_mem_paging_resume(struct domain *d);
  * locks -- caller must also xfree the request. */
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
-                            mem_event_request_t **req_ptr);
+                            vm_event_request_t **req_ptr);
 
 /* Set access type for a region of pfns.
  * If start_pfn == -1ul, sets the default access type */
@@ -609,14 +609,14 @@ int p2m_get_mem_access(struct domain *d, unsigned long pfn,
 
 /* Check for emulation and mark vcpu for skipping one instruction
  * upon rescheduling if required. */
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp);
+void p2m_vm_event_emulate_check(struct vcpu *v,
+                                 const vm_event_response_t *rsp);
 
 /* Enable arch specific introspection options (such as MSR interception). */
 void p2m_setup_introspection(struct domain *d);
 
-/* Sanity check for mem_event hardware support */
-static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
+/* Sanity check for vm_event hardware support */
+static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
 {
     return hap_enabled(d) && cpu_has_vmx;
 }
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 813e81d..9d4972a 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -750,10 +750,10 @@ struct xen_domctl_gdbsx_domstatus {
 };
 
 /*
- * Memory event operations
+ * VM event operations
  */
 
-/* XEN_DOMCTL_mem_event_op */
+/* XEN_DOMCTL_vm_event_op */
 
 /*
  * Domain memory paging
@@ -762,17 +762,17 @@ struct xen_domctl_gdbsx_domstatus {
  * pager<->hypervisor interface. Use XENMEM_paging_op*
  * to perform per-page operations.
  *
- * The XEN_MEM_EVENT_PAGING_ENABLE domctl returns several
+ * The XEN_VM_EVENT_PAGING_ENABLE domctl returns several
  * non-standard error codes to indicate why paging could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EMLINK - guest has iommu passthrough enabled
  * EXDEV  - guest has PoD enabled
  * EBUSY  - guest has or had paging enabled, ring buffer still active
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING            1
+#define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
 
-#define XEN_MEM_EVENT_PAGING_ENABLE               0
-#define XEN_MEM_EVENT_PAGING_DISABLE              1
+#define XEN_VM_EVENT_PAGING_ENABLE               0
+#define XEN_VM_EVENT_PAGING_DISABLE              1
 
 /*
  * Monitor helper.
@@ -787,24 +787,24 @@ struct xen_domctl_gdbsx_domstatus {
  * is sent with what happened. The memory event handler can then resume the
  * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
  *
- * See public/mem_event.h for the list of available events that can be
+ * See public/vm_event.h for the list of available events that can be
  * subscribed to via the monitor interface.
  *
  * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
- * interface with the XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+ * interface with the XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
  * operator.
  *
- * The XEN_MEM_EVENT_MONITOR_ENABLE* domctls return several
+ * The XEN_VM_EVENT_MONITOR_ENABLE* domctls return several
  * non-standard error codes to indicate why access could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_MONITOR                        2
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR                        2
 
-#define XEN_MEM_EVENT_MONITOR_ENABLE                           0
-#define XEN_MEM_EVENT_MONITOR_DISABLE                          1
-#define XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
+#define XEN_VM_EVENT_MONITOR_ENABLE                           0
+#define XEN_VM_EVENT_MONITOR_DISABLE                          1
+#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -819,21 +819,21 @@ struct xen_domctl_gdbsx_domstatus {
 * Note that sharing can be turned on (as per the domctl below)
  * *without* this ring being setup.
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING           3
+#define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
 
-#define XEN_MEM_EVENT_SHARING_ENABLE              0
-#define XEN_MEM_EVENT_SHARING_DISABLE             1
+#define XEN_VM_EVENT_SHARING_ENABLE              0
+#define XEN_VM_EVENT_SHARING_DISABLE             1
 
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
-struct xen_domctl_mem_event_op {
-    uint32_t       op;           /* XEN_MEM_EVENT_*_* */
-    uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_OP_* */
+struct xen_domctl_vm_event_op {
+    uint32_t       op;           /* XEN_VM_EVENT_*_* */
+    uint32_t       mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
 };
-typedef struct xen_domctl_mem_event_op xen_domctl_mem_event_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_event_op_t);
+typedef struct xen_domctl_vm_event_op xen_domctl_vm_event_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_vm_event_op_t);
 
 /*
  * Memory sharing operations
@@ -1056,7 +1056,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_suppress_spurious_page_faults 53
 #define XEN_DOMCTL_debug_op                      54
 #define XEN_DOMCTL_gethvmcontext_partial         55
-#define XEN_DOMCTL_mem_event_op                  56
+#define XEN_DOMCTL_vm_event_op                   56
 #define XEN_DOMCTL_mem_sharing_op                57
 #define XEN_DOMCTL_disable_migrate               58
 #define XEN_DOMCTL_gettscinfo                    59
@@ -1124,7 +1124,7 @@ struct xen_domctl {
         struct xen_domctl_set_target        set_target;
         struct xen_domctl_subscribe         subscribe;
         struct xen_domctl_debug_op          debug_op;
-        struct xen_domctl_mem_event_op      mem_event_op;
+        struct xen_domctl_vm_event_op       vm_event_op;
         struct xen_domctl_mem_sharing_op    mem_sharing_op;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_domctl_cpuid             cpuid;
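
Driving the renamed domctl from a toolstack then looks roughly like the sketch
below; xc_domctl() stands in for the toolstack's domctl wrapper,
interface_version handling is omitted, and the ring page is assumed to have
been set up beforehand via the relevant HVM_PARAM_*_RING_PFN parameter:

    /* Rough toolstack-side sketch; xc_domctl() is a stand-in name. */
    struct xen_domctl domctl = {
        .cmd    = XEN_DOMCTL_vm_event_op,
        .domain = domid,
        .u.vm_event_op = {
            .mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR,
            .op   = XEN_VM_EVENT_MONITOR_ENABLE,
        },
    };

    if ( xc_domctl(xch, &domctl) == 0 )
        /* OUT field: event channel port to bind for ring notifications */
        port = domctl.u.vm_event_op.port;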
diff --git a/xen/include/public/mem_event.h b/xen/include/public/vm_event.h
similarity index 73%
rename from xen/include/public/mem_event.h
rename to xen/include/public/vm_event.h
index 1ef65d3..b619bff 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/vm_event.h
@@ -1,5 +1,5 @@
 /******************************************************************************
- * mem_event.h
+ * vm_event.h
  *
  * Memory event common structures.
  *
@@ -24,12 +24,12 @@
  * DEALINGS IN THE SOFTWARE.
  */
 
-#ifndef _XEN_PUBLIC_MEM_EVENT_H
-#define _XEN_PUBLIC_MEM_EVENT_H
+#ifndef _XEN_PUBLIC_VM_EVENT_H
+#define _XEN_PUBLIC_VM_EVENT_H
 
 #include "xen.h"
 
-#define MEM_EVENT_INTERFACE_VERSION 0x00000001
+#define VM_EVENT_INTERFACE_VERSION 0x00000001
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
@@ -44,44 +44,41 @@
  *  paused
  * VCPU_PAUSED in a response signals to unpause the vCPU
  */
-#define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
-
-/*
- * Flags to aid debugging mem_event
- */
-#define MEM_EVENT_FLAG_FOREIGN         (1 << 1)
-#define MEM_EVENT_FLAG_DUMMY           (1 << 2)
+#define VM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
+/* Flags to aid debugging vm_event */
+#define VM_EVENT_FLAG_FOREIGN         (1 << 1)
+#define VM_EVENT_FLAG_DUMMY           (1 << 2)
 
 /*
  * Reasons for the vm event request
  */
 
 /* Default case */
-#define MEM_EVENT_REASON_UNKNOWN                 0
+#define VM_EVENT_REASON_UNKNOWN                 0
 /* Memory access violation */
-#define MEM_EVENT_REASON_MEM_ACCESS              1
+#define VM_EVENT_REASON_MEM_ACCESS              1
 /* Memory sharing event */
-#define MEM_EVENT_REASON_MEM_SHARING             2
+#define VM_EVENT_REASON_MEM_SHARING             2
 /* Memory paging event */
-#define MEM_EVENT_REASON_MEM_PAGING              3
+#define VM_EVENT_REASON_MEM_PAGING              3
 /* CR0 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR0              4
+#define VM_EVENT_REASON_MOV_TO_CR0              4
 /* CR3 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR3              5
+#define VM_EVENT_REASON_MOV_TO_CR3              5
 /* CR4 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR4              6
+#define VM_EVENT_REASON_MOV_TO_CR4              6
 /* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
-#define MEM_EVENT_REASON_MOV_TO_MSR              7
+#define VM_EVENT_REASON_MOV_TO_MSR              7
 /* Debug operation executed (e.g. int3) */
-#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
+#define VM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
 /* Single-step (e.g. MTF) */
-#define MEM_EVENT_REASON_SINGLESTEP              9
+#define VM_EVENT_REASON_SINGLESTEP              9
 
 /*
  * Using a custom struct (not hvm_hw_cpu) so as to not fill
  * the mem_event ring buffer too quickly.
  */
-struct mem_event_regs_x86 {
+struct vm_event_regs_x86 {
     uint64_t rax;
     uint64_t rcx;
     uint64_t rdx;
@@ -152,67 +149,67 @@ struct mem_event_regs_x86 {
  */
 #define MEM_ACCESS_EMULATE_NOWRITE      (1 << 7)
 
-struct mem_event_mem_access {
+struct vm_event_mem_access {
     uint32_t gfn;
     uint32_t flags; /* MEM_ACCESS_* */
     uint64_t offset;
     uint64_t gla;   /* if flags has MEM_ACCESS_GLA_VALID set */
 };
 
-struct mem_event_mov_to_cr {
+struct vm_event_mov_to_cr {
     uint64_t new_value;
     uint64_t old_value;
 };
 
-struct mem_event_debug {
+struct vm_event_debug {
     uint32_t gfn;
     uint32_t _pad;
 };
 
-struct mem_event_mov_to_msr {
+struct vm_event_mov_to_msr {
     uint64_t msr;
     uint64_t value;
 };
 
 #define MEM_PAGING_DROP_PAGE       (1 << 0)
 #define MEM_PAGING_EVICT_FAIL      (1 << 1)
-struct mem_event_paging {
+struct vm_event_paging {
     uint32_t gfn;
     uint32_t p2mt;
     uint32_t flags;
     uint32_t _pad;
 };
 
-struct mem_event_sharing {
+struct vm_event_sharing {
     uint32_t gfn;
     uint32_t p2mt;
 };
 
-typedef struct mem_event_st {
-    uint32_t version;   /* MEM_EVENT_INTERFACE_VERSION */
-    uint32_t flags;     /* MEM_EVENT_FLAG_* */
-    uint32_t reason;    /* MEM_EVENT_REASON_* */
+typedef struct vm_event_st {
+    uint32_t version;   /* VM_EVENT_INTERFACE_VERSION */
+    uint32_t flags;     /* VM_EVENT_FLAG_* */
+    uint32_t reason;    /* VM_EVENT_REASON_* */
     uint32_t vcpu_id;
 
     union {
-        struct mem_event_paging                mem_paging;
-        struct mem_event_sharing               mem_sharing;
-        struct mem_event_mem_access            mem_access;
-        struct mem_event_mov_to_cr             mov_to_cr;
-        struct mem_event_mov_to_msr            mov_to_msr;
-        struct mem_event_debug                 software_breakpoint;
-        struct mem_event_debug                 singlestep;
+        struct vm_event_paging                mem_paging;
+        struct vm_event_sharing               mem_sharing;
+        struct vm_event_mem_access            mem_access;
+        struct vm_event_mov_to_cr             mov_to_cr;
+        struct vm_event_mov_to_msr            mov_to_msr;
+        struct vm_event_debug                 software_breakpoint;
+        struct vm_event_debug                 singlestep;
     } u;
 
     union {
-        struct mem_event_regs_x86 x86;
+        struct vm_event_regs_x86 x86;
     } regs;
-} mem_event_request_t, mem_event_response_t;
+} vm_event_request_t, vm_event_response_t;
 
-DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
+DEFINE_RING_TYPES(vm_event, vm_event_request_t, vm_event_response_t);
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
-#endif /* _XEN_PUBLIC_MEM_EVENT_H */
+#endif /* _XEN_PUBLIC_VM_EVENT_H */
 
 /*
  * Local variables:
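
On the consumer side, a listener builds a vm_event_response_t from the request
it has just consumed; echoing VM_EVENT_FLAG_VCPU_PAUSED back in the response
is what tells Xen to unpause the vCPU. A hypothetical sketch follows
(handle_one_event() is illustrative only, and assumes <string.h> for memset):

    /* Copy back the identifying fields so Xen can match the response
     * and unpause the vCPU if it was paused for this event. */
    static void handle_one_event(const vm_event_request_t *req,
                                 vm_event_response_t *rsp)
    {
        memset(rsp, 0, sizeof(*rsp));
        rsp->version = VM_EVENT_INTERFACE_VERSION;
        rsp->vcpu_id = req->vcpu_id;
        rsp->flags   = req->flags; /* VCPU_PAUSED => unpause on reply */

        if ( req->reason == VM_EVENT_REASON_MEM_ACCESS )
            rsp->u.mem_access.gfn = req->u.mem_access.gfn;
    }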
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 6ceb2a4..1d01221 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -29,7 +29,7 @@
 
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
 /* Resumes the running of the VCPU, restarting the last instruction */
 void mem_access_resume(struct domain *d);
@@ -44,7 +44,7 @@ int mem_access_memop(unsigned long cmd,
 }
 
 static inline
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+int mem_access_send_req(struct domain *d, vm_event_request_t *req)
 {
     return -ENOSYS;
 }
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 29f3628..5da8a2d 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -1,12 +1,12 @@
 #ifndef _XEN_P2M_COMMON_H
 #define _XEN_P2M_COMMON_H
 
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 
 /*
  * Additional access types, which are used to further restrict
  * the permissions given my the p2m_type_t memory type.  Violations
- * caused by p2m_access_t restrictions are sent to the mem_event
+ * caused by p2m_access_t restrictions are sent to the vm_event
  * interface.
  *
  * The access permissions are soft state: when any ambiguous change of page
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 76e41f3..151eb86 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -23,7 +23,7 @@
 #include <public/domctl.h>
 #include <public/sysctl.h>
 #include <public/vcpu.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <public/event_channel.h>
 
 #ifdef CONFIG_COMPAT
@@ -214,8 +214,8 @@ struct vcpu
     unsigned long    pause_flags;
     atomic_t         pause_count;
 
-    /* VCPU paused for mem_event replies. */
-    atomic_t         mem_event_pause_count;
+    /* VCPU paused for vm_event replies. */
+    atomic_t         vm_event_pause_count;
     /* VCPU paused by system controller. */
     int              controller_pause_count;
 
@@ -257,8 +257,8 @@ struct vcpu
 #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)
 #define domain_is_locked(d) spin_is_locked(&(d)->domain_lock)
 
-/* Memory event */
-struct mem_event_domain
+/* VM event */
+struct vm_event_domain
 {
     /* ring lock */
     spinlock_t ring_lock;
@@ -269,10 +269,10 @@ struct mem_event_domain
     void *ring_page;
     struct page_info *ring_pg_struct;
     /* front-end ring */
-    mem_event_front_ring_t front_ring;
+    vm_event_front_ring_t front_ring;
     /* event channel port (vcpu0 only) */
     int xen_port;
-    /* mem_event bit for vcpu->pause_flags */
+    /* vm_event bit for vcpu->pause_flags */
     int pause_flag;
     /* list of vcpus waiting for room in the ring */
     struct waitqueue_head wq;
@@ -282,14 +282,14 @@ struct mem_event_domain
     unsigned int last_vcpu_wake_up;
 };
 
-struct mem_event_per_domain
+struct vm_event_per_domain
 {
     /* Memory sharing support */
-    struct mem_event_domain share;
+    struct vm_event_domain share;
     /* Memory paging support */
-    struct mem_event_domain paging;
+    struct vm_event_domain paging;
     /* VM event monitor support */
-    struct mem_event_domain monitor;
+    struct vm_event_domain monitor;
 };
 
 struct evtchn_port_ops;
@@ -442,8 +442,8 @@ struct domain
 
     struct lock_profile_qhead profile_head;
 
-    /* Various mem_events */
-    struct mem_event_per_domain *mem_event;
+    /* Various vm_events */
+    struct vm_event_per_domain *vm_event;
 
     /*
      * Can be specified by the user. If that is not the case, it is
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/vm_event.h
similarity index 50%
rename from xen/include/xen/mem_event.h
rename to xen/include/xen/vm_event.h
index 4f3ad8e..988ea42 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/vm_event.h
@@ -1,5 +1,5 @@
 /******************************************************************************
- * mem_event.h
+ * vm_event.h
  *
  * Common interface for memory event support.
  *
@@ -21,18 +21,18 @@
  */
 
 
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
+#ifndef __VM_EVENT_H__
+#define __VM_EVENT_H__
 
 #include <xen/sched.h>
 
 #ifdef HAS_MEM_ACCESS
 
 /* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d);
+void vm_event_cleanup(struct domain *d);
 
 /* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
+bool_t vm_event_check_ring(struct vm_event_domain *med);
 
 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
  * available space and the caller is a foreign domain. If the guest itself
@@ -47,90 +47,90 @@ bool_t mem_event_check_ring(struct mem_event_domain *med);
  * cancel_slot(), both of which are guaranteed to
  * succeed.
  */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
                             bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot(struct domain *d,
+                                        struct vm_event_domain *med)
 {
-    return __mem_event_claim_slot(d, med, 1);
+    return __vm_event_claim_slot(d, med, 1);
 }
 
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot_nosleep(struct domain *d,
+                                        struct vm_event_domain *med)
 {
-    return __mem_event_claim_slot(d, med, 0);
+    return __vm_event_claim_slot(d, med, 0);
 }
 
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med);
 
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req);
+void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
+                            vm_event_request_t *req);
 
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp);
+int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
+                           vm_event_response_t *rsp);
 
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+int do_vm_event_op(int op, uint32_t domain, void *arg);
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                      XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
+void vm_event_vcpu_pause(struct vcpu *v);
+void vm_event_vcpu_unpause(struct vcpu *v);
 
 #else
 
-static inline void mem_event_cleanup(struct domain *d) {}
+static inline void vm_event_cleanup(struct domain *d) {}
 
-static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+static inline bool_t vm_event_check_ring(struct vm_event_domain *med)
 {
     return 0;
 }
 
-static inline int mem_event_claim_slot(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot(struct domain *d,
+                                        struct vm_event_domain *med)
 {
     return -ENOSYS;
 }
 
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot_nosleep(struct domain *d,
+                                        struct vm_event_domain *med)
 {
     return -ENOSYS;
 }
 
 static inline
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med)
 {}
 
 static inline
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req)
+void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
+                            vm_event_request_t *req)
 {}
 
 static inline
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp)
+int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
+                           vm_event_response_t *rsp)
 {
     return -ENOSYS;
 }
 
-static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+static inline int do_vm_event_op(int op, uint32_t domain, void *arg)
 {
     return -ENOSYS;
 }
 
 static inline
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                      XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     return -ENOSYS;
 }
 
-static inline void mem_event_vcpu_pause(struct vcpu *v) {}
-static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+static inline void vm_event_vcpu_pause(struct vcpu *v) {}
+static inline void vm_event_vcpu_unpause(struct vcpu *v) {}
 
 #endif /* HAS_MEM_ACCESS */
 
-#endif /* __MEM_EVENT_H__ */
+#endif /* __VM_EVENT_H__ */
 
 
 /*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f20e89c..4227093 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -514,13 +514,13 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
 }
 
 #ifdef HAS_MEM_ACCESS
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 4ce089f..cff9d35 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -142,8 +142,8 @@ struct xsm_operations {
     int (*get_vnumainfo) (struct domain *d);
 
 #ifdef HAS_MEM_ACCESS
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
+    int (*vm_event_control) (struct domain *d, int mode, int op);
+    int (*vm_event_op) (struct domain *d, int op);
 #endif
 
 #ifdef CONFIG_X86
@@ -544,14 +544,14 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
 }
 
 #ifdef HAS_MEM_ACCESS
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
-    return xsm_ops->mem_event_control(d, mode, op);
+    return xsm_ops->vm_event_control(d, mode, op);
 }
 
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
 {
-    return xsm_ops->mem_event_op(d, op);
+    return xsm_ops->vm_event_op(d, op);
 }
 #endif
 
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 8eb3050..25fca68 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,8 +119,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
 #ifdef HAS_MEM_ACCESS
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
+    set_to_dummy_if_null(ops, vm_event_control);
+    set_to_dummy_if_null(ops, vm_event_op);
 #endif
 
 #ifdef CONFIG_X86
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 65094bb..475ef6c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -578,7 +578,7 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_mem_event_op:
+    case XEN_DOMCTL_vm_event_op:
 #endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
@@ -689,7 +689,7 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 
     case XEN_DOMCTL_set_access_required:
-        return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
@@ -1203,14 +1203,14 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
 #ifdef HAS_MEM_ACCESS
-static int flask_mem_event_control(struct domain *d, int mode, int op)
+static int flask_vm_event_control(struct domain *d, int mode, int op)
 {
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 }
 
-static int flask_mem_event_op(struct domain *d, int op)
+static int flask_vm_event_op(struct domain *d, int op)
 {
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 }
 #endif /* HAS_MEM_ACCESS */
 
@@ -1598,8 +1598,8 @@ static struct xsm_operations flask_ops = {
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
+    .vm_event_control = flask_vm_event_control,
+    .vm_event_op = flask_vm_event_op,
 #endif
 
 #ifdef CONFIG_X86
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 8f44b9d..23b47bf 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -250,7 +250,7 @@ class hvm
 # HVMOP_inject_trap
     hvmctl
 # XEN_DOMCTL_set_access_required
-    mem_event
+    vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
 #  target = domain whose memory is being shared
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH V5 05/12] tools/tests: Clean-up tools/tests/xen-access
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 04/12] xen: Rename mem_event to vm_event Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The spin-lock implementation in the xen-access test program is incomplete:
the x86 assembly that is supposed to guarantee that the lock is held by only
one thread lacks the "lock;" prefix, so the bit-test-and-set is not atomic.

However, the spin-lock is not actually necessary, as xen-access is not
multithreaded. Keeping a faulty lock implementation in a single-threaded
program needlessly complicates the code for developers who use it as a guide
when implementing their own applications. Removing it therefore makes the
behavior of the system clearer.
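
For reference, an atomic version of the removed primitive would have needed
the missing prefix. A sketch, shown only to illustrate the flaw (the lock is
being removed, not fixed):

    /* What an actually atomic test_and_set_bit would look like: the
     * "lock;" prefix makes the bit-test-and-set atomic across CPUs. */
    static inline int test_and_set_bit(int nr, volatile void *addr)
    {
        int oldbit;

        asm volatile (
            "lock; btsl %2,%1\n\tsbbl %0,%0"
            : "=r" (oldbit), "=m" (*(volatile long *)addr)
            : "Ir" (nr), "m" (*(volatile long *)addr) : "memory");
        return oldbit;
    }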

Also convert functions that always return 0 to return void, and make the
teardown function actually return an error code on error.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v5: Process all requests on the ring before notifying Xen
    No need to notify Xen twice
---
 tools/tests/xen-access/xen-access.c | 131 +++++++++---------------------------
 1 file changed, 30 insertions(+), 101 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 0d4f190..d951703 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -45,56 +45,6 @@
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
 #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
 
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
-
-#define ADDR (*(volatile long *) addr)
-/**
- * test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static inline int test_and_set_bit(int nr, volatile void *addr)
-{
-    int oldbit;
-
-    asm volatile (
-        "btsl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit), "=m" (ADDR)
-        : "Ir" (nr), "m" (ADDR) : "memory");
-    return oldbit;
-}
-
-typedef int spinlock_t;
-
-static inline void spin_lock(spinlock_t *lock)
-{
-    while ( test_and_set_bit(1, lock) );
-}
-
-static inline void spin_lock_init(spinlock_t *lock)
-{
-    *lock = SPIN_LOCK_UNLOCKED;
-}
-
-static inline void spin_unlock(spinlock_t *lock)
-{
-    *lock = SPIN_LOCK_UNLOCKED;
-}
-
-static inline int spin_trylock(spinlock_t *lock)
-{
-    return !test_and_set_bit(1, lock);
-}
-
-#define vm_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
-#define vm_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
-#define vm_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
-
 typedef struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
@@ -102,7 +52,6 @@ typedef struct vm_event {
     vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
-    spinlock_t ring_lock;
 } vm_event_t;
 
 typedef struct xenaccess {
@@ -180,6 +129,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
+            return rc;
         }
     }
 
@@ -191,6 +141,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error unbinding event port");
+            return rc;
         }
     }
 
@@ -201,6 +152,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error closing event channel");
+            return rc;
         }
     }
 
@@ -209,6 +161,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     if ( rc != 0 )
     {
         ERROR("Error closing connection to xen");
+        return rc;
     }
     xenaccess->xc_handle = NULL;
 
@@ -241,9 +194,6 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     /* Set domain id */
     xenaccess->vm_event.domain_id = domain_id;
 
-    /* Initialise lock */
-    vm_event_ring_lock_init(&xenaccess->vm_event);
-
     /* Enable mem_access */
     xenaccess->vm_event.ring_page =
             xc_mem_access_enable(xenaccess->xc_handle,
@@ -314,19 +264,24 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     return xenaccess;
 
  err:
-    xenaccess_teardown(xch, xenaccess);
+    rc = xenaccess_teardown(xch, xenaccess);
+    if ( rc )
+    {
+        ERROR("Failed to teardown xenaccess structure!\n");
+    }
 
  err_iface:
     return NULL;
 }
 
-int get_request(vm_event_t *vm_event, vm_event_request_t *req)
+/*
+ * Note that this function is not thread safe.
+ */
+static void get_request(vm_event_t *vm_event, vm_event_request_t *req)
 {
     vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    vm_event_ring_lock(vm_event);
-
     back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
@@ -337,19 +292,16 @@ int get_request(vm_event_t *vm_event, vm_event_request_t *req)
     /* Update ring */
     back_ring->req_cons = req_cons;
     back_ring->sring->req_event = req_cons + 1;
-
-    vm_event_ring_unlock(vm_event);
-
-    return 0;
 }
 
-static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
+/*
+ * Note that this function is not thread safe.
+ */
+static void put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
 {
     vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    vm_event_ring_lock(vm_event);
-
     back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
@@ -360,28 +312,6 @@ static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
     /* Update ring */
     back_ring->rsp_prod_pvt = rsp_prod;
     RING_PUSH_RESPONSES(back_ring);
-
-    vm_event_ring_unlock(vm_event);
-
-    return 0;
-}
-
-static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
-{
-    int ret;
-
-    /* Put the page info on the ring */
-    ret = put_response(&paging->vm_event, rsp);
-    if ( ret != 0 )
-        goto out;
-
-    /* Tell Xen page is ready */
-    ret = xc_mem_access_resume(paging->xc_handle, paging->vm_event.domain_id);
-    ret = xc_evtchn_notify(paging->vm_event.xce_handle,
-                           paging->vm_event.port);
-
- out:
-    return ret;
 }
 
 void usage(char* progname)
@@ -543,13 +473,7 @@ int main(int argc, char *argv[])
         {
             xenmem_access_t access;
 
-            rc = get_request(&xenaccess->vm_event, &req);
-            if ( rc != 0 )
-            {
-                ERROR("Error getting request");
-                interrupted = -1;
-                continue;
-            }
+            get_request(&xenaccess->vm_event, &req);
 
             if ( req.version != VM_EVENT_INTERFACE_VERSION )
             {
@@ -624,13 +548,18 @@ int main(int argc, char *argv[])
                 fprintf(stderr, "UNKNOWN REASON CODE %d\n", req.reason);
             }
 
-            rc = xenaccess_resume_page(xenaccess, &rsp);
-            if ( rc != 0 )
-            {
-                ERROR("Error resuming page");
-                interrupted = -1;
-                continue;
-            }
+            /* Put the response on the ring */
+            put_response(&xenaccess->vm_event, &rsp);
+        }
+
+        /* Tell Xen page is ready */
+        rc = xc_evtchn_notify(xenaccess->vm_event.xce_handle,
+                              xenaccess->vm_event.port);
+
+        if ( rc != 0 )
+        {
+            ERROR("Error resuming page");
+            interrupted = -1;
         }
 
         if ( shutting_down )
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 05/12] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 18:41   ` Andrew Cooper
  2015-02-17 11:56   ` Jan Beulich
  2015-02-13 16:33 ` [PATCH V5 07/12] xen: Introduce monitor_op domctl Tamas K Lengyel
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

To avoid further growing hvm.c, these functions are moved into a separate
file (event.c). Minor style changes are applied to the logic along the way.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
v5: Style fixes
    Fix hvm_event_msr input types to match the incoming variables
v4: Style fixes
v3: Style fixes and removing unused fields from msr events.
---
 xen/arch/x86/hvm/Makefile       |   3 +-
 xen/arch/x86/hvm/event.c        | 195 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c          | 163 ++-------------------------------
 xen/arch/x86/hvm/vmx/vmx.c      |   7 +-
 xen/include/asm-x86/hvm/event.h |  40 +++++++++
 xen/include/asm-x86/hvm/hvm.h   |  11 ---
 6 files changed, 246 insertions(+), 173 deletions(-)
 create mode 100644 xen/arch/x86/hvm/event.c
 create mode 100644 xen/include/asm-x86/hvm/event.h

diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..69af47f 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -3,6 +3,7 @@ subdir-y += vmx
 
 obj-y += asid.o
 obj-y += emulate.o
+obj-y += event.o
 obj-y += hpet.o
 obj-y += hvm.o
 obj-y += i8254.o
@@ -22,4 +23,4 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
+obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
new file mode 100644
index 0000000..3fdd28e
--- /dev/null
+++ b/xen/arch/x86/hvm/event.c
@@ -0,0 +1,195 @@
+/*
+* event.c: Common hardware virtual machine event abstractions.
+*
+* Copyright (c) 2004, Intel Corporation.
+* Copyright (c) 2005, International Business Machines Corporation.
+* Copyright (c) 2008, Citrix Systems, Inc.
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms and conditions of the GNU General Public License,
+* version 2, as published by the Free Software Foundation.
+*
+* This program is distributed in the hope it will be useful, but WITHOUT
+* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+* more details.
+*
+* You should have received a copy of the GNU General Public License along with
+* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+* Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+
+#include <xen/vm_event.h>
+#include <xen/paging.h>
+#include <public/vm_event.h>
+
+static void hvm_event_fill_regs(vm_event_request_t *req)
+{
+    const struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const struct vcpu *curr = current;
+
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
+    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
+    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
+    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
+}
+
+static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
+{
+    int rc;
+    struct vcpu *curr = current;
+    struct domain *currd = curr->domain;
+
+    if ( !(parameters & HVMPME_MODE_MASK) )
+        return 0;
+
+    rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
+    switch ( rc )
+    {
+    case 0:
+        break;
+    case -ENOSYS:
+        /*
+         * If there was no ring to handle the event, then
+         * simply continue executing normally.
+         */
+        return 1;
+    default:
+        return rc;
+    };
+
+    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
+    {
+        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(curr);
+    }
+
+    hvm_event_fill_regs(req);
+    vm_event_put_request(currd, &currd->vm_event->monitor, req);
+
+    return 1;
+}
+
+static void hvm_event_cr(uint32_t reason, unsigned long value,
+                                unsigned long old)
+{
+    vm_event_request_t req = {
+        .reason = reason,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_cr.new_value = value,
+        .u.mov_to_cr.old_value = old
+    };
+    uint64_t parameters = 0;
+
+    switch(reason)
+    {
+    case VM_EVENT_REASON_MOV_TO_CR0:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
+        break;
+    case VM_EVENT_REASON_MOV_TO_CR3:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
+        break;
+    case VM_EVENT_REASON_MOV_TO_CR4:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
+        break;
+    };
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_event_traps(parameters, &req);
+}
+
+void hvm_event_cr0(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
+}
+
+void hvm_event_cr3(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
+}
+
+void hvm_event_cr4(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
+}
+
+void hvm_event_msr(unsigned int msr, uint64_t value)
+{
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MOV_TO_MSR,
+        .vcpu_id = curr->vcpu_id,
+        .u.mov_to_msr.msr = msr,
+        .u.mov_to_msr.value = value,
+    };
+    uint64_t params = current->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_MSR];
+
+    hvm_event_traps(params, &req);
+}
+
+int hvm_event_int3(unsigned long gla)
+{
+    uint32_t pfec = PFEC_page_present;
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+        .vcpu_id = curr->vcpu_id,
+        .u.software_breakpoint.gfn = paging_gva_to_gfn(curr, gla, &pfec)
+    };
+    uint64_t params = curr->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_INT3];
+
+    return hvm_event_traps(params, &req);
+}
+
+int hvm_event_single_step(unsigned long gla)
+{
+    uint32_t pfec = PFEC_page_present;
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SINGLESTEP,
+        .vcpu_id = curr->vcpu_id,
+        .u.singlestep.gfn = paging_gva_to_gfn(curr, gla, &pfec)
+    };
+    uint64_t params = curr->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
+
+    return hvm_event_traps(params, &req);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 427da2c..2c4d0ff 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -35,7 +35,6 @@
 #include <xen/paging.h>
 #include <xen/cpu.h>
 #include <xen/wait.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/rangeset.h>
 #include <asm/shadow.h>
@@ -60,6 +59,7 @@
 #include <asm/hvm/cacheattr.h>
 #include <asm/hvm/trace.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/event.h>
 #include <asm/mtrr.h>
 #include <asm/apic.h>
 #include <public/sched.h>
@@ -3278,7 +3278,7 @@ int hvm_set_cr0(unsigned long value)
         hvm_funcs.handle_cd(v, value);
 
     hvm_update_cr(v, 0, value);
-    hvm_memory_event_cr0(value, old_value);
+    hvm_event_cr0(value, old_value);
 
     if ( (value ^ old_value) & X86_CR0_PG ) {
         if ( !nestedhvm_vmswitch_in_progress(v) && nestedhvm_vcpu_in_guestmode(v) )
@@ -3319,7 +3319,7 @@ int hvm_set_cr3(unsigned long value)
     old=v->arch.hvm_vcpu.guest_cr[3];
     v->arch.hvm_vcpu.guest_cr[3] = value;
     paging_update_cr3(v);
-    hvm_memory_event_cr3(value, old);
+    hvm_event_cr3(value, old);
     return X86EMUL_OKAY;
 
  bad_cr3:
@@ -3360,7 +3360,7 @@ int hvm_set_cr4(unsigned long value)
     }
 
     hvm_update_cr(v, 4, value);
-    hvm_memory_event_cr4(value, old_cr);
+    hvm_event_cr4(value, old_cr);
 
     /*
      * Modifying CR4.{PSE,PAE,PGE,SMEP}, or clearing CR4.PCIDE
@@ -4510,7 +4510,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     hvm_cpuid(1, NULL, NULL, NULL, &edx);
     mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
 
-    hvm_memory_event_msr(msr, msr_content);
+    hvm_event_msr(msr, msr_content);
 
     switch ( msr )
     {
@@ -6319,159 +6319,6 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     return rc;
 }
 
-static void hvm_mem_event_fill_regs(vm_event_request_t *req)
-{
-    const struct cpu_user_regs *regs = guest_cpu_user_regs();
-    const struct vcpu *curr = current;
-
-    req->regs.x86.rax = regs->eax;
-    req->regs.x86.rcx = regs->ecx;
-    req->regs.x86.rdx = regs->edx;
-    req->regs.x86.rbx = regs->ebx;
-    req->regs.x86.rsp = regs->esp;
-    req->regs.x86.rbp = regs->ebp;
-    req->regs.x86.rsi = regs->esi;
-    req->regs.x86.rdi = regs->edi;
-
-    req->regs.x86.r8  = regs->r8;
-    req->regs.x86.r9  = regs->r9;
-    req->regs.x86.r10 = regs->r10;
-    req->regs.x86.r11 = regs->r11;
-    req->regs.x86.r12 = regs->r12;
-    req->regs.x86.r13 = regs->r13;
-    req->regs.x86.r14 = regs->r14;
-    req->regs.x86.r15 = regs->r15;
-
-    req->regs.x86.rflags = regs->eflags;
-    req->regs.x86.rip    = regs->eip;
-
-    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
-static int hvm_memory_event_traps(uint64_t parameters, vm_event_request_t *req)
-{
-    int rc;
-    struct vcpu *v = current;
-    struct domain *d = v->domain;
-
-    if ( !(parameters & HVMPME_MODE_MASK) )
-        return 0;
-
-    rc = vm_event_claim_slot(d, &d->vm_event->monitor);
-    if ( rc == -ENOSYS )
-    {
-        /* If there was no ring to handle the event, then
-         * simple continue executing normally. */
-        return 1;
-    }
-    else if ( rc < 0 )
-        return rc;
-
-    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
-    {
-        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
-        vm_event_vcpu_pause(v);
-    }
-
-    hvm_mem_event_fill_regs(req);
-    vm_event_put_request(d, &d->vm_event->monitor, req);
-
-    return 1;
-}
-
-static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
-                                unsigned long old)
-{
-    vm_event_request_t req = {
-        .reason = reason,
-        .vcpu_id = current->vcpu_id,
-        .u.mov_to_cr.new_value = value,
-        .u.mov_to_cr.old_value = old
-    };
-    uint64_t parameters = 0;
-
-    switch(reason)
-    {
-    case VM_EVENT_REASON_MOV_TO_CR0:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR3:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR4:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
-        break;
-    };
-
-    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
-        return;
-
-    hvm_memory_event_traps(parameters, &req);
-}
-
-void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
-}
-
-void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
-}
-
-void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
-}
-
-void hvm_memory_event_msr(unsigned long msr, unsigned long value)
-{
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_MOV_TO_MSR,
-        .vcpu_id = current->vcpu_id,
-        .u.mov_to_msr.msr = msr,
-        .u.mov_to_msr.value = value,
-    };
-
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
-                           &req);
-}
-
-int hvm_memory_event_int3(unsigned long gla) 
-{
-    uint32_t pfec = PFEC_page_present;
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
-        .vcpu_id = current->vcpu_id,
-        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
-    };
-
-    return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
-                                  &req);
-}
-
-int hvm_memory_event_single_step(unsigned long gla)
-{
-    uint32_t pfec = PFEC_page_present;
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_SINGLESTEP,
-        .vcpu_id = current->vcpu_id,
-        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
-    };
-
-    return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
-                                  &req);
-}
-
 int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 {
     if (hvm_funcs.nhvm_vcpu_hostrestore)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 88b7821..3f2a18f 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/vpt.h>
 #include <public/hvm/save.h>
 #include <asm/hvm/trace.h>
+#include <asm/hvm/event.h>
 #include <asm/xenoprof.h>
 #include <asm/debugger.h>
 #include <asm/apic.h>
@@ -1979,7 +1980,7 @@ static int vmx_cr_access(unsigned long exit_qualification)
         unsigned long old = curr->arch.hvm_vcpu.guest_cr[0];
         curr->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
         vmx_update_guest_cr(curr, 0);
-        hvm_memory_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
+        hvm_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
         HVMTRACE_0D(CLTS);
         break;
     }
@@ -2831,7 +2832,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
                 break;
             }
             else {
-                int handled = hvm_memory_event_int3(regs->eip);
+                int handled = hvm_event_int3(regs->eip);
                 
                 if ( handled < 0 ) 
                 {
@@ -3149,7 +3150,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
         if ( v->arch.hvm_vcpu.single_step ) {
-          hvm_memory_event_single_step(regs->eip);
+          hvm_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
               domain_pause_for_debugger();
         }
diff --git a/xen/include/asm-x86/hvm/event.h b/xen/include/asm-x86/hvm/event.h
new file mode 100644
index 0000000..bb757a1
--- /dev/null
+++ b/xen/include/asm-x86/hvm/event.h
@@ -0,0 +1,40 @@
+/*
+ * event.h: Hardware virtual machine assist events.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#ifndef __ASM_X86_HVM_EVENT_H__
+#define __ASM_X86_HVM_EVENT_H__
+
+/* Called for current VCPU on crX/MSR changes by guest */
+void hvm_event_cr0(unsigned long value, unsigned long old);
+void hvm_event_cr3(unsigned long value, unsigned long old);
+void hvm_event_cr4(unsigned long value, unsigned long old);
+void hvm_event_msr(unsigned int msr, uint64_t value);
+/* Called for current VCPU: returns -1 if no listener */
+int hvm_event_int3(unsigned long gla);
+int hvm_event_single_step(unsigned long gla);
+
+#endif /* __ASM_X86_HVM_EVENT_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index e3d2d9a..c77076a 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -473,17 +473,6 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
 int hvm_x2apic_msr_read(struct vcpu *v, unsigned int msr, uint64_t *msr_content);
 int hvm_x2apic_msr_write(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 
-/* Called for current VCPU on crX changes by guest */
-void hvm_memory_event_cr0(unsigned long value, unsigned long old);
-void hvm_memory_event_cr3(unsigned long value, unsigned long old);
-void hvm_memory_event_cr4(unsigned long value, unsigned long old);
-void hvm_memory_event_msr(unsigned long msr, unsigned long value);
-/* Called for current VCPU on int3: returns -1 if no listener */
-int hvm_memory_event_int3(unsigned long gla);
-
-/* Called for current VCPU on single step: returns -1 if no listener */
-int hvm_memory_event_single_step(unsigned long gla);
-
 /*
  * Nested HVM
  */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 20:09   ` Andrew Cooper
  2015-02-17 14:02   ` Jan Beulich
  2015-02-13 16:33 ` [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

In preparation for allowing the introspection of ARM and PV domains, the old
control interface via the hvm_op hypercall is retired. A new control mechanism
is introduced via the domctl hypercall: monitor_op.

This patch aims to establish a base API on which future applications can
build.
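
For illustration only, a helper application could drive the new interface via
the libxc wrappers added by this patch (a minimal sketch with an assumed domid;
the monitor ring itself must first be set up, e.g. via xc_mem_access_enable()):

    #include <xenctrl.h>

    /* Sketch: enable synchronous, on-change-only CR3 events plus int3 events. */
    static int example_enable_monitoring(xc_interface *xch, domid_t domid)
    {
        /* op != 0 selects XEN_DOMCTL_MONITOR_OP_ENABLE; 0 disables. */
        int rc = xc_monitor_mov_to_cr3(xch, domid, 1 /* op */, 1 /* sync */,
                                       1 /* onchangeonly */);

        if ( rc < 0 )
            return rc;

        return xc_monitor_software_breakpoint(xch, domid, 1 /* op */);
    }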

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
v5: p2m_vm_event_sanity_check is moved into the monitor_op handler
v4: Style fixes
    Only defining struct mov_to_cr and struct debug_event in asm-x86/domain.h
    Add pause/unpause domain wrappers when enabling a monitor option.
---
 tools/libxc/Makefile                |   1 +
 tools/libxc/include/xenctrl.h       |  19 ++++
 tools/libxc/xc_mem_access.c         |   9 +-
 tools/libxc/xc_monitor.c            | 118 ++++++++++++++++++++
 tools/libxc/xc_private.h            |   2 +-
 tools/libxc/xc_vm_event.c           |   7 +-
 tools/tests/xen-access/xen-access.c |  14 +--
 xen/arch/x86/Makefile               |   1 +
 xen/arch/x86/hvm/emulate.c          |   3 +-
 xen/arch/x86/hvm/event.c            |  69 ++++++------
 xen/arch/x86/hvm/hvm.c              |  38 +------
 xen/arch/x86/hvm/vmx/vmcs.c         |   6 +-
 xen/arch/x86/hvm/vmx/vmx.c          |   2 +-
 xen/arch/x86/mm/p2m.c               |   9 --
 xen/arch/x86/monitor.c              | 210 ++++++++++++++++++++++++++++++++++++
 xen/common/domctl.c                 |   9 ++
 xen/common/vm_event.c               |  23 +---
 xen/include/asm-arm/monitor.h       |  13 +++
 xen/include/asm-x86/domain.h        |  28 +++++
 xen/include/asm-x86/hvm/domain.h    |   1 -
 xen/include/asm-x86/monitor.h       |  30 ++++++
 xen/include/asm-x86/p2m.h           |   8 +-
 xen/include/public/domctl.h         |  50 ++++++++-
 xen/include/public/hvm/params.h     |  15 ---
 xen/include/public/vm_event.h       |   2 +-
 xen/xsm/flask/hooks.c               |   3 +
 xen/xsm/flask/policy/access_vectors |   2 +
 27 files changed, 539 insertions(+), 153 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 create mode 100644 xen/arch/x86/monitor.c
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/monitor.h

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 22ba2a1..8b609cf 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -32,6 +32,7 @@ CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
 CTRL_SRCS-y       += xc_tmem.c
 CTRL_SRCS-y       += xc_vm_event.c
+CTRL_SRCS-y       += xc_monitor.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_mem_access.c
 CTRL_SRCS-y       += xc_memshr.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 790db53..3324132 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2308,6 +2308,25 @@ int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
                       uint64_t pfn, xenmem_access_t *access);
 
 /***
+ * Monitor control operations.
+ */
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int extended_capture);
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id,
+                          unsigned int op);
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   unsigned int op);
+
+/***
  * Memory sharing operations.
  *
 * Unless otherwise noted, these calls return 0 on success, -1 and errno on
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 0a3f0e6..37e776c 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -27,14 +27,7 @@
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
     return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 0);
-}
-
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 1);
+                              port);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
new file mode 100644
index 0000000..9e807d1
--- /dev/null
+++ b/tools/libxc/xc_monitor.c
@@ -0,0 +1,118 @@
+/******************************************************************************
+ *
+ * xc_monitor.c
+ *
+ * Interface to VM event monitor
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include "xc_private.h"
+
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int extended_capture)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR;
+    domctl.u.monitor_op.u.mov_to_msr.extended_capture = extended_capture;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   unsigned int op)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id,
+                          unsigned int op)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP;
+
+    return do_domctl(xch, &domctl);
+}
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 843540c..9f55309 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -430,6 +430,6 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
  */
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection);
+                         uint32_t *port);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index d458b9a..7277e86 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -41,7 +41,7 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 }
 
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection)
+                         uint32_t *port)
 {
     void *ring_page = NULL;
     uint64_t pfn;
@@ -104,10 +104,7 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        if ( enable_introspection )
-            op = XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION;
-        else
-            op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_MONITOR_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index d951703..d52b175 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -432,13 +432,13 @@ int main(int argc, char *argv[])
     }
 
     if ( int3 )
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_sync);
-    else
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
-    if ( rc < 0 )
     {
-        ERROR("Error %d setting int3 vm_event\n", rc);
-        goto exit;
+        rc = xc_monitor_software_breakpoint(xch, domain_id, 1);
+        if ( rc < 0 )
+        {
+            ERROR("Error %d setting int3 vm_event\n", rc);
+            goto exit;
+        }
     }
 
     /* Wait for access */
@@ -452,7 +452,7 @@ int main(int argc, char *argv[])
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
                                    xenaccess->domain_info->max_pages);
-            rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
+            rc = xc_monitor_software_breakpoint(xch, domain_id, 0);
 
             shutting_down = 1;
         }
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 86ca5f8..37e547c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -36,6 +36,7 @@ obj-y += microcode_intel.o
 # This must come after the vendor specific files.
 obj-y += microcode.o
 obj-y += mm.o
+obj-y += monitor.o
 obj-y += mpparse.o
 obj-y += nmi.o
 obj-y += numa.o
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index a2b3088..d195f45 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -411,7 +411,8 @@ static int hvmemul_virtual_to_linear(
      * being triggered for repeated writes to a whole page.
      */
     *reps = min_t(unsigned long, *reps,
-                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
+                  unlikely(current->domain->arch
+                            .monitor_options.mov_to_msr.extended_capture)
                            ? 1 : 4096);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
index 3fdd28e..0b59f7a 100644
--- a/xen/arch/x86/hvm/event.c
+++ b/xen/arch/x86/hvm/event.c
@@ -55,15 +55,12 @@ static void hvm_event_fill_regs(vm_event_request_t *req)
     req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
 }
 
-static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
+static int hvm_event_traps(uint8_t sync, vm_event_request_t *req)
 {
     int rc;
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
 
-    if ( !(parameters & HVMPME_MODE_MASK) )
-        return 0;
-
     rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
     switch ( rc )
     {
@@ -79,7 +76,7 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
         return rc;
     };
 
-    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
+    if ( sync )
     {
         req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
         vm_event_vcpu_pause(curr);
@@ -92,7 +89,7 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
 }
 
 static void hvm_event_cr(uint32_t reason, unsigned long value,
-                                unsigned long old)
+                         unsigned long old, struct mov_to_cr *option)
 {
     vm_event_request_t req = {
         .reason = reason,
@@ -100,43 +97,38 @@ static void hvm_event_cr(uint32_t reason, unsigned long value,
         .u.mov_to_cr.new_value = value,
         .u.mov_to_cr.old_value = old
     };
-    uint64_t parameters = 0;
-
-    switch(reason)
-    {
-    case VM_EVENT_REASON_MOV_TO_CR0:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR3:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR4:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
-        break;
-    };
 
-    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+    if ( option->onchangeonly && value == old )
         return;
 
-    hvm_event_traps(parameters, &req);
+    hvm_event_traps(option->sync, &req);
 }
 
 void hvm_event_cr0(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr0.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old,
+                     &currd->arch.monitor_options.mov_to_cr0);
 }
 
 void hvm_event_cr3(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr3.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old,
+                     &currd->arch.monitor_options.mov_to_cr3);
 }
 
 void hvm_event_cr4(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr4.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old,
+                     &currd->arch.monitor_options.mov_to_cr4);
 }
 
 void hvm_event_msr(unsigned int msr, uint64_t value)
@@ -148,14 +140,14 @@ void hvm_event_msr(unsigned int msr, uint64_t value)
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
     };
-    uint64_t params = current->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_MSR];
 
-    hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.mov_to_msr.enabled )
+        hvm_event_traps(1, &req);
 }
 
 int hvm_event_int3(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -163,14 +155,16 @@ int hvm_event_int3(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.software_breakpoint.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    uint64_t params = curr->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_INT3];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.software_breakpoint.enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 int hvm_event_single_step(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -178,10 +172,11 @@ int hvm_event_single_step(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.singlestep.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    uint64_t params = curr->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.singlestep.enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2c4d0ff..4a05fb2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5785,23 +5785,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             case HVM_PARAM_ACPI_IOPORTS_LOCATION:
                 rc = pmtimer_change_ioport(d, a.value);
                 break;
-            case HVM_PARAM_MEMORY_EVENT_CR0:
-            case HVM_PARAM_MEMORY_EVENT_CR3:
-            case HVM_PARAM_MEMORY_EVENT_CR4:
-                if ( d == current->domain )
-                    rc = -EPERM;
-                break;
-            case HVM_PARAM_MEMORY_EVENT_INT3:
-            case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
-            case HVM_PARAM_MEMORY_EVENT_MSR:
-                if ( d == current->domain )
-                {
-                    rc = -EPERM;
-                    break;
-                }
-                if ( a.value & HVMPME_onchangeonly )
-                    rc = -EINVAL;
-                break;
             case HVM_PARAM_NESTEDHVM:
                 rc = xsm_hvm_param_nested(XSM_PRIV, d);
                 if ( rc )
@@ -5860,29 +5843,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             }
 
-            if ( rc == 0 ) 
+            if ( rc == 0 )
             {
                 d->arch.hvm_domain.params[a.index] = a.value;
-
-                switch( a.index )
-                {
-                case HVM_PARAM_MEMORY_EVENT_INT3:
-                case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
-                {
-                    domain_pause(d);
-                    domain_unpause(d); /* Causes guest to latch new status */
-                    break;
-                }
-                case HVM_PARAM_MEMORY_EVENT_CR3:
-                {
-                    for_each_vcpu ( d, v )
-                        hvm_funcs.update_guest_cr(v, 0); /* Latches new CR3 mask through CR0 code */
-                    break;
-                }
-                }
-
             }
-
         }
         else
         {
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 63007a9..17b2ab0 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -714,7 +714,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
     if ( msr_bitmap == NULL )
         return;
 
-    if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
+    if ( unlikely(d->arch.monitor_options.mov_to_msr.extended_capture) &&
          vm_event_check_ring(&d->vm_event->monitor) )
     {
         unsigned int i;
@@ -1373,8 +1373,8 @@ void vmx_do_resume(struct vcpu *v)
     }
 
     debug_state = v->domain->debugger_attached
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
+                  || v->domain->arch.monitor_options.software_breakpoint.enabled
+                  || v->domain->arch.monitor_options.singlestep.enabled;
 
     if ( unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3f2a18f..fcd25df 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1232,7 +1232,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
                 v->arch.hvm_vmx.exec_control |= cr3_ctls;
 
             /* Trap CR3 updates if CR3 memory events are enabled. */
-            if ( v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_CR3] )
+            if ( v->domain->arch.monitor_options.mov_to_cr3.enabled )
                 v->arch.hvm_vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
 
             vmx_update_cpu_exec_control(v);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 1974bfb..5ce852e 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1453,15 +1453,6 @@ void p2m_vm_event_emulate_check(struct vcpu *v, const vm_event_response_t *rsp)
     }
 }
 
-void p2m_setup_introspection(struct domain *d)
-{
-    if ( hvm_funcs.enable_msr_exit_interception )
-    {
-        d->arch.hvm_domain.introspection_enabled = 1;
-        hvm_funcs.enable_msr_exit_interception(d);
-    }
-}
-
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             vm_event_request_t **req_ptr)
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
new file mode 100644
index 0000000..be4bb20
--- /dev/null
+++ b/xen/arch/x86/monitor.c
@@ -0,0 +1,210 @@
+/*
+ * arch/x86/monitor.c
+ *
+ * Architecture-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/mm.h>
+#include <asm/domain.h>
+
+#define DISABLE_OPTION(option)              \
+    do {                                    \
+        if ( !option->enabled )             \
+            return -EFAULT;                 \
+        domain_pause(d);                    \
+        option->enabled = 0;                \
+        domain_unpause(d);                  \
+    } while (0)
+
+#define ENABLE_OPTION(option)               \
+    do {                                    \
+        domain_pause(d);                    \
+        option->enabled = 1;                \
+        domain_unpause(d);                  \
+    } while (0)
+
+int monitor_domctl(struct xen_domctl_monitor_op *domctl, struct domain *d)
+{
+    /*
+     * At the moment only Intel HVM domains are supported. However, event
+     * delivery could be extended to AMD and PV domains. See comments below.
+     */
+    if ( !is_hvm_domain(d) || !cpu_has_vmx )
+        return -ENOSYS;
+
+    if ( domctl->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
+         domctl->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
+        return -EFAULT;
+
+    switch ( domctl->subop )
+    {
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
+    {
+        /* Note: could be supported on PV domains. */
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3:
+    {
+        /* Note: could be supported on PV domains. */
+        struct vcpu *v;
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr3;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        /* Latches new CR3 mask through CR0 code */
+        for_each_vcpu ( d, v )
+            hvm_funcs.update_guest_cr(v, 0);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4:
+    {
+        /* Note: could be supported on PV domains. */
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr4;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR:
+    {
+        struct mov_to_msr *options = &d->arch.monitor_options.mov_to_msr;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            if ( domctl->u.mov_to_msr.extended_capture )
+            {
+                if ( hvm_funcs.enable_msr_exit_interception )
+                {
+                    options->extended_capture = 1;
+                    hvm_funcs.enable_msr_exit_interception(d);
+                }
+                else
+                {
+                    return -ENOSYS;
+                }
+            }
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP:
+    {
+        struct debug_event *options = &d->arch.monitor_options.singlestep;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT:
+    {
+        struct debug_event *options =
+            &d->arch.monitor_options.software_breakpoint;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+
+    default:
+        return -ENOSYS;
+
+    };
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 85afd68..d06f4b6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
 #include <asm/irq.h>
 #include <asm/page.h>
 #include <asm/p2m.h>
+#include <asm/monitor.h>
 #include <public/domctl.h>
 #include <xsm/xsm.h>
 
@@ -1178,6 +1179,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
+    case XEN_DOMCTL_monitor_op:
+        ret = -EPERM;
+        if ( current->domain == d )
+            break;
+
+        ret = monitor_domctl(&op->u.monitor_op, d);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 195739e..f988291 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -621,32 +621,17 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         switch( vec->op )
         {
         case XEN_VM_EVENT_MONITOR_ENABLE:
-        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
-        {
-            rc = -ENODEV;
-            if ( !p2m_vm_event_sanity_check(d) )
-                break;
-
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
-                                    HVM_PARAM_MONITOR_RING_PFN,
-                                    mem_access_notification);
-
-            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
-                 && !rc )
-                p2m_setup_introspection(d);
-
-        }
-        break;
+                                 HVM_PARAM_MONITOR_RING_PFN,
+                                 mem_access_notification);
+            break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
-        {
             if ( ved->ring_page )
             {
                 rc = vm_event_disable(d, ved);
-                d->arch.hvm_domain.introspection_enabled = 0;
             }
-        }
-        break;
+            break;
 
         default:
             rc = -ENOSYS;
diff --git a/xen/include/asm-arm/monitor.h b/xen/include/asm-arm/monitor.h
new file mode 100644
index 0000000..ef8f38a
--- /dev/null
+++ b/xen/include/asm-arm/monitor.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_ARM_MONITOR_H__
+#define __ASM_ARM_MONITOR_H__
+
+#include <xen/config.h>
+#include <public/domctl.h>
+
+static inline
+int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)
+{
+    return -ENOSYS;
+}
+
+#endif /* __ASM_ARM_MONITOR_H__ */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 85f05a5..ae56cc4 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -241,6 +241,24 @@ struct time_scale {
     u32 mul_frac;
 };
 
+/************************************************/
+/*            monitor event options             */
+/************************************************/
+struct mov_to_cr {
+    uint8_t enabled;
+    uint8_t sync;
+    uint8_t onchangeonly;
+};
+
+struct mov_to_msr {
+    uint8_t enabled;
+    uint8_t extended_capture;
+};
+
+struct debug_event {
+    uint8_t enabled;
+};
+
 struct pv_domain
 {
     l1_pgentry_t **gdt_ldt_l1tab;
@@ -335,6 +353,16 @@ struct arch_domain
     unsigned long pirq_eoi_map_mfn;
 
     unsigned int psr_rmid; /* RMID assigned to the domain for CMT */
+
+    /* Monitor options */
+    struct {
+        struct mov_to_cr mov_to_cr0;
+        struct mov_to_cr mov_to_cr3;
+        struct mov_to_cr mov_to_cr4;
+        struct mov_to_msr mov_to_msr;
+        struct debug_event singlestep;
+        struct debug_event software_breakpoint;
+    } monitor_options;
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 2757c7f..0f8b19a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -134,7 +134,6 @@ struct hvm_domain {
     bool_t                 mem_sharing_enabled;
     bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
-    bool_t                 introspection_enabled;
 
     /*
      * TSC value that VCPUs use to calculate their tsc_offset value.
diff --git a/xen/include/asm-x86/monitor.h b/xen/include/asm-x86/monitor.h
new file mode 100644
index 0000000..d1fa859
--- /dev/null
+++ b/xen/include/asm-x86/monitor.h
@@ -0,0 +1,30 @@
+/*
+ * include/asm-x86/monitor.h
+ *
+ * Architecture-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifndef __ASM_X86_MONITOR_H__
+#define __ASM_X86_MONITOR_H__
+
+#include <public/domctl.h>
+
+int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d);
+
+#endif /* __ASM_X86_MONITOR_H__ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6266f9a..bd84e60 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -615,16 +615,10 @@ void p2m_vm_event_emulate_check(struct vcpu *v,
 /* Enable arch specific introspection options (such as MSR interception). */
 void p2m_setup_introspection(struct domain *d);
 
-/* Sanity check for vm_event hardware support */
-static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
-{
-    return hap_enabled(d) && cpu_has_vmx;
-}
-
 /* Sanity check for mem_access hardware support */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-    return is_hvm_domain(d);
+    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
 }
 
 /* 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 9d4972a..0242914 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -804,7 +804,6 @@ struct xen_domctl_gdbsx_domstatus {
 
 #define XEN_VM_EVENT_MONITOR_ENABLE                           0
 #define XEN_VM_EVENT_MONITOR_DISABLE                          1
-#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -1002,6 +1001,53 @@ struct xen_domctl_psr_cmt_op {
 typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
 
+/*  XEN_DOMCTL_MONITOR_*
+ *
+ * Enable/disable monitoring various VM events.
+ * This domctl configures what events will be reported to helper apps
+ * via the ring buffer "MONITOR". The ring has to be first enabled
+ * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR.
+ *
+ * NOTICE: mem_access events are also delivered via the "MONITOR" ring buffer;
+ * however, enabling/disabling those events is performed via memory_op
+ * hypercalls!
+ */
+#define XEN_DOMCTL_MONITOR_OP_ENABLE   0
+#define XEN_DOMCTL_MONITOR_OP_DISABLE  1
+
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0            0
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3            1
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4            2
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR            3
+#define XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP            4
+#define XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT   5
+
+struct xen_domctl_monitor_op {
+    uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
+    uint32_t subop; /* XEN_DOMCTL_MONITOR_SUBOP_* */
+
+    /*
+     * Further options when issuing XEN_DOMCTL_MONITOR_OP_ENABLE.
+     */
+    union {
+        struct {
+            /* Pause vCPU until response */
+            uint8_t sync;
+            /* Send event only on a change of value */
+            uint8_t onchangeonly;
+            uint8_t _pad[6];
+        } mov_to_cr;
+
+        struct {
+            /* Enable the capture of an extended set of MSRs */
+            uint8_t extended_capture;
+            uint8_t _pad[7];
+        } mov_to_msr;
+    } u;
+};
+typedef struct xen_domctl_monitor_op xen_domctl_monitor_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_monitor_op_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1077,6 +1123,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setvnumainfo                  74
 #define XEN_DOMCTL_psr_cmt_op                    75
 #define XEN_DOMCTL_arm_configure_domain          76
+#define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1142,6 +1189,7 @@ struct xen_domctl {
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         struct xen_domctl_vnuma             vnuma;
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
+        struct xen_domctl_monitor_op        monitor_op;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 6efcc0b..5de6a4b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -162,21 +162,6 @@
  */
 #define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
 
-/* Enable blocking memory events, async or sync (pause vcpu until response) 
- * onchangeonly indicates messages only on a change of value */
-#define HVM_PARAM_MEMORY_EVENT_CR0          20
-#define HVM_PARAM_MEMORY_EVENT_CR3          21
-#define HVM_PARAM_MEMORY_EVENT_CR4          22
-#define HVM_PARAM_MEMORY_EVENT_INT3         23
-#define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
-#define HVM_PARAM_MEMORY_EVENT_MSR          30
-
-#define HVMPME_MODE_MASK       (3 << 0)
-#define HVMPME_mode_disabled   0
-#define HVMPME_mode_async      1
-#define HVMPME_mode_sync       2
-#define HVMPME_onchangeonly    (1 << 2)
-
 /* Boolean: Enable nestedhvm (hvm only) */
 #define HVM_PARAM_NESTEDHVM    24
 
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index b619bff..30e9a32 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -67,7 +67,7 @@
 #define VM_EVENT_REASON_MOV_TO_CR3              5
 /* CR4 was updated */
 #define VM_EVENT_REASON_MOV_TO_CR4              6
-/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+/* An MSR was updated. */
 #define VM_EVENT_REASON_MOV_TO_MSR              7
 /* Debug operation executed (e.g. int3) */
 #define VM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 475ef6c..2179cc3 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -691,6 +691,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_access_required:
         return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 
+    case XEN_DOMCTL_monitor_op:
+        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
     case XEN_DOMCTL_gdbsx_pausevcpu:
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 23b47bf..ebe690a 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -250,6 +250,8 @@ class hvm
 # HVMOP_inject_trap
     hvmctl
 # XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
     vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread
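
(For illustration only; this sketch is not part of the series.) A privileged
helper could enable synchronous, change-only CR3-write reporting with the new
domctl roughly as follows, assuming libxc's usual domctl plumbing (an open
xc_interface *xch and the internal DECLARE_DOMCTL/do_domctl() helpers) and a
hypothetical domain_id:

    /* Sketch: enable sync, change-only MOV-TO-CR3 events for a domain.
     * Error handling elided; do_domctl() is libxc's internal wrapper,
     * which also fills in the domctl interface version. */
    int rc;
    DECLARE_DOMCTL;

    domctl.cmd = XEN_DOMCTL_monitor_op;
    domctl.domain = (domid_t)domain_id;
    domctl.u.monitor_op.op = XEN_DOMCTL_MONITOR_OP_ENABLE;
    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3;
    domctl.u.monitor_op.u.mov_to_cr.sync = 1;         /* pause vCPU until reply */
    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = 1; /* report changes only */

    rc = do_domctl(xch, &domctl);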

* [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 07/12] xen: Introduce monitor_op domctl Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 20:14   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The flag is only used for debugging purposes, so it should only be checked
in debug builds of Xen.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/mm/mem_sharing.c | 2 ++
 xen/arch/x86/mm/p2m.c         | 2 ++
 xen/common/mem_access.c       | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4e5477a..0459544 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -606,8 +606,10 @@ int mem_sharing_sharing_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 5ce852e..68d57d7 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1312,8 +1312,10 @@ void p2m_mem_paging_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index a6d82d1..63f2b52 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -44,8 +44,10 @@ void mem_access_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread
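
(Illustrative sketch, not from the series.) The practical effect is that a
test harness can still inject no-op responses in debug builds, which Xen will
then discard:

    /* Sketch: a debug harness building a dummy response for a vm_event ring.
     * Release builds (NDEBUG defined) no longer check this flag, so such a
     * response would be processed normally there. */
    vm_event_response_t rsp;

    memset(&rsp, 0, sizeof(rsp));
    rsp.version = VM_EVENT_INTERFACE_VERSION;
    rsp.vcpu_id = vcpu_id;              /* hypothetical vCPU under test */
    rsp.flags   = VM_EVENT_FLAG_DUMMY;  /* debug builds skip this response */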

* [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 21:05   ` Andrew Cooper
  2015-02-17 14:17   ` Jan Beulich
  2015-02-13 16:33 ` [PATCH V5 10/12] xen/vm_event: Relocate memop checks Tamas K Lengyel
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The vm_event subsystem has been artificially tied to the presence of mem_access.
While mem_access does depend on vm_event, vm_event is an entirely independent
subsystem that can be used for arbitrary function-offloading to helper apps in
domains. This patch removes the requirement that mem_access be supported in
order to enable vm_event.

A new vm_event_resume function is introduced which pulls all responses off the
given ring and delegates handling to the appropriate helper functions where
necessary. By default, vm_event_resume just pulls the response from the ring
and unpauses the corresponding vCPU. This approach reduces code duplication
and presents a single point of entry for the entire vm_event subsystem's
response handling mechanism.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
v4: Consolidate resume routines into vm_event_resume
    Style fixes
    Sort xen/common/Makefile to be alphabetical
v3: Move ring processing out from mem_access.c to monitor.c in common
---
 xen/arch/x86/mm/mem_sharing.c       | 37 +-----------------
 xen/arch/x86/mm/p2m.c               | 66 ++++++++++---------------------
 xen/common/Makefile                 | 18 ++++-----
 xen/common/mem_access.c             | 36 +----------------
 xen/common/vm_event.c               | 77 +++++++++++++++++++++++++++++++------
 xen/include/asm-x86/mem_sharing.h   |  1 -
 xen/include/asm-x86/p2m.h           |  2 +-
 xen/include/xen/mem_access.h        | 14 +++++--
 xen/include/xen/vm_event.h          | 70 ++++-----------------------------
 xen/include/xsm/dummy.h             |  2 -
 xen/include/xsm/xsm.h               |  4 --
 xen/xsm/dummy.c                     |  2 -
 xen/xsm/flask/hooks.c               | 36 ++++++++---------
 xen/xsm/flask/policy/access_vectors |  8 ++--
 14 files changed, 137 insertions(+), 236 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 0459544..4959407 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -591,40 +591,6 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
     return (unsigned int)atomic_read(&nr_shared_mfns);
 }
 
-int mem_sharing_sharing_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Get all requests off the ring */
-    while ( vm_event_get_response(d, &d->vm_event->share, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Unpause domain/vcpu */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-
-    return 0;
-}
-
 /* Functions that change a page's type and ownership */
 static int page_make_sharable(struct domain *d, 
                        struct page_info *page, 
@@ -1475,7 +1441,8 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
         {
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
-            rc = mem_sharing_sharing_resume(d);
+
+            vm_event_resume(d, &d->vm_event->share);
         }
         break;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 68d57d7..dab1da1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1279,13 +1279,13 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 }
 
 /**
- * p2m_mem_paging_resume - Resume guest gfn and vcpus
+ * p2m_mem_paging_resume - Resume guest gfn
  * @d: guest domain
- * @gfn: guest page in paging state
+ * @rsp: vm_event response received
+ *
+ * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw. It is
+ * called by the pager.
  *
- * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw and all
- * waiting vcpus will be unpaused again. It is called by the pager.
- * 
  * The gfn was previously either evicted and populated, or nominated and
  * populated. If the page was evicted the p2mt will be p2m_ram_paging_in. If
  * the page was just nominated the p2mt will be p2m_ram_paging_in_start because
@@ -1293,56 +1293,30 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
  *
  * If the gfn was dropped the vcpu needs to be unpaused.
  */
-void p2m_mem_paging_resume(struct domain *d)
+
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    vm_event_response_t rsp;
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
 
-    /* Pull all responses off the ring */
-    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
+    /* Fix p2m entry if the page was not dropped */
+    if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
     {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Fix p2m entry if the page was not dropped */
-        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
+        unsigned long gfn = rsp->u.mem_access.gfn;
+        gfn_lock(p2m, gfn, 0);
+        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
+        /* Allow only pages which were prepared properly, or pages which
+         * were nominated but not evicted */
+        if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
         {
-            uint64_t gfn = rsp.u.mem_access.gfn;
-            gfn_lock(p2m, gfn, 0);
-            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
-            /* Allow only pages which were prepared properly, or pages which
-             * were nominated but not evicted */
-            if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
-            {
-                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
-                              paging_mode_log_dirty(d) ? p2m_ram_logdirty :
-                              p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), gfn);
-            }
-            gfn_unlock(p2m, gfn, 0);
+            p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                          paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                          p2m_ram_rw, a);
+            set_gpfn_from_mfn(mfn_x(mfn), gfn);
         }
-        /* Unpause domain */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
+        gfn_unlock(p2m, gfn, 0);
     }
 }
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e5bd75b..8d84bc6 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,13 +15,19 @@ obj-y += keyhandler.o
 obj-$(HAS_KEXEC) += kexec.o
 obj-$(HAS_KEXEC) += kimage.o
 obj-y += lib.o
+obj-y += lzo.o
+obj-$(HAS_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
 obj-y += multicall.o
 obj-y += notifier.o
 obj-y += page_alloc.o
+obj-$(HAS_PDX) += pdx.o
 obj-y += preempt.o
 obj-y += random.o
 obj-y += rangeset.o
+obj-y += radix-tree.o
+obj-y += rbtree.o
+obj-y += rcupdate.o
 obj-y += sched_credit.o
 obj-y += sched_credit2.o
 obj-y += sched_sedf.o
@@ -40,21 +46,15 @@ obj-y += sysctl.o
 obj-y += tasklet.o
 obj-y += time.o
 obj-y += timer.o
+obj-y += tmem.o
+obj-y += tmem_xen.o
 obj-y += trace.o
 obj-y += version.o
+obj-y += vm_event.o
 obj-y += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
 obj-y += xmalloc_tlsf.o
-obj-y += rcupdate.o
-obj-y += tmem.o
-obj-y += tmem_xen.o
-obj-y += radix-tree.o
-obj-y += rbtree.o
-obj-y += lzo.o
-obj-$(HAS_PDX) += pdx.o
-obj-$(HAS_MEM_ACCESS) += mem_access.o
-obj-$(HAS_MEM_ACCESS) += vm_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 63f2b52..a54fe6e 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -29,40 +29,6 @@
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
-void mem_access_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Pull all responses off the ring. */
-    while ( vm_event_get_response(d, &d->vm_event->monitor, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        p2m_vm_event_emulate_check(v, &rsp);
-
-        /* Unpause domain. */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-}
-
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
@@ -97,7 +63,7 @@ int mem_access_memop(unsigned long cmd,
             rc = -ENOSYS;
         else
         {
-            mem_access_resume(d);
+            vm_event_resume(d, &d->vm_event->monitor);
             rc = 0;
         }
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index f988291..f89361e 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -357,6 +357,67 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_even
     return 1;
 }
 
+/*
+ * Pull all responses from the given ring and unpause the corresponding vCPU
+ * if required. Based on the response type, here we can also call custom
+ * handlers.
+ *
+ * Note: responses are handled the same way regardless of which ring they
+ * arrive on.
+ */
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
+{
+    vm_event_response_t rsp;
+
+    /* Pull all responses off the ring. */
+    while ( vm_event_get_response(d, ved, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
+        {
+            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!");
+            continue;
+        }
+
+#ifndef NDEBUG
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
+            continue;
+#endif
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /*
+         * In some cases the response type needs extra handling, so here
+         * we call the appropriate handlers.
+         */
+        switch ( rsp.reason )
+        {
+
+#ifdef HAS_MEM_ACCESS
+        case VM_EVENT_REASON_MEM_ACCESS:
+            mem_access_resume(v, &rsp);
+            break;
+#endif
+
+#ifdef HAS_MEM_PAGING
+        case VM_EVENT_REASON_MEM_PAGING:
+            p2m_mem_paging_resume(d, &rsp);
+            break;
+#endif
+
+        };
+
+        /* Unpause domain. */
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
+    }
+}
+
 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
 {
     vm_event_ring_lock(ved);
@@ -436,25 +497,23 @@ int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->paging);
 }
 #endif
 
-#ifdef HAS_MEM_ACCESS
 /* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
+static void monitor_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->monitor.ring_page != NULL) )
-        mem_access_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->monitor);
 }
-#endif
 
 #ifdef HAS_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->share);
 }
 #endif
 
@@ -509,13 +568,11 @@ void vm_event_cleanup(struct domain *d)
         (void)vm_event_disable(d, &d->vm_event->paging);
     }
 #endif
-#ifdef HAS_MEM_ACCESS
     if ( d->vm_event->monitor.ring_page )
     {
         destroy_waitqueue_head(&d->vm_event->monitor.wq);
         (void)vm_event_disable(d, &d->vm_event->monitor);
     }
-#endif
 #ifdef HAS_MEM_SHARING
     if ( d->vm_event->share.ring_page )
     {
@@ -612,7 +669,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
     break;
 #endif
 
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
     {
         struct vm_event_domain *ved = &d->vm_event->monitor;
@@ -623,7 +679,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         case XEN_VM_EVENT_MONITOR_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
-                                 mem_access_notification);
+                                 monitor_notification);
             break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
@@ -639,7 +695,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
     }
     break;
-#endif
 
 #ifdef HAS_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 2f1f3d2..da99d46 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,7 +90,6 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_sharing_resume(struct domain *d);
 int mem_sharing_memop(struct domain *d, 
                        xen_mem_sharing_op_t *mec);
 int mem_sharing_domctl(struct domain *d, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index bd84e60..302ff22 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -586,7 +586,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
 /* Resume normal operation (in case a domain was paused) */
-void p2m_mem_paging_resume(struct domain *d);
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
 
 /* Send mem event based on the access (gla is -1ull if not available).  Handles
  * the rw2rx conversion. Boolean return value indicates if access rights have 
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 1d01221..221eca0 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -24,6 +24,7 @@
 #define _XEN_ASM_MEM_ACCESS_H
 
 #include <public/memory.h>
+#include <asm/p2m.h>
 
 #ifdef HAS_MEM_ACCESS
 
@@ -31,8 +32,11 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
-/* Resumes the running of the VCPU, restarting the last instruction */
-void mem_access_resume(struct domain *d);
+static inline
+void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
+{
+    p2m_vm_event_emulate_check(v, rsp);
+}
 
 #else
 
@@ -49,7 +53,11 @@ int mem_access_send_req(struct domain *d, vm_event_request_t *req)
     return -ENOSYS;
 }
 
-static inline void mem_access_resume(struct domain *d) {}
+static inline
+void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
+{
+    /* Nothing to do. */
+}
 
 #endif /* HAS_MEM_ACCESS */
 
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 988ea42..82a6e56 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -26,8 +26,6 @@
 
 #include <xen/sched.h>
 
-#ifdef HAS_MEM_ACCESS
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d);
 
@@ -48,15 +46,15 @@ bool_t vm_event_check_ring(struct vm_event_domain *med);
  * succeed.
  */
 int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
-                            bool_t allow_sleep);
+                          bool_t allow_sleep);
 static inline int vm_event_claim_slot(struct domain *d,
-                                        struct vm_event_domain *med)
+                                      struct vm_event_domain *med)
 {
     return __vm_event_claim_slot(d, med, 1);
 }
 
 static inline int vm_event_claim_slot_nosleep(struct domain *d,
-                                        struct vm_event_domain *med)
+                                              struct vm_event_domain *med)
 {
     return __vm_event_claim_slot(d, med, 0);
 }
@@ -64,72 +62,20 @@ static inline int vm_event_claim_slot_nosleep(struct domain *d,
 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med);
 
 void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
-                            vm_event_request_t *req);
+                          vm_event_request_t *req);
 
 int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
-                           vm_event_response_t *rsp);
+                          vm_event_response_t *rsp);
+
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
 int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+                    XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);
 
-#else
-
-static inline void vm_event_cleanup(struct domain *d) {}
-
-static inline bool_t vm_event_check_ring(struct vm_event_domain *med)
-{
-    return 0;
-}
-
-static inline int vm_event_claim_slot(struct domain *d,
-                                        struct vm_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline int vm_event_claim_slot_nosleep(struct domain *d,
-                                        struct vm_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline
-void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med)
-{}
-
-static inline
-void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
-                            vm_event_request_t *req)
-{}
-
-static inline
-int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
-                           vm_event_response_t *rsp)
-{
-    return -ENOSYS;
-}
-
-static inline int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    return -ENOSYS;
-}
-
-static inline
-int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    return -ENOSYS;
-}
-
-static inline void vm_event_vcpu_pause(struct vcpu *v) {}
-static inline void vm_event_vcpu_unpause(struct vcpu *v) {}
-
-#endif /* HAS_MEM_ACCESS */
-
 #endif /* __VM_EVENT_H__ */
 
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 4227093..50ee929 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,7 +513,6 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
@@ -525,7 +524,6 @@ static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
-#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index cff9d35..d56a68f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,10 +141,8 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
-#ifdef HAS_MEM_ACCESS
     int (*vm_event_control) (struct domain *d, int mode, int op);
     int (*vm_event_op) (struct domain *d, int op);
-#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -543,7 +541,6 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
     return xsm_ops->vm_event_control(d, mode, op);
@@ -553,7 +550,6 @@ static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
 {
     return xsm_ops->vm_event_op(d, op);
 }
-#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 25fca68..6d12d32 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,10 +118,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
-#ifdef HAS_MEM_ACCESS
     set_to_dummy_if_null(ops, vm_event_control);
     set_to_dummy_if_null(ops, vm_event_op);
-#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 2179cc3..a65f68c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,9 +577,7 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_vm_event_op:
-#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -689,10 +687,10 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 
     case XEN_DOMCTL_set_access_required:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_monitor_op:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
@@ -1139,6 +1137,16 @@ static int flask_hvm_param_nested(struct domain *d)
     return current_has_perm(d, SECCLASS_HVM, HVM__NESTED);
 }
 
+static int flask_vm_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
+static int flask_vm_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
 {
@@ -1205,18 +1213,6 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
-#ifdef HAS_MEM_ACCESS
-static int flask_vm_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-
-static int flask_vm_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-#endif /* HAS_MEM_ACCESS */
-
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1585,6 +1581,9 @@ static struct xsm_operations flask_ops = {
     .do_xsm_op = do_flask_op,
     .get_vnumainfo = flask_get_vnumainfo,
 
+    .vm_event_control = flask_vm_event_control,
+    .vm_event_op = flask_vm_event_op,
+
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
 #endif
@@ -1600,11 +1599,6 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
-#ifdef HAS_MEM_ACCESS
-    .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
-#endif
-
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index ebe690a..2e231e1 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -221,6 +221,10 @@ class domain2
     psr_cmt_op
 # XEN_DOMCTL_configure_domain
     configure_domain
+# XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
+    vm_event
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
@@ -249,10 +253,6 @@ class hvm
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-# XEN_DOMCTL_set_access_required
-# XEN_DOMCTL_monitor_op
-# XEN_DOMCTL_vm_event_op
-    vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
 #  target = domain whose memory is being shared
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread
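
(Helper-side sketch, not part of the series.) Because vm_event_resume() now
unpauses a vCPU for any response carrying VM_EVENT_FLAG_VCPU_PAUSED, the reply
path looks the same on every ring. Assuming a vm_event back ring already
mapped and initialised as in tools/tests/xen-access, and the standard
public/io/ring.h macros:

    /* Sketch: consume one request and post a reply; vm_event_resume() will
     * unpause the vCPU.  Event-channel notification elided. */
    vm_event_request_t req;
    vm_event_response_t rsp;
    RING_IDX cons = back_ring.req_cons;

    memcpy(&req, RING_GET_REQUEST(&back_ring, cons), sizeof(req));
    back_ring.req_cons = ++cons;

    memset(&rsp, 0, sizeof(rsp));
    rsp.version = VM_EVENT_INTERFACE_VERSION;
    rsp.vcpu_id = req.vcpu_id;
    rsp.reason  = req.reason;
    rsp.flags   = req.flags & VM_EVENT_FLAG_VCPU_PAUSED;

    memcpy(RING_GET_RESPONSE(&back_ring, back_ring.rsp_prod_pvt), &rsp,
           sizeof(rsp));
    back_ring.rsp_prod_pvt++;
    RING_PUSH_RESPONSES(&back_ring);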

* [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 21:23   ` Andrew Cooper
  2015-02-17 14:25   ` Jan Beulich
  2015-02-13 16:33 ` [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
  2015-02-13 16:33 ` [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  11 siblings, 2 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The memop handler responsible for calling XSM on behalf of paging/sharing
doesn't really have anything to do with vm_event, thus in this patch we
relocate its checks into mem_paging_memop and mem_sharing_memop.
mem_access_memop already takes this approach, so this patch merely makes
things consistent.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
 xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
 xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
 xen/arch/x86/x86_64/mm.c          | 26 +++-----------
 xen/common/vm_event.c             | 43 ----------------------
 xen/include/asm-x86/mem_paging.h  |  7 +++-
 xen/include/asm-x86/mem_sharing.h |  4 +--
 xen/include/xen/vm_event.h        |  1 -
 8 files changed, 97 insertions(+), 124 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index e63d8c1..4aee6b7 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -21,28 +21,45 @@
  */
 
 
+#include <xen/guest_access.h>
 #include <asm/p2m.h>
-#include <xen/vm_event.h>
+#include <xsm/xsm.h>
 
-
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
+int mem_paging_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
 {
-    int rc = -ENODEV;
+    int rc;
+    xen_mem_paging_op_t mpo;
+    struct domain *d;
+
+    rc = -EFAULT;
+    if ( copy_from_guest(&mpo, arg, 1) )
+        return rc;
+
+    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
+    if ( rc )
+        return rc;
+
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    if ( rc )
+        return rc;
+
+    rc = -ENODEV;
     if ( unlikely(!d->vm_event->paging.ring_page) )
         return rc;
 
-    switch( mpo->op )
+    switch( mpo.op )
     {
     case XENMEM_paging_op_nominate:
-        rc = p2m_mem_paging_nominate(d, mpo->gfn);
+        rc = p2m_mem_paging_nominate(d, mpo.gfn);
         break;
 
     case XENMEM_paging_op_evict:
-        rc = p2m_mem_paging_evict(d, mpo->gfn);
+        rc = p2m_mem_paging_evict(d, mpo.gfn);
         break;
 
     case XENMEM_paging_op_prep:
-        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
+        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
         break;
 
     default:
@@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
         break;
     }
 
+    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
+        rc = -EFAULT;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4959407..612ed89 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -28,6 +28,7 @@
 #include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/rcupdate.h>
+#include <xen/guest_access.h>
 #include <xen/vm_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
@@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
     return rc;
 }
 
-int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
+int mem_sharing_memop(unsigned long cmd,
+                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
-    int rc = 0;
+    int rc;
+    xen_mem_sharing_op_t mso;
+    struct domain *d;
+
+    rc = -EFAULT;
+    if ( copy_from_guest(&mso, arg, 1) )
+        return rc;
+
+    if ( mso.op == XENMEM_sharing_op_audit )
+        return mem_sharing_audit();
+
+    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
+    if ( rc )
+        return rc;
 
     /* Only HAP is supported */
     if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
          return -ENODEV;
 
-    switch(mec->op)
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    if ( rc )
+        return rc;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->vm_event->share.ring_page) )
+        return rc;
+
+    switch(mso.op)
     {
         case XENMEM_sharing_op_nominate_gfn:
         {
-            unsigned long gfn = mec->u.nominate.u.gfn;
+            unsigned long gfn = mso.u.nominate.u.gfn;
             shr_handle_t handle;
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
             rc = mem_sharing_nominate_page(d, gfn, 0, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
         case XENMEM_sharing_op_nominate_gref:
         {
-            grant_ref_t gref = mec->u.nominate.u.grant_ref;
+            grant_ref_t gref = mso.u.nominate.u.grant_ref;
             unsigned long gfn;
             shr_handle_t handle;
 
@@ -1325,7 +1348,7 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( mem_sharing_gref_to_gfn(d, gref, &gfn) < 0 )
                 return -EINVAL;
             rc = mem_sharing_nominate_page(d, gfn, 3, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
@@ -1338,12 +1361,12 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
                 return rc;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
@@ -1356,36 +1379,36 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
                 return -EINVAL;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.source_gfn));
+                                        mso.u.share.source_gfn));
                 if ( mem_sharing_gref_to_gfn(d, gref, &sgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
                     return -EINVAL;
                 }
             } else {
-                sgfn  = mec->u.share.source_gfn;
+                sgfn  = mso.u.share.source_gfn;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.client_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.client_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.client_gfn));
+                                        mso.u.share.client_gfn));
                 if ( mem_sharing_gref_to_gfn(cd, gref, &cgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
                     return -EINVAL;
                 }
             } else {
-                cgfn  = mec->u.share.client_gfn;
+                cgfn  = mso.u.share.client_gfn;
             }
 
-            sh = mec->u.share.source_handle;
-            ch = mec->u.share.client_handle;
+            sh = mso.u.share.source_handle;
+            ch = mso.u.share.client_handle;
 
             rc = mem_sharing_share_pages(d, sgfn, sh, cd, cgfn, ch); 
 
@@ -1402,12 +1425,12 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
                 return rc;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
@@ -1420,16 +1443,16 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
                 return -EINVAL;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 /* Cannot add a gref to the physmap */
                 rcu_unlock_domain(cd);
                 return -EINVAL;
             }
 
-            sgfn    = mec->u.share.source_gfn;
-            sh      = mec->u.share.source_handle;
-            cgfn    = mec->u.share.client_gfn;
+            sgfn    = mso.u.share.source_gfn;
+            sh      = mso.u.share.source_handle;
+            cgfn    = mso.u.share.client_gfn;
 
             rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn); 
 
@@ -1448,14 +1471,14 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
 
         case XENMEM_sharing_op_debug_gfn:
         {
-            unsigned long gfn = mec->u.debug.u.gfn;
+            unsigned long gfn = mso.u.debug.u.gfn;
             rc = mem_sharing_debug_gfn(d, gfn);
         }
         break;
 
         case XENMEM_sharing_op_debug_gref:
         {
-            grant_ref_t gref = mec->u.debug.u.gref;
+            grant_ref_t gref = mso.u.debug.u.gref;
             rc = mem_sharing_debug_gref(d, gref);
         }
         break;
@@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             break;
     }
 
+    if ( !rc && __copy_to_guest(arg, &mso, 1) )
+        return -EFAULT;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 85f138b..bb03870 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,9 +1,9 @@
 #include <xen/event.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
@@ -187,30 +187,12 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(cmd,
+                                guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(cmd,
+                                 guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1e2bd1a..bdefe5c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,7 +26,6 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
@@ -37,6 +36,7 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 #include <public/memory.h>
 
@@ -984,28 +984,12 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(cmd,
+                                guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(cmd,
+                                 guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index f89361e..2343ae5 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -27,15 +27,6 @@
 #include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
-
-#ifdef HAS_MEM_PAGING
-#include <asm/mem_paging.h>
-#endif
-
-#ifdef HAS_MEM_SHARING
-#include <asm/mem_sharing.h>
-#endif
-
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -517,40 +508,6 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 }
 #endif
 
-int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_vm_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-#ifdef HAS_MEM_PAGING
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, arg);
-            break;
-#endif
-#ifdef HAS_MEM_SHARING
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, arg);
-            break;
-#endif
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d)
 {
diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
index 92ed2fa..9d8bc67 100644
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -20,8 +20,13 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#ifndef __MEM_PAGING_H__
+#define __MEM_PAGING_H__
 
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);
+int mem_paging_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
+
+#endif /*__MEM_PAGING_H__ */
 
 
 /*
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index da99d46..51d4364 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,8 +90,8 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_memop(struct domain *d, 
-                       xen_mem_sharing_op_t *mec);
+int mem_sharing_memop(unsigned long cmd,
+                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
 int mem_sharing_domctl(struct domain *d, 
                        xen_domctl_mem_sharing_op_t *mec);
 int mem_sharing_audit(void);
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 82a6e56..871a519 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -69,7 +69,6 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
 
 void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
-int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread
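
(Caller-side sketch; an illustrative expectation rather than code from the
series.) With the XSM and ring checks now performed inside mem_paging_memop()
itself, the caller-visible contract is unchanged: a paging memop issued
before the paging ring is enabled fails with ENODEV, which a pager can
detect via libxc:

    /* Sketch: xc_mem_paging_nominate() wraps XENMEM_paging_op; with the
     * relocated checks it fails with ENODEV until the ring is enabled. */
    rc = xc_mem_paging_nominate(xch, domain_id, gfn);
    if ( rc < 0 && errno == ENODEV )
        fprintf(stderr, "paging ring not enabled for domain %u\n",
                (unsigned)domain_id);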

* [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (9 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 10/12] xen/vm_event: Relocate memop checks Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 21:25   ` Andrew Cooper
  2015-02-13 16:33 ` [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  11 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The XSM label vm_event_op has been used to control the three memops
governing mem_access, mem_paging and mem_sharing. While these subsystems
rely on vm_event, the memops are not vm_event operations themselves. Thus,
in this patch we introduce a separate label for each of these memops.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/mm/mem_paging.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c       |  2 +-
 xen/common/mem_access.c             |  2 +-
 xen/include/xsm/dummy.h             | 20 +++++++++++++++++++-
 xen/include/xsm/xsm.h               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/dummy.c                     | 13 ++++++++++++-
 xen/xsm/flask/hooks.c               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/flask/policy/access_vectors |  6 ++++++
 8 files changed, 100 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 4aee6b7..73b3580 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -40,7 +40,7 @@ int mem_paging_memop(unsigned long cmd,
     if ( rc )
         return rc;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    rc = xsm_mem_paging(XSM_DM_PRIV, d);
     if ( rc )
         return rc;
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 612ed89..e3ebc05 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1316,7 +1316,7 @@ int mem_sharing_memop(unsigned long cmd,
     if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
          return -ENODEV;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    rc = xsm_mem_sharing(XSM_DM_PRIV, d);
     if ( rc )
         return rc;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index a54fe6e..426f766 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -48,7 +48,7 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    rc = xsm_mem_access(XSM_DM_PRIV, d);
     if ( rc )
         goto out;
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 50ee929..16967ed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -519,11 +519,29 @@ static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index d56a68f..2a88d84 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -142,7 +142,18 @@ struct xsm_operations {
     int (*get_vnumainfo) (struct domain *d);
 
     int (*vm_event_control) (struct domain *d, int mode, int op);
-    int (*vm_event_op) (struct domain *d, int op);
+
+#ifdef HAS_MEM_ACCESS
+    int (*mem_access) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    int (*mem_paging) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    int (*mem_sharing) (struct domain *d);
+#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -546,10 +557,26 @@ static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int
     return xsm_ops->vm_event_control(d, mode, op);
 }
 
-static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->vm_event_op(d, op);
+    return xsm_ops->mem_access(d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_paging(d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_sharing(d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6d12d32..3ddb4f6 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,7 +119,18 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
     set_to_dummy_if_null(ops, vm_event_control);
-    set_to_dummy_if_null(ops, vm_event_op);
+
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_access);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    set_to_dummy_if_null(ops, mem_paging);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    set_to_dummy_if_null(ops, mem_sharing);
+#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a65f68c..01d761b 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1142,10 +1142,26 @@ static int flask_vm_event_control(struct domain *d, int mode, int op)
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 }
 
-static int flask_vm_event_op(struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_access(struct domain *d)
 {
-    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_ACCESS);
+}
+#endif
+
+#ifdef HAS_MEM_PAGING
+static int flask_mem_paging(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_PAGING);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static int flask_mem_sharing(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
 }
+#endif
 
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
@@ -1582,7 +1598,18 @@ static struct xsm_operations flask_ops = {
     .get_vnumainfo = flask_get_vnumainfo,
 
     .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
+
+#ifdef HAS_MEM_ACCESS
+    .mem_access = flask_mem_access,
+#endif
+
+#ifdef HAS_MEM_PAGING
+    .mem_paging = flask_mem_paging,
+#endif
+
+#ifdef HAS_MEM_SHARING
+    .mem_sharing = flask_mem_sharing,
+#endif
 
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 2e231e1..e5197df 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -225,6 +225,12 @@ class domain2
 # XEN_DOMCTL_monitor_op
 # XEN_DOMCTL_vm_event_op
     vm_event
+# XENMEM_access_op
+    mem_access
+# XENMEM_paging_op
+    mem_paging
+# XENMEM_sharing_op
+    mem_sharing
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 61+ messages in thread
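
(Hypothetical policy fragment; monitor_t and domU_t stand in for
site-specific types.) The split lets a FLASK policy grant ring management
without granting all of the individual memops, e.g.:

    # Sketch: a monitoring domain may manage vm_event rings and use
    # mem_access, but gets neither mem_paging nor mem_sharing.
    allow monitor_t domU_t:domain2 { vm_event mem_access };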

* [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (10 preceding siblings ...)
  2015-02-13 16:33 ` [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
@ 2015-02-13 16:33 ` Tamas K Lengyel
  2015-02-13 21:44   ` Andrew Cooper
  2015-02-17 14:31   ` Jan Beulich
  11 siblings, 2 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 16:33 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

Thus far the mem_access and mem_sharing memops had been able to signal
to Xen to start pulling responses off the corresponding rings. In this patch
we retire these memops and fold their resume functionality into the
vm_event_op domctl as a new RESUME option.

The vm_event_op domctl suboptions are the same for each ring, so we
consolidate them into XEN_VM_EVENT_ENABLE/DISABLE/RESUME.

As part of this patch we also rename the libxc mem_access_enable/disable
functions to monitor_enable/disable and move them into xc_monitor.c.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxc/include/xenctrl.h       | 22 ++++++++++-----------
 tools/libxc/xc_mem_access.c         | 25 ------------------------
 tools/libxc/xc_mem_paging.c         | 12 ++++++++++--
 tools/libxc/xc_memshr.c             | 15 ++++++--------
 tools/libxc/xc_monitor.c            | 22 +++++++++++++++++++++
 tools/libxc/xc_vm_event.c           |  6 +++---
 tools/tests/xen-access/xen-access.c | 10 +++++-----
 xen/arch/x86/mm/mem_sharing.c       |  9 ---------
 xen/common/mem_access.c             |  9 ---------
 xen/common/vm_event.c               | 39 +++++++++++++++++++++++++++++++------
 xen/include/public/domctl.h         | 32 ++++++++++++++----------------
 xen/include/public/memory.h         | 16 +++++++--------
 12 files changed, 112 insertions(+), 105 deletions(-)
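
To make the consolidated flow concrete, here is a minimal sketch of a
monitor client built on the renamed calls. monitor_roundtrip is a
hypothetical helper, the real event-processing loop is elided, and the
unmap follows the "caller has to unmap" note in xenctrl.h:

    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Bring the monitor ring up, let Xen pull queued responses,
     * then tear the ring down again. */
    static int monitor_roundtrip(xc_interface *xch, domid_t domid)
    {
        uint32_t port;
        void *ring = xc_monitor_enable(xch, domid, &port);

        if ( ring == NULL )
            return -1;

        /* ... consume requests, queue responses on the ring ... */

        xc_monitor_resume(xch, domid);       /* XEN_VM_EVENT_RESUME  */
        xc_monitor_disable(xch, domid);      /* XEN_VM_EVENT_DISABLE */
        munmap(ring, XC_PAGE_SIZE);
        return 0;
    }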

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 3324132..3042e98 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2269,6 +2269,7 @@ int xc_tmem_restore_extra(xc_interface *xch, int dom, int fd);
  */
 int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id);
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id);
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id,
                            unsigned long gfn);
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn);
@@ -2282,17 +2283,6 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
  */
 
 /*
- * Enables mem_access and returns the mapped ring page.
- * Will return NULL on error.
- * Caller has to unmap this page when done.
- */
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port);
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id);
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id);
-
-/*
  * Set a range of memory to a specific access.
  * Allowed types are XENMEM_access_default, XENMEM_access_n, any combination of
  * XENMEM_access_ + (rwx), and XENMEM_access_rx2rw
@@ -2309,7 +2299,17 @@ int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
 
 /***
  * Monitor control operations.
+ *
+ * Enables the VM event monitor ring and returns the mapped ring page.
+ * This ring is used to deliver mem_access events, as well as a set of additional
+ * events that can be enabled with the xc_monitor_* functions.
+ *
+ * Will return NULL on error.
+ * Caller has to unmap this page when done.
  */
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id);
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id);
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
                           unsigned int op, unsigned int sync,
                           unsigned int onchangeonly);
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 37e776c..7050692 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -24,31 +24,6 @@
 #include "xc_private.h"
 #include <xen/memory.h>
 
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port);
-}
-
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
-{
-    return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_MONITOR_DISABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
-                               NULL);
-}
-
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id)
-{
-    xen_mem_access_op_t mao =
-    {
-        .op    = XENMEM_access_op_resume,
-        .domid = domain_id
-    };
-
-    return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
-}
-
 int xc_set_mem_access(xc_interface *xch,
                       domid_t domain_id,
                       xenmem_access_t access,
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 083ab60..76e0c05 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -48,7 +48,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
     }
 
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                port);
 }
@@ -56,7 +56,15 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               NULL);
+}
+
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                NULL);
 }
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 14cc1ce..0960c6d 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     }
 
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                port);
 }
@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
                            domid_t domid)
 {
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                NULL);
 }
@@ -185,13 +185,10 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
 int xc_memshr_domain_resume(xc_interface *xch,
                             domid_t domid)
 {
-    xen_mem_sharing_op_t mso;
-
-    memset(&mso, 0, sizeof(mso));
-
-    mso.op = XENMEM_sharing_op_resume;
-
-    return xc_memshr_memop(xch, domid, &mso);
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               NULL);
 }
 
 int xc_memshr_debug_gfn(xc_interface *xch,
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 9e807d1..a4d7b7a 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -23,6 +23,28 @@
 
 #include "xc_private.h"
 
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
+{
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port);
+}
+
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
                           unsigned int op, unsigned int sync,
                           unsigned int onchangeonly)
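
The port that xc_monitor_enable() returns above is an event channel; a
hedged sketch of wiring it up, following the pattern of the in-tree clients
(names like xce are illustrative and error handling is elided):

    /* Bind to the remote port handed back by xc_monitor_enable(),
     * then block for events and acknowledge them. */
    xc_evtchn *xce = xc_evtchn_open(NULL, 0);
    evtchn_port_t local = xc_evtchn_bind_interdomain(xce, domain_id, port);

    for ( ;; )
    {
        evtchn_port_t p = xc_evtchn_pending(xce); /* blocking read by default */

        /* ... drain vm_event requests off the ring, queue responses ... */

        xc_evtchn_unmask(xce, p);
        xc_evtchn_notify(xce, local);   /* kick Xen's notification handler */
    }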
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 7277e86..a5b3277 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -99,17 +99,17 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_VM_EVENT_PAGING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_VM_EVENT_SHARING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index d52b175..665eee2 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -124,8 +124,8 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
 
     if ( mem_access_enable )
     {
-        rc = xc_mem_access_disable(xenaccess->xc_handle,
-                                   xenaccess->vm_event.domain_id);
+        rc = xc_monitor_disable(xenaccess->xc_handle,
+                                xenaccess->vm_event.domain_id);
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
@@ -196,9 +196,9 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
 
     /* Enable mem_access */
     xenaccess->vm_event.ring_page =
-            xc_mem_access_enable(xenaccess->xc_handle,
-                                 xenaccess->vm_event.domain_id,
-                                 &xenaccess->vm_event.evtchn_port);
+            xc_monitor_enable(xenaccess->xc_handle,
+                              xenaccess->vm_event.domain_id,
+                              &xenaccess->vm_event.evtchn_port);
     if ( xenaccess->vm_event.ring_page == NULL )
     {
         switch ( errno ) {
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e3ebc05..0a2fbb6 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1460,15 +1460,6 @@ int mem_sharing_memop(unsigned long cmd,
         }
         break;
 
-        case XENMEM_sharing_op_resume:
-        {
-            if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
-
-            vm_event_resume(d, &d->vm_event->share);
-        }
-        break;
-
         case XENMEM_sharing_op_debug_gfn:
         {
             unsigned long gfn = mso.u.debug.u.gfn;
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 426f766..1ece9b5 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -58,15 +58,6 @@ int mem_access_memop(unsigned long cmd,
 
     switch ( mao.op )
     {
-    case XENMEM_access_op_resume:
-        if ( unlikely(start_iter) )
-            rc = -ENOSYS;
-        else
-        {
-            vm_event_resume(d, &d->vm_event->monitor);
-            rc = 0;
-        }
-        break;
 
     case XENMEM_access_op_set_access:
         rc = -EINVAL;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 2343ae5..0d7cdc2 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -581,7 +581,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_PAGING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -611,13 +611,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
         break;
 
-        case XEN_VM_EVENT_PAGING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
         {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
         }
         break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -633,19 +642,28 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_MONITOR_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
                                  monitor_notification);
             break;
 
-        case XEN_VM_EVENT_MONITOR_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
             if ( ved->ring_page )
             {
                 rc = vm_event_disable(d, ved);
             }
             break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -661,7 +679,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_SHARING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -679,13 +697,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
         break;
 
-        case XEN_VM_EVENT_SHARING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
         {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
         }
         break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 0242914..cdcf9a8 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -756,6 +756,17 @@ struct xen_domctl_gdbsx_domstatus {
 /* XEN_DOMCTL_vm_event_op */
 
 /*
+ * There are currently three rings available for VM events:
+ * sharing, monitor and paging. This hypercall allows one to
+ * control these rings (enable/disable), as well as to signal
+ * to the hypervisor to pull responses (resume) from the given
+ * ring.
+ */
+#define XEN_VM_EVENT_ENABLE               0
+#define XEN_VM_EVENT_DISABLE              1
+#define XEN_VM_EVENT_RESUME               2
+
+/*
  * Domain memory paging
  * Page memory in and out.
  * Domctl interface to set up and tear down the 
@@ -771,9 +782,6 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
 
-#define XEN_VM_EVENT_PAGING_ENABLE               0
-#define XEN_VM_EVENT_PAGING_DISABLE              1
-
 /*
  * Monitor helper.
  *
@@ -785,25 +793,18 @@ struct xen_domctl_gdbsx_domstatus {
  * of every page in a domain.  When one of these permissions--independent,
  * read, write, and execute--is violated, the VCPU is paused and a memory event
  * is sent with what happened. The memory event handler can then resume the
- * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
+ * VCPU and redo the access with a XEN_VM_EVENT_RESUME option.
  *
  * See public/vm_event.h for the list of available events that can be
  * subscribed to via the monitor interface.
  *
- * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
- * interface with the XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
- * operator.
- *
- * The XEN_VM_EVENT_MONITOR_ENABLE* domctls return several
+ * The XEN_VM_EVENT_ENABLE domctl op returns
  * non-standard error codes to indicate why access could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-#define XEN_DOMCTL_VM_EVENT_OP_MONITOR                        2
-
-#define XEN_VM_EVENT_MONITOR_ENABLE                           0
-#define XEN_VM_EVENT_MONITOR_DISABLE                          1
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR           2
 
 /*
  * Sharing ENOMEM helper.
@@ -820,13 +821,10 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
 
-#define XEN_VM_EVENT_SHARING_ENABLE              0
-#define XEN_VM_EVENT_SHARING_DISABLE             1
-
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
 struct xen_domctl_vm_event_op {
-    uint32_t       op;           /* XEN_VM_EVENT_*_* */
+    uint32_t       op;           /* XEN_VM_EVENT_* */
     uint32_t       mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
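
For reference, a hedged sketch of the raw domctl underneath the libxc
wrappers. This mirrors what xc_vm_event_control is expected to issue;
DECLARE_DOMCTL and do_domctl are the usual libxc-internal plumbing, and the
vm_event_op union member name is taken from this series:

    /* Resume the paging ring by hand; in practice
     * xc_mem_paging_resume() wraps exactly this op/mode pair. */
    static int vm_event_resume_paging(xc_interface *xch, domid_t domid)
    {
        DECLARE_DOMCTL;

        domctl.cmd = XEN_DOMCTL_vm_event_op;
        domctl.domain = domid;
        domctl.u.vm_event_op.op   = XEN_VM_EVENT_RESUME;
        domctl.u.vm_event_op.mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;

        return do_domctl(xch, &domctl);
    }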
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 9c41f5d..334f60e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -386,9 +386,8 @@ typedef struct xen_mem_paging_op xen_mem_paging_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 
 #define XENMEM_access_op                    21
-#define XENMEM_access_op_resume             0
-#define XENMEM_access_op_set_access         1
-#define XENMEM_access_op_get_access         2
+#define XENMEM_access_op_set_access         0
+#define XENMEM_access_op_get_access         1
 
 typedef enum {
     XENMEM_access_n,
@@ -439,12 +438,11 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_nominate_gfn      0
 #define XENMEM_sharing_op_nominate_gref     1
 #define XENMEM_sharing_op_share             2
-#define XENMEM_sharing_op_resume            3
-#define XENMEM_sharing_op_debug_gfn         4
-#define XENMEM_sharing_op_debug_mfn         5
-#define XENMEM_sharing_op_debug_gref        6
-#define XENMEM_sharing_op_add_physmap       7
-#define XENMEM_sharing_op_audit             8
+#define XENMEM_sharing_op_debug_gfn         3
+#define XENMEM_sharing_op_debug_mfn         4
+#define XENMEM_sharing_op_debug_gref        5
+#define XENMEM_sharing_op_add_physmap       6
+#define XENMEM_sharing_op_audit             7
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.1.4


* Re: [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 16:33 ` [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
@ 2015-02-13 17:23   ` Andrew Cooper
  2015-02-13 18:03     ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 17:23 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, Razvan Cojocaru,
	stefano.stabellini, tim, steve, jbeulich, eddie.dong, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> The public mem_event structures used to communicate with helper applications via
> shared rings have been used in different settings. However, the variable names
> within these structures have not reflected this fact, resulting in the reuse of
> variables to mean different things under different scenarios.
>
> This patch remedies the issue by clearly defining the structure members based on
> the actual context within which the structure is used.
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
> v5: Style fixes
>     Convert gfn to uint32_t 

It is perfectly possible to have guests with more memory than is covered
by 44 bits (with 4 KiB pages, a 32-bit gfn addresses at most 2^(32+12) =
2^44 bytes, i.e. 16 TiB), or PV guests whose frames reside above the
44-bit boundary.  All gfn values should be 64 bits wide.

~Andrew

> and define mem_access flags bits as we can now save
>         space on the ring this way
>     Split non-mem_event flags into access/paging flags
> v4: Attach mem_event version to each outgoing request directly in mem_event.
> v3: Add padding to mem_event structures.
>     Add version field to mem_event structures and checks for it.
> ---
>  tools/libxc/xc_mem_event.c          |   2 +-
>  tools/libxc/xc_private.h            |   2 +-
>  tools/tests/xen-access/xen-access.c |  45 +++++----
>  tools/xenpaging/xenpaging.c         |  51 ++++++-----
>  xen/arch/x86/hvm/hvm.c              | 177 +++++++++++++++++++-----------------
>  xen/arch/x86/mm/mem_sharing.c       |  16 +++-
>  xen/arch/x86/mm/p2m.c               | 163 ++++++++++++++++++---------------
>  xen/common/mem_access.c             |   6 ++
>  xen/common/mem_event.c              |   2 +
>  xen/include/public/mem_event.h      | 173 ++++++++++++++++++++++++++---------
>  xen/include/public/memory.h         |  11 ++-
>  11 files changed, 401 insertions(+), 247 deletions(-)
>
> diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
> index 8c0be4e..1b5f7c3 100644
> --- a/tools/libxc/xc_mem_event.c
> +++ b/tools/libxc/xc_mem_event.c
> @@ -42,7 +42,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>  
>  int xc_mem_event_memop(xc_interface *xch, domid_t domain_id, 
>                          unsigned int op, unsigned int mode,
> -                        uint64_t gfn, void *buffer)
> +                        uint32_t gfn, void *buffer)
>  {
>      xen_mem_event_op_t meo;
>  
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 45b8644..bc021b8 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -427,7 +427,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>                           unsigned int mode, uint32_t *port);
>  int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
>                          unsigned int op, unsigned int mode,
> -                        uint64_t gfn, void *buffer);
> +                        uint32_t gfn, void *buffer);
>  /*
>   * Enables mem_event and returns the mapped ring page indicated by param.
>   * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index 6cb382d..68f05db 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -551,13 +551,21 @@ int main(int argc, char *argv[])
>                  continue;
>              }
>  
> +            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
> +            {
> +                ERROR("Error: mem_event interface version mismatch!\n");
> +                interrupted = -1;
> +                continue;
> +            }
> +
>              memset( &rsp, 0, sizeof (rsp) );
> +            rsp.version = MEM_EVENT_INTERFACE_VERSION;
>              rsp.vcpu_id = req.vcpu_id;
>              rsp.flags = req.flags;
>  
>              switch (req.reason) {
> -            case MEM_EVENT_REASON_VIOLATION:
> -                rc = xc_get_mem_access(xch, domain_id, req.gfn, &access);
> +            case MEM_EVENT_REASON_MEM_ACCESS:
> +                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
>                  if (rc < 0)
>                  {
>                      ERROR("Error %d getting mem_access event\n", rc);
> @@ -565,23 +573,23 @@ int main(int argc, char *argv[])
>                      continue;
>                  }
>  
> -                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
> +                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx32" (offset %06"
>                         PRIx64") gla %016"PRIx64" (valid: %c; fault in gpt: %c; fault with gla: %c) (vcpu %u)\n",
> -                       req.access_r ? 'r' : '-',
> -                       req.access_w ? 'w' : '-',
> -                       req.access_x ? 'x' : '-',
> -                       req.gfn,
> -                       req.offset,
> -                       req.gla,
> -                       req.gla_valid ? 'y' : 'n',
> -                       req.fault_in_gpt ? 'y' : 'n',
> -                       req.fault_with_gla ? 'y': 'n',
> +                       (req.u.mem_access.flags & MEM_ACCESS_R) ? 'r' : '-',
> +                       (req.u.mem_access.flags & MEM_ACCESS_W) ? 'w' : '-',
> +                       (req.u.mem_access.flags & MEM_ACCESS_X) ? 'x' : '-',
> +                       req.u.mem_access.gfn,
> +                       req.u.mem_access.offset,
> +                       req.u.mem_access.gla,
> +                       (req.u.mem_access.flags & MEM_ACCESS_GLA_VALID) ? 'y' : 'n',
> +                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_IN_GPT) ? 'y' : 'n',
> +                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_WITH_GLA) ? 'y': 'n',
>                         req.vcpu_id);
>  
>                  if ( default_access != after_first_access )
>                  {
>                      rc = xc_set_mem_access(xch, domain_id, after_first_access,
> -                                           req.gfn, 1);
> +                                           req.u.mem_access.gfn, 1);
>                      if (rc < 0)
>                      {
>                          ERROR("Error %d setting gfn to access_type %d\n", rc,
> @@ -592,13 +600,12 @@ int main(int argc, char *argv[])
>                  }
>  
>  
> -                rsp.gfn = req.gfn;
> -                rsp.p2mt = req.p2mt;
> +                rsp.u.mem_access.gfn = req.u.mem_access.gfn;
>                  break;
> -            case MEM_EVENT_REASON_INT3:
> -                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n", 
> -                       req.gla, 
> -                       req.gfn,
> +            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
> +                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx32" (vcpu %d)\n",
> +                       req.regs.x86.rip,
> +                       req.u.software_breakpoint.gfn,
>                         req.vcpu_id);
>  
>                  /* Reinject */
> diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
> index 82c1ee4..29ca7c7 100644
> --- a/tools/xenpaging/xenpaging.c
> +++ b/tools/xenpaging/xenpaging.c
> @@ -684,9 +684,9 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
>           * This allows page-out of these gfns if the target grows again.
>           */
>          if (paging->num_paged_out > paging->policy_mru_size)
> -            policy_notify_paged_in(rsp->gfn);
> +            policy_notify_paged_in(rsp->u.mem_paging.gfn);
>          else
> -            policy_notify_paged_in_nomru(rsp->gfn);
> +            policy_notify_paged_in_nomru(rsp->u.mem_paging.gfn);
>  
>         /* Record number of resumed pages */
>         paging->num_paged_out--;
> @@ -874,7 +874,8 @@ int main(int argc, char *argv[])
>      }
>      xch = paging->xc_handle;
>  
> -    DPRINTF("starting %s for domain_id %u with pagefile %s\n", argv[0], paging->mem_event.domain_id, filename);
> +    DPRINTF("starting %s for domain_id %u with pagefile %s\n",
> +            argv[0], paging->mem_event.domain_id, filename);
>  
>      /* ensure that if we get a signal, we'll do cleanup, then exit */
>      act.sa_handler = close_handler;
> @@ -910,49 +911,52 @@ int main(int argc, char *argv[])
>  
>              get_request(&paging->mem_event, &req);
>  
> -            if ( req.gfn > paging->max_pages )
> +            if ( req.u.mem_paging.gfn > paging->max_pages )
>              {
> -                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.gfn, paging->max_pages);
> +                ERROR("Requested gfn %"PRIx32" higher than max_pages %x\n",
> +                      req.u.mem_paging.gfn, paging->max_pages);
>                  goto out;
>              }
>  
>              /* Check if the page has already been paged in */
> -            if ( test_and_clear_bit(req.gfn, paging->bitmap) )
> +            if ( test_and_clear_bit(req.u.mem_paging.gfn, paging->bitmap) )
>              {
>                  /* Find where in the paging file to read from */
> -                slot = paging->gfn_to_slot[req.gfn];
> +                slot = paging->gfn_to_slot[req.u.mem_paging.gfn];
>  
>                  /* Sanity check */
> -                if ( paging->slot_to_gfn[slot] != req.gfn )
> +                if ( paging->slot_to_gfn[slot] != req.u.mem_paging.gfn )
>                  {
> -                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.gfn, slot, paging->slot_to_gfn[slot]);
> +                    ERROR("Expected gfn %"PRIx32" in slot %d, but found gfn %lx\n",
> +                          req.u.mem_paging.gfn, slot, paging->slot_to_gfn[slot]);
>                      goto out;
>                  }
>  
> -                if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
> +                if ( req.u.mem_paging.flags & MEM_PAGING_DROP_PAGE )
>                  {
> -                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.gfn, slot);
> +                    DPRINTF("drop_page ^ gfn %"PRIx32" pageslot %d\n",
> +                            req.u.mem_paging.gfn, slot);
>                      /* Notify policy of page being dropped */
> -                    policy_notify_dropped(req.gfn);
> +                    policy_notify_dropped(req.u.mem_paging.gfn);
>                  }
>                  else
>                  {
>                      /* Populate the page */
> -                    if ( xenpaging_populate_page(paging, req.gfn, slot) < 0 )
> +                    if ( xenpaging_populate_page(paging, req.u.mem_paging.gfn, slot) < 0 )
>                      {
> -                        ERROR("Error populating page %"PRIx64"", req.gfn);
> +                        ERROR("Error populating page %"PRIx32"", req.u.mem_paging.gfn);
>                          goto out;
>                      }
>                  }
>  
>                  /* Prepare the response */
> -                rsp.gfn = req.gfn;
> +                rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
>                  rsp.vcpu_id = req.vcpu_id;
>                  rsp.flags = req.flags;
>  
>                  if ( xenpaging_resume_page(paging, &rsp, 1) < 0 )
>                  {
> -                    PERROR("Error resuming page %"PRIx64"", req.gfn);
> +                    PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
>                      goto out;
>                  }
>  
> @@ -965,23 +969,24 @@ int main(int argc, char *argv[])
>              else
>              {
>                  DPRINTF("page %s populated (domain = %d; vcpu = %d;"
> -                        " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
> -                        req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
> -                        paging->mem_event.domain_id, req.vcpu_id, req.gfn,
> +                        " gfn = %"PRIx32"; paused = %d; evict_fail = %d)\n",
> +                        req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ? "not" : "already",
> +                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
>                          !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
> -                        !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
> +                        !!(req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL) );
>  
>                  /* Tell Xen to resume the vcpu */
> -                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
> +                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) ||
> +                    ( req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ))
>                  {
>                      /* Prepare the response */
> -                    rsp.gfn = req.gfn;
> +                    rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
>                      rsp.vcpu_id = req.vcpu_id;
>                      rsp.flags = req.flags;
>  
>                      if ( xenpaging_resume_page(paging, &rsp, 0) < 0 )
>                      {
> -                        PERROR("Error resuming page %"PRIx64"", req.gfn);
> +                        PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
>                          goto out;
>                      }
>                  }
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index b03ee4e..fe5f568 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -6324,48 +6324,42 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
>      const struct cpu_user_regs *regs = guest_cpu_user_regs();
>      const struct vcpu *curr = current;
>  
> -    req->x86_regs.rax = regs->eax;
> -    req->x86_regs.rcx = regs->ecx;
> -    req->x86_regs.rdx = regs->edx;
> -    req->x86_regs.rbx = regs->ebx;
> -    req->x86_regs.rsp = regs->esp;
> -    req->x86_regs.rbp = regs->ebp;
> -    req->x86_regs.rsi = regs->esi;
> -    req->x86_regs.rdi = regs->edi;
> -
> -    req->x86_regs.r8  = regs->r8;
> -    req->x86_regs.r9  = regs->r9;
> -    req->x86_regs.r10 = regs->r10;
> -    req->x86_regs.r11 = regs->r11;
> -    req->x86_regs.r12 = regs->r12;
> -    req->x86_regs.r13 = regs->r13;
> -    req->x86_regs.r14 = regs->r14;
> -    req->x86_regs.r15 = regs->r15;
> -
> -    req->x86_regs.rflags = regs->eflags;
> -    req->x86_regs.rip    = regs->eip;
> -
> -    req->x86_regs.msr_efer = curr->arch.hvm_vcpu.guest_efer;
> -    req->x86_regs.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
> -    req->x86_regs.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
> -    req->x86_regs.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
> -}
> -
> -static int hvm_memory_event_traps(long p, uint32_t reason,
> -                                  unsigned long value, unsigned long old, 
> -                                  bool_t gla_valid, unsigned long gla) 
> -{
> -    struct vcpu* v = current;
> -    struct domain *d = v->domain;
> -    mem_event_request_t req = { .reason = reason };
> +    req->regs.x86.rax = regs->eax;
> +    req->regs.x86.rcx = regs->ecx;
> +    req->regs.x86.rdx = regs->edx;
> +    req->regs.x86.rbx = regs->ebx;
> +    req->regs.x86.rsp = regs->esp;
> +    req->regs.x86.rbp = regs->ebp;
> +    req->regs.x86.rsi = regs->esi;
> +    req->regs.x86.rdi = regs->edi;
> +
> +    req->regs.x86.r8  = regs->r8;
> +    req->regs.x86.r9  = regs->r9;
> +    req->regs.x86.r10 = regs->r10;
> +    req->regs.x86.r11 = regs->r11;
> +    req->regs.x86.r12 = regs->r12;
> +    req->regs.x86.r13 = regs->r13;
> +    req->regs.x86.r14 = regs->r14;
> +    req->regs.x86.r15 = regs->r15;
> +
> +    req->regs.x86.rflags = regs->eflags;
> +    req->regs.x86.rip    = regs->eip;
> +
> +    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
> +    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
> +    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
> +    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
> +}
> +
> +static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
> +{
>      int rc;
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
>  
> -    if ( !(p & HVMPME_MODE_MASK) ) 
> +    if ( !(parameters & HVMPME_MODE_MASK) )
>          return 0;
>  
> -    if ( (p & HVMPME_onchangeonly) && (value == old) )
> -        return 1;
> -
>      rc = mem_event_claim_slot(d, &d->mem_event->access);
>      if ( rc == -ENOSYS )
>      {
> @@ -6376,85 +6370,106 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
>      else if ( rc < 0 )
>          return rc;
>  
> -    if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync ) 
> +    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
>      {
> -        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;    
> +        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>          mem_event_vcpu_pause(v);
>      }
>  
> -    req.gfn = value;
> -    req.vcpu_id = v->vcpu_id;
> -    if ( gla_valid ) 
> -    {
> -        req.offset = gla & ((1 << PAGE_SHIFT) - 1);
> -        req.gla = gla;
> -        req.gla_valid = 1;
> -    }
> -    else
> -    {
> -        req.gla = old;
> -    }
> -    
> -    hvm_mem_event_fill_regs(&req);
> -    mem_event_put_request(d, &d->mem_event->access, &req);
> -    
> +    hvm_mem_event_fill_regs(req);
> +    mem_event_put_request(d, &d->mem_event->access, req);
> +
>      return 1;
>  }
>  
> +static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
> +                                unsigned long old)
> +{
> +    mem_event_request_t req = {
> +        .reason = reason,
> +        .vcpu_id = current->vcpu_id,
> +        .u.mov_to_cr.new_value = value,
> +        .u.mov_to_cr.old_value = old
> +    };
> +    uint64_t parameters = 0;
> +
> +    switch(reason)
> +    {
> +    case MEM_EVENT_REASON_MOV_TO_CR0:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
> +        break;
> +    case MEM_EVENT_REASON_MOV_TO_CR3:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
> +        break;
> +    case MEM_EVENT_REASON_MOV_TO_CR4:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
> +        break;
> +    };
> +
> +    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
> +        return;
> +
> +    hvm_memory_event_traps(parameters, &req);
> +}
> +
>  void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
>  {
> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
> -                             .params[HVM_PARAM_MEMORY_EVENT_CR0],
> -                           MEM_EVENT_REASON_CR0,
> -                           value, old, 0, 0);
> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
>  }
>  
>  void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
>  {
> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
> -                             .params[HVM_PARAM_MEMORY_EVENT_CR3],
> -                           MEM_EVENT_REASON_CR3,
> -                           value, old, 0, 0);
> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
>  }
>  
>  void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
>  {
> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
> -                             .params[HVM_PARAM_MEMORY_EVENT_CR4],
> -                           MEM_EVENT_REASON_CR4,
> -                           value, old, 0, 0);
> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
>  }
>  
>  void hvm_memory_event_msr(unsigned long msr, unsigned long value)
>  {
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
> +        .vcpu_id = current->vcpu_id,
> +        .u.mov_to_msr.msr = msr,
> +        .u.mov_to_msr.value = value,
> +    };
> +
>      hvm_memory_event_traps(current->domain->arch.hvm_domain
> -                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
> -                           MEM_EVENT_REASON_MSR,
> -                           value, ~value, 1, msr);
> +                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
> +                           &req);
>  }
>  
>  int hvm_memory_event_int3(unsigned long gla) 
>  {
>      uint32_t pfec = PFEC_page_present;
> -    unsigned long gfn;
> -    gfn = paging_gva_to_gfn(current, gla, &pfec);
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
> +        .vcpu_id = current->vcpu_id,
> +        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
> +    };
>  
>      return hvm_memory_event_traps(current->domain->arch.hvm_domain
> -                                    .params[HVM_PARAM_MEMORY_EVENT_INT3],
> -                                  MEM_EVENT_REASON_INT3,
> -                                  gfn, 0, 1, gla);
> +                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
> +                                  &req);
>  }
>  
>  int hvm_memory_event_single_step(unsigned long gla)
>  {
>      uint32_t pfec = PFEC_page_present;
> -    unsigned long gfn;
> -    gfn = paging_gva_to_gfn(current, gla, &pfec);
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_SINGLESTEP,
> +        .vcpu_id = current->vcpu_id,
> +        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
> +    };
>  
>      return hvm_memory_event_traps(current->domain->arch.hvm_domain
> -            .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
> -            MEM_EVENT_REASON_SINGLESTEP,
> -            gfn, 0, 1, gla);
> +                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
> +                                  &req);
>  }
>  
>  int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 7c0fc7d..8a192ef 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -559,7 +559,12 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
>  {
>      struct vcpu *v = current;
>      int rc;
> -    mem_event_request_t req = { .gfn = gfn };
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_MEM_SHARING,
> +        .vcpu_id = v->vcpu_id,
> +        .u.mem_sharing.gfn = gfn,
> +        .u.mem_sharing.p2mt = p2m_ram_shared
> +    };
>  
>      if ( (rc = __mem_event_claim_slot(d, 
>                          &d->mem_event->share, allow_sleep)) < 0 )
> @@ -571,9 +576,6 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
>          mem_event_vcpu_pause(v);
>      }
>  
> -    req.p2mt = p2m_ram_shared;
> -    req.vcpu_id = v->vcpu_id;
> -
>      mem_event_put_request(d, &d->mem_event->share, &req);
>  
>      return 0;
> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>      {
>          struct vcpu *v;
>  
> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
> +        {
> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
> +            continue;
> +        }
> +
>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>              continue;
>  
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 6a06e9f..339f8fe 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1081,7 +1081,10 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
>  void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>                                  p2m_type_t p2mt)
>  {
> -    mem_event_request_t req = { .gfn = gfn };
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_MEM_PAGING,
> +        .u.mem_paging.gfn = gfn
> +    };
>  
>      /* We allow no ring in this unique case, because it won't affect
>       * correctness of the guest execution at this point.  If this is the only
> @@ -1092,14 +1095,14 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>          return;
>  
>      /* Send release notification to pager */
> -    req.flags = MEM_EVENT_FLAG_DROP_PAGE;
> +    req.u.mem_paging.flags = MEM_PAGING_DROP_PAGE;
>  
>      /* Update stats unless the page hasn't yet been evicted */
>      if ( p2mt != p2m_ram_paging_out )
>          atomic_dec(&d->paged_pages);
>      else
>          /* Evict will fail now, tag this request for pager */
> -        req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
> +        req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
>  
>      mem_event_put_request(d, &d->mem_event->paging, &req);
>  }
> @@ -1128,7 +1131,10 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>  void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>  {
>      struct vcpu *v = current;
> -    mem_event_request_t req = { .gfn = gfn };
> +    mem_event_request_t req = {
> +        .reason = MEM_EVENT_REASON_MEM_PAGING,
> +        .u.mem_paging.gfn = gfn
> +    };
>      p2m_type_t p2mt;
>      p2m_access_t a;
>      mfn_t mfn;
> @@ -1157,7 +1163,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>      {
>          /* Evict will fail now, tag this request for pager */
>          if ( p2mt == p2m_ram_paging_out )
> -            req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
> +            req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
>  
>          p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
>      }
> @@ -1178,7 +1184,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>      }
>  
>      /* Send request to pager */
> -    req.p2mt = p2mt;
> +    req.u.mem_paging.p2mt = p2mt;
>      req.vcpu_id = v->vcpu_id;
>  
>      mem_event_put_request(d, &d->mem_event->paging, &req);
> @@ -1300,6 +1306,12 @@ void p2m_mem_paging_resume(struct domain *d)
>      {
>          struct vcpu *v;
>  
> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
> +        {
> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
> +            continue;
> +        }
> +
>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>              continue;
>  
> @@ -1310,20 +1322,21 @@ void p2m_mem_paging_resume(struct domain *d)
>          v = d->vcpu[rsp.vcpu_id];
>  
>          /* Fix p2m entry if the page was not dropped */
> -        if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
> +        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
>          {
> -            gfn_lock(p2m, rsp.gfn, 0);
> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
> +            uint64_t gfn = rsp.u.mem_access.gfn;
> +            gfn_lock(p2m, gfn, 0);
> +            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
>              /* Allow only pages which were prepared properly, or pages which
>               * were nominated but not evicted */
>              if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
>              {
> -                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
> +                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>                                paging_mode_log_dirty(d) ? p2m_ram_logdirty :
>                                p2m_ram_rw, a);
> -                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
> +                set_gpfn_from_mfn(mfn_x(mfn), gfn);
>              }
> -            gfn_unlock(p2m, rsp.gfn, 0);
> +            gfn_unlock(p2m, gfn, 0);
>          }
>          /* Unpause domain */
>          if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
> @@ -1341,92 +1354,94 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
>      /* Architecture-specific vmcs/vmcb bits */
>      hvm_funcs.save_cpu_ctxt(curr, &ctxt);
>  
> -    req->x86_regs.rax = regs->eax;
> -    req->x86_regs.rcx = regs->ecx;
> -    req->x86_regs.rdx = regs->edx;
> -    req->x86_regs.rbx = regs->ebx;
> -    req->x86_regs.rsp = regs->esp;
> -    req->x86_regs.rbp = regs->ebp;
> -    req->x86_regs.rsi = regs->esi;
> -    req->x86_regs.rdi = regs->edi;
> -
> -    req->x86_regs.r8  = regs->r8;
> -    req->x86_regs.r9  = regs->r9;
> -    req->x86_regs.r10 = regs->r10;
> -    req->x86_regs.r11 = regs->r11;
> -    req->x86_regs.r12 = regs->r12;
> -    req->x86_regs.r13 = regs->r13;
> -    req->x86_regs.r14 = regs->r14;
> -    req->x86_regs.r15 = regs->r15;
> -
> -    req->x86_regs.rflags = regs->eflags;
> -    req->x86_regs.rip    = regs->eip;
> -
> -    req->x86_regs.dr7 = curr->arch.debugreg[7];
> -    req->x86_regs.cr0 = ctxt.cr0;
> -    req->x86_regs.cr2 = ctxt.cr2;
> -    req->x86_regs.cr3 = ctxt.cr3;
> -    req->x86_regs.cr4 = ctxt.cr4;
> -
> -    req->x86_regs.sysenter_cs = ctxt.sysenter_cs;
> -    req->x86_regs.sysenter_esp = ctxt.sysenter_esp;
> -    req->x86_regs.sysenter_eip = ctxt.sysenter_eip;
> -
> -    req->x86_regs.msr_efer = ctxt.msr_efer;
> -    req->x86_regs.msr_star = ctxt.msr_star;
> -    req->x86_regs.msr_lstar = ctxt.msr_lstar;
> +    req->regs.x86.rax = regs->eax;
> +    req->regs.x86.rcx = regs->ecx;
> +    req->regs.x86.rdx = regs->edx;
> +    req->regs.x86.rbx = regs->ebx;
> +    req->regs.x86.rsp = regs->esp;
> +    req->regs.x86.rbp = regs->ebp;
> +    req->regs.x86.rsi = regs->esi;
> +    req->regs.x86.rdi = regs->edi;
> +
> +    req->regs.x86.r8  = regs->r8;
> +    req->regs.x86.r9  = regs->r9;
> +    req->regs.x86.r10 = regs->r10;
> +    req->regs.x86.r11 = regs->r11;
> +    req->regs.x86.r12 = regs->r12;
> +    req->regs.x86.r13 = regs->r13;
> +    req->regs.x86.r14 = regs->r14;
> +    req->regs.x86.r15 = regs->r15;
> +
> +    req->regs.x86.rflags = regs->eflags;
> +    req->regs.x86.rip    = regs->eip;
> +
> +    req->regs.x86.dr7 = curr->arch.debugreg[7];
> +    req->regs.x86.cr0 = ctxt.cr0;
> +    req->regs.x86.cr2 = ctxt.cr2;
> +    req->regs.x86.cr3 = ctxt.cr3;
> +    req->regs.x86.cr4 = ctxt.cr4;
> +
> +    req->regs.x86.sysenter_cs = ctxt.sysenter_cs;
> +    req->regs.x86.sysenter_esp = ctxt.sysenter_esp;
> +    req->regs.x86.sysenter_eip = ctxt.sysenter_eip;
> +
> +    req->regs.x86.msr_efer = ctxt.msr_efer;
> +    req->regs.x86.msr_star = ctxt.msr_star;
> +    req->regs.x86.msr_lstar = ctxt.msr_lstar;
>  
>      hvm_get_segment_register(curr, x86_seg_fs, &seg);
> -    req->x86_regs.fs_base = seg.base;
> +    req->regs.x86.fs_base = seg.base;
>  
>      hvm_get_segment_register(curr, x86_seg_gs, &seg);
> -    req->x86_regs.gs_base = seg.base;
> +    req->regs.x86.gs_base = seg.base;
>  
>      hvm_get_segment_register(curr, x86_seg_cs, &seg);
> -    req->x86_regs.cs_arbytes = seg.attr.bytes;
> +    req->regs.x86.cs_arbytes = seg.attr.bytes;
>  }
>  
> -void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp)
> +void p2m_mem_event_emulate_check(struct vcpu *v,
> +                                 const mem_event_response_t *rsp)
>  {
>      /* Mark vcpu for skipping one instruction upon rescheduling. */
> -    if ( rsp->flags & MEM_EVENT_FLAG_EMULATE )
> +    if ( rsp->flags & MEM_ACCESS_EMULATE )
>      {
>          xenmem_access_t access;
>          bool_t violation = 1;
> +        const struct mem_event_mem_access *data = &rsp->u.mem_access;
>  
> -        if ( p2m_get_mem_access(v->domain, rsp->gfn, &access) == 0 )
> +        if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
>          {
>              switch ( access )
>              {
>              case XENMEM_access_n:
>              case XENMEM_access_n2rwx:
>              default:
> -                violation = rsp->access_r || rsp->access_w || rsp->access_x;
> +                violation = data->flags & MEM_ACCESS_RWX;
>                  break;
>  
>              case XENMEM_access_r:
> -                violation = rsp->access_w || rsp->access_x;
> +                violation = data->flags & MEM_ACCESS_WX;
>                  break;
>  
>              case XENMEM_access_w:
> -                violation = rsp->access_r || rsp->access_x;
> +                violation = data->flags & MEM_ACCESS_RX;
>                  break;
>  
>              case XENMEM_access_x:
> -                violation = rsp->access_r || rsp->access_w;
> +                violation = data->flags & MEM_ACCESS_RW;
>                  break;
>  
>              case XENMEM_access_rx:
>              case XENMEM_access_rx2rw:
> -                violation = rsp->access_w;
> +                violation = data->flags & MEM_ACCESS_W;
>                  break;
>  
>              case XENMEM_access_wx:
> -                violation = rsp->access_r;
> +                violation = data->flags & MEM_ACCESS_R;
>                  break;
>  
>              case XENMEM_access_rw:
> -                violation = rsp->access_x;
> +                violation = data->flags & MEM_ACCESS_X;
>                  break;
>  
>              case XENMEM_access_rwx:
> @@ -1532,7 +1547,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>      if ( v->arch.mem_event.emulate_flags )
>      {
>          hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
> -                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
> +                                   MEM_ACCESS_EMULATE_NOWRITE) != 0,
>                                    TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
>  
>          v->arch.mem_event.emulate_flags = 0;
> @@ -1544,24 +1559,28 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>      if ( req )
>      {
>          *req_ptr = req;
> -        req->reason = MEM_EVENT_REASON_VIOLATION;
> +        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
>  
>          /* Pause the current VCPU */
>          if ( p2ma != p2m_access_n2rwx )
>              req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>  
>          /* Send request to mem event */
> -        req->gfn = gfn;
> -        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
> -        req->gla_valid = npfec.gla_valid;
> -        req->gla = gla;
> -        if ( npfec.kind == npfec_kind_with_gla )
> -            req->fault_with_gla = 1;
> -        else if ( npfec.kind == npfec_kind_in_gpt )
> -            req->fault_in_gpt = 1;
> -        req->access_r = npfec.read_access;
> -        req->access_w = npfec.write_access;
> -        req->access_x = npfec.insn_fetch;
> +        req->u.mem_access.gfn = gfn;
> +        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
> +        if ( npfec.gla_valid )
> +        {
> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
> +            req->u.mem_access.gla = gla;
> +
> +            if ( npfec.kind == npfec_kind_with_gla )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
> +            else if ( npfec.kind == npfec_kind_in_gpt )
> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
> +        }
> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
>          req->vcpu_id = v->vcpu_id;
>  
>          p2m_mem_event_fill_regs(req);
> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
> index d8aac5f..9c5b7a6 100644
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -38,6 +38,12 @@ void mem_access_resume(struct domain *d)
>      {
>          struct vcpu *v;
>  
> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
> +        {
> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
> +            continue;
> +        }
> +
>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>              continue;
>  
> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> index 7cfbe8e..8ab06ce 100644
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -291,6 +291,8 @@ void mem_event_put_request(struct domain *d,
>  #endif
>      }
>  
> +    req->version = MEM_EVENT_INTERFACE_VERSION;
> +
>      mem_event_ring_lock(med);
>  
>      /* Due to the reservations, this step must succeed. */
> diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
> index 599f9e8..1ef65d3 100644
> --- a/xen/include/public/mem_event.h
> +++ b/xen/include/public/mem_event.h
> @@ -28,39 +28,59 @@
>  #define _XEN_PUBLIC_MEM_EVENT_H
>  
>  #include "xen.h"
> +
> +#define MEM_EVENT_INTERFACE_VERSION 0x00000001
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
>  #include "io/ring.h"
>  
> -/* Memory event flags */
> +/*
> + * Memory event flags
> + */
> +
> +/*
> + * VCPU_PAUSED in a request signals that the vCPU triggering the event has been
> + *  paused
> + * VCPU_PAUSED in a response signals to unpause the vCPU
> + */
>  #define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
> -#define MEM_EVENT_FLAG_DROP_PAGE       (1 << 1)
> -#define MEM_EVENT_FLAG_EVICT_FAIL      (1 << 2)
> -#define MEM_EVENT_FLAG_FOREIGN         (1 << 3)
> -#define MEM_EVENT_FLAG_DUMMY           (1 << 4)
> +
>  /*
> - * Emulate the fault-causing instruction (if set in the event response flags).
> - * This will allow the guest to continue execution without lifting the page
> - * access restrictions.
> + * Flags to aid debugging mem_event
> + */
> +#define MEM_EVENT_FLAG_FOREIGN         (1 << 1)
> +#define MEM_EVENT_FLAG_DUMMY           (1 << 2)
> +
> +/*
> + * Reasons for the vm event request
>   */
> -#define MEM_EVENT_FLAG_EMULATE         (1 << 5)
> +
> +/* Default case */
> +#define MEM_EVENT_REASON_UNKNOWN                 0
> +/* Memory access violation */
> +#define MEM_EVENT_REASON_MEM_ACCESS              1
> +/* Memory sharing event */
> +#define MEM_EVENT_REASON_MEM_SHARING             2
> +/* Memory paging event */
> +#define MEM_EVENT_REASON_MEM_PAGING              3
> +/* CR0 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR0              4
> +/* CR3 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR3              5
> +/* CR4 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR4              6
> +/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
> +#define MEM_EVENT_REASON_MOV_TO_MSR              7
> +/* Debug operation executed (e.g. int3) */
> +#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
> +/* Single-step (e.g. MTF) */
> +#define MEM_EVENT_REASON_SINGLESTEP              9
> +
>  /*
> - * Same as MEM_EVENT_FLAG_EMULATE, but with write operations or operations
> - * potentially having side effects (like memory mapped or port I/O) disabled.
> + * Using a custom struct (not hvm_hw_cpu) so as to not fill
> + * the mem_event ring buffer too quickly.
>   */
> -#define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
> -
> -/* Reasons for the memory event request */
> -#define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
> -#define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
> -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
> -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
> -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
> -#define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
> -#define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
> -#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
> -                                             does NOT honour HVMPME_onchangeonly */
> -
> -/* Using a custom struct (not hvm_hw_cpu) so as to not fill
> - * the mem_event ring buffer too quickly. */
>  struct mem_event_regs_x86 {
>      uint64_t rax;
>      uint64_t rcx;
> @@ -97,31 +117,102 @@ struct mem_event_regs_x86 {
>      uint32_t _pad;
>  };
>  
> -typedef struct mem_event_st {
> -    uint32_t flags;
> -    uint32_t vcpu_id;
> +/*
> + * mem_access flag definitions
> + *
> + * These flags are set only as part of a mem_event request.
> + *
> + * R/W/X: Defines the type of violation that has triggered the event
> + *        Multiple types can be set in a single violation!
> + * GLA_VALID: If the gla field holds a guest VA associated with the event
> + * FAULT_WITH_GLA: If the violation was triggered by accessing gla
> + * FAULT_IN_GPT: If the violation was triggered during translating gla
> + */
> +#define MEM_ACCESS_R                    (1 << 0)
> +#define MEM_ACCESS_W                    (1 << 1)
> +#define MEM_ACCESS_X                    (1 << 2)
> +#define MEM_ACCESS_RWX                  (MEM_ACCESS_R | MEM_ACCESS_W | MEM_ACCESS_X)
> +#define MEM_ACCESS_RW                   (MEM_ACCESS_R | MEM_ACCESS_W)
> +#define MEM_ACCESS_RX                   (MEM_ACCESS_R | MEM_ACCESS_X)
> +#define MEM_ACCESS_WX                   (MEM_ACCESS_W | MEM_ACCESS_X)
> +#define MEM_ACCESS_GLA_VALID            (1 << 3)
> +#define MEM_ACCESS_FAULT_WITH_GLA       (1 << 4)
> +#define MEM_ACCESS_FAULT_IN_GPT         (1 << 5)
> +/*
> + * The following flags can be set in the response.
> + *
> + * Emulate the fault-causing instruction (if set in the event response flags).
> + * This will allow the guest to continue execution without lifting the page
> + * access restrictions.
> + */
> +#define MEM_ACCESS_EMULATE              (1 << 6)
> +/*
> + * Same as MEM_ACCESS_EMULATE, but with write operations or operations
> + * potentially having side effects (like memory mapped or port I/O) disabled.
> + */
> +#define MEM_ACCESS_EMULATE_NOWRITE      (1 << 7)
>  
> -    uint64_t gfn;
> +struct mem_event_mem_access {
> +    uint32_t gfn;
> +    uint32_t flags; /* MEM_ACCESS_* */
>      uint64_t offset;
> -    uint64_t gla; /* if gla_valid */
> +    uint64_t gla;   /* if flags has MEM_ACCESS_GLA_VALID set */
> +};
> +
> +struct mem_event_mov_to_cr {
> +    uint64_t new_value;
> +    uint64_t old_value;
> +};
>  
> +struct mem_event_debug {
> +    uint32_t gfn;
> +    uint32_t _pad;
> +};
> +
> +struct mem_event_mov_to_msr {
> +    uint64_t msr;
> +    uint64_t value;
> +};
> +
> +#define MEM_PAGING_DROP_PAGE       (1 << 0)
> +#define MEM_PAGING_EVICT_FAIL      (1 << 1)
> +struct mem_event_paging {
> +    uint32_t gfn;
> +    uint32_t p2mt;
> +    uint32_t flags;
> +    uint32_t _pad;
> +};
> +
> +struct mem_event_sharing {
> +    uint32_t gfn;
>      uint32_t p2mt;
> +};
> +
> +typedef struct mem_event_st {
> +    uint32_t version;   /* MEM_EVENT_INTERFACE_VERSION */
> +    uint32_t flags;     /* MEM_EVENT_FLAG_* */
> +    uint32_t reason;    /* MEM_EVENT_REASON_* */
> +    uint32_t vcpu_id;
>  
> -    uint16_t access_r:1;
> -    uint16_t access_w:1;
> -    uint16_t access_x:1;
> -    uint16_t gla_valid:1;
> -    uint16_t fault_with_gla:1;
> -    uint16_t fault_in_gpt:1;
> -    uint16_t available:10;
> +    union {
> +        struct mem_event_paging                mem_paging;
> +        struct mem_event_sharing               mem_sharing;
> +        struct mem_event_mem_access            mem_access;
> +        struct mem_event_mov_to_cr             mov_to_cr;
> +        struct mem_event_mov_to_msr            mov_to_msr;
> +        struct mem_event_debug                 software_breakpoint;
> +        struct mem_event_debug                 singlestep;
> +    } u;
>  
> -    uint16_t reason;
> -    struct mem_event_regs_x86 x86_regs;
> +    union {
> +        struct mem_event_regs_x86 x86;
> +    } regs;
>  } mem_event_request_t, mem_event_response_t;
>  
>  DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
>  
> -#endif
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +#endif /* _XEN_PUBLIC_MEM_EVENT_H */
>  
>  /*
>   * Local variables:
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index 595f953..2ef1728 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -380,7 +380,8 @@ struct xen_mem_event_op {
>      /* PAGING_PREP IN: buffer to immediately fill page in */
>      uint64_aligned_t    buffer;
>      /* Other OPs */
> -    uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
> +    uint32_t    gfn;           /* IN:  gfn of page being operated on */
> +    uint32_t    _pad;
>  };
>  typedef struct xen_mem_event_op xen_mem_event_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
> @@ -469,21 +470,21 @@ struct xen_mem_sharing_op {
>      union {
>          struct mem_sharing_op_nominate {  /* OP_NOMINATE_xxx           */
>              union {
> -                uint64_aligned_t gfn;     /* IN: gfn to nominate       */
> +                uint32_t      gfn;        /* IN: gfn to nominate       */
>                  uint32_t      grant_ref;  /* IN: grant ref to nominate */
>              } u;
>              uint64_aligned_t  handle;     /* OUT: the handle           */
>          } nominate;
>          struct mem_sharing_op_share {     /* OP_SHARE/ADD_PHYSMAP */
> -            uint64_aligned_t source_gfn;    /* IN: the gfn of the source page */
> +            uint32_t source_gfn;          /* IN: the gfn of the source page */
> +            uint32_t client_gfn;          /* IN: the client gfn */
>              uint64_aligned_t source_handle; /* IN: handle to the source page */
> -            uint64_aligned_t client_gfn;    /* IN: the client gfn */
>              uint64_aligned_t client_handle; /* IN: handle to the client page */
>              domid_t  client_domain; /* IN: the client domain id */
>          } share; 
>          struct mem_sharing_op_debug {     /* OP_DEBUG_xxx */
>              union {
> -                uint64_aligned_t gfn;      /* IN: gfn to debug          */
> +                uint32_t gfn;              /* IN: gfn to debug          */
>                  uint64_aligned_t mfn;      /* IN: mfn to debug          */
>                  uint32_t gref;     /* IN: gref to debug         */
>              } u;
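
For context, a minimal consumer-side sketch (hypothetical helper code, not
part of this series) of how a ring listener would check the new version
field and decode the union-based request introduced above:

#include <stdio.h>
#include <inttypes.h>
#include <xenctrl.h>    /* assumed to expose mem_event_request_t to tools */

static void handle_request(const mem_event_request_t *req)
{
    /* Reject requests from a hypervisor speaking a different ABI. */
    if ( req->version != MEM_EVENT_INTERFACE_VERSION )
    {
        fprintf(stderr, "mem_event interface version mismatch\n");
        return;
    }

    switch ( req->reason )
    {
    case MEM_EVENT_REASON_MEM_ACCESS:
        /* R/W/X arrive as flag bits now, not bitfields. */
        printf("access %c%c%c at gfn %"PRIx32" (vcpu %"PRIu32")\n",
               (req->u.mem_access.flags & MEM_ACCESS_R) ? 'r' : '-',
               (req->u.mem_access.flags & MEM_ACCESS_W) ? 'w' : '-',
               (req->u.mem_access.flags & MEM_ACCESS_X) ? 'x' : '-',
               req->u.mem_access.gfn, req->vcpu_id);
        break;

    case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
        /* RIP now comes from the common register block. */
        printf("int3 at rip %016"PRIx64", gfn %"PRIx32"\n",
               req->regs.x86.rip, req->u.software_breakpoint.gfn);
        break;

    default:
        break;
    }
}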


* Re: [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls
  2015-02-13 16:33 ` [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
@ 2015-02-13 17:53   ` Andrew Cooper
  2015-02-13 18:06     ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 17:53 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:

> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 339f8fe..5851c66 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1501,7 +1501,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>      gfn_unlock(p2m, gfn, 0);
>  
>      /* Otherwise, check if there is a memory event listener, and send the message along */
> -    if ( !mem_event_check_ring(&d->mem_event->access) || !req_ptr ) 
> +    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr ) 

This hunk introduces trailing whitespace.

Once fixed,  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 17:23   ` Andrew Cooper
@ 2015-02-13 18:03     ` Tamas K Lengyel
  2015-02-13 18:09       ` Andrew Cooper
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 18:03 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Tim Deegan, Steven Maresca, xen-devel,
	Jan Beulich, Dong, Eddie, Andres Lagar-Cavilla, Jun Nakajima,
	rshriram, Keir Fraser, Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 6:23 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> The public mem_event structures used to communicate with helper applications via
>> shared rings have been used in different settings. However, the variable names
>> within this structure have not reflected this fact, resulting in the reuse of
>> variables to mean different things under different scenarios.
>>
>> This patch remedies the issue by clearly defining the structure members based on
>> the actual context within which the structure is used.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>> ---
>> v5: Style fixes
>>     Convert gfn to uint32_t
>
> It is perfectly possible to have guests with more memory than is covered
> by 44 bits, or PV guests whose frames reside above the 44-bit boundary.
> All gfn values should be 64 bits wide.
>
> ~Andrew

Internally Xen handles all gfns as unsigned longs, so depending on
the compiler a gfn may be only 32 bits wide. If gfn must be wider than
32 bits, then we should use unsigned long longs within Xen.
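
A quick sketch of the arithmetic behind both points (illustrative only,
assuming 4 KiB pages; not proposed code):

#include <inttypes.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* assuming 4 KiB pages */

/*
 * Illustrative only: 2^32 frames of 2^12 bytes cover exactly the 44-bit
 * (16 TiB) guest-physical boundary, so any frame above it overflows a
 * 32-bit gfn.  The width of unsigned long, meanwhile, depends on the
 * build target: 64-bit on x86_64, 32-bit on e.g. 32-bit ARM.
 */
int main(void)
{
    uint64_t frames = UINT64_C(1) << 32;        /* all 32-bit gfns */
    uint64_t bytes  = frames << PAGE_SHIFT;     /* 2^44 bytes      */

    printf("32-bit gfn covers %" PRIu64 " TiB\n", bytes >> 40);  /* 16 */
    printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
    return 0;
}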

Tamas

>
>> and define mem_access flags bits as we can now save
>>         space on the ring this way
>>     Split non-mem_event flags into access/paging flags
>> v4: Attach mem_event version to each outgoing request directly in mem_event.
>> v3: Add padding to mem_event structures.
>>     Add version field to mem_event structures and checks for it.
>> ---
>>  tools/libxc/xc_mem_event.c          |   2 +-
>>  tools/libxc/xc_private.h            |   2 +-
>>  tools/tests/xen-access/xen-access.c |  45 +++++----
>>  tools/xenpaging/xenpaging.c         |  51 ++++++-----
>>  xen/arch/x86/hvm/hvm.c              | 177 +++++++++++++++++++-----------------
>>  xen/arch/x86/mm/mem_sharing.c       |  16 +++-
>>  xen/arch/x86/mm/p2m.c               | 163 ++++++++++++++++++---------------
>>  xen/common/mem_access.c             |   6 ++
>>  xen/common/mem_event.c              |   2 +
>>  xen/include/public/mem_event.h      | 173 ++++++++++++++++++++++++++---------
>>  xen/include/public/memory.h         |  11 ++-
>>  11 files changed, 401 insertions(+), 247 deletions(-)
>>
>> diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
>> index 8c0be4e..1b5f7c3 100644
>> --- a/tools/libxc/xc_mem_event.c
>> +++ b/tools/libxc/xc_mem_event.c
>> @@ -42,7 +42,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>>
>>  int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
>>                          unsigned int op, unsigned int mode,
>> -                        uint64_t gfn, void *buffer)
>> +                        uint32_t gfn, void *buffer)
>>  {
>>      xen_mem_event_op_t meo;
>>
>> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
>> index 45b8644..bc021b8 100644
>> --- a/tools/libxc/xc_private.h
>> +++ b/tools/libxc/xc_private.h
>> @@ -427,7 +427,7 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>>                           unsigned int mode, uint32_t *port);
>>  int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
>>                          unsigned int op, unsigned int mode,
>> -                        uint64_t gfn, void *buffer);
>> +                        uint32_t gfn, void *buffer);
>>  /*
>>   * Enables mem_event and returns the mapped ring page indicated by param.
>>   * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
>> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
>> index 6cb382d..68f05db 100644
>> --- a/tools/tests/xen-access/xen-access.c
>> +++ b/tools/tests/xen-access/xen-access.c
>> @@ -551,13 +551,21 @@ int main(int argc, char *argv[])
>>                  continue;
>>              }
>>
>> +            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
>> +            {
>> +                ERROR("Error: mem_event interface version mismatch!\n");
>> +                interrupted = -1;
>> +                continue;
>> +            }
>> +
>>              memset( &rsp, 0, sizeof (rsp) );
>> +            rsp.version = MEM_EVENT_INTERFACE_VERSION;
>>              rsp.vcpu_id = req.vcpu_id;
>>              rsp.flags = req.flags;
>>
>>              switch (req.reason) {
>> -            case MEM_EVENT_REASON_VIOLATION:
>> -                rc = xc_get_mem_access(xch, domain_id, req.gfn, &access);
>> +            case MEM_EVENT_REASON_MEM_ACCESS:
>> +                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
>>                  if (rc < 0)
>>                  {
>>                      ERROR("Error %d getting mem_access event\n", rc);
>> @@ -565,23 +573,23 @@ int main(int argc, char *argv[])
>>                      continue;
>>                  }
>>
>> -                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
>> +                printf("PAGE ACCESS: %c%c%c for GFN %"PRIx32" (offset %06"
>>                         PRIx64") gla %016"PRIx64" (valid: %c; fault in gpt: %c; fault with gla: %c) (vcpu %u)\n",
>> -                       req.access_r ? 'r' : '-',
>> -                       req.access_w ? 'w' : '-',
>> -                       req.access_x ? 'x' : '-',
>> -                       req.gfn,
>> -                       req.offset,
>> -                       req.gla,
>> -                       req.gla_valid ? 'y' : 'n',
>> -                       req.fault_in_gpt ? 'y' : 'n',
>> -                       req.fault_with_gla ? 'y': 'n',
>> +                       (req.u.mem_access.flags & MEM_ACCESS_R) ? 'r' : '-',
>> +                       (req.u.mem_access.flags & MEM_ACCESS_W) ? 'w' : '-',
>> +                       (req.u.mem_access.flags & MEM_ACCESS_X) ? 'x' : '-',
>> +                       req.u.mem_access.gfn,
>> +                       req.u.mem_access.offset,
>> +                       req.u.mem_access.gla,
>> +                       (req.u.mem_access.flags & MEM_ACCESS_GLA_VALID) ? 'y' : 'n',
>> +                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_IN_GPT) ? 'y' : 'n',
>> +                       (req.u.mem_access.flags & MEM_ACCESS_FAULT_WITH_GLA) ? 'y': 'n',
>>                         req.vcpu_id);
>>
>>                  if ( default_access != after_first_access )
>>                  {
>>                      rc = xc_set_mem_access(xch, domain_id, after_first_access,
>> -                                           req.gfn, 1);
>> +                                           req.u.mem_access.gfn, 1);
>>                      if (rc < 0)
>>                      {
>>                          ERROR("Error %d setting gfn to access_type %d\n", rc,
>> @@ -592,13 +600,12 @@ int main(int argc, char *argv[])
>>                  }
>>
>>
>> -                rsp.gfn = req.gfn;
>> -                rsp.p2mt = req.p2mt;
>> +                rsp.u.mem_access.gfn = req.u.mem_access.gfn;
>>                  break;
>> -            case MEM_EVENT_REASON_INT3:
>> -                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
>> -                       req.gla,
>> -                       req.gfn,
>> +            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
>> +                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx32" (vcpu %d)\n",
>> +                       req.regs.x86.rip,
>> +                       req.u.software_breakpoint.gfn,
>>                         req.vcpu_id);
>>
>>                  /* Reinject */
>> diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
>> index 82c1ee4..29ca7c7 100644
>> --- a/tools/xenpaging/xenpaging.c
>> +++ b/tools/xenpaging/xenpaging.c
>> @@ -684,9 +684,9 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
>>           * This allows page-out of these gfns if the target grows again.
>>           */
>>          if (paging->num_paged_out > paging->policy_mru_size)
>> -            policy_notify_paged_in(rsp->gfn);
>> +            policy_notify_paged_in(rsp->u.mem_paging.gfn);
>>          else
>> -            policy_notify_paged_in_nomru(rsp->gfn);
>> +            policy_notify_paged_in_nomru(rsp->u.mem_paging.gfn);
>>
>>         /* Record number of resumed pages */
>>         paging->num_paged_out--;
>> @@ -874,7 +874,8 @@ int main(int argc, char *argv[])
>>      }
>>      xch = paging->xc_handle;
>>
>> -    DPRINTF("starting %s for domain_id %u with pagefile %s\n", argv[0], paging->mem_event.domain_id, filename);
>> +    DPRINTF("starting %s for domain_id %u with pagefile %s\n",
>> +            argv[0], paging->mem_event.domain_id, filename);
>>
>>      /* ensure that if we get a signal, we'll do cleanup, then exit */
>>      act.sa_handler = close_handler;
>> @@ -910,49 +911,52 @@ int main(int argc, char *argv[])
>>
>>              get_request(&paging->mem_event, &req);
>>
>> -            if ( req.gfn > paging->max_pages )
>> +            if ( req.u.mem_paging.gfn > paging->max_pages )
>>              {
>> -                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.gfn, paging->max_pages);
>> +                ERROR("Requested gfn %"PRIx32" higher than max_pages %x\n",
>> +                      req.u.mem_paging.gfn, paging->max_pages);
>>                  goto out;
>>              }
>>
>>              /* Check if the page has already been paged in */
>> -            if ( test_and_clear_bit(req.gfn, paging->bitmap) )
>> +            if ( test_and_clear_bit(req.u.mem_paging.gfn, paging->bitmap) )
>>              {
>>                  /* Find where in the paging file to read from */
>> -                slot = paging->gfn_to_slot[req.gfn];
>> +                slot = paging->gfn_to_slot[req.u.mem_paging.gfn];
>>
>>                  /* Sanity check */
>> -                if ( paging->slot_to_gfn[slot] != req.gfn )
>> +                if ( paging->slot_to_gfn[slot] != req.u.mem_paging.gfn )
>>                  {
>> -                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.gfn, slot, paging->slot_to_gfn[slot]);
>> +                    ERROR("Expected gfn %"PRIx32" in slot %d, but found gfn %lx\n",
>> +                          req.u.mem_paging.gfn, slot, paging->slot_to_gfn[slot]);
>>                      goto out;
>>                  }
>>
>> -                if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
>> +                if ( req.u.mem_paging.flags & MEM_PAGING_DROP_PAGE )
>>                  {
>> -                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.gfn, slot);
>> +                    DPRINTF("drop_page ^ gfn %"PRIx32" pageslot %d\n",
>> +                            req.u.mem_paging.gfn, slot);
>>                      /* Notify policy of page being dropped */
>> -                    policy_notify_dropped(req.gfn);
>> +                    policy_notify_dropped(req.u.mem_paging.gfn);
>>                  }
>>                  else
>>                  {
>>                      /* Populate the page */
>> -                    if ( xenpaging_populate_page(paging, req.gfn, slot) < 0 )
>> +                    if ( xenpaging_populate_page(paging, req.u.mem_paging.gfn, slot) < 0 )
>>                      {
>> -                        ERROR("Error populating page %"PRIx64"", req.gfn);
>> +                        ERROR("Error populating page %"PRIx32"", req.u.mem_paging.gfn);
>>                          goto out;
>>                      }
>>                  }
>>
>>                  /* Prepare the response */
>> -                rsp.gfn = req.gfn;
>> +                rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
>>                  rsp.vcpu_id = req.vcpu_id;
>>                  rsp.flags = req.flags;
>>
>>                  if ( xenpaging_resume_page(paging, &rsp, 1) < 0 )
>>                  {
>> -                    PERROR("Error resuming page %"PRIx64"", req.gfn);
>> +                    PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
>>                      goto out;
>>                  }
>>
>> @@ -965,23 +969,24 @@ int main(int argc, char *argv[])
>>              else
>>              {
>>                  DPRINTF("page %s populated (domain = %d; vcpu = %d;"
>> -                        " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
>> -                        req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
>> -                        paging->mem_event.domain_id, req.vcpu_id, req.gfn,
>> +                        " gfn = %"PRIx32"; paused = %d; evict_fail = %d)\n",
>> +                        req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ? "not" : "already",
>> +                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
>>                          !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
>> -                        !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
>> +                        !!(req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL) );
>>
>>                  /* Tell Xen to resume the vcpu */
>> -                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
>> +                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) ||
>> +                    ( req.u.mem_paging.flags & MEM_PAGING_EVICT_FAIL ))
>>                  {
>>                      /* Prepare the response */
>> -                    rsp.gfn = req.gfn;
>> +                    rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
>>                      rsp.vcpu_id = req.vcpu_id;
>>                      rsp.flags = req.flags;
>>
>>                      if ( xenpaging_resume_page(paging, &rsp, 0) < 0 )
>>                      {
>> -                        PERROR("Error resuming page %"PRIx64"", req.gfn);
>> +                        PERROR("Error resuming page %"PRIx32"", req.u.mem_paging.gfn);
>>                          goto out;
>>                      }
>>                  }
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index b03ee4e..fe5f568 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -6324,48 +6324,42 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
>>      const struct cpu_user_regs *regs = guest_cpu_user_regs();
>>      const struct vcpu *curr = current;
>>
>> -    req->x86_regs.rax = regs->eax;
>> -    req->x86_regs.rcx = regs->ecx;
>> -    req->x86_regs.rdx = regs->edx;
>> -    req->x86_regs.rbx = regs->ebx;
>> -    req->x86_regs.rsp = regs->esp;
>> -    req->x86_regs.rbp = regs->ebp;
>> -    req->x86_regs.rsi = regs->esi;
>> -    req->x86_regs.rdi = regs->edi;
>> -
>> -    req->x86_regs.r8  = regs->r8;
>> -    req->x86_regs.r9  = regs->r9;
>> -    req->x86_regs.r10 = regs->r10;
>> -    req->x86_regs.r11 = regs->r11;
>> -    req->x86_regs.r12 = regs->r12;
>> -    req->x86_regs.r13 = regs->r13;
>> -    req->x86_regs.r14 = regs->r14;
>> -    req->x86_regs.r15 = regs->r15;
>> -
>> -    req->x86_regs.rflags = regs->eflags;
>> -    req->x86_regs.rip    = regs->eip;
>> -
>> -    req->x86_regs.msr_efer = curr->arch.hvm_vcpu.guest_efer;
>> -    req->x86_regs.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
>> -    req->x86_regs.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
>> -    req->x86_regs.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
>> -}
>> -
>> -static int hvm_memory_event_traps(long p, uint32_t reason,
>> -                                  unsigned long value, unsigned long old,
>> -                                  bool_t gla_valid, unsigned long gla)
>> -{
>> -    struct vcpu* v = current;
>> -    struct domain *d = v->domain;
>> -    mem_event_request_t req = { .reason = reason };
>> +    req->regs.x86.rax = regs->eax;
>> +    req->regs.x86.rcx = regs->ecx;
>> +    req->regs.x86.rdx = regs->edx;
>> +    req->regs.x86.rbx = regs->ebx;
>> +    req->regs.x86.rsp = regs->esp;
>> +    req->regs.x86.rbp = regs->ebp;
>> +    req->regs.x86.rsi = regs->esi;
>> +    req->regs.x86.rdi = regs->edi;
>> +
>> +    req->regs.x86.r8  = regs->r8;
>> +    req->regs.x86.r9  = regs->r9;
>> +    req->regs.x86.r10 = regs->r10;
>> +    req->regs.x86.r11 = regs->r11;
>> +    req->regs.x86.r12 = regs->r12;
>> +    req->regs.x86.r13 = regs->r13;
>> +    req->regs.x86.r14 = regs->r14;
>> +    req->regs.x86.r15 = regs->r15;
>> +
>> +    req->regs.x86.rflags = regs->eflags;
>> +    req->regs.x86.rip    = regs->eip;
>> +
>> +    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
>> +    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
>> +    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
>> +    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
>> +}
>> +
>> +static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
>> +{
>>      int rc;
>> +    struct vcpu *v = current;
>> +    struct domain *d = v->domain;
>>
>> -    if ( !(p & HVMPME_MODE_MASK) )
>> +    if ( !(parameters & HVMPME_MODE_MASK) )
>>          return 0;
>>
>> -    if ( (p & HVMPME_onchangeonly) && (value == old) )
>> -        return 1;
>> -
>>      rc = mem_event_claim_slot(d, &d->mem_event->access);
>>      if ( rc == -ENOSYS )
>>      {
>> @@ -6376,85 +6370,106 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
>>      else if ( rc < 0 )
>>          return rc;
>>
>> -    if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync )
>> +    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
>>      {
>> -        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>> +        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>>          mem_event_vcpu_pause(v);
>>      }
>>
>> -    req.gfn = value;
>> -    req.vcpu_id = v->vcpu_id;
>> -    if ( gla_valid )
>> -    {
>> -        req.offset = gla & ((1 << PAGE_SHIFT) - 1);
>> -        req.gla = gla;
>> -        req.gla_valid = 1;
>> -    }
>> -    else
>> -    {
>> -        req.gla = old;
>> -    }
>> -
>> -    hvm_mem_event_fill_regs(&req);
>> -    mem_event_put_request(d, &d->mem_event->access, &req);
>> -
>> +    hvm_mem_event_fill_regs(req);
>> +    mem_event_put_request(d, &d->mem_event->access, req);
>> +
>>      return 1;
>>  }
>>
>> +static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
>> +                                unsigned long old)
>> +{
>> +    mem_event_request_t req = {
>> +        .reason = reason,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.mov_to_cr.new_value = value,
>> +        .u.mov_to_cr.old_value = old
>> +    };
>> +    uint64_t parameters = 0;
>> +
>> +    switch(reason)
>> +    {
>> +    case MEM_EVENT_REASON_MOV_TO_CR0:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
>> +        break;
>> +    case MEM_EVENT_REASON_MOV_TO_CR3:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
>> +        break;
>> +    case MEM_EVENT_REASON_MOV_TO_CR4:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
>> +        break;
>> +    };
>> +
>> +    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
>> +        return;
>> +
>> +    hvm_memory_event_traps(parameters, &req);
>> +}
>> +
>>  void hvm_memory_event_cr0(unsigned long value, unsigned long old)
>>  {
>> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -                             .params[HVM_PARAM_MEMORY_EVENT_CR0],
>> -                           MEM_EVENT_REASON_CR0,
>> -                           value, old, 0, 0);
>> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
>>  }
>>
>>  void hvm_memory_event_cr3(unsigned long value, unsigned long old)
>>  {
>> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -                             .params[HVM_PARAM_MEMORY_EVENT_CR3],
>> -                           MEM_EVENT_REASON_CR3,
>> -                           value, old, 0, 0);
>> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
>>  }
>>
>>  void hvm_memory_event_cr4(unsigned long value, unsigned long old)
>>  {
>> -    hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -                             .params[HVM_PARAM_MEMORY_EVENT_CR4],
>> -                           MEM_EVENT_REASON_CR4,
>> -                           value, old, 0, 0);
>> +    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
>>  }
>>
>>  void hvm_memory_event_msr(unsigned long msr, unsigned long value)
>>  {
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.mov_to_msr.msr = msr,
>> +        .u.mov_to_msr.value = value,
>> +    };
>> +
>>      hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
>> -                           MEM_EVENT_REASON_MSR,
>> -                           value, ~value, 1, msr);
>> +                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
>> +                           &req);
>>  }
>>
>>  int hvm_memory_event_int3(unsigned long gla)
>>  {
>>      uint32_t pfec = PFEC_page_present;
>> -    unsigned long gfn;
>> -    gfn = paging_gva_to_gfn(current, gla, &pfec);
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
>> +    };
>>
>>      return hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -                                    .params[HVM_PARAM_MEMORY_EVENT_INT3],
>> -                                  MEM_EVENT_REASON_INT3,
>> -                                  gfn, 0, 1, gla);
>> +                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
>> +                                  &req);
>>  }
>>
>>  int hvm_memory_event_single_step(unsigned long gla)
>>  {
>>      uint32_t pfec = PFEC_page_present;
>> -    unsigned long gfn;
>> -    gfn = paging_gva_to_gfn(current, gla, &pfec);
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_SINGLESTEP,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
>> +    };
>>
>>      return hvm_memory_event_traps(current->domain->arch.hvm_domain
>> -            .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
>> -            MEM_EVENT_REASON_SINGLESTEP,
>> -            gfn, 0, 1, gla);
>> +                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
>> +                                  &req);
>>  }
>>
>>  int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>> index 7c0fc7d..8a192ef 100644
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -559,7 +559,12 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
>>  {
>>      struct vcpu *v = current;
>>      int rc;
>> -    mem_event_request_t req = { .gfn = gfn };
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_MEM_SHARING,
>> +        .vcpu_id = v->vcpu_id,
>> +        .u.mem_sharing.gfn = gfn,
>> +        .u.mem_sharing.p2mt = p2m_ram_shared
>> +    };
>>
>>      if ( (rc = __mem_event_claim_slot(d,
>>                          &d->mem_event->share, allow_sleep)) < 0 )
>> @@ -571,9 +576,6 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
>>          mem_event_vcpu_pause(v);
>>      }
>>
>> -    req.p2mt = p2m_ram_shared;
>> -    req.vcpu_id = v->vcpu_id;
>> -
>>      mem_event_put_request(d, &d->mem_event->share, &req);
>>
>>      return 0;
>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>      {
>>          struct vcpu *v;
>>
>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>> +        {
>> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
>> +            continue;
>> +        }
>> +
>>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>>              continue;
>>
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 6a06e9f..339f8fe 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1081,7 +1081,10 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
>>  void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>>                                  p2m_type_t p2mt)
>>  {
>> -    mem_event_request_t req = { .gfn = gfn };
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_MEM_PAGING,
>> +        .u.mem_paging.gfn = gfn
>> +    };
>>
>>      /* We allow no ring in this unique case, because it won't affect
>>       * correctness of the guest execution at this point.  If this is the only
>> @@ -1092,14 +1095,14 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>>          return;
>>
>>      /* Send release notification to pager */
>> -    req.flags = MEM_EVENT_FLAG_DROP_PAGE;
>> +    req.u.mem_paging.flags = MEM_PAGING_DROP_PAGE;
>>
>>      /* Update stats unless the page hasn't yet been evicted */
>>      if ( p2mt != p2m_ram_paging_out )
>>          atomic_dec(&d->paged_pages);
>>      else
>>          /* Evict will fail now, tag this request for pager */
>> -        req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
>> +        req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
>>
>>      mem_event_put_request(d, &d->mem_event->paging, &req);
>>  }
>> @@ -1128,7 +1131,10 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
>>  void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>>  {
>>      struct vcpu *v = current;
>> -    mem_event_request_t req = { .gfn = gfn };
>> +    mem_event_request_t req = {
>> +        .reason = MEM_EVENT_REASON_MEM_PAGING,
>> +        .u.mem_paging.gfn = gfn
>> +    };
>>      p2m_type_t p2mt;
>>      p2m_access_t a;
>>      mfn_t mfn;
>> @@ -1157,7 +1163,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>>      {
>>          /* Evict will fail now, tag this request for pager */
>>          if ( p2mt == p2m_ram_paging_out )
>> -            req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
>> +            req.u.mem_paging.flags |= MEM_PAGING_EVICT_FAIL;
>>
>>          p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
>>      }
>> @@ -1178,7 +1184,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>>      }
>>
>>      /* Send request to pager */
>> -    req.p2mt = p2mt;
>> +    req.u.mem_paging.p2mt = p2mt;
>>      req.vcpu_id = v->vcpu_id;
>>
>>      mem_event_put_request(d, &d->mem_event->paging, &req);
>> @@ -1300,6 +1306,12 @@ void p2m_mem_paging_resume(struct domain *d)
>>      {
>>          struct vcpu *v;
>>
>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>> +        {
>> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
>> +            continue;
>> +        }
>> +
>>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>>              continue;
>>
>> @@ -1310,20 +1322,21 @@ void p2m_mem_paging_resume(struct domain *d)
>>          v = d->vcpu[rsp.vcpu_id];
>>
>>          /* Fix p2m entry if the page was not dropped */
>> -        if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>> +        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
>>          {
>> -            gfn_lock(p2m, rsp.gfn, 0);
>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>> +            gfn_lock(p2m, gfn, 0);
>> +            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
>>              /* Allow only pages which were prepared properly, or pages which
>>               * were nominated but not evicted */
>>              if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
>>              {
>> -                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
>> +                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>>                                paging_mode_log_dirty(d) ? p2m_ram_logdirty :
>>                                p2m_ram_rw, a);
>> -                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
>> +                set_gpfn_from_mfn(mfn_x(mfn), gfn);
>>              }
>> -            gfn_unlock(p2m, rsp.gfn, 0);
>> +            gfn_unlock(p2m, gfn, 0);
>>          }
>>          /* Unpause domain */
>>          if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
>> @@ -1341,92 +1354,94 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
>>      /* Architecture-specific vmcs/vmcb bits */
>>      hvm_funcs.save_cpu_ctxt(curr, &ctxt);
>>
>> -    req->x86_regs.rax = regs->eax;
>> -    req->x86_regs.rcx = regs->ecx;
>> -    req->x86_regs.rdx = regs->edx;
>> -    req->x86_regs.rbx = regs->ebx;
>> -    req->x86_regs.rsp = regs->esp;
>> -    req->x86_regs.rbp = regs->ebp;
>> -    req->x86_regs.rsi = regs->esi;
>> -    req->x86_regs.rdi = regs->edi;
>> -
>> -    req->x86_regs.r8  = regs->r8;
>> -    req->x86_regs.r9  = regs->r9;
>> -    req->x86_regs.r10 = regs->r10;
>> -    req->x86_regs.r11 = regs->r11;
>> -    req->x86_regs.r12 = regs->r12;
>> -    req->x86_regs.r13 = regs->r13;
>> -    req->x86_regs.r14 = regs->r14;
>> -    req->x86_regs.r15 = regs->r15;
>> -
>> -    req->x86_regs.rflags = regs->eflags;
>> -    req->x86_regs.rip    = regs->eip;
>> -
>> -    req->x86_regs.dr7 = curr->arch.debugreg[7];
>> -    req->x86_regs.cr0 = ctxt.cr0;
>> -    req->x86_regs.cr2 = ctxt.cr2;
>> -    req->x86_regs.cr3 = ctxt.cr3;
>> -    req->x86_regs.cr4 = ctxt.cr4;
>> -
>> -    req->x86_regs.sysenter_cs = ctxt.sysenter_cs;
>> -    req->x86_regs.sysenter_esp = ctxt.sysenter_esp;
>> -    req->x86_regs.sysenter_eip = ctxt.sysenter_eip;
>> -
>> -    req->x86_regs.msr_efer = ctxt.msr_efer;
>> -    req->x86_regs.msr_star = ctxt.msr_star;
>> -    req->x86_regs.msr_lstar = ctxt.msr_lstar;
>> +    req->regs.x86.rax = regs->eax;
>> +    req->regs.x86.rcx = regs->ecx;
>> +    req->regs.x86.rdx = regs->edx;
>> +    req->regs.x86.rbx = regs->ebx;
>> +    req->regs.x86.rsp = regs->esp;
>> +    req->regs.x86.rbp = regs->ebp;
>> +    req->regs.x86.rsi = regs->esi;
>> +    req->regs.x86.rdi = regs->edi;
>> +
>> +    req->regs.x86.r8  = regs->r8;
>> +    req->regs.x86.r9  = regs->r9;
>> +    req->regs.x86.r10 = regs->r10;
>> +    req->regs.x86.r11 = regs->r11;
>> +    req->regs.x86.r12 = regs->r12;
>> +    req->regs.x86.r13 = regs->r13;
>> +    req->regs.x86.r14 = regs->r14;
>> +    req->regs.x86.r15 = regs->r15;
>> +
>> +    req->regs.x86.rflags = regs->eflags;
>> +    req->regs.x86.rip    = regs->eip;
>> +
>> +    req->regs.x86.dr7 = curr->arch.debugreg[7];
>> +    req->regs.x86.cr0 = ctxt.cr0;
>> +    req->regs.x86.cr2 = ctxt.cr2;
>> +    req->regs.x86.cr3 = ctxt.cr3;
>> +    req->regs.x86.cr4 = ctxt.cr4;
>> +
>> +    req->regs.x86.sysenter_cs = ctxt.sysenter_cs;
>> +    req->regs.x86.sysenter_esp = ctxt.sysenter_esp;
>> +    req->regs.x86.sysenter_eip = ctxt.sysenter_eip;
>> +
>> +    req->regs.x86.msr_efer = ctxt.msr_efer;
>> +    req->regs.x86.msr_star = ctxt.msr_star;
>> +    req->regs.x86.msr_lstar = ctxt.msr_lstar;
>>
>>      hvm_get_segment_register(curr, x86_seg_fs, &seg);
>> -    req->x86_regs.fs_base = seg.base;
>> +    req->regs.x86.fs_base = seg.base;
>>
>>      hvm_get_segment_register(curr, x86_seg_gs, &seg);
>> -    req->x86_regs.gs_base = seg.base;
>> +    req->regs.x86.gs_base = seg.base;
>>
>>      hvm_get_segment_register(curr, x86_seg_cs, &seg);
>> -    req->x86_regs.cs_arbytes = seg.attr.bytes;
>> +    req->regs.x86.cs_arbytes = seg.attr.bytes;
>>  }
>>
>> -void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp)
>> +void p2m_mem_event_emulate_check(struct vcpu *v,
>> +                                 const mem_event_response_t *rsp)
>>  {
>>      /* Mark vcpu for skipping one instruction upon rescheduling. */
>> -    if ( rsp->flags & MEM_EVENT_FLAG_EMULATE )
>> +    if ( rsp->flags & MEM_ACCESS_EMULATE )
>>      {
>>          xenmem_access_t access;
>>          bool_t violation = 1;
>> +        const struct mem_event_mem_access *data = &rsp->u.mem_access;
>>
>> -        if ( p2m_get_mem_access(v->domain, rsp->gfn, &access) == 0 )
>> +        if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
>>          {
>>              switch ( access )
>>              {
>>              case XENMEM_access_n:
>>              case XENMEM_access_n2rwx:
>>              default:
>> -                violation = rsp->access_r || rsp->access_w || rsp->access_x;
>> +                violation = data->flags & MEM_ACCESS_RWX;
>>                  break;
>>
>>              case XENMEM_access_r:
>> -                violation = rsp->access_w || rsp->access_x;
>> +                violation = data->flags & MEM_ACCESS_WX;
>>                  break;
>>
>>              case XENMEM_access_w:
>> -                violation = rsp->access_r || rsp->access_x;
>> +                violation = data->flags & MEM_ACCESS_RX;
>>                  break;
>>
>>              case XENMEM_access_x:
>> -                violation = rsp->access_r || rsp->access_w;
>> +                violation = data->flags & MEM_ACCESS_RW;
>>                  break;
>>
>>              case XENMEM_access_rx:
>>              case XENMEM_access_rx2rw:
>> -                violation = rsp->access_w;
>> +                violation = data->flags & MEM_ACCESS_W;
>>                  break;
>>
>>              case XENMEM_access_wx:
>> -                violation = rsp->access_r;
>> +                violation = data->flags & MEM_ACCESS_R;
>>                  break;
>>
>>              case XENMEM_access_rw:
>> -                violation = rsp->access_x;
>> +                violation = data->flags & MEM_ACCESS_X;
>>                  break;
>>
>>              case XENMEM_access_rwx:
>> @@ -1532,7 +1547,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>      if ( v->arch.mem_event.emulate_flags )
>>      {
>>          hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
>> -                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
>> +                                   MEM_ACCESS_EMULATE_NOWRITE) != 0,
>>                                    TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
>>
>>          v->arch.mem_event.emulate_flags = 0;
>> @@ -1544,24 +1559,28 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>      if ( req )
>>      {
>>          *req_ptr = req;
>> -        req->reason = MEM_EVENT_REASON_VIOLATION;
>> +        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
>>
>>          /* Pause the current VCPU */
>>          if ( p2ma != p2m_access_n2rwx )
>>              req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>>
>>          /* Send request to mem event */
>> -        req->gfn = gfn;
>> -        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
>> -        req->gla_valid = npfec.gla_valid;
>> -        req->gla = gla;
>> -        if ( npfec.kind == npfec_kind_with_gla )
>> -            req->fault_with_gla = 1;
>> -        else if ( npfec.kind == npfec_kind_in_gpt )
>> -            req->fault_in_gpt = 1;
>> -        req->access_r = npfec.read_access;
>> -        req->access_w = npfec.write_access;
>> -        req->access_x = npfec.insn_fetch;
>> +        req->u.mem_access.gfn = gfn;
>> +        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
>> +        if ( npfec.gla_valid )
>> +        {
>> +            req->u.mem_access.flags |= MEM_ACCESS_GLA_VALID;
>> +            req->u.mem_access.gla = gla;
>> +
>> +            if ( npfec.kind == npfec_kind_with_gla )
>> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA;
>> +            else if ( npfec.kind == npfec_kind_in_gpt )
>> +                req->u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT;
>> +        }
>> +        req->u.mem_access.flags |= npfec.read_access    ? MEM_ACCESS_R : 0;
>> +        req->u.mem_access.flags |= npfec.write_access   ? MEM_ACCESS_W : 0;
>> +        req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
>>          req->vcpu_id = v->vcpu_id;
>>
>>          p2m_mem_event_fill_regs(req);
>> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
>> index d8aac5f..9c5b7a6 100644
>> --- a/xen/common/mem_access.c
>> +++ b/xen/common/mem_access.c
>> @@ -38,6 +38,12 @@ void mem_access_resume(struct domain *d)
>>      {
>>          struct vcpu *v;
>>
>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>> +        {
>> +            printk(XENLOG_G_WARNING "mem_event interface version mismatch\n");
>> +            continue;
>> +        }
>> +
>>          if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
>>              continue;
>>
>> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
>> index 7cfbe8e..8ab06ce 100644
>> --- a/xen/common/mem_event.c
>> +++ b/xen/common/mem_event.c
>> @@ -291,6 +291,8 @@ void mem_event_put_request(struct domain *d,
>>  #endif
>>      }
>>
>> +    req->version = MEM_EVENT_INTERFACE_VERSION;
>> +
>>      mem_event_ring_lock(med);
>>
>>      /* Due to the reservations, this step must succeed. */
>> diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
>> index 599f9e8..1ef65d3 100644
>> --- a/xen/include/public/mem_event.h
>> +++ b/xen/include/public/mem_event.h
>> @@ -28,39 +28,59 @@
>>  #define _XEN_PUBLIC_MEM_EVENT_H
>>
>>  #include "xen.h"
>> +
>> +#define MEM_EVENT_INTERFACE_VERSION 0x00000001
>> +
>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>> +
>>  #include "io/ring.h"
>>
>> -/* Memory event flags */
>> +/*
>> + * Memory event flags
>> + */
>> +
>> +/*
>> + * VCPU_PAUSED in a request signals that the vCPU triggering the event has been
>> + *  paused
>> + * VCPU_PAUSED in a response signals to unpause the vCPU
>> + */
>>  #define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
>> -#define MEM_EVENT_FLAG_DROP_PAGE       (1 << 1)
>> -#define MEM_EVENT_FLAG_EVICT_FAIL      (1 << 2)
>> -#define MEM_EVENT_FLAG_FOREIGN         (1 << 3)
>> -#define MEM_EVENT_FLAG_DUMMY           (1 << 4)
>> +
>>  /*
>> - * Emulate the fault-causing instruction (if set in the event response flags).
>> - * This will allow the guest to continue execution without lifting the page
>> - * access restrictions.
>> + * Flags to aid debugging mem_event
>> + */
>> +#define MEM_EVENT_FLAG_FOREIGN         (1 << 1)
>> +#define MEM_EVENT_FLAG_DUMMY           (1 << 2)
>> +
>> +/*
>> + * Reasons for the vm event request
>>   */
>> -#define MEM_EVENT_FLAG_EMULATE         (1 << 5)
>> +
>> +/* Default case */
>> +#define MEM_EVENT_REASON_UNKNOWN                 0
>> +/* Memory access violation */
>> +#define MEM_EVENT_REASON_MEM_ACCESS              1
>> +/* Memory sharing event */
>> +#define MEM_EVENT_REASON_MEM_SHARING             2
>> +/* Memory paging event */
>> +#define MEM_EVENT_REASON_MEM_PAGING              3
>> +/* CR0 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR0              4
>> +/* CR3 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR3              5
>> +/* CR4 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR4              6
>> +/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
>> +#define MEM_EVENT_REASON_MOV_TO_MSR              7
>> +/* Debug operation executed (e.g. int3) */
>> +#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
>> +/* Single-step (e.g. MTF) */
>> +#define MEM_EVENT_REASON_SINGLESTEP              9
>> +
>>  /*
>> - * Same as MEM_EVENT_FLAG_EMULATE, but with write operations or operations
>> - * potentially having side effects (like memory mapped or port I/O) disabled.
>> + * Using a custom struct (not hvm_hw_cpu) so as to not fill
>> + * the mem_event ring buffer too quickly.
>>   */
>> -#define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
>> -
>> -/* Reasons for the memory event request */
>> -#define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
>> -#define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
>> -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
>> -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
>> -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
>> -#define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
>> -#define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
>> -#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
>> -                                             does NOT honour HVMPME_onchangeonly */
>> -
>> -/* Using a custom struct (not hvm_hw_cpu) so as to not fill
>> - * the mem_event ring buffer too quickly. */
>>  struct mem_event_regs_x86 {
>>      uint64_t rax;
>>      uint64_t rcx;
>> @@ -97,31 +117,102 @@ struct mem_event_regs_x86 {
>>      uint32_t _pad;
>>  };
>>
>> -typedef struct mem_event_st {
>> -    uint32_t flags;
>> -    uint32_t vcpu_id;
>> +/*
>> + * mem_access flag definitions
>> + *
>> + * These flags are set only as part of a mem_event request.
>> + *
>> + * R/W/X: Defines the type of violation that has triggered the event
>> + *        Multiple types can be set in a single violation!
>> + * GLA_VALID: If the gla field holds a guest VA associated with the event
>> + * FAULT_WITH_GLA: If the violation was triggered by accessing gla
>> + * FAULT_IN_GPT: If the violation was triggered during translating gla
>> + */
>> +#define MEM_ACCESS_R                    (1 << 0)
>> +#define MEM_ACCESS_W                    (1 << 1)
>> +#define MEM_ACCESS_X                    (1 << 2)
>> +#define MEM_ACCESS_RWX                  (MEM_ACCESS_R | MEM_ACCESS_W | MEM_ACCESS_X)
>> +#define MEM_ACCESS_RW                   (MEM_ACCESS_R | MEM_ACCESS_W)
>> +#define MEM_ACCESS_RX                   (MEM_ACCESS_R | MEM_ACCESS_X)
>> +#define MEM_ACCESS_WX                   (MEM_ACCESS_W | MEM_ACCESS_X)
>> +#define MEM_ACCESS_GLA_VALID            (1 << 3)
>> +#define MEM_ACCESS_FAULT_WITH_GLA       (1 << 4)
>> +#define MEM_ACCESS_FAULT_IN_GPT         (1 << 5)
>> +/*
>> + * The following flags can be set in the response.
>> + *
>> + * Emulate the fault-causing instruction (if set in the event response flags).
>> + * This will allow the guest to continue execution without lifting the page
>> + * access restrictions.
>> + */
>> +#define MEM_ACCESS_EMULATE              (1 << 6)
>> +/*
>> + * Same as MEM_ACCESS_EMULATE, but with write operations or operations
>> + * potentially having side effects (like memory mapped or port I/O) disabled.
>> + */
>> +#define MEM_ACCESS_EMULATE_NOWRITE      (1 << 7)
>>
>> -    uint64_t gfn;
>> +struct mem_event_mem_access {
>> +    uint32_t gfn;
>> +    uint32_t flags; /* MEM_ACCESS_* */
>>      uint64_t offset;
>> -    uint64_t gla; /* if gla_valid */
>> +    uint64_t gla;   /* if flags has MEM_ACCESS_GLA_VALID set */
>> +};
>> +
>> +struct mem_event_mov_to_cr {
>> +    uint64_t new_value;
>> +    uint64_t old_value;
>> +};
>>
>> +struct mem_event_debug {
>> +    uint32_t gfn;
>> +    uint32_t _pad;
>> +};
>> +
>> +struct mem_event_mov_to_msr {
>> +    uint64_t msr;
>> +    uint64_t value;
>> +};
>> +
>> +#define MEM_PAGING_DROP_PAGE       (1 << 0)
>> +#define MEM_PAGING_EVICT_FAIL      (1 << 1)
>> +struct mem_event_paging {
>> +    uint32_t gfn;
>> +    uint32_t p2mt;
>> +    uint32_t flags;
>> +    uint32_t _pad;
>> +};
>> +
>> +struct mem_event_sharing {
>> +    uint32_t gfn;
>>      uint32_t p2mt;
>> +};
>> +
>> +typedef struct mem_event_st {
>> +    uint32_t version;   /* MEM_EVENT_INTERFACE_VERSION */
>> +    uint32_t flags;     /* MEM_EVENT_FLAG_* */
>> +    uint32_t reason;    /* MEM_EVENT_REASON_* */
>> +    uint32_t vcpu_id;
>>
>> -    uint16_t access_r:1;
>> -    uint16_t access_w:1;
>> -    uint16_t access_x:1;
>> -    uint16_t gla_valid:1;
>> -    uint16_t fault_with_gla:1;
>> -    uint16_t fault_in_gpt:1;
>> -    uint16_t available:10;
>> +    union {
>> +        struct mem_event_paging                mem_paging;
>> +        struct mem_event_sharing               mem_sharing;
>> +        struct mem_event_mem_access            mem_access;
>> +        struct mem_event_mov_to_cr             mov_to_cr;
>> +        struct mem_event_mov_to_msr            mov_to_msr;
>> +        struct mem_event_debug                 software_breakpoint;
>> +        struct mem_event_debug                 singlestep;
>> +    } u;
>>
>> -    uint16_t reason;
>> -    struct mem_event_regs_x86 x86_regs;
>> +    union {
>> +        struct mem_event_regs_x86 x86;
>> +    } regs;
>>  } mem_event_request_t, mem_event_response_t;
>>
>>  DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
>>
>> -#endif
>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>> +#endif /* _XEN_PUBLIC_MEM_EVENT_H */
>>
>>  /*
>>   * Local variables:
>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> index 595f953..2ef1728 100644
>> --- a/xen/include/public/memory.h
>> +++ b/xen/include/public/memory.h
>> @@ -380,7 +380,8 @@ struct xen_mem_event_op {
>>      /* PAGING_PREP IN: buffer to immediately fill page in */
>>      uint64_aligned_t    buffer;
>>      /* Other OPs */
>> -    uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
>> +    uint32_t    gfn;           /* IN:  gfn of page being operated on */
>> +    uint32_t    _pad;
>>  };
>>  typedef struct xen_mem_event_op xen_mem_event_op_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
>> @@ -469,21 +470,21 @@ struct xen_mem_sharing_op {
>>      union {
>>          struct mem_sharing_op_nominate {  /* OP_NOMINATE_xxx           */
>>              union {
>> -                uint64_aligned_t gfn;     /* IN: gfn to nominate       */
>> +                uint32_t      gfn;        /* IN: gfn to nominate       */
>>                  uint32_t      grant_ref;  /* IN: grant ref to nominate */
>>              } u;
>>              uint64_aligned_t  handle;     /* OUT: the handle           */
>>          } nominate;
>>          struct mem_sharing_op_share {     /* OP_SHARE/ADD_PHYSMAP */
>> -            uint64_aligned_t source_gfn;    /* IN: the gfn of the source page */
>> +            uint32_t source_gfn;          /* IN: the gfn of the source page */
>> +            uint32_t client_gfn;          /* IN: the client gfn */
>>              uint64_aligned_t source_handle; /* IN: handle to the source page */
>> -            uint64_aligned_t client_gfn;    /* IN: the client gfn */
>>              uint64_aligned_t client_handle; /* IN: handle to the client page */
>>              domid_t  client_domain; /* IN: the client domain id */
>>          } share;
>>          struct mem_sharing_op_debug {     /* OP_DEBUG_xxx */
>>              union {
>> -                uint64_aligned_t gfn;      /* IN: gfn to debug          */
>> +                uint32_t gfn;              /* IN: gfn to debug          */
>>                  uint64_aligned_t mfn;      /* IN: mfn to debug          */
>>                  uint32_t gref;     /* IN: gref to debug         */
>>              } u;
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls
  2015-02-13 17:53   ` Andrew Cooper
@ 2015-02-13 18:06     ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 18:06 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 6:53 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 339f8fe..5851c66 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1501,7 +1501,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>      gfn_unlock(p2m, gfn, 0);
>>
>>      /* Otherwise, check if there is a memory event listener, and send the message along */
>> -    if ( !mem_event_check_ring(&d->mem_event->access) || !req_ptr )
>> +    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr )
>
> This hunk introduces trailing whitespace.

The whitespace was already there but I'll fix it in the next revision.

>
> Once fixed,  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 18:03     ` Tamas K Lengyel
@ 2015-02-13 18:09       ` Andrew Cooper
  2015-02-13 18:13         ` Tamas K Lengyel
  2015-02-17 11:48         ` Jan Beulich
  0 siblings, 2 replies; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 18:09 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Tim Deegan, Steven Maresca, xen-devel,
	Jan Beulich, Dong, Eddie, Andres Lagar-Cavilla, Jun Nakajima,
	rshriram, Keir Fraser, Daniel De Graaf, yanghy, Ian Jackson

On 13/02/15 18:03, Tamas K Lengyel wrote:
> On Fri, Feb 13, 2015 at 6:23 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>> The public mem_event structures used to communicate with helper applications via
>>> shared rings have been used in different settings. However, the variable names
>>> within this structure have not reflected this fact, resulting in the reuse of
>>> variables to mean different things under different scenarios.
>>>
>>> This patch remedies the issue by clearly defining the structure members based on
>>> the actual context within which the structure is used.
>>>
>>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>> ---
>>> v5: Style fixes
>>>     Convert gfn to uint32_t
>> It is perfectly possible to have guests with more memory than is covered
>> by 44 bits, or PV guests whose frames reside above the 44bit boundary.
>> All gfn values should be 64bits wide.
>>
>> ~Andrew
> Internally Xen handles all gfns as unsigned longs, so depending on
> the compiler it may be only 32-bit wide. If gfn must be larger than
> 32-bit then we should use unsigned long longs within Xen.

x86_32 Xen support was ripped out a while ago.  For the time being all
unsigned longs are 64bit.

With the arm32/64 split, there is a slow move in the direction of
paddr_t rather than unsigned long.

~Andrew

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 18:09       ` Andrew Cooper
@ 2015-02-13 18:13         ` Tamas K Lengyel
  2015-02-17 11:48         ` Jan Beulich
  1 sibling, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 18:13 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Tim Deegan, Steven Maresca, xen-devel,
	Jan Beulich, Dong, Eddie, Andres Lagar-Cavilla, Jun Nakajima,
	rshriram, Keir Fraser, Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 7:09 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 18:03, Tamas K Lengyel wrote:
>> On Fri, Feb 13, 2015 at 6:23 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>>> The public mem_event structures used to communicate with helper applications via
>>>> shared rings have been used in different settings. However, the variable names
>>>> within this structure have not reflected this fact, resulting in the reuse of
>>>> variables to mean different things under different scenarios.
>>>>
>>>> This patch remedies the issue by clearly defining the structure members based on
>>>> the actual context within which the structure is used.
>>>>
>>>> Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>>> ---
>>>> v5: Style fixes
>>>>     Convert gfn to uint32_t
>>> It is perfectly possible to have guests with more memory than is covered
>>> by 44 bits, or PV guests whose frames reside above the 44bit boundary.
>>> All gfn values should be 64bits wide.
>>>
>>> ~Andrew
>> Internally Xen handles all gfns as unsigned longs, so depending on
>> the compiler it may be only 32-bit wide. If gfn must be larger than
>> 32-bit then we should use unsigned long longs within Xen.
>
> x86_32 Xen support was ripped out a while ago.  For the time being all
> unsigned longs are 64bit.
>
> With the arm32/64 split, there is a slow move in the direction of
> paddr_t rather than unsigned long.
>
> ~Andrew

Ack.

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op
  2015-02-13 16:33 ` [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
@ 2015-02-13 18:17   ` Andrew Cooper
  2015-02-13 18:30     ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 18:17 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> The only use-case of the mem_event_op structure had been in mem_paging,
> thus renaming the structure mem_paging_op and relocating its associated
> functions clarifies its actual usage.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Tim Deegan <tim@xen.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> ---
> v5: Style fixes
> v4: Style fixes
> v3: Style fixes
> ---
>  tools/libxc/xc_mem_event.c       | 16 ----------------
>  tools/libxc/xc_mem_paging.c      | 26 ++++++++++++++++++--------
>  tools/libxc/xc_private.h         |  3 ---
>  xen/arch/x86/mm/mem_paging.c     | 32 +++++++++++++-------------------
>  xen/arch/x86/x86_64/compat/mm.c  | 10 ++++++----
>  xen/arch/x86/x86_64/mm.c         |  8 ++++----
>  xen/common/mem_event.c           |  4 ++--
>  xen/include/asm-x86/mem_paging.h |  2 +-
>  xen/include/public/memory.h      |  9 ++++-----
>  9 files changed, 48 insertions(+), 62 deletions(-)
>
> diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
> index ee25cdd..487fcee 100644
> --- a/tools/libxc/xc_mem_event.c
> +++ b/tools/libxc/xc_mem_event.c
> @@ -40,22 +40,6 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>      return rc;
>  }
>  
> -int xc_mem_event_memop(xc_interface *xch, domid_t domain_id, 
> -                        unsigned int op, unsigned int mode,
> -                        uint32_t gfn, void *buffer)
> -{
> -    xen_mem_event_op_t meo;
> -
> -    memset(&meo, 0, sizeof(meo));
> -
> -    meo.op      = op;
> -    meo.domain  = domain_id;
> -    meo.gfn     = gfn;
> -    meo.buffer  = (unsigned long) buffer;
> -
> -    return do_memory_op(xch, mode, &meo, sizeof(meo));
> -}
> -
>  void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
>                            uint32_t *port, int enable_introspection)
>  {
> diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
> index 5194423..049aff4 100644
> --- a/tools/libxc/xc_mem_paging.c
> +++ b/tools/libxc/xc_mem_paging.c
> @@ -23,6 +23,20 @@
>  
>  #include "xc_private.h"
>  
> +static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
> +                               unsigned int op, uint32_t gfn, void *buffer)

As said in patch 1, this gfn must be a uint64_t

> +{
> +    xen_mem_paging_op_t mpo;
> +
> +    memset(&mpo, 0, sizeof(mpo));
> +
> +    mpo.op      = op;
> +    mpo.domain  = domain_id;
> +    mpo.gfn     = gfn;
> +    mpo.buffer  = (unsigned long) buffer;
> +
> +    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +}
>  
>  int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
>                           uint32_t *port)
> @@ -49,25 +63,22 @@ int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
>  
>  int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>  {

And these 'unsigned long gfn' should be promoted to a uint64_t gfn to
avoid truncation in 32bit toolstacks.

Whether you wish to fix this in the same patch, or fix it in a separate
"make mem_event interface 64/32bit safe" patch is up to you.  This is
straying somewhat from a simple refactoring of mem_event_op to
mem_paging_op.
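
For illustration, the promoted prototypes might look roughly like this
(a sketch only, not the final patch; names as in the code quoted above):

/* Use a fixed-width 64bit gfn so 32bit toolstacks cannot truncate it. */
static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
                               unsigned int op, uint64_t gfn, void *buffer);

int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, uint64_t gfn);
int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, uint64_t gfn);
int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, uint64_t gfn);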

> -    return xc_mem_event_memop(xch, domain_id,
> +    return xc_mem_paging_memop(xch, domain_id,
>                                  XENMEM_paging_op_nominate,
> -                                XENMEM_paging_op,
>                                  gfn, NULL);
>  }
>  
>  int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>  {
> -    return xc_mem_event_memop(xch, domain_id,
> +    return xc_mem_paging_memop(xch, domain_id,
>                                  XENMEM_paging_op_evict,
> -                                XENMEM_paging_op,
>                                  gfn, NULL);
>  }
>  
>  int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>  {
> -    return xc_mem_event_memop(xch, domain_id,
> +    return xc_mem_paging_memop(xch, domain_id,
>                                  XENMEM_paging_op_prep,
> -                                XENMEM_paging_op,
>                                  gfn, NULL);
>  }
>  
> @@ -87,9 +98,8 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
>      if ( mlock(buffer, XC_PAGE_SIZE) )
>          return -1;
>          
> -    rc = xc_mem_event_memop(xch, domain_id,
> +    rc = xc_mem_paging_memop(xch, domain_id,
>                                  XENMEM_paging_op_prep,
> -                                XENMEM_paging_op,
>                                  gfn, buffer);
>  
>      old_errno = errno;
> diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
> index 6b7a1fe..92ed2fa 100644
> --- a/xen/include/asm-x86/mem_paging.h
> +++ b/xen/include/asm-x86/mem_paging.h
> @@ -21,7 +21,7 @@
>   */
>  
>  
> -int mem_paging_memop(struct domain *d, xen_mem_event_op_t *meo);
> +int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);

s/meo/mpo/ like the implementation.

Once fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op
  2015-02-13 18:17   ` Andrew Cooper
@ 2015-02-13 18:30     ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 18:30 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 7:17 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> The only use-case of the mem_event_op structure had been in mem_paging,
>> thus renaming the structure mem_paging_op and relocating its associated
>> functions clarifies its actual usage.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>> Acked-by: Tim Deegan <tim@xen.org>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v5: Style fixes
>> v4: Style fixes
>> v3: Style fixes
>> ---
>>  tools/libxc/xc_mem_event.c       | 16 ----------------
>>  tools/libxc/xc_mem_paging.c      | 26 ++++++++++++++++++--------
>>  tools/libxc/xc_private.h         |  3 ---
>>  xen/arch/x86/mm/mem_paging.c     | 32 +++++++++++++-------------------
>>  xen/arch/x86/x86_64/compat/mm.c  | 10 ++++++----
>>  xen/arch/x86/x86_64/mm.c         |  8 ++++----
>>  xen/common/mem_event.c           |  4 ++--
>>  xen/include/asm-x86/mem_paging.h |  2 +-
>>  xen/include/public/memory.h      |  9 ++++-----
>>  9 files changed, 48 insertions(+), 62 deletions(-)
>>
>> diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
>> index ee25cdd..487fcee 100644
>> --- a/tools/libxc/xc_mem_event.c
>> +++ b/tools/libxc/xc_mem_event.c
>> @@ -40,22 +40,6 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
>>      return rc;
>>  }
>>
>> -int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
>> -                        unsigned int op, unsigned int mode,
>> -                        uint32_t gfn, void *buffer)
>> -{
>> -    xen_mem_event_op_t meo;
>> -
>> -    memset(&meo, 0, sizeof(meo));
>> -
>> -    meo.op      = op;
>> -    meo.domain  = domain_id;
>> -    meo.gfn     = gfn;
>> -    meo.buffer  = (unsigned long) buffer;
>> -
>> -    return do_memory_op(xch, mode, &meo, sizeof(meo));
>> -}
>> -
>>  void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
>>                            uint32_t *port, int enable_introspection)
>>  {
>> diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
>> index 5194423..049aff4 100644
>> --- a/tools/libxc/xc_mem_paging.c
>> +++ b/tools/libxc/xc_mem_paging.c
>> @@ -23,6 +23,20 @@
>>
>>  #include "xc_private.h"
>>
>> +static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
>> +                               unsigned int op, uint32_t gfn, void *buffer)
>
> As said in patch 1, this gfn must be a uint64_t
>
>> +{
>> +    xen_mem_paging_op_t mpo;
>> +
>> +    memset(&mpo, 0, sizeof(mpo));
>> +
>> +    mpo.op      = op;
>> +    mpo.domain  = domain_id;
>> +    mpo.gfn     = gfn;
>> +    mpo.buffer  = (unsigned long) buffer;
>> +
>> +    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
>> +}
>>
>>  int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
>>                           uint32_t *port)
>> @@ -49,25 +63,22 @@ int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
>>
>>  int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>>  {
>
> And these 'unsigned long gfn' should be promoted to a uint64_t gfn to
> avoid truncation in 32bit toolstacks.
>
> Whether you wish to fix this in the same patch, or fix it in a separate
> "make mem_event interface 64/32bit safe" patch is up to you.  This is
> straying somewhat from a simple refactoring of mem_event_op to
> mem_paging_op.

I'll just do it here for the sake of juggling fewer patches.

>
>> -    return xc_mem_event_memop(xch, domain_id,
>> +    return xc_mem_paging_memop(xch, domain_id,
>>                                  XENMEM_paging_op_nominate,
>> -                                XENMEM_paging_op,
>>                                  gfn, NULL);
>>  }
>>
>>  int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>>  {
>> -    return xc_mem_event_memop(xch, domain_id,
>> +    return xc_mem_paging_memop(xch, domain_id,
>>                                  XENMEM_paging_op_evict,
>> -                                XENMEM_paging_op,
>>                                  gfn, NULL);
>>  }
>>
>>  int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn)
>>  {
>> -    return xc_mem_event_memop(xch, domain_id,
>> +    return xc_mem_paging_memop(xch, domain_id,
>>                                  XENMEM_paging_op_prep,
>> -                                XENMEM_paging_op,
>>                                  gfn, NULL);
>>  }
>>
>> @@ -87,9 +98,8 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
>>      if ( mlock(buffer, XC_PAGE_SIZE) )
>>          return -1;
>>
>> -    rc = xc_mem_event_memop(xch, domain_id,
>> +    rc = xc_mem_paging_memop(xch, domain_id,
>>                                  XENMEM_paging_op_prep,
>> -                                XENMEM_paging_op,
>>                                  gfn, buffer);
>>
>>      old_errno = errno;
>> diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
>> index 6b7a1fe..92ed2fa 100644
>> --- a/xen/include/asm-x86/mem_paging.h
>> +++ b/xen/include/asm-x86/mem_paging.h
>> @@ -21,7 +21,7 @@
>>   */
>>
>>
>> -int mem_paging_memop(struct domain *d, xen_mem_event_op_t *meo);
>> +int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);
>
> s/meo/mpo/ like the implementation.

Ack. This header also seems to have been missing an #ifdef wrapper so
I'm going to add that here as well.

Tamas

>
> Once fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 04/12] xen: Rename mem_event to vm_event
  2015-02-13 16:33 ` [PATCH V5 04/12] xen: Rename mem_event to vm_event Tamas K Lengyel
@ 2015-02-13 18:31   ` Andrew Cooper
  0 siblings, 0 replies; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 18:31 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> In this patch we mechanically rename mem_event to vm_event. This patch
> introduces no logic changes to the code. Using the name vm_event better
> describes the intended use of this subsystem, which is not limited to memory
> events. It can be used for off-loading the decision making logic into helper
> applications when encountering various events during a VM's execution.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-13 16:33 ` [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
@ 2015-02-13 18:41   ` Andrew Cooper
  2015-02-17 11:56   ` Jan Beulich
  1 sibling, 0 replies; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 18:41 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> To avoid growing hvm.c these functions can be stored separately. Minor style
> changes are applied to the logic in the file.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-13 16:33 ` [PATCH V5 07/12] xen: Introduce monitor_op domctl Tamas K Lengyel
@ 2015-02-13 20:09   ` Andrew Cooper
  2015-02-17 14:02   ` Jan Beulich
  1 sibling, 0 replies; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 20:09 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:

> In preparation for allowing for introspecting ARM and PV domains the old
> control interface via the hvm_op hypercall is retired. A new control mechanism
> is introduced via the domctl hypercall: monitor_op.
>
> This patch aims to establish a base API on which future applications can build
> on.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

This is a very good step in the right direction!

> diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
> new file mode 100644
> index 0000000..9e807d1
> --- /dev/null
> +++ b/tools/libxc/xc_monitor.c
> @@ -0,0 +1,118 @@
> +/******************************************************************************
> + *
> + * xc_monitor.c
> + *
> + * Interface to VM event monitor
> + *
> + * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
> + */
> +
> +#include "xc_private.h"
> +
> +int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
> +                          unsigned int op, unsigned int sync,
> +                          unsigned int onchangeonly)

As an API review, I would suggest changing these all to bool (as they
are indeed booleans), and renaming 'op' to 'enable' (or equivalent) to
avoid giving the impression that it might accept an arbitrary subop of
"monitor_move_to_cr0".

> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index d951703..d52b175 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -432,13 +432,13 @@ int main(int argc, char *argv[])
>      }
>  
>      if ( int3 )
> -        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_sync);
> -    else
> -        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
> -    if ( rc < 0 )
>      {
> -        ERROR("Error %d setting int3 vm_event\n", rc);
> -        goto exit;
> +        rc = xc_monitor_software_breakpoint(xch, domain_id, 1);
> +        if ( rc < 0 )
> +        {
> +            ERROR("Error %d setting int3 vm_event\n", rc);

s/int3/breakpoint/

> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 2c4d0ff..4a05fb2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5785,23 +5785,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              case HVM_PARAM_ACPI_IOPORTS_LOCATION:
>                  rc = pmtimer_change_ioport(d, a.value);
>                  break;
> -            case HVM_PARAM_MEMORY_EVENT_CR0:
> -            case HVM_PARAM_MEMORY_EVENT_CR3:
> -            case HVM_PARAM_MEMORY_EVENT_CR4:
> -                if ( d == current->domain )
> -                    rc = -EPERM;
> -                break;
> -            case HVM_PARAM_MEMORY_EVENT_INT3:
> -            case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
> -            case HVM_PARAM_MEMORY_EVENT_MSR:
> -                if ( d == current->domain )
> -                {
> -                    rc = -EPERM;
> -                    break;
> -                }
> -                if ( a.value & HVMPME_onchangeonly )
> -                    rc = -EINVAL;
> -                break;

These case statements should remain, and unconditionally fail with
EPERM for guests (to maintain interface consistency), and fail with
something more appropriate for toolstacks (EOPNOTSUPP, ENXIO?)

Currently, removing these case statements will cause the HVM params to
become blanket read/write to everyone, including the guest.  (I should
really get around to fixing this.)
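
i.e. keep the cases but make the failure unconditional (untested sketch;
the exact toolstack-facing errno is open to debate):

            case HVM_PARAM_MEMORY_EVENT_CR0:
            case HVM_PARAM_MEMORY_EVENT_CR3:
            case HVM_PARAM_MEMORY_EVENT_CR4:
            case HVM_PARAM_MEMORY_EVENT_INT3:
            case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
            case HVM_PARAM_MEMORY_EVENT_MSR:
                /* Guests keep seeing EPERM; toolstacks learn the
                 * params have been retired. */
                rc = ( d == current->domain ) ? -EPERM : -EOPNOTSUPP;
                break;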

> diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
> new file mode 100644
> index 0000000..be4bb20
> --- /dev/null
> +++ b/xen/arch/x86/monitor.c
> @@ -0,0 +1,210 @@
> +/*
> + * arch/x86/monitor.c
> + *
> + * Architecture-specific monitor_op domctl handler.
> + *
> + * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public
> + * License v2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; if not, write to the
> + * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
> + * Boston, MA 021110-1307, USA.
> + */
> +
> +#include <xen/config.h>
> +#include <xen/sched.h>
> +#include <xen/mm.h>
> +#include <asm/domain.h>
> +
> +#define DISABLE_OPTION(option)              \
> +    do {                                    \
> +        if ( !option->enabled )             \
> +            return -EFAULT;                 \

EFAULT is very specifically used for signalling a bad virtual address. 
It is not an appropriate error code to use here.

Furthermore, hiding a conditional return in a macro is very antisocial.

What you should be probably be using is:

static void alter_option(struct domain *d, bool_t * opt_ptr, bool_t enable)
{
    domain_pause(d);
    *opt_ptr = enable;
    domain_unpause(d);
}

instead of these macros.


> +        domain_pause(d);                    \
> +        option->enabled = 0;                \
> +        domain_unpause(d);                  \
> +    } while (0)
> +
> +#define ENABLE_OPTION(option)               \
> +    do {                                    \
> +        domain_pause(d);                    \
> +        option->enabled = 1;                \
> +        domain_unpause(d);                  \
> +    } while (0)
> +
> +int monitor_domctl(struct xen_domctl_monitor_op *domctl, struct domain *d)

Convention would dictate that struct domain is the first parameter. 
Also, the monitor op pointer should probably be named 'mop' or something
else more specific than domctl.

> +{
> +    /*
> +     * At the moment only Intel HVM domains are supported. However, event
> +     * delivery could be extended to AMD and PV domains. See comments below.
> +     */
> +    if ( !is_hvm_domain(d) || !cpu_has_vmx)

Space before the closing bracket.

> +        return -ENOSYS;

EOPNOTSUPP is preferred for circumstantial restrictions like this,
especially for ones we hope to relax.

> +
> +    if ( domctl->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
> +         domctl->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
> +        return -EFAULT;

ENOSYS is the most appropriate for "no such subop".

> +
> +    switch ( domctl->subop )
> +    {
> +    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
> +    {
> +        /* Note: could be supported on PV domains. */

Every single case in this switch could be supported on PV domains.  I
wouldn't call some of them out specifically.

> +        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;
> +
> +        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
> +        {
> +            if ( options->enabled )
> +                return -EBUSY;
> +
> +            options->sync = domctl->u.mov_to_cr.sync;
> +            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
> +            ENABLE_OPTION(options);
> +        }
> +        else
> +        {
> +            DISABLE_OPTION(options);
> +        }
> +        break;

All of these case statements follow a similar pattern.  Can I suggest
something along the following lines:

At the top:

bool_t enable = (mop->op == XEN_DOMCTL_MONITOR_OP_ENABLE);

And in each case:

struct $FOO *options = &d->$BAR;

if ( !(options->enable ^ enable) )
    return -EEXIST;

if ( enable )
    $BAZ

alter_option(d, &options->enable, enable);


It is a good idea to fail any attempt to redundantly enable/disable
monitoring options, as it is a sign of a toolstack bug or two toolstack
components interfering with each other.
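
Filled in for the CR0 case, that might come out roughly as (untested;
'mop' and alter_option() as suggested above, field names as in the patch):

    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
    {
        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;

        /* Redundantly enabling/disabling indicates a toolstack bug. */
        if ( !(options->enabled ^ enable) )
            return -EEXIST;

        if ( enable )
        {
            options->sync = mop->u.mov_to_cr.sync;
            options->onchangeonly = mop->u.mov_to_cr.onchangeonly;
        }

        alter_option(d, &options->enabled, enable);
        break;
    }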

> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index 85f05a5..ae56cc4 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -241,6 +241,24 @@ struct time_scale {
>      u32 mul_frac;
>  };
>  
> +/************************************************/
> +/*            monitor event options             */
> +/************************************************/
> +struct mov_to_cr {
> +    uint8_t enabled;
> +    uint8_t sync;
> +    uint8_t onchangeonly;
> +};
> +
> +struct mov_to_msr {
> +    uint8_t enabled;
> +    uint8_t extended_capture;
> +};
> +
> +struct debug_event {
> +    uint8_t enabled;
> +};

I believe that all of these should be bool_t's instead.
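
i.e. (sketch):

struct mov_to_cr {
    bool_t enabled;
    bool_t sync;
    bool_t onchangeonly;
};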

> +
>  struct pv_domain
>  {
>      l1_pgentry_t **gdt_ldt_l1tab;
> @@ -335,6 +353,16 @@ struct arch_domain
>      unsigned long pirq_eoi_map_mfn;
>  
>      unsigned int psr_rmid; /* RMID assigned to the domain for CMT */
> +
> +    /* Monitor options */
> +    struct {
> +        struct mov_to_cr mov_to_cr0;
> +        struct mov_to_cr mov_to_cr3;
> +        struct mov_to_cr mov_to_cr4;
> +        struct mov_to_msr mov_to_msr;
> +        struct debug_event singlestep;
> +        struct debug_event software_breakpoint;
> +    } monitor_options;

I would name this struct simply 'monitor'.  {d,v}->arch.$FOO.$BAR.$BAZ
lines are already long enough.

>  } __cacheline_aligned;
>  
>  #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
>
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 9d4972a..0242914 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -804,7 +804,6 @@ struct xen_domctl_gdbsx_domstatus {
>  
>  #define XEN_VM_EVENT_MONITOR_ENABLE                           0
>  #define XEN_VM_EVENT_MONITOR_DISABLE                          1
> -#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
>  
>  /*
>   * Sharing ENOMEM helper.
> @@ -1002,6 +1001,53 @@ struct xen_domctl_psr_cmt_op {
>  typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
>  
> +/*  XEN_DOMCTL_MONITOR_*
> + *
> + * Enable/disable monitoring various VM events.
> + * This domctl configures what events will be reported to helper apps
> + * via the ring buffer "MONITOR". The ring has to be first enabled
> + * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR.
> + *
> + * NOTICE: mem_access events are also delivered via the "MONITOR" ring buffer;
> + * however, enabling/disabling those events is performed with the use of
> + * memory_op hypercalls!
> + */
> +#define XEN_DOMCTL_MONITOR_OP_ENABLE   0
> +#define XEN_DOMCTL_MONITOR_OP_DISABLE  1
> +
> +#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0            0
> +#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3            1
> +#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4            2
> +#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR            3
> +#define XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP            4
> +#define XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT   5
> +
> +struct xen_domctl_monitor_op {
> +    uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
> +    uint32_t subop; /* XEN_DOMCTL_MONITOR_SUBOP_* */

These are not really subops.  I would s/subop/event/ everywhere.

> +
> +    /*
> +     * Further options when issuing XEN_DOMCTL_MONITOR_OP_ENABLE.
> +     */
> +    union {
> +        struct {
> +            /* Pause vCPU until response */
> +            uint8_t sync;
> +            /* Send event only on a change of value */
> +            uint8_t onchangeonly;
> +            uint8_t _pad[6];

You don't need to explicitly pad.  This struct itself lives inside a
padded struct domctl union, and is safe to change under the protection
of the DOMCTL interface version.

> +        } mov_to_cr;
> +
> +        struct {
> +            /* Enable the capture of an extended set of MSRs */
> +            uint8_t extended_capture;
> +            uint8_t _pad[7];
> +        } mov_to_msr;
> +    } u;
> +};
> +typedef struct xen_domctl_monitor_op xen_domctl_monitor_op_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_monitor_op_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -1077,6 +1123,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setvnumainfo                  74
>  #define XEN_DOMCTL_psr_cmt_op                    75
>  #define XEN_DOMCTL_arm_configure_domain          76
> +#define XEN_DOMCTL_monitor_op                    77
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1142,6 +1189,7 @@ struct xen_domctl {
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          struct xen_domctl_vnuma             vnuma;
>          struct xen_domctl_psr_cmt_op        psr_cmt_op;
> +        struct xen_domctl_monitor_op        monitor_op;
>          uint8_t                             pad[128];
>      } u;
>  };
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index 6efcc0b..5de6a4b 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -162,21 +162,6 @@
>   */
>  #define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
>  
> -/* Enable blocking memory events, async or sync (pause vcpu until response) 
> - * onchangeonly indicates messages only on a change of value */
> -#define HVM_PARAM_MEMORY_EVENT_CR0          20
> -#define HVM_PARAM_MEMORY_EVENT_CR3          21
> -#define HVM_PARAM_MEMORY_EVENT_CR4          22
> -#define HVM_PARAM_MEMORY_EVENT_INT3         23
> -#define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
> -#define HVM_PARAM_MEMORY_EVENT_MSR          30
> -
> -#define HVMPME_MODE_MASK       (3 << 0)
> -#define HVMPME_mode_disabled   0
> -#define HVMPME_mode_async      1
> -#define HVMPME_mode_sync       2
> -#define HVMPME_onchangeonly    (1 << 2)
> -

I think these params need to stay (to avoid their accidental reuse) and
be identified explicitly as deprecated.
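
Perhaps something like (illustrative):

/* Deprecated by XEN_DOMCTL_monitor_op; do not reuse these values. */
#define HVM_PARAM_MEMORY_EVENT_CR0          20
#define HVM_PARAM_MEMORY_EVENT_CR3          21
#define HVM_PARAM_MEMORY_EVENT_CR4          22
#define HVM_PARAM_MEMORY_EVENT_INT3         23
#define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
#define HVM_PARAM_MEMORY_EVENT_MSR          30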

~Andrew

>  /* Boolean: Enable nestedhvm (hvm only) */
>  #define HVM_PARAM_NESTEDHVM    24
>  

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 16:33 ` [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
@ 2015-02-13 20:14   ` Andrew Cooper
  2015-02-13 22:48     ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 20:14 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> The flag is only used for debugging purposes, thus it should be only checked
> for in debug builds of Xen.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

What is the purpose of the dummy flag?  I would have thought it would be
more dangerous to accidentally process a dummy response in a non-debug Xen.

~Andrew

> ---
>  xen/arch/x86/mm/mem_sharing.c | 2 ++
>  xen/arch/x86/mm/p2m.c         | 2 ++
>  xen/common/mem_access.c       | 2 ++
>  3 files changed, 6 insertions(+)
>
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 4e5477a..0459544 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -606,8 +606,10 @@ int mem_sharing_sharing_resume(struct domain *d)
>              continue;
>          }
>  
> +#ifndef NDEBUG
>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>              continue;
> +#endif
>  
>          /* Validate the vcpu_id in the response. */
>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 5ce852e..68d57d7 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1312,8 +1312,10 @@ void p2m_mem_paging_resume(struct domain *d)
>              continue;
>          }
>  
> +#ifndef NDEBUG
>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>              continue;
> +#endif
>  
>          /* Validate the vcpu_id in the response. */
>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
> index a6d82d1..63f2b52 100644
> --- a/xen/common/mem_access.c
> +++ b/xen/common/mem_access.c
> @@ -44,8 +44,10 @@ void mem_access_resume(struct domain *d)
>              continue;
>          }
>  
> +#ifndef NDEBUG
>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>              continue;
> +#endif
>  
>          /* Validate the vcpu_id in the response. */
>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-13 16:33 ` [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
@ 2015-02-13 21:05   ` Andrew Cooper
  2015-02-13 23:00     ` Tamas K Lengyel
  2015-02-17 14:17   ` Jan Beulich
  1 sibling, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 21:05 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index f988291..f89361e 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -357,6 +357,67 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_even
>      return 1;
>  }
>  
> +/*
> + * Pull all responses from the given ring and unpause the corresponding vCPU
> + * if required. Based on the response type, here we can also call custom
> + * handlers.
> + *
> + * Note: responses are handled the same way regardless of which ring they
> + * arrive on.
> + */
> +void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
> +{
> +    vm_event_response_t rsp;
> +
> +    /* Pull all responses off the ring. */
> +    while ( vm_event_get_response(d, ved, &rsp) )
> +    {
> +        struct vcpu *v;
> +
> +        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
> +        {
> +            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!");
> +            continue;
> +        }
> +
> +#ifndef NDEBUG
> +        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
> +            continue;
> +#endif
> +
> +        /* Validate the vcpu_id in the response. */
> +        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
> +            continue;
> +
> +        v = d->vcpu[rsp.vcpu_id];
> +
> +        /*
> +         * In some cases the response type needs extra handling, so here
> +         * we call the appropriate handlers.
> +         */
> +        switch ( rsp.reason )
> +        {
> +
> +#ifdef HAS_MEM_ACCESS
> +        case VM_EVENT_REASON_MEM_ACCESS:
> +            mem_access_resume(v, &rsp);
> +            break;
> +#endif
> +
> +#ifdef HAS_MEM_PAGING
> +        case VM_EVENT_REASON_MEM_PAGING:
> +            p2m_mem_paging_resume(d, &rsp);
> +            break;
> +#endif
> +

You need a default clause which captures unknown/invalid responses, logs
a message for debugging purposes, and ceases any further processing.  It
is not sensible for an unknown response to unpause the vcpu.

This will require you to have a whitelist of known reasons which simply
break.
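
i.e. roughly (sketch):

        switch ( rsp.reason )
        {
#ifdef HAS_MEM_ACCESS
        case VM_EVENT_REASON_MEM_ACCESS:
            mem_access_resume(v, &rsp);
            break;
#endif
        /* ... further known reasons, each of which simply breaks ... */

        default:
            /* Unknown response: log it and do no further processing;
             * in particular, do not unpause the vcpu. */
            gdprintk(XENLOG_WARNING,
                     "vm_event: unhandled response reason %u\n", rsp.reason);
            continue;
        }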

~Andrew

> +        };
> +
> +        /* Unpause domain. */
> +        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
> +            vm_event_vcpu_unpause(v);
> +    }
> +}
> +
>  void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
>  {
>      vm_event_ring_lock(ved);
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-13 16:33 ` [PATCH V5 10/12] xen/vm_event: Relocate memop checks Tamas K Lengyel
@ 2015-02-13 21:23   ` Andrew Cooper
  2015-02-13 23:20     ` Tamas K Lengyel
  2015-02-17 14:25   ` Jan Beulich
  1 sibling, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 21:23 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> The memop handler function for paging/sharing responsible for calling XSM
> doesn't really have anything to do with vm_event, thus in this patch we
> relocate it into mem_paging_memop and mem_sharing_memop. This has already
> been the approach in mem_access_memop, so in this patch we just make it
> consistent.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
>  xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
>  xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
>  xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
>  xen/arch/x86/x86_64/mm.c          | 26 +++-----------
>  xen/common/vm_event.c             | 43 ----------------------
>  xen/include/asm-x86/mem_paging.h  |  7 +++-
>  xen/include/asm-x86/mem_sharing.h |  4 +--
>  xen/include/xen/vm_event.h        |  1 -
>  8 files changed, 97 insertions(+), 124 deletions(-)
>
> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
> index e63d8c1..4aee6b7 100644
> --- a/xen/arch/x86/mm/mem_paging.c
> +++ b/xen/arch/x86/mm/mem_paging.c
> @@ -21,28 +21,45 @@
>   */
>  
>  
> +#include <xen/guest_access.h>
>  #include <asm/p2m.h>
> -#include <xen/vm_event.h>
> +#include <xsm/xsm.h>

Order of includes.

>  
> -
> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
> +int mem_paging_memop(unsigned long cmd,

I don't believe cmd is a useful parameter to pass.  You know that its
value is XENMEM_paging_op by virtue of being in this function.

> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>  {
> -    int rc = -ENODEV;
> +    int rc;
> +    xen_mem_paging_op_t mpo;
> +    struct domain *d;
> +
> +    rc = -EFAULT;
> +    if ( copy_from_guest(&mpo, arg, 1) )
> +        return rc;
> +
> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
> +    if ( rc )
> +        return rc;
> +
> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
> +    if ( rc )
> +        return rc;
> +
> +    rc = -ENODEV;
>      if ( unlikely(!d->vm_event->paging.ring_page) )
>          return rc;
>  
> -    switch( mpo->op )
> +    switch( mpo.op )
>      {
>      case XENMEM_paging_op_nominate:
> -        rc = p2m_mem_paging_nominate(d, mpo->gfn);
> +        rc = p2m_mem_paging_nominate(d, mpo.gfn);
>          break;
>  
>      case XENMEM_paging_op_evict:
> -        rc = p2m_mem_paging_evict(d, mpo->gfn);
> +        rc = p2m_mem_paging_evict(d, mpo.gfn);
>          break;
>  
>      case XENMEM_paging_op_prep:
> -        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
> +        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
>          break;
>  
>      default:
> @@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>          break;
>      }
>  
> +    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
> +        rc = -EFAULT;

Do any of the paging ops need to be copied back?  Nothing appears to
write into mpo in this function.  (The original code looks to be overly
pessimistic).

> +
>      return rc;
>  }
>  
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 4959407..612ed89 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -28,6 +28,7 @@
>  #include <xen/grant_table.h>
>  #include <xen/sched.h>
>  #include <xen/rcupdate.h>
> +#include <xen/guest_access.h>
>  #include <xen/vm_event.h>
>  #include <asm/page.h>
>  #include <asm/string.h>
> @@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>  
> -int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
> +int mem_sharing_memop(unsigned long cmd,
> +                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>  {
> -    int rc = 0;
> +    int rc;
> +    xen_mem_sharing_op_t mso;
> +    struct domain *d;
> +
> +    rc = -EFAULT;
> +    if ( copy_from_guest(&mso, arg, 1) )
> +        return rc;
> +
> +    if ( mso.op == XENMEM_sharing_op_audit )
> +        return mem_sharing_audit();
> +
> +    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
> +    if ( rc )
> +        return rc;
>  
>      /* Only HAP is supported */
>      if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
>           return -ENODEV;
>  
> -    switch(mec->op)
> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
> +    if ( rc )
> +        return rc;
> +
> +    rc = -ENODEV;
> +    if ( unlikely(!d->vm_event->share.ring_page) )
> +        return rc;
> +
> +    switch(mso.op)

Style ( spaces )

> @@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>              break;
>      }
>  
> +    if ( !rc && __copy_to_guest(arg, &mso, 1) )
> +        return -EFAULT;

Only certain subops need to copy information back.  It is common to have
a function-level bool_t copyback which the relevant subops set.
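
i.e. the usual shape, roughly (untested sketch):

    bool_t copyback = 0;

    switch ( mso.op )
    {
    case XENMEM_sharing_op_nominate_gfn:
    {
        shr_handle_t handle;

        rc = mem_sharing_nominate_page(d, mso.u.nominate.u.gfn, 0, &handle);
        mso.u.nominate.handle = handle;
        copyback = 1;            /* the handle must reach the caller */
        break;
    }

    /* ... subops which produce no output leave copyback clear ... */
    }

    if ( !rc && copyback && __copy_to_guest(arg, &mso, 1) )
        rc = -EFAULT;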

~Andrew

> +
>      return rc;
>  }

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels
  2015-02-13 16:33 ` [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
@ 2015-02-13 21:25   ` Andrew Cooper
  0 siblings, 0 replies; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 21:25 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> The XSM label vm_event_op has been used to control the three memops
> controlling mem_access, mem_paging and mem_sharing. While these systems
> rely on vm_event, these are not vm_event operations themselves. Thus,
> in this patch we introduce three separate labels for each of these memops.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-13 16:33 ` [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
@ 2015-02-13 21:44   ` Andrew Cooper
  2015-02-13 23:10     ` Tamas K Lengyel
  2015-02-17 14:31   ` Jan Beulich
  1 sibling, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 21:44 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, stefano.stabellini, tim,
	steve, jbeulich, eddie.dong, andres, jun.nakajima, rshriram,
	keir, dgdegra, yanghy, ian.jackson

On 13/02/15 16:33, Tamas K Lengyel wrote:
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index 9c41f5d..334f60e 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -386,9 +386,8 @@ typedef struct xen_mem_paging_op xen_mem_paging_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
>  
>  #define XENMEM_access_op                    21
> -#define XENMEM_access_op_resume             0
> -#define XENMEM_access_op_set_access         1
> -#define XENMEM_access_op_get_access         2
> +#define XENMEM_access_op_set_access         0
> +#define XENMEM_access_op_get_access         1
>  
>  typedef enum {
>      XENMEM_access_n,
> @@ -439,12 +438,11 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
>  #define XENMEM_sharing_op_nominate_gfn      0
>  #define XENMEM_sharing_op_nominate_gref     1
>  #define XENMEM_sharing_op_share             2
> -#define XENMEM_sharing_op_resume            3
> -#define XENMEM_sharing_op_debug_gfn         4
> -#define XENMEM_sharing_op_debug_mfn         5
> -#define XENMEM_sharing_op_debug_gref        6
> -#define XENMEM_sharing_op_add_physmap       7
> -#define XENMEM_sharing_op_audit             8
> +#define XENMEM_sharing_op_debug_gfn         3
> +#define XENMEM_sharing_op_debug_mfn         4
> +#define XENMEM_sharing_op_debug_gref        5
> +#define XENMEM_sharing_op_add_physmap       6
> +#define XENMEM_sharing_op_audit             7

Hmm - I am not certain what our API/ABI constraints are in this case. 
MEMOPs are not toolstack exclusive so lack a formal interface version,
but these bits are inside an #ifdef __XEN_TOOLS__

If we work on the assumption that there are not currently any
out-of-tree tools trying to use this interface, then its probably ok to
break the ABI now and get it right, perhaps even including an ABI
version parameter in the op itself if we wish to be flexible going forwards.

I also notice that the structures have 32bit gfns baked into them, which
is going to need to be fixed at some point.

~Andrew

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 20:14   ` Andrew Cooper
@ 2015-02-13 22:48     ` Tamas K Lengyel
  2015-02-13 22:53       ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 22:48 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 9:14 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> The flag is only used for debugging purposes, thus it should be only checked
>> for in debug builds of Xen.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>
> What is the purpose of the dummy flag?  I would have thought it would be
> more dangerous to accidentally process a dummy response in a non-debug Xen.
>
> ~Andrew

I honestly have no idea what the real use-case for it was. It is a way
to signal to Xen to just remove the response from the ring without
doing anything else with it, so I figured it might be handy when
debugging a toolstack.

Tamas

>
>> ---
>>  xen/arch/x86/mm/mem_sharing.c | 2 ++
>>  xen/arch/x86/mm/p2m.c         | 2 ++
>>  xen/common/mem_access.c       | 2 ++
>>  3 files changed, 6 insertions(+)
>>
>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>> index 4e5477a..0459544 100644
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -606,8 +606,10 @@ int mem_sharing_sharing_resume(struct domain *d)
>>              continue;
>>          }
>>
>> +#ifndef NDEBUG
>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>              continue;
>> +#endif
>>
>>          /* Validate the vcpu_id in the response. */
>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>> index 5ce852e..68d57d7 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1312,8 +1312,10 @@ void p2m_mem_paging_resume(struct domain *d)
>>              continue;
>>          }
>>
>> +#ifndef NDEBUG
>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>              continue;
>> +#endif
>>
>>          /* Validate the vcpu_id in the response. */
>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
>> index a6d82d1..63f2b52 100644
>> --- a/xen/common/mem_access.c
>> +++ b/xen/common/mem_access.c
>> @@ -44,8 +44,10 @@ void mem_access_resume(struct domain *d)
>>              continue;
>>          }
>>
>> +#ifndef NDEBUG
>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>              continue;
>> +#endif
>>
>>          /* Validate the vcpu_id in the response. */
>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 22:48     ` Tamas K Lengyel
@ 2015-02-13 22:53       ` Tamas K Lengyel
  2015-02-13 23:00         ` Andrew Cooper
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 22:53 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 11:48 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> On Fri, Feb 13, 2015 at 9:14 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>> The flag is only used for debugging purposes, thus it should be only checked
>>> for in debug builds of Xen.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>
>> What is the purpose of the dummy flag?  I would have thought it would be
>> more dangerous to accidentally process a dummy response in a non-debug Xen.
>>
>> ~Andrew
>
> I honestly have no idea what the real use-case for it was. It is a way
> to signal to Xen to just remove the response from the ring without
> doing anything else with it, so I figured it might be handy when
> debugging a toolstack.
>
> Tamas

As for processing it on a non-debug Xen: I think it's safe as the
response would just be handled according to the other flags that are
set in the response, like unpausing the vCPU that triggered the event.
IMHO it's a wasteful check on production systems.

Tamas

>
>>
>>> ---
>>>  xen/arch/x86/mm/mem_sharing.c | 2 ++
>>>  xen/arch/x86/mm/p2m.c         | 2 ++
>>>  xen/common/mem_access.c       | 2 ++
>>>  3 files changed, 6 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>>> index 4e5477a..0459544 100644
>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>> @@ -606,8 +606,10 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>              continue;
>>>          }
>>>
>>> +#ifndef NDEBUG
>>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>>              continue;
>>> +#endif
>>>
>>>          /* Validate the vcpu_id in the response. */
>>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>>> index 5ce852e..68d57d7 100644
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -1312,8 +1312,10 @@ void p2m_mem_paging_resume(struct domain *d)
>>>              continue;
>>>          }
>>>
>>> +#ifndef NDEBUG
>>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>>              continue;
>>> +#endif
>>>
>>>          /* Validate the vcpu_id in the response. */
>>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>>> diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
>>> index a6d82d1..63f2b52 100644
>>> --- a/xen/common/mem_access.c
>>> +++ b/xen/common/mem_access.c
>>> @@ -44,8 +44,10 @@ void mem_access_resume(struct domain *d)
>>>              continue;
>>>          }
>>>
>>> +#ifndef NDEBUG
>>>          if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>>>              continue;
>>> +#endif
>>>
>>>          /* Validate the vcpu_id in the response. */
>>>          if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 22:53       ` Tamas K Lengyel
@ 2015-02-13 23:00         ` Andrew Cooper
  2015-02-13 23:02           ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-13 23:00 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On 13/02/2015 22:53, Tamas K Lengyel wrote:
> On Fri, Feb 13, 2015 at 11:48 PM, Tamas K Lengyel
> <tamas.lengyel@zentific.com> wrote:
>> On Fri, Feb 13, 2015 at 9:14 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>>> The flag is only used for debugging purposes, thus it should be only checked
>>>> for in debug builds of Xen.
>>>>
>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>> What is the purpose of the dummy flag?  I would have thought it would be
>>> more dangerous to accidentally process a dummy response in a non-debug Xen.
>>>
>>> ~Andrew
>> I honestly have no idea what the real use-case for it was. It is a way
>> to signal to Xen to just remove the response from the ring without
>> doing anything else with it, so I figured it might be handy when
>> debugging a toolstack.
>>
>> Tamas
> As for processing it on a non-debug Xen: I think it's safe as the
> response would just be handled according to the other flags that are
> set in the response, like unpausing the vCPU that triggered the event.
> IMHO it's a wasteful check on production systems.

If it is not useful then discard it completely (seeing as the entire
system is being overhauled).  I can't see any purpose in having a method
of telling Xen to ignore a request placed on the ring.  If the toolstack
wishes Xen not to do anything then it should not have put a request on
the ring in the first place.

~Andrew

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-13 21:05   ` Andrew Cooper
@ 2015-02-13 23:00     ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 23:00 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 10:05 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index f988291..f89361e 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -357,6 +357,67 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_even
>>      return 1;
>>  }
>>
>> +/*
>> + * Pull all responses from the given ring and unpause the corresponding vCPU
>> + * if required. Based on the response type, here we can also call custom
>> + * handlers.
>> + *
>> + * Note: responses are handled the same way regardless of which ring they
>> + * arrive on.
>> + */
>> +void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>> +{
>> +    vm_event_response_t rsp;
>> +
>> +    /* Pull all responses off the ring. */
>> +    while ( vm_event_get_response(d, ved, &rsp) )
>> +    {
>> +        struct vcpu *v;
>> +
>> +        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
>> +        {
>> +            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!");
>> +            continue;
>> +        }
>> +
>> +#ifndef NDEBUG
>> +        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>> +            continue;
>> +#endif
>> +
>> +        /* Validate the vcpu_id in the response. */
>> +        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>> +            continue;
>> +
>> +        v = d->vcpu[rsp.vcpu_id];
>> +
>> +        /*
>> +         * In some cases the response type needs extra handling, so here
>> +         * we call the appropriate handlers.
>> +         */
>> +        switch ( rsp.reason )
>> +        {
>> +
>> +#ifdef HAS_MEM_ACCESS
>> +        case VM_EVENT_REASON_MEM_ACCESS:
>> +            mem_access_resume(v, &rsp);
>> +            break;
>> +#endif
>> +
>> +#ifdef HAS_MEM_PAGING
>> +        case VM_EVENT_REASON_MEM_PAGING:
>> +            p2m_mem_paging_resume(d, &rsp);
>> +            break;
>> +#endif
>> +
>
> You need a default clause which captures unknown/invalid responses, logs
> a message for debugging purposes, and ceases any further processing.  It
> is not sensible for an unknown response to unpause the vcpu.

That's how it works today: there is no validation of the response fields
anywhere. I was also thinking of further checking whether the response
arrives on the appropriate ring - as it stands, arbitrary responses could
be sent on all three rings - but I think that is a bit of overkill.

>
> This will require you to have a whitelist of known reasons which simply
> break.

Sure, that can be done easily.
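
Something along these lines, presumably (untested sketch; the warning
text is made up):

    default:
        gdprintk(XENLOG_WARNING,
                 "vm_event: unhandled response reason %u\n", rsp.reason);
        continue;
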
Tamas

>
> ~Andrew
>
>> +        };
>> +
>> +        /* Unpause domain. */
>> +        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
>> +            vm_event_vcpu_unpause(v);
>> +    }
>> +}
>> +
>>  void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
>>  {
>>      vm_event_ring_lock(ved);
>>
>
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-13 23:00         ` Andrew Cooper
@ 2015-02-13 23:02           ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 23:02 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Sat, Feb 14, 2015 at 12:00 AM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/2015 22:53, Tamas K Lengyel wrote:
>> On Fri, Feb 13, 2015 at 11:48 PM, Tamas K Lengyel
>> <tamas.lengyel@zentific.com> wrote:
>>> On Fri, Feb 13, 2015 at 9:14 PM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>>>> The flag is only used for debugging purposes, thus it should be only checked
>>>>> for in debug builds of Xen.
>>>>>
>>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>>> What is the purpose of the dummy flag?  I would have thought it would be
>>>> more dangerous to accidentally process a dummy response in a non-debug Xen.
>>>>
>>>> ~Andrew
>>> I honestly have no idea what the real use-case for it was. It is a way
>>> to signal to Xen to just remove the response from the ring without
>>> doing anything else with it, so I figured it might be handy when
>>> debugging a toolstack.
>>>
>>> Tamas
>> As for processing it on a non-debug Xen: I think it's safe as the
>> response would just be handled according to the other flags that are
>> set in the response, like unpausing the vCPU that triggered the event.
>> IMHO it's a wasteful check on production systems.
>
> If it is not useful then discard it completely (seeing as the entire
> system is being overhauled).  I can't see any purpose in having a method
> of telling Xen to ignore a request placed on the ring.  If the toolstack
> wishes Xen not to do anything then it should not have put a request on
> the ring in the first place.
>
> ~Andrew

Ack.

Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-13 21:44   ` Andrew Cooper
@ 2015-02-13 23:10     ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 23:10 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 10:44 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> index 9c41f5d..334f60e 100644
>> --- a/xen/include/public/memory.h
>> +++ b/xen/include/public/memory.h
>> @@ -386,9 +386,8 @@ typedef struct xen_mem_paging_op xen_mem_paging_op_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
>>
>>  #define XENMEM_access_op                    21
>> -#define XENMEM_access_op_resume             0
>> -#define XENMEM_access_op_set_access         1
>> -#define XENMEM_access_op_get_access         2
>> +#define XENMEM_access_op_set_access         0
>> +#define XENMEM_access_op_get_access         1
>>
>>  typedef enum {
>>      XENMEM_access_n,
>> @@ -439,12 +438,11 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
>>  #define XENMEM_sharing_op_nominate_gfn      0
>>  #define XENMEM_sharing_op_nominate_gref     1
>>  #define XENMEM_sharing_op_share             2
>> -#define XENMEM_sharing_op_resume            3
>> -#define XENMEM_sharing_op_debug_gfn         4
>> -#define XENMEM_sharing_op_debug_mfn         5
>> -#define XENMEM_sharing_op_debug_gref        6
>> -#define XENMEM_sharing_op_add_physmap       7
>> -#define XENMEM_sharing_op_audit             8
>> +#define XENMEM_sharing_op_debug_gfn         3
>> +#define XENMEM_sharing_op_debug_mfn         4
>> +#define XENMEM_sharing_op_debug_gref        5
>> +#define XENMEM_sharing_op_add_physmap       6
>> +#define XENMEM_sharing_op_audit             7
>
> Hmm - I am not certain what our API/ABI constraints are in this case.
> MEMOPs are not toolstack exclusive so lack a formal interface version,
> but these bits are inside an #ifdef __XEN_TOOLS__
>
> If we work on the assumption that there are not currently any
> out-of-tree tools trying to use this interface, then it's probably ok to
> break the ABI now and get it right, perhaps even including an ABI
> version parameter in the op itself if we wish to be flexible going forwards.

I think there are some out-of-tree tools that make use of these.
Andres did imply a couple of years ago that he works with proprietary
software that uses the sharing system, but he couldn't really say much
about it.

>
> I also notice that the structures have 32bit gfns baked into them, which
> is going to need to be fixed at some point.

I think that was just me in the first patch in the series...

>
> ~Andrew
>

On a more general note, I'm not actually sure we need this resume
option on the domctl. Currently there are two methods that end up
signaling to Xen to process the responses from the ring - the
memops/domctl here, and event channels. The event channel method
seems sufficient on its own, so I'm not sure why these resume memops
existed in the first place. It might be best to just deprecate them
altogether.

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-13 21:23   ` Andrew Cooper
@ 2015-02-13 23:20     ` Tamas K Lengyel
  2015-02-13 23:24       ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 23:20 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Fri, Feb 13, 2015 at 10:23 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/15 16:33, Tamas K Lengyel wrote:
>> The memop handler function for paging/sharing responsible for calling XSM
>> doesn't really have anything to do with vm_event, thus in this patch we
>> relocate it into mem_paging_memop and mem_sharing_memop. This has already
>> been the approach in mem_access_memop, so in this patch we just make it
>> consistent.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>> ---
>>  xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
>>  xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
>>  xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
>>  xen/arch/x86/x86_64/mm.c          | 26 +++-----------
>>  xen/common/vm_event.c             | 43 ----------------------
>>  xen/include/asm-x86/mem_paging.h  |  7 +++-
>>  xen/include/asm-x86/mem_sharing.h |  4 +--
>>  xen/include/xen/vm_event.h        |  1 -
>>  8 files changed, 97 insertions(+), 124 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
>> index e63d8c1..4aee6b7 100644
>> --- a/xen/arch/x86/mm/mem_paging.c
>> +++ b/xen/arch/x86/mm/mem_paging.c
>> @@ -21,28 +21,45 @@
>>   */
>>
>>
>> +#include <xen/guest_access.h>
>>  #include <asm/p2m.h>
>> -#include <xen/vm_event.h>
>> +#include <xsm/xsm.h>
>
> Order of includes.
>
>>
>> -
>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>> +int mem_paging_memop(unsigned long cmd,
>
> I don't believe cmd is a useful parameter to pass.  You know that its
> value is XENMEM_paging_op by virtue of being in this function.
>
>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>>  {
>> -    int rc = -ENODEV;
>> +    int rc;
>> +    xen_mem_paging_op_t mpo;
>> +    struct domain *d;
>> +
>> +    rc = -EFAULT;
>> +    if ( copy_from_guest(&mpo, arg, 1) )
>> +        return rc;
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = -ENODEV;
>>      if ( unlikely(!d->vm_event->paging.ring_page) )
>>          return rc;
>>
>> -    switch( mpo->op )
>> +    switch( mpo.op )
>>      {
>>      case XENMEM_paging_op_nominate:
>> -        rc = p2m_mem_paging_nominate(d, mpo->gfn);
>> +        rc = p2m_mem_paging_nominate(d, mpo.gfn);
>>          break;
>>
>>      case XENMEM_paging_op_evict:
>> -        rc = p2m_mem_paging_evict(d, mpo->gfn);
>> +        rc = p2m_mem_paging_evict(d, mpo.gfn);
>>          break;
>>
>>      case XENMEM_paging_op_prep:
>> -        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
>> +        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
>>          break;
>>
>>      default:
>> @@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>          break;
>>      }
>>
>> +    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
>> +        rc = -EFAULT;
>
> Do any of the paging ops need to be copied back?  Nothing appears to
> write into mpo in this function.  (The original code looks to be overly
> pessimistic).

I'm not sure any of these actually copy back - I just tried to keep
as much of the existing flow intact as possible while sanitizing the
vm_event side of things. That is not to say mem_sharing and mem_paging
couldn't use more scrutiny and optimization; it's just a bit outside
the scope of this series.

>
>> +
>>      return rc;
>>  }
>>
>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>> index 4959407..612ed89 100644
>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -28,6 +28,7 @@
>>  #include <xen/grant_table.h>
>>  #include <xen/sched.h>
>>  #include <xen/rcupdate.h>
>> +#include <xen/guest_access.h>
>>  #include <xen/vm_event.h>
>>  #include <asm/page.h>
>>  #include <asm/string.h>
>> @@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
>>      return rc;
>>  }
>>
>> -int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>> +int mem_sharing_memop(unsigned long cmd,
>> +                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>>  {
>> -    int rc = 0;
>> +    int rc;
>> +    xen_mem_sharing_op_t mso;
>> +    struct domain *d;
>> +
>> +    rc = -EFAULT;
>> +    if ( copy_from_guest(&mso, arg, 1) )
>> +        return rc;
>> +
>> +    if ( mso.op == XENMEM_sharing_op_audit )
>> +        return mem_sharing_audit();
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
>> +    if ( rc )
>> +        return rc;
>>
>>      /* Only HAP is supported */
>>      if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
>>           return -ENODEV;
>>
>> -    switch(mec->op)
>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = -ENODEV;
>> +    if ( unlikely(!d->vm_event->share.ring_page) )
>> +        return rc;
>> +
>> +    switch(mso.op)
>
> Style ( spaces )

Ack.

>
>> @@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>              break;
>>      }
>>
>> +    if ( !rc && __copy_to_guest(arg, &mso, 1) )
>> +        return -EFAULT;
>
>> Only certain subops need to copy information back.  It is common to have
>> a function-level bool_t copyback which the relevant subops set.

If it's not a critical fix, I would rather just keep it as it was in
this series. It could be part of another cleanup series for mem_paging
and mem_sharing.

Thanks,
Tamas

>
> ~Andrew
>
>> +
>>      return rc;
>>  }
>
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-13 23:20     ` Tamas K Lengyel
@ 2015-02-13 23:24       ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-13 23:24 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Tim Deegan, Steven Maresca, xen-devel, Jan Beulich, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Sat, Feb 14, 2015 at 12:20 AM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> On Fri, Feb 13, 2015 at 10:23 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>> The memop handler function for paging/sharing responsible for calling XSM
>>> doesn't really have anything to do with vm_event, thus in this patch we
>>> relocate it into mem_paging_memop and mem_sharing_memop. This has already
>>> been the approach in mem_access_memop, so in this patch we just make it
>>> consistent.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>> ---
>>>  xen/arch/x86/mm/mem_paging.c      | 36 ++++++++++++++-----
>>>  xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
>>>  xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
>>>  xen/arch/x86/x86_64/mm.c          | 26 +++-----------
>>>  xen/common/vm_event.c             | 43 ----------------------
>>>  xen/include/asm-x86/mem_paging.h  |  7 +++-
>>>  xen/include/asm-x86/mem_sharing.h |  4 +--
>>>  xen/include/xen/vm_event.h        |  1 -
>>>  8 files changed, 97 insertions(+), 124 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
>>> index e63d8c1..4aee6b7 100644
>>> --- a/xen/arch/x86/mm/mem_paging.c
>>> +++ b/xen/arch/x86/mm/mem_paging.c
>>> @@ -21,28 +21,45 @@
>>>   */
>>>
>>>
>>> +#include <xen/guest_access.h>
>>>  #include <asm/p2m.h>
>>> -#include <xen/vm_event.h>
>>> +#include <xsm/xsm.h>
>>
>> Order of includes.
>>
>>>
>>> -
>>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>> +int mem_paging_memop(unsigned long cmd,
>>
>> I don't believe cmd is a useful parameter to pass.  You know that its
>> value is XENMEM_paging_op by virtue of being in this function.
>>
>>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>>>  {
>>> -    int rc = -ENODEV;
>>> +    int rc;
>>> +    xen_mem_paging_op_t mpo;
>>> +    struct domain *d;
>>> +
>>> +    rc = -EFAULT;
>>> +    if ( copy_from_guest(&mpo, arg, 1) )
>>> +        return rc;
>>> +
>>> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = -ENODEV;
>>>      if ( unlikely(!d->vm_event->paging.ring_page) )
>>>          return rc;
>>>
>>> -    switch( mpo->op )
>>> +    switch( mpo.op )
>>>      {
>>>      case XENMEM_paging_op_nominate:
>>> -        rc = p2m_mem_paging_nominate(d, mpo->gfn);
>>> +        rc = p2m_mem_paging_nominate(d, mpo.gfn);
>>>          break;
>>>
>>>      case XENMEM_paging_op_evict:
>>> -        rc = p2m_mem_paging_evict(d, mpo->gfn);
>>> +        rc = p2m_mem_paging_evict(d, mpo.gfn);
>>>          break;
>>>
>>>      case XENMEM_paging_op_prep:
>>> -        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
>>> +        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
>>>          break;
>>>
>>>      default:
>>> @@ -50,6 +67,9 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>>          break;
>>>      }
>>>
>>> +    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
>>> +        rc = -EFAULT;
>>
>> Do any of the paging ops need to be copied back?  Nothing appears to
>> write into mpo in this function.  (The original code looks to be overly
>> pessimistic).
>
> I'm not sure any of these actually copy back - I just tried to keep
> as much of the existing flow intact as possible while sanitizing the
> vm_event side of things. That is not to say mem_sharing and mem_paging
> couldn't use more scrutiny and optimization; it's just a bit outside
> the scope of this series.
>
>>
>>> +
>>>      return rc;
>>>  }
>>>
>>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
>>> index 4959407..612ed89 100644
>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>> @@ -28,6 +28,7 @@
>>>  #include <xen/grant_table.h>
>>>  #include <xen/sched.h>
>>>  #include <xen/rcupdate.h>
>>> +#include <xen/guest_access.h>
>>>  #include <xen/vm_event.h>
>>>  #include <asm/page.h>
>>>  #include <asm/string.h>
>>> @@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
>>>      return rc;
>>>  }
>>>
>>> -int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>> +int mem_sharing_memop(unsigned long cmd,
>>> +                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>>>  {
>>> -    int rc = 0;
>>> +    int rc;
>>> +    xen_mem_sharing_op_t mso;
>>> +    struct domain *d;
>>> +
>>> +    rc = -EFAULT;
>>> +    if ( copy_from_guest(&mso, arg, 1) )
>>> +        return rc;
>>> +
>>> +    if ( mso.op == XENMEM_sharing_op_audit )
>>> +        return mem_sharing_audit();
>>> +
>>> +    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
>>> +    if ( rc )
>>> +        return rc;
>>>
>>>      /* Only HAP is supported */
>>>      if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
>>>           return -ENODEV;
>>>
>>> -    switch(mec->op)
>>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    rc = -ENODEV;
>>> +    if ( unlikely(!d->vm_event->share.ring_page) )
>>> +        return rc;
>>> +
>>> +    switch(mso.op)
>>
>> Style ( spaces )
>
> Ack.
>
>>
>>> @@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>>>              break;
>>>      }
>>>
>>> +    if ( !rc && __copy_to_guest(arg, &mso, 1) )
>>> +        return -EFAULT;
>>
>> Only certain subops need to copy information back.  It is common to have
>> a function-level bool_t copyback which the relevant subops set.
>
> If it's not a critical fix, I would rather just keep it as it was in
> this series. It could be part of another cleanup series for mem_paging
> and mem_sharing.
>
> Thanks,
> Tamas

Never mind, it's quite obvious that only XENMEM_paging_op_prep copies
stuff into the buffer, so it's a no-brainer. I'll add the fix.
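
Roughly this shape, following Andrew's copyback suggestion (untested
sketch):

    bool_t copyback = 0;
    ...
    case XENMEM_paging_op_prep:
        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
        copyback = 1;
        break;
    ...
    if ( !rc && copyback && __copy_to_guest(arg, &mpo, 1) )
        rc = -EFAULT;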

Thanks,
Tamas

>
>>
>> ~Andrew
>>
>>> +
>>>      return rc;
>>>  }
>>
>>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures
  2015-02-13 18:09       ` Andrew Cooper
  2015-02-13 18:13         ` Tamas K Lengyel
@ 2015-02-17 11:48         ` Jan Beulich
  1 sibling, 0 replies; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 11:48 UTC (permalink / raw)
  To: Andrew Cooper, Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, IanCampbell, Razvan Cojocaru,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Steven Maresca, AndresLagar-Cavilla, Jun Nakajima, rshriram,
	Keir Fraser, DanielDe Graaf, yanghy

>>> On 13.02.15 at 19:09, <andrew.cooper3@citrix.com> wrote:
> On 13/02/15 18:03, Tamas K Lengyel wrote:
>> On Fri, Feb 13, 2015 at 6:23 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>>> On 13/02/15 16:33, Tamas K Lengyel wrote:
>>>> v5: Style fixes
>>>>     Convert gfn to uint32_t
>>> It is perfectly possible to have guests with more memory than is covered
>>> by 44 bits, or PV guests whose frames reside above the 44bit boundary.
>>> All gfn values should be 64bits wide.
>>>
>>> ~Andrew
>> Internally Xen handles all gfns as unsigned longs, so depending on
>> the compiler they may be only 32 bits wide. If gfns must be larger
>> than 32 bits, then we should use unsigned long longs within Xen.
> 
> x86_32 Xen support was ripped out a while ago.  For the time being all
> unsigned longs are 64bit.
> 
> With the arm32/64 split, there is a slow move in the direction of
> paddr_t rather than unsigned long.

But (hopefully) not for GFNs.

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-13 16:33 ` [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
  2015-02-13 18:41   ` Andrew Cooper
@ 2015-02-17 11:56   ` Jan Beulich
  2015-02-17 17:37     ` Tamas K Lengyel
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 11:56 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> +static void hvm_event_cr(uint32_t reason, unsigned long value,
> +                                unsigned long old)
> +{
> +    vm_event_request_t req = {
> +        .reason = reason,
> +        .vcpu_id = current->vcpu_id,
> +        .u.mov_to_cr.new_value = value,
> +        .u.mov_to_cr.old_value = old
> +    };
> +    uint64_t parameters = 0;
> +
> +    switch(reason)

Coding style. Also I continue to think using switch() here rather than
having the caller pass both VM_EVENT_* and HVM_PARAM_* is ugly/
inefficient (even if the compiler may be able to sort this out for you).
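
I.e. have the caller pass the matching HVM_PARAM_* alongside the
VM_EVENT_* reason, e.g. (sketch; the exact signature is hypothetical):

    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0,
                 HVM_PARAM_MEMORY_EVENT_CR0, value, old);

rather than deriving one from the other here.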

> +    {
> +    case VM_EVENT_REASON_MOV_TO_CR0:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
> +        break;
> +    case VM_EVENT_REASON_MOV_TO_CR3:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
> +        break;
> +    case VM_EVENT_REASON_MOV_TO_CR4:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
> +        break;
> +    };

In any event, if you stay with the current model, latch current
(used four times) into a local variable.
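
I.e. something like:

    struct vcpu *curr = current;
    ...
    parameters = curr->domain->arch.hvm_domain
                  .params[HVM_PARAM_MEMORY_EVENT_CR0];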

> +void hvm_event_msr(unsigned int msr, uint64_t value)
> +{
> +    struct vcpu *curr = current;
> +    vm_event_request_t req = {
> +        .reason = VM_EVENT_REASON_MOV_TO_MSR,
> +        .vcpu_id = curr->vcpu_id,
> +        .u.mov_to_msr.msr = msr,
> +        .u.mov_to_msr.value = value,
> +    };
> +    uint64_t params = current->domain->arch.hvm_domain

Why "current" when you have "curr"?

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-13 16:33 ` [PATCH V5 07/12] xen: Introduce monitor_op domctl Tamas K Lengyel
  2015-02-13 20:09   ` Andrew Cooper
@ 2015-02-17 14:02   ` Jan Beulich
  2015-02-17 18:20     ` Tamas K Lengyel
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 14:02 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -411,7 +411,8 @@ static int hvmemul_virtual_to_linear(
>       * being triggered for repeated writes to a whole page.
>       */
>      *reps = min_t(unsigned long, *reps,
> -                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
> +                  unlikely(current->domain->arch
> +                            .monitor_options.mov_to_msr.extended_capture)
>                             ? 1 : 4096);

This makes no sense (especially not to a reader in a year or two):
There's no connection between mov-to-msr and the repeat count
capping done here. Please wrap this in a suitably named is_...() or
has_...() or introspection_enabled() helper, with a comment at its
definition site making the connection explicit.

> @@ -79,7 +76,7 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
>          return rc;
>      };
>  
> -    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
> +    if ( sync )

Looks like this function parameter wants to be bool_t.

> +#define DISABLE_OPTION(option)              \
> +    do {                                    \
> +        if ( !option->enabled )             \
> +            return -EFAULT;                 \
> +        domain_pause(d);                    \
> +        option->enabled = 0;                \
> +        domain_unpause(d);                  \
> +    } while (0)
> +
> +#define ENABLE_OPTION(option)               \
> +    do {                                    \
> +        domain_pause(d);                    \
> +        option->enabled = 1;                \
> +        domain_unpause(d);                  \
> +    } while (0)

If you decide not to follow Andrew's suggestion, then at the very
least these macros need to be properly parenthesized, have all
their parameters made explicit, and all their arguments evaluated
exactly once.
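
E.g. (sketch only, with the domain made an explicit parameter):

#define ENABLE_OPTION(d, opt)               \
    do {                                    \
        struct domain *d_ = (d);            \
        typeof(opt) opt_ = (opt);           \
        domain_pause(d_);                   \
        opt_->enabled = 1;                  \
        domain_unpause(d_);                 \
    } while (0)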

> +int monitor_domctl(struct xen_domctl_monitor_op *domctl, struct domain *d)
> +{
> +    /*
> +     * At the moment only Intel HVM domains are supported. However, event
> +     * delivery could be extended to AMD and PV domains. See comments below.
> +     */
> +    if ( !is_hvm_domain(d) || !cpu_has_vmx)
> +        return -ENOSYS;
> +
> +    if ( domctl->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
> +         domctl->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
> +        return -EFAULT;
> +
> +    switch ( domctl->subop )
> +    {
> +    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
> +    {
> +        /* Note: could be supported on PV domains. */
> +        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;

As three of the cases need a variable of this type, please consider
declaring it in the scope of switch() itself.

> +
> +        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
> +        {
> +            if ( options->enabled )
> +                return -EBUSY;
> +
> +            options->sync = domctl->u.mov_to_cr.sync;
> +            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
> +            ENABLE_OPTION(options);
> +        }
> +        else
> +        {
> +            DISABLE_OPTION(options);
> +        }

Pointless braces.

> +        break;
> +    }
> +
> +    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3:

Here you properly add a blank line between cases - please do so
too further down.

> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -621,32 +621,17 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>          switch( vec->op )
>          {
>          case XEN_VM_EVENT_MONITOR_ENABLE:
> -        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
> -        {
> -            rc = -ENODEV;
> -            if ( !p2m_vm_event_sanity_check(d) )
> -                break;

Contrary to what the revision log says, this doesn't get moved but
dropped, which seems wrong (or would need mentioning in the description).

>              rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
> -                                    HVM_PARAM_MONITOR_RING_PFN,
> -                                    mem_access_notification);
> -
> -            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
> -                 && !rc )
> -                p2m_setup_introspection(d);
> -
> -        }
> -        break;
> +                                 HVM_PARAM_MONITOR_RING_PFN,
> +                                 mem_access_notification);

I don't see what changes for these two lines. If it's indentation, it
should be done right when the code gets added.

> +            break;

This indentation change should not be necessary - the braces you
remove here shouldn't get added in the first place when the new
file gets introduced. Same further down.

> --- /dev/null
> +++ b/xen/include/asm-arm/monitor.h
> @@ -0,0 +1,13 @@
> +#ifndef __ASM_ARM_MONITOR_H__
> +#define __ASM_ARM_MONITOR_H__
> +
> +#include <xen/config.h>

This is pointless and should be dropped (I seem to recall having made
the same statement before on an earlier version - please apply such
to all of the patches in a series).

> +#include <public/domctl.h>
> +
> +static inline
> +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)

The includes above are insufficient for the types used, or you should
forward declare _both_ structs and not have any includes.

> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -241,6 +241,24 @@ struct time_scale {
>      u32 mul_frac;
>  };
>  
> +/************************************************/
> +/*            monitor event options             */
> +/************************************************/
> +struct mov_to_cr {
> +    uint8_t enabled;
> +    uint8_t sync;
> +    uint8_t onchangeonly;
> +};
> +
> +struct mov_to_msr {
> +    uint8_t enabled;
> +    uint8_t extended_capture;
> +};
> +
> +struct debug_event {
> +    uint8_t enabled;
> +};

These are all internal structures - is there anything wrong with using
bitfields here?
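
I.e. something like:

struct mov_to_cr {
    unsigned int enabled:1;
    unsigned int sync:1;
    unsigned int onchangeonly:1;
};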

> --- /dev/null
> +++ b/xen/include/asm-x86/monitor.h
> @@ -0,0 +1,30 @@
> +/*
> + * include/asm-x86/monitor.h
> + *
> + * Architecture-specific monitor_op domctl handler.
> + *
> + * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public
> + * License v2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; if not, write to the
> + * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
> + * Boston, MA 021110-1307, USA.
> + */
> +
> +#ifndef __ASM_X86_MONITOR_H__
> +#define __ASM_X86_MONITOR_H__
> +
> +#include <public/domctl.h>
> +
> +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d);

Same as for the ARM variant.

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-13 16:33 ` [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
  2015-02-13 21:05   ` Andrew Cooper
@ 2015-02-17 14:17   ` Jan Beulich
  2015-02-17 18:30     ` Tamas K Lengyel
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 14:17 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> @@ -1293,56 +1293,30 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
>   *
>   * If the gfn was dropped the vcpu needs to be unpaused.
>   */
> -void p2m_mem_paging_resume(struct domain *d)
> +
> +void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    vm_event_response_t rsp;
>      p2m_type_t p2mt;
>      p2m_access_t a;
>      mfn_t mfn;
>  
> -    /* Pull all responses off the ring */
> -    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
> +    /* Fix p2m entry if the page was not dropped */
> +    if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
>      {
> -        struct vcpu *v;
> -
> -        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
> -        {
> -            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
> -            continue;
> -        }
> -
> -#ifndef NDEBUG
> -        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
> -            continue;
> -#endif
> -
> -        /* Validate the vcpu_id in the response. */
> -        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
> -            continue;
> -
> -        v = d->vcpu[rsp.vcpu_id];
> -
> -        /* Fix p2m entry if the page was not dropped */
> -        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
> +        unsigned long gfn = rsp->u.mem_access.gfn;
> +        gfn_lock(p2m, gfn, 0);

Once again - blank line between declarations and statements please.

> +        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
> +        /* Allow only pages which were prepared properly, or pages which
> +         * were nominated but not evicted */

Coding style.

> @@ -48,15 +46,15 @@ bool_t vm_event_check_ring(struct vm_event_domain *med);
>   * succeed.
>   */
>  int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
> -                            bool_t allow_sleep);
> +                          bool_t allow_sleep);
>  static inline int vm_event_claim_slot(struct domain *d,
> -                                        struct vm_event_domain *med)
> +                                      struct vm_event_domain *med)
>  {
>      return __vm_event_claim_slot(d, med, 1);
>  }
>  
>  static inline int vm_event_claim_slot_nosleep(struct domain *d,
> -                                        struct vm_event_domain *med)
> +                                              struct vm_event_domain *med)

All these whitespace changes here and further down don't really
belong in this patch - please again get this right when adding the
code.

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-13 16:33 ` [PATCH V5 10/12] xen/vm_event: Relocate memop checks Tamas K Lengyel
  2015-02-13 21:23   ` Andrew Cooper
@ 2015-02-17 14:25   ` Jan Beulich
  2015-02-17 18:47     ` Tamas K Lengyel
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 14:25 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
> +int mem_paging_memop(unsigned long cmd,
> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>  {
> -    int rc = -ENODEV;
> +    int rc;
> +    xen_mem_paging_op_t mpo;
> +    struct domain *d;
> +
> +    rc = -EFAULT;
> +    if ( copy_from_guest(&mpo, arg, 1) )
> +        return rc;

Please don't make things more complicated than they need to be:
You only use the -EFAULT once here, so no reason to assign it to
rc up front.
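
I.e. simply:

    if ( copy_from_guest(&mpo, arg, 1) )
        return -EFAULT;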

> +
> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
> +    if ( rc )
> +        return rc;
> +
> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
> +    if ( rc )

There's an RCU lock you take right before this, which you now fail
to drop here and below.

> +        return rc;
> +
> +    rc = -ENODEV;
>      if ( unlikely(!d->vm_event->paging.ring_page) )
>          return rc;

Same comment as for the -EFAULT above.
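
Presumably the error paths after rcu_lock_live_remote_domain_by_id()
want to funnel through a common exit, along the lines of (sketch only):

    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
    if ( rc )
        goto out;
    ...
 out:
    rcu_unlock_domain(d);
    return rc;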

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-13 16:33 ` [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  2015-02-13 21:44   ` Andrew Cooper
@ 2015-02-17 14:31   ` Jan Beulich
  2015-02-17 18:32     ` Tamas K Lengyel
  1 sibling, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-17 14:31 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> @@ -611,13 +611,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>          }
>          break;
>  
> -        case XEN_VM_EVENT_PAGING_DISABLE:
> +        case XEN_VM_EVENT_DISABLE:
>          {
>              if ( ved->ring_page )
>                  rc = vm_event_disable(d, ved);
>          }
>          break;
>  
> +        case XEN_VM_EVENT_RESUME:
> +        {
> +            if ( ved->ring_page )
> +                vm_event_resume(d, ved);
> +            else
> +                rc = -ENODEV;
> +        }
> +        break;

Stray braces again.

I also find it confusing that the same set of changes repeats three
times here - is that an indication of a problem with an earlier patch?

Jan

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-17 11:56   ` Jan Beulich
@ 2015-02-17 17:37     ` Tamas K Lengyel
  2015-02-18  9:07       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 17:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 12:56 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>> +static void hvm_event_cr(uint32_t reason, unsigned long value,
>> +                                unsigned long old)
>> +{
>> +    vm_event_request_t req = {
>> +        .reason = reason,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.mov_to_cr.new_value = value,
>> +        .u.mov_to_cr.old_value = old
>> +    };
>> +    uint64_t parameters = 0;
>> +
>> +    switch(reason)
>
> Coding style. Also I continue to think using switch() here rather than
> having the caller pass both VM_EVENT_* and HVM_PARAM_* is ugly/
> inefficient (even if the compiler may be able to sort this out for you).

It's getting retired in the series so there isn't much point in
tweaking it here.

>> +    {
>> +    case VM_EVENT_REASON_MOV_TO_CR0:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
>> +        break;
>> +    case VM_EVENT_REASON_MOV_TO_CR3:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
>> +        break;
>> +    case VM_EVENT_REASON_MOV_TO_CR4:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
>> +        break;
>> +    };
>
> In any event, if you stay with the current model, latch current
> (used four times) into local variable.

Ack.

>
>> +void hvm_event_msr(unsigned int msr, uint64_t value)
>> +{
>> +    struct vcpu *curr = current;
>> +    vm_event_request_t req = {
>> +        .reason = VM_EVENT_REASON_MOV_TO_MSR,
>> +        .vcpu_id = curr->vcpu_id,
>> +        .u.mov_to_msr.msr = msr,
>> +        .u.mov_to_msr.value = value,
>> +    };
>> +    uint64_t params = current->domain->arch.hvm_domain
>
> Why "current" when you have "curr"?

Just forgot to update it.

>
> Jan

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-17 14:02   ` Jan Beulich
@ 2015-02-17 18:20     ` Tamas K Lengyel
  2015-02-17 18:37       ` Andrew Cooper
                         ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 3:02 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -411,7 +411,8 @@ static int hvmemul_virtual_to_linear(
>>       * being triggered for repeated writes to a whole page.
>>       */
>>      *reps = min_t(unsigned long, *reps,
>> -                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
>> +                  unlikely(current->domain->arch
>> +                            .monitor_options.mov_to_msr.extended_capture)
>>                             ? 1 : 4096);
>
> This makes no sense (especially not to a reader in a year or two):
> There's no connection between mov-to-msr and the repeat count
> capping done here. Please wrap this in a suitably named is_...() or
> has_...() or introspection_enabled() helper, with a comment at its
> definition site making the connection explicit.

It took me a while to understand what "introspection_enabled" actually
represents: all it really does at the moment is enable the
interception of an extended set of MSRs. If anything, that is a bad
variable name. Since "introspection_enabled" is removed in this
series, here I just updated the variable to the one that holds the
same information. I don't actually know what the code here does, as I
didn't touch it. If this indeed has no connection to mov-to-msr, it
should have its own option field with a name actually describing what
it does. Maybe Razvan has some more information on what is going on
here and whether a separate variable needs to be introduced for
whatever was just latched onto "introspection_enabled".
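
If it does stay tied to this option, a helper along the lines Jan
suggests would be easy enough (name hypothetical):

static inline bool_t introspection_extended_capture(const struct domain *d)
{
    /* Cap the emulation repeat count while extended MSR capture is on. */
    return d->arch.monitor_options.mov_to_msr.extended_capture;
}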

>
>> @@ -79,7 +76,7 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
>>          return rc;
>>      };
>>
>> -    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
>> +    if ( sync )
>
> Looks like this function parameter wants to be bool_t.
>
>> +#define DISABLE_OPTION(option)              \
>> +    do {                                    \
>> +        if ( !option->enabled )             \
>> +            return -EFAULT;                 \
>> +        domain_pause(d);                    \
>> +        option->enabled = 0;                \
>> +        domain_unpause(d);                  \
>> +    } while (0)
>> +
>> +#define ENABLE_OPTION(option)               \
>> +    do {                                    \
>> +        domain_pause(d);                    \
>> +        option->enabled = 1;                \
>> +        domain_unpause(d);                  \
>> +    } while (0)
>
> If you decide not to follow Andrew's suggestion, then at the very
> least these macros need to be properly parenthesized, have all
> their parameters made explicit, and all their arguments evaluated
> exactly once.

I'm going with Andrew's suggestions.

>
>> +int monitor_domctl(struct xen_domctl_monitor_op *domctl, struct domain *d)
>> +{
>> +    /*
>> +     * At the moment only Intel HVM domains are supported. However, event
>> +     * delivery could be extended to AMD and PV domains. See comments below.
>> +     */
>> +    if ( !is_hvm_domain(d) || !cpu_has_vmx)
>> +        return -ENOSYS;
>> +
>> +    if ( domctl->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
>> +         domctl->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
>> +        return -EFAULT;
>> +
>> +    switch ( domctl->subop )
>> +    {
>> +    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
>> +    {
>> +        /* Note: could be supported on PV domains. */
>> +        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;
>
> As three of the cases need a variable of this type, please consider
> declaring it in the scope of switch() itself.
>
>> +
>> +        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
>> +        {
>> +            if ( options->enabled )
>> +                return -EBUSY;
>> +
>> +            options->sync = domctl->u.mov_to_cr.sync;
>> +            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
>> +            ENABLE_OPTION(options);
>> +        }
>> +        else
>> +        {
>> +            DISABLE_OPTION(options);
>> +        }
>
> Pointless braces.
>
>> +        break;
>> +    }
>> +
>> +    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3:
>
> Here you properly add a blank line between cases - please do so
> too further down.
>
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -621,32 +621,17 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>>          switch( vec->op )
>>          {
>>          case XEN_VM_EVENT_MONITOR_ENABLE:
>> -        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
>> -        {
>> -            rc = -ENODEV;
>> -            if ( !p2m_vm_event_sanity_check(d) )
>> -                break;
>
> Other than the revision log says, this doesn't get moved but dropped,
> which seems wrong (or would need mentioning in the description).

The declaration of the check as a separate function is dropped. The
check itself is moved into the domctl handler.

>>              rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
>> -                                    HVM_PARAM_MONITOR_RING_PFN,
>> -                                    mem_access_notification);
>> -
>> -            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
>> -                 && !rc )
>> -                p2m_setup_introspection(d);
>> -
>> -        }
>> -        break;
>> +                                 HVM_PARAM_MONITOR_RING_PFN,
>> +                                 mem_access_notification);
>
> I don't see what changes for these two lines. If it's indentation, it
> should be done right when the code gets added.

Indentation can't be fixed in the code addition as it breaks git -M:
the diff reverts to the old format where it just removes the whole file
and adds the new one. I think it's a waste to add a whole separate
patch just to fix indentation, so I just fix it here.

>> +            break;
>
> This indentation change should not be necessary - the braces you
> remove here shouldn't get added in the first place when the new
> file gets introduced. Same further down.

See my previous comment.

>
>> --- /dev/null
>> +++ b/xen/include/asm-arm/monitor.h
>> @@ -0,0 +1,13 @@
>> +#ifndef __ASM_ARM_MONITOR_H__
>> +#define __ASM_ARM_MONITOR_H__
>> +
>> +#include <xen/config.h>
>
> This is pointless and should be dropped (I seem to recall having made
> the same statement before on an earlier version - please apply such
> to all of the patches in a series).

Ack.

>
>> +#include <public/domctl.h>
>> +
>> +static inline
>> +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)
>
> The includes above are insufficient for the types used, or you should
> forward declare _both_ structs and not have any includes.

Just including sched.h additionally should be enough IMHO.

>
>> --- a/xen/include/asm-x86/domain.h
>> +++ b/xen/include/asm-x86/domain.h
>> @@ -241,6 +241,24 @@ struct time_scale {
>>      u32 mul_frac;
>>  };
>>
>> +/************************************************/
>> +/*            monitor event options             */
>> +/************************************************/
>> +struct mov_to_cr {
>> +    uint8_t enabled;
>> +    uint8_t sync;
>> +    uint8_t onchangeonly;
>> +};
>> +
>> +struct mov_to_msr {
>> +    uint8_t enabled;
>> +    uint8_t extended_capture;
>> +};
>> +
>> +struct debug_event {
>> +    uint8_t enabled;
>> +};
>
> These are all internal structures - is there anything wrong with using
> bitfields here?

The use of bitfields is not good performance-wise AFAIK. Would there
be any benefit that would offset that?

>
>> --- /dev/null
>> +++ b/xen/include/asm-x86/monitor.h
>> @@ -0,0 +1,30 @@
>> +/*
>> + * include/asm-x86/monitor.h
>> + *
>> + * Architecture-specific monitor_op domctl handler.
>> + *
>> + * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public
>> + * License v2 as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> + * General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public
>> + * License along with this program; if not, write to the
>> + * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
>> + * Boston, MA 021110-1307, USA.
>> + */
>> +
>> +#ifndef __ASM_X86_MONITOR_H__
>> +#define __ASM_X86_MONITOR_H__
>> +
>> +#include <public/domctl.h>
>> +
>> +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d);
>
> Same as for the ARM variant.

Ack.

>
> Jan

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-17 14:17   ` Jan Beulich
@ 2015-02-17 18:30     ` Tamas K Lengyel
  2015-02-17 18:34       ` Andrew Cooper
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 3:17 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>> @@ -1293,56 +1293,30 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
>>   *
>>   * If the gfn was dropped the vcpu needs to be unpaused.
>>   */
>> -void p2m_mem_paging_resume(struct domain *d)
>> +
>> +void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>>  {
>>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> -    vm_event_response_t rsp;
>>      p2m_type_t p2mt;
>>      p2m_access_t a;
>>      mfn_t mfn;
>>
>> -    /* Pull all responses off the ring */
>> -    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
>> +    /* Fix p2m entry if the page was not dropped */
>> +    if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
>>      {
>> -        struct vcpu *v;
>> -
>> -        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
>> -        {
>> -            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
>> -            continue;
>> -        }
>> -
>> -#ifndef NDEBUG
>> -        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
>> -            continue;
>> -#endif
>> -
>> -        /* Validate the vcpu_id in the response. */
>> -        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
>> -            continue;
>> -
>> -        v = d->vcpu[rsp.vcpu_id];
>> -
>> -        /* Fix p2m entry if the page was not dropped */
>> -        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
>> +        unsigned long gfn = rsp->u.mem_access.gfn;
>> +        gfn_lock(p2m, gfn, 0);
>
> Once again - blank line between declarations and statements please.
>
>> +        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
>> +        /* Allow only pages which were prepared properly, or pages which
>> +         * were nominated but not evicted */
>
> Coding style.
>
>> @@ -48,15 +46,15 @@ bool_t vm_event_check_ring(struct vm_event_domain *med);
>>   * succeed.
>>   */
>>  int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
>> -                            bool_t allow_sleep);
>> +                          bool_t allow_sleep);
>>  static inline int vm_event_claim_slot(struct domain *d,
>> -                                        struct vm_event_domain *med)
>> +                                      struct vm_event_domain *med)
>>  {
>>      return __vm_event_claim_slot(d, med, 1);
>>  }
>>
>>  static inline int vm_event_claim_slot_nosleep(struct domain *d,
>> -                                        struct vm_event_domain *med)
>> +                                              struct vm_event_domain *med)
>
> All these whitespace changes here and further down don't really
> belong in this patch - please again get this right when adding the
> code.

Same issue I mentioned in the other patch: git -M can't track the
files if indentation is fixed as part of the renaming process. As I
end up touching all the files that have minor style issues like
this in the series as a result of the renaming, I fix them as I go
along. If that stretches the rules, I will need to add a whole new
separate patch just for indentation fixing.

>
> Jan

Thanks,
Tamas

* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-17 14:31   ` Jan Beulich
@ 2015-02-17 18:32     ` Tamas K Lengyel
  2015-02-18  9:31       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 3:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>> @@ -611,13 +611,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>>          }
>>          break;
>>
>> -        case XEN_VM_EVENT_PAGING_DISABLE:
>> +        case XEN_VM_EVENT_DISABLE:
>>          {
>>              if ( ved->ring_page )
>>                  rc = vm_event_disable(d, ved);
>>          }
>>          break;
>>
>> +        case XEN_VM_EVENT_RESUME:
>> +        {
>> +            if ( ved->ring_page )
>> +                vm_event_resume(d, ved);
>> +            else
>> +                rc = -ENODEV;
>> +        }
>> +        break;
>
> Stray braces again.

Ack.

>
> I also find it confusing that the same set of changes repeats three
> times here - is that an indication of a problem with an earlier patch?

No it's not. There are three rings vm_event can use, thus three rings
that can be resumed.

>
> Jan


Thanks,
Tamas

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-17 18:30     ` Tamas K Lengyel
@ 2015-02-17 18:34       ` Andrew Cooper
  2015-02-17 18:49         ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-17 18:34 UTC (permalink / raw)
  To: Tamas K Lengyel, Jan Beulich
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Steven Maresca, Tim Deegan, xen-devel, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On 17/02/15 18:30, Tamas K Lengyel wrote:
>> All these whitespace changes here and further down don't really
>> belong in this patch - please again get this right when adding the
>> code.
> Same issue I mentioned in the other patch: git -M can't track the
> files if indentation is fixed as part of the renaming process. As I
> end up touching all the files that have minor style issues like
> this in the series as a result of the renaming, I fix them as I go
> along. If that stretches the rules, I will need to add a whole new
> separate patch just for indentation fixing.

Separating the two is best for review.

One patch which git diff -M says is identical for moving the file, and
one patch which git diff -w says is identical for whitespace fixes.

It makes it trivial to confirm that there is no functional change involved.
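
Concretely, the review check is then just (illustrative commands -
adjust the revisions to taste):

    # Rename-only patch: -M should show pure renames, no content change.
    git diff -M --stat HEAD~1 HEAD

    # Whitespace-only patch: -w should produce an empty diff.
    git diff -w HEAD~1 HEAD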

~Andrew

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-17 18:20     ` Tamas K Lengyel
@ 2015-02-17 18:37       ` Andrew Cooper
  2015-02-17 18:48         ` Tamas K Lengyel
  2015-02-17 22:59       ` Tamas K Lengyel
  2015-02-18  9:26       ` Jan Beulich
  2 siblings, 1 reply; 61+ messages in thread
From: Andrew Cooper @ 2015-02-17 18:37 UTC (permalink / raw)
  To: Tamas K Lengyel, Jan Beulich
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Steven Maresca, Tim Deegan, xen-devel, Dong, Eddie,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On 17/02/15 18:20, Tamas K Lengyel wrote:
> +/************************************************/
> +/*            monitor event options             */
> +/************************************************/
> +struct mov_to_cr {
> +    uint8_t enabled;
> +    uint8_t sync;
> +    uint8_t onchangeonly;
> +};
> +
> +struct mov_to_msr {
> +    uint8_t enabled;
> +    uint8_t extended_capture;
> +};
> +
> +struct debug_event {
> +    uint8_t enabled;
> +};
>> These are all internal structures - is there anything wrong with using
>> bitfields here?
> The use of bitfields is not good performance-wise AFAIK. Would there
> be any benefit that would offset that?

struct vcpu lives in a 4k page.  We avoid needlessly bloating it.

However bitfields will not work with my suggestion as you cannot
construct a pointer to 'enabled' if enabled is a bit.
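
For example (sketch):

    struct mov_to_cr {
        unsigned int enabled : 1;
    } opts;

    unsigned int *p = &opts.enabled;  /* error: cannot take the address of a bit-field */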

~Andrew

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-17 14:25   ` Jan Beulich
@ 2015-02-17 18:47     ` Tamas K Lengyel
  2015-02-18  9:29       ` Jan Beulich
  0 siblings, 1 reply; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:47 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 3:25 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>> +int mem_paging_memop(unsigned long cmd,
>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>>  {
>> -    int rc = -ENODEV;
>> +    int rc;
>> +    xen_mem_paging_op_t mpo;
>> +    struct domain *d;
>> +
>> +    rc = -EFAULT;
>> +    if ( copy_from_guest(&mpo, arg, 1) )
>> +        return rc;
>
> Please don't make things more complicated than they need to be:
> You only use the -EFAULT once here, so no reason to assign it to
> rc up front.

This return will become a "goto out;" where the RCU lock is dropped as well.

>
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
>> +    if ( rc )
>
> There's an RCU lock you take right before this, which you now fail
> to drop here and below.

Ack.

>
>> +        return rc;
>> +
>> +    rc = -ENODEV;
>>      if ( unlikely(!d->vm_event->paging.ring_page) )
>>          return rc;
>
> Same comment as for the -EFAULT above.

Same as above, will be a goto out.

>
> Jan

Thanks,
Tamas

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-17 18:37       ` Andrew Cooper
@ 2015-02-17 18:48         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:48 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Jun Nakajima, Tim Deegan, Steven Maresca, xen-devel, Dong, Eddie,
	Andres Lagar-Cavilla, Jan Beulich, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Tue, Feb 17, 2015 at 7:37 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 17/02/15 18:20, Tamas K Lengyel wrote:
>> +/************************************************/
>> +/*            monitor event options             */
>> +/************************************************/
>> +struct mov_to_cr {
>> +    uint8_t enabled;
>> +    uint8_t sync;
>> +    uint8_t onchangeonly;
>> +};
>> +
>> +struct mov_to_msr {
>> +    uint8_t enabled;
>> +    uint8_t extended_capture;
>> +};
>> +
>> +struct debug_event {
>> +    uint8_t enabled;
>> +};
>>> These are all internal structures - is there anything wrong with using
>>> bitfields here?
>> The use of bitfields is not good performance-wise AFAIK. Would there
>> be any benefit that would offset that?
>
> struct vcpu lives in a 4k page.  We avoid needlessly bloating it.
>
> However bitfields will not work with my suggestion as you cannot
> construct a pointer to 'enabled' if enabled is a bit.
>
> ~Andrew

That too. I would rather keep these as bool_t's if possible.

Thanks,
Tamas

* Re: [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-17 18:34       ` Andrew Cooper
@ 2015-02-17 18:49         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 18:49 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Tian, Kevin, wei.liu2, Ian Campbell, Stefano Stabellini,
	Jun Nakajima, Tim Deegan, Steven Maresca, xen-devel, Dong, Eddie,
	Andres Lagar-Cavilla, Jan Beulich, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy, Ian Jackson

On Tue, Feb 17, 2015 at 7:34 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 17/02/15 18:30, Tamas K Lengyel wrote:
>>> All these whitespace changes here and further down don't really
>>> belong in this patch - please again get this right when adding the
>>> code.
>> Same issue I mentioned in the other patch: git -M can't track the
>> files if indentation is fixed as part of the renaming process. As I
>> end up touching all the files that have minor style issues like
>> this in the series as a result of the renaming, I fix them as I go
>> along. If that stretches the rules, I will need to add a whole new
>> separate patch just for indentation fixing.
>
> Separating the two is best for review.
>
> One patch which git diff -M says is identical for moving the file, and
> one patch which git diff -w says is identical for whitespace fixes.
>
> It makes it trivial to confirm that there is no functional change involved.
>
> ~Andrew

Alright, that makes sense, will do so.

Thanks,
Tamas

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-17 18:20     ` Tamas K Lengyel
  2015-02-17 18:37       ` Andrew Cooper
@ 2015-02-17 22:59       ` Tamas K Lengyel
  2015-02-18  9:26       ` Jan Beulich
  2 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-17 22:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Tue, Feb 17, 2015 at 7:20 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 17, 2015 at 3:02 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -411,7 +411,8 @@ static int hvmemul_virtual_to_linear(
>>>       * being triggered for repeated writes to a whole page.
>>>       */
>>>      *reps = min_t(unsigned long, *reps,
>>> -                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
>>> +                  unlikely(current->domain->arch
>>> +                            .monitor_options.mov_to_msr.extended_capture)
>>>                             ? 1 : 4096);
>>
>> This makes no sense (especially not to a reader in a year or two):
>> There's no connection between mov-to-msr and the repeat count
>> capping done here. Please wrap this in a suitably named is_...() or
>> has_...() or introspection_enabled() helper, with a comment at its
>> definition site making the connection explicit.
>
> It took me a while to understand what "introspection_enabled" actually
> represents and all it really does at the moment is enabling the
> interception of an extended set of MSRs. If anything, that is a bad
> variable name. Since in this series "introspection_enabled" is
> removed, here I just updated the variable to the one that holds the
> same information. I don't actually know what the code here does as I
> didn't touch it. If this indeed has no connection to mov-to-msr, it
> should have its own option field with its own name actually describing
> what it does. Maybe Razvan has some more information on what is going
> on here and whether another variable needs to be introduced for what
> was just latched onto "introspection_enabled".

So I looked into this a bit more: this path is used when a mem_event
response to a mem_access event has the emulation flag set. So it is
indeed an extra option that was latched onto introspection_enabled,
and we will need a separate field to determine whether this particular
feature is enabled.

Now I understand why Razvan was wondering about "umbrella" options
going forward. IMHO this highlights the problem with umbrella options
- it becomes really hard to understand what the option actually does.
Especially without proper documentation.
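
If it does need its own knob, I'm thinking of something along these
lines (hypothetical names, just to illustrate the direction):

    /* Dedicated option instead of piggybacking on mov-to-msr: */
    struct mem_access_emulation {
        uint8_t enabled;
    };

    /*
     * Helper making the connection explicit, as requested: the repeat
     * count is capped to 1 while mem_access responses may ask for
     * emulation of the faulting instruction.
     */
    static inline bool_t mem_access_emulation_enabled(const struct domain *d)
    {
        return d->arch.monitor_options.mem_access_emulation.enabled;
    }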

Tamas

* Re: [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-17 17:37     ` Tamas K Lengyel
@ 2015-02-18  9:07       ` Jan Beulich
  2015-02-18 12:09         ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-18  9:07 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

>>> On 17.02.15 at 18:37, <tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 17, 2015 at 12:56 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>>> +static void hvm_event_cr(uint32_t reason, unsigned long value,
>>> +                                unsigned long old)
>>> +{
>>> +    vm_event_request_t req = {
>>> +        .reason = reason,
>>> +        .vcpu_id = current->vcpu_id,
>>> +        .u.mov_to_cr.new_value = value,
>>> +        .u.mov_to_cr.old_value = old
>>> +    };
>>> +    uint64_t parameters = 0;
>>> +
>>> +    switch(reason)
>>
>> Coding style. Also I continue to think using switch() here rather than
>> having the caller pass both VM_EVENT_* and HVM_PARAM_* is ugly/
>> inefficient (even if the compiler may be able to sort this out for you).
> 
> It's getting retired in the series so there isn't much point in
> tweaking it here.

I realized that looking at later patches in this series, but then you
could similarly argue that the other requested adjustments are
unnecessary. But please always keep in mind that a series may get
applied partially. And of course ideally a series wouldn't introduce
code just for a later patch to delete it again - i.e. if you already
find you want/need to do that, then please accept that coding
style remarks are still being made and considered relevant.

Jan

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-17 18:20     ` Tamas K Lengyel
  2015-02-17 18:37       ` Andrew Cooper
  2015-02-17 22:59       ` Tamas K Lengyel
@ 2015-02-18  9:26       ` Jan Beulich
  2015-02-18 12:11         ` Tamas K Lengyel
  2 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-18  9:26 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

>>> On 17.02.15 at 19:20, <tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 17, 2015 at 3:02 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>>>              rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
>>> -                                    HVM_PARAM_MONITOR_RING_PFN,
>>> -                                    mem_access_notification);
>>> -
>>> -            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
>>> -                 && !rc )
>>> -                p2m_setup_introspection(d);
>>> -
>>> -        }
>>> -        break;
>>> +                                 HVM_PARAM_MONITOR_RING_PFN,
>>> +                                 mem_access_notification);
>>
>> I don't see what changes for these two lines. If it's indentation, it
>> should be done right when the code gets added.
> 
> Indentation can't be fixed in the code addition as it breaks git -M.
> It reverts to the old format where it just removes the whole file and
adds the new one. I think it's a waste to add a whole new separate
> patch just to fix indentations so I just fix it here.

Considering that indentation is broken already prior to your
series, this is perhaps acceptable. But at least if indentation
was correct before the rename, it should be afterwards. You'd
have to make use of git's -B option to control the resulting diff.

>>> +#include <public/domctl.h>
>>> +
>>> +static inline
>>> +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)
>>
>> The includes above are insufficient for the types used, or you should
>> forward declare _both_ structs and not have any includes.
> 
> Just including sched.h additionally should be enough IMHO.

Resulting in a huge pile of further dependencies. Our goal really
should be to get the dependencies down, not up - improving build
time. Hence forward declarations are very likely the better choice
here.
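
I.e. (sketch):

    /* asm-x86/monitor.h - no heavyweight includes needed: */
    struct domain;
    struct xen_domctl_monitor_op;

    int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d);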

>>> --- a/xen/include/asm-x86/domain.h
>>> +++ b/xen/include/asm-x86/domain.h
>>> @@ -241,6 +241,24 @@ struct time_scale {
>>>      u32 mul_frac;
>>>  };
>>>
>>> +/************************************************/
>>> +/*            monitor event options             */
>>> +/************************************************/
>>> +struct mov_to_cr {
>>> +    uint8_t enabled;
>>> +    uint8_t sync;
>>> +    uint8_t onchangeonly;
>>> +};
>>> +
>>> +struct mov_to_msr {
>>> +    uint8_t enabled;
>>> +    uint8_t extended_capture;
>>> +};
>>> +
>>> +struct debug_event {
>>> +    uint8_t enabled;
>>> +};
>>
>> These are all internal structures - is there anything wrong with using
>> bitfields here?
> 
> The use of bitfields is not good performance-wise AFAIK. Would there
> be any benefit that would offset that?

As Andrew already said - total structure size. Also I'm pretty
convinced "or $<val>, <mem>" as well as "and $~<val>,<mem>"
aren't much worse than "mov $<val>,<mem>", and the code
writing these fields shouldn't be performance critical. And
"test $<val>,<mem>" and "cmp $<val>,<mem>" (as well as
their split up alternatives, should the compiler elect to do so)
ought to be equal performance wise.

Jan

* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-17 18:47     ` Tamas K Lengyel
@ 2015-02-18  9:29       ` Jan Beulich
  2015-02-18 12:13         ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-18  9:29 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

>>> On 17.02.15 at 19:47, <tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 17, 2015 at 3:25 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>>> -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
>>> +int mem_paging_memop(unsigned long cmd,
>>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
>>>  {
>>> -    int rc = -ENODEV;
>>> +    int rc;
>>> +    xen_mem_paging_op_t mpo;
>>> +    struct domain *d;
>>> +
>>> +    rc = -EFAULT;
>>> +    if ( copy_from_guest(&mpo, arg, 1) )
>>> +        return rc;
>>
>> Please don't make things more complicated than they need to be:
>> You only use the -EFAULT once here, so no reason to assign it to
>> rc up front.
> 
> This return will become a "goto out;" where the RCU lock is dropped as well.

How so? You didn't take the RCU lock yet (which is even visible
from the rest of the hunk above).

Jan

* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-17 18:32     ` Tamas K Lengyel
@ 2015-02-18  9:31       ` Jan Beulich
  2015-02-18 12:18         ` Tamas K Lengyel
  0 siblings, 1 reply; 61+ messages in thread
From: Jan Beulich @ 2015-02-18  9:31 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

>>> On 17.02.15 at 19:32, <tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 17, 2015 at 3:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
>>> @@ -611,13 +611,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>>>          }
>>>          break;
>>>
>>> -        case XEN_VM_EVENT_PAGING_DISABLE:
>>> +        case XEN_VM_EVENT_DISABLE:
>>>          {
>>>              if ( ved->ring_page )
>>>                  rc = vm_event_disable(d, ved);
>>>          }
>>>          break;
>>>
>>> +        case XEN_VM_EVENT_RESUME:
>>> +        {
>>> +            if ( ved->ring_page )
>>> +                vm_event_resume(d, ved);
>>> +            else
>>> +                rc = -ENODEV;
>>> +        }
>>> +        break;
>>
>> Stray braces again.
> 
> Ack.
> 
>>
>> I also find it confusing that the same set of changes repeats three
>> times here - is that an indication of a problem with an earlier patch?
> 
> No it's not. There are three rings vm_event can use, thus three rings
> that can be resumed.

But if the code ends up being almost identical, this loudly calls for
consolidation into e.g. a helper function.
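
E.g. (sketch - the name is of course up to you):

    static int vm_event_ring_resume(struct domain *d,
                                    struct vm_event_domain *ved)
    {
        if ( !ved->ring_page )
            return -ENODEV;

        vm_event_resume(d, ved);
        return 0;
    }

leaving each XEN_VM_EVENT_RESUME case as a single call.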

Jan

* Re: [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions
  2015-02-18  9:07       ` Jan Beulich
@ 2015-02-18 12:09         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-18 12:09 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Wed Feb 18 2015 10:07:29 AM CET, Jan Beulich <JBeulich@suse.com> wrote:

> > > > On 17.02.15 at 18:37, <tamas.lengyel@zentific.com> wrote:
> > On Tue, Feb 17, 2015 at 12:56 PM, Jan Beulich <JBeulich@suse.com>
> > wrote:
> > > > > > On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> > > > +static void hvm_event_cr(uint32_t reason, unsigned long value,
> > > > +                                unsigned long old)
> > > > +{
> > > > +    vm_event_request_t req = {
> > > > +        .reason = reason,
> > > > +        .vcpu_id = current->vcpu_id,
> > > > +        .u.mov_to_cr.new_value = value,
> > > > +        .u.mov_to_cr.old_value = old
> > > > +    };
> > > > +    uint64_t parameters = 0;
> > > > +
> > > > +    switch(reason)
> > > 
> > > Coding style. Also I continue to think using switch() here rather
> > > than having the caller pass both VM_EVENT_* and HVM_PARAM_* is ugly/
> > > inefficient (even if the compiler may be able to sort this out for
> > > you).
> > 
> > It's getting retired in the series so there isn't much point in
> > tweaking it here.
> 
> I realized that looking at later patches in this series, but then you
> could similarly argue that the other requested adjustments are
> unnecessary. But please always keep in mind that series may get
> applied partially. And of course ideally a series wouldn't introduce
> code just for a later patch to delete it again - i.e. if you already
> find you want/need to do that, then please accept that coding
> style remarks are still being made and considered relevant.
> 
> Jan

I do consider coding style issues relevant. Here we were talking about optimizing a method that is being retired in the series anyway, but it is your call. In v6 I already made the changes you requested.

Tamas

* Re: [PATCH V5 07/12] xen: Introduce monitor_op domctl
  2015-02-18  9:26       ` Jan Beulich
@ 2015-02-18 12:11         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-18 12:11 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Wed Feb 18 2015 10:26:00 AM CET, Jan Beulich <JBeulich@suse.com> wrote:

> > > > On 17.02.15 at 19:20, <tamas.lengyel@zentific.com> wrote:
> > On Tue, Feb 17, 2015 at 3:02 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > > > > > On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> > > >              rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
> > > > -                                    HVM_PARAM_MONITOR_RING_PFN,
> > > > -                                    mem_access_notification);
> > > > -
> > > > -            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
> > > > -                 && !rc )
> > > > -                p2m_setup_introspection(d);
> > > > -
> > > > -        }
> > > > -        break;
> > > > +                                 HVM_PARAM_MONITOR_RING_PFN,
> > > > +                                 mem_access_notification);
> > > 
> > > I don't see what changes for these two lines. If it's indentation, it
> > > should be done right when the code gets added.
> > 
> > Indentation can't be fixed in the code addition as it breaks git -M.
> > It reverts to the old format where it just removes the whole file and
> > adds the new one. I think it's a waste to add a whole new separate
> > patch just to fix indentations so I just fix it here.
> 
> Considering that indentation is broken already prior to your
> series, this is perhaps acceptable. But at least if indentation
> was correct before the rename, it should be afterwards. You'd
> have to make use of git's -B option to control the resulting diff.
> 
> > > > +#include <public/domctl.h>
> > > > +
> > > > +static inline
> > > > +int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)
> > > 
> > > The includes above are insufficient for the types used, or you should
> > > forward declare _both_ structs and not have any includes.
> > 
> > Just including sched.h additionally should be enough IMHO.
> 
> Resulting in a huge pile of further dependencies. Our goal really
> should be to get the dependencies down, not up - improving build
> time. Hence forward declarations are very likely the better choice
> here.
> 
> > > > --- a/xen/include/asm-x86/domain.h
> > > > +++ b/xen/include/asm-x86/domain.h
> > > > @@ -241,6 +241,24 @@ struct time_scale {
> > > >      u32 mul_frac;
> > > >  };
> > > >
> > > > +/************************************************/
> > > > +/*            monitor event options             */
> > > > +/************************************************/
> > > > +struct mov_to_cr {
> > > > +    uint8_t enabled;
> > > > +    uint8_t sync;
> > > > +    uint8_t onchangeonly;
> > > > +};
> > > > +
> > > > +struct mov_to_msr {
> > > > +    uint8_t enabled;
> > > > +    uint8_t extended_capture;
> > > > +};
> > > > +
> > > > +struct debug_event {
> > > > +    uint8_t enabled;
> > > > +};
> > > 
> > > These are all internal structures - is there anything wrong with
> > > using bitfields here?
> > 
> > The use of bitfields is not good performance-wise AFAIK. Would there
> > be any benefit that would offset that?
> 
> As Andrew already said - total structure size. Also I'm pretty
> convinced "or $<val>, <mem>" as well as "and $~<val>,<mem>"
> aren't much worse than "mov $<val>,<mem>", and the code
> writing these fields shouldn't be performance critical. And
> "test $<val>,<mem>" and "cmp $<val>,<mem>" (as well as
> their split up alternatives, should the compiler elect to do so)
> ought to be equal performance wise.
> 
> Jan

OK, I'll switch to bitfields and adjust the patch accordingly.

Tamas


* Re: [PATCH V5 10/12] xen/vm_event: Relocate memop checks
  2015-02-18  9:29       ` Jan Beulich
@ 2015-02-18 12:13         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-18 12:13 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Wed Feb 18 2015 10:29:40 AM CET, Jan Beulich <JBeulich@suse.com> wrote:

> > > > On 17.02.15 at 19:47, <tamas.lengyel@zentific.com> wrote:
> > On Tue, Feb 17, 2015 at 3:25 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > > > > > On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> > > > -int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
> > > > +int mem_paging_memop(unsigned long cmd,
> > > > +                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
> > > >  {
> > > > -    int rc = -ENODEV;
> > > > +    int rc;
> > > > +    xen_mem_paging_op_t mpo;
> > > > +    struct domain *d;
> > > > +
> > > > +    rc = -EFAULT;
> > > > +    if ( copy_from_guest(&mpo, arg, 1) )
> > > > +        return rc;
> > > 
> > > Please don't make things more complicated than they need to be:
> > > You only use the -EFAULT once here, so no reason to assign it to
> > > rc up front.
> > 
> > This return will become a "goto out;" where the RCU lock is dropped as
> > well.
> 
> How so? You didn't take the RCU lock yet (which is even visible
> from the rest of the hunk above).
> 
> Jan

Sorry, I was just replying mechanically, as most returns here turn into "goto out"s. Ack.
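
For v6 the function will thus take the usual shape (sketch, not the
final code):

    int mem_paging_memop(unsigned long cmd,
                         XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
    {
        int rc;
        xen_mem_paging_op_t mpo;
        struct domain *d;

        /* No locks held yet - a plain return is fine here. */
        if ( copy_from_guest(&mpo, arg, 1) )
            return -EFAULT;

        rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
        if ( rc )
            return rc;

        rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
        if ( rc )
            goto out;

        rc = -ENODEV;
        if ( unlikely(!d->vm_event->paging.ring_page) )
            goto out;

        /* ... dispatch the actual paging op ... */

     out:
        rcu_unlock_domain(d);
        return rc;
    }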

Tamas


* Re: [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-18  9:31       ` Jan Beulich
@ 2015-02-18 12:18         ` Tamas K Lengyel
  0 siblings, 0 replies; 61+ messages in thread
From: Tamas K Lengyel @ 2015-02-18 12:18 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Steven Maresca,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Andres Lagar-Cavilla, Jun Nakajima, rshriram, Keir Fraser,
	Daniel De Graaf, yanghy

On Wed Feb 18 2015 10:31:06 AM CET, Jan Beulich <JBeulich@suse.com> wrote:

> > > > On 17.02.15 at 19:32, <tamas.lengyel@zentific.com> wrote:
> > On Tue, Feb 17, 2015 at 3:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > > > > > On 13.02.15 at 17:33, <tamas.lengyel@zentific.com> wrote:
> > > > @@ -611,13 +611,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
> > > >          }
> > > >          break;
> > > >
> > > > -        case XEN_VM_EVENT_PAGING_DISABLE:
> > > > +        case XEN_VM_EVENT_DISABLE:
> > > >          {
> > > >              if ( ved->ring_page )
> > > >                  rc = vm_event_disable(d, ved);
> > > >          }
> > > >          break;
> > > >
> > > > +        case XEN_VM_EVENT_RESUME:
> > > > +        {
> > > > +            if ( ved->ring_page )
> > > > +                vm_event_resume(d, ved);
> > > > +            else
> > > > +                rc = -ENODEV;
> > > > +        }
> > > > +        break;
> > > 
> > > Stray braces again.
> > 
> > Ack.
> > 
> > > 
> > > I also find it confusing that the same set of changes repeats three
> > > times here - is that an indication of a problem with an earlier
> > > patch?
> > 
> > No it's not. There are three rings vm_event can use, thus three rings
> > that can be resumed.
> 
> But if the code ends up being almost identical, this loudly calls for
> consolidation into e.g. a helper function.
> 
> Jan

That could be done, but considering we are talking about only a couple of lines of code, I'm not sure it would improve readability by much.

I think the question I raised earlier - whether we need the resume option in the domctl to begin with - is what needs discussion. IMHO the event channel method is enough, so maybe I'll just turn this patch into one deprecating the resume options in the memops.

Thanks,
Tamas

end of thread, other threads:[~2015-02-18 12:18 UTC | newest]

Thread overview: 61+ messages
2015-02-13 16:33 [PATCH V5 00/12] xen: Clean-up of mem_event subsystem Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 01/12] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
2015-02-13 17:23   ` Andrew Cooper
2015-02-13 18:03     ` Tamas K Lengyel
2015-02-13 18:09       ` Andrew Cooper
2015-02-13 18:13         ` Tamas K Lengyel
2015-02-17 11:48         ` Jan Beulich
2015-02-13 16:33 ` [PATCH V5 02/12] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
2015-02-13 17:53   ` Andrew Cooper
2015-02-13 18:06     ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 03/12] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
2015-02-13 18:17   ` Andrew Cooper
2015-02-13 18:30     ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 04/12] xen: Rename mem_event to vm_event Tamas K Lengyel
2015-02-13 18:31   ` Andrew Cooper
2015-02-13 16:33 ` [PATCH V5 05/12] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 06/12] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
2015-02-13 18:41   ` Andrew Cooper
2015-02-17 11:56   ` Jan Beulich
2015-02-17 17:37     ` Tamas K Lengyel
2015-02-18  9:07       ` Jan Beulich
2015-02-18 12:09         ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 07/12] xen: Introduce monitor_op domctl Tamas K Lengyel
2015-02-13 20:09   ` Andrew Cooper
2015-02-17 14:02   ` Jan Beulich
2015-02-17 18:20     ` Tamas K Lengyel
2015-02-17 18:37       ` Andrew Cooper
2015-02-17 18:48         ` Tamas K Lengyel
2015-02-17 22:59       ` Tamas K Lengyel
2015-02-18  9:26       ` Jan Beulich
2015-02-18 12:11         ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 08/12] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
2015-02-13 20:14   ` Andrew Cooper
2015-02-13 22:48     ` Tamas K Lengyel
2015-02-13 22:53       ` Tamas K Lengyel
2015-02-13 23:00         ` Andrew Cooper
2015-02-13 23:02           ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 09/12] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
2015-02-13 21:05   ` Andrew Cooper
2015-02-13 23:00     ` Tamas K Lengyel
2015-02-17 14:17   ` Jan Beulich
2015-02-17 18:30     ` Tamas K Lengyel
2015-02-17 18:34       ` Andrew Cooper
2015-02-17 18:49         ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 10/12] xen/vm_event: Relocate memop checks Tamas K Lengyel
2015-02-13 21:23   ` Andrew Cooper
2015-02-13 23:20     ` Tamas K Lengyel
2015-02-13 23:24       ` Tamas K Lengyel
2015-02-17 14:25   ` Jan Beulich
2015-02-17 18:47     ` Tamas K Lengyel
2015-02-18  9:29       ` Jan Beulich
2015-02-18 12:13         ` Tamas K Lengyel
2015-02-13 16:33 ` [PATCH V5 11/12] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
2015-02-13 21:25   ` Andrew Cooper
2015-02-13 16:33 ` [PATCH V5 12/12] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
2015-02-13 21:44   ` Andrew Cooper
2015-02-13 23:10     ` Tamas K Lengyel
2015-02-17 14:31   ` Jan Beulich
2015-02-17 18:32     ` Tamas K Lengyel
2015-02-18  9:31       ` Jan Beulich
2015-02-18 12:18         ` Tamas K Lengyel
