* [PATCH V4 00/13] xen: Clean-up of mem_event subsystem
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

This patch series aims to clean up the mem_event subsystem within Xen. The
original use-case for this system was to allow external helper applications
running in privileged domains to control various memory operations performed
by Xen. Amongst these were paging, sharing and access control. The subsystem
has since been extended to also deliver non-memory-related events, namely
various HVM debugging events (INT3, MTF, MOV-TO-CR, MOV-TO-MSR). The
structures and naming of related functions, however, have not caught up to
these new use-cases, leaving many ambiguities in the code. Furthermore,
future use-cases envisioned for this subsystem include PV domains and ARM
domains, so there is a need to establish a common base to build on.

In this series we convert the mem_event system to vm_event, clearly defining
the scope of information that is transmitted via the event delivery
mechanism. Afterwards, we clean up the naming of the structures and related
functions to bring them in line with their actual operations.
Finally, the control of monitor events is moved to a new domctl, monitor_op.

Each patch in the series has been build-tested on x86 and ARM, both with
and without XSM enabled.

This patch series is also available at:
https://github.com/tklengyel/xen/tree/mem_event_cleanup4

Razvan Cojocaru (1):
  xen/mem_event: Cleanup of mem_event structures

Tamas K Lengyel (12):
  xen/mem_event: Cleanup mem_event ring names and domctls
  xen/mem_paging: Convert mem_event_op to mem_paging_op
  xen/mem_access: Merge mem_event sanity check into mem_access check
  xen: Rename mem_event to vm_event
  tools/tests: Clean-up tools/tests/xen-access
  x86/hvm: factor out and rename vm_event related functions
  xen: Introduce monitor_op domctl
  xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  xen/vm_event: Decouple vm_event and mem_access.
  xen/vm_event: Relocate memop checks
  xen/xsm: Split vm_event_op into three separate labels
  xen/vm_event: Add RESUME option to vm_event_op domctl

 MAINTAINERS                                   |   4 +-
 docs/misc/xsm-flask.txt                       |   2 +-
 tools/libxc/Makefile                          |   3 +-
 tools/libxc/include/xenctrl.h                 |  41 ++-
 tools/libxc/xc_domain_restore.c               |  14 +-
 tools/libxc/xc_domain_save.c                  |   4 +-
 tools/libxc/xc_hvm_build_x86.c                |   2 +-
 tools/libxc/xc_mem_access.c                   |  32 --
 tools/libxc/xc_mem_paging.c                   |  52 ++-
 tools/libxc/xc_memshr.c                       |  29 +-
 tools/libxc/xc_monitor.c                      | 140 ++++++++
 tools/libxc/xc_private.h                      |  15 +-
 tools/libxc/{xc_mem_event.c => xc_vm_event.c} |  59 ++-
 tools/libxc/xg_save_restore.h                 |   2 +-
 tools/tests/xen-access/xen-access.c           | 226 +++++-------
 tools/xenpaging/pagein.c                      |   2 +-
 tools/xenpaging/xenpaging.c                   | 154 ++++----
 tools/xenpaging/xenpaging.h                   |   8 +-
 xen/arch/x86/Makefile                         |   1 +
 xen/arch/x86/domain.c                         |   2 +-
 xen/arch/x86/domctl.c                         |   4 +-
 xen/arch/x86/hvm/Makefile                     |   3 +-
 xen/arch/x86/hvm/emulate.c                    |   9 +-
 xen/arch/x86/hvm/event.c                      | 190 ++++++++++
 xen/arch/x86/hvm/hvm.c                        | 192 +---------
 xen/arch/x86/hvm/vmx/vmcs.c                   |  10 +-
 xen/arch/x86/hvm/vmx/vmx.c                    |   9 +-
 xen/arch/x86/mm/hap/nested_ept.c              |   4 +-
 xen/arch/x86/mm/hap/nested_hap.c              |   4 +-
 xen/arch/x86/mm/mem_paging.c                  |  46 ++-
 xen/arch/x86/mm/mem_sharing.c                 | 136 ++++---
 xen/arch/x86/mm/p2m-pod.c                     |   4 +-
 xen/arch/x86/mm/p2m-pt.c                      |   4 +-
 xen/arch/x86/mm/p2m.c                         | 259 +++++++-------
 xen/arch/x86/monitor.c                        | 204 +++++++++++
 xen/arch/x86/x86_64/compat/mm.c               |  26 +-
 xen/arch/x86/x86_64/mm.c                      |  26 +-
 xen/common/Makefile                           |  18 +-
 xen/common/domain.c                           |  12 +-
 xen/common/domctl.c                           |  17 +-
 xen/common/mem_access.c                       |  47 +--
 xen/common/{mem_event.c => vm_event.c}        | 492 ++++++++++++++------------
 xen/drivers/passthrough/pci.c                 |   2 +-
 xen/include/asm-arm/monitor.h                 |  13 +
 xen/include/asm-arm/p2m.h                     |   6 +-
 xen/include/asm-x86/domain.h                  |  32 +-
 xen/include/asm-x86/hvm/domain.h              |   1 -
 xen/include/asm-x86/hvm/emulate.h             |   2 +-
 xen/include/asm-x86/hvm/event.h               |  40 +++
 xen/include/asm-x86/hvm/hvm.h                 |  11 -
 xen/include/asm-x86/mem_paging.h              |   7 +-
 xen/include/asm-x86/mem_sharing.h             |   5 +-
 xen/include/asm-x86/monitor.h                 |   8 +
 xen/include/asm-x86/p2m.h                     |  18 +-
 xen/include/public/domctl.h                   | 116 ++++--
 xen/include/public/hvm/params.h               |  17 +-
 xen/include/public/mem_event.h                | 134 -------
 xen/include/public/memory.h                   |  25 +-
 xen/include/public/vm_event.h                 | 192 ++++++++++
 xen/include/xen/mem_access.h                  |  18 +-
 xen/include/xen/mem_event.h                   | 143 --------
 xen/include/xen/p2m-common.h                  |   4 +-
 xen/include/xen/sched.h                       |  28 +-
 xen/include/xen/vm_event.h                    |  88 +++++
 xen/include/xsm/dummy.h                       |  22 +-
 xen/include/xsm/xsm.h                         |  35 +-
 xen/xsm/dummy.c                               |  13 +-
 xen/xsm/flask/hooks.c                         |  66 ++--
 xen/xsm/flask/policy/access_vectors           |  12 +-
 69 files changed, 2012 insertions(+), 1554 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 rename tools/libxc/{xc_mem_event.c => xc_vm_event.c} (70%)
 create mode 100644 xen/arch/x86/hvm/event.c
 create mode 100644 xen/arch/x86/monitor.c
 rename xen/common/{mem_event.c => vm_event.c} (52%)
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/hvm/event.h
 create mode 100644 xen/include/asm-x86/monitor.h
 delete mode 100644 xen/include/public/mem_event.h
 create mode 100644 xen/include/public/vm_event.h
 delete mode 100644 xen/include/xen/mem_event.h
 create mode 100644 xen/include/xen/vm_event.h

-- 
2.1.4


* [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy,
	Razvan Cojocaru

From: Razvan Cojocaru <rcojocaru@bitdefender.com>

The public mem_event structures used to communicate with helper applications
via shared rings have been used in different settings. However, the variable
names within these structures have not reflected this fact, resulting in the
same fields being reused to mean different things in different scenarios.

This patch remedies the issue by clearly defining the structure members based on
the actual context within which the structure is used.
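
For illustration, a minimal consumer-side sketch (not part of the patch) of
how a ring client dispatches on the new layout. Here get_request() stands in
for the ring-pop helpers already used by xen-access.c and xenpaging.c, and
handle_access()/handle_int3() are hypothetical handlers:

    mem_event_request_t req;

    get_request(&mem_event, &req);

    /* Reject the event if the hypervisor speaks a different ABI version. */
    if ( req.version != MEM_EVENT_INTERFACE_VERSION )
        return -1;

    switch ( req.reason )
    {
    case MEM_EVENT_REASON_MEM_ACCESS:
        /* Context-specific data now lives in a per-reason union member. */
        handle_access(req.u.mem_access.gfn, req.vcpu_id);
        break;
    case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
        /* Register state is common to all reasons, under req.regs. */
        handle_int3(req.regs.x86.rip, req.u.software_breakpoint.gfn);
        break;
    }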

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
v4: Attach mem_event version to each outgoing request directly in mem_event.
v3: Add padding to mem_event structures.
    Add version field to mem_event structures and checks for it.
---
 tools/tests/xen-access/xen-access.c |  43 +++++----
 tools/xenpaging/xenpaging.c         |  40 ++++----
 xen/arch/x86/hvm/hvm.c              | 177 +++++++++++++++++++-----------------
 xen/arch/x86/mm/mem_sharing.c       |  16 +++-
 xen/arch/x86/mm/p2m.c               | 143 ++++++++++++++++-------------
 xen/common/mem_access.c             |   6 ++
 xen/common/mem_event.c              |   2 +
 xen/include/public/mem_event.h      | 108 +++++++++++++++++-----
 8 files changed, 325 insertions(+), 210 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6cb382d..dd21d3b 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -551,13 +551,21 @@ int main(int argc, char *argv[])
                 continue;
             }
 
+            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
+            {
+                ERROR("Error: mem_event interface version mismatch!\n");
+                interrupted = -1;
+                continue;
+            }
+
             memset( &rsp, 0, sizeof (rsp) );
+            rsp.version = MEM_EVENT_INTERFACE_VERSION;
             rsp.vcpu_id = req.vcpu_id;
             rsp.flags = req.flags;
 
             switch (req.reason) {
-            case MEM_EVENT_REASON_VIOLATION:
-                rc = xc_get_mem_access(xch, domain_id, req.gfn, &access);
+            case MEM_EVENT_REASON_MEM_ACCESS:
+                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
                 if (rc < 0)
                 {
                     ERROR("Error %d getting mem_access event\n", rc);
@@ -567,21 +575,21 @@ int main(int argc, char *argv[])
 
                 printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
                        PRIx64") gla %016"PRIx64" (valid: %c; fault in gpt: %c; fault with gla: %c) (vcpu %u)\n",
-                       req.access_r ? 'r' : '-',
-                       req.access_w ? 'w' : '-',
-                       req.access_x ? 'x' : '-',
-                       req.gfn,
-                       req.offset,
-                       req.gla,
-                       req.gla_valid ? 'y' : 'n',
-                       req.fault_in_gpt ? 'y' : 'n',
-                       req.fault_with_gla ? 'y': 'n',
+                       req.u.mem_access.access_r ? 'r' : '-',
+                       req.u.mem_access.access_w ? 'w' : '-',
+                       req.u.mem_access.access_x ? 'x' : '-',
+                       req.u.mem_access.gfn,
+                       req.u.mem_access.offset,
+                       req.u.mem_access.gla,
+                       req.u.mem_access.gla_valid ? 'y' : 'n',
+                       req.u.mem_access.fault_in_gpt ? 'y' : 'n',
+                       req.u.mem_access.fault_with_gla ? 'y': 'n',
                        req.vcpu_id);
 
                 if ( default_access != after_first_access )
                 {
                     rc = xc_set_mem_access(xch, domain_id, after_first_access,
-                                           req.gfn, 1);
+                                           req.u.mem_access.gfn, 1);
                     if (rc < 0)
                     {
                         ERROR("Error %d setting gfn to access_type %d\n", rc,
@@ -592,13 +600,12 @@ int main(int argc, char *argv[])
                 }
 
 
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
+                rsp.u.mem_access.gfn = req.u.mem_access.gfn;
                 break;
-            case MEM_EVENT_REASON_INT3:
-                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n", 
-                       req.gla, 
-                       req.gfn,
+            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
+                       req.regs.x86.rip,
+                       req.u.software_breakpoint.gfn,
                        req.vcpu_id);
 
                 /* Reinject */
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 82c1ee4..c71ee06 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -684,9 +684,9 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
          * This allows page-out of these gfns if the target grows again.
          */
         if (paging->num_paged_out > paging->policy_mru_size)
-            policy_notify_paged_in(rsp->gfn);
+            policy_notify_paged_in(rsp->u.mem_paging.gfn);
         else
-            policy_notify_paged_in_nomru(rsp->gfn);
+            policy_notify_paged_in_nomru(rsp->u.mem_paging.gfn);
 
        /* Record number of resumed pages */
        paging->num_paged_out--;
@@ -874,7 +874,8 @@ int main(int argc, char *argv[])
     }
     xch = paging->xc_handle;
 
-    DPRINTF("starting %s for domain_id %u with pagefile %s\n", argv[0], paging->mem_event.domain_id, filename);
+    DPRINTF("starting %s for domain_id %u with pagefile %s\n",
+            argv[0], paging->mem_event.domain_id, filename);
 
     /* ensure that if we get a signal, we'll do cleanup, then exit */
     act.sa_handler = close_handler;
@@ -910,49 +911,52 @@ int main(int argc, char *argv[])
 
             get_request(&paging->mem_event, &req);
 
-            if ( req.gfn > paging->max_pages )
+            if ( req.u.mem_paging.gfn > paging->max_pages )
             {
-                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.gfn, paging->max_pages);
+                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n",
+                      req.u.mem_paging.gfn, paging->max_pages);
                 goto out;
             }
 
             /* Check if the page has already been paged in */
-            if ( test_and_clear_bit(req.gfn, paging->bitmap) )
+            if ( test_and_clear_bit(req.u.mem_paging.gfn, paging->bitmap) )
             {
                 /* Find where in the paging file to read from */
-                slot = paging->gfn_to_slot[req.gfn];
+                slot = paging->gfn_to_slot[req.u.mem_paging.gfn];
 
                 /* Sanity check */
-                if ( paging->slot_to_gfn[slot] != req.gfn )
+                if ( paging->slot_to_gfn[slot] != req.u.mem_paging.gfn )
                 {
-                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.gfn, slot, paging->slot_to_gfn[slot]);
+                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n",
+                          req.u.mem_paging.gfn, slot, paging->slot_to_gfn[slot]);
                     goto out;
                 }
 
                 if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
                 {
-                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.gfn, slot);
+                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n",
+                            req.u.mem_paging.gfn, slot);
                     /* Notify policy of page being dropped */
-                    policy_notify_dropped(req.gfn);
+                    policy_notify_dropped(req.u.mem_paging.gfn);
                 }
                 else
                 {
                     /* Populate the page */
-                    if ( xenpaging_populate_page(paging, req.gfn, slot) < 0 )
+                    if ( xenpaging_populate_page(paging, req.u.mem_paging.gfn, slot) < 0 )
                     {
-                        ERROR("Error populating page %"PRIx64"", req.gfn);
+                        ERROR("Error populating page %"PRIx64"", req.u.mem_paging.gfn);
                         goto out;
                     }
                 }
 
                 /* Prepare the response */
-                rsp.gfn = req.gfn;
+                rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
                 rsp.vcpu_id = req.vcpu_id;
                 rsp.flags = req.flags;
 
                 if ( xenpaging_resume_page(paging, &rsp, 1) < 0 )
                 {
-                    PERROR("Error resuming page %"PRIx64"", req.gfn);
+                    PERROR("Error resuming page %"PRIx64"", req.u.mem_paging.gfn);
                     goto out;
                 }
 
@@ -967,7 +971,7 @@ int main(int argc, char *argv[])
                 DPRINTF("page %s populated (domain = %d; vcpu = %d;"
                         " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
                         req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
-                        paging->mem_event.domain_id, req.vcpu_id, req.gfn,
+                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
                         !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
                         !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
 
@@ -975,13 +979,13 @@ int main(int argc, char *argv[])
                 if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
                 {
                     /* Prepare the response */
-                    rsp.gfn = req.gfn;
+                    rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
                     rsp.vcpu_id = req.vcpu_id;
                     rsp.flags = req.flags;
 
                     if ( xenpaging_resume_page(paging, &rsp, 0) < 0 )
                     {
-                        PERROR("Error resuming page %"PRIx64"", req.gfn);
+                        PERROR("Error resuming page %"PRIx64"", req.u.mem_paging.gfn);
                         goto out;
                     }
                 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fd2314e..e2432fd 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6322,48 +6322,42 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     const struct vcpu *curr = current;
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->x86_regs.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->x86_regs.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->x86_regs.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
-static int hvm_memory_event_traps(long p, uint32_t reason,
-                                  unsigned long value, unsigned long old, 
-                                  bool_t gla_valid, unsigned long gla) 
-{
-    struct vcpu* v = current;
-    struct domain *d = v->domain;
-    mem_event_request_t req = { .reason = reason };
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
+    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
+    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
+    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
+}
+
+static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
+{
     int rc;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
 
-    if ( !(p & HVMPME_MODE_MASK) ) 
+    if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    if ( (p & HVMPME_onchangeonly) && (value == old) )
-        return 1;
-
     rc = mem_event_claim_slot(d, &d->mem_event->access);
     if ( rc == -ENOSYS )
     {
@@ -6374,85 +6368,106 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
     else if ( rc < 0 )
         return rc;
 
-    if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync ) 
+    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
     {
-        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;    
+        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
         mem_event_vcpu_pause(v);
     }
 
-    req.gfn = value;
-    req.vcpu_id = v->vcpu_id;
-    if ( gla_valid ) 
-    {
-        req.offset = gla & ((1 << PAGE_SHIFT) - 1);
-        req.gla = gla;
-        req.gla_valid = 1;
-    }
-    else
-    {
-        req.gla = old;
-    }
-    
-    hvm_mem_event_fill_regs(&req);
-    mem_event_put_request(d, &d->mem_event->access, &req);
-    
+    hvm_mem_event_fill_regs(req);
+    mem_event_put_request(d, &d->mem_event->access, req);
+
     return 1;
 }
 
+static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
+                                unsigned long old)
+{
+    mem_event_request_t req = {
+        .reason = reason,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_cr.new_value = value,
+        .u.mov_to_cr.old_value = old
+    };
+
+    uint64_t parameters = 0 ;
+    switch(reason)
+    {
+    case MEM_EVENT_REASON_MOV_TO_CR0:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
+        break;
+    case MEM_EVENT_REASON_MOV_TO_CR3:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
+        break;
+    case MEM_EVENT_REASON_MOV_TO_CR4:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
+        break;
+    };
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_memory_event_traps(parameters, &req);
+}
+
 void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR0],
-                           MEM_EVENT_REASON_CR0,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
 }
 
 void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR3],
-                           MEM_EVENT_REASON_CR3,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
 }
 
 void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR4],
-                           MEM_EVENT_REASON_CR4,
-                           value, old, 0, 0);
+    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
 }
 
 void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 {
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_msr.msr = msr,
+        .u.mov_to_msr.value = value,
+    };
+
     hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
-                           MEM_EVENT_REASON_MSR,
-                           value, ~value, 1, msr);
+                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
+                           &req);
 }
 
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+        .vcpu_id = current->vcpu_id,
+        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                    .params[HVM_PARAM_MEMORY_EVENT_INT3],
-                                  MEM_EVENT_REASON_INT3,
-                                  gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
+                                  &req);
 }
 
 int hvm_memory_event_single_step(unsigned long gla)
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SINGLESTEP,
+        .vcpu_id = current->vcpu_id,
+        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-            .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
-            MEM_EVENT_REASON_SINGLESTEP,
-            gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
+                                  &req);
 }
 
 int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7c0fc7d..e722655 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -559,7 +559,12 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_SHARING,
+        .vcpu_id = v->vcpu_id,
+        .u.mem_sharing.gfn = gfn,
+        .u.mem_sharing.p2mt = p2m_ram_shared
+    };
 
     if ( (rc = __mem_event_claim_slot(d, 
                         &d->mem_event->share, allow_sleep)) < 0 )
@@ -571,9 +576,6 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
         mem_event_vcpu_pause(v);
     }
 
-    req.p2mt = p2m_ram_shared;
-    req.vcpu_id = v->vcpu_id;
-
     mem_event_put_request(d, &d->mem_event->share, &req);
 
     return 0;
@@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c1b7545..aba4a20 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1079,7 +1079,10 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
                                 p2m_type_t p2mt)
 {
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .u.mem_paging.gfn = gfn
+    };
 
     /* We allow no ring in this unique case, because it won't affect
      * correctness of the guest execution at this point.  If this is the only
@@ -1126,7 +1129,10 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
 void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .u.mem_paging.gfn = gfn
+    };
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
@@ -1176,7 +1182,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     }
 
     /* Send request to pager */
-    req.p2mt = p2mt;
+    req.u.mem_paging.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &d->mem_event->paging, &req);
@@ -1298,6 +1304,12 @@ void p2m_mem_paging_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
@@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
         /* Fix p2m entry if the page was not dropped */
         if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
         {
-            gfn_lock(p2m, rsp.gfn, 0);
-            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
+            uint64_t gfn = rsp.u.mem_access.gfn;
+            gfn_lock(p2m, gfn, 0);
+            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
             /* Allow only pages which were prepared properly, or pages which
              * were nominated but not evicted */
             if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
             {
-                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
+                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                               paging_mode_log_dirty(d) ? p2m_ram_logdirty :
                               p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+                set_gpfn_from_mfn(mfn_x(mfn), gfn);
             }
-            gfn_unlock(p2m, rsp.gfn, 0);
+            gfn_unlock(p2m, gfn, 0);
         }
         /* Unpause domain */
         if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
@@ -1339,92 +1352,94 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
     /* Architecture-specific vmcs/vmcb bits */
     hvm_funcs.save_cpu_ctxt(curr, &ctxt);
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.dr7 = curr->arch.debugreg[7];
-    req->x86_regs.cr0 = ctxt.cr0;
-    req->x86_regs.cr2 = ctxt.cr2;
-    req->x86_regs.cr3 = ctxt.cr3;
-    req->x86_regs.cr4 = ctxt.cr4;
-
-    req->x86_regs.sysenter_cs = ctxt.sysenter_cs;
-    req->x86_regs.sysenter_esp = ctxt.sysenter_esp;
-    req->x86_regs.sysenter_eip = ctxt.sysenter_eip;
-
-    req->x86_regs.msr_efer = ctxt.msr_efer;
-    req->x86_regs.msr_star = ctxt.msr_star;
-    req->x86_regs.msr_lstar = ctxt.msr_lstar;
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.dr7 = curr->arch.debugreg[7];
+    req->regs.x86.cr0 = ctxt.cr0;
+    req->regs.x86.cr2 = ctxt.cr2;
+    req->regs.x86.cr3 = ctxt.cr3;
+    req->regs.x86.cr4 = ctxt.cr4;
+
+    req->regs.x86.sysenter_cs = ctxt.sysenter_cs;
+    req->regs.x86.sysenter_esp = ctxt.sysenter_esp;
+    req->regs.x86.sysenter_eip = ctxt.sysenter_eip;
+
+    req->regs.x86.msr_efer = ctxt.msr_efer;
+    req->regs.x86.msr_star = ctxt.msr_star;
+    req->regs.x86.msr_lstar = ctxt.msr_lstar;
 
     hvm_get_segment_register(curr, x86_seg_fs, &seg);
-    req->x86_regs.fs_base = seg.base;
+    req->regs.x86.fs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_gs, &seg);
-    req->x86_regs.gs_base = seg.base;
+    req->regs.x86.gs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_cs, &seg);
-    req->x86_regs.cs_arbytes = seg.attr.bytes;
+    req->regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
-void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp)
+void p2m_mem_event_emulate_check(struct vcpu *v,
+                                 const mem_event_response_t *rsp)
 {
     /* Mark vcpu for skipping one instruction upon rescheduling. */
     if ( rsp->flags & MEM_EVENT_FLAG_EMULATE )
     {
         xenmem_access_t access;
         bool_t violation = 1;
+        const struct mem_event_mem_access_data *data = &rsp->u.mem_access;
 
-        if ( p2m_get_mem_access(v->domain, rsp->gfn, &access) == 0 )
+        if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
         {
             switch ( access )
             {
             case XENMEM_access_n:
             case XENMEM_access_n2rwx:
             default:
-                violation = rsp->access_r || rsp->access_w || rsp->access_x;
+                violation = data->access_r || data->access_w || data->access_x;
                 break;
 
             case XENMEM_access_r:
-                violation = rsp->access_w || rsp->access_x;
+                violation = data->access_w || data->access_x;
                 break;
 
             case XENMEM_access_w:
-                violation = rsp->access_r || rsp->access_x;
+                violation = data->access_r || data->access_x;
                 break;
 
             case XENMEM_access_x:
-                violation = rsp->access_r || rsp->access_w;
+                violation = data->access_r || data->access_w;
                 break;
 
             case XENMEM_access_rx:
             case XENMEM_access_rx2rw:
-                violation = rsp->access_w;
+                violation = data->access_w;
                 break;
 
             case XENMEM_access_wx:
-                violation = rsp->access_r;
+                violation = data->access_r;
                 break;
 
             case XENMEM_access_rw:
-                violation = rsp->access_x;
+                violation = data->access_x;
                 break;
 
             case XENMEM_access_rwx:
@@ -1542,24 +1557,24 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     if ( req )
     {
         *req_ptr = req;
-        req->reason = MEM_EVENT_REASON_VIOLATION;
+        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
 
         /* Pause the current VCPU */
         if ( p2ma != p2m_access_n2rwx )
             req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
 
         /* Send request to mem event */
-        req->gfn = gfn;
-        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
-        req->gla_valid = npfec.gla_valid;
-        req->gla = gla;
+        req->u.mem_access.gfn = gfn;
+        req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        req->u.mem_access.gla_valid = npfec.gla_valid;
+        req->u.mem_access.gla = gla;
         if ( npfec.kind == npfec_kind_with_gla )
-            req->fault_with_gla = 1;
+            req->u.mem_access.fault_with_gla = 1;
         else if ( npfec.kind == npfec_kind_in_gpt )
-            req->fault_in_gpt = 1;
-        req->access_r = npfec.read_access;
-        req->access_w = npfec.write_access;
-        req->access_x = npfec.insn_fetch;
+            req->u.mem_access.fault_in_gpt = 1;
+        req->u.mem_access.access_r = npfec.read_access;
+        req->u.mem_access.access_w = npfec.write_access;
+        req->u.mem_access.access_x = npfec.insn_fetch;
         req->vcpu_id = v->vcpu_id;
 
         p2m_mem_event_fill_regs(req);
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index d8aac5f..f0039a3 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -38,6 +38,12 @@ void mem_access_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        {
+            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!");
+            continue;
+        }
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 16ebdb5..cf6112e 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -292,6 +292,8 @@ void mem_event_put_request(struct domain *d,
 #endif
     }
 
+    req->version = MEM_EVENT_INTERFACE_VERSION;
+
     mem_event_ring_lock(med);
 
     /* Due to the reservations, this step must succeed. */
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index 599f9e8..17b6bb8 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -28,6 +28,11 @@
 #define _XEN_PUBLIC_MEM_EVENT_H
 
 #include "xen.h"
+
+#define MEM_EVENT_INTERFACE_VERSION 0x00000001
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
 #include "io/ring.h"
 
 /* Memory event flags */
@@ -47,17 +52,27 @@
  * potentially having side effects (like memory mapped or port I/O) disabled.
  */
 #define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
-
-/* Reasons for the memory event request */
-#define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
-#define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
-#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
-#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
-#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
-#define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
-#define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
-#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
-                                             does NOT honour HVMPME_onchangeonly */
+/* Reasons for the vm event request */
+/* Default case */
+#define MEM_EVENT_REASON_UNKNOWN                 0
+/* Memory access violation */
+#define MEM_EVENT_REASON_MEM_ACCESS              1
+/* Memory sharing event */
+#define MEM_EVENT_REASON_MEM_SHARING             2
+/* Memory paging event */
+#define MEM_EVENT_REASON_MEM_PAGING              3
+/* CR0 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR0              4
+/* CR3 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR3              5
+/* CR4 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR4              6
+/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+#define MEM_EVENT_REASON_MOV_TO_MSR              7
+/* Debug operation executed (int3) */
+#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
+/* Single-step (MTF) */
+#define MEM_EVENT_REASON_SINGLESTEP              9
 
 /* Using a custom struct (not hvm_hw_cpu) so as to not fill
  * the mem_event ring buffer too quickly. */
@@ -97,31 +112,74 @@ struct mem_event_regs_x86 {
     uint32_t _pad;
 };
 
-typedef struct mem_event_st {
-    uint32_t flags;
-    uint32_t vcpu_id;
-
+struct mem_event_mem_access_data {
     uint64_t gfn;
     uint64_t offset;
     uint64_t gla; /* if gla_valid */
+    uint8_t access_r;
+    uint8_t access_w;
+    uint8_t access_x;
+    uint8_t gla_valid;
+    uint8_t fault_with_gla;
+    uint8_t fault_in_gpt;
+    uint16_t _pad;
+};
+
+struct mem_event_mov_to_cr_data {
+    uint64_t new_value;
+    uint64_t old_value;
+};
+
+struct mem_event_software_breakpoint_data {
+    uint64_t gfn;
+};
 
+struct mem_event_singlestep_data {
+    uint64_t gfn;
+};
+
+struct mem_event_mov_to_msr_data {
+    uint64_t msr;
+    uint64_t value;
+};
+
+struct mem_event_paging_data {
+    uint64_t gfn;
     uint32_t p2mt;
+    uint32_t _pad;
+};
+
+struct mem_event_sharing_data {
+    uint64_t gfn;
+    uint32_t p2mt;
+    uint32_t _pad;
+};
+
+typedef struct mem_event_st {
+    uint32_t version; /* MEM_EVENT_INTERFACE_VERSION */
+    uint32_t flags;
+    uint32_t vcpu_id;
+    uint32_t reason; /* MEM_EVENT_REASON_* */
 
-    uint16_t access_r:1;
-    uint16_t access_w:1;
-    uint16_t access_x:1;
-    uint16_t gla_valid:1;
-    uint16_t fault_with_gla:1;
-    uint16_t fault_in_gpt:1;
-    uint16_t available:10;
+    union {
+        struct mem_event_paging_data                mem_paging;
+        struct mem_event_sharing_data               mem_sharing;
+        struct mem_event_mem_access_data            mem_access;
+        struct mem_event_mov_to_cr_data             mov_to_cr;
+        struct mem_event_mov_to_msr_data            mov_to_msr;
+        struct mem_event_software_breakpoint_data   software_breakpoint;
+        struct mem_event_singlestep_data            singlestep;
+    } u;
 
-    uint16_t reason;
-    struct mem_event_regs_x86 x86_regs;
+    union {
+        struct mem_event_regs_x86 x86;
+    } regs;
 } mem_event_request_t, mem_event_response_t;
 
 DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
 
-#endif
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+#endif /* _XEN_PUBLIC_MEM_EVENT_H */
 
 /*
  * Local variables:
-- 
2.1.4


* [PATCH V4 02/13] xen/mem_event: Cleanup mem_event ring names and domctls
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The name of one of the mem_event rings still implies it is used only
for memory accesses, which is no longer the case. It is also used to
deliver various HVM events, thus the name "monitor" is more appropriate
in this setting.

The mem_event subop definitions are also shortened to be more meaningful.

The tool-side changes are purely mechanical renames to match the new names.
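
To illustrate how mechanical the rename is, here is a before/after sketch of
the libxc ring-disable call (mirroring the xc_mem_access_disable() hunk
below; behaviour is unchanged):

    /* Before: the ring and its subops were named after mem_access. */
    xc_mem_event_control(xch, domain_id,
                         XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE,
                         XEN_DOMCTL_MEM_EVENT_OP_ACCESS, NULL);

    /* After: the ring is "monitor" and the subop names are shortened. */
    xc_mem_event_control(xch, domain_id,
                         XEN_MEM_EVENT_MONITOR_DISABLE,
                         XEN_DOMCTL_MEM_EVENT_OP_MONITOR, NULL);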

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v4: Shortened mem_event domctl subops.
v3: Style and comment fixes.
---
 tools/libxc/xc_domain_restore.c | 14 +++++++-------
 tools/libxc/xc_domain_save.c    |  4 ++--
 tools/libxc/xc_hvm_build_x86.c  |  2 +-
 tools/libxc/xc_mem_access.c     |  8 ++++----
 tools/libxc/xc_mem_event.c      | 12 ++++++------
 tools/libxc/xc_mem_paging.c     |  4 ++--
 tools/libxc/xc_memshr.c         |  4 ++--
 tools/libxc/xg_save_restore.h   |  2 +-
 xen/arch/x86/hvm/hvm.c          |  4 ++--
 xen/arch/x86/hvm/vmx/vmcs.c     |  2 +-
 xen/arch/x86/mm/p2m.c           |  2 +-
 xen/common/mem_access.c         |  8 ++++----
 xen/common/mem_event.c          | 30 ++++++++++++++--------------
 xen/include/public/domctl.h     | 43 ++++++++++++++++++++++++-----------------
 xen/include/public/hvm/params.h |  2 +-
 xen/include/xen/sched.h         |  4 ++--
 16 files changed, 76 insertions(+), 69 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index a382701..2ab9f46 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -734,7 +734,7 @@ typedef struct {
     uint64_t vcpumap[XC_SR_MAX_VCPUS/64];
     uint64_t identpt;
     uint64_t paging_ring_pfn;
-    uint64_t access_ring_pfn;
+    uint64_t monitor_ring_pfn;
     uint64_t sharing_ring_pfn;
     uint64_t vm86_tss;
     uint64_t console_pfn;
@@ -828,15 +828,15 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
         // DPRINTF("paging ring pfn address: %llx\n", buf->paging_ring_pfn);
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
-    case XC_SAVE_ID_HVM_ACCESS_RING_PFN:
+    case XC_SAVE_ID_HVM_MONITOR_RING_PFN:
         /* Skip padding 4 bytes then read the mem access ring location. */
-        if ( RDEXACT(fd, &buf->access_ring_pfn, sizeof(uint32_t)) ||
-             RDEXACT(fd, &buf->access_ring_pfn, sizeof(uint64_t)) )
+        if ( RDEXACT(fd, &buf->monitor_ring_pfn, sizeof(uint32_t)) ||
+             RDEXACT(fd, &buf->monitor_ring_pfn, sizeof(uint64_t)) )
         {
             PERROR("error read the access ring pfn");
             return -1;
         }
-        // DPRINTF("access ring pfn address: %llx\n", buf->access_ring_pfn);
+        // DPRINTF("monitor ring pfn address: %llx\n", buf->monitor_ring_pfn);
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
     case XC_SAVE_ID_HVM_SHARING_RING_PFN:
@@ -1660,8 +1660,8 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                 xc_hvm_param_set(xch, dom, HVM_PARAM_IDENT_PT, pagebuf.identpt);
             if ( pagebuf.paging_ring_pfn )
                 xc_hvm_param_set(xch, dom, HVM_PARAM_PAGING_RING_PFN, pagebuf.paging_ring_pfn);
-            if ( pagebuf.access_ring_pfn )
-                xc_hvm_param_set(xch, dom, HVM_PARAM_ACCESS_RING_PFN, pagebuf.access_ring_pfn);
+            if ( pagebuf.monitor_ring_pfn )
+                xc_hvm_param_set(xch, dom, HVM_PARAM_MONITOR_RING_PFN, pagebuf.monitor_ring_pfn);
             if ( pagebuf.sharing_ring_pfn )
                 xc_hvm_param_set(xch, dom, HVM_PARAM_SHARING_RING_PFN, pagebuf.sharing_ring_pfn);
             if ( pagebuf.vm86_tss )
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 254fdb3..949ef64 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -1664,9 +1664,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             goto out;
         }
 
-        chunk.id = XC_SAVE_ID_HVM_ACCESS_RING_PFN;
+        chunk.id = XC_SAVE_ID_HVM_MONITOR_RING_PFN;
         chunk.data = 0;
-        xc_hvm_param_get(xch, dom, HVM_PARAM_ACCESS_RING_PFN, &chunk.data);
+        xc_hvm_param_get(xch, dom, HVM_PARAM_MONITOR_RING_PFN, &chunk.data);
 
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index c81a25b..30a929d 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -497,7 +497,7 @@ static int setup_guest(xc_interface *xch,
                      special_pfn(SPECIALPAGE_CONSOLE));
     xc_hvm_param_set(xch, dom, HVM_PARAM_PAGING_RING_PFN,
                      special_pfn(SPECIALPAGE_PAGING));
-    xc_hvm_param_set(xch, dom, HVM_PARAM_ACCESS_RING_PFN,
+    xc_hvm_param_set(xch, dom, HVM_PARAM_MONITOR_RING_PFN,
                      special_pfn(SPECIALPAGE_ACCESS));
     xc_hvm_param_set(xch, dom, HVM_PARAM_SHARING_RING_PFN,
                      special_pfn(SPECIALPAGE_SHARING));
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 55d0e9f..446394b 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -26,22 +26,22 @@
 
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_ACCESS_RING_PFN,
+    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
                                port, 0);
 }
 
 void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
                                          uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_ACCESS_RING_PFN,
+    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
                                port, 1);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS,
+                                XEN_MEM_EVENT_MONITOR_DISABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_MONITOR,
                                 NULL);
 }
 
diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
index 8c0be4e..4bb120d 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -115,20 +115,20 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE;
+        op = XEN_MEM_EVENT_PAGING_ENABLE;
         mode = XEN_DOMCTL_MEM_EVENT_OP_PAGING;
         break;
 
-    case HVM_PARAM_ACCESS_RING_PFN:
+    case HVM_PARAM_MONITOR_RING_PFN:
         if ( enable_introspection )
-            op = XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION;
+            op = XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION;
         else
-            op = XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_ACCESS;
+            op = XEN_MEM_EVENT_MONITOR_ENABLE;
+        mode = XEN_DOMCTL_MEM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE;
+        op = XEN_MEM_EVENT_SHARING_ENABLE;
         mode = XEN_DOMCTL_MEM_EVENT_OP_SHARING;
         break;
 
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 8aa7d4d..5194423 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -34,7 +34,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
     }
         
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE,
+                                XEN_MEM_EVENT_PAGING_ENABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING,
                                 port);
 }
@@ -42,7 +42,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE,
+                                XEN_MEM_EVENT_PAGING_DISABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING,
                                 NULL);
 }
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index d6a9539..4398630 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     }
         
     return xc_mem_event_control(xch, domid,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE,
+                                XEN_MEM_EVENT_SHARING_ENABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_SHARING,
                                 port);
 }
@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
                            domid_t domid)
 {
     return xc_mem_event_control(xch, domid,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE,
+                                XEN_MEM_EVENT_SHARING_DISABLE,
                                 XEN_DOMCTL_MEM_EVENT_OP_SHARING,
                                 NULL);
 }
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index bdd9009..10348aa 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -256,7 +256,7 @@
 #define XC_SAVE_ID_HVM_GENERATION_ID_ADDR -14
 /* Markers for the pfn's hosting these mem event rings */
 #define XC_SAVE_ID_HVM_PAGING_RING_PFN  -15
-#define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
+#define XC_SAVE_ID_HVM_MONITOR_RING_PFN -16
 #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
 #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
 /* These are a pair; it is an error for one to exist without the other */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e2432fd..11a7b2b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6358,7 +6358,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    rc = mem_event_claim_slot(d, &d->mem_event->access);
+    rc = mem_event_claim_slot(d, &d->mem_event->monitor);
     if ( rc == -ENOSYS )
     {
         /* If there was no ring to handle the event, then
@@ -6375,7 +6375,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     }
 
     hvm_mem_event_fill_regs(req);
-    mem_event_put_request(d, &d->mem_event->access, req);
+    mem_event_put_request(d, &d->mem_event->monitor, req);
 
     return 1;
 }
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index d614638..e0a33e3 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -715,7 +715,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
         return;
 
     if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
-         mem_event_check_ring(&d->mem_event->access) )
+         mem_event_check_ring(&d->mem_event->monitor) )
     {
         unsigned int i;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index aba4a20..feec99f 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1499,7 +1499,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     gfn_unlock(p2m, gfn, 0);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    if ( !mem_event_check_ring(&d->mem_event->access) || !req_ptr ) 
+    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr ) 
     {
         /* No listener */
         if ( p2m->access_required ) 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index f0039a3..3a650ad 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -34,7 +34,7 @@ void mem_access_resume(struct domain *d)
     mem_event_response_t rsp;
 
     /* Pull all responses off the ring. */
-    while ( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+    while ( mem_event_get_response(d, &d->mem_event->monitor, &rsp) )
     {
         struct vcpu *v;
 
@@ -85,7 +85,7 @@ int mem_access_memop(unsigned long cmd,
         goto out;
 
     rc = -ENODEV;
-    if ( unlikely(!d->mem_event->access.ring_page) )
+    if ( unlikely(!d->mem_event->monitor.ring_page) )
         goto out;
 
     switch ( mao.op )
@@ -152,11 +152,11 @@ int mem_access_memop(unsigned long cmd,
 
 int mem_access_send_req(struct domain *d, mem_event_request_t *req)
 {
-    int rc = mem_event_claim_slot(d, &d->mem_event->access);
+    int rc = mem_event_claim_slot(d, &d->mem_event->monitor);
     if ( rc < 0 )
         return rc;
 
-    mem_event_put_request(d, &d->mem_event->access, req);
+    mem_event_put_request(d, &d->mem_event->monitor, req);
 
     return 0;
 }
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index cf6112e..03482d0 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -445,7 +445,7 @@ static void mem_paging_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+    if ( likely(v->domain->mem_event->monitor.ring_page != NULL) )
         mem_access_resume(v->domain);
 }
 #endif
@@ -510,9 +510,9 @@ void mem_event_cleanup(struct domain *d)
     }
 #endif
 #ifdef HAS_MEM_ACCESS
-    if ( d->mem_event->access.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->access.wq);
-        (void)mem_event_disable(d, &d->mem_event->access);
+    if ( d->mem_event->monitor.ring_page ) {
+        destroy_waitqueue_head(&d->mem_event->monitor.wq);
+        (void)mem_event_disable(d, &d->mem_event->monitor);
     }
 #endif
 #ifdef HAS_MEM_SHARING
@@ -565,7 +565,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+        case XEN_MEM_EVENT_PAGING_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -595,7 +595,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+        case XEN_MEM_EVENT_PAGING_DISABLE:
         {
             if ( med->ring_page )
                 rc = mem_event_disable(d, med);
@@ -611,32 +611,32 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
+    case XEN_DOMCTL_MEM_EVENT_OP_MONITOR:
     {
-        struct mem_event_domain *med = &d->mem_event->access;
+        struct mem_event_domain *med = &d->mem_event->monitor;
         rc = -EINVAL;
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION:
+        case XEN_MEM_EVENT_MONITOR_ENABLE:
+        case XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION:
         {
             rc = -ENODEV;
             if ( !p2m_mem_event_sanity_check(d) )
                 break;
 
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
-                                    HVM_PARAM_ACCESS_RING_PFN,
+                                    HVM_PARAM_MONITOR_RING_PFN,
                                     mem_access_notification);
 
-            if ( mec->op == XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION
+            if ( mec->op == XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
                  && !rc )
                 p2m_setup_introspection(d);
 
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        case XEN_MEM_EVENT_MONITOR_DISABLE:
         {
             if ( med->ring_page )
             {
@@ -662,7 +662,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+        case XEN_MEM_EVENT_SHARING_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -680,7 +680,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+        case XEN_MEM_EVENT_SHARING_DISABLE:
         {
             if ( med->ring_page )
                 rc = mem_event_disable(d, med);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 57e2ed7..034cec7 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -762,7 +762,7 @@ struct xen_domctl_gdbsx_domstatus {
  * pager<->hypervisor interface. Use XENMEM_paging_op*
  * to perform per-page operations.
  *
- * The XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE domctl returns several
+ * The XEN_MEM_EVENT_PAGING_ENABLE domctl returns several
  * non-standard error codes to indicate why paging could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EMLINK - guest has iommu passthrough enabled
@@ -771,33 +771,40 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING            1
 
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE     0
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE    1
+#define XEN_MEM_EVENT_PAGING_ENABLE               0
+#define XEN_MEM_EVENT_PAGING_DISABLE              1
 
 /*
- * Access permissions.
+ * Monitor helper.
  *
  * As with paging, use the domctl for teardown/setup of the
  * helper<->hypervisor interface.
  *
- * There are HVM hypercalls to set the per-page access permissions of every
- * page in a domain.  When one of these permissions--independent, read, 
- * write, and execute--is violated, the VCPU is paused and a memory event 
- * is sent with what happened.  (See public/mem_event.h) .
+ * The monitor interface can be used to register for various VM events. For
+ * example, there are HVM hypercalls to set the per-page access permissions
+ * of every page in a domain.  When one of these permissions--independent,
+ * read, write, and execute--is violated, the VCPU is paused and a memory event
+ * is sent with what happened. The memory event handler can then resume the
+ * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
  *
- * The memory event handler can then resume the VCPU and redo the access 
- * with a XENMEM_access_op_resume hypercall.
+ * See public/mem_event.h for the list of available events that can be
+ * subscribed to via the monitor interface.
  *
- * The XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE domctl returns several
+ * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
+ * interface with the XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+ * operation.
+ *
+ * The XEN_MEM_EVENT_MONITOR_ENABLE* domctls return several
  * non-standard error codes to indicate why access could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
+ *
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS                        2
+#define XEN_DOMCTL_MEM_EVENT_OP_MONITOR                        2
 
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE                 0
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE                1
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE_INTROSPECTION   2
+#define XEN_MEM_EVENT_MONITOR_ENABLE                           0
+#define XEN_MEM_EVENT_MONITOR_DISABLE                          1
+#define XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -814,13 +821,13 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_SHARING           3
 
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE    0
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE   1
+#define XEN_MEM_EVENT_SHARING_ENABLE              0
+#define XEN_MEM_EVENT_SHARING_DISABLE             1
 
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
 struct xen_domctl_mem_event_op {
-    uint32_t       op;           /* XEN_DOMCTL_MEM_EVENT_OP_*_* */
+    uint32_t       op;           /* XEN_MEM_EVENT_*_* */
     uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index a2d43bc..6efcc0b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -182,7 +182,7 @@
 
 /* Params for the mem event rings */
 #define HVM_PARAM_PAGING_RING_PFN   27
-#define HVM_PARAM_ACCESS_RING_PFN   28
+#define HVM_PARAM_MONITOR_RING_PFN  28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
 /* SHUTDOWN_* action in case of a triple fault */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 814e087..64a2bd3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -288,8 +288,8 @@ struct mem_event_per_domain
     struct mem_event_domain share;
     /* Memory paging support */
     struct mem_event_domain paging;
-    /* Memory access support */
-    struct mem_event_domain access;
+    /* VM event monitor support */
+    struct mem_event_domain monitor;
 };
 
 struct evtchn_port_ops;
-- 
2.1.4

* [PATCH V4 03/13] xen/mem_paging: Convert mem_event_op to mem_paging_op
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 02/13] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-10 13:00   ` Jan Beulich
  2015-02-09 18:53 ` [PATCH V4 04/13] xen/mem_access: Merge mem_event sanity check into mem_access check Tamas K Lengyel
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The only use-case of the mem_event_op structure has been mem_paging, so
renaming the structure to mem_paging_op and relocating its associated
functions clarifies its actual usage.
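
As an illustration (not part of this patch), a pager built on libxc now
drives the per-page operations through the dedicated paging calls, each of
which marshals a xen_mem_paging_op_t internally; the function name, domain
id and gfn below are placeholders:

    #include <xenctrl.h>

    /* Hypothetical usage sketch, not part of the patch. */
    static int page_out_one(xc_interface *xch, domid_t domid,
                            unsigned long gfn)
    {
        /* Nominate the gfn for paging; the helper builds the
         * xen_mem_paging_op_t and issues XENMEM_paging_op itself. */
        int rc = xc_mem_paging_nominate(xch, domid, gfn);
        if ( rc < 0 )
            return rc;

        /* ... save the page contents somewhere, then ... */
        return xc_mem_paging_evict(xch, domid, gfn);
    }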

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
v4: Style fixes
v3: Style fixes
---
 tools/libxc/xc_mem_event.c       | 16 ----------------
 tools/libxc/xc_mem_paging.c      | 26 ++++++++++++++++++--------
 tools/libxc/xc_private.h         |  3 ---
 xen/arch/x86/mm/mem_paging.c     | 14 ++++++--------
 xen/arch/x86/x86_64/compat/mm.c  | 10 ++++++----
 xen/arch/x86/x86_64/mm.c         |  8 ++++----
 xen/common/mem_event.c           |  4 ++--
 xen/include/asm-x86/mem_paging.h |  2 +-
 xen/include/public/memory.h      |  9 ++++-----
 9 files changed, 41 insertions(+), 51 deletions(-)

diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_mem_event.c
index 4bb120d..487fcee 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -40,22 +40,6 @@ int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
     return rc;
 }
 
-int xc_mem_event_memop(xc_interface *xch, domid_t domain_id, 
-                        unsigned int op, unsigned int mode,
-                        uint64_t gfn, void *buffer)
-{
-    xen_mem_event_op_t meo;
-
-    memset(&meo, 0, sizeof(meo));
-
-    meo.op      = op;
-    meo.domain  = domain_id;
-    meo.gfn     = gfn;
-    meo.buffer  = (unsigned long) buffer;
-
-    return do_memory_op(xch, mode, &meo, sizeof(meo));
-}
-
 void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
                           uint32_t *port, int enable_introspection)
 {
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 5194423..212f9ec 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -23,6 +23,20 @@
 
 #include "xc_private.h"
 
+static int xc_mem_paging_memop(xc_interface *xch, domid_t domain_id,
+                               unsigned int op, uint64_t gfn, void *buffer)
+{
+    xen_mem_paging_op_t mpo;
+
+    memset(&mpo, 0, sizeof(mpo));
+
+    mpo.op      = op;
+    mpo.domain  = domain_id;
+    mpo.gfn     = gfn;
+    mpo.buffer  = (unsigned long) buffer;
+
+    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
+}
 
 int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
                          uint32_t *port)
@@ -49,25 +63,22 @@ int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_nominate,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_evict,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
-    return xc_mem_event_memop(xch, domain_id,
+    return xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_prep,
-                                XENMEM_paging_op,
                                 gfn, NULL);
 }
 
@@ -87,9 +98,8 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
     if ( mlock(buffer, XC_PAGE_SIZE) )
         return -1;
         
-    rc = xc_mem_event_memop(xch, domain_id,
+    rc = xc_mem_paging_memop(xch, domain_id,
                                 XENMEM_paging_op_prep,
-                                XENMEM_paging_op,
                                 gfn, buffer);
 
     old_errno = errno;
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 45b8644..f1f601c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -425,9 +425,6 @@ int xc_ffs64(uint64_t x);
  */
 int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
                          unsigned int mode, uint32_t *port);
-int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
-                        unsigned int op, unsigned int mode,
-                        uint64_t gfn, void *buffer);
 /*
  * Enables mem_event and returns the mapped ring page indicated by param.
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 65f6a3d..e3d64a6 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -25,31 +25,29 @@
 #include <xen/mem_event.h>
 
 
-int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
+int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpc)
 {
     if ( unlikely(!d->mem_event->paging.ring_page) )
         return -ENODEV;
 
-    switch( mec->op )
+    switch( mpc->op )
     {
     case XENMEM_paging_op_nominate:
     {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_nominate(d, gfn);
+        return p2m_mem_paging_nominate(d, mpc->gfn);
     }
     break;
 
     case XENMEM_paging_op_evict:
     {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_evict(d, gfn);
+        return p2m_mem_paging_evict(d, mpc->gfn);
     }
     break;
 
     case XENMEM_paging_op_prep:
     {
-        unsigned long gfn = mec->gfn;
-        return p2m_mem_paging_prep(d, gfn, mec->buffer);
+        unsigned long gfn = mpc->gfn;
+        return p2m_mem_paging_prep(d, gfn, mpc->buffer);
     }
     break;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f90f611..96cec31 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -188,11 +188,12 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENMEM_paging_op:
     {
-        xen_mem_event_op_t meo;
-        if ( copy_from_guest(&meo, arg, 1) )
+        xen_mem_paging_op_t mpo;
+
+        if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, meo.domain, &meo);
-        if ( !rc && __copy_to_guest(arg, &meo, 1) )
+        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
     }
@@ -200,6 +201,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_sharing_op:
     {
         xen_mem_sharing_op_t mso;
+
         if ( copy_from_guest(&mso, arg, 1) )
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d631aee..2fa1f67 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -985,11 +985,11 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENMEM_paging_op:
     {
-        xen_mem_event_op_t meo;
-        if ( copy_from_guest(&meo, arg, 1) )
+        xen_mem_paging_op_t mpo;
+        if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, meo.domain, &meo);
-        if ( !rc && __copy_to_guest(arg, &meo, 1) )
+        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
     }
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 03482d0..8a9119f 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -476,12 +476,12 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
     {
 #ifdef HAS_MEM_PAGING
         case XENMEM_paging_op:
-            ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+            ret = mem_paging_memop(d, arg);
             break;
 #endif
 #ifdef HAS_MEM_SHARING
         case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+            ret = mem_sharing_memop(d, arg);
             break;
 #endif
         default:
diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
index 6b7a1fe..92ed2fa 100644
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -21,7 +21,7 @@
  */
 
 
-int mem_paging_memop(struct domain *d, xen_mem_event_op_t *meo);
+int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);
 
 
 /*
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 595f953..e0cca46 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -372,18 +372,17 @@ typedef struct xen_pod_target xen_pod_target_t;
 #define XENMEM_paging_op_evict              1
 #define XENMEM_paging_op_prep               2
 
-struct xen_mem_event_op {
-    uint8_t     op;         /* XENMEM_*_op_* */
+struct xen_mem_paging_op {
+    uint8_t     op;         /* XENMEM_paging_op_* */
     domid_t     domain;
-    
 
     /* PAGING_PREP IN: buffer to immediately fill page in */
     uint64_aligned_t    buffer;
     /* Other OPs */
     uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
 };
-typedef struct xen_mem_event_op xen_mem_event_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_mem_event_op_t);
+typedef struct xen_mem_paging_op xen_mem_paging_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 
 #define XENMEM_access_op                    21
 #define XENMEM_access_op_resume             0
-- 
2.1.4

* [PATCH V4 04/13] xen/mem_access: Merge mem_event sanity check into mem_access check
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 03/13] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The sanity check currently performed when enabling mem_event is only
applicable to mem_access, so it is merged into the mem_access-specific
check.
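
For reference (not part of the patch), the consolidated check that results
from this merge, as seen in the p2m.h hunk below, is:

    /* mem_access now carries the HVM/HAP/VMX requirements itself. */
    static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
    {
        return is_hvm_domain(d) && hap_enabled(d) && cpu_has_vmx;
    }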

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/common/mem_event.c      | 4 ----
 xen/include/asm-x86/p2m.h   | 8 +-------
 xen/include/public/domctl.h | 1 -
 3 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 8a9119f..3ed6abc 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -621,10 +621,6 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         case XEN_MEM_EVENT_MONITOR_ENABLE:
         case XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION:
         {
-            rc = -ENODEV;
-            if ( !p2m_mem_event_sanity_check(d) )
-                break;
-
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                     HVM_PARAM_MONITOR_RING_PFN,
                                     mem_access_notification);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index e86e26f..20accc6 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -600,16 +600,10 @@ void p2m_mem_event_emulate_check(struct vcpu *v,
 /* Enable arch specific introspection options (such as MSR interception). */
 void p2m_setup_introspection(struct domain *d);
 
-/* Sanity check for mem_event hardware support */
-static inline bool_t p2m_mem_event_sanity_check(struct domain *d)
-{
-    return hap_enabled(d) && cpu_has_vmx;
-}
-
 /* Sanity check for mem_access hardware support */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-    return is_hvm_domain(d);
+    return is_hvm_domain(d) && hap_enabled(d) && cpu_has_vmx;
 }
 
 /* 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 034cec7..3b4c2e2 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -796,7 +796,6 @@ struct xen_domctl_gdbsx_domstatus {
  *
  * The XEN_MEM_EVENT_MONITOR_ENABLE* domctls return several
  * non-standard error codes to indicate why access could not be enabled:
- * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-- 
2.1.4

* [PATCH V4 05/13] xen: Rename mem_event to vm_event
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 04/13] xen/mem_access: Merge mem_event sanity check into mem_access check Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
                     ` (2 more replies)
  2015-02-09 18:53 ` [PATCH V4 06/13] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
                   ` (7 subsequent siblings)
  12 siblings, 3 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

In this patch we mechanically rename mem_event to vm_event. This patch
introduces no logic changes to the code. The name vm_event better describes
the intended use of this subsystem, which is not limited to memory events.
It can be used to off-load decision-making logic into helper applications
when various events are encountered during a VM's execution.
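
As a sketch of that model (mirroring tools/tests/xen-access below, with the
ring and event-channel setup elided, and back_ring, vm_event, xce_handle
and port as placeholder names), a helper application consumes vm_event
requests and pushes back responses:

    vm_event_request_t req;
    vm_event_response_t rsp;

    while ( RING_HAS_UNCONSUMED_REQUESTS(&back_ring) )
    {
        get_request(&vm_event, &req);        /* pull a request off the ring */

        memset(&rsp, 0, sizeof(rsp));
        rsp.version = VM_EVENT_INTERFACE_VERSION;
        rsp.vcpu_id = req.vcpu_id;
        rsp.flags = req.flags;

        /* Decision making for, e.g., VM_EVENT_REASON_MEM_ACCESS goes here. */

        put_response(&vm_event, &rsp);       /* queue the response */
        xc_evtchn_notify(xce_handle, port);  /* kick Xen to resume the vCPU */
    }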

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
v4: Use git's -M option to improve patch readability.
    Note that the style problems in include/xen/vm_event.h are fixed in a
     later patch in the series so that git can track the relocation here.
---
 MAINTAINERS                                    |   4 +-
 docs/misc/xsm-flask.txt                        |   2 +-
 tools/libxc/Makefile                           |   2 +-
 tools/libxc/xc_mem_access.c                    |  16 +-
 tools/libxc/xc_mem_paging.c                    |  18 +-
 tools/libxc/xc_memshr.c                        |  18 +-
 tools/libxc/xc_private.h                       |  12 +-
 tools/libxc/{xc_mem_event.c => xc_vm_event.c}  |  40 +--
 tools/tests/xen-access/xen-access.c            | 110 ++++----
 tools/xenpaging/pagein.c                       |   2 +-
 tools/xenpaging/xenpaging.c                    | 118 ++++-----
 tools/xenpaging/xenpaging.h                    |   8 +-
 xen/arch/x86/domain.c                          |   2 +-
 xen/arch/x86/domctl.c                          |   4 +-
 xen/arch/x86/hvm/emulate.c                     |   6 +-
 xen/arch/x86/hvm/hvm.c                         |  46 ++--
 xen/arch/x86/hvm/vmx/vmcs.c                    |   4 +-
 xen/arch/x86/mm/hap/nested_ept.c               |   4 +-
 xen/arch/x86/mm/hap/nested_hap.c               |   4 +-
 xen/arch/x86/mm/mem_paging.c                   |   4 +-
 xen/arch/x86/mm/mem_sharing.c                  |  32 +--
 xen/arch/x86/mm/p2m-pod.c                      |   4 +-
 xen/arch/x86/mm/p2m-pt.c                       |   4 +-
 xen/arch/x86/mm/p2m.c                          |  99 ++++----
 xen/arch/x86/x86_64/compat/mm.c                |   6 +-
 xen/arch/x86/x86_64/mm.c                       |   6 +-
 xen/common/Makefile                            |   2 +-
 xen/common/domain.c                            |  12 +-
 xen/common/domctl.c                            |   8 +-
 xen/common/mem_access.c                        |  28 +--
 xen/common/{mem_event.c => vm_event.c}         | 336 ++++++++++++-------------
 xen/drivers/passthrough/pci.c                  |   2 +-
 xen/include/asm-arm/p2m.h                      |   6 +-
 xen/include/asm-x86/domain.h                   |   4 +-
 xen/include/asm-x86/hvm/emulate.h              |   2 +-
 xen/include/asm-x86/p2m.h                      |   8 +-
 xen/include/public/domctl.h                    |  46 ++--
 xen/include/public/{mem_event.h => vm_event.h} |  90 +++----
 xen/include/xen/mem_access.h                   |   4 +-
 xen/include/xen/p2m-common.h                   |   4 +-
 xen/include/xen/sched.h                        |  26 +-
 xen/include/xen/{mem_event.h => vm_event.h}    |  74 +++---
 xen/include/xsm/dummy.h                        |   4 +-
 xen/include/xsm/xsm.h                          |  12 +-
 xen/xsm/dummy.c                                |   4 +-
 xen/xsm/flask/hooks.c                          |  16 +-
 xen/xsm/flask/policy/access_vectors            |   2 +-
 47 files changed, 632 insertions(+), 633 deletions(-)
 rename tools/libxc/{xc_mem_event.c => xc_vm_event.c} (79%)
 rename xen/common/{mem_event.c => vm_event.c} (59%)
 rename xen/include/public/{mem_event.h => vm_event.h} (61%)
 rename xen/include/xen/{mem_event.h => vm_event.h} (50%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3bbac9e..3d09d15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -361,10 +361,10 @@ F:	xen/arch/x86/mm/mem_sharing.c
 F:	xen/arch/x86/mm/mem_paging.c
 F:	tools/memshr
 
-MEMORY EVENT AND ACCESS
+VM EVENT AND MEM ACCESS
 M:	Tim Deegan <tim@xen.org>
 S:	Supported
-F:	xen/common/mem_event.c
+F:	xen/common/vm_event.c
 F:	xen/common/mem_access.c
 
 XENTRACE
diff --git a/docs/misc/xsm-flask.txt b/docs/misc/xsm-flask.txt
index 9559028..13ce498 100644
--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -87,7 +87,7 @@ __HYPERVISOR_domctl (xen/include/public/domctl.h)
  * XEN_DOMCTL_set_machine_address_size
  * XEN_DOMCTL_debug_op
  * XEN_DOMCTL_gethvmcontext_partial
- * XEN_DOMCTL_mem_event_op
+ * XEN_DOMCTL_vm_event_op
  * XEN_DOMCTL_mem_sharing_op
  * XEN_DOMCTL_setvcpuextstate
  * XEN_DOMCTL_getvcpuextstate
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 6fa88c7..22ba2a1 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -31,7 +31,7 @@ CTRL_SRCS-y       += xc_pm.c
 CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
 CTRL_SRCS-y       += xc_tmem.c
-CTRL_SRCS-y       += xc_mem_event.c
+CTRL_SRCS-y       += xc_vm_event.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_mem_access.c
 CTRL_SRCS-y       += xc_memshr.c
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 446394b..0a3f0e6 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -26,23 +26,23 @@
 
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                               port, 0);
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port, 0);
 }
 
 void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
                                          uint32_t *port)
 {
-    return xc_mem_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                               port, 1);
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port, 1);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
 {
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_MONITOR_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_MONITOR,
-                                NULL);
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_MONITOR_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
 }
 
 int xc_mem_access_resume(xc_interface *xch, domid_t domain_id)
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 212f9ec..b635a4d 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -46,19 +46,19 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
         errno = EINVAL;
         return -1;
     }
-        
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_PAGING_ENABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
-                                port);
+
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_PAGING_ENABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               port);
 }
 
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_MEM_EVENT_PAGING_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
-                                NULL);
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_PAGING_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               NULL);
 }
 
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 4398630..14cc1ce 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -51,20 +51,20 @@ int xc_memshr_ring_enable(xc_interface *xch,
         errno = EINVAL;
         return -1;
     }
-        
-    return xc_mem_event_control(xch, domid,
-                                XEN_MEM_EVENT_SHARING_ENABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING,
-                                port);
+
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_SHARING_ENABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               port);
 }
 
 int xc_memshr_ring_disable(xc_interface *xch, 
                            domid_t domid)
 {
-    return xc_mem_event_control(xch, domid,
-                                XEN_MEM_EVENT_SHARING_DISABLE,
-                                XEN_DOMCTL_MEM_EVENT_OP_SHARING,
-                                NULL);
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_SHARING_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               NULL);
 }
 
 static int xc_memshr_memop(xc_interface *xch, domid_t domid, 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index f1f601c..843540c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -421,15 +421,15 @@ int xc_ffs64(uint64_t x);
 #define DOMPRINTF_CALLED(xch) xc_dom_printf((xch), "%s: called", __FUNCTION__)
 
 /**
- * mem_event operations. Internal use only.
+ * vm_event operations. Internal use only.
  */
-int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
-                         unsigned int mode, uint32_t *port);
+int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
+                        unsigned int mode, uint32_t *port);
 /*
- * Enables mem_event and returns the mapped ring page indicated by param.
+ * Enables vm_event and returns the mapped ring page indicated by param.
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
  */
-void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                          uint32_t *port, int enable_introspection);
+void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
+                         uint32_t *port, int enable_introspection);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_mem_event.c b/tools/libxc/xc_vm_event.c
similarity index 79%
rename from tools/libxc/xc_mem_event.c
rename to tools/libxc/xc_vm_event.c
index 487fcee..d458b9a 100644
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -1,6 +1,6 @@
 /******************************************************************************
  *
- * xc_mem_event.c
+ * xc_vm_event.c
  *
  * Interface to low-level memory event functionality.
  *
@@ -23,25 +23,25 @@
 
 #include "xc_private.h"
 
-int xc_mem_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
-                         unsigned int mode, uint32_t *port)
+int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
+                        unsigned int mode, uint32_t *port)
 {
     DECLARE_DOMCTL;
     int rc;
 
-    domctl.cmd = XEN_DOMCTL_mem_event_op;
+    domctl.cmd = XEN_DOMCTL_vm_event_op;
     domctl.domain = domain_id;
-    domctl.u.mem_event_op.op = op;
-    domctl.u.mem_event_op.mode = mode;
-    
+    domctl.u.vm_event_op.op = op;
+    domctl.u.vm_event_op.mode = mode;
+
     rc = do_domctl(xch, &domctl);
     if ( !rc && port )
-        *port = domctl.u.mem_event_op.port;
+        *port = domctl.u.vm_event_op.port;
     return rc;
 }
 
-void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                          uint32_t *port, int enable_introspection)
+void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
+                         uint32_t *port, int enable_introspection)
 {
     void *ring_page = NULL;
     uint64_t pfn;
@@ -99,26 +99,26 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_MEM_EVENT_PAGING_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_PAGING;
+        op = XEN_VM_EVENT_PAGING_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
         if ( enable_introspection )
-            op = XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION;
+            op = XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION;
         else
-            op = XEN_MEM_EVENT_MONITOR_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_MONITOR;
+            op = XEN_VM_EVENT_MONITOR_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_MEM_EVENT_SHARING_ENABLE;
-        mode = XEN_DOMCTL_MEM_EVENT_OP_SHARING;
+        op = XEN_VM_EVENT_SHARING_ENABLE;
+        mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
         break;
 
     /*
      * This is for the outside chance that the HVM_PARAM is valid but is invalid
-     * as far as mem_event goes.
+     * as far as vm_event goes.
      */
     default:
         errno = EINVAL;
@@ -126,10 +126,10 @@ void *xc_mem_event_enable(xc_interface *xch, domid_t domain_id, int param,
         goto out;
     }
 
-    rc1 = xc_mem_event_control(xch, domain_id, op, mode, port);
+    rc1 = xc_vm_event_control(xch, domain_id, op, mode, port);
     if ( rc1 != 0 )
     {
-        PERROR("Failed to enable mem_event\n");
+        PERROR("Failed to enable vm_event\n");
         goto out;
     }
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index dd21d3b..0a22a31 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -39,7 +39,7 @@
 #include <sys/poll.h>
 
 #include <xenctrl.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
@@ -91,26 +91,26 @@ static inline int spin_trylock(spinlock_t *lock)
     return !test_and_set_bit(1, lock);
 }
 
-#define mem_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
-#define mem_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
-#define mem_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
+#define vm_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
+#define vm_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
+#define vm_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
 
-typedef struct mem_event {
+typedef struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
     int port;
-    mem_event_back_ring_t back_ring;
+    vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
     spinlock_t ring_lock;
-} mem_event_t;
+} vm_event_t;
 
 typedef struct xenaccess {
     xc_interface *xc_handle;
 
     xc_domaininfo_t    *domain_info;
 
-    mem_event_t mem_event;
+    vm_event_t vm_event;
 } xenaccess_t;
 
 static int interrupted;
@@ -170,13 +170,13 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         return 0;
 
     /* Tear down domain xenaccess in Xen */
-    if ( xenaccess->mem_event.ring_page )
-        munmap(xenaccess->mem_event.ring_page, XC_PAGE_SIZE);
+    if ( xenaccess->vm_event.ring_page )
+        munmap(xenaccess->vm_event.ring_page, XC_PAGE_SIZE);
 
     if ( mem_access_enable )
     {
         rc = xc_mem_access_disable(xenaccess->xc_handle,
-                                   xenaccess->mem_event.domain_id);
+                                   xenaccess->vm_event.domain_id);
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
@@ -186,8 +186,8 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     /* Unbind VIRQ */
     if ( evtchn_bind )
     {
-        rc = xc_evtchn_unbind(xenaccess->mem_event.xce_handle,
-                              xenaccess->mem_event.port);
+        rc = xc_evtchn_unbind(xenaccess->vm_event.xce_handle,
+                              xenaccess->vm_event.port);
         if ( rc != 0 )
         {
             ERROR("Error unbinding event port");
@@ -197,7 +197,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     /* Close event channel */
     if ( evtchn_open )
     {
-        rc = xc_evtchn_close(xenaccess->mem_event.xce_handle);
+        rc = xc_evtchn_close(xenaccess->vm_event.xce_handle);
         if ( rc != 0 )
         {
             ERROR("Error closing event channel");
@@ -239,17 +239,17 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     xenaccess->xc_handle = xch;
 
     /* Set domain id */
-    xenaccess->mem_event.domain_id = domain_id;
+    xenaccess->vm_event.domain_id = domain_id;
 
     /* Initialise lock */
-    mem_event_ring_lock_init(&xenaccess->mem_event);
+    vm_event_ring_lock_init(&xenaccess->vm_event);
 
     /* Enable mem_access */
-    xenaccess->mem_event.ring_page =
+    xenaccess->vm_event.ring_page =
             xc_mem_access_enable(xenaccess->xc_handle,
-                                 xenaccess->mem_event.domain_id,
-                                 &xenaccess->mem_event.evtchn_port);
-    if ( xenaccess->mem_event.ring_page == NULL )
+                                 xenaccess->vm_event.domain_id,
+                                 &xenaccess->vm_event.evtchn_port);
+    if ( xenaccess->vm_event.ring_page == NULL )
     {
         switch ( errno ) {
             case EBUSY:
@@ -267,8 +267,8 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     mem_access_enable = 1;
 
     /* Open event channel */
-    xenaccess->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( xenaccess->mem_event.xce_handle == NULL )
+    xenaccess->vm_event.xce_handle = xc_evtchn_open(NULL, 0);
+    if ( xenaccess->vm_event.xce_handle == NULL )
     {
         ERROR("Failed to open event channel");
         goto err;
@@ -276,21 +276,21 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     evtchn_open = 1;
 
     /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(xenaccess->mem_event.xce_handle,
-                                    xenaccess->mem_event.domain_id,
-                                    xenaccess->mem_event.evtchn_port);
+    rc = xc_evtchn_bind_interdomain(xenaccess->vm_event.xce_handle,
+                                    xenaccess->vm_event.domain_id,
+                                    xenaccess->vm_event.evtchn_port);
     if ( rc < 0 )
     {
         ERROR("Failed to bind event channel");
         goto err;
     }
     evtchn_bind = 1;
-    xenaccess->mem_event.port = rc;
+    xenaccess->vm_event.port = rc;
 
     /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)xenaccess->mem_event.ring_page);
-    BACK_RING_INIT(&xenaccess->mem_event.back_ring,
-                   (mem_event_sring_t *)xenaccess->mem_event.ring_page,
+    SHARED_RING_INIT((vm_event_sring_t *)xenaccess->vm_event.ring_page);
+    BACK_RING_INIT(&xenaccess->vm_event.back_ring,
+                   (vm_event_sring_t *)xenaccess->vm_event.ring_page,
                    XC_PAGE_SIZE);
 
     /* Get domaininfo */
@@ -320,14 +320,14 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     return NULL;
 }
 
-int get_request(mem_event_t *mem_event, mem_event_request_t *req)
+int get_request(vm_event_t *vm_event, vm_event_request_t *req)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    mem_event_ring_lock(mem_event);
+    vm_event_ring_lock(vm_event);
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
     /* Copy request */
@@ -338,19 +338,19 @@ int get_request(mem_event_t *mem_event, mem_event_request_t *req)
     back_ring->req_cons = req_cons;
     back_ring->sring->req_event = req_cons + 1;
 
-    mem_event_ring_unlock(mem_event);
+    vm_event_ring_unlock(vm_event);
 
     return 0;
 }
 
-static int put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
+static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    mem_event_ring_lock(mem_event);
+    vm_event_ring_lock(vm_event);
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
     /* Copy response */
@@ -361,24 +361,24 @@ static int put_response(mem_event_t *mem_event, mem_event_response_t *rsp)
     back_ring->rsp_prod_pvt = rsp_prod;
     RING_PUSH_RESPONSES(back_ring);
 
-    mem_event_ring_unlock(mem_event);
+    vm_event_ring_unlock(vm_event);
 
     return 0;
 }
 
-static int xenaccess_resume_page(xenaccess_t *paging, mem_event_response_t *rsp)
+static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
 {
     int ret;
 
     /* Put the page info on the ring */
-    ret = put_response(&paging->mem_event, rsp);
+    ret = put_response(&paging->vm_event, rsp);
     if ( ret != 0 )
         goto out;
 
     /* Tell Xen page is ready */
-    ret = xc_mem_access_resume(paging->xc_handle, paging->mem_event.domain_id);
-    ret = xc_evtchn_notify(paging->mem_event.xce_handle,
-                           paging->mem_event.port);
+    ret = xc_mem_access_resume(paging->xc_handle, paging->vm_event.domain_id);
+    ret = xc_evtchn_notify(paging->vm_event.xce_handle,
+                           paging->vm_event.port);
 
  out:
     return ret;
@@ -400,8 +400,8 @@ int main(int argc, char *argv[])
     struct sigaction act;
     domid_t domain_id;
     xenaccess_t *xenaccess;
-    mem_event_request_t req;
-    mem_event_response_t rsp;
+    vm_event_request_t req;
+    vm_event_response_t rsp;
     int rc = -1;
     int rc1;
     xc_interface *xch;
@@ -507,7 +507,7 @@ int main(int argc, char *argv[])
         rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
     if ( rc < 0 )
     {
-        ERROR("Error %d setting int3 mem_event\n", rc);
+        ERROR("Error %d setting int3 vm_event\n", rc);
         goto exit;
     }
 
@@ -527,7 +527,7 @@ int main(int argc, char *argv[])
             shutting_down = 1;
         }
 
-        rc = xc_wait_for_event_or_timeout(xch, xenaccess->mem_event.xce_handle, 100);
+        rc = xc_wait_for_event_or_timeout(xch, xenaccess->vm_event.xce_handle, 100);
         if ( rc < -1 )
         {
             ERROR("Error getting event");
@@ -539,11 +539,11 @@ int main(int argc, char *argv[])
             DPRINTF("Got event from Xen\n");
         }
 
-        while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->mem_event.back_ring) )
+        while ( RING_HAS_UNCONSUMED_REQUESTS(&xenaccess->vm_event.back_ring) )
         {
             xenmem_access_t access;
 
-            rc = get_request(&xenaccess->mem_event, &req);
+            rc = get_request(&xenaccess->vm_event, &req);
             if ( rc != 0 )
             {
                 ERROR("Error getting request");
@@ -551,20 +551,20 @@ int main(int argc, char *argv[])
                 continue;
             }
 
-            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
+            if ( req.version != VM_EVENT_INTERFACE_VERSION )
             {
-                ERROR("Error: mem_event interface version mismatch!\n");
+                ERROR("Error: vm_event interface version mismatch!\n");
                 interrupted = -1;
                 continue;
             }
 
             memset( &rsp, 0, sizeof (rsp) );
-            rsp.version = MEM_EVENT_INTERFACE_VERSION;
+            rsp.version = VM_EVENT_INTERFACE_VERSION;
             rsp.vcpu_id = req.vcpu_id;
             rsp.flags = req.flags;
 
             switch (req.reason) {
-            case MEM_EVENT_REASON_MEM_ACCESS:
+            case VM_EVENT_REASON_MEM_ACCESS:
                 rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
                 if (rc < 0)
                 {
@@ -602,7 +602,7 @@ int main(int argc, char *argv[])
 
                 rsp.u.mem_access.gfn = req.u.mem_access.gfn;
                 break;
-            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+            case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
                 printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
                        req.regs.x86.rip,
                        req.u.software_breakpoint.gfn,
diff --git a/tools/xenpaging/pagein.c b/tools/xenpaging/pagein.c
index b3bcef7..7cb0f33 100644
--- a/tools/xenpaging/pagein.c
+++ b/tools/xenpaging/pagein.c
@@ -63,7 +63,7 @@ void page_in_trigger(void)
 
 void create_page_in_thread(struct xenpaging *paging)
 {
-    page_in_args.dom = paging->mem_event.domain_id;
+    page_in_args.dom = paging->vm_event.domain_id;
     page_in_args.pagein_queue = paging->pagein_queue;
     page_in_args.xch = paging->xc_handle;
     if (pthread_create(&page_in_thread, NULL, page_in, &page_in_args) == 0)
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index c71ee06..9cc6a49 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -63,7 +63,7 @@ static void close_handler(int sig)
 static void xenpaging_mem_paging_flush_ioemu_cache(struct xenpaging *paging)
 {
     struct xs_handle *xsh = paging->xs_handle;
-    domid_t domain_id = paging->mem_event.domain_id;
+    domid_t domain_id = paging->vm_event.domain_id;
     char path[80];
 
     sprintf(path, "/local/domain/0/device-model/%u/command", domain_id);
@@ -74,7 +74,7 @@ static void xenpaging_mem_paging_flush_ioemu_cache(struct xenpaging *paging)
 static int xenpaging_wait_for_event_or_timeout(struct xenpaging *paging)
 {
     xc_interface *xch = paging->xc_handle;
-    xc_evtchn *xce = paging->mem_event.xce_handle;
+    xc_evtchn *xce = paging->vm_event.xce_handle;
     char **vec, *val;
     unsigned int num;
     struct pollfd fd[2];
@@ -111,7 +111,7 @@ static int xenpaging_wait_for_event_or_timeout(struct xenpaging *paging)
             if ( strcmp(vec[XS_WATCH_TOKEN], watch_token) == 0 )
             {
                 /* If our guest disappeared, set interrupt flag and fall through */
-                if ( xs_is_domain_introduced(paging->xs_handle, paging->mem_event.domain_id) == false )
+                if ( xs_is_domain_introduced(paging->xs_handle, paging->vm_event.domain_id) == false )
                 {
                     xs_unwatch(paging->xs_handle, "@releaseDomain", watch_token);
                     interrupted = SIGQUIT;
@@ -171,7 +171,7 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->mem_event.domain_id, 1, &domain_info);
+    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
     if ( rc != 1 )
     {
         PERROR("Error getting domain info");
@@ -231,7 +231,7 @@ static int xenpaging_getopts(struct xenpaging *paging, int argc, char *argv[])
     {
         switch(ch) {
         case 'd':
-            paging->mem_event.domain_id = atoi(optarg);
+            paging->vm_event.domain_id = atoi(optarg);
             break;
         case 'f':
             filename = strdup(optarg);
@@ -264,7 +264,7 @@ static int xenpaging_getopts(struct xenpaging *paging, int argc, char *argv[])
     }
 
     /* Set domain id */
-    if ( !paging->mem_event.domain_id )
+    if ( !paging->vm_event.domain_id )
     {
         printf("Numerical <domain_id> missing!\n");
         return 1;
@@ -312,7 +312,7 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* write domain ID to watch so we can ignore other domain shutdowns */
-    snprintf(watch_token, sizeof(watch_token), "%u", paging->mem_event.domain_id);
+    snprintf(watch_token, sizeof(watch_token), "%u", paging->vm_event.domain_id);
     if ( xs_watch(paging->xs_handle, "@releaseDomain", watch_token) == false )
     {
         PERROR("Could not bind to shutdown watch\n");
@@ -320,7 +320,7 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Watch xenpagings working target */
-    dom_path = xs_get_domain_path(paging->xs_handle, paging->mem_event.domain_id);
+    dom_path = xs_get_domain_path(paging->xs_handle, paging->vm_event.domain_id);
     if ( !dom_path )
     {
         PERROR("Could not find domain path\n");
@@ -339,17 +339,17 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
+    xc_get_hvm_param(xch, paging->vm_event.domain_id, 
                         HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
     mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
+    paging->vm_event.ring_page = 
+        xc_map_foreign_batch(xch, paging->vm_event.domain_id, 
                                 PROT_READ | PROT_WRITE, &mmap_pfn, 1);
     if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
     {
         /* Map failed, populate ring page */
         rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
+                                              paging->vm_event.domain_id,
                                               1, 0, 0, &ring_pfn);
         if ( rc != 0 )
         {
@@ -358,8 +358,8 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         }
 
         mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
+        paging->vm_event.ring_page = 
+            xc_map_foreign_batch(xch, paging->vm_event.domain_id, 
                                     PROT_READ | PROT_WRITE, &mmap_pfn, 1);
         if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
         {
@@ -369,8 +369,8 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
     
     /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
+    rc = xc_mem_paging_enable(xch, paging->vm_event.domain_id,
+                             &paging->vm_event.evtchn_port);
     if ( rc != 0 )
     {
         switch ( errno ) {
@@ -394,40 +394,40 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     }
 
     /* Open event channel */
-    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( paging->mem_event.xce_handle == NULL )
+    paging->vm_event.xce_handle = xc_evtchn_open(NULL, 0);
+    if ( paging->vm_event.xce_handle == NULL )
     {
         PERROR("Failed to open event channel");
         goto err;
     }
 
     /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
-                                    paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
+    rc = xc_evtchn_bind_interdomain(paging->vm_event.xce_handle,
+                                    paging->vm_event.domain_id,
+                                    paging->vm_event.evtchn_port);
     if ( rc < 0 )
     {
         PERROR("Failed to bind event channel");
         goto err;
     }
 
-    paging->mem_event.port = rc;
+    paging->vm_event.port = rc;
 
     /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
+    SHARED_RING_INIT((vm_event_sring_t *)paging->vm_event.ring_page);
+    BACK_RING_INIT(&paging->vm_event.back_ring,
+                   (vm_event_sring_t *)paging->vm_event.ring_page,
                    PAGE_SIZE);
 
     /* Now that the ring is set, remove it from the guest's physmap */
     if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
+                    paging->vm_event.domain_id, 1, 0, &ring_pfn) )
         PERROR("Failed to remove ring from guest physmap");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->mem_event.domain_id, 1,
+        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
                                    &domain_info);
         if ( rc != 1 )
         {
@@ -497,9 +497,9 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
             free(paging->paging_buffer);
         }
 
-        if ( paging->mem_event.ring_page )
+        if ( paging->vm_event.ring_page )
         {
-            munmap(paging->mem_event.ring_page, PAGE_SIZE);
+            munmap(paging->vm_event.ring_page, PAGE_SIZE);
         }
 
         free(dom_path);
@@ -524,28 +524,28 @@ static void xenpaging_teardown(struct xenpaging *paging)
 
     paging->xc_handle = NULL;
     /* Tear down domain paging in Xen */
-    munmap(paging->mem_event.ring_page, PAGE_SIZE);
-    rc = xc_mem_paging_disable(xch, paging->mem_event.domain_id);
+    munmap(paging->vm_event.ring_page, PAGE_SIZE);
+    rc = xc_mem_paging_disable(xch, paging->vm_event.domain_id);
     if ( rc != 0 )
     {
         PERROR("Error tearing down domain paging in xen");
     }
 
     /* Unbind VIRQ */
-    rc = xc_evtchn_unbind(paging->mem_event.xce_handle, paging->mem_event.port);
+    rc = xc_evtchn_unbind(paging->vm_event.xce_handle, paging->vm_event.port);
     if ( rc != 0 )
     {
         PERROR("Error unbinding event port");
     }
-    paging->mem_event.port = -1;
+    paging->vm_event.port = -1;
 
     /* Close event channel */
-    rc = xc_evtchn_close(paging->mem_event.xce_handle);
+    rc = xc_evtchn_close(paging->vm_event.xce_handle);
     if ( rc != 0 )
     {
         PERROR("Error closing event channel");
     }
-    paging->mem_event.xce_handle = NULL;
+    paging->vm_event.xce_handle = NULL;
     
     /* Close connection to xenstore */
     xs_close(paging->xs_handle);
@@ -558,12 +558,12 @@ static void xenpaging_teardown(struct xenpaging *paging)
     }
 }
 
-static void get_request(struct mem_event *mem_event, mem_event_request_t *req)
+static void get_request(struct vm_event *vm_event, vm_event_request_t *req)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
     /* Copy request */
@@ -575,12 +575,12 @@ static void get_request(struct mem_event *mem_event, mem_event_request_t *req)
     back_ring->sring->req_event = req_cons + 1;
 }
 
-static void put_response(struct mem_event *mem_event, mem_event_response_t *rsp)
+static void put_response(struct vm_event *vm_event, vm_event_response_t *rsp)
 {
-    mem_event_back_ring_t *back_ring;
+    vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    back_ring = &mem_event->back_ring;
+    back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
     /* Copy response */
@@ -607,7 +607,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     DECLARE_DOMCTL;
 
     /* Nominate page */
-    ret = xc_mem_paging_nominate(xch, paging->mem_event.domain_id, gfn);
+    ret = xc_mem_paging_nominate(xch, paging->vm_event.domain_id, gfn);
     if ( ret < 0 )
     {
         /* unpageable gfn is indicated by EBUSY */
@@ -619,7 +619,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     }
 
     /* Map page */
-    page = xc_map_foreign_pages(xch, paging->mem_event.domain_id, PROT_READ, &victim, 1);
+    page = xc_map_foreign_pages(xch, paging->vm_event.domain_id, PROT_READ, &victim, 1);
     if ( page == NULL )
     {
         PERROR("Error mapping page %lx", gfn);
@@ -641,7 +641,7 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     munmap(page, PAGE_SIZE);
 
     /* Tell Xen to evict page */
-    ret = xc_mem_paging_evict(xch, paging->mem_event.domain_id, gfn);
+    ret = xc_mem_paging_evict(xch, paging->vm_event.domain_id, gfn);
     if ( ret < 0 )
     {
         /* A gfn in use is indicated by EBUSY */
@@ -671,10 +671,10 @@ static int xenpaging_evict_page(struct xenpaging *paging, unsigned long gfn, int
     return ret;
 }
 
-static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t *rsp, int notify_policy)
+static int xenpaging_resume_page(struct xenpaging *paging, vm_event_response_t *rsp, int notify_policy)
 {
     /* Put the page info on the ring */
-    put_response(&paging->mem_event, rsp);
+    put_response(&paging->vm_event, rsp);
 
     /* Notify policy of page being paged in */
     if ( notify_policy )
@@ -693,7 +693,7 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
     }
 
     /* Tell Xen page is ready */
-    return xc_evtchn_notify(paging->mem_event.xce_handle, paging->mem_event.port);
+    return xc_evtchn_notify(paging->vm_event.xce_handle, paging->vm_event.port);
 }
 
 static int xenpaging_populate_page(struct xenpaging *paging, unsigned long gfn, int i)
@@ -715,7 +715,7 @@ static int xenpaging_populate_page(struct xenpaging *paging, unsigned long gfn,
     do
     {
         /* Tell Xen to allocate a page for the domain */
-        ret = xc_mem_paging_load(xch, paging->mem_event.domain_id, gfn, paging->paging_buffer);
+        ret = xc_mem_paging_load(xch, paging->vm_event.domain_id, gfn, paging->paging_buffer);
         if ( ret < 0 )
         {
             if ( errno == ENOMEM )
@@ -857,8 +857,8 @@ int main(int argc, char *argv[])
 {
     struct sigaction act;
     struct xenpaging *paging;
-    mem_event_request_t req;
-    mem_event_response_t rsp;
+    vm_event_request_t req;
+    vm_event_response_t rsp;
     int num, prev_num = 0;
     int slot;
     int tot_pages;
@@ -875,7 +875,7 @@ int main(int argc, char *argv[])
     xch = paging->xc_handle;
 
     DPRINTF("starting %s for domain_id %u with pagefile %s\n",
-            argv[0], paging->mem_event.domain_id, filename);
+            argv[0], paging->vm_event.domain_id, filename);
 
     /* ensure that if we get a signal, we'll do cleanup, then exit */
     act.sa_handler = close_handler;
@@ -904,12 +904,12 @@ int main(int argc, char *argv[])
             DPRINTF("Got event from Xen\n");
         }
 
-        while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
+        while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->vm_event.back_ring) )
         {
             /* Indicate possible error */
             rc = 1;
 
-            get_request(&paging->mem_event, &req);
+            get_request(&paging->vm_event, &req);
 
             if ( req.u.mem_paging.gfn > paging->max_pages )
             {
@@ -932,7 +932,7 @@ int main(int argc, char *argv[])
                     goto out;
                 }
 
-                if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
+                if ( req.flags & VM_EVENT_FLAG_DROP_PAGE )
                 {
                     DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n",
                             req.u.mem_paging.gfn, slot);
@@ -970,13 +970,13 @@ int main(int argc, char *argv[])
             {
                 DPRINTF("page %s populated (domain = %d; vcpu = %d;"
                         " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
-                        req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
-                        paging->mem_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
-                        !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
-                        !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
+                        req.flags & VM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
+                        paging->vm_event.domain_id, req.vcpu_id, req.u.mem_paging.gfn,
+                        !!(req.flags & VM_EVENT_FLAG_VCPU_PAUSED) ,
+                        !!(req.flags & VM_EVENT_FLAG_EVICT_FAIL) );
 
                 /* Tell Xen to resume the vcpu */
-                if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
+                if (( req.flags & VM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & VM_EVENT_FLAG_EVICT_FAIL ))
                 {
                     /* Prepare the response */
                     rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
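
The pager's half of this protocol reduces to the classic ring-consumer
shape. A minimal sketch using only names visible in this patch:
drain_ring() is a hypothetical helper, error handling is elided, and the
per-response notify that xenpaging_resume_page() performs is batched
into a single kick here:

    static void drain_ring(struct xenpaging *paging)
    {
        vm_event_request_t req;
        vm_event_response_t rsp;

        while ( RING_HAS_UNCONSUMED_REQUESTS(&paging->vm_event.back_ring) )
        {
            /* Pull one request off the shared back ring */
            get_request(&paging->vm_event, &req);

            /* ... page the gfn in or out as requested ... */

            /* Echo vcpu_id/gfn/flags so Xen can unpause the right vcpu */
            rsp.u.mem_paging.gfn = req.u.mem_paging.gfn;
            rsp.vcpu_id = req.vcpu_id;
            rsp.flags = req.flags;
            put_response(&paging->vm_event, &rsp);
        }

        /* One kick for the whole batch */
        xc_evtchn_notify(paging->vm_event.xce_handle, paging->vm_event.port);
    }
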
diff --git a/tools/xenpaging/xenpaging.h b/tools/xenpaging/xenpaging.h
index 877db2f..25d511d 100644
--- a/tools/xenpaging/xenpaging.h
+++ b/tools/xenpaging/xenpaging.h
@@ -27,15 +27,15 @@
 
 #include <xc_private.h>
 #include <xen/event_channel.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 #define XENPAGING_PAGEIN_QUEUE_SIZE 64
 
-struct mem_event {
+struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
     int port;
-    mem_event_back_ring_t back_ring;
+    vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
 };
@@ -51,7 +51,7 @@ struct xenpaging {
 
     void *paging_buffer;
 
-    struct mem_event mem_event;
+    struct vm_event vm_event;
     int fd;
     /* number of pages for which data structures were allocated */
     int max_pages;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index cfe7945..97fa25c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -421,7 +421,7 @@ int vcpu_initialise(struct vcpu *v)
     v->arch.flags = TF_kernel_mode;
 
     /* By default, do not emulate */
-    v->arch.mem_event.emulate_flags = 0;
+    v->arch.vm_event.emulate_flags = 0;
 
     rc = mapcache_vcpu_init(v);
     if ( rc )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index a1c5db0..2a30f50 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,8 +30,8 @@
 #include <xen/hypercall.h> /* for arch_do_domctl */
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
-#include <public/mem_event.h>
+#include <xen/vm_event.h>
+#include <public/vm_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 2ed4344..fa7175a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -407,7 +407,7 @@ static int hvmemul_virtual_to_linear(
      * The chosen maximum is very conservative but it's what we use in
      * hvmemul_linear_to_phys() so there is no point in using a larger value.
      * If introspection has been enabled for this domain, *reps should be
-     * at most 1, since optimization might otherwise cause a single mem_event
+     * at most 1; otherwise the optimization may result in a single vm_event
      * being triggered for repeated writes to a whole page.
      */
     *reps = min_t(unsigned long, *reps,
@@ -1521,7 +1521,7 @@ int hvm_emulate_one_no_write(
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops_no_write);
 }
 
-void hvm_mem_event_emulate_one(bool_t nowrite, unsigned int trapnr,
+void hvm_vm_event_emulate_one(bool_t nowrite, unsigned int trapnr,
     unsigned int errcode)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
@@ -1538,7 +1538,7 @@ void hvm_mem_event_emulate_one(bool_t nowrite, unsigned int trapnr,
     {
     case X86EMUL_RETRY:
         /*
-         * This function is called when handling an EPT-related mem_event
+         * This function is called when handling an EPT-related vm_event
          * reply. As such, nothing else needs to be done here, since simply
          * returning makes the current instruction cause a page fault again,
          * consistent with X86EMUL_RETRY.
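
For context, the renamed helper's call site (from the p2m.c hunk later
in this patch) shows the intended one-shot usage:

    if ( v->arch.vm_event.emulate_flags )
    {
        hvm_vm_event_emulate_one((v->arch.vm_event.emulate_flags &
                                   VM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
                                  TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
        /* One-shot: clear the flags once the instruction was emulated */
        v->arch.vm_event.emulate_flags = 0;
    }
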
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 11a7b2b..fac6cba 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -35,7 +35,7 @@
 #include <xen/paging.h>
 #include <xen/cpu.h>
 #include <xen/wait.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/rangeset.h>
 #include <asm/shadow.h>
@@ -66,7 +66,7 @@
 #include <public/hvm/ioreq.h>
 #include <public/version.h>
 #include <public/memory.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <public/arch-x86/cpuid.h>
 
 bool_t __read_mostly hvm_enabled;
@@ -2772,7 +2772,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct p2m_domain *p2m;
     int rc, fall_through = 0, paged = 0;
     int sharing_enomem = 0;
-    mem_event_request_t *req_ptr = NULL;
+    vm_event_request_t *req_ptr = NULL;
 
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -2842,7 +2842,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     {
         bool_t violation;
 
-        /* If the access is against the permissions, then send to mem_event */
+        /* If the access violates the permissions, then send a vm_event */
         switch (p2ma)
         {
         case p2m_access_n:
@@ -6317,7 +6317,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     return rc;
 }
 
-static void hvm_mem_event_fill_regs(mem_event_request_t *req)
+static void hvm_mem_event_fill_regs(vm_event_request_t *req)
 {
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     const struct vcpu *curr = current;
@@ -6349,7 +6349,7 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
     req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
 }
 
-static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
+static int hvm_memory_event_traps(uint64_t parameters, vm_event_request_t *req)
 {
     int rc;
     struct vcpu *v = current;
@@ -6358,7 +6358,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
     if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    rc = mem_event_claim_slot(d, &d->mem_event->monitor);
+    rc = vm_event_claim_slot(d, &d->vm_event->monitor);
     if ( rc == -ENOSYS )
     {
         /* If there was no ring to handle the event, then
@@ -6370,12 +6370,12 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
 
     if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
     {
-        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
-        mem_event_vcpu_pause(v);
+        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
     }
 
     hvm_mem_event_fill_regs(req);
-    mem_event_put_request(d, &d->mem_event->monitor, req);
+    vm_event_put_request(d, &d->vm_event->monitor, req);
 
     return 1;
 }
@@ -6383,7 +6383,7 @@ static int hvm_memory_event_traps(uint64_t parameters, mem_event_request_t *req)
 static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
                                 unsigned long old)
 {
-    mem_event_request_t req = {
+    vm_event_request_t req = {
         .reason = reason,
         .vcpu_id = current->vcpu_id,
         .u.mov_to_cr.new_value = value,
@@ -6393,15 +6393,15 @@ static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
     uint64_t parameters = 0 ;
     switch(reason)
     {
-    case MEM_EVENT_REASON_MOV_TO_CR0:
+    case VM_EVENT_REASON_MOV_TO_CR0:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR0];
         break;
-    case MEM_EVENT_REASON_MOV_TO_CR3:
+    case VM_EVENT_REASON_MOV_TO_CR3:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR3];
         break;
-    case MEM_EVENT_REASON_MOV_TO_CR4:
+    case VM_EVENT_REASON_MOV_TO_CR4:
         parameters = current->domain->arch.hvm_domain
                       .params[HVM_PARAM_MEMORY_EVENT_CR4];
         break;
@@ -6415,23 +6415,23 @@ static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
 
 void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR0, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
 }
 
 void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR3, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
 }
 
 void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_cr(MEM_EVENT_REASON_MOV_TO_CR4, value, old);
+    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
 }
 
 void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 {
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MOV_TO_MSR,
         .vcpu_id = current->vcpu_id,
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
@@ -6445,8 +6445,8 @@ void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
         .vcpu_id = current->vcpu_id,
         .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
     };
@@ -6459,8 +6459,8 @@ int hvm_memory_event_int3(unsigned long gla)
 int hvm_memory_event_single_step(unsigned long gla)
 {
     uint32_t pfec = PFEC_page_present;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_SINGLESTEP,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SINGLESTEP,
         .vcpu_id = current->vcpu_id,
         .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
     };
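
All of the hvm_memory_event_* helpers above funnel through the same
claim/pause/fill/put sequence. A condensed sketch of that shape:
send_monitor_event() is a hypothetical name, and the HVMPME mode/value
handling done by hvm_memory_event_traps() is elided:

    static int send_monitor_event(struct vcpu *v, vm_event_request_t *req,
                                  bool_t sync)
    {
        struct domain *d = v->domain;

        /* Reserve a ring slot first; without one the event is dropped */
        if ( vm_event_claim_slot(d, &d->vm_event->monitor) < 0 )
            return 0;

        if ( sync )
        {
            /* The listener must reply before this vcpu runs again */
            req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
            vm_event_vcpu_pause(v);
        }

        hvm_mem_event_fill_regs(req);
        vm_event_put_request(d, &d->vm_event->monitor, req);
        return 1;
    }
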
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e0a33e3..63007a9 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -25,7 +25,7 @@
 #include <xen/event.h>
 #include <xen/kernel.h>
 #include <xen/keyhandler.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/current.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
@@ -715,7 +715,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
         return;
 
     if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
-         mem_event_check_ring(&d->mem_event->monitor) )
+         vm_event_check_ring(&d->vm_event->monitor) )
     {
         unsigned int i;
 
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index cbbc4e9..40adac3 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -17,9 +17,9 @@
  * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
  * Place - Suite 330, Boston, MA 02111-1307 USA.
  */
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 9c1ec11..cb28943 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -19,9 +19,9 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index e3d64a6..68b7fcc 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,12 +22,12 @@
 
 
 #include <asm/p2m.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 
 
 int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpc)
 {
-    if ( unlikely(!d->mem_event->paging.ring_page) )
+    if ( unlikely(!d->vm_event->paging.ring_page) )
         return -ENODEV;
 
     switch( mpc->op )
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e722655..9d796e7 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -28,7 +28,7 @@
 #include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/rcupdate.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
@@ -559,24 +559,24 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_SHARING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_SHARING,
         .vcpu_id = v->vcpu_id,
         .u.mem_sharing.gfn = gfn,
         .u.mem_sharing.p2mt = p2m_ram_shared
     };
 
-    if ( (rc = __mem_event_claim_slot(d, 
-                        &d->mem_event->share, allow_sleep)) < 0 )
+    if ( (rc = __vm_event_claim_slot(d, 
+                        &d->vm_event->share, allow_sleep)) < 0 )
         return rc;
 
     if ( v->domain == d )
     {
-        req.flags = MEM_EVENT_FLAG_VCPU_PAUSED;
-        mem_event_vcpu_pause(v);
+        req.flags = VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
     }
 
-    mem_event_put_request(d, &d->mem_event->share, &req);
+    vm_event_put_request(d, &d->vm_event->share, &req);
 
     return 0;
 }
@@ -593,20 +593,20 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
 
 int mem_sharing_sharing_resume(struct domain *d)
 {
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
 
     /* Get all requests off the ring */
-    while ( mem_event_get_response(d, &d->mem_event->share, &rsp) )
+    while ( vm_event_get_response(d, &d->vm_event->share, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
+            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!\n");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -616,8 +616,8 @@ int mem_sharing_sharing_resume(struct domain *d)
         v = d->vcpu[rsp.vcpu_id];
 
         /* Unpause domain/vcpu */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 
     return 0;
@@ -1144,7 +1144,7 @@ err_out:
 
 /* A note on the rationale for unshare error handling:
  *  1. Unshare can only fail with ENOMEM. Any other error conditions BUG_ON()'s
- *  2. We notify a potential dom0 helper through a mem_event ring. But we
+ *  2. We notify a potential dom0 helper through a vm_event ring. But we
  *     allow the notification to not go to sleep. If the event ring is full 
  *     of ENOMEM warnings, then it's on the ball.
  *  3. We cannot go to sleep until the unshare is resolved, because we might
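
The drain loop above is the same shape p2m_mem_paging_resume() and
mem_access_resume() use further down: pull responses until the ring is
empty, skip replies with a mismatched interface version or a bogus
vcpu_id, do the ring-specific fixup, and unpause whatever the request
had paused. Schematically (ved stands in for any of the three rings):

    vm_event_response_t rsp;

    while ( vm_event_get_response(d, ved, &rsp) )
    {
        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
            continue;               /* incompatible or stale producer */
        if ( rsp.vcpu_id >= d->max_vcpus || !d->vcpu[rsp.vcpu_id] )
            continue;               /* invalid vcpu_id in the response */

        /* ... ring-specific handling (p2m fixup, emulate check, ...) ... */

        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
            vm_event_vcpu_unpause(d->vcpu[rsp.vcpu_id]);
    }
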
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 43f507c..0679f00 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -21,9 +21,9 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 26fb18d..e50b6fa 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -26,10 +26,10 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
 #include <xen/trace.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index feec99f..db332ef 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -25,9 +25,9 @@
  */
 
 #include <xen/iommu.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <asm/domain.h>
 #include <asm/page.h>
 #include <asm/paging.h>
@@ -1079,8 +1079,8 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
                                 p2m_type_t p2mt)
 {
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_PAGING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_PAGING,
         .u.mem_paging.gfn = gfn
     };
 
@@ -1088,21 +1088,21 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
      * correctness of the guest execution at this point.  If this is the only
      * page that happens to be paged-out, we'll be okay..  but it's likely the
      * guest will crash shortly anyways. */
-    int rc = mem_event_claim_slot(d, &d->mem_event->paging);
+    int rc = vm_event_claim_slot(d, &d->vm_event->paging);
     if ( rc < 0 )
         return;
 
     /* Send release notification to pager */
-    req.flags = MEM_EVENT_FLAG_DROP_PAGE;
+    req.flags = VM_EVENT_FLAG_DROP_PAGE;
 
     /* Update stats unless the page hasn't yet been evicted */
     if ( p2mt != p2m_ram_paging_out )
         atomic_dec(&d->paged_pages);
     else
         /* Evict will fail now, tag this request for pager */
-        req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
+        req.flags |= VM_EVENT_FLAG_EVICT_FAIL;
 
-    mem_event_put_request(d, &d->mem_event->paging, &req);
+    vm_event_put_request(d, &d->vm_event->paging, &req);
 }
 
 /**
@@ -1129,8 +1129,8 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
 void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
-    mem_event_request_t req = {
-        .reason = MEM_EVENT_REASON_MEM_PAGING,
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MEM_PAGING,
         .u.mem_paging.gfn = gfn
     };
     p2m_type_t p2mt;
@@ -1139,7 +1139,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* We're paging. There should be a ring */
-    int rc = mem_event_claim_slot(d, &d->mem_event->paging);
+    int rc = vm_event_claim_slot(d, &d->vm_event->paging);
     if ( rc == -ENOSYS )
     {
         gdprintk(XENLOG_ERR, "Domain %hu paging gfn %lx yet no ring "
@@ -1161,7 +1161,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     {
         /* Evict will fail now, tag this request for pager */
         if ( p2mt == p2m_ram_paging_out )
-            req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
+            req.flags |= VM_EVENT_FLAG_EVICT_FAIL;
 
         p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
     }
@@ -1170,14 +1170,14 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     /* Pause domain if request came from guest and gfn has paging type */
     if ( p2m_is_paging(p2mt) && v->domain == d )
     {
-        mem_event_vcpu_pause(v);
-        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(v);
+        req.flags |= VM_EVENT_FLAG_VCPU_PAUSED;
     }
     /* No need to inform pager if the gfn is not in the page-out path */
     else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
     {
         /* gfn is already on its way back and vcpu is not paused */
-        mem_event_cancel_slot(d, &d->mem_event->paging);
+        vm_event_cancel_slot(d, &d->vm_event->paging);
         return;
     }
 
@@ -1185,7 +1185,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     req.u.mem_paging.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &d->mem_event->paging, &req);
+    vm_event_put_request(d, &d->vm_event->paging, &req);
 }
 
 /**
@@ -1294,23 +1294,23 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 void p2m_mem_paging_resume(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
 
     /* Pull all responses off the ring */
-    while( mem_event_get_response(d, &d->mem_event->paging, &rsp) )
+    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
+            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!\n");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -1320,7 +1320,7 @@ void p2m_mem_paging_resume(struct domain *d)
         v = d->vcpu[rsp.vcpu_id];
 
         /* Fix p2m entry if the page was not dropped */
-        if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
+        if ( !(rsp.flags & VM_EVENT_FLAG_DROP_PAGE) )
         {
             uint64_t gfn = rsp.u.mem_access.gfn;
             gfn_lock(p2m, gfn, 0);
@@ -1337,12 +1337,12 @@ void p2m_mem_paging_resume(struct domain *d)
             gfn_unlock(p2m, gfn, 0);
         }
         /* Unpause domain */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 }
 
-static void p2m_mem_event_fill_regs(mem_event_request_t *req)
+static void p2m_vm_event_fill_regs(vm_event_request_t *req)
 {
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     struct segment_register seg;
@@ -1397,15 +1397,14 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
     req->regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp)
+void p2m_vm_event_emulate_check(struct vcpu *v, const vm_event_response_t *rsp)
 {
     /* Mark vcpu for skipping one instruction upon rescheduling. */
-    if ( rsp->flags & MEM_EVENT_FLAG_EMULATE )
+    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
     {
         xenmem_access_t access;
         bool_t violation = 1;
-        const struct mem_event_mem_access_data *data = &rsp->u.mem_access;
+        const struct vm_event_mem_access_data *data = &rsp->u.mem_access;
 
         if ( p2m_get_mem_access(v->domain, data->gfn, &access) == 0 )
         {
@@ -1448,7 +1447,7 @@ void p2m_mem_event_emulate_check(struct vcpu *v,
             }
         }
 
-        v->arch.mem_event.emulate_flags = violation ? rsp->flags : 0;
+        v->arch.vm_event.emulate_flags = violation ? rsp->flags : 0;
     }
 }
 
@@ -1463,7 +1462,7 @@ void p2m_setup_introspection(struct domain *d)
 
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
-                            mem_event_request_t **req_ptr)
+                            vm_event_request_t **req_ptr)
 {
     struct vcpu *v = current;
     unsigned long gfn = gpa >> PAGE_SHIFT;
@@ -1472,7 +1471,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     mfn_t mfn;
     p2m_type_t p2mt;
     p2m_access_t p2ma;
-    mem_event_request_t *req;
+    vm_event_request_t *req;
     int rc;
     unsigned long eip = guest_cpu_user_regs()->eip;
 
@@ -1499,13 +1498,13 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     gfn_unlock(p2m, gfn, 0);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    if ( !mem_event_check_ring(&d->mem_event->monitor) || !req_ptr ) 
+    if ( !vm_event_check_ring(&d->vm_event->monitor) || !req_ptr ) 
     {
         /* No listener */
         if ( p2m->access_required ) 
         {
             gdprintk(XENLOG_INFO, "Memory access permissions failure, "
-                                  "no mem_event listener VCPU %d, dom %d\n",
+                                  "no vm_event listener VCPU %d, dom %d\n",
                                   v->vcpu_id, d->domain_id);
             domain_crash(v->domain);
             return 0;
@@ -1528,40 +1527,40 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         }
     }
 
-    /* The previous mem_event reply does not match the current state. */
-    if ( v->arch.mem_event.gpa != gpa || v->arch.mem_event.eip != eip )
+    /* The previous vm_event reply does not match the current state. */
+    if ( v->arch.vm_event.gpa != gpa || v->arch.vm_event.eip != eip )
     {
-        /* Don't emulate the current instruction, send a new mem_event. */
-        v->arch.mem_event.emulate_flags = 0;
+        /* Don't emulate the current instruction, send a new vm_event. */
+        v->arch.vm_event.emulate_flags = 0;
 
         /*
          * Make sure to mark the current state to match it again against
-         * the new mem_event about to be sent.
+         * the new vm_event about to be sent.
          */
-        v->arch.mem_event.gpa = gpa;
-        v->arch.mem_event.eip = eip;
+        v->arch.vm_event.gpa = gpa;
+        v->arch.vm_event.eip = eip;
     }
 
-    if ( v->arch.mem_event.emulate_flags )
+    if ( v->arch.vm_event.emulate_flags )
     {
-        hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
-                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
+        hvm_vm_event_emulate_one((v->arch.vm_event.emulate_flags &
+                                   VM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
                                   TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE);
 
-        v->arch.mem_event.emulate_flags = 0;
+        v->arch.vm_event.emulate_flags = 0;
         return 1;
     }
 
     *req_ptr = NULL;
-    req = xzalloc(mem_event_request_t);
+    req = xzalloc(vm_event_request_t);
     if ( req )
     {
         *req_ptr = req;
-        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
+        req->reason = VM_EVENT_REASON_MEM_ACCESS;
 
         /* Pause the current VCPU */
         if ( p2ma != p2m_access_n2rwx )
-            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+            req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
 
         /* Send request to mem event */
         req->u.mem_access.gfn = gfn;
@@ -1577,12 +1576,12 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         req->u.mem_access.access_x = npfec.insn_fetch;
         req->vcpu_id = v->vcpu_id;
 
-        p2m_mem_event_fill_regs(req);
+        p2m_vm_event_fill_regs(req);
     }
 
     /* Pause the current VCPU */
     if ( p2ma != p2m_access_n2rwx )
-        mem_event_vcpu_pause(v);
+        vm_event_vcpu_pause(v);
 
     /* VCPU may be paused, return whether we promoted automatically */
     return (p2ma == p2m_access_n2rwx);
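
Viewed from the listener's side, the emulate path above is driven purely
by the response flags. A hypothetical dom0 reply to a mem_access request
(field names as introduced by this series; carrying the request's flags
back in the response, including VM_EVENT_FLAG_VCPU_PAUSED, is assumed):

    /* Ask Xen to emulate the faulting instruction with writes discarded,
     * rather than relaxing the p2m permissions. */
    vm_event_response_t rsp = {
        .version = VM_EVENT_INTERFACE_VERSION,
        .vcpu_id = req.vcpu_id,
        .flags   = req.flags | VM_EVENT_FLAG_EMULATE
                             | VM_EVENT_FLAG_EMULATE_NOWRITE,
        .u.mem_access.gfn = req.u.mem_access.gfn,
    };
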
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 96cec31..85f138b 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,5 +1,5 @@
 #include <xen/event.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
@@ -192,7 +192,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
         if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
@@ -206,7 +206,7 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit(); 
-        rc = do_mem_event_op(cmd, mso.domain, &mso);
+        rc = do_vm_event_op(cmd, mso.domain, &mso);
         if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 2fa1f67..1e2bd1a 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,7 +26,7 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
@@ -988,7 +988,7 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         xen_mem_paging_op_t mpo;
         if ( copy_from_guest(&mpo, arg, 1) )
             return -EFAULT;
-        rc = do_mem_event_op(cmd, mpo.domain, &mpo);
+        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
         if ( !rc && __copy_to_guest(arg, &mpo, 1) )
             return -EFAULT;
         break;
@@ -1001,7 +1001,7 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -EFAULT;
         if ( mso.op == XENMEM_sharing_op_audit )
             return mem_sharing_audit(); 
-        rc = do_mem_event_op(cmd, mso.domain, &mso);
+        rc = do_vm_event_op(cmd, mso.domain, &mso);
         if ( !rc && __copy_to_guest(arg, &mso, 1) )
             return -EFAULT;
         break;
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 1956091..e5bd75b 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -54,7 +54,7 @@ obj-y += rbtree.o
 obj-y += lzo.o
 obj-$(HAS_PDX) += pdx.o
 obj-$(HAS_MEM_ACCESS) += mem_access.o
-obj-$(HAS_MEM_ACCESS) += mem_event.o
+obj-$(HAS_MEM_ACCESS) += vm_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0b05681..60bf00f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,7 +15,7 @@
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/time.h>
 #include <xen/console.h>
 #include <xen/softirq.h>
@@ -344,8 +344,8 @@ struct domain *domain_create(
         poolid = 0;
 
         err = -ENOMEM;
-        d->mem_event = xzalloc(struct mem_event_per_domain);
-        if ( !d->mem_event )
+        d->vm_event = xzalloc(struct vm_event_per_domain);
+        if ( !d->vm_event )
             goto fail;
 
         d->pbuf = xzalloc_array(char, DOMAIN_PBUF_SIZE);
@@ -387,7 +387,7 @@ struct domain *domain_create(
     if ( hardware_domain == d )
         hardware_domain = old_hwdom;
     atomic_set(&d->refcnt, DOMAIN_DESTROYED);
-    xfree(d->mem_event);
+    xfree(d->vm_event);
     xfree(d->pbuf);
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
@@ -629,7 +629,7 @@ int domain_kill(struct domain *d)
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
-        mem_event_cleanup(d);
+        vm_event_cleanup(d);
         put_domain(d);
         send_global_virq(VIRQ_DOM_EXC);
         /* fallthrough */
@@ -808,7 +808,7 @@ static void complete_domain_destroy(struct rcu_head *head)
     free_xenoprof_pages(d);
 #endif
 
-    xfree(d->mem_event);
+    xfree(d->vm_event);
     xfree(d->pbuf);
 
     for ( i = d->max_vcpus - 1; i >= 0; i-- )
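
Taken together, the three hunks in this file cover the whole lifetime of
the renamed structure:

    /*
     * domain_create()           -> d->vm_event = xzalloc(...)
     * domain_kill()             -> vm_event_cleanup(d), rings put first
     * complete_domain_destroy() -> xfree(d->vm_event)
     */
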
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 33ecd45..85afd68 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -24,7 +24,7 @@
 #include <xen/bitmap.h>
 #include <xen/paging.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -1114,9 +1114,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
-    case XEN_DOMCTL_mem_event_op:
-        ret = mem_event_domctl(d, &op->u.mem_event_op,
-                               guest_handle_cast(u_domctl, void));
+    case XEN_DOMCTL_vm_event_op:
+        ret = vm_event_domctl(d, &op->u.vm_event_op,
+                              guest_handle_cast(u_domctl, void));
         copyback = 1;
         break;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 3a650ad..f77f134 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -24,27 +24,27 @@
 #include <xen/sched.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <public/memory.h>
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
 void mem_access_resume(struct domain *d)
 {
-    mem_event_response_t rsp;
+    vm_event_response_t rsp;
 
     /* Pull all responses off the ring. */
-    while ( mem_event_get_response(d, &d->mem_event->monitor, &rsp) )
+    while ( vm_event_get_response(d, &d->vm_event->monitor, &rsp) )
     {
         struct vcpu *v;
 
-        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
         {
-            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!");
+            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!");
             continue;
         }
 
-        if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
 
         /* Validate the vcpu_id in the response. */
@@ -53,11 +53,11 @@ void mem_access_resume(struct domain *d)
 
         v = d->vcpu[rsp.vcpu_id];
 
-        p2m_mem_event_emulate_check(v, &rsp);
+        p2m_vm_event_emulate_check(v, &rsp);
 
         /* Unpause domain. */
-        if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
-            mem_event_vcpu_unpause(v);
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
     }
 }
 
@@ -80,12 +80,12 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
     if ( rc )
         goto out;
 
     rc = -ENODEV;
-    if ( unlikely(!d->mem_event->monitor.ring_page) )
+    if ( unlikely(!d->vm_event->monitor.ring_page) )
         goto out;
 
     switch ( mao.op )
@@ -150,13 +150,13 @@ int mem_access_memop(unsigned long cmd,
     return rc;
 }
 
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+int mem_access_send_req(struct domain *d, vm_event_request_t *req)
 {
-    int rc = mem_event_claim_slot(d, &d->mem_event->monitor);
+    int rc = vm_event_claim_slot(d, &d->vm_event->monitor);
     if ( rc < 0 )
         return rc;
 
-    mem_event_put_request(d, &d->mem_event->monitor, req);
+    vm_event_put_request(d, &d->vm_event->monitor, req);
 
     return 0;
 }
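
mem_access_send_req() above is the canonical claim/put pairing: because
the slot is reserved up front, vm_event_put_request() is guaranteed to
succeed. A caller-side sketch (the request is copied into the ring, so a
stack-allocated one is fine; error values per vm_event_grab_slot() below):

    vm_event_request_t req = {
        .reason  = VM_EVENT_REASON_MEM_ACCESS,
        .vcpu_id = current->vcpu_id,
    };
    int rc = mem_access_send_req(d, &req);

    if ( rc == -ENOSYS )
        ; /* no monitor ring set up for this domain */
    else if ( rc == -EBUSY )
        ; /* ring full for a foreign producer: retry or drop */
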
diff --git a/xen/common/mem_event.c b/xen/common/vm_event.c
similarity index 59%
rename from xen/common/mem_event.c
rename to xen/common/vm_event.c
index 3ed6abc..57ef58c 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/vm_event.c
@@ -1,7 +1,7 @@
 /******************************************************************************
- * mem_event.c
+ * vm_event.c
  *
- * Memory event support.
+ * VM event support.
  *
  * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
  *
@@ -24,7 +24,7 @@
 #include <xen/sched.h>
 #include <xen/event.h>
 #include <xen/wait.h>
-#include <xen/mem_event.h>
+#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
 
@@ -43,14 +43,14 @@
 #define xen_rmb()  rmb()
 #define xen_wmb()  wmb()
 
-#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
+#define vm_event_ring_lock_init(_ved)  spin_lock_init(&(_ved)->ring_lock)
+#define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
+#define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)
 
-static int mem_event_enable(
+static int vm_event_enable(
     struct domain *d,
-    xen_domctl_mem_event_op_t *mec,
-    struct mem_event_domain *med,
+    xen_domctl_vm_event_op_t *vec,
+    struct vm_event_domain *ved,
     int pause_flag,
     int param,
     xen_event_channel_notification_t notification_fn)
@@ -61,7 +61,7 @@ static int mem_event_enable(
     /* Only one helper at a time. If the helper crashed,
      * the ring is in an undefined state and so is the guest.
      */
-    if ( med->ring_page )
+    if ( ved->ring_page )
         return -EBUSY;
 
     /* The parameter defaults to zero, and it should be
@@ -69,16 +69,16 @@ static int mem_event_enable(
     if ( ring_gfn == 0 )
         return -ENOSYS;
 
-    mem_event_ring_lock_init(med);
-    mem_event_ring_lock(med);
+    vm_event_ring_lock_init(ved);
+    vm_event_ring_lock(ved);
 
-    rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
-                                    &med->ring_page);
+    rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct,
+                                    &ved->ring_page);
     if ( rc < 0 )
         goto err;
 
     /* Set the number of currently blocked vCPUs to 0. */
-    med->blocked = 0;
+    ved->blocked = 0;
 
     /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d->vcpu[0],
@@ -87,35 +87,35 @@ static int mem_event_enable(
     if ( rc < 0 )
         goto err;
 
-    med->xen_port = mec->port = rc;
+    ved->xen_port = vec->port = rc;
 
     /* Prepare ring buffer */
-    FRONT_RING_INIT(&med->front_ring,
-                    (mem_event_sring_t *)med->ring_page,
+    FRONT_RING_INIT(&ved->front_ring,
+                    (vm_event_sring_t *)ved->ring_page,
                     PAGE_SIZE);
 
     /* Save the pause flag for this particular ring. */
-    med->pause_flag = pause_flag;
+    ved->pause_flag = pause_flag;
 
     /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&med->wq);
+    init_waitqueue_head(&ved->wq);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
     return 0;
 
  err:
-    destroy_ring_for_helper(&med->ring_page,
-                            med->ring_pg_struct);
-    mem_event_ring_unlock(med);
+    destroy_ring_for_helper(&ved->ring_page,
+                            ved->ring_pg_struct);
+    vm_event_ring_unlock(ved);
 
     return rc;
 }
 
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 {
-    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
-    avail_req -= med->target_producers;
-    avail_req -= med->foreign_producers;
+    int avail_req = RING_FREE_REQUESTS(&ved->front_ring);
+    avail_req -= ved->target_producers;
+    avail_req -= ved->foreign_producers;
 
     BUG_ON(avail_req < 0);
 
@@ -123,18 +123,18 @@ static unsigned int mem_event_ring_available(struct mem_event_domain *med)
 }
 
 /*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
+ * vm_event_wake_blocked() will wake up vcpus waiting for room in the
  * ring. These vCPUs were paused on their way out after placing an event,
  * but need to be resumed where the ring is capable of processing at least
  * one event from them.
  */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved)
 {
     struct vcpu *v;
     int online = d->max_vcpus;
-    unsigned int avail_req = mem_event_ring_available(med);
+    unsigned int avail_req = vm_event_ring_available(ved);
 
-    if ( avail_req == 0 || med->blocked == 0 )
+    if ( avail_req == 0 || ved->blocked == 0 )
         return;
 
     /*
@@ -143,13 +143,13 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
      * memory events are lost (due to the fact that certain types of events
      * cannot be replayed, we need to ensure that there is space in the ring
      * for when they are hit).
-     * See comment below in mem_event_put_request().
+     * See comment below in vm_event_put_request().
      */
     for_each_vcpu ( d, v )
-        if ( test_bit(med->pause_flag, &v->pause_flags) )
+        if ( test_bit(ved->pause_flag, &v->pause_flags) )
             online--;
 
-    ASSERT(online == (d->max_vcpus - med->blocked));
+    ASSERT(online == (d->max_vcpus - ved->blocked));
 
     /* We remember which vcpu last woke up to avoid scanning always linearly
      * from zero and starving higher-numbered vcpus under high load */
@@ -157,22 +157,22 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
     {
         int i, j, k;
 
-        for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+        for (i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
         {
             k = i % d->max_vcpus;
             v = d->vcpu[k];
             if ( !v )
                 continue;
 
-            if ( !(med->blocked) || online >= avail_req )
+            if ( !(ved->blocked) || online >= avail_req )
                break;
 
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
                 online++;
-                med->blocked--;
-                med->last_vcpu_wake_up = k;
+                ved->blocked--;
+                ved->last_vcpu_wake_up = k;
             }
         }
     }
@@ -183,87 +183,87 @@ static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *me
  * was unable to do so, it is queued on a wait queue.  These are woken as
  * needed, and take precedence over the blocked vCPUs.
  */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+static void vm_event_wake_queued(struct domain *d, struct vm_event_domain *ved)
 {
-    unsigned int avail_req = mem_event_ring_available(med);
+    unsigned int avail_req = vm_event_ring_available(ved);
 
     if ( avail_req > 0 )
-        wake_up_nr(&med->wq, avail_req);
+        wake_up_nr(&ved->wq, avail_req);
 }
 
 /*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
+ * vm_event_wake() will wake up all vcpus waiting for the ring to
  * become available.  If we have queued vCPUs, they get top priority. We
  * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * call vm_event_wake() again, ensuring that any blocked vCPUs will get
  * unpaused once all the queued vCPUs have made it through.
  */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
 {
-    if (!list_empty(&med->wq.list))
-        mem_event_wake_queued(d, med);
+    if (!list_empty(&ved->wq.list))
+        vm_event_wake_queued(d, ved);
     else
-        mem_event_wake_blocked(d, med);
+        vm_event_wake_blocked(d, ved);
 }
 
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+static int vm_event_disable(struct domain *d, struct vm_event_domain *ved)
 {
-    if ( med->ring_page )
+    if ( ved->ring_page )
     {
         struct vcpu *v;
 
-        mem_event_ring_lock(med);
+        vm_event_ring_lock(ved);
 
-        if ( !list_empty(&med->wq.list) )
+        if ( !list_empty(&ved->wq.list) )
         {
-            mem_event_ring_unlock(med);
+            vm_event_ring_unlock(ved);
             return -EBUSY;
         }
 
         /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d->vcpu[0], med->xen_port);
+        free_xen_event_channel(d->vcpu[0], ved->xen_port);
 
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
-            if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
-                med->blocked--;
+                ved->blocked--;
             }
         }
 
-        destroy_ring_for_helper(&med->ring_page,
-                                med->ring_pg_struct);
-        mem_event_ring_unlock(med);
+        destroy_ring_for_helper(&ved->ring_page,
+                                ved->ring_pg_struct);
+        vm_event_ring_unlock(ved);
     }
 
     return 0;
 }
 
-static inline void mem_event_release_slot(struct domain *d,
-                                          struct mem_event_domain *med)
+static inline void vm_event_release_slot(struct domain *d,
+                                         struct vm_event_domain *ved)
 {
     /* Update the accounting */
     if ( current->domain == d )
-        med->target_producers--;
+        ved->target_producers--;
     else
-        med->foreign_producers--;
+        ved->foreign_producers--;
 
     /* Kick any waiters */
-    mem_event_wake(d, med);
+    vm_event_wake(d, ved);
 }
 
 /*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
+ * vm_event_mark_and_pause() tags the vcpu and puts it to sleep.
+ * The vcpu will resume execution in vm_event_wake_blocked().
  */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+void vm_event_mark_and_pause(struct vcpu *v, struct vm_event_domain *ved)
 {
-    if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+    if ( !test_and_set_bit(ved->pause_flag, &v->pause_flags) )
     {
         vcpu_pause_nosync(v);
-        med->blocked++;
+        ved->blocked++;
     }
 }
 
@@ -273,31 +273,31 @@ void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
  * overly full and its continued execution would cause stalling and excessive
  * waiting.  The vCPU will be automatically unpaused when the ring clears.
  */
-void mem_event_put_request(struct domain *d,
-                           struct mem_event_domain *med,
-                           mem_event_request_t *req)
+void vm_event_put_request(struct domain *d,
+                          struct vm_event_domain *ved,
+                          vm_event_request_t *req)
 {
-    mem_event_front_ring_t *front_ring;
+    vm_event_front_ring_t *front_ring;
     int free_req;
     unsigned int avail_req;
     RING_IDX req_prod;
 
     if ( current->domain != d )
     {
-        req->flags |= MEM_EVENT_FLAG_FOREIGN;
+        req->flags |= VM_EVENT_FLAG_FOREIGN;
 #ifndef NDEBUG
-        if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
+        if ( !(req->flags & VM_EVENT_FLAG_VCPU_PAUSED) )
             gdprintk(XENLOG_G_WARNING, "d%dv%d was not paused.\n",
                      d->domain_id, req->vcpu_id);
 #endif
     }
 
-    req->version = MEM_EVENT_INTERFACE_VERSION;
+    req->version = VM_EVENT_INTERFACE_VERSION;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
     /* Due to the reservations, this step must succeed. */
-    front_ring = &med->front_ring;
+    front_ring = &ved->front_ring;
     free_req = RING_FREE_REQUESTS(front_ring);
     ASSERT(free_req > 0);
 
@@ -311,33 +311,33 @@ void mem_event_put_request(struct domain *d,
     RING_PUSH_REQUESTS(front_ring);
 
     /* We've actually *used* our reservation, so release the slot. */
-    mem_event_release_slot(d, med);
+    vm_event_release_slot(d, ved);
 
     /* Give this vCPU a black eye if necessary, on the way out.
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
-    avail_req = mem_event_ring_available(med);
+    avail_req = vm_event_ring_available(ved);
     if( current->domain == d && avail_req < d->max_vcpus )
-        mem_event_mark_and_pause(current, med);
+        vm_event_mark_and_pause(current, ved);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
-    notify_via_xen_event_channel(d, med->xen_port);
+    notify_via_xen_event_channel(d, ved->xen_port);
 }
 
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_event_response_t *rsp)
 {
-    mem_event_front_ring_t *front_ring;
+    vm_event_front_ring_t *front_ring;
     RING_IDX rsp_cons;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
-    front_ring = &med->front_ring;
+    front_ring = &ved->front_ring;
     rsp_cons = front_ring->rsp_cons;
 
     if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
     {
-        mem_event_ring_unlock(med);
+        vm_event_ring_unlock(ved);
         return 0;
     }
 
@@ -351,70 +351,70 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_e
 
     /* Kick any waiters -- since we've just consumed an event,
      * there may be additional space available in the ring. */
-    mem_event_wake(d, med);
+    vm_event_wake(d, ved);
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
     return 1;
 }
 
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
 {
-    mem_event_ring_lock(med);
-    mem_event_release_slot(d, med);
-    mem_event_ring_unlock(med);
+    vm_event_ring_lock(ved);
+    vm_event_release_slot(d, ved);
+    vm_event_ring_unlock(ved);
 }
 
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+static int vm_event_grab_slot(struct vm_event_domain *ved, int foreign)
 {
     unsigned int avail_req;
 
-    if ( !med->ring_page )
+    if ( !ved->ring_page )
         return -ENOSYS;
 
-    mem_event_ring_lock(med);
+    vm_event_ring_lock(ved);
 
-    avail_req = mem_event_ring_available(med);
+    avail_req = vm_event_ring_available(ved);
     if ( avail_req == 0 )
     {
-        mem_event_ring_unlock(med);
+        vm_event_ring_unlock(ved);
         return -EBUSY;
     }
 
     if ( !foreign )
-        med->target_producers++;
+        ved->target_producers++;
     else
-        med->foreign_producers++;
+        ved->foreign_producers++;
 
-    mem_event_ring_unlock(med);
+    vm_event_ring_unlock(ved);
 
     return 0;
 }
 
 /* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+static int vm_event_wait_try_grab(struct vm_event_domain *ved, int *rc)
 {
-    *rc = mem_event_grab_slot(med, 0);
+    *rc = vm_event_grab_slot(ved, 0);
     return *rc;
 }
 
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
+/* Call vm_event_grab_slot() until the ring doesn't exist, or is available. */
+static int vm_event_wait_slot(struct vm_event_domain *ved)
 {
     int rc = -EBUSY;
-    wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+    wait_event(ved->wq, vm_event_wait_try_grab(ved, &rc) != -EBUSY);
     return rc;
 }
 
-bool_t mem_event_check_ring(struct mem_event_domain *med)
+bool_t vm_event_check_ring(struct vm_event_domain *ved)
 {
-    return (med->ring_page != NULL);
+    return (ved->ring_page != NULL);
 }
 
 /*
  * Determines whether or not the current vCPU belongs to the target domain,
  * and calls the appropriate wait function.  If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot.  As long as there is a ring,
+ * use vm_event_wait_slot() to reserve a slot.  As long as there is a ring,
  * this function will always return 0 for a guest.  For a non-guest, we check
  * for space and return -EBUSY if the ring is not available.
  *
@@ -423,20 +423,20 @@ bool_t mem_event_check_ring(struct mem_event_domain *med)
  *               0: a spot has been reserved
  *
  */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
-                            bool_t allow_sleep)
+int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
+                          bool_t allow_sleep)
 {
     if ( (current->domain == d) && allow_sleep )
-        return mem_event_wait_slot(med);
+        return vm_event_wait_slot(ved);
     else
-        return mem_event_grab_slot(med, (current->domain != d));
+        return vm_event_grab_slot(ved, (current->domain != d));
 }
 
 #ifdef HAS_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->paging.ring_page != NULL) )
         p2m_mem_paging_resume(v->domain);
 }
 #endif
@@ -445,7 +445,7 @@ static void mem_paging_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_access_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->monitor.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->monitor.ring_page != NULL) )
         mem_access_resume(v->domain);
 }
 #endif
@@ -454,12 +454,12 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
-    if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+    if ( likely(v->domain->vm_event->share.ring_page != NULL) )
         mem_sharing_sharing_resume(v->domain);
 }
 #endif
 
-int do_mem_event_op(int op, uint32_t domain, void *arg)
+int do_vm_event_op(int op, uint32_t domain, void *arg)
 {
     int ret;
     struct domain *d;
@@ -468,7 +468,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
     if ( ret )
         return ret;
 
-    ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+    ret = xsm_vm_event_op(XSM_DM_PRIV, d, op);
     if ( ret )
         goto out;
 
@@ -494,10 +494,10 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
 }
 
 /* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
+void vm_event_cleanup(struct domain *d)
 {
 #ifdef HAS_MEM_PAGING
-    if ( d->mem_event->paging.ring_page ) {
+    if ( d->vm_event->paging.ring_page ) {
         /* Destroying the wait queue head means waking up all
          * queued vcpus. This will drain the list, allowing
          * the disable routine to complete. It will also drop
@@ -505,30 +505,30 @@ void mem_event_cleanup(struct domain *d)
          * Finally, because this code path involves previously
          * pausing the domain (domain_kill), unpausing the
          * vcpus causes no harm. */
-        destroy_waitqueue_head(&d->mem_event->paging.wq);
-        (void)mem_event_disable(d, &d->mem_event->paging);
+        destroy_waitqueue_head(&d->vm_event->paging.wq);
+        (void)vm_event_disable(d, &d->vm_event->paging);
     }
 #endif
 #ifdef HAS_MEM_ACCESS
-    if ( d->mem_event->monitor.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->monitor.wq);
-        (void)mem_event_disable(d, &d->mem_event->monitor);
+    if ( d->vm_event->monitor.ring_page ) {
+        destroy_waitqueue_head(&d->vm_event->monitor.wq);
+        (void)vm_event_disable(d, &d->vm_event->monitor);
     }
 #endif
 #ifdef HAS_MEM_SHARING
-    if ( d->mem_event->share.ring_page ) {
-        destroy_waitqueue_head(&d->mem_event->share.wq);
-        (void)mem_event_disable(d, &d->mem_event->share);
+    if ( d->vm_event->share.ring_page ) {
+        destroy_waitqueue_head(&d->vm_event->share.wq);
+        (void)vm_event_disable(d, &d->vm_event->share);
     }
 #endif
 }
 
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
+                    XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
-    rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+    rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
     if ( rc )
         return rc;
 
@@ -555,17 +555,17 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 
     rc = -ENOSYS;
 
-    switch ( mec->mode )
+    switch ( vec->mode )
     {
 #ifdef HAS_MEM_PAGING
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+    case XEN_DOMCTL_VM_EVENT_OP_PAGING:
     {
-        struct mem_event_domain *med = &d->mem_event->paging;
+        struct vm_event_domain *ved = &d->vm_event->paging;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_PAGING_ENABLE:
+        case XEN_VM_EVENT_PAGING_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -589,16 +589,16 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( p2m->pod.entry_count )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
-                                    HVM_PARAM_PAGING_RING_PFN,
-                                    mem_paging_notification);
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_paging,
+                                 HVM_PARAM_PAGING_RING_PFN,
+                                 mem_paging_notification);
         }
         break;
 
-        case XEN_MEM_EVENT_PAGING_DISABLE:
+        case XEN_VM_EVENT_PAGING_DISABLE:
         {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
+            if ( ved->ring_page )
+                rc = vm_event_disable(d, ved);
         }
         break;
 
@@ -611,32 +611,32 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_MEM_EVENT_OP_MONITOR:
+    case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
     {
-        struct mem_event_domain *med = &d->mem_event->monitor;
+        struct vm_event_domain *ved = &d->vm_event->monitor;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_MONITOR_ENABLE:
-        case XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION:
+        case XEN_VM_EVENT_MONITOR_ENABLE:
+        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
         {
-            rc = mem_event_enable(d, mec, med, _VPF_mem_access,
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                     HVM_PARAM_MONITOR_RING_PFN,
                                     mem_access_notification);
 
-            if ( mec->op == XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
                  && !rc )
                 p2m_setup_introspection(d);
 
         }
         break;
 
-        case XEN_MEM_EVENT_MONITOR_DISABLE:
+        case XEN_VM_EVENT_MONITOR_DISABLE:
         {
-            if ( med->ring_page )
+            if ( ved->ring_page )
             {
-                rc = mem_event_disable(d, med);
+                rc = vm_event_disable(d, ved);
                 d->arch.hvm_domain.introspection_enabled = 0;
             }
         }
@@ -651,14 +651,14 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 #endif
 
 #ifdef HAS_MEM_SHARING
-    case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
+    case XEN_DOMCTL_VM_EVENT_OP_SHARING:
     {
-        struct mem_event_domain *med = &d->mem_event->share;
+        struct vm_event_domain *ved = &d->vm_event->share;
         rc = -EINVAL;
 
-        switch( mec->op )
+        switch( vec->op )
         {
-        case XEN_MEM_EVENT_SHARING_ENABLE:
+        case XEN_VM_EVENT_SHARING_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -670,16 +670,16 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             if ( !hap_enabled(d) )
                 break;
 
-            rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
+            rc = vm_event_enable(d, vec, ved, _VPF_mem_sharing,
                                     HVM_PARAM_SHARING_RING_PFN,
                                     mem_sharing_notification);
         }
         break;
 
-        case XEN_MEM_EVENT_SHARING_DISABLE:
+        case XEN_VM_EVENT_SHARING_DISABLE:
         {
-            if ( med->ring_page )
-                rc = mem_event_disable(d, med);
+            if ( ved->ring_page )
+                rc = vm_event_disable(d, ved);
         }
         break;
 
@@ -698,17 +698,17 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
     return rc;
 }
 
-void mem_event_vcpu_pause(struct vcpu *v)
+void vm_event_vcpu_pause(struct vcpu *v)
 {
     ASSERT(v == current);
 
-    atomic_inc(&v->mem_event_pause_count);
+    atomic_inc(&v->vm_event_pause_count);
     vcpu_pause_nosync(v);
 }
 
-void mem_event_vcpu_unpause(struct vcpu *v)
+void vm_event_vcpu_unpause(struct vcpu *v)
 {
-    int old, new, prev = v->mem_event_pause_count.counter;
+    int old, new, prev = v->vm_event_pause_count.counter;
 
     /* All unpause requests as a result of toolstack responses.  Prevent
      * underflow of the vcpu pause count. */
@@ -720,11 +720,11 @@ void mem_event_vcpu_unpause(struct vcpu *v)
         if ( new < 0 )
         {
             printk(XENLOG_G_WARNING
-                   "%pv mem_event: Too many unpause attempts\n", v);
+                   "%pv vm_event: Too many unpause attempts\n", v);
             return;
         }
 
-        prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+        prev = cmpxchg(&v->vm_event_pause_count.counter, old, new);
     } while ( prev != old );
 
     vcpu_unpause(v);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 78c6977..964384b 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1346,7 +1346,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
      * enabled for this domain */
     if ( unlikely(!need_iommu(d) &&
             (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page ||
+             d->vm_event->paging.ring_page ||
              p2m_get_hostp2m(d)->global_logdirty)) )
         return -EXDEV;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index da36504..21a8d71 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -45,7 +45,7 @@ struct p2m_domain {
         unsigned long shattered[4];
     } stats;
 
-    /* If true, and an access fault comes in and there is no mem_event listener,
+    /* If true, and an access fault comes in and there is no vm_event listener,
      * pause domain. Otherwise, remove access restrictions. */
     bool_t access_required;
 };
@@ -71,8 +71,8 @@ typedef enum {
 } p2m_type_t;
 
 static inline
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp)
+void p2m_vm_event_emulate_check(struct vcpu *v,
+                                const vm_event_response_t *rsp)
 {
     /* Not supported on ARM. */
 };
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index b233fbc..e0c4b64 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -479,13 +479,13 @@ struct arch_vcpu
 
     /*
      * Should we emulate the next matching instruction on VCPU resume
-     * after a mem_event?
+     * after a vm_event?
      */
     struct {
         uint32_t emulate_flags;
         unsigned long gpa;
         unsigned long eip;
-    } mem_event;
+    } vm_event;
 
 } __cacheline_aligned;
 
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 5411302..b726654 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -38,7 +38,7 @@ int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 int hvm_emulate_one_no_write(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-void hvm_mem_event_emulate_one(bool_t nowrite,
+void hvm_vm_event_emulate_one(bool_t nowrite,
     unsigned int trapnr,
     unsigned int errcode);
 void hvm_emulate_prepare(
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 20accc6..9e14015 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -245,7 +245,7 @@ struct p2m_domain {
      * retyped get this access type.  See definition of p2m_access_t. */
     p2m_access_t default_access;
 
-    /* If true, and an access fault comes in and there is no mem_event listener, 
+    /* If true, and an access fault comes in and there is no vm_event listener, 
      * pause domain.  Otherwise, remove access restrictions. */
     bool_t       access_required;
 
@@ -580,7 +580,7 @@ void p2m_mem_paging_resume(struct domain *d);
  * locks -- caller must also xfree the request. */
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
-                            mem_event_request_t **req_ptr);
+                            vm_event_request_t **req_ptr);
 
 /* Set access type for a region of pfns.
  * If start_pfn == -1ul, sets the default access type */
@@ -594,8 +594,8 @@ int p2m_get_mem_access(struct domain *d, unsigned long pfn,
 
 /* Check for emulation and mark vcpu for skipping one instruction
  * upon rescheduling if required. */
-void p2m_mem_event_emulate_check(struct vcpu *v,
-                                 const mem_event_response_t *rsp);
+void p2m_vm_event_emulate_check(struct vcpu *v,
+                                 const vm_event_response_t *rsp);
 
 /* Enable arch specific introspection options (such as MSR interception). */
 void p2m_setup_introspection(struct domain *d);
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 3b4c2e2..ef373eb 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -750,10 +750,10 @@ struct xen_domctl_gdbsx_domstatus {
 };
 
 /*
- * Memory event operations
+ * VM event operations
  */
 
-/* XEN_DOMCTL_mem_event_op */
+/* XEN_DOMCTL_vm_event_op */
 
 /*
  * Domain memory paging
@@ -762,17 +762,17 @@ struct xen_domctl_gdbsx_domstatus {
  * pager<->hypervisor interface. Use XENMEM_paging_op*
  * to perform per-page operations.
  *
- * The XEN_MEM_EVENT_PAGING_ENABLE domctl returns several
+ * The XEN_VM_EVENT_PAGING_ENABLE domctl returns several
  * non-standard error codes to indicate why paging could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EMLINK - guest has iommu passthrough enabled
  * EXDEV  - guest has PoD enabled
  * EBUSY  - guest has or had paging enabled, ring buffer still active
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING            1
+#define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
 
-#define XEN_MEM_EVENT_PAGING_ENABLE               0
-#define XEN_MEM_EVENT_PAGING_DISABLE              1
+#define XEN_VM_EVENT_PAGING_ENABLE               0
+#define XEN_VM_EVENT_PAGING_DISABLE              1
 
 /*
  * Monitor helper.
@@ -787,23 +787,23 @@ struct xen_domctl_gdbsx_domstatus {
  * is sent with what happened. The memory event handler can then resume the
  * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
  *
- * See public/mem_event.h for the list of available events that can be
+ * See public/vm_event.h for the list of available events that can be
  * subscribed to via the monitor interface.
  *
  * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
- * interface with the XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION
+ * interface with the XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
  * operator.
  *
- * The XEN_MEM_EVENT_MONITOR_ENABLE* domctls return several
+ * The XEN_VM_EVENT_MONITOR_ENABLE* domctls return several
  * non-standard error codes to indicate why access could not be enabled:
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_MONITOR                        2
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR                        2
 
-#define XEN_MEM_EVENT_MONITOR_ENABLE                           0
-#define XEN_MEM_EVENT_MONITOR_DISABLE                          1
-#define XEN_MEM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
+#define XEN_VM_EVENT_MONITOR_ENABLE                           0
+#define XEN_VM_EVENT_MONITOR_DISABLE                          1
+#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -818,21 +818,21 @@ struct xen_domctl_gdbsx_domstatus {
  * Note that sharing can be turned on (as per the domctl below)
  * *without* this ring being setup.
  */
-#define XEN_DOMCTL_MEM_EVENT_OP_SHARING           3
+#define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
 
-#define XEN_MEM_EVENT_SHARING_ENABLE              0
-#define XEN_MEM_EVENT_SHARING_DISABLE             1
+#define XEN_VM_EVENT_SHARING_ENABLE              0
+#define XEN_VM_EVENT_SHARING_DISABLE             1
 
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
-struct xen_domctl_mem_event_op {
-    uint32_t       op;           /* XEN_MEM_EVENT_*_* */
-    uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_OP_* */
+struct xen_domctl_vm_event_op {
+    uint32_t       op;           /* XEN_VM_EVENT_*_* */
+    uint32_t       mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
 };
-typedef struct xen_domctl_mem_event_op xen_domctl_mem_event_op_t;
-DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_event_op_t);
+typedef struct xen_domctl_vm_event_op xen_domctl_vm_event_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_vm_event_op_t);
 
 /*
  * Memory sharing operations
@@ -1055,7 +1055,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_suppress_spurious_page_faults 53
 #define XEN_DOMCTL_debug_op                      54
 #define XEN_DOMCTL_gethvmcontext_partial         55
-#define XEN_DOMCTL_mem_event_op                  56
+#define XEN_DOMCTL_vm_event_op                   56
 #define XEN_DOMCTL_mem_sharing_op                57
 #define XEN_DOMCTL_disable_migrate               58
 #define XEN_DOMCTL_gettscinfo                    59
@@ -1123,7 +1123,7 @@ struct xen_domctl {
         struct xen_domctl_set_target        set_target;
         struct xen_domctl_subscribe         subscribe;
         struct xen_domctl_debug_op          debug_op;
-        struct xen_domctl_mem_event_op      mem_event_op;
+        struct xen_domctl_vm_event_op       vm_event_op;
         struct xen_domctl_mem_sharing_op    mem_sharing_op;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_domctl_cpuid             cpuid;
diff --git a/xen/include/public/mem_event.h b/xen/include/public/vm_event.h
similarity index 61%
rename from xen/include/public/mem_event.h
rename to xen/include/public/vm_event.h
index 17b6bb8..5667adf 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/vm_event.h
@@ -1,5 +1,5 @@
 /******************************************************************************
- * mem_event.h
+ * vm_event.h
  *
  * Memory event common structures.
  *
@@ -24,59 +24,59 @@
  * DEALINGS IN THE SOFTWARE.
  */
 
-#ifndef _XEN_PUBLIC_MEM_EVENT_H
-#define _XEN_PUBLIC_MEM_EVENT_H
+#ifndef _XEN_PUBLIC_VM_EVENT_H
+#define _XEN_PUBLIC_VM_EVENT_H
 
 #include "xen.h"
 
-#define MEM_EVENT_INTERFACE_VERSION 0x00000001
+#define VM_EVENT_INTERFACE_VERSION 0x00000001
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
 #include "io/ring.h"
 
 /* Memory event flags */
-#define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
-#define MEM_EVENT_FLAG_DROP_PAGE       (1 << 1)
-#define MEM_EVENT_FLAG_EVICT_FAIL      (1 << 2)
-#define MEM_EVENT_FLAG_FOREIGN         (1 << 3)
-#define MEM_EVENT_FLAG_DUMMY           (1 << 4)
+#define VM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
+#define VM_EVENT_FLAG_DROP_PAGE       (1 << 1)
+#define VM_EVENT_FLAG_EVICT_FAIL      (1 << 2)
+#define VM_EVENT_FLAG_FOREIGN         (1 << 3)
+#define VM_EVENT_FLAG_DUMMY           (1 << 4)
 /*
  * Emulate the fault-causing instruction (if set in the event response flags).
  * This will allow the guest to continue execution without lifting the page
  * access restrictions.
  */
-#define MEM_EVENT_FLAG_EMULATE         (1 << 5)
+#define VM_EVENT_FLAG_EMULATE         (1 << 5)
 /*
- * Same as MEM_EVENT_FLAG_EMULATE, but with write operations or operations
+ * Same as VM_EVENT_FLAG_EMULATE, but with write operations or operations
  * potentially having side effects (like memory mapped or port I/O) disabled.
  */
-#define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
+#define VM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
 /* Reasons for the vm event request */
 /* Default case */
-#define MEM_EVENT_REASON_UNKNOWN                 0
+#define VM_EVENT_REASON_UNKNOWN                 0
 /* Memory access violation */
-#define MEM_EVENT_REASON_MEM_ACCESS              1
+#define VM_EVENT_REASON_MEM_ACCESS              1
 /* Memory sharing event */
-#define MEM_EVENT_REASON_MEM_SHARING             2
+#define VM_EVENT_REASON_MEM_SHARING             2
 /* Memory paging event */
-#define MEM_EVENT_REASON_MEM_PAGING              3
+#define VM_EVENT_REASON_MEM_PAGING              3
 /* CR0 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR0              4
+#define VM_EVENT_REASON_MOV_TO_CR0              4
 /* CR3 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR3              5
+#define VM_EVENT_REASON_MOV_TO_CR3              5
 /* CR4 was updated */
-#define MEM_EVENT_REASON_MOV_TO_CR4              6
+#define VM_EVENT_REASON_MOV_TO_CR4              6
 /* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
-#define MEM_EVENT_REASON_MOV_TO_MSR              7
+#define VM_EVENT_REASON_MOV_TO_MSR              7
 /* Debug operation executed (int3) */
-#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
+#define VM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
 /* Single-step (MTF) */
-#define MEM_EVENT_REASON_SINGLESTEP              9
+#define VM_EVENT_REASON_SINGLESTEP              9
 
 /* Using a custom struct (not hvm_hw_cpu) so as to not fill
- * the mem_event ring buffer too quickly. */
-struct mem_event_regs_x86 {
+ * the vm_event ring buffer too quickly. */
+struct vm_event_regs_x86 {
     uint64_t rax;
     uint64_t rcx;
     uint64_t rdx;
@@ -112,7 +112,7 @@ struct mem_event_regs_x86 {
     uint32_t _pad;
 };
 
-struct mem_event_mem_access_data {
+struct vm_event_mem_access_data {
     uint64_t gfn;
     uint64_t offset;
     uint64_t gla; /* if gla_valid */
@@ -125,61 +125,61 @@ struct mem_event_mem_access_data {
     uint16_t _pad;
 };
 
-struct mem_event_mov_to_cr_data {
+struct vm_event_mov_to_cr_data {
     uint64_t new_value;
     uint64_t old_value;
 };
 
-struct mem_event_software_breakpoint_data {
+struct vm_event_software_breakpoint_data {
     uint64_t gfn;
 };
 
-struct mem_event_singlestep_data {
+struct vm_event_singlestep_data {
     uint64_t gfn;
 };
 
-struct mem_event_mov_to_msr_data {
+struct vm_event_mov_to_msr_data {
     uint64_t msr;
     uint64_t value;
 };
 
-struct mem_event_paging_data {
+struct vm_event_paging_data {
     uint64_t gfn;
     uint32_t p2mt;
     uint32_t _pad;
 };
 
-struct mem_event_sharing_data {
+struct vm_event_sharing_data {
     uint64_t gfn;
     uint32_t p2mt;
     uint32_t _pad;
 };
 
-typedef struct mem_event_st {
-    uint32_t version; /* MEM_EVENT_INTERFACE_VERSION */
+typedef struct vm_event_st {
+    uint32_t version; /* VM_EVENT_INTERFACE_VERSION */
     uint32_t flags;
     uint32_t vcpu_id;
-    uint32_t reason; /* MEM_EVENT_REASON_* */
+    uint32_t reason; /* VM_EVENT_REASON_* */
 
     union {
-        struct mem_event_paging_data                mem_paging;
-        struct mem_event_sharing_data               mem_sharing;
-        struct mem_event_mem_access_data            mem_access;
-        struct mem_event_mov_to_cr_data             mov_to_cr;
-        struct mem_event_mov_to_msr_data            mov_to_msr;
-        struct mem_event_software_breakpoint_data   software_breakpoint;
-        struct mem_event_singlestep_data            singlestep;
+        struct vm_event_paging_data                mem_paging;
+        struct vm_event_sharing_data               mem_sharing;
+        struct vm_event_mem_access_data            mem_access;
+        struct vm_event_mov_to_cr_data             mov_to_cr;
+        struct vm_event_mov_to_msr_data            mov_to_msr;
+        struct vm_event_software_breakpoint_data   software_breakpoint;
+        struct vm_event_singlestep_data            singlestep;
     } u;
 
     union {
-        struct mem_event_regs_x86 x86;
+        struct vm_event_regs_x86 x86;
     } regs;
-} mem_event_request_t, mem_event_response_t;
+} vm_event_request_t, vm_event_response_t;
 
-DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
+DEFINE_RING_TYPES(vm_event, vm_event_request_t, vm_event_response_t);
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
-#endif /* _XEN_PUBLIC_MEM_EVENT_H */
+#endif /* _XEN_PUBLIC_VM_EVENT_H */
 
 /*
  * Local variables:
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 6ceb2a4..1d01221 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -29,7 +29,7 @@
 
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
 /* Resumes the running of the VCPU, restarting the last instruction */
 void mem_access_resume(struct domain *d);
@@ -44,7 +44,7 @@ int mem_access_memop(unsigned long cmd,
 }
 
 static inline
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+int mem_access_send_req(struct domain *d, vm_event_request_t *req)
 {
     return -ENOSYS;
 }
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 29f3628..5da8a2d 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -1,12 +1,12 @@
 #ifndef _XEN_P2M_COMMON_H
 #define _XEN_P2M_COMMON_H
 
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 
 /*
  * Additional access types, which are used to further restrict
  * the permissions given my the p2m_type_t memory type.  Violations
- * caused by p2m_access_t restrictions are sent to the mem_event
+ * caused by p2m_access_t restrictions are sent to the vm_event
  * interface.
  *
  * The access permissions are soft state: when any ambiguous change of page
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 64a2bd3..33283b5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -23,7 +23,7 @@
 #include <public/domctl.h>
 #include <public/sysctl.h>
 #include <public/vcpu.h>
-#include <public/mem_event.h>
+#include <public/vm_event.h>
 #include <public/event_channel.h>
 
 #ifdef CONFIG_COMPAT
@@ -214,8 +214,8 @@ struct vcpu
     unsigned long    pause_flags;
     atomic_t         pause_count;
 
-    /* VCPU paused for mem_event replies. */
-    atomic_t         mem_event_pause_count;
+    /* VCPU paused for vm_event replies. */
+    atomic_t         vm_event_pause_count;
     /* VCPU paused by system controller. */
     int              controller_pause_count;
 
@@ -257,8 +257,8 @@ struct vcpu
 #define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)
 #define domain_is_locked(d) spin_is_locked(&(d)->domain_lock)
 
-/* Memory event */
-struct mem_event_domain
+/* VM event */
+struct vm_event_domain
 {
     /* ring lock */
     spinlock_t ring_lock;
@@ -269,10 +269,10 @@ struct mem_event_domain
     void *ring_page;
     struct page_info *ring_pg_struct;
     /* front-end ring */
-    mem_event_front_ring_t front_ring;
+    vm_event_front_ring_t front_ring;
     /* event channel port (vcpu0 only) */
     int xen_port;
-    /* mem_event bit for vcpu->pause_flags */
+    /* vm_event bit for vcpu->pause_flags */
     int pause_flag;
     /* list of vcpus waiting for room in the ring */
     struct waitqueue_head wq;
@@ -282,14 +282,14 @@ struct mem_event_domain
     unsigned int last_vcpu_wake_up;
 };
 
-struct mem_event_per_domain
+struct vm_event_per_domain
 {
     /* Memory sharing support */
-    struct mem_event_domain share;
+    struct vm_event_domain share;
     /* Memory paging support */
-    struct mem_event_domain paging;
+    struct vm_event_domain paging;
     /* VM event monitor support */
-    struct mem_event_domain monitor;
+    struct vm_event_domain monitor;
 };
 
 struct evtchn_port_ops;
@@ -442,8 +442,8 @@ struct domain
 
     struct lock_profile_qhead profile_head;
 
-    /* Various mem_events */
-    struct mem_event_per_domain *mem_event;
+    /* Various vm_events */
+    struct vm_event_per_domain *vm_event;
 
     /*
      * Can be specified by the user. If that is not the case, it is
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/vm_event.h
similarity index 50%
rename from xen/include/xen/mem_event.h
rename to xen/include/xen/vm_event.h
index 4f3ad8e..988ea42 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/vm_event.h
@@ -1,5 +1,5 @@
 /******************************************************************************
- * mem_event.h
+ * vm_event.h
  *
  * Common interface for memory event support.
  *
@@ -21,18 +21,18 @@
  */
 
 
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
+#ifndef __VM_EVENT_H__
+#define __VM_EVENT_H__
 
 #include <xen/sched.h>
 
 #ifdef HAS_MEM_ACCESS
 
 /* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d);
+void vm_event_cleanup(struct domain *d);
 
 /* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
+bool_t vm_event_check_ring(struct vm_event_domain *med);
 
 /* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
  * available space and the caller is a foreign domain. If the guest itself
@@ -47,90 +47,90 @@ bool_t mem_event_check_ring(struct mem_event_domain *med);
  * cancel_slot(), both of which are guaranteed to
  * succeed.
  */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
                             bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot(struct domain *d,
+                                        struct vm_event_domain *med)
 {
-    return __mem_event_claim_slot(d, med, 1);
+    return __vm_event_claim_slot(d, med, 1);
 }
 
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot_nosleep(struct domain *d,
+                                        struct vm_event_domain *med)
 {
-    return __mem_event_claim_slot(d, med, 0);
+    return __vm_event_claim_slot(d, med, 0);
 }
 
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med);
 
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req);
+void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
+                            vm_event_request_t *req);
 
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp);
+int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
+                           vm_event_response_t *rsp);
 
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+int do_vm_event_op(int op, uint32_t domain, void *arg);
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                      XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
+void vm_event_vcpu_pause(struct vcpu *v);
+void vm_event_vcpu_unpause(struct vcpu *v);
 
 #else
 
-static inline void mem_event_cleanup(struct domain *d) {}
+static inline void vm_event_cleanup(struct domain *d) {}
 
-static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+static inline bool_t vm_event_check_ring(struct vm_event_domain *med)
 {
     return 0;
 }
 
-static inline int mem_event_claim_slot(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot(struct domain *d,
+                                        struct vm_event_domain *med)
 {
     return -ENOSYS;
 }
 
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
-                                        struct mem_event_domain *med)
+static inline int vm_event_claim_slot_nosleep(struct domain *d,
+                                        struct vm_event_domain *med)
 {
     return -ENOSYS;
 }
 
 static inline
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med)
 {}
 
 static inline
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
-                            mem_event_request_t *req)
+void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
+                            vm_event_request_t *req)
 {}
 
 static inline
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
-                           mem_event_response_t *rsp)
+int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
+                           vm_event_response_t *rsp)
 {
     return -ENOSYS;
 }
 
-static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+static inline int do_vm_event_op(int op, uint32_t domain, void *arg)
 {
     return -ENOSYS;
 }
 
 static inline
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                      XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     return -ENOSYS;
 }
 
-static inline void mem_event_vcpu_pause(struct vcpu *v) {}
-static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+static inline void vm_event_vcpu_pause(struct vcpu *v) {}
+static inline void vm_event_vcpu_unpause(struct vcpu *v) {}
 
 #endif /* HAS_MEM_ACCESS */
 
-#endif /* __MEM_EVENT_H__ */
+#endif /* __VM_EVENT_H__ */
 
 
 /*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f20e89c..4227093 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -514,13 +514,13 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
 }
 
 #ifdef HAS_MEM_ACCESS
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 4ce089f..cff9d35 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -142,8 +142,8 @@ struct xsm_operations {
     int (*get_vnumainfo) (struct domain *d);
 
 #ifdef HAS_MEM_ACCESS
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
+    int (*vm_event_control) (struct domain *d, int mode, int op);
+    int (*vm_event_op) (struct domain *d, int op);
 #endif
 
 #ifdef CONFIG_X86
@@ -544,14 +544,14 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
 }
 
 #ifdef HAS_MEM_ACCESS
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
-    return xsm_ops->mem_event_control(d, mode, op);
+    return xsm_ops->vm_event_control(d, mode, op);
 }
 
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
 {
-    return xsm_ops->mem_event_op(d, op);
+    return xsm_ops->vm_event_op(d, op);
 }
 #endif
 
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 8eb3050..25fca68 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,8 +119,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
 #ifdef HAS_MEM_ACCESS
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
+    set_to_dummy_if_null(ops, vm_event_control);
+    set_to_dummy_if_null(ops, vm_event_op);
 #endif
 
 #ifdef CONFIG_X86
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index d48463f..c419543 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -578,7 +578,7 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
 #ifdef HAS_MEM_ACCESS
-    case XEN_DOMCTL_mem_event_op:
+    case XEN_DOMCTL_vm_event_op:
 #endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
@@ -689,7 +689,7 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 
     case XEN_DOMCTL_set_access_required:
-        return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
@@ -1203,14 +1203,14 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
 #ifdef HAS_MEM_ACCESS
-static int flask_mem_event_control(struct domain *d, int mode, int op)
+static int flask_vm_event_control(struct domain *d, int mode, int op)
 {
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 }
 
-static int flask_mem_event_op(struct domain *d, int op)
+static int flask_vm_event_op(struct domain *d, int op)
 {
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 }
 #endif /* HAS_MEM_ACCESS */
 
@@ -1597,8 +1597,8 @@ static struct xsm_operations flask_ops = {
 #endif
 
 #ifdef HAS_MEM_ACCESS
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
+    .vm_event_control = flask_vm_event_control,
+    .vm_event_op = flask_vm_event_op,
 #endif
 
 #ifdef CONFIG_X86
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1da9f63..9da3275 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -249,7 +249,7 @@ class hvm
 # HVMOP_inject_trap
     hvmctl
 # XEN_DOMCTL_set_access_required
-    mem_event
+    vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
 #  target = domain whose memory is being shared
-- 
2.1.4
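
For readers following the rename, a minimal sketch of how a toolstack-side
caller might exercise the renamed domctl (error handling elided; do_domctl()
is libxc's internal domctl wrapper, and xch/domid are assumed to be in scope):

    uint32_t port;
    struct xen_domctl domctl = {
        .cmd    = XEN_DOMCTL_vm_event_op,
        .domain = domid,
    };

    domctl.u.vm_event_op.mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
    domctl.u.vm_event_op.op   = XEN_VM_EVENT_MONITOR_ENABLE;

    if ( do_domctl(xch, &domctl) == 0 )
        port = domctl.u.vm_event_op.port; /* OUT: event channel for the ring */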


* [PATCH V4 06/13] tools/tests: Clean-up tools/tests/xen-access
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The spin-lock implementation in the xen-access test program is incomplete:
the x86 assembly that is meant to guarantee that the lock is held by only one
thread lacks the "lock" prefix, so the bit-test-and-set is not actually atomic.

However, the spin-lock is not necessary in xen-access at all, as the program
is not multithreaded. Keeping a faulty lock in single-threaded code needlessly
confuses developers who follow this code as a guide when implementing their
own applications. Removing it therefore makes the actual behavior of the
program clearer.
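
For reference, an atomic variant would have needed the "lock" prefix; a
minimal sketch of the corrected helper, reusing the ADDR macro from the
removed code:

    static inline int test_and_set_bit(int nr, volatile void *addr)
    {
        int oldbit;

        /* "lock" makes the bit-test-and-set atomic across CPUs. */
        asm volatile (
            "lock; btsl %2,%1\n\tsbbl %0,%0"
            : "=r" (oldbit), "=m" (ADDR)
            : "Ir" (nr), "m" (ADDR) : "memory");
        return oldbit;
    }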

Also convert functions that always return 0 to void, and make the teardown
function actually return an error code on failure.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 99 +++++++------------------------------
 1 file changed, 19 insertions(+), 80 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 0a22a31..fe1589e 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -45,56 +45,6 @@
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
 #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
 
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
-
-#define ADDR (*(volatile long *) addr)
-/**
- * test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static inline int test_and_set_bit(int nr, volatile void *addr)
-{
-    int oldbit;
-
-    asm volatile (
-        "btsl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit), "=m" (ADDR)
-        : "Ir" (nr), "m" (ADDR) : "memory");
-    return oldbit;
-}
-
-typedef int spinlock_t;
-
-static inline void spin_lock(spinlock_t *lock)
-{
-    while ( test_and_set_bit(1, lock) );
-}
-
-static inline void spin_lock_init(spinlock_t *lock)
-{
-    *lock = SPIN_LOCK_UNLOCKED;
-}
-
-static inline void spin_unlock(spinlock_t *lock)
-{
-    *lock = SPIN_LOCK_UNLOCKED;
-}
-
-static inline int spin_trylock(spinlock_t *lock)
-{
-    return !test_and_set_bit(1, lock);
-}
-
-#define vm_event_ring_lock_init(_m)  spin_lock_init(&(_m)->ring_lock)
-#define vm_event_ring_lock(_m)       spin_lock(&(_m)->ring_lock)
-#define vm_event_ring_unlock(_m)     spin_unlock(&(_m)->ring_lock)
-
 typedef struct vm_event {
     domid_t domain_id;
     xc_evtchn *xce_handle;
@@ -102,7 +52,6 @@ typedef struct vm_event {
     vm_event_back_ring_t back_ring;
     uint32_t evtchn_port;
     void *ring_page;
-    spinlock_t ring_lock;
 } vm_event_t;
 
 typedef struct xenaccess {
@@ -180,6 +129,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
+            return rc;
         }
     }
 
@@ -191,6 +141,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error unbinding event port");
+            return rc;
         }
     }
 
@@ -201,6 +152,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
         if ( rc != 0 )
         {
             ERROR("Error closing event channel");
+            return rc;
         }
     }
 
@@ -209,6 +161,7 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
     if ( rc != 0 )
     {
         ERROR("Error closing connection to xen");
+        return rc;
     }
     xenaccess->xc_handle = NULL;
 
@@ -241,9 +194,6 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     /* Set domain id */
     xenaccess->vm_event.domain_id = domain_id;
 
-    /* Initialise lock */
-    vm_event_ring_lock_init(&xenaccess->vm_event);
-
     /* Enable mem_access */
     xenaccess->vm_event.ring_page =
             xc_mem_access_enable(xenaccess->xc_handle,
@@ -314,19 +264,24 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     return xenaccess;
 
  err:
-    xenaccess_teardown(xch, xenaccess);
+    rc = xenaccess_teardown(xch, xenaccess);
+    if ( rc )
+    {
+        ERROR("Failed to teardown xenaccess structure!\n");
+    }
 
  err_iface:
     return NULL;
 }
 
-int get_request(vm_event_t *vm_event, vm_event_request_t *req)
+/*
+ * Note that this function is not thread safe.
+ */
+static void get_request(vm_event_t *vm_event, vm_event_request_t *req)
 {
     vm_event_back_ring_t *back_ring;
     RING_IDX req_cons;
 
-    vm_event_ring_lock(vm_event);
-
     back_ring = &vm_event->back_ring;
     req_cons = back_ring->req_cons;
 
@@ -337,19 +292,16 @@ int get_request(vm_event_t *vm_event, vm_event_request_t *req)
     /* Update ring */
     back_ring->req_cons = req_cons;
     back_ring->sring->req_event = req_cons + 1;
-
-    vm_event_ring_unlock(vm_event);
-
-    return 0;
 }
 
-static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
+/*
+ * Note that this function is not thread safe.
+ */
+static void put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
 {
     vm_event_back_ring_t *back_ring;
     RING_IDX rsp_prod;
 
-    vm_event_ring_lock(vm_event);
-
     back_ring = &vm_event->back_ring;
     rsp_prod = back_ring->rsp_prod_pvt;
 
@@ -360,10 +312,6 @@ static int put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
     /* Update ring */
     back_ring->rsp_prod_pvt = rsp_prod;
     RING_PUSH_RESPONSES(back_ring);
-
-    vm_event_ring_unlock(vm_event);
-
-    return 0;
 }
 
 static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
@@ -371,16 +319,13 @@ static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
     int ret;
 
     /* Put the page info on the ring */
-    ret = put_response(&paging->vm_event, rsp);
-    if ( ret != 0 )
-        goto out;
+    put_response(&paging->vm_event, rsp);
 
     /* Tell Xen page is ready */
     ret = xc_mem_access_resume(paging->xc_handle, paging->vm_event.domain_id);
     ret = xc_evtchn_notify(paging->vm_event.xce_handle,
                            paging->vm_event.port);
 
- out:
     return ret;
 }
 
@@ -543,13 +488,7 @@ int main(int argc, char *argv[])
         {
             xenmem_access_t access;
 
-            rc = get_request(&xenaccess->vm_event, &req);
-            if ( rc != 0 )
-            {
-                ERROR("Error getting request");
-                interrupted = -1;
-                continue;
-            }
+            get_request(&xenaccess->vm_event, &req);
 
             if ( req.version != VM_EVENT_INTERFACE_VERSION )
             {
-- 
2.1.4
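
Should xen-access ever grow a second thread, a standard pthread mutex would
be the simpler correct replacement for the removed hand-rolled lock; a purely
hypothetical sketch (none of these names are part of this series):

    #include <pthread.h>

    static pthread_mutex_t ring_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Wraps the single-consumer helper above for a multithreaded caller. */
    static void get_request_locked(vm_event_t *vm_event, vm_event_request_t *req)
    {
        pthread_mutex_lock(&ring_mutex);
        get_request(vm_event, req);
        pthread_mutex_unlock(&ring_mutex);
    }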


* [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 06/13] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-10 13:15   ` Jan Beulich
  2015-02-09 18:53 ` [PATCH V4 08/13] xen: Introduce monitor_op domctl Tamas K Lengyel
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

To avoid growing hvm.c further, move these functions into a separate file.
Minor style changes are applied to the relocated logic.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
v4: Style fixes
v3: Style fixes and removing unused fields from msr events.
---
 xen/arch/x86/hvm/Makefile       |   3 +-
 xen/arch/x86/hvm/event.c        | 195 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c          | 163 ++-------------------------------
 xen/arch/x86/hvm/vmx/vmx.c      |   7 +-
 xen/include/asm-x86/hvm/event.h |  40 +++++++++
 xen/include/asm-x86/hvm/hvm.h   |  11 ---
 6 files changed, 246 insertions(+), 173 deletions(-)
 create mode 100644 xen/arch/x86/hvm/event.c
 create mode 100644 xen/include/asm-x86/hvm/event.h

diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..69af47f 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -3,6 +3,7 @@ subdir-y += vmx
 
 obj-y += asid.o
 obj-y += emulate.o
+obj-y += event.o
 obj-y += hpet.o
 obj-y += hvm.o
 obj-y += i8254.o
@@ -22,4 +23,4 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
+obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
new file mode 100644
index 0000000..f3919b0
--- /dev/null
+++ b/xen/arch/x86/hvm/event.c
@@ -0,0 +1,195 @@
+/*
+* event.c: Common hardware virtual machine event abstractions.
+*
+* Copyright (c) 2004, Intel Corporation.
+* Copyright (c) 2005, International Business Machines Corporation.
+* Copyright (c) 2008, Citrix Systems, Inc.
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms and conditions of the GNU General Public License,
+* version 2, as published by the Free Software Foundation.
+*
+* This program is distributed in the hope it will be useful, but WITHOUT
+* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+* more details.
+*
+* You should have received a copy of the GNU General Public License along with
+* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+* Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+
+#include <xen/vm_event.h>
+#include <xen/paging.h>
+#include <public/vm_event.h>
+
+static void hvm_event_fill_regs(vm_event_request_t *req)
+{
+    const struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const struct vcpu *curr = current;
+
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
+    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
+    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
+    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
+}
+
+static int hvm_event_traps(long parameters, vm_event_request_t *req)
+{
+    int rc;
+    struct vcpu *curr = current;
+    struct domain *currd = curr->domain;
+
+    if ( !(parameters & HVMPME_MODE_MASK) )
+        return 0;
+
+    rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
+    switch ( rc )
+    {
+    case 0:
+        break;
+    case -ENOSYS:
+        /*
+         * If there was no ring to handle the event, then
+         * simply continue executing normally.
+         */
+        return 1;
+    default:
+        return rc;
+    };
+
+    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
+    {
+        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
+        vm_event_vcpu_pause(curr);
+    }
+
+    hvm_event_fill_regs(req);
+    vm_event_put_request(currd, &currd->vm_event->monitor, req);
+
+    return 1;
+}
+
+static void hvm_event_cr(uint32_t reason, unsigned long value,
+                                unsigned long old)
+{
+    vm_event_request_t req = {
+        .reason = reason,
+        .vcpu_id = current->vcpu_id,
+        .u.mov_to_cr.new_value = value,
+        .u.mov_to_cr.old_value = old
+    };
+    uint64_t parameters = 0;
+
+    switch(reason)
+    {
+    case VM_EVENT_REASON_MOV_TO_CR0:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
+        break;
+    case VM_EVENT_REASON_MOV_TO_CR3:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
+        break;
+    case VM_EVENT_REASON_MOV_TO_CR4:
+        parameters = current->domain->arch.hvm_domain
+                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
+        break;
+    };
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_event_traps(parameters, &req);
+}
+
+void hvm_event_cr0(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
+}
+
+void hvm_event_cr3(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
+}
+
+void hvm_event_cr4(unsigned long value, unsigned long old)
+{
+    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
+}
+
+void hvm_event_msr(unsigned long msr, unsigned long value)
+{
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_MOV_TO_MSR,
+        .vcpu_id = curr->vcpu_id,
+        .u.mov_to_msr.msr = msr,
+        .u.mov_to_msr.value = value,
+    };
+    long params = current->domain->arch.hvm_domain
+                    .params[HVM_PARAM_MEMORY_EVENT_MSR];
+
+    hvm_event_traps(params, &req);
+}
+
+int hvm_event_int3(unsigned long gla)
+{
+    uint32_t pfec = PFEC_page_present;
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+        .vcpu_id = curr->vcpu_id,
+        .u.software_breakpoint.gfn = paging_gva_to_gfn(curr, gla, &pfec)
+    };
+    long params = curr->domain->arch.hvm_domain
+                    .params[HVM_PARAM_MEMORY_EVENT_INT3];
+
+    return hvm_event_traps(params, &req);
+}
+
+int hvm_event_single_step(unsigned long gla)
+{
+    uint32_t pfec = PFEC_page_present;
+    struct vcpu *curr = current;
+    vm_event_request_t req = {
+        .reason = VM_EVENT_REASON_SINGLESTEP,
+        .vcpu_id = curr->vcpu_id,
+        .u.singlestep.gfn = paging_gva_to_gfn(curr, gla, &pfec)
+    };
+    long params = curr->domain->arch.hvm_domain
+                    .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
+
+    return hvm_event_traps(params, &req);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fac6cba..ea8a82d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -35,7 +35,6 @@
 #include <xen/paging.h>
 #include <xen/cpu.h>
 #include <xen/wait.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/rangeset.h>
 #include <asm/shadow.h>
@@ -60,6 +59,7 @@
 #include <asm/hvm/cacheattr.h>
 #include <asm/hvm/trace.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/event.h>
 #include <asm/mtrr.h>
 #include <asm/apic.h>
 #include <public/sched.h>
@@ -3276,7 +3276,7 @@ int hvm_set_cr0(unsigned long value)
         hvm_funcs.handle_cd(v, value);
 
     hvm_update_cr(v, 0, value);
-    hvm_memory_event_cr0(value, old_value);
+    hvm_event_cr0(value, old_value);
 
     if ( (value ^ old_value) & X86_CR0_PG ) {
         if ( !nestedhvm_vmswitch_in_progress(v) && nestedhvm_vcpu_in_guestmode(v) )
@@ -3317,7 +3317,7 @@ int hvm_set_cr3(unsigned long value)
     old=v->arch.hvm_vcpu.guest_cr[3];
     v->arch.hvm_vcpu.guest_cr[3] = value;
     paging_update_cr3(v);
-    hvm_memory_event_cr3(value, old);
+    hvm_event_cr3(value, old);
     return X86EMUL_OKAY;
 
  bad_cr3:
@@ -3358,7 +3358,7 @@ int hvm_set_cr4(unsigned long value)
     }
 
     hvm_update_cr(v, 4, value);
-    hvm_memory_event_cr4(value, old_cr);
+    hvm_event_cr4(value, old_cr);
 
     /*
      * Modifying CR4.{PSE,PAE,PGE,SMEP}, or clearing CR4.PCIDE
@@ -4508,7 +4508,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     hvm_cpuid(1, NULL, NULL, NULL, &edx);
     mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
 
-    hvm_memory_event_msr(msr, msr_content);
+    hvm_event_msr(msr, msr_content);
 
     switch ( msr )
     {
@@ -6317,159 +6317,6 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     return rc;
 }
 
-static void hvm_mem_event_fill_regs(vm_event_request_t *req)
-{
-    const struct cpu_user_regs *regs = guest_cpu_user_regs();
-    const struct vcpu *curr = current;
-
-    req->regs.x86.rax = regs->eax;
-    req->regs.x86.rcx = regs->ecx;
-    req->regs.x86.rdx = regs->edx;
-    req->regs.x86.rbx = regs->ebx;
-    req->regs.x86.rsp = regs->esp;
-    req->regs.x86.rbp = regs->ebp;
-    req->regs.x86.rsi = regs->esi;
-    req->regs.x86.rdi = regs->edi;
-
-    req->regs.x86.r8  = regs->r8;
-    req->regs.x86.r9  = regs->r9;
-    req->regs.x86.r10 = regs->r10;
-    req->regs.x86.r11 = regs->r11;
-    req->regs.x86.r12 = regs->r12;
-    req->regs.x86.r13 = regs->r13;
-    req->regs.x86.r14 = regs->r14;
-    req->regs.x86.r15 = regs->r15;
-
-    req->regs.x86.rflags = regs->eflags;
-    req->regs.x86.rip    = regs->eip;
-
-    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
-static int hvm_memory_event_traps(uint64_t parameters, vm_event_request_t *req)
-{
-    int rc;
-    struct vcpu *v = current;
-    struct domain *d = v->domain;
-
-    if ( !(parameters & HVMPME_MODE_MASK) )
-        return 0;
-
-    rc = vm_event_claim_slot(d, &d->vm_event->monitor);
-    if ( rc == -ENOSYS )
-    {
-        /* If there was no ring to handle the event, then
-         * simple continue executing normally. */
-        return 1;
-    }
-    else if ( rc < 0 )
-        return rc;
-
-    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
-    {
-        req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
-        vm_event_vcpu_pause(v);
-    }
-
-    hvm_mem_event_fill_regs(req);
-    vm_event_put_request(d, &d->vm_event->monitor, req);
-
-    return 1;
-}
-
-static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
-                                unsigned long old)
-{
-    vm_event_request_t req = {
-        .reason = reason,
-        .vcpu_id = current->vcpu_id,
-        .u.mov_to_cr.new_value = value,
-        .u.mov_to_cr.old_value = old
-    };
-
-    uint64_t parameters = 0 ;
-    switch(reason)
-    {
-    case VM_EVENT_REASON_MOV_TO_CR0:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR3:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR4:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
-        break;
-    };
-
-    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
-        return;
-
-    hvm_memory_event_traps(parameters, &req);
-}
-
-void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
-}
-
-void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
-}
-
-void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
-{
-    hvm_memory_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
-}
-
-void hvm_memory_event_msr(unsigned long msr, unsigned long value)
-{
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_MOV_TO_MSR,
-        .vcpu_id = current->vcpu_id,
-        .u.mov_to_msr.msr = msr,
-        .u.mov_to_msr.value = value,
-    };
-
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
-                           &req);
-}
-
-int hvm_memory_event_int3(unsigned long gla) 
-{
-    uint32_t pfec = PFEC_page_present;
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT,
-        .vcpu_id = current->vcpu_id,
-        .u.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
-    };
-
-    return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
-                                  &req);
-}
-
-int hvm_memory_event_single_step(unsigned long gla)
-{
-    uint32_t pfec = PFEC_page_present;
-    vm_event_request_t req = {
-        .reason = VM_EVENT_REASON_SINGLESTEP,
-        .vcpu_id = current->vcpu_id,
-        .u.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
-    };
-
-    return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
-                                  &req);
-}
-
 int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 {
     if (hvm_funcs.nhvm_vcpu_hostrestore)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 88b7821..3f2a18f 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -52,6 +52,7 @@
 #include <asm/hvm/vpt.h>
 #include <public/hvm/save.h>
 #include <asm/hvm/trace.h>
+#include <asm/hvm/event.h>
 #include <asm/xenoprof.h>
 #include <asm/debugger.h>
 #include <asm/apic.h>
@@ -1979,7 +1980,7 @@ static int vmx_cr_access(unsigned long exit_qualification)
         unsigned long old = curr->arch.hvm_vcpu.guest_cr[0];
         curr->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
         vmx_update_guest_cr(curr, 0);
-        hvm_memory_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
+        hvm_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
         HVMTRACE_0D(CLTS);
         break;
     }
@@ -2831,7 +2832,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
                 break;
             }
             else {
-                int handled = hvm_memory_event_int3(regs->eip);
+                int handled = hvm_event_int3(regs->eip);
                 
                 if ( handled < 0 ) 
                 {
@@ -3149,7 +3150,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
         if ( v->arch.hvm_vcpu.single_step ) {
-          hvm_memory_event_single_step(regs->eip);
+          hvm_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
               domain_pause_for_debugger();
         }
diff --git a/xen/include/asm-x86/hvm/event.h b/xen/include/asm-x86/hvm/event.h
new file mode 100644
index 0000000..1e7bb8b
--- /dev/null
+++ b/xen/include/asm-x86/hvm/event.h
@@ -0,0 +1,40 @@
+/*
+ * event.h: Hardware virtual machine assist events.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#ifndef __ASM_X86_HVM_EVENT_H__
+#define __ASM_X86_HVM_EVENT_H__
+
+/* Called for current VCPU on crX/MSR changes by guest */
+void hvm_event_cr0(unsigned long value, unsigned long old);
+void hvm_event_cr3(unsigned long value, unsigned long old);
+void hvm_event_cr4(unsigned long value, unsigned long old);
+void hvm_event_msr(unsigned long msr, unsigned long value);
+/* Called for current VCPU: returns -1 if no listener */
+int hvm_event_int3(unsigned long gla);
+int hvm_event_single_step(unsigned long gla);
+
+#endif /* __ASM_X86_HVM_EVENT_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index e3d2d9a..c77076a 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -473,17 +473,6 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
 int hvm_x2apic_msr_read(struct vcpu *v, unsigned int msr, uint64_t *msr_content);
 int hvm_x2apic_msr_write(struct vcpu *v, unsigned int msr, uint64_t msr_content);
 
-/* Called for current VCPU on crX changes by guest */
-void hvm_memory_event_cr0(unsigned long value, unsigned long old);
-void hvm_memory_event_cr3(unsigned long value, unsigned long old);
-void hvm_memory_event_cr4(unsigned long value, unsigned long old);
-void hvm_memory_event_msr(unsigned long msr, unsigned long value);
-/* Called for current VCPU on int3: returns -1 if no listener */
-int hvm_memory_event_int3(unsigned long gla);
-
-/* Called for current VCPU on single step: returns -1 if no listener */
-int hvm_memory_event_single_step(unsigned long gla);
-
 /*
  * Nested HVM
  */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 08/13] xen: Introduce monitor_op domctl
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
  2015-02-09 18:53 ` [PATCH V4 09/13] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

In preparation for introspection of ARM and PV domains, the old control
interface via the hvm_op hypercall is retired. A new control mechanism
is introduced via the domctl hypercall: monitor_op.

This patch aims to establish a base API on which future applications can
build.
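
As an illustration, a privileged helper built on the new libxc interface
could toggle events roughly as follows. This is a minimal sketch using only
the xc_monitor_* calls added by this patch; setting up the monitor ring
(e.g. via xc_mem_access_enable()) and full error handling are omitted:

    #include <xenctrl.h>

    /* Request synchronous MOV-TO-CR3 events, delivered only when the
     * value actually changes, plus INT3 (software breakpoint) events. */
    static int start_monitoring(xc_interface *xch, domid_t domid)
    {
        int rc = xc_monitor_mov_to_cr3(xch, domid, 1 /* enable */,
                                       1 /* sync */, 1 /* onchangeonly */);
        if ( rc < 0 )
            return rc;

        return xc_monitor_software_breakpoint(xch, domid, 1 /* enable */);
    }

The helper function name is made up for the example; the calls and their
parameters are those introduced below.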

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
v4: Style fixes
    Only defining struct mov_to_cr and struct debug_event in asm-x86/domain.h
    Add pause/unpause domain wrapper when enabling a monitor option.
---
 tools/libxc/Makefile                |   1 +
 tools/libxc/include/xenctrl.h       |  19 ++++
 tools/libxc/xc_mem_access.c         |   9 +-
 tools/libxc/xc_monitor.c            | 118 +++++++++++++++++++++
 tools/libxc/xc_private.h            |   2 +-
 tools/libxc/xc_vm_event.c           |   7 +-
 tools/tests/xen-access/xen-access.c |  14 +--
 xen/arch/x86/Makefile               |   1 +
 xen/arch/x86/hvm/emulate.c          |   3 +-
 xen/arch/x86/hvm/event.c            |  69 ++++++------
 xen/arch/x86/hvm/hvm.c              |  38 +------
 xen/arch/x86/hvm/vmx/vmcs.c         |   6 +-
 xen/arch/x86/hvm/vmx/vmx.c          |   2 +-
 xen/arch/x86/mm/p2m.c               |   9 --
 xen/arch/x86/monitor.c              | 204 ++++++++++++++++++++++++++++++++++++
 xen/common/domctl.c                 |   9 ++
 xen/common/vm_event.c               |  19 +---
 xen/include/asm-arm/monitor.h       |  13 +++
 xen/include/asm-x86/domain.h        |  28 +++++
 xen/include/asm-x86/hvm/domain.h    |   1 -
 xen/include/asm-x86/monitor.h       |   8 ++
 xen/include/public/domctl.h         |  50 ++++++++-
 xen/include/public/hvm/params.h     |  15 ---
 xen/include/public/vm_event.h       |   2 +-
 xen/xsm/flask/hooks.c               |   3 +
 xen/xsm/flask/policy/access_vectors |   2 +
 26 files changed, 510 insertions(+), 142 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 create mode 100644 xen/arch/x86/monitor.c
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/monitor.h

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 22ba2a1..8b609cf 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -32,6 +32,7 @@ CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
 CTRL_SRCS-y       += xc_tmem.c
 CTRL_SRCS-y       += xc_vm_event.c
+CTRL_SRCS-y       += xc_monitor.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_mem_access.c
 CTRL_SRCS-y       += xc_memshr.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 790db53..3324132 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2308,6 +2308,25 @@ int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
                       uint64_t pfn, xenmem_access_t *access);
 
 /***
+ * Monitor control operations.
+ */
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly);
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int extended_capture);
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id,
+                          unsigned int op);
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   unsigned int op);
+
+/***
  * Memory sharing operations.
  *
  * Unless otherwise noted, these calls return 0 on success, -1 and errno on
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 0a3f0e6..37e776c 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -27,14 +27,7 @@
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
     return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 0);
-}
-
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 1);
+                              port);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
new file mode 100644
index 0000000..9e807d1
--- /dev/null
+++ b/tools/libxc/xc_monitor.c
@@ -0,0 +1,118 @@
+/******************************************************************************
+ *
+ * xc_monitor.c
+ *
+ * Interface to VM event monitor
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include "xc_private.h"
+
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int sync,
+                          unsigned int onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id,
+                          unsigned int op, unsigned int extended_capture)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR;
+    domctl.u.monitor_op.u.mov_to_msr.extended_capture = extended_capture;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   unsigned int op)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id,
+                          unsigned int op)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = op ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.subop = XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP;
+
+    return do_domctl(xch, &domctl);
+}
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 843540c..9f55309 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -430,6 +430,6 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
  */
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection);
+                         uint32_t *port);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index d458b9a..7277e86 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -41,7 +41,7 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 }
 
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection)
+                         uint32_t *port)
 {
     void *ring_page = NULL;
     uint64_t pfn;
@@ -104,10 +104,7 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        if ( enable_introspection )
-            op = XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION;
-        else
-            op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_MONITOR_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index fe1589e..0d1bf78 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -447,13 +447,13 @@ int main(int argc, char *argv[])
     }
 
     if ( int3 )
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_sync);
-    else
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
-    if ( rc < 0 )
     {
-        ERROR("Error %d setting int3 vm_event\n", rc);
-        goto exit;
+        rc = xc_monitor_software_breakpoint(xch, domain_id, 1);
+        if ( rc < 0 )
+        {
+            ERROR("Error %d setting int3 vm_event\n", rc);
+            goto exit;
+        }
     }
 
     /* Wait for access */
@@ -467,7 +467,7 @@ int main(int argc, char *argv[])
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
                                    xenaccess->domain_info->max_pages);
-            rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
+            rc = xc_monitor_software_breakpoint(xch, domain_id, 0);
 
             shutting_down = 1;
         }
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 86ca5f8..37e547c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -36,6 +36,7 @@ obj-y += microcode_intel.o
 # This must come after the vendor specific files.
 obj-y += microcode.o
 obj-y += mm.o
+obj-y += monitor.o
 obj-y += mpparse.o
 obj-y += nmi.o
 obj-y += numa.o
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index fa7175a..6177ca9 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -411,7 +411,8 @@ static int hvmemul_virtual_to_linear(
      * being triggered for repeated writes to a whole page.
      */
     *reps = min_t(unsigned long, *reps,
-                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
+                  unlikely(current->domain->arch
+                            .monitor_options.mov_to_msr.extended_capture)
                            ? 1 : 4096);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
index f3919b0..95f4311 100644
--- a/xen/arch/x86/hvm/event.c
+++ b/xen/arch/x86/hvm/event.c
@@ -55,15 +55,12 @@ static void hvm_event_fill_regs(vm_event_request_t *req)
     req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
 }
 
-static int hvm_event_traps(long parameters, vm_event_request_t *req)
+static int hvm_event_traps(uint8_t sync, vm_event_request_t *req)
 {
     int rc;
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
 
-    if ( !(parameters & HVMPME_MODE_MASK) )
-        return 0;
-
     rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
     switch ( rc )
     {
@@ -79,7 +76,7 @@ static int hvm_event_traps(long parameters, vm_event_request_t *req)
         return rc;
     };
 
-    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
+    if ( sync )
     {
         req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
         vm_event_vcpu_pause(curr);
@@ -92,7 +89,7 @@ static int hvm_event_traps(long parameters, vm_event_request_t *req)
 }
 
 static void hvm_event_cr(uint32_t reason, unsigned long value,
-                                unsigned long old)
+                         unsigned long old, struct mov_to_cr *option)
 {
     vm_event_request_t req = {
         .reason = reason,
@@ -100,43 +97,38 @@ static void hvm_event_cr(uint32_t reason, unsigned long value,
         .u.mov_to_cr.new_value = value,
         .u.mov_to_cr.old_value = old
     };
-    uint64_t parameters = 0;
-
-    switch(reason)
-    {
-    case VM_EVENT_REASON_MOV_TO_CR0:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR3:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
-        break;
-    case VM_EVENT_REASON_MOV_TO_CR4:
-        parameters = current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
-        break;
-    };
 
-    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+    if ( option->onchangeonly && value == old )
         return;
 
-    hvm_event_traps(parameters, &req);
+    hvm_event_traps(option->sync, &req);
 }
 
 void hvm_event_cr0(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr0.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old,
+                     &currd->arch.monitor_options.mov_to_cr0);
 }
 
 void hvm_event_cr3(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr3.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old,
+                     &currd->arch.monitor_options.mov_to_cr3);
 }
 
 void hvm_event_cr4(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old);
+    struct domain *currd = current->domain;
+
+    if ( currd->arch.monitor_options.mov_to_cr4.enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old,
+                     &currd->arch.monitor_options.mov_to_cr4);
 }
 
 void hvm_event_msr(unsigned long msr, unsigned long value)
@@ -148,14 +140,14 @@ void hvm_event_msr(unsigned long msr, unsigned long value)
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
     };
-    long params = current->domain->arch.hvm_domain
-                    .params[HVM_PARAM_MEMORY_EVENT_MSR];
 
-    hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.mov_to_msr.enabled )
+        hvm_event_traps(1, &req);
 }
 
 int hvm_event_int3(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -163,14 +155,16 @@ int hvm_event_int3(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.software_breakpoint.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    long params = curr->domain->arch.hvm_domain
-                    .params[HVM_PARAM_MEMORY_EVENT_INT3];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.software_breakpoint.enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 int hvm_event_single_step(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -178,10 +172,11 @@ int hvm_event_single_step(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.singlestep.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    long params = curr->domain->arch.hvm_domain
-                    .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor_options.singlestep.enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ea8a82d..3210264 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5783,23 +5783,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             case HVM_PARAM_ACPI_IOPORTS_LOCATION:
                 rc = pmtimer_change_ioport(d, a.value);
                 break;
-            case HVM_PARAM_MEMORY_EVENT_CR0:
-            case HVM_PARAM_MEMORY_EVENT_CR3:
-            case HVM_PARAM_MEMORY_EVENT_CR4:
-                if ( d == current->domain )
-                    rc = -EPERM;
-                break;
-            case HVM_PARAM_MEMORY_EVENT_INT3:
-            case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
-            case HVM_PARAM_MEMORY_EVENT_MSR:
-                if ( d == current->domain )
-                {
-                    rc = -EPERM;
-                    break;
-                }
-                if ( a.value & HVMPME_onchangeonly )
-                    rc = -EINVAL;
-                break;
             case HVM_PARAM_NESTEDHVM:
                 rc = xsm_hvm_param_nested(XSM_PRIV, d);
                 if ( rc )
@@ -5858,29 +5841,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             }
 
-            if ( rc == 0 ) 
+            if ( rc == 0 )
             {
                 d->arch.hvm_domain.params[a.index] = a.value;
-
-                switch( a.index )
-                {
-                case HVM_PARAM_MEMORY_EVENT_INT3:
-                case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
-                {
-                    domain_pause(d);
-                    domain_unpause(d); /* Causes guest to latch new status */
-                    break;
-                }
-                case HVM_PARAM_MEMORY_EVENT_CR3:
-                {
-                    for_each_vcpu ( d, v )
-                        hvm_funcs.update_guest_cr(v, 0); /* Latches new CR3 mask through CR0 code */
-                    break;
-                }
-                }
-
             }
-
         }
         else
         {
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 63007a9..17b2ab0 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -714,7 +714,7 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
     if ( msr_bitmap == NULL )
         return;
 
-    if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
+    if ( unlikely(d->arch.monitor_options.mov_to_msr.extended_capture) &&
          vm_event_check_ring(&d->vm_event->monitor) )
     {
         unsigned int i;
@@ -1373,8 +1373,8 @@ void vmx_do_resume(struct vcpu *v)
     }
 
     debug_state = v->domain->debugger_attached
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
+                  || v->domain->arch.monitor_options.software_breakpoint.enabled
+                  || v->domain->arch.monitor_options.singlestep.enabled;
 
     if ( unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 3f2a18f..fcd25df 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1232,7 +1232,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
                 v->arch.hvm_vmx.exec_control |= cr3_ctls;
 
             /* Trap CR3 updates if CR3 memory events are enabled. */
-            if ( v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_CR3] )
+            if ( v->domain->arch.monitor_options.mov_to_cr3.enabled )
                 v->arch.hvm_vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
 
             vmx_update_cpu_exec_control(v);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db332ef..38defdf 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1451,15 +1451,6 @@ void p2m_vm_event_emulate_check(struct vcpu *v, const vm_event_response_t *rsp)
     }
 }
 
-void p2m_setup_introspection(struct domain *d)
-{
-    if ( hvm_funcs.enable_msr_exit_interception )
-    {
-        d->arch.hvm_domain.introspection_enabled = 1;
-        hvm_funcs.enable_msr_exit_interception(d);
-    }
-}
-
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             vm_event_request_t **req_ptr)
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
new file mode 100644
index 0000000..7bfab64
--- /dev/null
+++ b/xen/arch/x86/monitor.c
@@ -0,0 +1,204 @@
+/*
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/mm.h>
+#include <asm/domain.h>
+
+#define DISABLE_OPTION(option)              \
+    do {                                    \
+        if ( !option->enabled )             \
+            return -EFAULT;                 \
+        domain_pause(d);                    \
+        option->enabled = 0;                \
+        domain_unpause(d);                  \
+    } while (0)
+
+#define ENABLE_OPTION(option)               \
+    do {                                    \
+        domain_pause(d);                    \
+        option->enabled = 1;                \
+        domain_unpause(d);                  \
+    } while (0)
+
+int monitor_domctl(struct xen_domctl_monitor_op *domctl, struct domain *d)
+{
+    /*
+     * At the moment only HVM domains are supported. However, event delivery
+     * could be extended to PV domains. See comments below.
+     */
+    if ( !is_hvm_domain(d) )
+        return -ENOSYS;
+
+    if ( domctl->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
+         domctl->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
+        return -EFAULT;
+
+    switch ( domctl->subop )
+    {
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0:
+    {
+        /* Note: could be supported on PV domains. */
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr0;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3:
+    {
+        /* Note: could be supported on PV domains. */
+        struct vcpu *v;
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr3;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        /* Latches new CR3 mask through CR0 code */
+        for_each_vcpu ( d, v )
+            hvm_funcs.update_guest_cr(v, 0);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4:
+    {
+        /* Note: could be supported on PV domains. */
+        struct mov_to_cr *options = &d->arch.monitor_options.mov_to_cr4;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            options->sync = domctl->u.mov_to_cr.sync;
+            options->onchangeonly = domctl->u.mov_to_cr.onchangeonly;
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR:
+    {
+        struct mov_to_msr *options = &d->arch.monitor_options.mov_to_msr;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            if ( domctl->u.mov_to_msr.extended_capture )
+            {
+                if ( hvm_funcs.enable_msr_exit_interception )
+                {
+                    options->extended_capture = 1;
+                    hvm_funcs.enable_msr_exit_interception(d);
+                }
+                else
+                {
+                    return -ENOSYS;
+                }
+            }
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP:
+    {
+        struct debug_event *options = &d->arch.monitor_options.singlestep;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+    case XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT:
+    {
+        struct debug_event *options =
+            &d->arch.monitor_options.software_breakpoint;
+
+        if ( domctl->op == XEN_DOMCTL_MONITOR_OP_ENABLE )
+        {
+            if ( options->enabled )
+                return -EBUSY;
+
+            ENABLE_OPTION(options);
+        }
+        else
+        {
+            DISABLE_OPTION(options);
+        }
+
+        break;
+    }
+
+    default:
+        return -ENOSYS;
+
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 85afd68..d06f4b6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
 #include <asm/irq.h>
 #include <asm/page.h>
 #include <asm/p2m.h>
+#include <asm/monitor.h>
 #include <public/domctl.h>
 #include <xsm/xsm.h>
 
@@ -1178,6 +1179,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     }
     break;
 
+    case XEN_DOMCTL_monitor_op:
+        ret = -EPERM;
+        if ( current->domain == d )
+            break;
+
+        ret = monitor_domctl(&op->u.monitor_op, d);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 57ef58c..5fdac37 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -619,28 +619,17 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         switch( vec->op )
         {
         case XEN_VM_EVENT_MONITOR_ENABLE:
-        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
-        {
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
-                                    HVM_PARAM_MONITOR_RING_PFN,
-                                    mem_access_notification);
-
-            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
-                 && !rc )
-                p2m_setup_introspection(d);
-
-        }
-        break;
+                                 HVM_PARAM_MONITOR_RING_PFN,
+                                 mem_access_notification);
+            break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
-        {
             if ( ved->ring_page )
             {
                 rc = vm_event_disable(d, ved);
-                d->arch.hvm_domain.introspection_enabled = 0;
             }
-        }
-        break;
+            break;
 
         default:
             rc = -ENOSYS;
diff --git a/xen/include/asm-arm/monitor.h b/xen/include/asm-arm/monitor.h
new file mode 100644
index 0000000..ef8f38a
--- /dev/null
+++ b/xen/include/asm-arm/monitor.h
@@ -0,0 +1,13 @@
+#ifndef __ASM_ARM_MONITOR_H__
+#define __ASM_ARM_MONITOR_H__
+
+#include <xen/config.h>
+#include <public/domctl.h>
+
+static inline
+int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d)
+{
+    return -ENOSYS;
+}
+
+#endif /* __ASM_ARM_MONITOR_H__ */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index e0c4b64..8797f96 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -237,6 +237,24 @@ struct time_scale {
     u32 mul_frac;
 };
 
+/************************************************/
+/*            monitor event options             */
+/************************************************/
+struct mov_to_cr {
+    uint8_t enabled;
+    uint8_t sync;
+    uint8_t onchangeonly;
+};
+
+struct mov_to_msr {
+    uint8_t enabled;
+    uint8_t extended_capture;
+};
+
+struct debug_event {
+    uint8_t enabled;
+};
+
 struct pv_domain
 {
     l1_pgentry_t **gdt_ldt_l1tab;
@@ -331,6 +349,16 @@ struct arch_domain
     unsigned long pirq_eoi_map_mfn;
 
     unsigned int psr_rmid; /* RMID assigned to the domain for CMT */
+
+    /* Monitor options */
+    struct {
+        struct mov_to_cr mov_to_cr0;
+        struct mov_to_cr mov_to_cr3;
+        struct mov_to_cr mov_to_cr4;
+        struct mov_to_msr mov_to_msr;
+        struct debug_event singlestep;
+        struct debug_event software_breakpoint;
+    } monitor_options;
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 2757c7f..0f8b19a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -134,7 +134,6 @@ struct hvm_domain {
     bool_t                 mem_sharing_enabled;
     bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
-    bool_t                 introspection_enabled;
 
     /*
      * TSC value that VCPUs use to calculate their tsc_offset value.
diff --git a/xen/include/asm-x86/monitor.h b/xen/include/asm-x86/monitor.h
new file mode 100644
index 0000000..91204b8
--- /dev/null
+++ b/xen/include/asm-x86/monitor.h
@@ -0,0 +1,8 @@
+#ifndef __ASM_X86_MONITOR_H__
+#define __ASM_X86_MONITOR_H__
+
+#include <public/domctl.h>
+
+int monitor_domctl(struct xen_domctl_monitor_op *op, struct domain *d);
+
+#endif /* __ASM_X86_MONITOR_H__ */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index ef373eb..c82a992 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -803,7 +803,6 @@ struct xen_domctl_gdbsx_domstatus {
 
 #define XEN_VM_EVENT_MONITOR_ENABLE                           0
 #define XEN_VM_EVENT_MONITOR_DISABLE                          1
-#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -1001,6 +1000,53 @@ struct xen_domctl_psr_cmt_op {
 typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
 
+/*  XEN_DOMCTL_MONITOR_*
+ *
+ * Enable/disable monitoring various VM events.
+ * This domctl configures what events will be reported to helper apps
+ * via the ring buffer "MONITOR". The ring has to be first enabled
+ * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR.
+ *
+ * NOTICE: mem_access events are also delivered via the "MONITOR" ring buffer;
+ * however, enabling/disabling those events is performed with the use of
+ * memory_op hypercalls!
+ */
+#define XEN_DOMCTL_MONITOR_OP_ENABLE   0
+#define XEN_DOMCTL_MONITOR_OP_DISABLE  1
+
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR0            0
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR3            1
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_CR4            2
+#define XEN_DOMCTL_MONITOR_SUBOP_MOV_TO_MSR            3
+#define XEN_DOMCTL_MONITOR_SUBOP_SINGLESTEP            4
+#define XEN_DOMCTL_MONITOR_SUBOP_SOFTWARE_BREAKPOINT   5
+
+struct xen_domctl_monitor_op {
+    uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
+    uint32_t subop; /* XEN_DOMCTL_MONITOR_SUBOP_* */
+
+    /*
+     * Further options when issuing XEN_DOMCTL_MONITOR_OP_ENABLE.
+     */
+    union {
+        struct {
+            /* Pause vCPU until response */
+            uint8_t sync;
+            /* Send event only on a change of value */
+            uint8_t onchangeonly;
+            uint8_t _pad[6];
+        } mov_to_cr;
+
+        struct {
+            /* Enable the capture of an extended set of MSRs */
+            uint8_t extended_capture;
+            uint8_t _pad[7];
+        } mov_to_msr;
+    } u;
+};
+typedef struct xen_domctl_monitor_op xen_domctl_monitor_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_monitor_op_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1076,6 +1122,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setvnumainfo                  74
 #define XEN_DOMCTL_psr_cmt_op                    75
 #define XEN_DOMCTL_arm_configure_domain          76
+#define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1141,6 +1188,7 @@ struct xen_domctl {
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         struct xen_domctl_vnuma             vnuma;
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
+        struct xen_domctl_monitor_op        monitor_op;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 6efcc0b..5de6a4b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -162,21 +162,6 @@
  */
 #define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
 
-/* Enable blocking memory events, async or sync (pause vcpu until response) 
- * onchangeonly indicates messages only on a change of value */
-#define HVM_PARAM_MEMORY_EVENT_CR0          20
-#define HVM_PARAM_MEMORY_EVENT_CR3          21
-#define HVM_PARAM_MEMORY_EVENT_CR4          22
-#define HVM_PARAM_MEMORY_EVENT_INT3         23
-#define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
-#define HVM_PARAM_MEMORY_EVENT_MSR          30
-
-#define HVMPME_MODE_MASK       (3 << 0)
-#define HVMPME_mode_disabled   0
-#define HVMPME_mode_async      1
-#define HVMPME_mode_sync       2
-#define HVMPME_onchangeonly    (1 << 2)
-
 /* Boolean: Enable nestedhvm (hvm only) */
 #define HVM_PARAM_NESTEDHVM    24
 
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 5667adf..96d599d 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -67,7 +67,7 @@
 #define VM_EVENT_REASON_MOV_TO_CR3              5
 /* CR4 was updated */
 #define VM_EVENT_REASON_MOV_TO_CR4              6
-/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+/* An MSR was updated. */
 #define VM_EVENT_REASON_MOV_TO_MSR              7
 /* Debug operation executed (int3) */
 #define VM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index c419543..266915f 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -691,6 +691,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_access_required:
         return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 
+    case XEN_DOMCTL_monitor_op:
+        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
     case XEN_DOMCTL_gdbsx_pausevcpu:
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 9da3275..35d1c7b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -249,6 +249,8 @@ class hvm
 # HVMOP_inject_trap
     hvmctl
 # XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
     vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 09/13] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 08/13] xen: Introduce monitor_op domctl Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The flag is only used for debugging purposes, thus it should only be checked
for in debug builds of Xen.
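
The resulting pattern in each response-handling loop is the standard NDEBUG
guard; NDEBUG is defined for release (non-debug) builds, so the check
compiles away entirely there:

    #ifndef NDEBUG
        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
            continue;
    #endif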

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/mm/mem_sharing.c | 2 ++
 xen/arch/x86/mm/p2m.c         | 2 ++
 xen/common/mem_access.c       | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 9d796e7..0731261 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -606,8 +606,10 @@ int mem_sharing_sharing_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 38defdf..13a567d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1310,8 +1310,10 @@ void p2m_mem_paging_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index f77f134..15dcbf0 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -44,8 +44,10 @@ void mem_access_resume(struct domain *d)
             continue;
         }
 
+#ifndef NDEBUG
         if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
             continue;
+#endif
 
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 09/13] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
  2015-02-09 18:53 ` [PATCH V4 11/13] xen/vm_event: Relocate memop checks Tamas K Lengyel
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The vm_event subsystem has been artificially tied to the presence of
mem_access. While mem_access does depend on vm_event, vm_event is an entirely
independent subsystem that can be used for arbitrary function-offloading to
helper apps in domains. This patch removes the requirement that mem_access be
supported in order to enable vm_event.

A new vm_event_resume function is introduced which pulls all responses off the
given ring and delegates handling to the appropriate helper functions (where
necessary). By default, vm_event_resume just pulls the response from the ring
and unpauses the corresponding vCPU. This approach reduces code duplication
and presents a single point of entry for the entire vm_event subsystem's
response handling mechanism.
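
In outline, the consolidated routine behaves as sketched below. This is a
simplified rendering of the default path only; the exact signature and the
elided per-reason dispatch (e.g. to p2m_mem_paging_resume()) are
approximations of the actual code:

    void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
    {
        vm_event_response_t rsp;

        /* Pull all responses off the ring. */
        while ( vm_event_get_response(d, ved, &rsp) )
        {
            struct vcpu *v;

            if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
                continue;

            /* Validate the vcpu_id in the response. */
            if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
                continue;

            v = d->vcpu[rsp.vcpu_id];

            /* ... reason-specific handling is delegated here ... */

            /* Unpause the vCPU if it was paused for a sync request. */
            if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                vm_event_vcpu_unpause(v);
        }
    }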

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
v4: Consolidate resume routines into vm_event_resume
    Style fixes
    Sort xen/common/Makefile to be alphabetical
v3: Move ring processing out from mem_access.c to monitor.c in common
---
 xen/arch/x86/mm/mem_sharing.c       | 37 +-----------------
 xen/arch/x86/mm/p2m.c               | 66 ++++++++++---------------------
 xen/common/Makefile                 | 18 ++++-----
 xen/common/mem_access.c             | 36 +----------------
 xen/common/vm_event.c               | 77 +++++++++++++++++++++++++++++++------
 xen/include/asm-x86/mem_sharing.h   |  1 -
 xen/include/asm-x86/p2m.h           |  2 +-
 xen/include/xen/mem_access.h        | 14 +++++--
 xen/include/xen/vm_event.h          | 70 ++++-----------------------------
 xen/include/xsm/dummy.h             |  2 -
 xen/include/xsm/xsm.h               |  4 --
 xen/xsm/dummy.c                     |  2 -
 xen/xsm/flask/hooks.c               | 36 ++++++++---------
 xen/xsm/flask/policy/access_vectors |  8 ++--
 14 files changed, 137 insertions(+), 236 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 0731261..4959407 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -591,40 +591,6 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
     return (unsigned int)atomic_read(&nr_shared_mfns);
 }
 
-int mem_sharing_sharing_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Get all requests off the ring */
-    while ( vm_event_get_response(d, &d->vm_event->share, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!\n");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Unpause domain/vcpu */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-
-    return 0;
-}
-
 /* Functions that change a page's type and ownership */
 static int page_make_sharable(struct domain *d, 
                        struct page_info *page, 
@@ -1475,7 +1441,8 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
         {
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
-            rc = mem_sharing_sharing_resume(d);
+
+            vm_event_resume(d, &d->vm_event->share);
         }
         break;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 13a567d..5ccaede 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1277,13 +1277,13 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 }
 
 /**
- * p2m_mem_paging_resume - Resume guest gfn and vcpus
+ * p2m_mem_paging_resume - Resume guest gfn
  * @d: guest domain
- * @gfn: guest page in paging state
+ * @rsp: vm_event response received
+ *
+ * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw. It is
+ * called by the pager.
  *
- * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw and all
- * waiting vcpus will be unpaused again. It is called by the pager.
- * 
  * The gfn was previously either evicted and populated, or nominated and
  * populated. If the page was evicted the p2mt will be p2m_ram_paging_in. If
  * the page was just nominated the p2mt will be p2m_ram_paging_in_start because
@@ -1291,56 +1291,30 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
  *
  * If the gfn was dropped the vcpu needs to be unpaused.
  */
-void p2m_mem_paging_resume(struct domain *d)
+
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    vm_event_response_t rsp;
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
 
-    /* Pull all responses off the ring */
-    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
+    /* Fix p2m entry if the page was not dropped */
+    if ( !(rsp->flags & VM_EVENT_FLAG_DROP_PAGE) )
     {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!\n");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Fix p2m entry if the page was not dropped */
-        if ( !(rsp.flags & VM_EVENT_FLAG_DROP_PAGE) )
+        uint64_t gfn = rsp->u.mem_access.gfn;
+        gfn_lock(p2m, gfn, 0);
+        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
+        /* Allow only pages which were prepared properly, or pages which
+         * were nominated but not evicted */
+        if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
         {
-            uint64_t gfn = rsp.u.mem_access.gfn;
-            gfn_lock(p2m, gfn, 0);
-            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
-            /* Allow only pages which were prepared properly, or pages which
-             * were nominated but not evicted */
-            if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
-            {
-                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
-                              paging_mode_log_dirty(d) ? p2m_ram_logdirty :
-                              p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), gfn);
-            }
-            gfn_unlock(p2m, gfn, 0);
+            p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                          paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                          p2m_ram_rw, a);
+            set_gpfn_from_mfn(mfn_x(mfn), gfn);
         }
-        /* Unpause domain */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
+        gfn_unlock(p2m, gfn, 0);
     }
 }
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e5bd75b..8d84bc6 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,13 +15,19 @@ obj-y += keyhandler.o
 obj-$(HAS_KEXEC) += kexec.o
 obj-$(HAS_KEXEC) += kimage.o
 obj-y += lib.o
+obj-y += lzo.o
+obj-$(HAS_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
 obj-y += multicall.o
 obj-y += notifier.o
 obj-y += page_alloc.o
+obj-$(HAS_PDX) += pdx.o
 obj-y += preempt.o
 obj-y += random.o
 obj-y += rangeset.o
+obj-y += radix-tree.o
+obj-y += rbtree.o
+obj-y += rcupdate.o
 obj-y += sched_credit.o
 obj-y += sched_credit2.o
 obj-y += sched_sedf.o
@@ -40,21 +46,15 @@ obj-y += sysctl.o
 obj-y += tasklet.o
 obj-y += time.o
 obj-y += timer.o
+obj-y += tmem.o
+obj-y += tmem_xen.o
 obj-y += trace.o
 obj-y += version.o
+obj-y += vm_event.o
 obj-y += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
 obj-y += xmalloc_tlsf.o
-obj-y += rcupdate.o
-obj-y += tmem.o
-obj-y += tmem_xen.o
-obj-y += radix-tree.o
-obj-y += rbtree.o
-obj-y += lzo.o
-obj-$(HAS_PDX) += pdx.o
-obj-$(HAS_MEM_ACCESS) += mem_access.o
-obj-$(HAS_MEM_ACCESS) += vm_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 15dcbf0..a54fe6e 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -29,40 +29,6 @@
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
-void mem_access_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Pull all responses off the ring. */
-    while ( vm_event_get_response(d, &d->vm_event->monitor, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!");
-            continue;
-        }
-
-#ifndef NDEBUG
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-#endif
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        p2m_vm_event_emulate_check(v, &rsp);
-
-        /* Unpause domain. */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-}
-
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
@@ -97,7 +63,7 @@ int mem_access_memop(unsigned long cmd,
             rc = -ENOSYS;
         else
         {
-            mem_access_resume(d);
+            vm_event_resume(d, &d->vm_event->monitor);
             rc = 0;
         }
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 5fdac37..3b8fdd5 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -358,6 +358,67 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved, vm_even
     return 1;
 }
 
+/*
+ * Pull all responses from the given ring and unpause the corresponding vCPU
+ * if required. Based on the response type, here we can also call custom
+ * handlers.
+ *
+ * Note: responses are handled the same way regardless of which ring they
+ * arrive on.
+ */
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
+{
+    vm_event_response_t rsp;
+
+    /* Pull all responses off the ring. */
+    while ( vm_event_get_response(d, ved, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
+        {
+            gdprintk(XENLOG_WARNING, "vm_event interface version mismatch!\n");
+            continue;
+        }
+
+#ifndef NDEBUG
+        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
+            continue;
+#endif
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /*
+         * In some cases the response type needs extra handling, so here
+         * we call the appropriate handlers.
+         */
+        switch ( rsp.reason )
+        {
+
+#ifdef HAS_MEM_ACCESS
+        case VM_EVENT_REASON_MEM_ACCESS:
+            mem_access_resume(v, &rsp);
+            break;
+#endif
+
+#ifdef HAS_MEM_PAGING
+        case VM_EVENT_REASON_MEM_PAGING:
+            p2m_mem_paging_resume(d, &rsp);
+            break;
+#endif
+
+        }
+
+        /* Unpause domain. */
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
+    }
+}
+
 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
 {
     vm_event_ring_lock(ved);
@@ -437,25 +498,23 @@ int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->paging);
 }
 #endif
 
-#ifdef HAS_MEM_ACCESS
 /* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
+static void monitor_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->monitor.ring_page != NULL) )
-        mem_access_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->monitor);
 }
-#endif
 
 #ifdef HAS_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->share);
 }
 #endif
 
@@ -509,12 +568,10 @@ void vm_event_cleanup(struct domain *d)
         (void)vm_event_disable(d, &d->vm_event->paging);
     }
 #endif
-#ifdef HAS_MEM_ACCESS
     if ( d->vm_event->monitor.ring_page ) {
         destroy_waitqueue_head(&d->vm_event->monitor.wq);
         (void)vm_event_disable(d, &d->vm_event->monitor);
     }
-#endif
 #ifdef HAS_MEM_SHARING
     if ( d->vm_event->share.ring_page ) {
         destroy_waitqueue_head(&d->vm_event->share.wq);
@@ -610,7 +667,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
     break;
 #endif
 
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
     {
         struct vm_event_domain *ved = &d->vm_event->monitor;
@@ -621,7 +677,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         case XEN_VM_EVENT_MONITOR_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
-                                 mem_access_notification);
+                                 monitor_notification);
             break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
@@ -637,7 +693,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
     }
     break;
-#endif
 
 #ifdef HAS_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 2f1f3d2..da99d46 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,7 +90,6 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_sharing_resume(struct domain *d);
 int mem_sharing_memop(struct domain *d, 
                        xen_mem_sharing_op_t *mec);
 int mem_sharing_domctl(struct domain *d, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9e14015..5b5a055 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -571,7 +571,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
 /* Resume normal operation (in case a domain was paused) */
-void p2m_mem_paging_resume(struct domain *d);
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
 
 /* Send mem event based on the access (gla is -1ull if not available).  Handles
  * the rw2rx conversion. Boolean return value indicates if access rights have 
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 1d01221..221eca0 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -24,6 +24,7 @@
 #define _XEN_ASM_MEM_ACCESS_H
 
 #include <public/memory.h>
+#include <asm/p2m.h>
 
 #ifdef HAS_MEM_ACCESS
 
@@ -31,8 +32,11 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
-/* Resumes the running of the VCPU, restarting the last instruction */
-void mem_access_resume(struct domain *d);
+static inline
+void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
+{
+    p2m_vm_event_emulate_check(v, rsp);
+}
 
 #else
 
@@ -49,7 +53,11 @@ int mem_access_send_req(struct domain *d, vm_event_request_t *req)
     return -ENOSYS;
 }
 
-static inline void mem_access_resume(struct domain *d) {}
+static inline
+void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
+{
+    /* Nothing to do. */
+}
 
 #endif /* HAS_MEM_ACCESS */
 
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 988ea42..82a6e56 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -26,8 +26,6 @@
 
 #include <xen/sched.h>
 
-#ifdef HAS_MEM_ACCESS
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d);
 
@@ -48,15 +46,15 @@ bool_t vm_event_check_ring(struct vm_event_domain *med);
  * succeed.
  */
 int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *med,
-                            bool_t allow_sleep);
+                          bool_t allow_sleep);
 static inline int vm_event_claim_slot(struct domain *d,
-                                        struct vm_event_domain *med)
+                                      struct vm_event_domain *med)
 {
     return __vm_event_claim_slot(d, med, 1);
 }
 
 static inline int vm_event_claim_slot_nosleep(struct domain *d,
-                                        struct vm_event_domain *med)
+                                              struct vm_event_domain *med)
 {
     return __vm_event_claim_slot(d, med, 0);
 }
@@ -64,72 +62,20 @@ static inline int vm_event_claim_slot_nosleep(struct domain *d,
 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med);
 
 void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
-                            vm_event_request_t *req);
+                          vm_event_request_t *req);
 
 int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
-                           vm_event_response_t *rsp);
+                          vm_event_response_t *rsp);
+
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
 int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+                    XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);
 
-#else
-
-static inline void vm_event_cleanup(struct domain *d) {}
-
-static inline bool_t vm_event_check_ring(struct vm_event_domain *med)
-{
-    return 0;
-}
-
-static inline int vm_event_claim_slot(struct domain *d,
-                                        struct vm_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline int vm_event_claim_slot_nosleep(struct domain *d,
-                                        struct vm_event_domain *med)
-{
-    return -ENOSYS;
-}
-
-static inline
-void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *med)
-{}
-
-static inline
-void vm_event_put_request(struct domain *d, struct vm_event_domain *med,
-                            vm_event_request_t *req)
-{}
-
-static inline
-int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
-                           vm_event_response_t *rsp)
-{
-    return -ENOSYS;
-}
-
-static inline int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    return -ENOSYS;
-}
-
-static inline
-int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
-                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    return -ENOSYS;
-}
-
-static inline void vm_event_vcpu_pause(struct vcpu *v) {}
-static inline void vm_event_vcpu_unpause(struct vcpu *v) {}
-
-#endif /* HAS_MEM_ACCESS */
-
 #endif /* __VM_EVENT_H__ */
 
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 4227093..50ee929 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,7 +513,6 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
@@ -525,7 +524,6 @@ static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
-#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index cff9d35..d56a68f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,10 +141,8 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
-#ifdef HAS_MEM_ACCESS
     int (*vm_event_control) (struct domain *d, int mode, int op);
     int (*vm_event_op) (struct domain *d, int op);
-#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -543,7 +541,6 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
     return xsm_ops->vm_event_control(d, mode, op);
@@ -553,7 +550,6 @@ static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
 {
     return xsm_ops->vm_event_op(d, op);
 }
-#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 25fca68..6d12d32 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,10 +118,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
-#ifdef HAS_MEM_ACCESS
     set_to_dummy_if_null(ops, vm_event_control);
     set_to_dummy_if_null(ops, vm_event_op);
-#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 266915f..c34c793 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,9 +577,7 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_vm_event_op:
-#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -689,10 +687,10 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 
     case XEN_DOMCTL_set_access_required:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_monitor_op:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
@@ -1139,6 +1137,16 @@ static int flask_hvm_param_nested(struct domain *d)
     return current_has_perm(d, SECCLASS_HVM, HVM__NESTED);
 }
 
+static int flask_vm_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
+static int flask_vm_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
 {
@@ -1205,18 +1213,6 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
-#ifdef HAS_MEM_ACCESS
-static int flask_vm_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-
-static int flask_vm_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-#endif /* HAS_MEM_ACCESS */
-
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1584,6 +1580,9 @@ static struct xsm_operations flask_ops = {
     .do_xsm_op = do_flask_op,
     .get_vnumainfo = flask_get_vnumainfo,
 
+    .vm_event_control = flask_vm_event_control,
+    .vm_event_op = flask_vm_event_op,
+
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
 #endif
@@ -1599,11 +1598,6 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
-#ifdef HAS_MEM_ACCESS
-    .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
-#endif
-
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 35d1c7b..d47a28c 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -220,6 +220,10 @@ class domain2
     psr_cmt_op
 # XEN_DOMCTL_configure_domain
     configure_domain
+# XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
+    vm_event
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
@@ -248,10 +252,6 @@ class hvm
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-# XEN_DOMCTL_set_access_required
-# XEN_DOMCLT_monitor_op
-# XEN_DOMCLT_vm_event_op
-    vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
 #  target = domain whose memory is being shared
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 11/13] xen/vm_event: Relocate memop checks
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (9 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
  2015-02-09 18:53 ` [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  12 siblings, 0 replies; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The memop handling for the paging and sharing ops, which is responsible for
copying the guest arguments and calling the XSM check, doesn't really have
anything to do with vm_event, thus in this patch we relocate it into
mem_paging_memop and mem_sharing_memop. This has already been the approach
in mem_access_memop, so this patch simply makes things consistent.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/mm/mem_paging.c      | 42 ++++++++++++++++------
 xen/arch/x86/mm/mem_sharing.c     | 76 ++++++++++++++++++++++++++-------------
 xen/arch/x86/x86_64/compat/mm.c   | 28 +++------------
 xen/arch/x86/x86_64/mm.c          | 26 +++-----------
 xen/common/vm_event.c             | 43 ----------------------
 xen/include/asm-x86/mem_paging.h  |  7 +++-
 xen/include/asm-x86/mem_sharing.h |  4 +--
 xen/include/xen/vm_event.h        |  1 -
 8 files changed, 101 insertions(+), 126 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 68b7fcc..9d2cc4c 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -21,40 +21,62 @@
  */
 
 
+#include <xen/guest_access.h>
 #include <asm/p2m.h>
-#include <xen/vm_event.h>
+#include <xsm/xsm.h>
 
-
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpc)
+int mem_paging_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
 {
+    int rc;
+    xen_mem_paging_op_t mpo;
+    struct domain *d;
+
+    rc = -EFAULT;
+    if ( copy_from_guest(&mpo, arg, 1) )
+        return rc;
+
+    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
+    if ( rc )
+        return rc;
+
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    if ( rc )
+        return rc;
+
+    rc = -ENODEV;
     if ( unlikely(!d->vm_event->paging.ring_page) )
-        return -ENODEV;
+        return rc;
 
-    switch( mpc->op )
+    switch( mpo.op )
     {
     case XENMEM_paging_op_nominate:
     {
-        return p2m_mem_paging_nominate(d, mpc->gfn);
+        rc = p2m_mem_paging_nominate(d, mpo.gfn);
     }
     break;
 
     case XENMEM_paging_op_evict:
     {
-        return p2m_mem_paging_evict(d, mpc->gfn);
+        rc = p2m_mem_paging_evict(d, mpo.gfn);
     }
     break;
 
     case XENMEM_paging_op_prep:
     {
-        unsigned long gfn = mpc->gfn;
-        return p2m_mem_paging_prep(d, gfn, mpc->buffer);
+        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
     }
     break;
 
     default:
-        return -ENOSYS;
+        rc = -ENOSYS;
         break;
     }
+
+    if ( !rc && __copy_to_guest(arg, &mpo, 1) )
+        rc = -EFAULT;
+
+    return rc;
 }
 
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4959407..612ed89 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -28,6 +28,7 @@
 #include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/rcupdate.h>
+#include <xen/guest_access.h>
 #include <xen/vm_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
@@ -1293,30 +1294,52 @@ int relinquish_shared_pages(struct domain *d)
     return rc;
 }
 
-int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
+int mem_sharing_memop(unsigned long cmd,
+                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
-    int rc = 0;
+    int rc;
+    xen_mem_sharing_op_t mso;
+    struct domain *d;
+
+    rc = -EFAULT;
+    if ( copy_from_guest(&mso, arg, 1) )
+        return rc;
+
+    if ( mso.op == XENMEM_sharing_op_audit )
+        return mem_sharing_audit();
+
+    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
+    if ( rc )
+        return rc;
 
     /* Only HAP is supported */
     if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
          return -ENODEV;
 
-    switch(mec->op)
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    if ( rc )
+        return rc;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->vm_event->share.ring_page) )
+        return rc;
+
+    switch(mso.op)
     {
         case XENMEM_sharing_op_nominate_gfn:
         {
-            unsigned long gfn = mec->u.nominate.u.gfn;
+            unsigned long gfn = mso.u.nominate.u.gfn;
             shr_handle_t handle;
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
             rc = mem_sharing_nominate_page(d, gfn, 0, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
         case XENMEM_sharing_op_nominate_gref:
         {
-            grant_ref_t gref = mec->u.nominate.u.grant_ref;
+            grant_ref_t gref = mso.u.nominate.u.grant_ref;
             unsigned long gfn;
             shr_handle_t handle;
 
@@ -1325,7 +1348,7 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( mem_sharing_gref_to_gfn(d, gref, &gfn) < 0 )
                 return -EINVAL;
             rc = mem_sharing_nominate_page(d, gfn, 3, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
@@ -1338,12 +1361,12 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
                 return rc;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
@@ -1356,36 +1379,36 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
                 return -EINVAL;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.source_gfn));
+                                        mso.u.share.source_gfn));
                 if ( mem_sharing_gref_to_gfn(d, gref, &sgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
                     return -EINVAL;
                 }
             } else {
-                sgfn  = mec->u.share.source_gfn;
+                sgfn  = mso.u.share.source_gfn;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.client_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.client_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.client_gfn));
+                                        mso.u.share.client_gfn));
                 if ( mem_sharing_gref_to_gfn(cd, gref, &cgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
                     return -EINVAL;
                 }
             } else {
-                cgfn  = mec->u.share.client_gfn;
+                cgfn  = mso.u.share.client_gfn;
             }
 
-            sh = mec->u.share.source_handle;
-            ch = mec->u.share.client_handle;
+            sh = mso.u.share.source_handle;
+            ch = mso.u.share.client_handle;
 
             rc = mem_sharing_share_pages(d, sgfn, sh, cd, cgfn, ch); 
 
@@ -1402,12 +1425,12 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
                 return rc;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
@@ -1420,16 +1443,16 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
                 return -EINVAL;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 /* Cannot add a gref to the physmap */
                 rcu_unlock_domain(cd);
                 return -EINVAL;
             }
 
-            sgfn    = mec->u.share.source_gfn;
-            sh      = mec->u.share.source_handle;
-            cgfn    = mec->u.share.client_gfn;
+            sgfn    = mso.u.share.source_gfn;
+            sh      = mso.u.share.source_handle;
+            cgfn    = mso.u.share.client_gfn;
 
             rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn); 
 
@@ -1448,14 +1471,14 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
 
         case XENMEM_sharing_op_debug_gfn:
         {
-            unsigned long gfn = mec->u.debug.u.gfn;
+            unsigned long gfn = mso.u.debug.u.gfn;
             rc = mem_sharing_debug_gfn(d, gfn);
         }
         break;
 
         case XENMEM_sharing_op_debug_gref:
         {
-            grant_ref_t gref = mec->u.debug.u.gref;
+            grant_ref_t gref = mso.u.debug.u.gref;
             rc = mem_sharing_debug_gref(d, gref);
         }
         break;
@@ -1465,6 +1488,9 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             break;
     }
 
+    if ( !rc && __copy_to_guest(arg, &mso, 1) )
+        return -EFAULT;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 85f138b..bb03870 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,9 +1,9 @@
 #include <xen/event.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
@@ -187,30 +187,12 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(cmd,
+                                guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(cmd,
+                                 guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1e2bd1a..bdefe5c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,7 +26,6 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
@@ -37,6 +36,7 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 #include <public/memory.h>
 
@@ -984,28 +984,12 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(cmd,
+                                guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(cmd,
+                                 guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 3b8fdd5..5b0d0d2 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -27,15 +27,6 @@
 #include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
-
-#ifdef HAS_MEM_PAGING
-#include <asm/mem_paging.h>
-#endif
-
-#ifdef HAS_MEM_SHARING
-#include <asm/mem_sharing.h>
-#endif
-
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -518,40 +509,6 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 }
 #endif
 
-int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_vm_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-#ifdef HAS_MEM_PAGING
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, arg);
-            break;
-#endif
-#ifdef HAS_MEM_SHARING
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, arg);
-            break;
-#endif
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d)
 {
diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
index 92ed2fa..9d8bc67 100644
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -20,8 +20,13 @@
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
  */
 
+#ifndef __MEM_PAGING_H__
+#define __MEM_PAGING_H__
 
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *meo);
+int mem_paging_memop(unsigned long cmd,
+                     XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
+
+#endif /*__MEM_PAGING_H__ */
 
 
 /*
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index da99d46..51d4364 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,8 +90,8 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_memop(struct domain *d, 
-                       xen_mem_sharing_op_t *mec);
+int mem_sharing_memop(unsigned long cmd,
+                      XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
 int mem_sharing_domctl(struct domain *d, 
                        xen_domctl_mem_sharing_op_t *mec);
 int mem_sharing_audit(void);
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 82a6e56..871a519 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -69,7 +69,6 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *med,
 
 void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
-int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *mec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (10 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 11/13] xen/vm_event: Relocate memop checks Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
  2015-02-09 18:53 ` [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

The XSM label vm_event_op has been used to control the three memops governing
mem_access, mem_paging and mem_sharing. While these subsystems rely on
vm_event, the memops are not vm_event operations themselves. Thus, in this
patch we introduce a separate XSM label for each of these memops.
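
As a hypothetical illustration (FLASK policy syntax; the domain types below
are made up and not part of the patch), the new vectors let a policy grant a
tool domain the access-control memop while withholding paging and sharing:

    # Grant ring setup plus XENMEM_access_op only; XENMEM_paging_op and
    # XENMEM_sharing_op remain denied for this pairing.
    allow introspector_t domU_t:domain2 { vm_event mem_access };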

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/mm/mem_paging.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c       |  2 +-
 xen/common/mem_access.c             |  2 +-
 xen/include/xsm/dummy.h             | 20 +++++++++++++++++++-
 xen/include/xsm/xsm.h               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/dummy.c                     | 13 ++++++++++++-
 xen/xsm/flask/hooks.c               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/flask/policy/access_vectors |  6 ++++++
 8 files changed, 100 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 9d2cc4c..0856c23 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -40,7 +40,7 @@ int mem_paging_memop(unsigned long cmd,
     if ( rc )
         return rc;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    rc = xsm_mem_paging(XSM_DM_PRIV, d);
     if ( rc )
         return rc;
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 612ed89..e3ebc05 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1316,7 +1316,7 @@ int mem_sharing_memop(unsigned long cmd,
     if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
          return -ENODEV;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    rc = xsm_mem_sharing(XSM_DM_PRIV, d);
     if ( rc )
         return rc;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index a54fe6e..426f766 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -48,7 +48,7 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    rc = xsm_mem_access(XSM_DM_PRIV, d);
     if ( rc )
         goto out;
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 50ee929..16967ed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -519,11 +519,29 @@ static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index d56a68f..2a88d84 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -142,7 +142,18 @@ struct xsm_operations {
     int (*get_vnumainfo) (struct domain *d);
 
     int (*vm_event_control) (struct domain *d, int mode, int op);
-    int (*vm_event_op) (struct domain *d, int op);
+
+#ifdef HAS_MEM_ACCESS
+    int (*mem_access) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    int (*mem_paging) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    int (*mem_sharing) (struct domain *d);
+#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -546,10 +557,26 @@ static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int
     return xsm_ops->vm_event_control(d, mode, op);
 }
 
-static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->vm_event_op(d, op);
+    return xsm_ops->mem_access(d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_paging(d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_sharing(d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6d12d32..3ddb4f6 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,7 +119,18 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
     set_to_dummy_if_null(ops, vm_event_control);
-    set_to_dummy_if_null(ops, vm_event_op);
+
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_access);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    set_to_dummy_if_null(ops, mem_paging);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    set_to_dummy_if_null(ops, mem_sharing);
+#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index c34c793..07da2c4 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1142,10 +1142,26 @@ static int flask_vm_event_control(struct domain *d, int mode, int op)
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 }
 
-static int flask_vm_event_op(struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_access(struct domain *d)
 {
-    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_ACCESS);
+}
+#endif
+
+#ifdef HAS_MEM_PAGING
+static int flask_mem_paging(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_PAGING);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static int flask_mem_sharing(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
 }
+#endif
 
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
@@ -1581,7 +1597,18 @@ static struct xsm_operations flask_ops = {
     .get_vnumainfo = flask_get_vnumainfo,
 
     .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
+
+#ifdef HAS_MEM_ACCESS
+    .mem_access = flask_mem_access,
+#endif
+
+#ifdef HAS_MEM_PAGING
+    .mem_paging = flask_mem_paging,
+#endif
+
+#ifdef HAS_MEM_SHARING
+    .mem_sharing = flask_mem_sharing,
+#endif
 
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index d47a28c..11d2d31 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -224,6 +224,12 @@ class domain2
 # XEN_DOMCTL_monitor_op
 # XEN_DOMCTL_vm_event_op
     vm_event
+# XENMEM_access_op
+    mem_access
+# XENMEM_paging_op
+    mem_paging
+# XENMEM_sharing_op
+    mem_sharing
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (11 preceding siblings ...)
  2015-02-09 18:53 ` [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
@ 2015-02-09 18:53 ` Tamas K Lengyel
  2015-02-13 12:12   ` Wei Liu
  12 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-09 18:53 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy

Thus far the mem_access and mem_sharing memops have been able to signal to
Xen to start pulling responses off the corresponding rings. In this patch we
retire those memops and instead add a RESUME option to the vm_event_op domctl.

The vm_event_op domctl suboptions are the same for each ring, thus we
consolidate them into XEN_VM_EVENT_ENABLE/DISABLE/RESUME.

As part of this patch we also rename the libxc mem_access_enable/disable
functions to monitor_enable/disable and move them into xc_monitor.c.
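
For illustration (an editorial sketch, not part of the patch), a monitor
client built against the renamed libxc API would follow roughly this
sequence; xch and domain_id are assumed to be set up already and error
handling is elided:

    /* Sketch of a client using the renamed monitor calls. */
    uint32_t port;
    void *ring_page;

    ring_page = xc_monitor_enable(xch, domain_id, &port);
    /* ... bind the event channel, consume requests, queue responses ... */
    xc_monitor_resume(xch, domain_id);   /* ask Xen to pull responses   */
    xc_monitor_disable(xch, domain_id);  /* tear down; unmap ring_page  */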

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 tools/libxc/include/xenctrl.h       | 22 ++++++++++-----------
 tools/libxc/xc_mem_access.c         | 25 ------------------------
 tools/libxc/xc_mem_paging.c         | 12 ++++++++++--
 tools/libxc/xc_memshr.c             | 15 ++++++--------
 tools/libxc/xc_monitor.c            | 22 +++++++++++++++++++++
 tools/libxc/xc_vm_event.c           |  6 +++---
 tools/tests/xen-access/xen-access.c | 12 ++++++------
 xen/arch/x86/mm/mem_sharing.c       |  9 ---------
 xen/common/mem_access.c             |  9 ---------
 xen/common/vm_event.c               | 39 +++++++++++++++++++++++++++++++------
 xen/include/public/domctl.h         | 32 ++++++++++++++----------------
 xen/include/public/memory.h         | 16 +++++++--------
 12 files changed, 113 insertions(+), 106 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 3324132..3042e98 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2269,6 +2269,7 @@ int xc_tmem_restore_extra(xc_interface *xch, int dom, int fd);
  */
 int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id);
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id);
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id,
                            unsigned long gfn);
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn);
@@ -2282,17 +2283,6 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
  */
 
 /*
- * Enables mem_access and returns the mapped ring page.
- * Will return NULL on error.
- * Caller has to unmap this page when done.
- */
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port);
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id);
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id);
-
-/*
  * Set a range of memory to a specific access.
  * Allowed types are XENMEM_access_default, XENMEM_access_n, any combination of
  * XENMEM_access_ + (rwx), and XENMEM_access_rx2rw
@@ -2309,7 +2299,17 @@ int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
 
 /***
  * Monitor control operations.
+ *
+ * Enables the VM event monitor ring and returns the mapped ring page.
+ * This ring is used to deliver mem_access events, as well a set of additional
+ * events that can be enabled with the xc_monitor_* functions.
+ *
+ * Will return NULL on error.
+ * Caller has to unmap this page when done.
  */
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id);
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id);
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
                           unsigned int op, unsigned int sync,
                           unsigned int onchangeonly);
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 37e776c..7050692 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -24,31 +24,6 @@
 #include "xc_private.h"
 #include <xen/memory.h>
 
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port);
-}
-
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
-{
-    return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_MONITOR_DISABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
-                               NULL);
-}
-
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id)
-{
-    xen_mem_access_op_t mao =
-    {
-        .op    = XENMEM_access_op_resume,
-        .domid = domain_id
-    };
-
-    return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
-}
-
 int xc_set_mem_access(xc_interface *xch,
                       domid_t domain_id,
                       xenmem_access_t access,
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index b635a4d..9e64190 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -48,7 +48,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
     }
 
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                port);
 }
@@ -56,7 +56,15 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               NULL);
+}
+
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                NULL);
 }
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 14cc1ce..0960c6d 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     }
 
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                port);
 }
@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
                            domid_t domid)
 {
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                NULL);
 }
@@ -185,13 +185,10 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
 int xc_memshr_domain_resume(xc_interface *xch,
                             domid_t domid)
 {
-    xen_mem_sharing_op_t mso;
-
-    memset(&mso, 0, sizeof(mso));
-
-    mso.op = XENMEM_sharing_op_resume;
-
-    return xc_memshr_memop(xch, domid, &mso);
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               NULL);
 }
 
 int xc_memshr_debug_gfn(xc_interface *xch,
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index 9e807d1..a4d7b7a 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -23,6 +23,28 @@
 
 #include "xc_private.h"
 
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
+{
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port);
+}
+
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id,
                           unsigned int op, unsigned int sync,
                           unsigned int onchangeonly)
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 7277e86..a5b3277 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -99,17 +99,17 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_VM_EVENT_PAGING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_VM_EVENT_SHARING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 0d1bf78..b1197ad 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -124,8 +124,8 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
 
     if ( mem_access_enable )
     {
-        rc = xc_mem_access_disable(xenaccess->xc_handle,
-                                   xenaccess->vm_event.domain_id);
+        rc = xc_monitor_disable(xenaccess->xc_handle,
+                                xenaccess->vm_event.domain_id);
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
@@ -196,9 +196,9 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
 
     /* Enable mem_access */
     xenaccess->vm_event.ring_page =
-            xc_mem_access_enable(xenaccess->xc_handle,
-                                 xenaccess->vm_event.domain_id,
-                                 &xenaccess->vm_event.evtchn_port);
+            xc_monitor_enable(xenaccess->xc_handle,
+                              xenaccess->vm_event.domain_id,
+                              &xenaccess->vm_event.evtchn_port);
     if ( xenaccess->vm_event.ring_page == NULL )
     {
         switch ( errno ) {
@@ -322,7 +322,7 @@ static int xenaccess_resume_page(xenaccess_t *paging, vm_event_response_t *rsp)
     put_response(&paging->vm_event, rsp);
 
     /* Tell Xen page is ready */
-    ret = xc_mem_access_resume(paging->xc_handle, paging->vm_event.domain_id);
+    ret = xc_monitor_resume(paging->xc_handle, paging->vm_event.domain_id);
     ret = xc_evtchn_notify(paging->vm_event.xce_handle,
                            paging->vm_event.port);
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e3ebc05..0a2fbb6 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1460,15 +1460,6 @@ int mem_sharing_memop(unsigned long cmd,
         }
         break;
 
-        case XENMEM_sharing_op_resume:
-        {
-            if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
-
-            vm_event_resume(d, &d->vm_event->share);
-        }
-        break;
-
         case XENMEM_sharing_op_debug_gfn:
         {
             unsigned long gfn = mso.u.debug.u.gfn;
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 426f766..1ece9b5 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -58,15 +58,6 @@ int mem_access_memop(unsigned long cmd,
 
     switch ( mao.op )
     {
-    case XENMEM_access_op_resume:
-        if ( unlikely(start_iter) )
-            rc = -ENOSYS;
-        else
-        {
-            vm_event_resume(d, &d->vm_event->monitor);
-            rc = 0;
-        }
-        break;
 
     case XENMEM_access_op_set_access:
         rc = -EINVAL;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 5b0d0d2..77d2663 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -579,7 +579,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_PAGING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -609,13 +609,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
         break;
 
-        case XEN_VM_EVENT_PAGING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
         {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
         }
         break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -631,19 +640,28 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_MONITOR_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
                                  monitor_notification);
             break;
 
-        case XEN_VM_EVENT_MONITOR_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
             if ( ved->ring_page )
             {
                 rc = vm_event_disable(d, ved);
             }
             break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -659,7 +677,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_SHARING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
         {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
@@ -677,13 +695,22 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
         break;
 
-        case XEN_VM_EVENT_SHARING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
         {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
         }
         break;
 
+        case XEN_VM_EVENT_RESUME:
+        {
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+        }
+        break;
+
         default:
             rc = -ENOSYS;
             break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index c82a992..68c53cc 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -756,6 +756,17 @@ struct xen_domctl_gdbsx_domstatus {
 /* XEN_DOMCTL_vm_event_op */
 
 /*
+ * There are currently three rings available for VM events:
+ * sharing, monitor and paging. This hypercall allows one to
+ * control these rings (enable/disable), as well as to signal
+ * to the hypervisor to pull responses (resume) from the given
+ * ring.
+ */
+#define XEN_VM_EVENT_ENABLE               0
+#define XEN_VM_EVENT_DISABLE              1
+#define XEN_VM_EVENT_RESUME               2
+
+/*
  * Domain memory paging
  * Page memory in and out.
  * Domctl interface to set up and tear down the 
@@ -771,9 +782,6 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
 
-#define XEN_VM_EVENT_PAGING_ENABLE               0
-#define XEN_VM_EVENT_PAGING_DISABLE              1
-
 /*
  * Monitor helper.
  *
@@ -785,24 +793,17 @@ struct xen_domctl_gdbsx_domstatus {
  * of every page in a domain.  When one of these permissions--independent,
  * read, write, and execute--is violated, the VCPU is paused and a memory event
  * is sent with what happened. The memory event handler can then resume the
- * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
+ * VCPU and redo the access with a XEN_VM_EVENT_RESUME option.
  *
  * See public/vm_event.h for the list of available events that can be
  * subscribed to via the monitor interface.
  *
- * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
- * interface with the XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
- * operator.
- *
- * The XEN_VM_EVENT_MONITOR_ENABLE* domctls return several
+ * The XEN_VM_EVENT_MONITOR_* domctls return
  * non-standard error codes to indicate why access could not be enabled:
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-#define XEN_DOMCTL_VM_EVENT_OP_MONITOR                        2
-
-#define XEN_VM_EVENT_MONITOR_ENABLE                           0
-#define XEN_VM_EVENT_MONITOR_DISABLE                          1
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR           2
 
 /*
  * Sharing ENOMEM helper.
@@ -819,13 +820,10 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
 
-#define XEN_VM_EVENT_SHARING_ENABLE              0
-#define XEN_VM_EVENT_SHARING_DISABLE             1
-
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
 struct xen_domctl_vm_event_op {
-    uint32_t       op;           /* XEN_VM_EVENT_*_* */
+    uint32_t       op;           /* XEN_VM_EVENT_* */
     uint32_t       mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index e0cca46..329b690 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -385,9 +385,8 @@ typedef struct xen_mem_paging_op xen_mem_paging_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 
 #define XENMEM_access_op                    21
-#define XENMEM_access_op_resume             0
-#define XENMEM_access_op_set_access         1
-#define XENMEM_access_op_get_access         2
+#define XENMEM_access_op_set_access         0
+#define XENMEM_access_op_get_access         1
 
 typedef enum {
     XENMEM_access_n,
@@ -438,12 +437,11 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_nominate_gfn      0
 #define XENMEM_sharing_op_nominate_gref     1
 #define XENMEM_sharing_op_share             2
-#define XENMEM_sharing_op_resume            3
-#define XENMEM_sharing_op_debug_gfn         4
-#define XENMEM_sharing_op_debug_mfn         5
-#define XENMEM_sharing_op_debug_gref        6
-#define XENMEM_sharing_op_add_physmap       7
-#define XENMEM_sharing_op_audit             8
+#define XENMEM_sharing_op_debug_gfn         3
+#define XENMEM_sharing_op_debug_mfn         4
+#define XENMEM_sharing_op_debug_gref        5
+#define XENMEM_sharing_op_add_physmap       6
+#define XENMEM_sharing_op_audit             7
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.1.4
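
For reference, a minimal sketch (untested) of how a helper application
might drive the renamed libxc monitor interface after this series; event
channel binding and the actual ring walk are elided:

    #include <xenctrl.h>

    static int monitor_attach(domid_t domid)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        uint32_t port;
        void *ring_page;

        if ( !xch )
            return -1;

        /* Replaces xc_mem_access_enable(). */
        ring_page = xc_monitor_enable(xch, domid, &port);
        if ( !ring_page )
        {
            xc_interface_close(xch);
            return -1;
        }

        /* ... bind 'port' and consume requests off 'ring_page' ... */

        /* Replaces the retired XENMEM_access_op_resume memop. */
        xc_monitor_resume(xch, domid);

        xc_monitor_disable(xch, domid);
        xc_interface_close(xch);
        return 0;
    }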

* Re: [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels
  2015-02-09 18:53 ` [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
@ 2015-02-09 20:09   ` Daniel De Graaf
  0 siblings, 0 replies; 31+ messages in thread
From: Daniel De Graaf @ 2015-02-09 20:09 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	rshriram, keir, yanghy

On 02/09/2015 01:53 PM, Tamas K Lengyel wrote:
> The XSM label vm_event_op has been used to control the three memops
> controlling mem_access, mem_paging and mem_sharing. While these systems
> rely on vm_event, these are not vm_event operations themselves. Thus,
> in this patch we introduce three separate labels, one for each of these memops.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

-- 
Daniel De Graaf
National Security Agency

* Re: [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access.
  2015-02-09 18:53 ` [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
@ 2015-02-09 20:09   ` Daniel De Graaf
  0 siblings, 0 replies; 31+ messages in thread
From: Daniel De Graaf @ 2015-02-09 20:09 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	rshriram, keir, yanghy

On 02/09/2015 01:53 PM, Tamas K Lengyel wrote:
> The vm_event subsystem has been artificially tied to the presence of mem_access.
> While mem_access does depend on vm_event, vm_event is an entirely independent
> subsystem that can be used for arbitrary function-offloading to helper apps in
> domains. This patch removes the dependency that mem_access needs to be supported
> in order to enable vm_event.
>
> A new vm_event_resume function is introduced which pulls all responses off
> the given ring and delegates handling to the appropriate helper functions (if
> necessary). By default, vm_event_resume just pulls the response from the ring
> and unpauses the corresponding vCPU. This approach reduces code duplication
> and present a single point of entry for the entire vm_event subsystem's
> response handling mechanism.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Typos from patch 8 have propagated. Otherwise:

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
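
For reference, the response-pulling pattern the commit message describes
amounts to roughly the following (an illustrative sketch only, using the
series' naming conventions, not the verbatim code):

    static void vm_event_resume_sketch(struct domain *d,
                                       struct vm_event_domain *ved)
    {
        vm_event_response_t rsp;

        /* Pull every queued response off the given ring. */
        while ( vm_event_get_response(d, ved, &rsp) )
        {
            struct vcpu *v;

            if ( rsp.vcpu_id >= d->max_vcpus )
                continue;
            v = d->vcpu[rsp.vcpu_id];

            /* Ring-specific handling (paging/sharing/access helpers)
             * would be delegated from here when necessary. */

            /* Default action: unpause the vCPU that sent the request. */
            if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                vm_event_vcpu_unpause(v);
        }
    }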

* Re: [PATCH V4 08/13] xen: Introduce monitor_op domctl
  2015-02-09 18:53 ` [PATCH V4 08/13] xen: Introduce monitor_op domctl Tamas K Lengyel
@ 2015-02-09 20:09   ` Daniel De Graaf
  0 siblings, 0 replies; 31+ messages in thread
From: Daniel De Graaf @ 2015-02-09 20:09 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	rshriram, keir, yanghy

On 02/09/2015 01:53 PM, Tamas K Lengyel wrote:
> In preparation for allowing introspection of ARM and PV domains, the old
> control interface via the hvm_op hypercall is retired. A new control mechanism
> is introduced via the domctl hypercall: monitor_op.
>
> This patch aims to establish a base API on which future applications can
> build.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>

One minor typo, then:
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

[...]
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 9da3275..35d1c7b 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -249,6 +249,8 @@ class hvm
>   # HVMOP_inject_trap
>       hvmctl
>   # XEN_DOMCTL_set_access_required
> +# XEN_DOMCLT_monitor_op
> +# XEN_DOMCLT_vm_event_op
>       vm_event
>   # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
>   #  source = the domain making the hypercall

DOMCLT => DOMCTL

-- 
Daniel De Graaf
National Security Agency

* Re: [PATCH V4 05/13] xen: Rename mem_event to vm_event
  2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
@ 2015-02-09 20:09   ` Daniel De Graaf
  2015-02-10 13:06   ` Jan Beulich
  2015-02-13 12:13   ` Wei Liu
  2 siblings, 0 replies; 31+ messages in thread
From: Daniel De Graaf @ 2015-02-09 20:09 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	rshriram, keir, yanghy

On 02/09/2015 01:53 PM, Tamas K Lengyel wrote:
> In this patch we mechanically rename mem_event to vm_event. This patch
> introduces no logic changes to the code. Using the name vm_event better
> describes the intended use of this subsystem, which is not limited to memory
> events. It can be used for off-loading the decision making logic into helper
> applications when encountering various events during a VM's execution.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-09 18:53 ` [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
@ 2015-02-10 12:52   ` Jan Beulich
  2015-02-10 13:50     ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 12:52 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, Razvan Cojocaru,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, steve,
	andres, jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
> +static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
> +                                unsigned long old)
> +{
> +    mem_event_request_t req = {
> +        .reason = reason,
> +        .vcpu_id = current->vcpu_id,
> +        .u.mov_to_cr.new_value = value,
> +        .u.mov_to_cr.old_value = old
> +    };
> +
> +    uint64_t parameters = 0 ;
> +    switch(reason)

Blank line between declarations and statements please. And no blank
before a semicolon.

> +    {
> +    case MEM_EVENT_REASON_MOV_TO_CR0:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
> +        break;
> +    case MEM_EVENT_REASON_MOV_TO_CR3:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
> +        break;
> +    case MEM_EVENT_REASON_MOV_TO_CR4:
> +        parameters = current->domain->arch.hvm_domain
> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
> +        break;
> +    };

I think you should bail in the default case; perhaps add an
ASSERT_UNREACHABLE(). And with that I think readability (and
maybe even generated code) would benefit if you just latched
the index into .params[].
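
I.e. something along these lines (untested sketch):

    unsigned int param;

    switch ( reason )
    {
    case MEM_EVENT_REASON_MOV_TO_CR0:
        param = HVM_PARAM_MEMORY_EVENT_CR0;
        break;
    case MEM_EVENT_REASON_MOV_TO_CR3:
        param = HVM_PARAM_MEMORY_EVENT_CR3;
        break;
    case MEM_EVENT_REASON_MOV_TO_CR4:
        param = HVM_PARAM_MEMORY_EVENT_CR4;
        break;
    default:
        ASSERT_UNREACHABLE();
        return;
    }

    parameters = current->domain->arch.hvm_domain.params[param];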

> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>      {
>          struct vcpu *v;
>  
> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
> +        {
> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");

Why gdprintk()?

> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>          /* Fix p2m entry if the page was not dropped */
>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>          {
> -            gfn_lock(p2m, rsp.gfn, 0);
> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
> +            uint64_t gfn = rsp.u.mem_access.gfn;
> +            gfn_lock(p2m, gfn, 0);

Blank line between declarations and statements. Also - why uint64_t
instead of just unsigned long?

> +/* Reasons for the vm event request */
> +/* Default case */
> +#define MEM_EVENT_REASON_UNKNOWN                 0
> +/* Memory access violation */
> +#define MEM_EVENT_REASON_MEM_ACCESS              1
> +/* Memory sharing event */
> +#define MEM_EVENT_REASON_MEM_SHARING             2
> +/* Memory paging event */
> +#define MEM_EVENT_REASON_MEM_PAGING              3
> +/* CR0 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR0              4
> +/* CR3 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR3              5
> +/* CR4 was updated */
> +#define MEM_EVENT_REASON_MOV_TO_CR4              6
> +/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
> +#define MEM_EVENT_REASON_MOV_TO_MSR              7
> +/* Debug operation executed (int3) */

If you absolutely want to mention architecture specific things in a
generic header, please make this an example (i.e. insert "e.g."
above).

> +#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
> +/* Single-step (MTF) */

Same here then (this one is even VT-x specific).

> @@ -97,31 +112,74 @@ struct mem_event_regs_x86 {
>      uint32_t _pad;
>  };
>  
> -typedef struct mem_event_st {
> -    uint32_t flags;
> -    uint32_t vcpu_id;
> -
> +struct mem_event_mem_access_data {

Do you really need all these _data tags?

Jan

* Re: [PATCH V4 02/13] xen/mem_event: Cleanup mem_event ring names and domctls
  2015-02-09 18:53 ` [PATCH V4 02/13] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
@ 2015-02-10 12:56   ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 12:56 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
> The name of one of the mem_event rings still implies it is used only
> for memory accesses, which is no longer the case. It is also used to
> deliver various HVM events, thus the name "monitor" is more appropriate
> in this setting.
> 
> The mem_event subop definitions are also shortened to be more meaningful.
> 
> The tool side changes are only mechanical renaming to match these new names.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

For the small parts this is relevant for and with one remark (below):
Acked-by: Jan Beulich <jbeulich@suse.com>

> @@ -510,9 +510,9 @@ void mem_event_cleanup(struct domain *d)
>      }
>  #endif
>  #ifdef HAS_MEM_ACCESS
> -    if ( d->mem_event->access.ring_page ) {
> -        destroy_waitqueue_head(&d->mem_event->access.wq);
> -        (void)mem_event_disable(d, &d->mem_event->access);
> +    if ( d->mem_event->monitor.ring_page ) {

Please fix coding style issues when you need to touch code anyway.

Jan

* Re: [PATCH V4 03/13] xen/mem_paging: Convert mem_event_op to mem_paging_op
  2015-02-09 18:53 ` [PATCH V4 03/13] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
@ 2015-02-10 13:00   ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 13:00 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
> ---
> v4: Style fixes

But why did you not do all of them then?

> --- a/xen/arch/x86/mm/mem_paging.c
> +++ b/xen/arch/x86/mm/mem_paging.c
> @@ -25,31 +25,29 @@
>  #include <xen/mem_event.h>
>  
>  
> -int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
> +int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpc)
>  {
>      if ( unlikely(!d->mem_event->paging.ring_page) )
>          return -ENODEV;
>  
> -    switch( mec->op )
> +    switch( mpc->op )
>      {
>      case XENMEM_paging_op_nominate:
>      {
> -        unsigned long gfn = mec->gfn;
> -        return p2m_mem_paging_nominate(d, gfn);
> +        return p2m_mem_paging_nominate(d, mpc->gfn);
>      }
>      break;

I'm relatively certain I said this before: The braces now become
pointless and hence should be dropped.
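
I.e. simply:

    case XENMEM_paging_op_nominate:
        return p2m_mem_paging_nominate(d, mpc->gfn);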

Again for the parts this is applicable to,
Acked-by: Jan Beulich <jbeulich@suse.com>
provided the above gets adjusted throughout.

Jan

* Re: [PATCH V4 05/13] xen: Rename mem_event to vm_event
  2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
@ 2015-02-10 13:06   ` Jan Beulich
  2015-02-13 12:13   ` Wei Liu
  2 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 13:06 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
> In this patch we mechanically rename mem_event to vm_event. This patch
> introduces no logic changes to the code. Using the name vm_event better
> describes the intended use of this subsystem, which is not limited to memory
> events. It can be used for off-loading the decision making logic into helper
> applications when encountering various events during a VM's execution.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

* Re: [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions
  2015-02-09 18:53 ` [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
@ 2015-02-10 13:15   ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 13:15 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, steve,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, andres,
	jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
> +static int hvm_event_traps(long parameters, vm_event_request_t *req)

Please apply more care: The original function's parameter type
changed in v4, so you shouldn't blindly drop this change here.

> +{
> +    int rc;
> +    struct vcpu *curr = current;
> +    struct domain *currd = curr->domain;
> +
> +    if ( !(parameters & HVMPME_MODE_MASK) )
> +        return 0;
> +
> +    rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
> +    switch ( rc )
> +    {
> +    case 0:
> +        break;
> +    case -ENOSYS:
> +        /*
> +         * If there was no ring to handle the event, then
> +         * simple continue executing normally.

simply

> +void hvm_event_msr(unsigned long msr, unsigned long value)

I realize you just move this, but I have a hard time seeing why either
parameter type would be unsigned long.

> +{
> +    struct vcpu *curr = current;
> +    vm_event_request_t req = {
> +        .reason = VM_EVENT_REASON_MOV_TO_MSR,
> +        .vcpu_id = curr->vcpu_id,
> +        .u.mov_to_msr.msr = msr,
> +        .u.mov_to_msr.value = value,
> +    };
> +    long params = current->domain->arch.hvm_domain
> +                    .params[HVM_PARAM_MEMORY_EVENT_MSR];

Again: long? (Please apply comments given to all instances.)
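
The MSR index is 32 bits wide and the value 64, so presumably (sketch):

    void hvm_event_msr(unsigned int msr, uint64_t value);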

Jan

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 12:52   ` Jan Beulich
@ 2015-02-10 13:50     ` Tamas K Lengyel
  2015-02-10 16:17       ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-10 13:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Steven Maresca, Andres Lagar-Cavilla, Jun Nakajima, rshriram,
	Keir Fraser, Daniel De Graaf, yanghy

On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>> +static void hvm_memory_event_cr(uint32_t reason, unsigned long value,
>> +                                unsigned long old)
>> +{
>> +    mem_event_request_t req = {
>> +        .reason = reason,
>> +        .vcpu_id = current->vcpu_id,
>> +        .u.mov_to_cr.new_value = value,
>> +        .u.mov_to_cr.old_value = old
>> +    };
>> +
>> +    uint64_t parameters = 0 ;
>> +    switch(reason)
>
> Blank line between declarations and statements please. And no blank
> before a semicolon.

Ack

>
>> +    {
>> +    case MEM_EVENT_REASON_MOV_TO_CR0:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR0];
>> +        break;
>> +    case MEM_EVENT_REASON_MOV_TO_CR3:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR3];
>> +        break;
>> +    case MEM_EVENT_REASON_MOV_TO_CR4:
>> +        parameters = current->domain->arch.hvm_domain
>> +                      .params[HVM_PARAM_MEMORY_EVENT_CR4];
>> +        break;
>> +    };
>
> I think you should bail in the default case; perhaps add an
> ASSERT_UNREACHABLE(). And with that I think readability (and
> maybe even generated code) would benefit if you just latched
> the index into .params[].

Simply initializing parameters to be 0 is enough IMHO. The only
callers of this function already explicitly choose one of these cases.
Furthermore, hvm params are retired later in the series anyway so
there wouldn't really be much benefit in tweaking this here.

>
>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>      {
>>          struct vcpu *v;
>>
>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>> +        {
>> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
>
> Why gdprintk()?

Is that only for debug cases?

>
>> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>>          /* Fix p2m entry if the page was not dropped */
>>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>>          {
>> -            gfn_lock(p2m, rsp.gfn, 0);
>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>> +            gfn_lock(p2m, gfn, 0);
>
> Blank line between declarations and statements. Also - why uint64_t
> instead of just unsigned long?

The type of mem_access.gfn is uint64_t, so it's that for consistency.

>
>> +/* Reasons for the vm event request */
>> +/* Default case */
>> +#define MEM_EVENT_REASON_UNKNOWN                 0
>> +/* Memory access violation */
>> +#define MEM_EVENT_REASON_MEM_ACCESS              1
>> +/* Memory sharing event */
>> +#define MEM_EVENT_REASON_MEM_SHARING             2
>> +/* Memory paging event */
>> +#define MEM_EVENT_REASON_MEM_PAGING              3
>> +/* CR0 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR0              4
>> +/* CR3 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR3              5
>> +/* CR4 was updated */
>> +#define MEM_EVENT_REASON_MOV_TO_CR4              6
>> +/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
>> +#define MEM_EVENT_REASON_MOV_TO_MSR              7
>> +/* Debug operation executed (int3) */
>
> If you absolutely want to mention architecture specific things in a
> generic header, please make this an example (i.e. insert "e.g."
> above).

Ack.

>
>> +#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
>> +/* Single-step (MTF) */
>
> Same here then (this one is even VT-x specific).
>
>> @@ -97,31 +112,74 @@ struct mem_event_regs_x86 {
>>      uint32_t _pad;
>>  };
>>
>> -typedef struct mem_event_st {
>> -    uint32_t flags;
>> -    uint32_t vcpu_id;
>> -
>> +struct mem_event_mem_access_data {
>
> Do you really need all these _data tags?

Not really, no.

>
> Jan
>

Thanks,
Tamas

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 13:50     ` Tamas K Lengyel
@ 2015-02-10 16:17       ` Jan Beulich
  2015-02-10 16:38         ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 16:17 UTC (permalink / raw)
  To: tamas.lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, rcojocaru,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, steve,
	andres, jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 2:51 PM >>>
On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>      {
>>>          struct vcpu *v;
>>>
>>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>>> +        {
>>> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
>>
>> Why gdprintk()?
>
>Is that only for debug cases?

I'm intending to propose compiling out all dprintk() and gdprintk() instances in
non-debug builds. Right now they're preferable when the message is so terse
that identifying its origin without file name and line number is difficult. Clearly
any non-debug messages shouldn't be of such poor quality.

>>> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>>>          /* Fix p2m entry if the page was not dropped */
>>>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>>>          {
>>> -            gfn_lock(p2m, rsp.gfn, 0);
>>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>>> +            gfn_lock(p2m, gfn, 0);
>>
>> Blank line between declarations and statements. Also - why uint64_t
>> instead of just unsigned long?
>
>The type of mem_access.gfn is uint64_t, so it's that for consistency.

And the type most functions taking a gfn expect is unsigned long.

Jan

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 16:17       ` Jan Beulich
@ 2015-02-10 16:38         ` Tamas K Lengyel
  2015-02-10 17:39           ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-10 16:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Steven Maresca, Andres Lagar-Cavilla, Jun Nakajima, rshriram,
	Keir Fraser, Daniel De Graaf, yanghy

On Tue, Feb 10, 2015 at 5:17 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 2:51 PM >>>
> On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>>>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>>      {
>>>>          struct vcpu *v;
>>>>
>>>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>>>> +        {
>>>> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
>>>
>>> Why gdprintk()?
>>
>>Is that only for debug cases?
>
> I'm intending to propose compiling out alll dprintk() and gdprintk() instance in
> non-debug builds. Right now they're preferable when the message is so terse
> that identifying its origin without file name and line number is difficult. Clearly
> any non-debug messages shouldn't be of such poor quality.

I will wrap it into #ifndef NDEBUG as it is really only for debugging.

>
>>>> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>>>>          /* Fix p2m entry if the page was not dropped */
>>>>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>>>>          {
>>>> -            gfn_lock(p2m, rsp.gfn, 0);
>>>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>>>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>>>> +            gfn_lock(p2m, gfn, 0);
>>>
>>> Blank line between declarations and statements. Also - why uint64_t
>>> instead of just unsigned long?
>>
>>The type of mem_access.gfn is uint64_t, so it's that for consistency.
>
> And the type most functions taking a gfn expect is unsigned long.
>
> Jan

Well, we need to have it as uint64_t in the shared public struct
definition so that the width of it is explicit. Unsigned long may be
32-bit depending on the compiler so it is going to be cast
somewhere. Whether it is preferred to be cast when assigned to a
local variable or when passed to the function is just a style question
- I'm fine with either.

Tamas

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 16:38         ` Tamas K Lengyel
@ 2015-02-10 17:39           ` Jan Beulich
  2015-02-10 18:03             ` Tamas K Lengyel
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2015-02-10 17:39 UTC (permalink / raw)
  To: tamas.lengyel
  Cc: tim, kevin.tian, wei.liu2, ian.campbell, rcojocaru,
	stefano.stabellini, eddie.dong, ian.jackson, xen-devel, steve,
	andres, jun.nakajima, rshriram, keir, dgdegra, yanghy

>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 5:38 PM >>>
>On Tue, Feb 10, 2015 at 5:17 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 2:51 PM >>>
>> On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>>>>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>>>      {
>>>>>          struct vcpu *v;
>>>>>
>>>>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>>>>> +        {
>>>>> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
>>>>
>>>> Why gdprintk()?
>>>
>>>Is that only for debug cases?
>>
>> I'm intending to propose compiling out all dprintk() and gdprintk() instances in
>> non-debug builds. Right now they're preferable when the message is so terse
>> that identifying its origin without file name and line number is difficult. Clearly
>> any non-debug messages shouldn't be of such poor quality.
>
>I will wrap it into #ifndef NDEBUG as it is really only for debugging.

That'll make the code even uglier, and won't address the question I originally asked.

>>>>> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>>>>>          /* Fix p2m entry if the page was not dropped */
>>>>>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>>>>>          {
>>>>> -            gfn_lock(p2m, rsp.gfn, 0);
>>>>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>>>>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>>>>> +            gfn_lock(p2m, gfn, 0);
>>>>
>>>> Blank line between declarations and statements. Also - why uint64_t
>>>> instead of just unsigned long?
>>>
>>>The type of mem_access.gfn is uint64_t, so it's that for consistency.
>>
>> And the type most functions taking a gfn expect is unsigned long.
>
>Well, we need to have it as uint64_t in the shared public struct
>definition so that the width of it is explicit. Unsigned long may be
>32-bit depending on the compiler so it is going to be cast
>somewhere.

That's understood.

> Whether it is preferred to be cast when assigned to a
>local variable or when passed to the function is just a style question
>- I'm fine with either.

Just as "gfn" function parameters are usually unsigned long, local
variables are too iirc. So please follow suit.

Jan

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 17:39           ` Jan Beulich
@ 2015-02-10 18:03             ` Tamas K Lengyel
  2015-02-11  7:43               ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Tamas K Lengyel @ 2015-02-10 18:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Tian, Kevin, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Dong, Eddie, Ian Jackson, xen-devel,
	Steven Maresca, Andres Lagar-Cavilla, Jun Nakajima, rshriram,
	Keir Fraser, Daniel De Graaf, yanghy

On Tue, Feb 10, 2015 at 6:39 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 5:38 PM >>>
>>On Tue, Feb 10, 2015 at 5:17 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 2:51 PM >>>
>>> On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>>>>>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>>>>      {
>>>>>>          struct vcpu *v;
>>>>>>
>>>>>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>>>>>> +        {
>>>>>> +            gdprintk(XENLOG_WARNING, "mem_event interface version mismatch!\n");
>>>>>
>>>>> Why gdprintk()?
>>>>
>>>>Is that only for debug cases?
>>>
>>> I'm intending to propose compiling out all dprintk() and gdprintk() instances in
>>> non-debug builds. Right now they're preferable when the message is so terse
>>> that identifying its origin without file name and line number is difficult. Clearly
>>> any non-debug messages shouldn't be of such poor quality.
>>
>>I will wrap it into #ifndef NDEBUG as it is really only for debugging.
>
> That'll make the code even uglier, and won't address the question I originally asked.

I just reused the function since I've seen it being used for printing
warnings. What would be the preferred print function to be used here?

>
>>>>>> @@ -1310,18 +1322,19 @@ void p2m_mem_paging_resume(struct domain *d)
>>>>>>          /* Fix p2m entry if the page was not dropped */
>>>>>>          if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
>>>>>>          {
>>>>>> -            gfn_lock(p2m, rsp.gfn, 0);
>>>>>> -            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
>>>>>> +            uint64_t gfn = rsp.u.mem_access.gfn;
>>>>>> +            gfn_lock(p2m, gfn, 0);
>>>>>
>>>>> Blank line between declarations and statements. Also - why uint64_t
>>>>> instead of just unsigned long?
>>>>
>>>>The type of mem_access.gfn is uint64_t, so it's that for consistency.
>>>
>>> And the type most functions taking a gfn expect is unsigned long.
>>
>>Well, we need to have it as uint64_t in the shared public struct
>>definition so that the width of it is explicit. Unsigned long may be
>>32-bit depending on the compiler so it is going to be casted
>>somewhere.
>
> That's understood.
>
>> Whether it is preferred to be cast when assigned to a
>>local variable or when passed to the function is just a style question
>>- I'm fine with either.
>
> As much as "gfn" function parameters are usually unsigned long, local
> variables are too iirc. So please follow suit.

Ack.

>
> Jan
>

Thanks,
Tamas
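
For reference, the agreed-upon shape is roughly this (illustrative
sketch, based on the hunk quoted above):

    /* The wire format stays explicitly sized (uint64_t); internal code
     * uses the type the gfn helpers expect (unsigned long). */
    unsigned long gfn = rsp.u.mem_access.gfn;

    gfn_lock(p2m, gfn, 0);
    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);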

* Re: [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures
  2015-02-10 18:03             ` Tamas K Lengyel
@ 2015-02-11  7:43               ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2015-02-11  7:43 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tim Deegan, Kevin Tian, wei.liu2, Ian Campbell, Razvan Cojocaru,
	Stefano Stabellini, Eddie Dong, Ian Jackson, xen-devel,
	Steven Maresca, Andres Lagar-Cavilla, Jun Nakajima, rshriram,
	Keir Fraser, Daniel De Graaf, yanghy

>>> On 10.02.15 at 19:03, <tamas.lengyel@zentific.com> wrote:
> On Tue, Feb 10, 2015 at 6:39 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 5:38 PM >>>
>>>On Tue, Feb 10, 2015 at 5:17 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> Tamas K Lengyel <tamas.lengyel@zentific.com> 02/10/15 2:51 PM >>>
>>>> On Tue, Feb 10, 2015 at 1:52 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>> On 09.02.15 at 19:53, <tamas.lengyel@zentific.com> wrote:
>>>>>>> @@ -598,6 +600,12 @@ int mem_sharing_sharing_resume(struct domain *d)
>>>>>>>      {
>>>>>>>          struct vcpu *v;
>>>>>>>
>>>>>>> +        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
>>>>>>> +        {
>>>>>>> +            gdprintk(XENLOG_WARNING, "mem_event interface version 
> mismatch!\n");
>>>>>>
>>>>>> Why gdprintk()?
>>>>>
>>>>>Is that only for debug cases?
>>>>
>>>> I'm intending to propose compiling out all dprintk() and gdprintk()
> instances in
>>>> non-debug builds. Right now they're preferable when the message is so terse
>>>> that identifying its origin without file name and line number is difficult. 
> Clearly
>>>> any non-debug messages shouldn't be of such poor quality.
>>>
>>>I willwrap it into #ifndef NDEBUG as it is really only for debugging.
>>
>> That'll make the code even uglier, and won't address the question I 
> originally asked.
> 
> I just reused the function since I've seen it being used for printing
> warnings. What would be the preferred print function to be used here?

printk(XENLOG_G_* ...);
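
E.g. (illustrative):

    printk(XENLOG_G_WARNING
           "d%d: mem_event interface version mismatch\n", d->domain_id);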

Jan

* Re: [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-02-09 18:53 ` [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
@ 2015-02-13 12:12   ` Wei Liu
  0 siblings, 0 replies; 31+ messages in thread
From: Wei Liu @ 2015-02-13 12:12 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, xen-devel, eddie.dong, andres,
	jbeulich, rshriram, keir, dgdegra, yanghy

On Mon, Feb 09, 2015 at 07:53:38PM +0100, Tamas K Lengyel wrote:
> Thus far mem_access and mem_sharing memops had been able to signal
> to Xen to start pulling responses off the corresponding rings. In this patch
> we retire these memops and add the equivalent option to the vm_event_op domctl.
> 
> The vm_event_op domctl suboptions are the same for each ring thus we
> consolidate them into XEN_VM_EVENT_ENABLE/DISABLE/RESUME.
> 
> As part of this patch in libxc we also rename the mem_access_enable/disable
> functions to monitor_enable/disable and move them into xc_monitor.c.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
>  tools/libxc/include/xenctrl.h       | 22 ++++++++++-----------
>  tools/libxc/xc_mem_access.c         | 25 ------------------------
>  tools/libxc/xc_mem_paging.c         | 12 ++++++++++--
>  tools/libxc/xc_memshr.c             | 15 ++++++--------
>  tools/libxc/xc_monitor.c            | 22 +++++++++++++++++++++
>  tools/libxc/xc_vm_event.c           |  6 +++---

I've only skimmed this patch but most of these are mechanical changes.
We are relatively lax on the libxc interface, so this part looks good
to me.

Acked-by: Wei Liu <wei.liu2@citrix.com>

* Re: [PATCH V4 05/13] xen: Rename mem_event to vm_event
  2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
  2015-02-09 20:09   ` Daniel De Graaf
  2015-02-10 13:06   ` Jan Beulich
@ 2015-02-13 12:13   ` Wei Liu
  2 siblings, 0 replies; 31+ messages in thread
From: Wei Liu @ 2015-02-13 12:13 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, xen-devel, eddie.dong, andres,
	jbeulich, rshriram, keir, dgdegra, yanghy

On Mon, Feb 09, 2015 at 07:53:30PM +0100, Tamas K Lengyel wrote:
> In this patch we mechanically rename mem_event to vm_event. This patch
> introduces no logic changes to the code. Using the name vm_event better
> describes the intended use of this subsystem, which is not limited to memory
> events. It can be used for off-loading the decision making logic into helper
> applications when encountering various events during a VM's execution.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

For tools:

Acked-by: Wei Liu <wei.liu2@citrix.com>

end of thread, other threads:[~2015-02-13 12:13 UTC | newest]

Thread overview: 31+ messages
2015-02-09 18:53 [PATCH V4 00/13] xen: Clean-up of mem_event subsystem Tamas K Lengyel
2015-02-09 18:53 ` [PATCH V4 01/13] xen/mem_event: Cleanup of mem_event structures Tamas K Lengyel
2015-02-10 12:52   ` Jan Beulich
2015-02-10 13:50     ` Tamas K Lengyel
2015-02-10 16:17       ` Jan Beulich
2015-02-10 16:38         ` Tamas K Lengyel
2015-02-10 17:39           ` Jan Beulich
2015-02-10 18:03             ` Tamas K Lengyel
2015-02-11  7:43               ` Jan Beulich
2015-02-09 18:53 ` [PATCH V4 02/13] xen/mem_event: Cleanup mem_event ring names and domctls Tamas K Lengyel
2015-02-10 12:56   ` Jan Beulich
2015-02-09 18:53 ` [PATCH V4 03/13] xen/mem_paging: Convert mem_event_op to mem_paging_op Tamas K Lengyel
2015-02-10 13:00   ` Jan Beulich
2015-02-09 18:53 ` [PATCH V4 04/13] xen/mem_access: Merge mem_event sanity check into mem_access check Tamas K Lengyel
2015-02-09 18:53 ` [PATCH V4 05/13] xen: Rename mem_event to vm_event Tamas K Lengyel
2015-02-09 20:09   ` Daniel De Graaf
2015-02-10 13:06   ` Jan Beulich
2015-02-13 12:13   ` Wei Liu
2015-02-09 18:53 ` [PATCH V4 06/13] tools/tests: Clean-up tools/tests/xen-access Tamas K Lengyel
2015-02-09 18:53 ` [PATCH V4 07/13] x86/hvm: factor out and rename vm_event related functions Tamas K Lengyel
2015-02-10 13:15   ` Jan Beulich
2015-02-09 18:53 ` [PATCH V4 08/13] xen: Introduce monitor_op domctl Tamas K Lengyel
2015-02-09 20:09   ` Daniel De Graaf
2015-02-09 18:53 ` [PATCH V4 09/13] xen/vm_event: Check for VM_EVENT_FLAG_DUMMY only in Debug builds Tamas K Lengyel
2015-02-09 18:53 ` [PATCH V4 10/13] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
2015-02-09 20:09   ` Daniel De Graaf
2015-02-09 18:53 ` [PATCH V4 11/13] xen/vm_event: Relocate memop checks Tamas K Lengyel
2015-02-09 18:53 ` [PATCH V4 12/13] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
2015-02-09 20:09   ` Daniel De Graaf
2015-02-09 18:53 ` [PATCH V4 13/13] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
2015-02-13 12:12   ` Wei Liu
