* [PATCH V9 0/6] xen: Clean-up of mem_event subsystem
@ 2015-04-09 14:32 Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 1/6] xen: Introduce monitor_op domctl Tamas K Lengyel
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

This patch series aims to clean up the mem_event subsystem within Xen. The
original use-case for this system was to allow external helper applications
running in privileged domains to control various memory operations performed
by Xen. Among these were paging, sharing, and access control. The subsystem
has since been extended to also deliver non-memory-related events, namely
various HVM debugging events (INT3, MTF, MOV-TO-CR, MOV-TO-MSR). The
structures and naming of the related functions, however, have not caught up
with these new use-cases, leaving many ambiguities in the code. Furthermore,
future use-cases envisioned for this subsystem include PV and ARM domains,
so there is a need to establish a common base to build on.
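
For reference, the consumer-side pattern (unchanged by this series) is a ring
of vm_event_request_t records that a helper application answers with
vm_event_response_t records. Below is a minimal sketch of the per-request
handling, modelled on tools/tests/xen-access; the handle_request() helper is
hypothetical, the include path is assumed, and the ring/event-channel
plumbing is omitted:

    #include <xen/vm_event.h>  /* public vm_event header (path assumed) */

    /* Echo the identifying fields back so Xen can unpause the right vCPU. */
    static void handle_request(const vm_event_request_t *req,
                               vm_event_response_t *rsp)
    {
        rsp->vcpu_id = req->vcpu_id;
        rsp->flags   = req->flags;   /* carries VM_EVENT_FLAG_VCPU_PAUSED */
        rsp->reason  = req->reason;

        switch ( req->reason )
        {
        case VM_EVENT_REASON_MEM_ACCESS:
            /* Hand the faulting gfn back with the response. */
            rsp->u.mem_access.gfn = req->u.mem_access.gfn;
            break;
        case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
            /* e.g. log req->regs.x86.rip, then reinject the trap. */
            break;
        }
    }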

Each patch in the series has been build-tested on x86 and ARM, both with
and without XSM enabled.

This PATCH series is also available at:
https://github.com/tklengyel/xen/tree/mem_event_cleanup9

Tamas K Lengyel (6):
  xen: Introduce monitor_op domctl
  xen/vm_event: Deprecate VM_EVENT_FLAG_DUMMY flag
  xen/vm_event: Decouple vm_event and mem_access.
  xen/vm_event: Relocate memop checks
  xen/xsm: Split vm_event_op into three separate labels
  xen/vm_event: Add RESUME option to vm_event_op domctl

 MAINTAINERS                         |   1 +
 tools/libxc/Makefile                |   1 +
 tools/libxc/include/xenctrl.h       |  49 +++++++--
 tools/libxc/xc_domain.c             |  28 +++++-
 tools/libxc/xc_mem_access.c         |  56 +++++------
 tools/libxc/xc_mem_paging.c         |  12 ++-
 tools/libxc/xc_memshr.c             |  15 ++-
 tools/libxc/xc_monitor.c            | 137 +++++++++++++++++++++++++
 tools/libxc/xc_private.h            |   2 +-
 tools/libxc/xc_vm_event.c           |  11 +-
 tools/tests/xen-access/xen-access.c |  40 ++++----
 xen/arch/x86/Makefile               |   1 +
 xen/arch/x86/hvm/emulate.c          |   2 +-
 xen/arch/x86/hvm/event.c            |  82 ++++++++-------
 xen/arch/x86/hvm/hvm.c              |  35 +------
 xen/arch/x86/hvm/vmx/vmcs.c         |   7 +-
 xen/arch/x86/hvm/vmx/vmx.c          |   2 +-
 xen/arch/x86/mm/mem_paging.c        |  41 ++++++--
 xen/arch/x86/mm/mem_sharing.c       | 160 ++++++++++++++---------------
 xen/arch/x86/mm/p2m.c               |  74 ++++----------
 xen/arch/x86/monitor.c              | 196 ++++++++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/mm.c     |  26 +----
 xen/arch/x86/x86_64/mm.c            |  24 +----
 xen/common/Makefile                 |  18 ++--
 xen/common/domctl.c                 |   9 ++
 xen/common/mem_access.c             |  51 ++--------
 xen/common/vm_event.c               | 181 +++++++++++++++++----------------
 xen/include/asm-arm/monitor.h       |  35 +++++++
 xen/include/asm-arm/p2m.h           |  18 +++-
 xen/include/asm-x86/domain.h        |  22 +++-
 xen/include/asm-x86/hvm/domain.h    |   1 -
 xen/include/asm-x86/mem_paging.h    |   2 +-
 xen/include/asm-x86/mem_sharing.h   |   4 +-
 xen/include/asm-x86/monitor.h       |  31 ++++++
 xen/include/asm-x86/p2m.h           |  37 +++++--
 xen/include/public/domctl.h         |  80 +++++++++++----
 xen/include/public/hvm/params.h     |   9 +-
 xen/include/public/memory.h         |  18 ++--
 xen/include/public/vm_event.h       |   3 +-
 xen/include/xen/mem_access.h        |  14 ++-
 xen/include/xen/vm_event.h          |  59 +----------
 xen/include/xsm/dummy.h             |  20 +++-
 xen/include/xsm/xsm.h               |  33 +++++-
 xen/xsm/dummy.c                     |  13 ++-
 xen/xsm/flask/hooks.c               |  64 ++++++++----
 xen/xsm/flask/policy/access_vectors |  12 ++-
 46 files changed, 1106 insertions(+), 630 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 create mode 100644 xen/arch/x86/monitor.c
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/monitor.h

-- 
2.1.4

* [PATCH V9 1/6] xen: Introduce monitor_op domctl
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 2/6] xen/vm_event: Deprecate VM_EVENT_FLAG_DUMMY flag Tamas K Lengyel
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

In preparation for allowing introspection of ARM and PV domains, the old
control interface via the hvm_op hypercall is retired. A new control
mechanism is introduced via the domctl hypercall: monitor_op.

This patch aims to establish a base API on which future applications can
build.
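
A minimal sketch of how a privileged helper might drive the new interface
(the xc_monitor_* prototypes are the ones this patch adds to xenctrl.h; the
enable_monitoring() helper itself is hypothetical):

    #include <xenctrl.h>   /* new xc_monitor_* prototypes */

    /* Report CR3 writes synchronously (only on a change of value) and
     * trap software breakpoints for the given domain. */
    static int enable_monitoring(xc_interface *xch, domid_t domid)
    {
        int rc = xc_monitor_mov_to_cr3(xch, domid, true /* enable */,
                                       true /* sync */,
                                       true /* onchangeonly */);
        if ( rc < 0 )
            return rc;

        return xc_monitor_software_breakpoint(xch, domid, true);
    }

As the comment added to domctl.h notes, events are only delivered once the
MONITOR ring itself has been enabled (xc_mem_access_enable()).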

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Tim Deegan <tim@xen.org>
---
v9: Rebase on staging
    Set mov_to_msr_extended to 0 explicitly if not requested
v8: Style fixes, comment expansion and forward declaration of structs
    in monitor.h instead of including entire headers
v7: Convert monitor control fields to a bitfield in Xen
    Fix XSM checks in monitor_domctl
v6: Convert monitor control fields to bool_t both in Xen and in libxc
    Make monitor_domctl more compact by abstracting common patterns
    Deprecated HVM params remain but throw an error both in Xen and in libxc
    Style fixes
    Add another field to arch.domain that determines if mem_access emulation
        is enabled and corresponding mem_access memops to enable/disable it
    Add XSM check to monitor_domctl
v5: p2m_vm_event_sanity_check is moved into the monitor_op handler
v4: Style fixes
    Only defining struct mov_to_cr and struct debug_event in asm-x86/domain.h
    Add pause/unpause domain wrapper when enabling a monitor option.
---
 MAINTAINERS                         |   1 +
 tools/libxc/Makefile                |   1 +
 tools/libxc/include/xenctrl.h       |  27 +++++
 tools/libxc/xc_domain.c             |  28 +++++-
 tools/libxc/xc_mem_access.c         |  33 ++++--
 tools/libxc/xc_monitor.c            | 115 +++++++++++++++++++++
 tools/libxc/xc_private.h            |   2 +-
 tools/libxc/xc_vm_event.c           |   7 +-
 tools/tests/xen-access/xen-access.c |  30 +++---
 xen/arch/x86/Makefile               |   1 +
 xen/arch/x86/hvm/emulate.c          |   2 +-
 xen/arch/x86/hvm/event.c            |  82 ++++++++-------
 xen/arch/x86/hvm/hvm.c              |  35 +------
 xen/arch/x86/hvm/vmx/vmcs.c         |   7 +-
 xen/arch/x86/hvm/vmx/vmx.c          |   2 +-
 xen/arch/x86/mm/p2m.c               |   9 --
 xen/arch/x86/monitor.c              | 196 ++++++++++++++++++++++++++++++++++++
 xen/common/domctl.c                 |   9 ++
 xen/common/mem_access.c             |   8 ++
 xen/common/vm_event.c               |  33 +-----
 xen/include/asm-arm/monitor.h       |  35 +++++++
 xen/include/asm-arm/p2m.h           |  18 +++-
 xen/include/asm-x86/domain.h        |  22 +++-
 xen/include/asm-x86/hvm/domain.h    |   1 -
 xen/include/asm-x86/monitor.h       |  31 ++++++
 xen/include/asm-x86/p2m.h           |  35 +++++--
 xen/include/public/domctl.h         |  48 ++++++++-
 xen/include/public/hvm/params.h     |   9 +-
 xen/include/public/memory.h         |   2 +
 xen/include/public/vm_event.h       |   2 +-
 xen/xsm/flask/hooks.c               |   3 +
 xen/xsm/flask/policy/access_vectors |   2 +
 32 files changed, 669 insertions(+), 167 deletions(-)
 create mode 100644 tools/libxc/xc_monitor.c
 create mode 100644 xen/arch/x86/monitor.c
 create mode 100644 xen/include/asm-arm/monitor.h
 create mode 100644 xen/include/asm-x86/monitor.h

diff --git a/MAINTAINERS b/MAINTAINERS
index b744166..c714bc1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -381,6 +381,7 @@ S:	Supported
 F:	xen/common/vm_event.c
 F:	xen/common/mem_access.c
 F:	xen/arch/x86/hvm/event.c
+F:	xen/arch/x86/monitor.c
 
 XENTRACE
 M:	George Dunlap <george.dunlap@eu.citrix.com>
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 22ba2a1..8b609cf 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -32,6 +32,7 @@ CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
 CTRL_SRCS-y       += xc_tmem.c
 CTRL_SRCS-y       += xc_vm_event.c
+CTRL_SRCS-y       += xc_monitor.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_mem_access.c
 CTRL_SRCS-y       += xc_memshr.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 02d0db8..a1861cb 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -34,6 +34,7 @@
 #include <stddef.h>
 #include <stdint.h>
 #include <stdio.h>
+#include <stdbool.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/physdev.h>
@@ -2311,6 +2312,32 @@ int xc_set_mem_access(xc_interface *xch, domid_t domain_id,
 int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
                       uint64_t pfn, xenmem_access_t *access);
 
+/*
+ * Instructions causing a mem_access violation can be emulated by Xen
+ * to progress the execution without having to relax the mem_access
+ * permissions.
+ * This feature has to be first enabled, then in the vm_event
+ * response to a mem_access event it can be indicated if the instruction
+ * should be emulated.
+ */
+int xc_mem_access_enable_emulate(xc_interface *xch, domid_t domain_id);
+int xc_mem_access_disable_emulate(xc_interface *xch, domid_t domain_id);
+
+/***
+ * Monitor control operations.
+ */
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly);
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly);
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly);
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool extended_capture);
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id, bool enable);
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   bool enable);
+
 /***
  * Memory sharing operations.
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 95e3098..7cb36d9 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1316,11 +1316,32 @@ int xc_domain_send_trigger(xc_interface *xch,
     return do_domctl(xch, &domctl);
 }
 
+static inline int xc_hvm_param_deprecated_check(uint32_t param)
+{
+    switch ( param )
+    {
+        case HVM_PARAM_MEMORY_EVENT_CR0:
+        case HVM_PARAM_MEMORY_EVENT_CR3:
+        case HVM_PARAM_MEMORY_EVENT_CR4:
+        case HVM_PARAM_MEMORY_EVENT_INT3:
+        case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
+        case HVM_PARAM_MEMORY_EVENT_MSR:
+            return -EOPNOTSUPP;
+        default:
+            break;
+    };
+
+    return 0;
+}
+
 int xc_hvm_param_set(xc_interface *handle, domid_t dom, uint32_t param, uint64_t value)
 {
     DECLARE_HYPERCALL;
     DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg);
-    int rc;
+    int rc = xc_hvm_param_deprecated_check(param);
+
+    if ( rc )
+        return rc;
 
     arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
     if ( arg == NULL )
@@ -1341,7 +1362,10 @@ int xc_hvm_param_get(xc_interface *handle, domid_t dom, uint32_t param, uint64_t
 {
     DECLARE_HYPERCALL;
     DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg);
-    int rc;
+    int rc = xc_hvm_param_deprecated_check(param);
+
+    if ( rc )
+        return rc;
 
     arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
     if ( arg == NULL )
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index 0a3f0e6..f27bc44 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -27,14 +27,7 @@
 void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
 {
     return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 0);
-}
-
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port, 1);
+                              port);
 }
 
 int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
@@ -95,6 +88,30 @@ int xc_get_mem_access(xc_interface *xch,
     return rc;
 }
 
+int xc_mem_access_enable_emulate(xc_interface *xch,
+                                 domid_t domain_id)
+{
+    xen_mem_access_op_t mao =
+    {
+        .op     = XENMEM_access_op_enable_emulate,
+        .domid  = domain_id,
+    };
+
+    return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
+}
+
+int xc_mem_access_disable_emulate(xc_interface *xch,
+                                  domid_t domain_id)
+{
+    xen_mem_access_op_t mao =
+    {
+        .op     = XENMEM_access_op_disable_emulate,
+        .domid  = domain_id,
+    };
+
+    return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
new file mode 100644
index 0000000..fe090bc
--- /dev/null
+++ b/tools/libxc/xc_monitor.c
@@ -0,0 +1,115 @@
+/******************************************************************************
+ *
+ * xc_monitor.c
+ *
+ * Interface to VM event monitor
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include "xc_private.h"
+
+int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR0;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR3;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_cr4(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool sync, bool onchangeonly)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR4;
+    domctl.u.monitor_op.u.mov_to_cr.sync = sync;
+    domctl.u.monitor_op.u.mov_to_cr.onchangeonly = onchangeonly;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_mov_to_msr(xc_interface *xch, domid_t domain_id, bool enable,
+                          bool extended_capture)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_MOV_TO_MSR;
+    domctl.u.monitor_op.u.mov_to_msr.extended_capture = extended_capture;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_software_breakpoint(xc_interface *xch, domid_t domain_id,
+                                   bool enable)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_SOFTWARE_BREAKPOINT;
+
+    return do_domctl(xch, &domctl);
+}
+
+int xc_monitor_singlestep(xc_interface *xch, domid_t domain_id,
+                          bool enable)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_monitor_op;
+    domctl.domain = domain_id;
+    domctl.u.monitor_op.op = enable ? XEN_DOMCTL_MONITOR_OP_ENABLE
+                                    : XEN_DOMCTL_MONITOR_OP_DISABLE;
+    domctl.u.monitor_op.event = XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP;
+
+    return do_domctl(xch, &domctl);
+}
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 843540c..9f55309 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -430,6 +430,6 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
  * param can be HVM_PARAM_PAGING/ACCESS/SHARING_RING_PFN
  */
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection);
+                         uint32_t *port);
 
 #endif /* __XC_PRIVATE_H__ */
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index d458b9a..7277e86 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -41,7 +41,7 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 }
 
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
-                         uint32_t *port, int enable_introspection)
+                         uint32_t *port)
 {
     void *ring_page = NULL;
     uint64_t pfn;
@@ -104,10 +104,7 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        if ( enable_introspection )
-            op = XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION;
-        else
-            op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_MONITOR_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index d667bf5..23438fc 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -317,9 +317,9 @@ static void put_response(vm_event_t *vm_event, vm_event_response_t *rsp)
 void usage(char* progname)
 {
     fprintf(stderr,
-            "Usage: %s [-m] <domain_id> write|exec|int3\n"
+            "Usage: %s [-m] <domain_id> write|exec|breakpoint\n"
             "\n"
-            "Logs first page writes, execs, or int3 traps that occur on the domain.\n"
+            "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
             "\n"
             "-m requires this program to run, or else the domain may pause\n",
             progname);
@@ -338,7 +338,7 @@ int main(int argc, char *argv[])
     xenmem_access_t default_access = XENMEM_access_rwx;
     xenmem_access_t after_first_access = XENMEM_access_rwx;
     int required = 0;
-    int int3 = 0;
+    int breakpoint = 0;
     int shutting_down = 0;
 
     char* progname = argv[0];
@@ -378,9 +378,9 @@ int main(int argc, char *argv[])
         default_access = XENMEM_access_rw;
         after_first_access = XENMEM_access_rwx;
     }
-    else if ( !strcmp(argv[0], "int3") )
+    else if ( !strcmp(argv[0], "breakpoint") )
     {
-        int3 = 1;
+        breakpoint = 1;
     }
     else
     {
@@ -431,14 +431,14 @@ int main(int argc, char *argv[])
         goto exit;
     }
 
-    if ( int3 )
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_sync);
-    else
-        rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
-    if ( rc < 0 )
+    if ( breakpoint )
     {
-        ERROR("Error %d setting int3 vm_event\n", rc);
-        goto exit;
+        rc = xc_monitor_software_breakpoint(xch, domain_id, 1);
+        if ( rc < 0 )
+        {
+            ERROR("Error %d setting breakpoint trapping with vm_event\n", rc);
+            goto exit;
+        }
     }
 
     /* Wait for access */
@@ -452,7 +452,7 @@ int main(int argc, char *argv[])
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
             rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
                                    xenaccess->domain_info->max_pages);
-            rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
+            rc = xc_monitor_software_breakpoint(xch, domain_id, 0);
 
             shutting_down = 1;
         }
@@ -527,7 +527,7 @@ int main(int argc, char *argv[])
                 rsp.u.mem_access.gfn = req.u.mem_access.gfn;
                 break;
             case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
-                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
+                printf("Breakpoint: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
                        req.regs.x86.rip,
                        req.u.software_breakpoint.gfn,
                        req.vcpu_id);
@@ -538,7 +538,7 @@ int main(int argc, char *argv[])
                     HVMOP_TRAP_sw_exc, -1, 0, 0);
                 if (rc < 0)
                 {
-                    ERROR("Error %d injecting int3\n", rc);
+                    ERROR("Error %d injecting breakpoint\n", rc);
                     interrupted = -1;
                     continue;
                 }
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 86ca5f8..37e547c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -36,6 +36,7 @@ obj-y += microcode_intel.o
 # This must come after the vendor specific files.
 obj-y += microcode.o
 obj-y += mm.o
+obj-y += monitor.o
 obj-y += mpparse.o
 obj-y += nmi.o
 obj-y += numa.o
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 40de776..ac9c9d6 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -418,7 +418,7 @@ static int hvmemul_virtual_to_linear(
      * being triggered for repeated writes to a whole page.
      */
     *reps = min_t(unsigned long, *reps,
-                  unlikely(current->domain->arch.hvm_domain.introspection_enabled)
+                  unlikely(current->domain->arch.mem_access_emulate_enabled)
                            ? 1 : 4096);
 
     reg = hvmemul_get_seg_reg(seg, hvmemul_ctxt);
diff --git a/xen/arch/x86/hvm/event.c b/xen/arch/x86/hvm/event.c
index dfb0ab7..9d5f9f3 100644
--- a/xen/arch/x86/hvm/event.c
+++ b/xen/arch/x86/hvm/event.c
@@ -55,15 +55,12 @@ static void hvm_event_fill_regs(vm_event_request_t *req)
     req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
 }
 
-static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
+static int hvm_event_traps(uint8_t sync, vm_event_request_t *req)
 {
     int rc;
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
 
-    if ( !(parameters & HVMPME_MODE_MASK) )
-        return 0;
-
     rc = vm_event_claim_slot(currd, &currd->vm_event->monitor);
     switch ( rc )
     {
@@ -79,7 +76,7 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
         return rc;
     };
 
-    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
+    if ( sync )
     {
         req->flags |= VM_EVENT_FLAG_VCPU_PAUSED;
         vm_event_vcpu_pause(curr);
@@ -91,41 +88,53 @@ static int hvm_event_traps(uint64_t parameters, vm_event_request_t *req)
     return 1;
 }
 
-static void hvm_event_cr(uint32_t reason, unsigned long value,
-                         unsigned long old, uint64_t parameters)
+static inline
+void hvm_event_cr(uint32_t reason, unsigned long value,
+                         unsigned long old, bool_t onchangeonly, bool_t sync)
 {
-    vm_event_request_t req = {
-        .reason = reason,
-        .vcpu_id = current->vcpu_id,
-        .u.mov_to_cr.new_value = value,
-        .u.mov_to_cr.old_value = old
-    };
-
-    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+    if ( onchangeonly && value == old )
         return;
-
-    hvm_event_traps(parameters, &req);
+    else
+    {
+        vm_event_request_t req = {
+            .reason = reason,
+            .vcpu_id = current->vcpu_id,
+            .u.mov_to_cr.new_value = value,
+            .u.mov_to_cr.old_value = old
+        };
+
+        hvm_event_traps(sync, &req);
+    }
 }
 
 void hvm_event_cr0(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old,
-                 current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR0]);
+    struct arch_domain *currad = &current->domain->arch;
+
+    if ( currad->monitor.mov_to_cr0_enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR0, value, old,
+                     currad->monitor.mov_to_cr0_onchangeonly,
+                     currad->monitor.mov_to_cr0_sync);
 }
 
 void hvm_event_cr3(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old,
-                 current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR3]);
+    struct arch_domain *currad = &current->domain->arch;
+
+    if ( currad->monitor.mov_to_cr3_enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR3, value, old,
+                     currad->monitor.mov_to_cr3_onchangeonly,
+                     currad->monitor.mov_to_cr3_sync);
 }
 
 void hvm_event_cr4(unsigned long value, unsigned long old)
 {
-    hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old,
-                 current->domain->arch.hvm_domain
-                      .params[HVM_PARAM_MEMORY_EVENT_CR4]);
+    struct arch_domain *currad = &current->domain->arch;
+
+    if ( currad->monitor.mov_to_cr4_enabled )
+        hvm_event_cr(VM_EVENT_REASON_MOV_TO_CR4, value, old,
+                     currad->monitor.mov_to_cr4_onchangeonly,
+                     currad->monitor.mov_to_cr4_sync);
 }
 
 void hvm_event_msr(unsigned int msr, uint64_t value)
@@ -137,14 +146,14 @@ void hvm_event_msr(unsigned int msr, uint64_t value)
         .u.mov_to_msr.msr = msr,
         .u.mov_to_msr.value = value,
     };
-    uint64_t params = curr->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_MSR];
 
-    hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor.mov_to_msr_enabled )
+        hvm_event_traps(1, &req);
 }
 
 int hvm_event_int3(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -152,14 +161,16 @@ int hvm_event_int3(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.software_breakpoint.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    uint64_t params = curr->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_INT3];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor.software_breakpoint_enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 int hvm_event_single_step(unsigned long gla)
 {
+    int rc = 0;
     uint32_t pfec = PFEC_page_present;
     struct vcpu *curr = current;
     vm_event_request_t req = {
@@ -167,10 +178,11 @@ int hvm_event_single_step(unsigned long gla)
         .vcpu_id = curr->vcpu_id,
         .u.singlestep.gfn = paging_gva_to_gfn(curr, gla, &pfec)
     };
-    uint64_t params = curr->domain->arch.hvm_domain
-                        .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
 
-    return hvm_event_traps(params, &req);
+    if ( curr->domain->arch.monitor.singlestep_enabled )
+        rc = hvm_event_traps(1, &req);
+
+    return rc;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index bfde380..d038082 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5829,19 +5829,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             case HVM_PARAM_MEMORY_EVENT_CR0:
             case HVM_PARAM_MEMORY_EVENT_CR3:
             case HVM_PARAM_MEMORY_EVENT_CR4:
-                if ( d == current->domain )
-                    rc = -EPERM;
-                break;
             case HVM_PARAM_MEMORY_EVENT_INT3:
             case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
             case HVM_PARAM_MEMORY_EVENT_MSR:
-                if ( d == current->domain )
-                {
-                    rc = -EPERM;
-                    break;
-                }
-                if ( a.value & HVMPME_onchangeonly )
-                    rc = -EINVAL;
+                /* Deprecated */
+                rc = -EOPNOTSUPP;
                 break;
             case HVM_PARAM_NESTEDHVM:
                 rc = xsm_hvm_param_nested(XSM_PRIV, d);
@@ -5901,29 +5893,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             }
 
-            if ( rc == 0 ) 
-            {
+            if ( rc == 0 )
                 d->arch.hvm_domain.params[a.index] = a.value;
-
-                switch( a.index )
-                {
-                case HVM_PARAM_MEMORY_EVENT_INT3:
-                case HVM_PARAM_MEMORY_EVENT_SINGLE_STEP:
-                {
-                    domain_pause(d);
-                    domain_unpause(d); /* Causes guest to latch new status */
-                    break;
-                }
-                case HVM_PARAM_MEMORY_EVENT_CR3:
-                {
-                    for_each_vcpu ( d, v )
-                        hvm_funcs.update_guest_cr(v, 0); /* Latches new CR3 mask through CR0 code */
-                    break;
-                }
-                }
-
-            }
-
         }
         else
         {
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 63007a9..8b98a9f 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -714,7 +714,8 @@ void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
     if ( msr_bitmap == NULL )
         return;
 
-    if ( unlikely(d->arch.hvm_domain.introspection_enabled) &&
+    if ( unlikely(d->arch.monitor.mov_to_msr_enabled &&
+                  d->arch.monitor.mov_to_msr_extended) &&
          vm_event_check_ring(&d->vm_event->monitor) )
     {
         unsigned int i;
@@ -1373,8 +1374,8 @@ void vmx_do_resume(struct vcpu *v)
     }
 
     debug_state = v->domain->debugger_attached
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
-                  || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
+                  || v->domain->arch.monitor.software_breakpoint_enabled
+                  || v->domain->arch.monitor.singlestep_enabled;
 
     if ( unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 8dbd314..e51a757 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1232,7 +1232,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
                 v->arch.hvm_vmx.exec_control |= cr3_ctls;
 
             /* Trap CR3 updates if CR3 memory events are enabled. */
-            if ( v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_CR3] )
+            if ( v->domain->arch.monitor.mov_to_cr3_enabled )
                 v->arch.hvm_vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
 
             vmx_update_cpu_exec_control(v);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index df4a485..1d3356a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1454,15 +1454,6 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
     }
 }
 
-void p2m_setup_introspection(struct domain *d)
-{
-    if ( hvm_funcs.enable_msr_exit_interception )
-    {
-        d->arch.hvm_domain.introspection_enabled = 1;
-        hvm_funcs.enable_msr_exit_interception(d);
-    }
-}
-
 bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                             struct npfec npfec,
                             vm_event_request_t **req_ptr)
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
new file mode 100644
index 0000000..d7b1c18
--- /dev/null
+++ b/xen/arch/x86/monitor.c
@@ -0,0 +1,196 @@
+/*
+ * arch/x86/monitor.c
+ *
+ * Architecture-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/mm.h>
+#include <asm/domain.h>
+#include <asm/monitor.h>
+#include <public/domctl.h>
+#include <xsm/xsm.h>
+
+/*
+ * Sanity check whether option is already enabled/disabled
+ */
+static inline
+int status_check(struct xen_domctl_monitor_op *mop, bool_t status)
+{
+    bool_t requested_status = (mop->op == XEN_DOMCTL_MONITOR_OP_ENABLE);
+
+    if ( status == requested_status )
+        return -EEXIST;
+
+    return 0;
+}
+
+int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *mop)
+{
+    int rc;
+    struct arch_domain *ad = &d->arch;
+
+    rc = xsm_vm_event_control(XSM_PRIV, d, mop->op, mop->event);
+    if ( rc )
+        return rc;
+
+    /*
+     * At the moment only Intel HVM domains are supported. However, event
+     * delivery could be extended to AMD and PV domains.
+     */
+    if ( !is_hvm_domain(d) || !cpu_has_vmx )
+        return -EOPNOTSUPP;
+
+    /*
+     * Sanity check
+     */
+    if ( mop->op != XEN_DOMCTL_MONITOR_OP_ENABLE &&
+         mop->op != XEN_DOMCTL_MONITOR_OP_DISABLE )
+        return -EOPNOTSUPP;
+
+    switch ( mop->event )
+    {
+    case XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR0:
+    {
+        bool_t status = ad->monitor.mov_to_cr0_enabled;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        ad->monitor.mov_to_cr0_sync = mop->u.mov_to_cr.sync;
+        ad->monitor.mov_to_cr0_onchangeonly = mop->u.mov_to_cr.onchangeonly;
+
+        domain_pause(d);
+        ad->monitor.mov_to_cr0_enabled = !status;
+        domain_unpause(d);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR3:
+    {
+        bool_t status = ad->monitor.mov_to_cr3_enabled;
+        struct vcpu *v;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        ad->monitor.mov_to_cr3_sync = mop->u.mov_to_cr.sync;
+        ad->monitor.mov_to_cr3_onchangeonly = mop->u.mov_to_cr.onchangeonly;
+
+        domain_pause(d);
+        ad->monitor.mov_to_cr3_enabled = !status;
+        domain_unpause(d);
+
+        /* Latches new CR3 mask through CR0 code */
+        for_each_vcpu ( d, v )
+            hvm_funcs.update_guest_cr(v, 0);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR4:
+    {
+        bool_t status = ad->monitor.mov_to_cr4_enabled;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        ad->monitor.mov_to_cr4_sync = mop->u.mov_to_cr.sync;
+        ad->monitor.mov_to_cr4_onchangeonly = mop->u.mov_to_cr.onchangeonly;
+
+        domain_pause(d);
+        ad->monitor.mov_to_cr4_enabled = !status;
+        domain_unpause(d);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_EVENT_MOV_TO_MSR:
+    {
+        bool_t status = ad->monitor.mov_to_msr_enabled;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        if ( mop->op == XEN_DOMCTL_MONITOR_OP_ENABLE &&
+             mop->u.mov_to_msr.extended_capture )
+        {
+            if ( hvm_funcs.enable_msr_exit_interception )
+            {
+                ad->monitor.mov_to_msr_extended = 1;
+                hvm_funcs.enable_msr_exit_interception(d);
+            }
+            else
+                return -EOPNOTSUPP;
+        } else
+            ad->monitor.mov_to_msr_extended = 0;
+
+        domain_pause(d);
+        ad->monitor.mov_to_msr_enabled = !status;
+        domain_unpause(d);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP:
+    {
+        bool_t status = ad->monitor.singlestep_enabled;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        domain_pause(d);
+        ad->monitor.singlestep_enabled = !status;
+        domain_unpause(d);
+        break;
+    }
+
+    case XEN_DOMCTL_MONITOR_EVENT_SOFTWARE_BREAKPOINT:
+    {
+        bool_t status = ad->monitor.software_breakpoint_enabled;
+
+        rc = status_check(mop, status);
+        if ( rc )
+            return rc;
+
+        domain_pause(d);
+        ad->monitor.software_breakpoint_enabled = !status;
+        domain_unpause(d);
+        break;
+    }
+
+    default:
+        return -EOPNOTSUPP;
+
+    };
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index b722143..28aea55 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
 #include <asm/irq.h>
 #include <asm/page.h>
 #include <asm/p2m.h>
+#include <asm/monitor.h>
 #include <public/domctl.h>
 #include <xsm/xsm.h>
 
@@ -1159,6 +1160,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         break;
     }
 
+    case XEN_DOMCTL_monitor_op:
+        ret = -EPERM;
+        if ( current->domain == d )
+            break;
+
+        ret = monitor_domctl(d, &op->u.monitor_op);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 4bbe2f9..f925ac7 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -140,6 +140,14 @@ int mem_access_memop(unsigned long cmd,
         break;
     }
 
+    case XENMEM_access_op_enable_emulate:
+        rc = p2m_mem_access_enable_emulate(d);
+        break;
+
+    case XENMEM_access_op_disable_emulate:
+        rc = p2m_mem_access_disable_emulate(d);
+        break;
+
     default:
         rc = -ENOSYS;
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 9afb54f..7c532fc 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -599,11 +599,9 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         break;
 
         case XEN_VM_EVENT_PAGING_DISABLE:
-        {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
-        }
-        break;
+            break;
 
         default:
             rc = -ENOSYS;
@@ -622,32 +620,15 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         switch( vec->op )
         {
         case XEN_VM_EVENT_MONITOR_ENABLE:
-        case XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION:
-        {
-            rc = -ENODEV;
-            if ( !p2m_vm_event_sanity_check(d) )
-                break;
-
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
                                  mem_access_notification);
-
-            if ( vec->op == XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
-                 && !rc )
-                p2m_setup_introspection(d);
-
-        }
-        break;
+            break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
-        {
             if ( ved->ring_page )
-            {
                 rc = vm_event_disable(d, ved);
-                d->arch.hvm_domain.introspection_enabled = 0;
-            }
-        }
-        break;
+            break;
 
         default:
             rc = -ENOSYS;
@@ -666,7 +647,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         switch( vec->op )
         {
         case XEN_VM_EVENT_SHARING_ENABLE:
-        {
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
             if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
@@ -680,15 +660,12 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
             rc = vm_event_enable(d, vec, ved, _VPF_mem_sharing,
                                  HVM_PARAM_SHARING_RING_PFN,
                                  mem_sharing_notification);
-        }
-        break;
+            break;
 
         case XEN_VM_EVENT_SHARING_DISABLE:
-        {
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
-        }
-        break;
+            break;
 
         default:
             rc = -ENOSYS;
diff --git a/xen/include/asm-arm/monitor.h b/xen/include/asm-arm/monitor.h
new file mode 100644
index 0000000..0f01e06
--- /dev/null
+++ b/xen/include/asm-arm/monitor.h
@@ -0,0 +1,35 @@
+/*
+ * include/asm-arm/monitor.h
+ *
+ * Architecture-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef __ASM_ARM_MONITOR_H__
+#define __ASM_ARM_MONITOR_H__
+
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+static inline
+int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *op)
+{
+    return -ENOSYS;
+}
+
+#endif /* __ASM_ARM_MONITOR_H__ */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 23cf7d8..11efb5b 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -71,16 +71,24 @@ typedef enum {
 } p2m_type_t;
 
 static inline
-void p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp)
+int p2m_mem_access_enable_emulate(struct domain *d)
 {
-    /* Not supported on ARM. */
+    /* Not supported on ARM */
+    return -ENOSYS;
+}
+
+static inline
+int p2m_mem_access_disable_emulate(struct domain *d)
+{
+    /* Not supported on ARM */
+    return -ENOSYS;
 }
 
 static inline
-void p2m_setup_introspection(struct domain *d)
+void p2m_mem_access_emulate_check(struct vcpu *v,
+                                  const vm_event_response_t *rsp)
 {
-    /* No special setup on ARM. */
+    /* Not supported on ARM. */
 }
 
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index e5102cc..3f83e8b 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -338,7 +338,27 @@ struct arch_domain
     /* Shared page for notifying that explicit PIRQ EOI is required. */
     unsigned long *pirq_eoi_map;
     unsigned long pirq_eoi_map_mfn;
-};
+
+    /* Monitor options */
+    struct {
+        uint16_t mov_to_cr0_enabled          : 1;
+        uint16_t mov_to_cr0_sync             : 1;
+        uint16_t mov_to_cr0_onchangeonly     : 1;
+        uint16_t mov_to_cr3_enabled          : 1;
+        uint16_t mov_to_cr3_sync             : 1;
+        uint16_t mov_to_cr3_onchangeonly     : 1;
+        uint16_t mov_to_cr4_enabled          : 1;
+        uint16_t mov_to_cr4_sync             : 1;
+        uint16_t mov_to_cr4_onchangeonly     : 1;
+        uint16_t mov_to_msr_enabled          : 1;
+        uint16_t mov_to_msr_extended         : 1;
+        uint16_t singlestep_enabled          : 1;
+        uint16_t software_breakpoint_enabled : 1;
+    } monitor;
+
+    /* Mem_access emulation control */
+    bool_t mem_access_emulate_enabled;
+} __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 0702bf5..3b21a55 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -135,7 +135,6 @@ struct hvm_domain {
     bool_t                 mem_sharing_enabled;
     bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
-    bool_t                 introspection_enabled;
 
     /*
      * TSC value that VCPUs use to calculate their tsc_offset value.
diff --git a/xen/include/asm-x86/monitor.h b/xen/include/asm-x86/monitor.h
new file mode 100644
index 0000000..21b3e5b
--- /dev/null
+++ b/xen/include/asm-x86/monitor.h
@@ -0,0 +1,31 @@
+/*
+ * include/asm-x86/monitor.h
+ *
+ * Architecture-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef __ASM_X86_MONITOR_H__
+#define __ASM_X86_MONITOR_H__
+
+struct domain;
+struct xen_domctl_monitor_op;
+
+int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *op);
+
+#endif /* __ASM_X86_MONITOR_H__ */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 91fc099..2b5f0fc 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -607,24 +607,39 @@ long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
 int p2m_get_mem_access(struct domain *d, unsigned long pfn,
                        xenmem_access_t *access);
 
-/* Check for emulation and mark vcpu for skipping one instruction
- * upon rescheduling if required. */
-void p2m_mem_access_emulate_check(struct vcpu *v,
-                                  const vm_event_response_t *rsp);
+/*
+ * Emulating a memory access requires custom handling. These non-atomic
+ * functions should be called under domctl lock.
+ */
+static inline
+int p2m_mem_access_enable_emulate(struct domain *d)
+{
+    if ( d->arch.mem_access_emulate_enabled )
+        return -EEXIST;
 
-/* Enable arch specific introspection options (such as MSR interception). */
-void p2m_setup_introspection(struct domain *d);
+    d->arch.mem_access_emulate_enabled = 1;
+    return 0;
+}
 
-/* Sanity check for vm_event hardware support */
-static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
+static inline
+int p2m_mem_access_disable_emulate(struct domain *d)
 {
-    return hap_enabled(d) && cpu_has_vmx;
+    if ( !d->arch.mem_access_emulate_enabled )
+        return -EEXIST;
+
+    d->arch.mem_access_emulate_enabled = 0;
+    return 0;
 }
 
+/* Check for emulation and mark vcpu for skipping one instruction
+ * upon rescheduling if required. */
+void p2m_mem_access_emulate_check(struct vcpu *v,
+                                  const vm_event_response_t *rsp);
+
 /* Sanity check for mem_access hardware support */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-    return is_hvm_domain(d);
+    return is_hvm_domain(d) && cpu_has_vmx && hap_enabled(d);
 }
 
 /* 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index d2e8db0..3e5d1f4 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -793,7 +793,6 @@ struct xen_domctl_gdbsx_domstatus {
 
 #define XEN_VM_EVENT_MONITOR_ENABLE                           0
 #define XEN_VM_EVENT_MONITOR_DISABLE                          1
-#define XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION             2
 
 /*
  * Sharing ENOMEM helper.
@@ -1001,6 +1000,51 @@ struct xen_domctl_psr_cmt_op {
 typedef struct xen_domctl_psr_cmt_op xen_domctl_psr_cmt_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cmt_op_t);
 
+/*  XEN_DOMCTL_MONITOR_*
+ *
+ * Enable/disable monitoring various VM events.
+ * This domctl configures what events will be reported to helper apps
+ * via the ring buffer "MONITOR". The ring has to be first enabled
+ * with the domctl XEN_DOMCTL_VM_EVENT_OP_MONITOR.
+ *
+ * NOTICE: mem_access events are also delivered via the "MONITOR" ring buffer;
+ * however, enabling/disabling those events is performed with the use of
+ * memory_op hypercalls!
+ */
+#define XEN_DOMCTL_MONITOR_OP_ENABLE   0
+#define XEN_DOMCTL_MONITOR_OP_DISABLE  1
+
+#define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR0            0
+#define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR3            1
+#define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_CR4            2
+#define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_MSR            3
+#define XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP            4
+#define XEN_DOMCTL_MONITOR_EVENT_SOFTWARE_BREAKPOINT   5
+
+struct xen_domctl_monitor_op {
+    uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
+    uint32_t event; /* XEN_DOMCTL_MONITOR_EVENT_* */
+
+    /*
+     * Further options when issuing XEN_DOMCTL_MONITOR_OP_ENABLE.
+     */
+    union {
+        struct {
+            /* Pause vCPU until response */
+            uint8_t sync;
+            /* Send event only on a change of value */
+            uint8_t onchangeonly;
+        } mov_to_cr;
+
+        struct {
+            /* Enable the capture of an extended set of MSRs */
+            uint8_t extended_capture;
+        } mov_to_msr;
+    } u;
+};
+typedef struct xen_domctl_monitor_op xen_domctl_monitor_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_monitor_op_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1075,6 +1119,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_set_vcpu_msrs                 73
 #define XEN_DOMCTL_setvnumainfo                  74
 #define XEN_DOMCTL_psr_cmt_op                    75
+#define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1137,6 +1182,7 @@ struct xen_domctl {
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         struct xen_domctl_vnuma             vnuma;
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
+        struct xen_domctl_monitor_op        monitor_op;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 6efcc0b..7c73089 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -162,8 +162,7 @@
  */
 #define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
 
-/* Enable blocking memory events, async or sync (pause vcpu until response) 
- * onchangeonly indicates messages only on a change of value */
+/* Deprecated */
 #define HVM_PARAM_MEMORY_EVENT_CR0          20
 #define HVM_PARAM_MEMORY_EVENT_CR3          21
 #define HVM_PARAM_MEMORY_EVENT_CR4          22
@@ -171,12 +170,6 @@
 #define HVM_PARAM_MEMORY_EVENT_SINGLE_STEP  25
 #define HVM_PARAM_MEMORY_EVENT_MSR          30
 
-#define HVMPME_MODE_MASK       (3 << 0)
-#define HVMPME_mode_disabled   0
-#define HVMPME_mode_async      1
-#define HVMPME_mode_sync       2
-#define HVMPME_onchangeonly    (1 << 2)
-
 /* Boolean: Enable nestedhvm (hvm only) */
 #define HVM_PARAM_NESTEDHVM    24
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 043c26d..1fe3f75 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -390,6 +390,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 #define XENMEM_access_op_resume             0
 #define XENMEM_access_op_set_access         1
 #define XENMEM_access_op_get_access         2
+#define XENMEM_access_op_enable_emulate     3
+#define XENMEM_access_op_disable_emulate    4
 
 typedef enum {
     XENMEM_access_n,
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 28d8d7a..ed9105b 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -67,7 +67,7 @@
 #define VM_EVENT_REASON_MOV_TO_CR3              5
 /* CR4 was updated */
 #define VM_EVENT_REASON_MOV_TO_CR4              6
-/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+/* An MSR was updated. */
 #define VM_EVENT_REASON_MOV_TO_MSR              7
 /* Debug operation executed (e.g. int3) */
 #define VM_EVENT_REASON_SOFTWARE_BREAKPOINT     8
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index ab5141d..bd712b2 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -691,6 +691,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_access_required:
         return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
 
+    case XEN_DOMCTL_monitor_op:
+        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
     case XEN_DOMCTL_gdbsx_pausevcpu:
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 128250e..4f392a5 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -248,6 +248,8 @@ class hvm
 # HVMOP_inject_trap
     hvmctl
 # XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
     vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
-- 
2.1.4

* [PATCH V9 2/6] xen/vm_event: Deprecate VM_EVENT_FLAG_DUMMY flag
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 1/6] xen: Introduce monitor_op domctl Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 3/6] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

There are no use-cases for this flag.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/mem_sharing.c | 3 ---
 xen/arch/x86/mm/p2m.c         | 3 ---
 xen/common/mem_access.c       | 3 ---
 xen/include/public/vm_event.h | 1 -
 4 files changed, 10 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4e5477a..e6572af 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -606,9 +606,6 @@ int mem_sharing_sharing_resume(struct domain *d)
             continue;
         }
 
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
             continue;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 1d3356a..4032c62 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1312,9 +1312,6 @@ void p2m_mem_paging_resume(struct domain *d)
             continue;
         }
 
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
             continue;
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index f925ac7..7ed8a4e 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -44,9 +44,6 @@ void mem_access_resume(struct domain *d)
             continue;
         }
 
-        if ( rsp.flags & VM_EVENT_FLAG_DUMMY )
-            continue;
-
         /* Validate the vcpu_id in the response. */
         if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
             continue;
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index ed9105b..c7426de 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -47,7 +47,6 @@
 #define VM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
 /* Flags to aid debugging mem_event */
 #define VM_EVENT_FLAG_FOREIGN         (1 << 1)
-#define VM_EVENT_FLAG_DUMMY           (1 << 2)
 
 /*
  * Reasons for the vm event request
-- 
2.1.4


* [PATCH V9 3/6] xen/vm_event: Decouple vm_event and mem_access.
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 1/6] xen: Introduce monitor_op domctl Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 2/6] xen/vm_event: Deprecate VM_EVENT_FLAG_DUMMY flag Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 4/6] xen/vm_event: Relocate memop checks Tamas K Lengyel
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

The vm_event subsystem has been artificially tied to the presence of mem_access.
While mem_access does depend on vm_event, vm_event is an entirely independent
subsystem that can be used for arbitrary function-offloading to helper apps in
domains. This patch removes the requirement that mem_access be supported in
order to enable vm_event.

A new vm_event_resume function is introduced which pulls all responses off the
given ring and delegates handling to the appropriate helper functions (if
necessary). By default, vm_event_resume just pulls the response from the ring
and unpauses the corresponding vCPU. This approach reduces code duplication
and presents a single point of entry for the entire vm_event subsystem's
response handling mechanism.
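
Roughly, the consolidated handler boils down to the condensed sketch below
(the actual code is in the xen/common/vm_event.c hunk further down; the
version-mismatch warning and some details are omitted here):

    /* Condensed sketch of the consolidated response handler. */
    void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
    {
        vm_event_response_t rsp;

        while ( vm_event_get_response(d, ved, &rsp) )   /* drain the ring */
        {
            struct vcpu *v;

            if ( rsp.version != VM_EVENT_INTERFACE_VERSION ||
                 rsp.vcpu_id >= d->max_vcpus || !d->vcpu[rsp.vcpu_id] )
                continue;                               /* discard bad responses */

            v = d->vcpu[rsp.vcpu_id];

            switch ( rsp.reason )                       /* optional extra handling */
            {
    #ifdef HAS_MEM_ACCESS
            case VM_EVENT_REASON_MEM_ACCESS:
                mem_access_resume(v, &rsp);
                break;
    #endif
    #ifdef HAS_MEM_PAGING
            case VM_EVENT_REASON_MEM_PAGING:
                p2m_mem_paging_resume(d, &rsp);
                break;
    #endif
            }

            if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
                vm_event_vcpu_unpause(v);               /* default action */
        }
    }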

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Tim Deegan <tim@xen.org>
---
v4: Consolidate resume routines into vm_event_resume
    Style fixes
    Sort xen/common/Makefile to be alphabetical
v3: Move ring processing out from mem_access.c to monitor.c in common
---
 xen/arch/x86/mm/mem_sharing.c       | 32 ++---------------
 xen/arch/x86/mm/p2m.c               | 62 ++++++++++++--------------------
 xen/common/Makefile                 | 18 +++++-----
 xen/common/mem_access.c             | 31 +---------------
 xen/common/vm_event.c               | 72 +++++++++++++++++++++++++++++++------
 xen/include/asm-x86/mem_sharing.h   |  1 -
 xen/include/asm-x86/p2m.h           |  2 +-
 xen/include/xen/mem_access.h        | 14 ++++++--
 xen/include/xen/vm_event.h          | 58 ++----------------------------
 xen/include/xsm/dummy.h             |  2 --
 xen/include/xsm/xsm.h               |  4 ---
 xen/xsm/dummy.c                     |  2 --
 xen/xsm/flask/hooks.c               | 36 ++++++++-----------
 xen/xsm/flask/policy/access_vectors |  8 ++---
 14 files changed, 128 insertions(+), 214 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e6572af..4959407 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -591,35 +591,6 @@ unsigned int mem_sharing_get_nr_shared_mfns(void)
     return (unsigned int)atomic_read(&nr_shared_mfns);
 }
 
-int mem_sharing_sharing_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Get all requests off the ring */
-    while ( vm_event_get_response(d, &d->vm_event->share, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Unpause domain/vcpu */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-
-    return 0;
-}
-
 /* Functions that change a page's type and ownership */
 static int page_make_sharable(struct domain *d, 
                        struct page_info *page, 
@@ -1470,7 +1441,8 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
         {
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
-            rc = mem_sharing_sharing_resume(d);
+
+            vm_event_resume(d, &d->vm_event->share);
         }
         break;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 4032c62..6403172 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1279,13 +1279,13 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 }
 
 /**
- * p2m_mem_paging_resume - Resume guest gfn and vcpus
+ * p2m_mem_paging_resume - Resume guest gfn
  * @d: guest domain
- * @gfn: guest page in paging state
+ * @rsp: vm_event response received
+ *
+ * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw. It is
+ * called by the pager.
  *
- * p2m_mem_paging_resume() will forward the p2mt of a gfn to ram_rw and all
- * waiting vcpus will be unpaused again. It is called by the pager.
- * 
  * The gfn was previously either evicted and populated, or nominated and
  * populated. If the page was evicted the p2mt will be p2m_ram_paging_in. If
  * the page was just nominated the p2mt will be p2m_ram_paging_in_start because
@@ -1293,51 +1293,33 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
  *
  * If the gfn was dropped the vcpu needs to be unpaused.
  */
-void p2m_mem_paging_resume(struct domain *d)
+
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    vm_event_response_t rsp;
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
 
-    /* Pull all responses off the ring */
-    while( vm_event_get_response(d, &d->vm_event->paging, &rsp) )
+    /* Fix p2m entry if the page was not dropped */
+    if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
     {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
+        unsigned long gfn = rsp->u.mem_access.gfn;
 
-        v = d->vcpu[rsp.vcpu_id];
-
-        /* Fix p2m entry if the page was not dropped */
-        if ( !(rsp.u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
+        gfn_lock(p2m, gfn, 0);
+        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
+        /*
+         * Allow only pages which were prepared properly, or pages which
+         * were nominated but not evicted.
+         */
+        if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
         {
-            unsigned long gfn = rsp.u.mem_access.gfn;
-            gfn_lock(p2m, gfn, 0);
-            mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
-            /* Allow only pages which were prepared properly, or pages which
-             * were nominated but not evicted */
-            if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
-            {
-                p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
-                              paging_mode_log_dirty(d) ? p2m_ram_logdirty :
-                              p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), gfn);
-            }
-            gfn_unlock(p2m, gfn, 0);
+            p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                          paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                          p2m_ram_rw, a);
+            set_gpfn_from_mfn(mfn_x(mfn), gfn);
         }
-        /* Unpause domain */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
+        gfn_unlock(p2m, gfn, 0);
     }
 }
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e5bd75b..8d84bc6 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,13 +15,19 @@ obj-y += keyhandler.o
 obj-$(HAS_KEXEC) += kexec.o
 obj-$(HAS_KEXEC) += kimage.o
 obj-y += lib.o
+obj-y += lzo.o
+obj-$(HAS_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
 obj-y += multicall.o
 obj-y += notifier.o
 obj-y += page_alloc.o
+obj-$(HAS_PDX) += pdx.o
 obj-y += preempt.o
 obj-y += random.o
 obj-y += rangeset.o
+obj-y += radix-tree.o
+obj-y += rbtree.o
+obj-y += rcupdate.o
 obj-y += sched_credit.o
 obj-y += sched_credit2.o
 obj-y += sched_sedf.o
@@ -40,21 +46,15 @@ obj-y += sysctl.o
 obj-y += tasklet.o
 obj-y += time.o
 obj-y += timer.o
+obj-y += tmem.o
+obj-y += tmem_xen.o
 obj-y += trace.o
 obj-y += version.o
+obj-y += vm_event.o
 obj-y += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
 obj-y += xmalloc_tlsf.o
-obj-y += rcupdate.o
-obj-y += tmem.o
-obj-y += tmem_xen.o
-obj-y += radix-tree.o
-obj-y += rbtree.o
-obj-y += lzo.o
-obj-$(HAS_PDX) += pdx.o
-obj-$(HAS_MEM_ACCESS) += mem_access.o
-obj-$(HAS_MEM_ACCESS) += vm_event.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 7ed8a4e..511c8c5 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -29,35 +29,6 @@
 #include <asm/p2m.h>
 #include <xsm/xsm.h>
 
-void mem_access_resume(struct domain *d)
-{
-    vm_event_response_t rsp;
-
-    /* Pull all responses off the ring. */
-    while ( vm_event_get_response(d, &d->vm_event->monitor, &rsp) )
-    {
-        struct vcpu *v;
-
-        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
-        {
-            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
-            continue;
-        }
-
-        /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
-            continue;
-
-        v = d->vcpu[rsp.vcpu_id];
-
-        p2m_mem_access_emulate_check(v, &rsp);
-
-        /* Unpause domain. */
-        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
-            vm_event_vcpu_unpause(v);
-    }
-}
-
 int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
 {
@@ -92,7 +63,7 @@ int mem_access_memop(unsigned long cmd,
             rc = -ENOSYS;
         else
         {
-            mem_access_resume(d);
+            vm_event_resume(d, &d->vm_event->monitor);
             rc = 0;
         }
         break;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 7c532fc..d276bd8 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -358,6 +358,62 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
     return 1;
 }
 
+/*
+ * Pull all responses from the given ring and unpause the corresponding vCPU
+ * if required. Based on the response type, here we can also call custom
+ * handlers.
+ *
+ * Note: responses are handled the same way regardless of which ring they
+ * arrive on.
+ */
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
+{
+    vm_event_response_t rsp;
+
+    /* Pull all responses off the ring. */
+    while ( vm_event_get_response(d, ved, &rsp) )
+    {
+        struct vcpu *v;
+
+        if ( rsp.version != VM_EVENT_INTERFACE_VERSION )
+        {
+            printk(XENLOG_G_WARNING "vm_event interface version mismatch\n");
+            continue;
+        }
+
+        /* Validate the vcpu_id in the response. */
+        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+            continue;
+
+        v = d->vcpu[rsp.vcpu_id];
+
+        /*
+         * In some cases the response type needs extra handling, so here
+         * we call the appropriate handlers.
+         */
+        switch ( rsp.reason )
+        {
+
+#ifdef HAS_MEM_ACCESS
+        case VM_EVENT_REASON_MEM_ACCESS:
+            mem_access_resume(v, &rsp);
+            break;
+#endif
+
+#ifdef HAS_MEM_PAGING
+        case VM_EVENT_REASON_MEM_PAGING:
+            p2m_mem_paging_resume(d, &rsp);
+            break;
+#endif
+
+        };
+
+        /* Unpause domain. */
+        if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
+            vm_event_vcpu_unpause(v);
+    }
+}
+
 void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
 {
     vm_event_ring_lock(ved);
@@ -437,25 +493,23 @@ int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->paging.ring_page != NULL) )
-        p2m_mem_paging_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->paging);
 }
 #endif
 
-#ifdef HAS_MEM_ACCESS
 /* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
+static void monitor_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->monitor.ring_page != NULL) )
-        mem_access_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->monitor);
 }
-#endif
 
 #ifdef HAS_MEM_SHARING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 {
     if ( likely(v->domain->vm_event->share.ring_page != NULL) )
-        mem_sharing_sharing_resume(v->domain);
+        vm_event_resume(v->domain, &v->domain->vm_event->share);
 }
 #endif
 
@@ -510,13 +564,11 @@ void vm_event_cleanup(struct domain *d)
         (void)vm_event_disable(d, &d->vm_event->paging);
     }
 #endif
-#ifdef HAS_MEM_ACCESS
     if ( d->vm_event->monitor.ring_page )
     {
         destroy_waitqueue_head(&d->vm_event->monitor.wq);
         (void)vm_event_disable(d, &d->vm_event->monitor);
     }
-#endif
 #ifdef HAS_MEM_SHARING
     if ( d->vm_event->share.ring_page )
     {
@@ -611,7 +663,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
     break;
 #endif
 
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_VM_EVENT_OP_MONITOR:
     {
         struct vm_event_domain *ved = &d->vm_event->monitor;
@@ -622,7 +673,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         case XEN_VM_EVENT_MONITOR_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
-                                 mem_access_notification);
+                                 monitor_notification);
             break;
 
         case XEN_VM_EVENT_MONITOR_DISABLE:
@@ -636,7 +687,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
     }
     break;
-#endif
 
 #ifdef HAS_MEM_SHARING
     case XEN_DOMCTL_VM_EVENT_OP_SHARING:
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 2f1f3d2..da99d46 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,7 +90,6 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_sharing_resume(struct domain *d);
 int mem_sharing_memop(struct domain *d, 
                        xen_mem_sharing_op_t *mec);
 int mem_sharing_domctl(struct domain *d, 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2b5f0fc..2c5eda3 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -586,7 +586,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
 /* Resume normal operation (in case a domain was paused) */
-void p2m_mem_paging_resume(struct domain *d);
+void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp);
 
 /* Send mem event based on the access (gla is -1ull if not available).  Handles
  * the rw2rx conversion. Boolean return value indicates if access rights have 
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index 1d01221..f60b727 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -24,6 +24,7 @@
 #define _XEN_ASM_MEM_ACCESS_H
 
 #include <public/memory.h>
+#include <asm/p2m.h>
 
 #ifdef HAS_MEM_ACCESS
 
@@ -31,8 +32,11 @@ int mem_access_memop(unsigned long cmd,
                      XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
 int mem_access_send_req(struct domain *d, vm_event_request_t *req);
 
-/* Resumes the running of the VCPU, restarting the last instruction */
-void mem_access_resume(struct domain *d);
+static inline
+void mem_access_resume(struct vcpu *v, vm_event_response_t *rsp)
+{
+    p2m_mem_access_emulate_check(v, rsp);
+}
 
 #else
 
@@ -49,7 +53,11 @@ int mem_access_send_req(struct domain *d, vm_event_request_t *req)
     return -ENOSYS;
 }
 
-static inline void mem_access_resume(struct domain *d) {}
+static inline
+void mem_access_resume(struct vcpu *vcpu, vm_event_response_t *rsp)
+{
+    /* Nothing to do. */
+}
 
 #endif /* HAS_MEM_ACCESS */
 
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index c6c665d..960beb2 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -26,8 +26,6 @@
 
 #include <xen/sched.h>
 
-#ifdef HAS_MEM_ACCESS
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d);
 
@@ -69,6 +67,8 @@ void vm_event_put_request(struct domain *d, struct vm_event_domain *ved,
 int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
                           vm_event_response_t *rsp);
 
+void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
+
 int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
@@ -76,60 +76,6 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 void vm_event_vcpu_pause(struct vcpu *v);
 void vm_event_vcpu_unpause(struct vcpu *v);
 
-#else
-
-static inline void vm_event_cleanup(struct domain *d) {}
-
-static inline bool_t vm_event_check_ring(struct vm_event_domain *ved)
-{
-    return 0;
-}
-
-static inline int vm_event_claim_slot(struct domain *d,
-                                      struct vm_event_domain *ved)
-{
-    return -ENOSYS;
-}
-
-static inline int vm_event_claim_slot_nosleep(struct domain *d,
-                                              struct vm_event_domain *ved)
-{
-    return -ENOSYS;
-}
-
-static inline
-void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved)
-{}
-
-static inline
-void vm_event_put_request(struct domain *d, struct vm_event_domain *ved,
-                          vm_event_request_t *req)
-{}
-
-static inline
-int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
-                          vm_event_response_t *rsp)
-{
-    return -ENOSYS;
-}
-
-static inline int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    return -ENOSYS;
-}
-
-static inline
-int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
-                    XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
-    return -ENOSYS;
-}
-
-static inline void vm_event_vcpu_pause(struct vcpu *v) {}
-static inline void vm_event_vcpu_unpause(struct vcpu *v) {}
-
-#endif /* HAS_MEM_ACCESS */
-
 #endif /* __VM_EVENT_H__ */
 
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 4227093..50ee929 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,7 +513,6 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
     XSM_ASSERT_ACTION(XSM_PRIV);
@@ -525,7 +524,6 @@ static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
-#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index c2049f3..ca8371c 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,10 +141,8 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
-#ifdef HAS_MEM_ACCESS
     int (*vm_event_control) (struct domain *d, int mode, int op);
     int (*vm_event_op) (struct domain *d, int op);
-#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -543,7 +541,6 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
-#ifdef HAS_MEM_ACCESS
 static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
 {
     return xsm_ops->vm_event_control(d, mode, op);
@@ -553,7 +550,6 @@ static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
 {
     return xsm_ops->vm_event_op(d, op);
 }
-#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 25fca68..6d12d32 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,10 +118,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
-#ifdef HAS_MEM_ACCESS
     set_to_dummy_if_null(ops, vm_event_control);
     set_to_dummy_if_null(ops, vm_event_op);
-#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index bd712b2..061d897 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,9 +577,7 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
-#ifdef HAS_MEM_ACCESS
     case XEN_DOMCTL_vm_event_op:
-#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -689,10 +687,10 @@ static int flask_domctl(struct domain *d, int cmd)
         return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 
     case XEN_DOMCTL_set_access_required:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_monitor_op:
-        return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 
     case XEN_DOMCTL_debug_op:
     case XEN_DOMCTL_gdbsx_guestmemio:
@@ -1136,6 +1134,16 @@ static int flask_hvm_param_nested(struct domain *d)
     return current_has_perm(d, SECCLASS_HVM, HVM__NESTED);
 }
 
+static int flask_vm_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
+static int flask_vm_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+}
+
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
 {
@@ -1202,18 +1210,6 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
-#ifdef HAS_MEM_ACCESS
-static int flask_vm_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-
-static int flask_vm_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__VM_EVENT);
-}
-#endif /* HAS_MEM_ACCESS */
-
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1582,6 +1578,9 @@ static struct xsm_operations flask_ops = {
     .do_xsm_op = do_flask_op,
     .get_vnumainfo = flask_get_vnumainfo,
 
+    .vm_event_control = flask_vm_event_control,
+    .vm_event_op = flask_vm_event_op,
+
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
 #endif
@@ -1597,11 +1596,6 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
-#ifdef HAS_MEM_ACCESS
-    .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
-#endif
-
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 4f392a5..9a9d1c5 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -219,6 +219,10 @@ class domain2
     get_vnumainfo
 # XEN_DOMCTL_psr_cmt_op
     psr_cmt_op
+# XEN_DOMCTL_set_access_required
+# XEN_DOMCTL_monitor_op
+# XEN_DOMCTL_vm_event_op
+    vm_event
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
@@ -247,10 +251,6 @@ class hvm
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-# XEN_DOMCTL_set_access_required
-# XEN_DOMCTL_monitor_op
-# XEN_DOMCTL_vm_event_op
-    vm_event
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
 #  target = domain whose memory is being shared
-- 
2.1.4


* [PATCH V9 4/6] xen/vm_event: Relocate memop checks
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2015-04-09 14:32 ` [PATCH V9 3/6] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 5/6] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

The memop handler function for paging/sharing that is responsible for calling
XSM doesn't really have anything to do with vm_event, thus in this patch we
relocate that handling into mem_paging_memop and mem_sharing_memop. This has
already been the approach in mem_access_memop, so this patch simply makes the
three consistent.
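
Both relocated handlers now follow the same shape; a rough sketch only
(mem_paging shown; the ring-page check and the per-op dispatch live in the
hunks below):

    int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
    {
        xen_mem_paging_op_t mpo;
        struct domain *d;
        int rc;

        if ( copy_from_guest(&mpo, arg, 1) )            /* 1. fetch the argument */
            return -EFAULT;

        rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);  /* 2. target */
        if ( rc )
            return rc;

        rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);  /* 3. XSM check */
        if ( rc )
            goto out;

        /* 4. per-op handling, copying results back to the guest if needed */

     out:
        rcu_unlock_domain(d);                           /* 5. always drop the ref */
        return rc;
    }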

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
v7: Minor fixes with returning error codes on rcu lock failure
v6: Don't pass superfluous cmd to the memops.
    Unlock rcu's in sharing/paging
    Style fixes
---
 xen/arch/x86/mm/mem_paging.c      |  41 ++++++++++---
 xen/arch/x86/mm/mem_sharing.c     | 125 +++++++++++++++++++++++++-------------
 xen/arch/x86/x86_64/compat/mm.c   |  26 +-------
 xen/arch/x86/x86_64/mm.c          |  24 +-------
 xen/common/vm_event.c             |  43 -------------
 xen/include/asm-x86/mem_paging.h  |   2 +-
 xen/include/asm-x86/mem_sharing.h |   3 +-
 xen/include/xen/vm_event.h        |   1 -
 8 files changed, 124 insertions(+), 141 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index e63d8c1..17d2319 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,27 +22,45 @@
 
 
 #include <asm/p2m.h>
-#include <xen/vm_event.h>
+#include <xen/guest_access.h>
+#include <xsm/xsm.h>
 
-
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
+int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
 {
-    int rc = -ENODEV;
-    if ( unlikely(!d->vm_event->paging.ring_page) )
+    int rc;
+    xen_mem_paging_op_t mpo;
+    struct domain *d;
+    bool_t copyback = 0;
+
+    if ( copy_from_guest(&mpo, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_live_remote_domain_by_id(mpo.domain, &d);
+    if ( rc )
         return rc;
 
-    switch( mpo->op )
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    if ( rc )
+        goto out;
+
+    rc = -ENODEV;
+    if ( unlikely(!d->vm_event->paging.ring_page) )
+        goto out;
+
+    switch( mpo.op )
     {
     case XENMEM_paging_op_nominate:
-        rc = p2m_mem_paging_nominate(d, mpo->gfn);
+        rc = p2m_mem_paging_nominate(d, mpo.gfn);
         break;
 
     case XENMEM_paging_op_evict:
-        rc = p2m_mem_paging_evict(d, mpo->gfn);
+        rc = p2m_mem_paging_evict(d, mpo.gfn);
         break;
 
     case XENMEM_paging_op_prep:
-        rc = p2m_mem_paging_prep(d, mpo->gfn, mpo->buffer);
+        rc = p2m_mem_paging_prep(d, mpo.gfn, mpo.buffer);
+        if ( !rc )
+            copyback = 1;
         break;
 
     default:
@@ -50,6 +68,11 @@ int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo)
         break;
     }
 
+    if ( copyback && __copy_to_guest(arg, &mpo, 1) )
+        rc = -EFAULT;
+
+out:
+    rcu_unlock_domain(d);
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4959407..ff01378 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -28,6 +28,7 @@
 #include <xen/grant_table.h>
 #include <xen/sched.h>
 #include <xen/rcupdate.h>
+#include <xen/guest_access.h>
 #include <xen/vm_event.h>
 #include <asm/page.h>
 #include <asm/string.h>
@@ -1293,39 +1294,66 @@ int relinquish_shared_pages(struct domain *d)
     return rc;
 }
 
-int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
+int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
-    int rc = 0;
+    int rc;
+    xen_mem_sharing_op_t mso;
+    struct domain *d;
+
+    rc = -EFAULT;
+    if ( copy_from_guest(&mso, arg, 1) )
+        return rc;
+
+    if ( mso.op == XENMEM_sharing_op_audit )
+        return mem_sharing_audit();
+
+    rc = rcu_lock_live_remote_domain_by_id(mso.domain, &d);
+    if ( rc )
+        return rc;
+
+    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    if ( rc )
+        goto out;
 
     /* Only HAP is supported */
+    rc = -ENODEV;
     if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
-         return -ENODEV;
+        goto out;
 
-    switch(mec->op)
+    rc = -ENODEV;
+    if ( unlikely(!d->vm_event->share.ring_page) )
+        goto out;
+
+    switch ( mso.op )
     {
         case XENMEM_sharing_op_nominate_gfn:
         {
-            unsigned long gfn = mec->u.nominate.u.gfn;
+            unsigned long gfn = mso.u.nominate.u.gfn;
             shr_handle_t handle;
+
+            rc = -EINVAL;
             if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
+                goto out;
+
             rc = mem_sharing_nominate_page(d, gfn, 0, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
         case XENMEM_sharing_op_nominate_gref:
         {
-            grant_ref_t gref = mec->u.nominate.u.grant_ref;
+            grant_ref_t gref = mso.u.nominate.u.grant_ref;
             unsigned long gfn;
             shr_handle_t handle;
 
+            rc = -EINVAL;
             if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
+                goto out;
             if ( mem_sharing_gref_to_gfn(d, gref, &gfn) < 0 )
-                return -EINVAL;
+                goto out;
+
             rc = mem_sharing_nominate_page(d, gfn, 3, &handle);
-            mec->u.nominate.handle = handle;
+            mso.u.nominate.handle = handle;
         }
         break;
 
@@ -1335,57 +1363,61 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             struct domain *cd;
             shr_handle_t sh, ch;
 
+            rc = -EINVAL;
             if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
+                goto out;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
-                return rc;
+                goto out;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
-                return rc;
+                goto out;
             }
 
             if ( !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
-                return -EINVAL;
+                rc = -EINVAL;
+                goto out;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.source_gfn));
+                                        mso.u.share.source_gfn));
                 if ( mem_sharing_gref_to_gfn(d, gref, &sgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
-                    return -EINVAL;
+                    rc = -EINVAL;
+                    goto out;
                 }
             } else {
-                sgfn  = mec->u.share.source_gfn;
+                sgfn  = mso.u.share.source_gfn;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.client_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.client_gfn) )
             {
                 grant_ref_t gref = (grant_ref_t) 
                                     (XENMEM_SHARING_OP_FIELD_GET_GREF(
-                                        mec->u.share.client_gfn));
+                                        mso.u.share.client_gfn));
                 if ( mem_sharing_gref_to_gfn(cd, gref, &cgfn) < 0 )
                 {
                     rcu_unlock_domain(cd);
-                    return -EINVAL;
+                    rc = -EINVAL;
+                    goto out;
                 }
             } else {
-                cgfn  = mec->u.share.client_gfn;
+                cgfn  = mso.u.share.client_gfn;
             }
 
-            sh = mec->u.share.source_handle;
-            ch = mec->u.share.client_handle;
+            sh = mso.u.share.source_handle;
+            ch = mso.u.share.client_handle;
 
             rc = mem_sharing_share_pages(d, sgfn, sh, cd, cgfn, ch); 
 
@@ -1399,37 +1431,40 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             struct domain *cd;
             shr_handle_t sh;
 
+            rc = -EINVAL;
             if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
+                goto out;
 
-            rc = rcu_lock_live_remote_domain_by_id(mec->u.share.client_domain,
+            rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                    &cd);
             if ( rc )
-                return rc;
+                goto out;
 
-            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mec->op);
+            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
             if ( rc )
             {
                 rcu_unlock_domain(cd);
-                return rc;
+                goto out;
             }
 
             if ( !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
-                return -EINVAL;
+                rc = -EINVAL;
+                goto out;
             }
 
-            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mec->u.share.source_gfn) )
+            if ( XENMEM_SHARING_OP_FIELD_IS_GREF(mso.u.share.source_gfn) )
             {
                 /* Cannot add a gref to the physmap */
                 rcu_unlock_domain(cd);
-                return -EINVAL;
+                rc = -EINVAL;
+                goto out;
             }
 
-            sgfn    = mec->u.share.source_gfn;
-            sh      = mec->u.share.source_handle;
-            cgfn    = mec->u.share.client_gfn;
+            sgfn    = mso.u.share.source_gfn;
+            sh      = mso.u.share.source_handle;
+            cgfn    = mso.u.share.client_gfn;
 
             rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn); 
 
@@ -1440,7 +1475,10 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
         case XENMEM_sharing_op_resume:
         {
             if ( !mem_sharing_enabled(d) )
-                return -EINVAL;
+            {
+                rc = -EINVAL;
+                goto out;
+            }
 
             vm_event_resume(d, &d->vm_event->share);
         }
@@ -1448,14 +1486,14 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
 
         case XENMEM_sharing_op_debug_gfn:
         {
-            unsigned long gfn = mec->u.debug.u.gfn;
+            unsigned long gfn = mso.u.debug.u.gfn;
             rc = mem_sharing_debug_gfn(d, gfn);
         }
         break;
 
         case XENMEM_sharing_op_debug_gref:
         {
-            grant_ref_t gref = mec->u.debug.u.gref;
+            grant_ref_t gref = mso.u.debug.u.gref;
             rc = mem_sharing_debug_gref(d, gref);
         }
         break;
@@ -1465,6 +1503,11 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             break;
     }
 
+    if ( !rc && __copy_to_guest(arg, &mso, 1) )
+        rc = -EFAULT;
+
+out:
+    rcu_unlock_domain(d);
     return rc;
 }
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index d5eb920..d034bd0 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -1,9 +1,9 @@
 #include <xen/event.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <xen/multicall.h>
 #include <compat/memory.h>
 #include <compat/xen.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 
 int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
@@ -187,30 +187,10 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4953dcd..de95c38 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -26,7 +26,6 @@
 #include <xen/nodemask.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
-#include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/current.h>
 #include <asm/asm_defns.h>
@@ -37,6 +36,7 @@
 #include <asm/msr.h>
 #include <asm/setup.h>
 #include <asm/numa.h>
+#include <asm/mem_paging.h>
 #include <asm/mem_sharing.h>
 #include <public/memory.h>
 
@@ -984,28 +984,10 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return mem_sharing_get_nr_shared_mfns();
 
     case XENMEM_paging_op:
-    {
-        xen_mem_paging_op_t mpo;
-        if ( copy_from_guest(&mpo, arg, 1) )
-            return -EFAULT;
-        rc = do_vm_event_op(cmd, mpo.domain, &mpo);
-        if ( !rc && __copy_to_guest(arg, &mpo, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
 
     case XENMEM_sharing_op:
-    {
-        xen_mem_sharing_op_t mso;
-        if ( copy_from_guest(&mso, arg, 1) )
-            return -EFAULT;
-        if ( mso.op == XENMEM_sharing_op_audit )
-            return mem_sharing_audit(); 
-        rc = do_vm_event_op(cmd, mso.domain, &mso);
-        if ( !rc && __copy_to_guest(arg, &mso, 1) )
-            return -EFAULT;
-        break;
-    }
+        return mem_sharing_memop(guest_handle_cast(arg, xen_mem_sharing_op_t));
 
     default:
         rc = -ENOSYS;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index d276bd8..1eb9825 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -27,15 +27,6 @@
 #include <xen/vm_event.h>
 #include <xen/mem_access.h>
 #include <asm/p2m.h>
-
-#ifdef HAS_MEM_PAGING
-#include <asm/mem_paging.h>
-#endif
-
-#ifdef HAS_MEM_SHARING
-#include <asm/mem_sharing.h>
-#endif
-
 #include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
@@ -513,40 +504,6 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
 }
 #endif
 
-int do_vm_event_op(int op, uint32_t domain, void *arg)
-{
-    int ret;
-    struct domain *d;
-
-    ret = rcu_lock_live_remote_domain_by_id(domain, &d);
-    if ( ret )
-        return ret;
-
-    ret = xsm_vm_event_op(XSM_DM_PRIV, d, op);
-    if ( ret )
-        goto out;
-
-    switch (op)
-    {
-#ifdef HAS_MEM_PAGING
-        case XENMEM_paging_op:
-            ret = mem_paging_memop(d, arg);
-            break;
-#endif
-#ifdef HAS_MEM_SHARING
-        case XENMEM_sharing_op:
-            ret = mem_sharing_memop(d, arg);
-            break;
-#endif
-        default:
-            ret = -ENOSYS;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return ret;
-}
-
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d)
 {
diff --git a/xen/include/asm-x86/mem_paging.h b/xen/include/asm-x86/mem_paging.h
index 5f0f91b..4179eb7 100644
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -23,7 +23,7 @@
 #ifndef __ASM_X86_MEM_PAGING_H__
 #define __ASM_X86_MEM_PAGING_H__
 
-int mem_paging_memop(struct domain *d, xen_mem_paging_op_t *mpo);
+int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
 
 #endif /*__ASM_X86_MEM_PAGING_H__ */
 
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index da99d46..e4a1024 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -90,8 +90,7 @@ static inline int mem_sharing_unshare_page(struct domain *d,
  */
 int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
                                 bool_t allow_sleep);
-int mem_sharing_memop(struct domain *d, 
-                       xen_mem_sharing_op_t *mec);
+int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg);
 int mem_sharing_domctl(struct domain *d, 
                        xen_domctl_mem_sharing_op_t *mec);
 int mem_sharing_audit(void);
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 960beb2..51cbe8b 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -69,7 +69,6 @@ int vm_event_get_response(struct domain *d, struct vm_event_domain *ved,
 
 void vm_event_resume(struct domain *d, struct vm_event_domain *ved);
 
-int do_vm_event_op(int op, uint32_t domain, void *arg);
 int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
-- 
2.1.4


* [PATCH V9 5/6] xen/xsm: Split vm_event_op into three separate labels
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2015-04-09 14:32 ` [PATCH V9 4/6] xen/vm_event: Relocate memop checks Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-09 14:32 ` [PATCH V9 6/6] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
  2015-04-16  8:53 ` [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tim Deegan
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

The XSM label vm_event_op has been used to control the three memops
governing mem_access, mem_paging and mem_sharing. While these subsystems
rely on vm_event, the memops are not vm_event operations themselves. Thus,
in this patch we introduce a separate label for each of these memops.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/mem_paging.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c       |  2 +-
 xen/common/mem_access.c             |  2 +-
 xen/include/xsm/dummy.h             | 20 +++++++++++++++++++-
 xen/include/xsm/xsm.h               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/dummy.c                     | 13 ++++++++++++-
 xen/xsm/flask/hooks.c               | 33 ++++++++++++++++++++++++++++++---
 xen/xsm/flask/policy/access_vectors |  6 ++++++
 8 files changed, 100 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 17d2319..9ee3aba 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -39,7 +39,7 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_paging_op);
+    rc = xsm_mem_paging(XSM_DM_PRIV, d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ff01378..78fb013 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1311,7 +1311,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_sharing_op);
+    rc = xsm_mem_sharing(XSM_DM_PRIV, d);
     if ( rc )
         goto out;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 511c8c5..aa00513 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -48,7 +48,7 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_vm_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+    rc = xsm_mem_access(XSM_DM_PRIV, d);
     if ( rc )
         goto out;
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 50ee929..16967ed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -519,11 +519,29 @@ static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ca8371c..49f06c9 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -142,7 +142,18 @@ struct xsm_operations {
     int (*get_vnumainfo) (struct domain *d);
 
     int (*vm_event_control) (struct domain *d, int mode, int op);
-    int (*vm_event_op) (struct domain *d, int op);
+
+#ifdef HAS_MEM_ACCESS
+    int (*mem_access) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    int (*mem_paging) (struct domain *d);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    int (*mem_sharing) (struct domain *d);
+#endif
 
 #ifdef CONFIG_X86
     int (*do_mca) (void);
@@ -546,10 +557,26 @@ static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int
     return xsm_ops->vm_event_control(d, mode, op);
 }
 
-static inline int xsm_vm_event_op (xsm_default_t def, struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
 {
-    return xsm_ops->vm_event_op(d, op);
+    return xsm_ops->mem_access(d);
 }
+#endif
+
+#ifdef HAS_MEM_PAGING
+static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_paging(d);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->mem_sharing(d);
+}
+#endif
 
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6d12d32..3ddb4f6 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,7 +119,18 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
     set_to_dummy_if_null(ops, vm_event_control);
-    set_to_dummy_if_null(ops, vm_event_op);
+
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_access);
+#endif
+
+#ifdef HAS_MEM_PAGING
+    set_to_dummy_if_null(ops, mem_paging);
+#endif
+
+#ifdef HAS_MEM_SHARING
+    set_to_dummy_if_null(ops, mem_sharing);
+#endif
 
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 061d897..6215001 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1139,10 +1139,26 @@ static int flask_vm_event_control(struct domain *d, int mode, int op)
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
 }
 
-static int flask_vm_event_op(struct domain *d, int op)
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_access(struct domain *d)
 {
-    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__VM_EVENT);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_ACCESS);
+}
+#endif
+
+#ifdef HAS_MEM_PAGING
+static int flask_mem_paging(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_PAGING);
+}
+#endif
+
+#ifdef HAS_MEM_SHARING
+static int flask_mem_sharing(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_SHARING);
 }
+#endif
 
 #if defined(HAS_PASSTHROUGH) && defined(HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
@@ -1579,7 +1595,18 @@ static struct xsm_operations flask_ops = {
     .get_vnumainfo = flask_get_vnumainfo,
 
     .vm_event_control = flask_vm_event_control,
-    .vm_event_op = flask_vm_event_op,
+
+#ifdef HAS_MEM_ACCESS
+    .mem_access = flask_mem_access,
+#endif
+
+#ifdef HAS_MEM_PAGING
+    .mem_paging = flask_mem_paging,
+#endif
+
+#ifdef HAS_MEM_SHARING
+    .mem_sharing = flask_mem_sharing,
+#endif
 
 #ifdef CONFIG_COMPAT
     .do_compat_op = compat_flask_op,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 9a9d1c5..af4a6ae 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -223,6 +223,12 @@ class domain2
 # XEN_DOMCTL_monitor_op
 # XEN_DOMCTL_vm_event_op
     vm_event
+# XENMEM_access_op
+    mem_access
+# XENMEM_paging_op
+    mem_paging
+# XENMEM_sharing_op
+    mem_sharing
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.1.4


* [PATCH V9 6/6] xen/vm_event: Add RESUME option to vm_event_op domctl
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2015-04-09 14:32 ` [PATCH V9 5/6] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
@ 2015-04-09 14:32 ` Tamas K Lengyel
  2015-04-16  8:53 ` [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tim Deegan
  6 siblings, 0 replies; 8+ messages in thread
From: Tamas K Lengyel @ 2015-04-09 14:32 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, wei.liu2, ian.campbell, steve, stefano.stabellini,
	jun.nakajima, tim, ian.jackson, eddie.dong, andres, jbeulich,
	Tamas K Lengyel, rshriram, keir, dgdegra, yanghy, rcojocaru

Thus far the mem_access and mem_sharing memops have been able to signal
to Xen to start pulling responses off the corresponding rings. In this patch
we retire those memops and instead expose the same ability as a RESUME option
of the vm_event_op domctl.

The vm_event_op domctl suboptions are the same for each ring, thus we
consolidate them into XEN_VM_EVENT_ENABLE/DISABLE/RESUME.

As part of this patch, in libxc we also rename the mem_access_enable/disable
functions to monitor_enable/disable and move them into xc_monitor.c.
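
For illustration only (this example is not part of the patch, and its error
handling and event loop are trimmed), a dom0 tool would drive the renamed
interface roughly as follows; the xc_monitor_* calls are the ones added
below, the rest is assumed libxc boilerplate:

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenctrl.h>

    int main(int argc, char *argv[])
    {
        xc_interface *xch;
        void *ring_page;
        uint32_t port;
        domid_t domid;

        if ( argc != 2 )
            return 1;
        domid = atoi(argv[1]);

        xch = xc_interface_open(NULL, NULL, 0);
        if ( !xch )
            return 1;

        /* Formerly xc_mem_access_enable(): enables the monitor ring
         * and returns the mapped ring page. */
        ring_page = xc_monitor_enable(xch, domid, &port);
        if ( !ring_page )
        {
            fprintf(stderr, "xc_monitor_enable failed\n");
            xc_interface_close(xch);
            return 1;
        }

        /* ... bind the event channel on 'port', consume requests from
         * 'ring_page', queue responses ... */

        /* Formerly XENMEM_access_op_resume: ask Xen to pull queued
         * responses off the monitor ring. */
        xc_monitor_resume(xch, domid);

        xc_monitor_disable(xch, domid);
        /* The caller is also expected to unmap ring_page when done. */

        xc_interface_close(xch);
        return 0;
    }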

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v6: Style fixes
---
 tools/libxc/include/xenctrl.h       | 22 +++++++++++-----------
 tools/libxc/xc_mem_access.c         | 25 -------------------------
 tools/libxc/xc_mem_paging.c         | 12 ++++++++++--
 tools/libxc/xc_memshr.c             | 15 ++++++---------
 tools/libxc/xc_monitor.c            | 22 ++++++++++++++++++++++
 tools/libxc/xc_vm_event.c           |  6 +++---
 tools/tests/xen-access/xen-access.c | 10 +++++-----
 xen/arch/x86/mm/mem_sharing.c       | 12 ------------
 xen/common/mem_access.c             |  9 ---------
 xen/common/vm_event.c               | 33 +++++++++++++++++++++++++++------
 xen/include/public/domctl.h         | 32 +++++++++++++++-----------------
 xen/include/public/memory.h         | 20 +++++++++-----------
 12 files changed, 108 insertions(+), 110 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index a1861cb..6994c51 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2274,6 +2274,7 @@ int xc_tmem_restore_extra(xc_interface *xch, int dom, int fd);
  */
 int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id);
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id);
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id,
                            uint64_t gfn);
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, uint64_t gfn);
@@ -2287,17 +2288,6 @@ int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
  */
 
 /*
- * Enables mem_access and returns the mapped ring page.
- * Will return NULL on error.
- * Caller has to unmap this page when done.
- */
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
-void *xc_mem_access_enable_introspection(xc_interface *xch, domid_t domain_id,
-                                         uint32_t *port);
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id);
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id);
-
-/*
  * Set a range of memory to a specific access.
  * Allowed types are XENMEM_access_default, XENMEM_access_n, any combination of
  * XENMEM_access_ + (rwx), and XENMEM_access_rx2rw
@@ -2325,7 +2315,17 @@ int xc_mem_access_disable_emulate(xc_interface *xch, domid_t domain_id);
 
 /***
  * Monitor control operations.
+ *
+ * Enables the VM event monitor ring and returns the mapped ring page.
+ * This ring is used to deliver mem_access events, as well a set of additional
+ * events that can be enabled with the xc_monitor_* functions.
+ *
+ * Will return NULL on error.
+ * Caller has to unmap this page when done.
  */
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port);
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id);
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id);
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id, bool enable,
                           bool sync, bool onchangeonly);
 int xc_monitor_mov_to_cr3(xc_interface *xch, domid_t domain_id, bool enable,
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index f27bc44..5cfa611 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -24,31 +24,6 @@
 #include "xc_private.h"
 #include <xen/memory.h>
 
-void *xc_mem_access_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
-{
-    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
-                              port);
-}
-
-int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
-{
-    return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_MONITOR_DISABLE,
-                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
-                               NULL);
-}
-
-int xc_mem_access_resume(xc_interface *xch, domid_t domain_id)
-{
-    xen_mem_access_op_t mao =
-    {
-        .op    = XENMEM_access_op_resume,
-        .domid = domain_id
-    };
-
-    return do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
-}
-
 int xc_set_mem_access(xc_interface *xch,
                       domid_t domain_id,
                       xenmem_access_t access,
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libxc/xc_mem_paging.c
index 9c311d9..4aa48d6 100644
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -48,7 +48,7 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
     }
 
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                port);
 }
@@ -56,7 +56,15 @@ int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
 int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
 {
     return xc_vm_event_control(xch, domain_id,
-                               XEN_VM_EVENT_PAGING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_PAGING,
+                               NULL);
+}
+
+int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
                                XEN_DOMCTL_VM_EVENT_OP_PAGING,
                                NULL);
 }
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 14cc1ce..0960c6d 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -53,7 +53,7 @@ int xc_memshr_ring_enable(xc_interface *xch,
     }
 
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_ENABLE,
+                               XEN_VM_EVENT_ENABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                port);
 }
@@ -62,7 +62,7 @@ int xc_memshr_ring_disable(xc_interface *xch,
                            domid_t domid)
 {
     return xc_vm_event_control(xch, domid,
-                               XEN_VM_EVENT_SHARING_DISABLE,
+                               XEN_VM_EVENT_DISABLE,
                                XEN_DOMCTL_VM_EVENT_OP_SHARING,
                                NULL);
 }
@@ -185,13 +185,10 @@ int xc_memshr_add_to_physmap(xc_interface *xch,
 int xc_memshr_domain_resume(xc_interface *xch,
                             domid_t domid)
 {
-    xen_mem_sharing_op_t mso;
-
-    memset(&mso, 0, sizeof(mso));
-
-    mso.op = XENMEM_sharing_op_resume;
-
-    return xc_memshr_memop(xch, domid, &mso);
+    return xc_vm_event_control(xch, domid,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_SHARING,
+                               NULL);
 }
 
 int xc_memshr_debug_gfn(xc_interface *xch,
diff --git a/tools/libxc/xc_monitor.c b/tools/libxc/xc_monitor.c
index fe090bc..87ad968 100644
--- a/tools/libxc/xc_monitor.c
+++ b/tools/libxc/xc_monitor.c
@@ -23,6 +23,28 @@
 
 #include "xc_private.h"
 
+void *xc_monitor_enable(xc_interface *xch, domid_t domain_id, uint32_t *port)
+{
+    return xc_vm_event_enable(xch, domain_id, HVM_PARAM_MONITOR_RING_PFN,
+                              port);
+}
+
+int xc_monitor_disable(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_DISABLE,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
+int xc_monitor_resume(xc_interface *xch, domid_t domain_id)
+{
+    return xc_vm_event_control(xch, domain_id,
+                               XEN_VM_EVENT_RESUME,
+                               XEN_DOMCTL_VM_EVENT_OP_MONITOR,
+                               NULL);
+}
+
 int xc_monitor_mov_to_cr0(xc_interface *xch, domid_t domain_id, bool enable,
                           bool sync, bool onchangeonly)
 {
diff --git a/tools/libxc/xc_vm_event.c b/tools/libxc/xc_vm_event.c
index 7277e86..a5b3277 100644
--- a/tools/libxc/xc_vm_event.c
+++ b/tools/libxc/xc_vm_event.c
@@ -99,17 +99,17 @@ void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
     switch ( param )
     {
     case HVM_PARAM_PAGING_RING_PFN:
-        op = XEN_VM_EVENT_PAGING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;
         break;
 
     case HVM_PARAM_MONITOR_RING_PFN:
-        op = XEN_VM_EVENT_MONITOR_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_MONITOR;
         break;
 
     case HVM_PARAM_SHARING_RING_PFN:
-        op = XEN_VM_EVENT_SHARING_ENABLE;
+        op = XEN_VM_EVENT_ENABLE;
         mode = XEN_DOMCTL_VM_EVENT_OP_SHARING;
         break;
 
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 23438fc..8a899da 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -124,8 +124,8 @@ int xenaccess_teardown(xc_interface *xch, xenaccess_t *xenaccess)
 
     if ( mem_access_enable )
     {
-        rc = xc_mem_access_disable(xenaccess->xc_handle,
-                                   xenaccess->vm_event.domain_id);
+        rc = xc_monitor_disable(xenaccess->xc_handle,
+                                xenaccess->vm_event.domain_id);
         if ( rc != 0 )
         {
             ERROR("Error tearing down domain xenaccess in xen");
@@ -196,9 +196,9 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
 
     /* Enable mem_access */
     xenaccess->vm_event.ring_page =
-            xc_mem_access_enable(xenaccess->xc_handle,
-                                 xenaccess->vm_event.domain_id,
-                                 &xenaccess->vm_event.evtchn_port);
+            xc_monitor_enable(xenaccess->xc_handle,
+                              xenaccess->vm_event.domain_id,
+                              &xenaccess->vm_event.evtchn_port);
     if ( xenaccess->vm_event.ring_page == NULL )
     {
         switch ( errno ) {
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 78fb013..0700f00 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1472,18 +1472,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         }
         break;
 
-        case XENMEM_sharing_op_resume:
-        {
-            if ( !mem_sharing_enabled(d) )
-            {
-                rc = -EINVAL;
-                goto out;
-            }
-
-            vm_event_resume(d, &d->vm_event->share);
-        }
-        break;
-
         case XENMEM_sharing_op_debug_gfn:
         {
             unsigned long gfn = mso.u.debug.u.gfn;
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index aa00513..4ed55a6 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -58,15 +58,6 @@ int mem_access_memop(unsigned long cmd,
 
     switch ( mao.op )
     {
-    case XENMEM_access_op_resume:
-        if ( unlikely(start_iter) )
-            rc = -ENOSYS;
-        else
-        {
-            vm_event_resume(d, &d->vm_event->monitor);
-            rc = 0;
-        }
-        break;
 
     case XENMEM_access_op_set_access:
         rc = -EINVAL;
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 1eb9825..120a78a 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -577,7 +577,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_PAGING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
         {
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -607,11 +607,18 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         }
         break;
 
-        case XEN_VM_EVENT_PAGING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
             break;
 
+        case XEN_VM_EVENT_RESUME:
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+            break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -627,17 +634,24 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_MONITOR_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
             rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
                                  HVM_PARAM_MONITOR_RING_PFN,
                                  monitor_notification);
             break;
 
-        case XEN_VM_EVENT_MONITOR_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
             break;
 
+        case XEN_VM_EVENT_RESUME:
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+            break;
+
         default:
             rc = -ENOSYS;
             break;
@@ -653,7 +667,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
 
         switch( vec->op )
         {
-        case XEN_VM_EVENT_SHARING_ENABLE:
+        case XEN_VM_EVENT_ENABLE:
             rc = -EOPNOTSUPP;
             /* pvh fixme: p2m_is_foreign types need addressing */
             if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
@@ -669,11 +683,18 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
                                  mem_sharing_notification);
             break;
 
-        case XEN_VM_EVENT_SHARING_DISABLE:
+        case XEN_VM_EVENT_DISABLE:
             if ( ved->ring_page )
                 rc = vm_event_disable(d, ved);
             break;
 
+        case XEN_VM_EVENT_RESUME:
+            if ( ved->ring_page )
+                vm_event_resume(d, ved);
+            else
+                rc = -ENODEV;
+            break;
+
         default:
             rc = -ENOSYS;
             break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 3e5d1f4..10b51ef 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -745,6 +745,17 @@ struct xen_domctl_gdbsx_domstatus {
 /* XEN_DOMCTL_vm_event_op */
 
 /*
+ * There are currently three rings available for VM events:
+ * sharing, monitor and paging. This hypercall allows one to
+ * control these rings (enable/disable), as well as to signal
+ * to the hypervisor to pull responses (resume) from the given
+ * ring.
+ */
+#define XEN_VM_EVENT_ENABLE               0
+#define XEN_VM_EVENT_DISABLE              1
+#define XEN_VM_EVENT_RESUME               2
+
+/*
  * Domain memory paging
  * Page memory in and out.
  * Domctl interface to set up and tear down the 
@@ -760,9 +771,6 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_PAGING            1
 
-#define XEN_VM_EVENT_PAGING_ENABLE               0
-#define XEN_VM_EVENT_PAGING_DISABLE              1
-
 /*
  * Monitor helper.
  *
@@ -774,25 +782,18 @@ struct xen_domctl_gdbsx_domstatus {
  * of every page in a domain.  When one of these permissions--independent,
  * read, write, and execute--is violated, the VCPU is paused and a memory event
  * is sent with what happened. The memory event handler can then resume the
- * VCPU and redo the access with a XENMEM_access_op_resume hypercall.
+ * VCPU and redo the access with a XEN_VM_EVENT_RESUME option.
  *
  * See public/vm_event.h for the list of available events that can be
  * subscribed to via the monitor interface.
  *
- * To enable MOV-TO-MSR interception on x86, it is necessary to enable this
- * interface with the XEN_VM_EVENT_MONITOR_ENABLE_INTROSPECTION
- * operator.
- *
- * The XEN_VM_EVENT_MONITOR_ENABLE* domctls return several
+ * The XEN_VM_EVENT_MONITOR_* domctls return
  * non-standard error codes to indicate why access could not be enabled:
  * ENODEV - host lacks HAP support (EPT/NPT) or HAP is disabled in guest
  * EBUSY  - guest has or had access enabled, ring buffer still active
  *
  */
-#define XEN_DOMCTL_VM_EVENT_OP_MONITOR                        2
-
-#define XEN_VM_EVENT_MONITOR_ENABLE                           0
-#define XEN_VM_EVENT_MONITOR_DISABLE                          1
+#define XEN_DOMCTL_VM_EVENT_OP_MONITOR           2
 
 /*
  * Sharing ENOMEM helper.
@@ -809,13 +810,10 @@ struct xen_domctl_gdbsx_domstatus {
  */
 #define XEN_DOMCTL_VM_EVENT_OP_SHARING           3
 
-#define XEN_VM_EVENT_SHARING_ENABLE              0
-#define XEN_VM_EVENT_SHARING_DISABLE             1
-
 /* Use for teardown/setup of helper<->hypervisor interface for paging, 
  * access and sharing.*/
 struct xen_domctl_vm_event_op {
-    uint32_t       op;           /* XEN_VM_EVENT_*_* */
+    uint32_t       op;           /* XEN_VM_EVENT_* */
     uint32_t       mode;         /* XEN_DOMCTL_VM_EVENT_OP_* */
 
     uint32_t port;              /* OUT: event channel for ring */
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 1fe3f75..832559a 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -387,11 +387,10 @@ typedef struct xen_mem_paging_op xen_mem_paging_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
 
 #define XENMEM_access_op                    21
-#define XENMEM_access_op_resume             0
-#define XENMEM_access_op_set_access         1
-#define XENMEM_access_op_get_access         2
-#define XENMEM_access_op_enable_emulate     3
-#define XENMEM_access_op_disable_emulate    4
+#define XENMEM_access_op_set_access         0
+#define XENMEM_access_op_get_access         1
+#define XENMEM_access_op_enable_emulate     2
+#define XENMEM_access_op_disable_emulate    3
 
 typedef enum {
     XENMEM_access_n,
@@ -442,12 +441,11 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_nominate_gfn      0
 #define XENMEM_sharing_op_nominate_gref     1
 #define XENMEM_sharing_op_share             2
-#define XENMEM_sharing_op_resume            3
-#define XENMEM_sharing_op_debug_gfn         4
-#define XENMEM_sharing_op_debug_mfn         5
-#define XENMEM_sharing_op_debug_gref        6
-#define XENMEM_sharing_op_add_physmap       7
-#define XENMEM_sharing_op_audit             8
+#define XENMEM_sharing_op_debug_gfn         3
+#define XENMEM_sharing_op_debug_mfn         4
+#define XENMEM_sharing_op_debug_gref        5
+#define XENMEM_sharing_op_add_physmap       6
+#define XENMEM_sharing_op_audit             7
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.1.4
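
As an aside (not part of the series): with the consolidation, every ring
is driven by the same three sub-ops, and the sketch below shows the
domctl a paging-ring resume boils down to. It mirrors what
xc_vm_event_control() builds; DECLARE_DOMCTL and do_domctl are
libxc-internal helpers, used here purely for illustration.

#include "xc_private.h"   /* libxc-internal: DECLARE_DOMCTL, do_domctl */

/* Illustration only: the domctl payload for resuming the paging ring.
 * External callers would go through xc_mem_paging_resume() instead. */
static int resume_paging_ring(xc_interface *xch, domid_t domain_id)
{
    DECLARE_DOMCTL;

    domctl.cmd = XEN_DOMCTL_vm_event_op;
    domctl.domain = domain_id;
    domctl.u.vm_event_op.op   = XEN_VM_EVENT_RESUME;            /* sub-op */
    domctl.u.vm_event_op.mode = XEN_DOMCTL_VM_EVENT_OP_PAGING;  /* which ring */

    return do_domctl(xch, &domctl);
}

The monitor and sharing rings take an identical request, differing only
in the mode field, which is what allows the per-ring *_ENABLE/_DISABLE
constants to be dropped from domctl.h above.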

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH V9 0/6] xen: Clean-up of mem_event subsystem
  2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2015-04-09 14:32 ` [PATCH V9 6/6] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
@ 2015-04-16  8:53 ` Tim Deegan
  6 siblings, 0 replies; 8+ messages in thread
From: Tim Deegan @ 2015-04-16  8:53 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: kevin.tian, wei.liu2, ian.campbell, rcojocaru,
	stefano.stabellini, ian.jackson, steve, xen-devel, jbeulich,
	eddie.dong, andres, jun.nakajima, rshriram, keir, dgdegra,
	yanghy

At 16:32 +0200 on 09 Apr (1428597167), Tamas K Lengyel wrote:
> This patch series aims to clean up the mem_event subsystem within Xen. The
> original use-case for this system was to allow external helper applications
> running in privileged domains to control various memory operations performed
> by Xen. Amongs these were paging, sharing and access control. The subsystem
> has since been extended to also deliver non-memory related events, namely
> various HVM debugging events (INT3, MTF, MOV-TO-CR, MOV-TO-MSR). The structures
> and naming of related functions however has not caught up to these new
> use-cases, thus leaving many ambiguities in the code. Furthermore, future
> use-cases envisioned for this subsystem include PV domains and ARM domains,
> thus there is a need to establish a common base to build on.
> 
> Each patch in the series has been build-tested on x86 and ARM, both with
> and without XSM enabled.

Applied, thanks.

Tim.

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2015-04-16  8:53 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-09 14:32 [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 1/6] xen: Introduce monitor_op domctl Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 2/6] xen/vm_event: Deprecate VM_EVENT_FLAG_DUMMY flag Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 3/6] xen/vm_event: Decouple vm_event and mem_access Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 4/6] xen/vm_event: Relocate memop checks Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 5/6] xen/xsm: Split vm_event_op into three separate labels Tamas K Lengyel
2015-04-09 14:32 ` [PATCH V9 6/6] xen/vm_event: Add RESUME option to vm_event_op domctl Tamas K Lengyel
2015-04-16  8:53 ` [PATCH V9 0/6] xen: Clean-up of mem_event subsystem Tim Deegan
